The architect’s new toolbox: prompts, patterns, and trade-offs

December 10, 2025

From missing docs to AI co-pilots

Most of my architecture jobs start the same way: I arrive when a lot is already in place. Systems are running, key people are gone, and documentation is either outdated or simply missing. At that point, documenting or refactoring the architecture feels like archaeology. You dig through code, old diagrams, half-written Confluence pages, and try to reconstruct why things were built the way they were.

This is where AI has quietly become part of my daily toolbox. Not to magically make me faster, but to make my work more consistent, more complete, and better thought through.

From concerns to reusable prompts

Every architect has the same repeating questions:

  • Is this secure?
  • Will this scale?
  • Where are the failure points?
  • Is this too complex for the team?

I now turn these checks into reusable prompts. Instead of walking through each concern manually every time, I use AI to review designs systematically:

  • “Review this architecture for security risks.”
  • “List scalability bottlenecks.”
  • “What compliance concerns should I consider here?”

With tools like ChatGPT, AWS Bedrock, or agents built with LangChain or CrewAI, these checks become repeatable workflows. It is basically a personal architecture checklist that never forgets anything.
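A minimal sketch of what such a reusable checklist can look like in plain Python. The `ask_model` callable is a hypothetical stand-in for whatever LLM client you actually use (the ChatGPT API, Bedrock, or a LangChain chain); the point is that every design runs through the same checks.

```python
# A reusable architecture-review checklist as prompt templates.
# ask_model is a hypothetical stand-in for your LLM client of choice.

REVIEW_CHECKS = {
    "security": "Review this architecture for security risks:\n\n{design}",
    "scalability": "List scalability bottlenecks in this architecture:\n\n{design}",
    "compliance": "What compliance concerns should I consider here?\n\n{design}",
    "failure": "Where are the failure points in this design?\n\n{design}",
}

def build_review_prompts(design: str) -> dict[str, str]:
    """Fill every checklist template with the design description."""
    return {name: tpl.format(design=design) for name, tpl in REVIEW_CHECKS.items()}

def run_review(design: str, ask_model) -> dict[str, str]:
    """Run each check through the model and collect the answers."""
    return {name: ask_model(prompt)
            for name, prompt in build_review_prompts(design).items()}
```

Because the checklist lives in one dictionary, adding a new concern is a one-line change, and no design ever skips a check by accident.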

Handling long documents

RFPs, specs, vendor docs… no architect escapes them. AI is excellent at chewing through these:

  • Summarising key requirements
  • Extracting constraints
  • Comparing versions

Instead of reading hundreds of pages, I start with AI summaries and then verify what matters most. It saves time, but more importantly, it ensures I don’t miss important details buried deep in documents. It doesn’t replace reading the documents ourselves, but it helps us get to the important parts faster.
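The summarise-then-verify workflow can be sketched in a few lines. The `summarise` callable below is a hypothetical placeholder for an LLM call; the chunking step is what keeps each request within the model’s context window, which is the usual obstacle with hundred-page RFPs.

```python
# Sketch of a summarise-then-verify workflow for long documents.
# summarise is a hypothetical stand-in for an LLM call.

def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split a document into paragraph-aligned chunks of roughly max_chars."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}" if current else paragraph
    if current:
        chunks.append(current)
    return chunks

def summarise_document(text: str, summarise) -> str:
    """Summarise each chunk, then combine the partial summaries."""
    partials = [summarise(f"Summarise the key requirements and constraints:\n\n{c}")
                for c in chunk_text(text)]
    return summarise("Combine these partial summaries into one overview:\n\n"
                     + "\n".join(partials))
```

The final combined summary is the starting point, never the end point: anything that drives a decision still gets verified against the source pages.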

Testing the architecture

Another useful pattern is asking AI to act as a risk finder:

  • “List failure scenarios for this design.”
  • “What could go wrong in production?”
  • “Which components are single points of failure?”

The answers are not always perfect, but they frequently surface risks worth discussing with the team. It feels like having a permanent design-review partner who keeps asking, “What if?”

Quick diagrams

AI also helps produce first-draft diagrams. With Mermaid or simple text descriptions, I can quickly generate system views that I later refine. They are not final deliverables but useful working tools for visualising and communicating ideas faster and more clearly. From sequence to use-case diagrams, Mermaid charts are quickly generated, and fine-tuning and validation are easily done in the Mermaid online editor.
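As a small illustration, here is the kind of first-draft Mermaid sequence diagram an AI can generate from a one-sentence description (the services here are purely hypothetical), ready to be pasted into the Mermaid online editor for fine-tuning:

```mermaid
sequenceDiagram
    participant Client
    participant GW as API Gateway
    participant Orders as Order Service
    Client->>GW: POST /orders
    GW->>Orders: create order
    Orders-->>GW: order id
    GW-->>Client: 201 Created
```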

Using AI as a devil’s advocate

One of my favourite use cases is asking AI to attack my design:

“Play devil’s advocate: what is wrong with this architecture?”

It bluntly highlights fragile assumptions, complexity hotspots, or operational risks I may have discounted. This doesn’t replace peer reviews, but it prepares me better for them.

Where AI (still) fails

AI still struggles with two things:

  • Domain context – it doesn’t fully understand specific business rules or niche industries.
  • Organisational reality – politics, budgets, team skills, company culture.

AI may propose technically perfect solutions that are completely unrealistic for a given company. This is where the architect’s judgement remains essential. AI advises, humans decide.

The real benefit

Using AI does not make me dramatically faster. The real benefit is quality:

  • More consistent checks
  • Better documentation
  • Fewer blind spots in designs

AI is not replacing architects. It’s becoming a very useful assistant that handles analysis, drafting, and cross-checking, while I stay focused on business context, trade-offs, and communication.

For me, this new toolbox is simple: prompts for structure, patterns for reuse, and trade-offs made by humans, not machines.

What’s next

In upcoming posts, I’ll zoom in on all of this in much more detail: my actual prompt templates, building small agents with LangChain and CrewAI, AI-driven architecture reviews, and where Human Common Sense still beats Large Language Models every single time.