
Article · Field notes

Interviews in the Age of AI: Build Together

13 Jan 2026 · 3 min read


I started with a small interview challenge and turned it into a public project:

What I care about isn’t the stack. Interviews still test for a world that doesn’t exist anymore.


The old interview format is dying

For years, technical interviews looked like this:

  • no internet
  • no docs
  • no helper tools
  • one person, one editor, one timer

I never loved this format.

It tests memory and stress response. It doesn’t test how someone actually works on a real team.

In real life we use docs, search, AI tools, and each other. Interviews should reflect that.

What I want interviews to become

Start with a shared build. For example, a T3-based address book:

  • fetch data
  • search
  • filter
  • sort
  • render responsive cards
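
The search, filter, and sort features above are small enough to sketch in a few lines. This is an illustrative sketch only — the `Contact` shape and function names are hypothetical, not taken from the actual project:

```typescript
// Hypothetical contact shape for the address book exercise.
type Contact = { name: string; city: string; favourite: boolean };

// Case-insensitive substring search over the name field.
function searchContacts(contacts: Contact[], query: string): Contact[] {
  const q = query.toLowerCase();
  return contacts.filter((c) => c.name.toLowerCase().includes(q));
}

// One possible filter: favourites only.
function filterFavourites(contacts: Contact[]): Contact[] {
  return contacts.filter((c) => c.favourite);
}

// Locale-aware sort by name, without mutating the input array.
function sortByName(contacts: Contact[]): Contact[] {
  return [...contacts].sort((a, b) => a.name.localeCompare(b.name));
}
```

Even a toy like this opens real questions: should search match cities too? Should sort be stable across locales? Those questions are the point of the exercise.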

Then do the part that matters, together:

  1. Live code a new requirement
  2. Write a mini spec on the fly
  3. Debate technical trade-offs
  4. Sketch algorithm choices
  5. Decide what to postpone and why

This is where engineering shows up for me. Not “who remembers API trivia”, but:

  • who asks sharp questions
  • who can structure ambiguity
  • who collaborates under constraints
  • who can make safe, reversible decisions quickly

The AI-era skill isn’t coding faster

For me, the AI-era skill is thinking better with other people.

AI produces code quickly. It can’t own accountability, production risk, or product judgment.

In an interview, I pay attention to:

🎯 Can we turn an idea into a clear spec?
🛡️ Can we split work into safe increments?
🔍 Can we identify failure modes early?
📢 Can we explain trade-offs to non-engineers?
🔄 Can we adapt when assumptions break?

From one address book, many interview directions

One small base project opens up real discussions.

From the same address book, I can explore with a candidate:

  • product thinking (what problem are we actually solving?)
  • UX decisions (search behaviour, empty states, mobile-first flows)
  • API design (contracts, versioning, error models)
  • data modelling (normalisation, indexing, trade-offs)
  • algorithm choices (sort/filter strategy, performance implications)
  • testing strategy (unit vs integration vs end-to-end)
  • reliability (timeouts, retries, graceful degradation)
  • security and auth boundaries
  • observability (logs, traces, metrics, alerting)
  • team collaboration (specs, RFCs, code review style, rollout planning)
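
The reliability direction, for instance, fits in a few minutes of pairing. A minimal sketch of the kind of helper that discussion might produce — bounded retries with a per-attempt timeout. The names and numbers here are illustrative, not from the project:

```typescript
// Reject if the promise doesn't settle within `ms` milliseconds.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), ms)
    ),
  ]);
}

// Retry a flaky async operation up to `attempts` times,
// giving each attempt its own timeout budget.
async function retry<T>(
  fn: () => Promise<T>,
  attempts: number,
  timeoutMs: number
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

The interesting conversation isn't the code — it's deciding the retry count, whether to back off, and what the user sees while we wait.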

That’s why I like this format. We can build, discuss, challenge assumptions, and see how someone thinks as the situation changes.


The interview I want to have

Give me a real problem. Let’s build version one together. Then ask: “where do we take this next, and why?”

I learn more from that conversation than from any perfect kata.

In this AI era, I trust engineers who can think, ship, and evolve systems with other people.

Give me a real problem. Let’s build version one together.

That tells me much more than speed alone.