Part 3: Process Appendix

How I collaborated with AI agents throughout this project — the working method, the prompting techniques, and where domain knowledge shaped the output.

This project was built with Claude as the primary collaborator, with NotebookLM and ChatGPT used for specific verification and cross-checking tasks. What follows is a short account of the working method I found productive, the techniques that made the collaboration useful rather than sloppy, and where my domain training cut across what the AI proposed.

What came from whom

Collaboration stance

Prompting techniques that worked

Structured output with provenance tracking

I asked Claude to return research outputs with explicit confidence tags — sourced, derived, judgement, hypothetical — next to every number, along with a named reference. This turned the research document into something I could audit at a glance rather than a wall of prose I had to trust.

Multi-agent verification

For citations that mattered, I pulled the original papers into NotebookLM and asked it to verify specific claims Claude had made. Running a second model against the same source material catches hallucinations that self-consistency checks within a single system would miss.
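The pattern generalises beyond those two tools: treat each model as an independent verifier and keep only the claims that every verifier confirms. A minimal sketch, with stub callables standing in for real model calls (no actual model API is invoked here):

```python
from typing import Callable

def cross_check(claim: str, verifiers: dict[str, Callable[[str], bool]]) -> dict[str, bool]:
    """Put one claim to several independent verifiers and record each verdict."""
    return {name: verify(claim) for name, verify in verifiers.items()}

def confirmed(results: dict[str, bool]) -> bool:
    # A claim survives only if every verifier, reading the original
    # source material, agrees with it independently.
    return all(results.values())

# Stub verifiers standing in for calls to two different models.
verdicts = cross_check(
    "The cited paper supports this figure",
    {"claude": lambda c: True, "notebooklm": lambda c: False},
)
```

Here the disagreement between verifiers is exactly the signal: a claim one model asserts and another rejects is the kind a single-system self-consistency check would let through.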

Intermediate artefacts before execution

Before asking Claude to build the interactive demo, I had it generate the outcome tree as a static diagram. Inspecting the structure as a standalone artefact caught design problems that would have been buried inside running code.
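A static artefact can be as simple as an indented text rendering of the tree. A sketch of the idea, assuming the tree is a nested dict (the outcome labels below are hypothetical, not the project's actual tree):

```python
def render_tree(node: dict, prefix: str = "") -> list[str]:
    """Render a nested dict of outcomes as an indented text diagram,
    so the structure can be inspected before any interactive code exists."""
    lines = []
    items = list(node.items())
    for i, (label, children) in enumerate(items):
        last = i == len(items) - 1
        lines.append(prefix + ("└── " if last else "├── ") + label)
        lines.extend(render_tree(children, prefix + ("    " if last else "│   ")))
    return lines

# Hypothetical outcome tree standing in for the real one.
tree = {"launch": {"adopted": {}, "ignored": {"pivot": {}, "shut down": {}}}}
diagram = render_tree(tree)
print("\n".join(diagram))
```

Reviewing the printed diagram surfaces structural problems (missing branches, lopsided depth) that would otherwise only show up once the interactive demo was already built around them.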

Persona-driven prompting

For open-ended discussion I assigned Claude specific system-level roles — start-up owner, VC investor, sceptical reviewer — which constrained the feedback distribution and kept me from drifting into vague affirmation.
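In API terms, assigning a role usually means pinning the system message. A sketch using the common system/user chat-message convention; the persona strings are placeholders, not the prompts actually used in the project:

```python
# Hypothetical persona definitions; real ones were longer and task-specific.
PERSONAS = {
    "startup_owner": "You are the owner of an early-stage start-up weighing this idea.",
    "vc_investor": "You are a venture investor deciding whether this merits funding.",
    "sceptical_reviewer": "You are a sceptical reviewer hunting for weak arguments.",
}

def persona_messages(role: str, question: str) -> list[dict]:
    """Build a chat-message list that pins the model to one persona,
    narrowing its feedback to that role's concerns."""
    return [
        {"role": "system", "content": PERSONAS[role]},
        {"role": "user", "content": question},
    ]

msgs = persona_messages("sceptical_reviewer", "What is the weakest claim here?")
```

Cycling the same question through each persona yields three distinct critique distributions instead of one blandly agreeable answer.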

Where domain judgement shaped the output

AI agents have reasonable defaults — but defaults are generic. Three categories of intervention mattered most: