
Working with AI in a Team

Every guide about AI coding assumes you're working alone. You're not. This guide covers the coordination problems that emerge when a whole team uses AI — shared context files, prompt strategies, PR workflows, and onboarding in a codebase where AI is already part of how things get built.

Last reviewed: Apr 22 2026


The Coordination Gap

When you use AI by yourself, your setup is entirely your own. Your system prompts, your .cursorrules, your prompt habits — they live on your machine and evolve with your preferences. You tune them over time. They get good because you use them constantly and fix what doesn't work.

When you work on a team, this breaks in two specific ways.

The divergence problem. Eight developers, each with their own AI configuration, each building slightly different habits around prompting, each getting slightly different output. One person always asks for tests. One never does. One gets results in the team's error-handling style. Another gets a different pattern that AI invented. Over time, the codebase accumulates these variations. Reviews get harder. Patterns are inconsistent. Nobody is wrong — but nobody is coordinated.

The knowledge silo problem. You've spent three weeks learning what prompts work well for your specific codebase — how to ask for code that fits your service layer, how to get tests in the right format, what context AI needs to stop hallucinating the internal API. That knowledge lives in your head. When a teammate asks AI for the same thing, they're starting from scratch. A solution that took you three hours to arrive at takes them a whole day.

The Insight

The value of good AI configuration multiplies with team size. A shared context file that took two hours to write benefits every developer on the team, every day. A prompt that took thirty minutes to get right benefits everyone who copies it. The leverage is in the sharing.

This guide covers four areas: shared context files (where AI gets its instructions), AI in the PR workflow (before and during review), shared prompt strategies (accumulated team knowledge), and onboarding (using AI to learn a new codebase and teaching others to do the same).


Part 1: Shared Context Files

Every major AI coding tool reads a project-level configuration file that tells it how to behave in your specific codebase. These files are the single highest-leverage place for team coordination around AI. When they're good, every developer's AI sessions start with the right context automatically. When they're missing or neglected, every session starts from scratch.

The Files and What They Do

The file format varies by tool, but the concept is the same: instructions that AI reads at the start of every session, before you say anything.
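The common files, named by tool (a partial list; check each tool's documentation for the current name):

```text
CLAUDE.md                         # Claude Code
.cursorrules / .cursor/rules/     # Cursor (rules/ holds multiple .mdc files)
.github/copilot-instructions.md   # GitHub Copilot
```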

All of these files should live in version control. They're part of the project, not part of an individual developer's setup.

What to Put in the Context File

The goal is to tell AI everything it needs to generate code that fits your codebase — without you explaining it in every session. A useful team context file covers:

# [Project Name] — AI Context

## Stack
# List versions that matter. AI's training data contains multiple
# versions of every library. Specifying versions prevents it from
# suggesting deprecated patterns.
- TypeScript 5.x, Node.js 20+
- Express 4.x — NOT Fastify, NOT Hono
- PostgreSQL via Kysely (NOT TypeORM, NOT raw SQL except migrations)
- Zod for validation — applied at every API boundary
- Vitest for unit tests, Supertest for integration tests

## Project structure
# Tell AI where things live so it puts new code in the right place.
- src/routes/      — route handlers (thin, no business logic here)
- src/services/    — business logic
- src/db/queries/  — database queries (never inline in services)
- src/middleware/  — Express middleware
- src/types/       — shared TypeScript types
- tests/           — integration tests, mirroring src/ structure (unit tests sit next to their source files)

## Patterns to follow
# Positive instructions: what to do.
- Use Result<T, AppError> return types for operations that can fail
- Log with the logger at src/lib/logger.ts — never console.log
- Read config from src/config.ts — never process.env directly
- Service functions are pure: no request/response objects

## Patterns to avoid
# Negative instructions are often more important than positive ones.
# AI will default to these patterns if you don't explicitly forbid them.
- Do NOT use `any` in TypeScript — use `unknown` and narrow explicitly
- Do NOT put business logic in route handlers or middleware
- Do NOT use class components in React — functional components only
- Do NOT import directly from node_modules paths — use the re-exports in src/lib/

## Security rules (non-negotiable)
# Security constraints stated here apply to every session automatically.
- Never log request bodies that may contain passwords, tokens, or PII
- All external input must pass Zod validation before touching business logic
- Use parameterized queries — never string-concatenate SQL
- Auth middleware is src/middleware/auth.ts — always apply to protected routes
- No hard-coded secrets, even in tests — use environment variables

## Testing conventions
- Every service function needs a unit test
- Every API endpoint needs an integration test against a test database
- Test files live next to source files: src/services/user.ts → src/services/user.test.ts
- Use describe/it blocks, not test()
- Mock at the module boundary, not inside functions

The Most Important Section

"Patterns to avoid" is often more valuable than "patterns to follow." AI has default preferences that come from its training data. If your team chose a different approach — a specific error-handling pattern, a specific service structure, a specific logging library — you need to explicitly disable the default. Otherwise AI will use it anyway.

Cursor's Rule Files in Depth

Cursor's .cursor/rules/ directory lets you split your context into multiple files, each activated differently. This is useful for larger teams where different parts of the codebase have different conventions.

.cursor/rules/
  base.mdc           # Always active — core stack, patterns, security rules
  frontend.mdc       # Active when editing files in src/components/ or src/pages/
  backend.mdc        # Active when editing files in src/routes/ or src/services/
  migrations.mdc     # Active when editing files in migrations/
  tests.mdc          # Active when editing *.test.ts or *.spec.ts files

Each rule file declares its own activation in frontmatter. For example, migrations.mdc:

---
description: Rules for writing database migrations
globs: migrations/**/*.ts, migrations/**/*.sql
alwaysApply: false
---

# Migration Rules

- All migrations are irreversible by default — write them to be safe to run in production
- Never DROP COLUMN or DROP TABLE in a migration — mark columns as deprecated in a comment and schedule removal for a future release
- Add indexes in a separate migration from the column change — never in the same ALTER TABLE
- Migrations run in filename order — prefix with timestamp: 20260414_add_user_roles.ts
- Test migrations against a copy of production schema before merging
- Every migration needs a corresponding rollback script in migrations/rollbacks/

The globs field activates the rule only when the open file matches the pattern. Developers editing a migration file automatically get migration-specific rules without those rules polluting every other context.
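As a rough illustration of what that matching does (real tools use a proper glob library; this regex only approximates `migrations/**/*.ts`):

```typescript
// Approximate the glob "migrations/**/*.ts" with a regex, purely to show
// which open files would activate the migrations rule.
const migrationGlob = /^migrations\/(?:.+\/)?[^/]+\.ts$/;

migrationGlob.test("migrations/20260414_add_user_roles.ts"); // true: rule active
migrationGlob.test("migrations/2026/add_user_roles.ts");     // true: nested dirs match
migrationGlob.test("src/services/user.ts");                  // false: rule inactive
```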

Maintaining the Context File as a Team

The context file is a living document. It should evolve as your codebase evolves, as you discover new AI failure modes, and as your conventions change. The maintenance process is simple but important: changes go through a pull request like any other code, anyone on the team can propose a rule, and rules that no longer match reality are removed rather than left to accumulate.

The Rule That Pays Off Most

The most valuable rules in a context file are the ones that encode decisions you had to make explicitly — where you evaluated options and chose one. "Use Kysely for all database access" is more valuable than "write clean code." The explicit decisions are exactly the ones AI won't infer from context and will get wrong without instructions.

What Doesn't Belong in the Context File

The context file is not a style guide, a wiki, or a document for human readers. Keep it focused on instructions AI needs to generate code correctly. Long architectural explanations, team process notes, and anything meant primarily for humans belong in your regular documentation, where they won't consume context space in every session.


Part 2: AI in PR Review

Pull request review is where the quality of AI-generated code gets tested. It's also the place where teams accumulate the most inconsistency in how they handle AI. Some developers disclose everything. Some disclose nothing. Some use AI to review before submitting. Some submit immediately. Without a shared approach, reviewers can't calibrate their expectations.

Before You Open the PR: AI Self-Review

The most valuable use of AI in the PR workflow happens before review opens — when you review your own AI-assisted work before anyone else sees it. This catches the category of errors that AI is most likely to introduce: hallucinated API usage, missing error paths, authorization gaps.

The key is adversarial framing. Don't ask AI to validate your code — ask it to find problems:

You

Here is a diff of changes I'm about to submit for review. This code was largely AI-generated. I want you to find problems — not validate correctness. Specifically look for:

  • API calls or method names that may not exist in the library version we're using
  • Happy path code with no error handling on the failure path
  • Authentication checks that are missing authorization (checking "logged in" but not "allowed to access this resource")
  • Database operations that should be in a transaction but aren't
  • Logging that might capture sensitive data
  • Anything that diverges from the patterns in our CLAUDE.md

If everything is fine, say so explicitly. Otherwise, list issues with file and line number.

[paste the diff]

Run this on every PR where AI did significant work. You'll catch a category of issues that your own review misses — because when you wrote the code alongside AI, you're too close to it to see the gaps.

What to Include in the PR Description

The standard PR description was designed for human-authored code. For AI-assisted PRs, reviewers need additional information to calibrate their review depth and attention. A minimal addition to your PR template:

## AI assistance
- [ ] Fully AI-generated from prompt
- [ ] AI-suggested approach, manually implemented
- [ ] AI-assisted (significant manual modification)
- [ ] Human-written with AI review only

## Self-review checklist (AI-assisted PRs)
- [ ] Verified all external API calls exist in the version we're using
- [ ] Manually tested error paths, not just happy path
- [ ] Checked authorization, not just authentication
- [ ] Database operations are transaction-safe
- [ ] No sensitive data in logs

## Where to look hardest
[What parts of this PR are you least confident about?
Where did AI struggle? Where did you override AI suggestions?]

The "where to look hardest" field is the most valuable addition. AI-assisted code often has a specific part where the developer was uncertain — where AI gave plausible-but-unverified output, or where the domain complexity exceeded what AI could handle well. Making that uncertainty explicit directs reviewer attention to where it's most needed.

How Reviewers Should Use AI on PRs

Reviewers can use AI as a second reader — distinct from the author, with no prior context, asked to be adversarial. The value is that AI has no social pressure to be polite. It will name problems a human reviewer might soften.

You (as reviewer)

You are reviewing a pull request in a TypeScript/Express/PostgreSQL application. The author used AI assistance heavily. Your job is to find problems, not approve.

Specifically flag:

  • Methods or properties called on objects that may not exist
  • Missing validation on external inputs
  • Error states that aren't handled
  • Places where a race condition or concurrent request could cause incorrect state
  • Security issues: injection, data exposure, broken access control
  • Patterns that diverge from the standard approach described in our CLAUDE.md: [paste relevant rules]

For each issue: file name, approximate line, category, and what specifically is wrong. If nothing is wrong, say "No issues found."

[paste the diff]

Note the instruction to paste the relevant rules from your CLAUDE.md. Without this context, AI doesn't know what your team's standard patterns are, and it can't flag deviations.

The Speed Asymmetry Problem

AI writes code faster than humans review it. If a developer can produce three features a day with AI assistance, review becomes the bottleneck. The temptation is to review faster — to skim, to approve on the assumption that "AI probably got it right."

This is wrong in the specific direction AI fails. AI errors aren't random typos or syntax mistakes that jump out in a quick skim. They're structural: a method that doesn't exist, a missing authorization check, a database operation outside a transaction. These require deliberate reading, not faster reading.

The solution is tiered review — deciding how much attention a PR needs based on risk, not volume. For example: a low-risk tier (documentation, test-only changes, config tweaks) gets a quick pass; a standard tier gets normal review; a high-risk tier (auth, payments, migrations, anything touching user data) gets deliberate line-by-line reading regardless of how the code was produced.

Classify PRs at creation time. The author self-declares a tier; reviewers can escalate. This keeps review quality high where it matters without creating a bottleneck on low-risk changes.

What AI Review Misses

AI review is a useful check but not a substitute for human review. AI misses: business logic authorization errors where the check exists but applies the wrong rules, race conditions that require understanding of your specific concurrency model, context about whether this change is consistent with recent decisions the team made verbally, and anything that requires understanding your specific data model or user population.


Part 3: Team Prompt Strategies

Good prompts are discovered, not invented. A prompt that works well for your codebase — that generates code in the right style, catches the right issues, asks the right clarifying questions — represents accumulated learning that took time to arrive at. That learning shouldn't live in one person's head.

A shared prompt library is how teams capture that learning. Not a Notion document with a few examples — a version-controlled set of prompts, structured for reuse, with honest documentation of where each prompt works and where it doesn't.

What Belongs in the Team Library

The criterion is simple: does this prompt depend on knowing your team's specific stack, conventions, or codebase? If yes, it belongs in the team library. Generic prompts that work anywhere don't need to be shared — they're the same for everyone.

Good candidates for a team prompt library: generation prompts tied to your stack (a new endpoint, a new component), review prompts that encode your security rules, test-generation prompts that match your testing conventions, and exploration prompts for understanding unfamiliar parts of the codebase.

Structure for Reusable Prompts

A prompt that works for you in your session often doesn't work for a teammate who doesn't have the surrounding context you had. Reusable prompts need to be self-contained — enough context baked in that anyone on the team can use them without additional explanation.

# New API Endpoint

## When to use
When adding a new route to the Express API.

## Context to provide before using this prompt
- Paste the relevant Zod schema (or describe the data shape)
- Describe what the endpoint should do
- Mention which service functions already exist vs. which need to be created

## The prompt
I need to add a new API endpoint to our Express/TypeScript/PostgreSQL application.

Stack context:
- Routes go in src/routes/ — thin handlers only, no business logic
- Business logic goes in src/services/ — plain functions, no req/res
- Database queries in src/db/queries/ — using Kysely
- All inputs validated with Zod before reaching service layer
- Services return Result<T, AppError> — never throw from services
- Errors are handled by the global error handler in src/middleware/errors.ts
- Use the logger at src/lib/logger.ts — never console.log

Endpoint spec:
[DESCRIBE THE ENDPOINT HERE]

Please:
1. Create the Zod validation schema for the request
2. Create the route handler in src/routes/[ROUTE FILE]
3. Create or update the service function in src/services/[SERVICE FILE]
4. Add the database query to src/db/queries/[QUERY FILE] if needed
5. Write unit tests for the service function
6. Write an integration test for the route using Supertest

## Expected output
Separate code blocks for each file, with the file path as the label.
Each file should be complete and importable without modification.

## Known limitations
- Does not generate DB migrations — run that separately
- May not know about custom middleware applied to specific route groups — check src/routes/index.ts
- Integration tests may need the test DB seed updated if new data shapes are needed

Where to Store the Prompt Library

The library lives in the repository, in a prompts/ directory at the root. Organized by category, version-controlled, reviewed like code. Not in Notion, not in a shared Slack channel, not as bookmarks in someone's browser.

prompts/
  README.md              # How to use the library, how to contribute
  backend/
    new-api-endpoint.md
    add-validation.md
    security-review.md
    database-query.md
  frontend/
    new-component.md
    add-form.md
    state-management.md
  review/
    pre-submit-review.md
    security-check.md
    pr-description.md
  testing/
    unit-test-generation.md
    integration-test.md
    edge-cases.md
  debugging/
    error-investigation.md
    performance-analysis.md
  onboarding/
    explore-module.md
    understand-flow.md

Contributing a New Prompt

The contribution process should be lightweight — a PR with the prompt file and a brief note about what problem it solves. The review bar is practical, not pedantic: the prompt has been used successfully on at least one real task, the "when to use" and "known limitations" sections are filled in, and the expected output is described.

A prompt that meets these criteria is better than most of what individuals run in their personal sessions. The bar isn't perfection — it's "documented and tested."

The "Known Limitations" Field is Non-Negotiable

A prompt library without known limitations documentation becomes a trap: teammates use prompts confidently in situations where the prompt fails, wonder why they get bad output, and either waste time or stop trusting the library. Document the failure modes. "This prompt works well for service-layer functions but generates incorrect output for middleware" saves someone two hours of debugging.

Keeping the Library Useful

Prompt libraries decay. Dependencies ship new major versions. Conventions change. New patterns replace old ones. A stale prompt library is worse than no library — people discover the prompts are wrong and stop trusting all of them.


Part 4: Onboarding with AI

Joining a new codebase is one of the hardest things a developer does. The first weeks are spent pattern-matching: how does error handling work here, where does business logic live, how are tests structured, which parts of the code are stable and which are in flux. Previously this required a mix of documentation (sparse), pair programming (expensive), and reading a lot of code (slow).

AI changes this significantly. You can ask the codebase to explain itself.

The First Day: Getting the Mental Model

On day one, before writing a single line of code, spend an hour with AI building a map of how the system works. This isn't about generating code — it's about understanding enough to generate code correctly later.

You (new joiner)

I've just joined this project. I'm going to paste some key files. I want you to explain the architecture — not the code line by line, but the big picture: how requests flow through the system, how data access is structured, how errors propagate, and what the testing approach is. Ask me for more files if you need them to complete the picture.

Claude

I'll need a few files to get the full picture. Start with:

  • The main entry point (usually src/index.ts or server.ts)
  • One representative route handler
  • One representative service file
  • One test file
  • Your CLAUDE.md or context file if you have one

That'll let me trace a full request lifecycle and understand the patterns.

This conversation produces something that would take days to assemble from documentation: a concise explanation of how this specific codebase works, in language calibrated to what you actually asked about. It's not perfect — AI will occasionally misread patterns or miss context that's obvious to experienced team members — but it's dramatically faster than reading code cold.

The Onboarding Prompt Set

Several prompts are worth having in the library specifically for new joiners:

# Explore an Unfamiliar Module

## When to use
When you need to understand a part of the codebase before making changes to it.

## Context to provide
- Paste the files in the module you need to understand
- Describe what you're trying to do (what change are you planning to make?)

## The prompt
I'm about to make changes to [MODULE NAME] in our codebase. I'm not yet familiar with this part.
Please explain:

1. What this module is responsible for (its role in the system)
2. What the main data flows are — how data enters, transforms, and leaves
3. What patterns this module uses for error handling and validation
4. What tests exist and what they cover
5. What I should be careful not to break when making changes in this area
6. What I should read next to understand the parts that depend on this module

Files I'm attaching: [LIST THE FILES]

Be specific to these files, not generic. If something is unclear or I've given you
insufficient context, tell me what additional files would help.

## Expected output
A structured explanation with the six sections above. Specific references to
the actual code patterns you see in the files provided.

## Known limitations
- AI builds its explanation from the files you provide — if you miss an important
  dependency, the explanation will be incomplete
- May miss informal conventions that aren't visible in the code (ask a teammate for those)
- Should be treated as a starting point for understanding, not a definitive reference

# Trace a Request Flow

## When to use
When you want to understand how a specific user action is handled end-to-end.

## Context to provide
Describe the user action or API call you want to trace.
Paste the relevant route handler, service, and DB query files.

## The prompt
Trace this request through the system: [DESCRIBE THE REQUEST — e.g., "POST /tasks with a title, due date, and project_id"]

Walk through each step:
1. Where does validation happen? What gets rejected and why?
2. What does the route handler do with valid input?
3. What does the service layer do? What can fail here?
4. What database operation happens? What does the query return?
5. How does the response get built?
6. What does the error path look like if the database is unavailable? If the input is invalid? If the user doesn't have permission?

Reference actual function names and file paths from the code I've provided.

## Expected output
A numbered walkthrough with specific references to functions and files.
Separate "happy path" from "error paths."

Onboarding New Teammates

When a new developer joins your team, the context file and prompt library you've built are as useful for onboarding as they are for daily development. Point new joiners at the context file as the canonical description of team conventions, and at the prompts in prompts/onboarding/ as their first tool for exploring the code.

The Onboarding Paradox

New developers who use AI heavily in their first weeks can produce working code before they understand the codebase. This looks like productivity but is a risk: they're generating code in a context they don't yet fully understand, and reviews catch less because reviewers assume the new person learned the patterns. Explicit onboarding structure — starting with understanding, not generation — prevents this.

Using AI to Update Onboarding Materials

Onboarding documentation goes stale faster than any other documentation. Every team has a README that describes the architecture from two years ago. AI can help keep it current:

You

Here is our current onboarding README and here are the current versions of the key files it describes. Identify: (1) what the README says that is now inaccurate or outdated, (2) what's missing from the README that someone joining today would need to know. Be specific — reference both the README text and the current code.

This doesn't write the documentation for you — it tells you what to fix. Run it every quarter, or before every significant new hire. The actual writing still needs human judgment, but identifying gaps is a mechanical task AI does well.


Part 5: Day-to-Day Team Habits

Beyond the systems — the context file, the prompt library, the PR process — there are habits that make day-to-day team AI use smoother. These are the small coordination practices that prevent common friction.

Sharing Discoveries in the Moment

When you find a prompt that works especially well, or discover that AI consistently gets something wrong in a specific situation, that's signal for the team. The question is where it goes: not in Slack where it'll be buried by tomorrow, but somewhere the team will actually use it.

A lightweight system that works: a dedicated Slack channel or GitHub Discussion thread called "AI discoveries" — one post per discovery, short, with a link to a PR where it was used or a code snippet of the failing output. Anything that gets three reactions gets promoted to the context file or prompt library.

Retrospective Questions Worth Asking

Add AI-specific questions to your regular retrospectives. Not "did we use AI enough?" — that question produces defensiveness, not learning. Ask about patterns and failures instead: What did AI consistently get wrong this sprint? Which prompts worked well enough to share? Where did we spend time fixing AI output that a context file rule could have prevented?

The outputs from these questions go directly back into the context file and prompt library. This is how team AI configuration improves over time — not by individual developers getting better in isolation, but by collective learning feeding back into shared infrastructure.

Deciding What AI Handles vs. What It Doesn't

Not all tasks benefit from AI equally. Teams that use AI indiscriminately — for everything, without calibration — often waste time on tasks where AI is unreliable and miss opportunities where it's excellent. It's worth making explicit, as a team, what categories of work AI is trusted for: for example, high confidence for boilerplate and test scaffolding, moderate confidence for service-layer business logic, low confidence for auth, concurrency, and schema migrations.

These categories are specific to your team and codebase. The important thing is making them explicit so developers don't have to make the judgment individually every time.

Calibration Over Time

These categories should shift as your context file improves and as your team gets better at prompting for your specific codebase. If AI consistently gets service layer code right because your context file is good, that moves from "moderate" to "high confidence." Calibrate based on actual outcomes in your retrospectives, not on general assumptions about what AI can and can't do.

Handling Disagreements About AI Use

Teams using AI will develop different comfort levels. Some developers will use it for almost everything; others will use it reluctantly for specific tasks. Both are legitimate. The coordination problem isn't about adoption rate — it's about the shared infrastructure.

Two things worth making explicit to prevent friction: the shared infrastructure (context file, PR disclosure fields, prompt library) applies to everyone regardless of how heavily they personally use AI, and nobody is obliged to use AI for work where they are genuinely more effective without it.


Getting Started

If your team is currently using AI individually with no shared coordination, don't try to implement everything at once. A realistic four-week path from "zero coordination" to "basic shared infrastructure":


Week 1: Audit what exists

Have each developer share what they have in their personal context files and their three most-used prompts. You'll likely find a lot of overlap — people have arrived at similar answers to similar problems independently. That overlap is the starting point for your shared context file.


Week 2: Write the first shared context file

Take the overlap from week 1 — the rules everyone already follows — and put them in a single CLAUDE.md / .cursorrules. Commit it. Have everyone switch to it. Run one sprint with it and collect feedback: what did AI get right that it got wrong before? What new issues appeared?


Week 3: Add one team prompt and update the PR template

Pick the one prompt that would benefit the most developers if it were shared — probably a code review prompt or a test generation prompt. Write it up properly: when to use, context to provide, known limitations. Create the prompts/ directory. In parallel, add the AI disclosure fields to your PR template.


Week 4: Run the first AI retrospective

Ask the retrospective questions listed above. What did AI get wrong this sprint? What worked well? Update the context file with any new rules that emerged. Add one or two prompts from discoveries during the sprint. This is the feedback loop that makes everything improve — run it every sprint from here on.

After four weeks you'll have: a shared context file in version control, a small prompt library with a few trusted prompts, a PR process that makes AI use visible, and a retrospective practice that keeps it improving. That's the foundation everything else builds on.


Working with AI in a Team — Summary

The leverage of AI configuration multiplies with team size. Keep the shared context file in version control and maintain it like code. Make AI use visible in PR descriptions and review based on risk, not volume. Capture prompts that work in a shared, documented library with honest known-limitations notes. Onboard new developers with understanding-first prompts before they generate code. And feed retrospective findings back into the context file and prompt library; that loop is what makes the whole system improve.

Related Guides

AI for Technical Leads & Architects

The leadership-side companion: how leads define the shared context file, build the PR process, and make architecture decisions with AI.

AI Prompt Library

A concrete example of what a mature shared prompt library looks like: 53 prompts across 10 categories, searchable and copy-paste ready.

CLI-First AI Development

If your team uses Claude Code or Aider, this guide covers CLAUDE.md setup, shell automation, and terminal workflows in depth.
