Process Smells in AI-Assisted Workflows
Specification:
- You keep correcting AI mid-task instead of front-loading intent → the spec was undercooked
- You accept "close enough" output and manually patch it → acceptance criteria are vague
- You can't tell if output is correct without running it → you've lost architectural grip on the area
Context Management:
- You re-explain the same project context across sessions → no persistent briefing doc
- You forget where a parallel thread left off → no re-entry protocol
- You hold task state in your head instead of externalizing it → doesn't scale past ~3 threads
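The context smells above share one fix: externalize state. A minimal sketch of a per-thread state file that doubles as a re-entry protocol (the directory name, file format, and fields here are illustrative assumptions, not anything prescribed above):

```python
import json
from pathlib import Path

# Hypothetical location for per-thread briefing files.
STATE_DIR = Path(".ai-threads")

def save_thread(name: str, goal: str, last_step: str, next_step: str) -> None:
    """Persist where a thread left off so re-entry doesn't rely on memory."""
    STATE_DIR.mkdir(exist_ok=True)
    state = {"goal": goal, "last_step": last_step, "next_step": next_step}
    (STATE_DIR / f"{name}.json").write_text(json.dumps(state, indent=2))

def resume_thread(name: str) -> str:
    """Re-entry protocol: one call answers 'where did this thread leave off?'."""
    state = json.loads((STATE_DIR / f"{name}.json").read_text())
    return (f"{name}: goal={state['goal']} | "
            f"last={state['last_step']} | next={state['next_step']}")
```

Pasting the `resume_thread` summary at the top of a new session replaces re-explaining the project from scratch, and the same file answers "where did this thread leave off?" across parallel threads.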
Delegation:
- You do a task manually because "it's faster than explaining it to AI" → avoiding the spec investment (fine for a one-off, a smell if it recurs)
- You micromanage AI output token by token instead of reviewing at the outcome level → you haven't let go of the maker identity
- You can't walk away from a running agent session → the trust/verification pipeline isn't set up
Verification:
- You approve AI PRs faster than you'd approve a junior's → calibration drift
- You review diffs line by line when you should be checking behavior → wrong verification level
- You skip testing because "it looked right" → the false-fluency problem
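One way to make the verification level explicit is to encode the approval decision as a gate on behavior evidence rather than on how the diff reads. A toy sketch (the field names and the risk policy are invented for illustration):

```python
def can_approve(change: dict) -> bool:
    """Outcome-level gate: approve on behavior evidence, not on reading the diff.

    `change` is a hypothetical record of a reviewed change; all keys are
    illustrative assumptions.
    """
    if not change.get("tests_ran"):
        return False  # "it looked right" is not evidence
    if not change.get("tests_passed"):
        return False
    # Line-by-line review is reserved for high-risk areas, not the default.
    if change.get("risk", "low") == "high":
        return change.get("line_reviewed", False)
    return True
```

The point is the shape of the check, not the code: the same gate applies to an AI PR as to a junior's, which is what keeps calibration from drifting.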
Prioritization:
- You start new AI tasks before verifying completed ones → generation is more fun than review
- You optimize tooling instead of shipping → the meta-work trap
- You can't articulate what you're blocked on → planning debt
Energy:
- You feel busy but can't point to what you shipped → motion vs. progress
- You're fatigued from reading, not writing → verification exhaustion (needs a different recovery than creative fatigue)
Somewhat related: