A half-day workshop that changes how your team uses AI.
Not a tool tutorial. A hands-on workshop that transforms your team from ad-hoc prompting to structured AI engineering. Your developers leave with workflows, review patterns, and a 30-day action plan they can use tomorrow.
Led by someone who builds production software with AI agents every day — not someone who read about it.
Your team has AI tools. They do not have AI workflows.
The gap between having AI tools and using AI effectively is the gap between typing prompts and engineering with AI.
3 hours. 4 modules. Immediate results.
Each module builds on the last. Your team leaves with workflows they can use the next day.
From ad-hoc prompting to structured workflows
45 min
Why most teams are vibe coding instead of engineering with AI. The CLAUDE.md pattern. Structured prompting that produces consistent, reviewable output.
Code review for AI output
45 min
AI generates different bugs than humans. Learn what to review, what to automate, and how to scale quality gates for higher-volume code output.
Hands-on: build a feature with AI agents
60 min
Live coding session using your team's actual codebase. Architecture decision first, then structured prompt, AI execution, and human review. The full workflow applied to a real task.
Team workflow design
30 min
Design your team-specific AI operating model. Which tasks get automated? What stays manual? How do you measure improvement? You leave with an action plan your team can start tomorrow.
What makes this workshop different.
Taught by a practitioner, not a presenter.
Daily practitioner
I build production software with AI agents every day. The workflows I teach come from shipping real products — 4 SaaS platforms, not PowerPoint slides.
Your codebase, not demos
The hands-on module uses your actual codebase. Your team practices on real context, not tutorial examples that do not translate to production.
Discipline, not tools
Tools change quarterly. Engineering discipline compounds permanently. This workshop teaches the thinking process, not the button clicks.
Take-home materials
CLAUDE.md templates, prompt patterns, review checklists, pre-commit hooks, and a 30-day action plan. Everything your team needs to continue independently.
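To give a flavor of the take-home materials, here is a minimal sketch of what a quality-gate pre-commit hook can look like. This is an illustrative example, not the actual workshop template: the marker patterns and the placeholder-scanning approach are assumptions, and every team tunes its own list.

```python
# Illustrative pre-commit hook sketch (not the workshop template):
# block commits whose staged changes still contain AI leftover markers.
import re
import subprocess
import sys

# Hypothetical markers that often survive in AI-generated code; adjust per team.
PLACEHOLDER_PATTERNS = [
    r"TODO\(ai\)",
    r"\bFIXME\b",
    r"raise NotImplementedError",
]

def find_placeholders(diff_text: str) -> list[str]:
    """Return newly added lines that contain a placeholder marker."""
    flagged = []
    for line in diff_text.splitlines():
        # Only inspect added lines; skip the "+++ b/file" diff headers.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        if any(re.search(p, line) for p in PLACEHOLDER_PATTERNS):
            flagged.append(line[1:].strip())
    return flagged

def main() -> int:
    # Read the staged diff; a non-zero exit status aborts the commit.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    flagged = find_placeholders(diff)
    for entry in flagged:
        print(f"blocked: {entry}", file=sys.stderr)
    return 1 if flagged else 0

if __name__ == "__main__":
    sys.exit(main())
```

The point of a gate like this is not the specific patterns but the discipline: AI output gets the same automated checks as human code before it reaches review.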
Common questions
What teams ask about the AI engineering workshop.
How long is the workshop?
Half day (3 hours). Enough time to cover structured workflows, code review patterns, hands-on practice, and team-specific workflow design. Compact enough to not disrupt your sprint. Thorough enough to change how your team works.
Is this remote or in-person?
Both options available. Remote workshops use screen sharing and collaborative tools for the hands-on sections. In-person works better for larger teams (8+) where real-time pairing adds value. Same content, same outcomes either way.
What team size works best?
Ideal is 4-12 developers. Smaller teams get more personalized hands-on time. Larger teams benefit from breakout sessions during the practical module. For teams over 12, I recommend splitting into two sessions for maximum engagement.
What skill level do participants need?
Mid-level to senior developers get the most value. The workshop focuses on workflow design and engineering discipline, not basic tool tutorials. Junior developers can attend but benefit most when paired with a senior during hands-on sections.
Do we need specific AI tools installed beforehand?
I will provide a setup guide 1 week before the workshop. Typically Claude Code or Cursor for the hands-on sections. If your team has existing AI tool preferences, the workflow patterns adapt to any tool — the discipline is tool-agnostic.
What do participants leave with?
A CLAUDE.md template customized for your codebase, structured prompt patterns, an AI-specific code review checklist, pre-commit hook examples, a task automation map for your team, and a 30-day action plan. All practical, all immediately usable.
How is this different from an AI tool tutorial?
Tutorials teach buttons and features. This workshop teaches engineering discipline — how to think before prompting, how to structure AI workflows, how to review AI output, and how to measure improvement. Tools change quarterly. Discipline compounds permanently.
What is the cost?
Contact me for pricing. It varies based on team size, remote vs. in-person, and whether you want follow-up coaching sessions. The ROI is typically visible within the first sprint — teams report 30-50% reduction in boilerplate engineering time.
Find out how much AI productivity your team is leaving on the table.
A quick conversation about your team size, current AI tools, and development workflow. I will tell you whether the workshop is the right fit — and if so, what your team should expect to gain.
Half-day investment. Permanent workflow improvement.
Schedule a Workshop
Remote or in-person. 4-12 developers. Immediate results.