You bought the licenses. GitHub Copilot, maybe a ChatGPT team plan, maybe Claude. Your engineers have access to AI tools. And nothing has changed.

"75% of engineers use AI tools — yet most organizations see no measurable performance gains."

— Faros AI, The AI Productivity Paradox (2025) — 10,000+ developers across 1,255 teams

They're using AI as autocomplete. Faster typing, same thinking. Someone pastes a function into ChatGPT, gets a suggestion back, spends twenty minutes deciding it's not quite right, and rewrites it by hand. Another engineer has Copilot running in VS Code but keeps dismissing the suggestions because they're wrong — because Copilot doesn't have the context to be right.

Meanwhile the tools have moved on. Claude Code can scaffold an entire application, maintain context across a full codebase, run tests, fix what it broke, and iterate — if you know how to direct it. But knowing how to direct it requires a fundamentally different way of working. Not typing faster. Thinking differently about what your job is.

The gap between "we have AI tools" and "our team works differently" is where most organisations are stuck. You can't close that gap by buying more licenses, sending a company-wide email about AI adoption, or adding an online course to the training budget.

AI-First is a workflow shift, not a tooling problem

The tools are not the bottleneck. Your engineers' mental model is. They still see themselves as the ones who write the code. AI is the assistant — a better autocomplete, a fancier search engine, a sometimes-useful rubber duck.

AI-First is different. The engineer directs the AI as the primary method of building software. They provide context, architecture, constraints, and judgment. The AI does the volume work. The engineer's value shifts from typing code to knowing what to build and validating that it's correct.

You cannot buy your way into this shift. You cannot mandate it. And it's not something your tech leads failed to learn — it's that this particular transition doesn't happen through courses, articles, or experimentation in isolation. It's a workflow shift, not a knowledge gap. Your best engineers are exactly the right people to lead it. They just need someone who's already made the transition to work alongside them, show them what it looks like in practice, and get them over the hump so they can carry it forward.

AI-Assisted

  • AI as autocomplete
  • Engineer writes, AI suggests
  • Platform expertise required
  • Marginal speed improvement
  • Same workflow, shinier tools

AI-First

  • AI as primary interface
  • Engineer directs, AI executes
  • Engineering judgment required
  • Fundamentally different output
  • New workflow, new capabilities

Who this is for

You gave your team AI tools and you're not seeing the results. Most organisations started where you did — someone rolled out Copilot or a ChatGPT team plan because it was the obvious first step. It was the right instinct. But tooling without a plan for how the work actually changes is just a subscription. The tools are there. What's missing is a deliberate shift in how your team uses them.

You're an engineering leader who needs clarity. You know AI should be changing how your team delivers, but the signal is buried under vendor hype and mixed results. You need someone who can cut through that, show your team what "good" actually looks like, and give you an honest read on where they are.

Not a coding bootcamp. Not a keynote on AI strategy. Not an AI vendor evaluation. And not about building AI into your products — that's a separate conversation entirely.

What we offer

Four formats, from low-commitment introduction to full hands-on immersion. Each one is a live, interactive session led by an engineer who uses AI-First workflows as his daily working method.

Online Session

2 hours · Remote · €1,500

A live demonstration of AI-First engineering on a real codebase. Not a screencast. Not slides with bullet points about productivity gains. Your team watches an experienced engineer build software the way they'll be working in twelve months — and asks questions about what they're seeing.

This session creates clarity. Your team will understand the difference between what they're doing now and what's possible. It won't make them AI-First engineers. It will show them what that looks like.

AI-First Fundamentals

Full day · On-site · Up to 8 engineers · €5,000 all-in

Morning: your team sees AI-First engineering demonstrated and explained. Not a polished demo — a real engineering task, narrated in real time, showing the decision-making, the prompting, the iteration, the failures, and the recovery.

Afternoon: your team does it themselves. Guided exercises on real code, progressing from heavily scaffolded first steps to more independent work. Everyone leaves with a configured working environment and a concrete list of what to do differently starting Monday morning.

Morning
  • The AI-First thesis — what has changed, where most teams are, where they need to be
  • Live demonstration on a real codebase, fully narrated
  • Discussion — what this means for this team and this delivery model
Afternoon
  • Tooling setup — getting everyone configured and working
  • Guided exercise — first AI-First task with hands-on support
  • Guided exercise — same structure, more autonomy
  • Wrap-up — key takeaways, what to do Monday morning

One day gives your team the fundamentals and a first supervised experience. What one day cannot do is build habits. If your team needs more than a starting push, the 2-day format is the right choice.

Recommended

AI-First Immersion

Two days · On-site · Up to 8 engineers · €8,500 all-in

Day one is the Fundamentals session. Day two is where the shift sticks.

The morning is extended hands-on practice — independent work on prepared cases with coaching, then a real task from your team's stack tackled AI-First. Your engineers build confidence through repetition, not theory.

The afternoon is the Curveball Challenge.

The Curveball Challenge

A time-boxed simulation of the kind of requirement changes that derail real projects. Your engineers start with a working application on a stack they know well. Then the curveballs start landing — the kind that would normally mean "find a specialist" or "slip the timeline."

A complete technology shift: the .NET application is now a Python FastAPI service. An integration swap: the client dropped their product feed and moved to a new PIM system. A vendor mandate: swap out Google Analytics for Salesforce Marketing Cloud. Each curveball is drawn from the kind of real-world delivery chaos that engineering teams face regularly.

The exercise is designed to be achievable. Some engineers will clear the first hurdle quickly and move on to the next. Others will take longer — and that's fine. The point is not to test your engineers. The point is to prove — through direct experience — that when you work AI-First, these curveballs stop being the blockers everyone assumes they are.

This is the story your team will tell their colleagues on Monday morning. And it's the story you'll tell your board when they ask what changed.

Bespoke Engagements

Your codebase. Your backlog. Your problems.

The Fundamentals and Immersion formats work with prepared codebases. Bespoke engagements go deeper — we work on your actual code, your actual backlog, your actual delivery pipeline.

This is the fully embedded format. We connect to your project management tools, pull real tickets from your JIRA or Azure DevOps, and tackle genuine engineering problems AI-First — with your team, on their machines, in their environment. The learning happens on work that would have needed doing anyway.

When bespoke makes sense

  • Specific technology focus. Your team needs to migrate Sitecore XP codebases to XM Cloud. Or modernise a legacy .NET monolith. Or adopt a new framework. The enablement is built entirely around that objective.
  • Real delivery integration. We work from your backlog, solve real problems, and your team sees AI-First applied to the work they're accountable for — not a training exercise they'll forget by Thursday.
  • Larger teams. Multiple squads, parallel sessions, staggered schedules. The standard formats cap at 8 engineers — bespoke scales to the organisation.
  • Extended embedding. A week or more working alongside your team. The longer the engagement, the deeper the habit formation and the more your team's own delivery processes adapt around the new workflow.

Bespoke engagements are scoped through a discovery conversation. Duration, focus, and pricing depend on what you need.

Discuss your requirements →

Pricing

Format · Price
Online Session (2 hours, remote) · €1,500
AI-First Fundamentals (1-day on-site, up to 8 engineers) · €5,000
AI-First Immersion (2-day on-site, up to 8 engineers) · €8,500
Bespoke (scoped per engagement) · Let's talk

One number. No surprises. All on-site prices are all-in — travel, accommodation, and preparation included. No expense reports, no hourly billing, no invoice that's larger than the quote.

On-site pricing applies to locations within the EU and the UK. For other locations, contact us for a tailored quote. All prices exclude VAT.

What you provide

  • Laptops with administrative rights for all participants
  • Claude Pro or Claude Team licenses for all participants — these sessions are built around Claude Code. We'll advise on the right plan during the discovery call.
  • Dedicated time. Participants freed from standups, meetings, and delivery obligations for the full duration
  • At least one engineering leader in the room — not to learn the tooling, but to see the implications for how the team works, estimates, and delivers

About the codebase

We maintain a selection of prepared codebases across common technology stacks. During the discovery call, we agree on one that's closest to what your team works with day-to-day.

If your organisation owns its own code and can clear it for use with AI tooling — no client NDAs, no IP restrictions — working on your actual codebase is even better. But in practice, most agencies and consultancies work under client agreements that make this difficult. The prepared cases are designed to be realistic enough that the learning transfers directly.

Why this works

Daily practice, not theory

The person leading the session uses AI-First engineering as his actual working method — building applications, migrating platforms, shipping production code. The demonstrations are not rehearsed. The workflow is real.

Real code, not toy examples

The exercises happen on realistic codebases matched to your team's technology stack. The code is real, the problems are real, and the learning transfers directly.

Built on Claude Code

These sessions use Claude Code — the tool we use every day for real engineering work. We teach what we know, not what we've read about.

The Curveball Challenge

No other AI workshop does this. Stack shifts, integration swaps, vendor mandates — the kind of real-world delivery chaos that used to require specialists and weeks of ramp-up, tackled AI-First in a single afternoon.

Honest about limitations

One day is a starting push, not a transformation. Two days is where habits begin to form. We're direct about what each format can achieve because overpromising is how training budgets get wasted.

Your team has the tools.
They just need someone to show them what's possible.

Book a discovery call →