Applied right, AI is helping teams work faster, better, and more confidently. And while nobody can quantify the productivity gain exactly – depending on who you ask it might be 10% or 10x – the exact number doesn't matter much. Even if it's only 10%, everyone has to ask themselves: how can I realize that gain for myself and my team? Certainly everyone with budget responsibility is rightfully asking that question.
This shift creates peer pressure as well. Once some teams move faster with AI, everyone else has to catch up or risk falling behind. An increased level of output for the same input becomes the new baseline, not the exception. That brings its own tension. Not everybody is on board the AI hype train, and many people in the tech industry have concerns about code quality or about the environmental, ethical, and geopolitical aspects of AI. Those are legitimate concerns that shouldn't be ignored, but letting them drive a decision to opt out of this new capability is asking a lot from any decision maker when the competition is moving ahead, happily increasing its output, and threatening to pull away.
Successfully leveraging AI in software engineering, in particular for mid-sized to large teams and organizations, is not trivial, though. It's not as simple as getting everyone a Claude account and expecting velocity to double overnight. It's a foundational shift in how a team operates. At the same time, there are few established best practices, and the ones that exist change almost daily.
What Gains Are to Be Realized?
While the field continues to develop and change fast, the productivity gains across a range of tasks are real and increasingly well understood. First and most obviously, AI accelerates the rate at which code is produced. What began as smarter autocompletion has quickly evolved into full code and test generation, and even agents that build entire features and submodules, commit to git, and open pull requests automatically. That doesn't just shave seconds off keystrokes; it can save hours of an engineer's time.
Applications of AI don't stop at code generation, though; they extend to code discovery and onboarding, for example. Engineers often spend days or weeks trying to wrap their heads around an unfamiliar codebase. With AI-powered tools, that process can be compressed, because AI can prepare information for easier consumption: system diagrams can be generated on demand, and unfamiliar code, design patterns, or component relationships can be explained and summarized in an instant. Searching a codebase no longer relies on precise keyword matches either – semantic search is changing how large codebases are navigated.
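To make that concrete, here is a minimal sketch of what semantic code search can look like under the hood: source files are embedded as vectors, and a natural-language question is matched against them by similarity rather than by keywords. It assumes the open-source sentence-transformers package and a small general-purpose embedding model; a production setup would use code-aware chunking and a proper vector store rather than a flat in-memory index.

```python
# Minimal sketch of embedding-based (semantic) code search.
# Assumes the sentence-transformers package; chunking is naive (one file = one chunk).
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def index_repo(root: str, glob: str = "**/*.py") -> tuple[list[str], np.ndarray]:
    """Read source files and embed each one as a single chunk."""
    chunks = [p.read_text(errors="ignore") for p in Path(root).glob(glob)]
    vectors = model.encode(chunks, normalize_embeddings=True)
    return chunks, vectors

def search(query: str, chunks: list[str], vectors: np.ndarray, top_k: int = 3) -> list[str]:
    """Return the chunks most similar to a natural-language question."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity, since vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:top_k]]

# chunks, vectors = index_repo("path/to/repo")
# print(search("where do we retry failed payment webhooks?", chunks, vectors)[0])
```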
AI also speeds up the work that happens before a single line of code is written. Whether it’s drafting tasks, feature specs, or architecture proposals, AI can help teams move faster by generating solid first drafts. From there, AI tools can review and refine those drafts, reducing the time teams need to spend in meetings discussing every detail—because the groundwork is already in place.
In QA, debugging and maintenance, AI is proving valuable as well. Many teams report strong results from AI-powered pull request review agents, which often catch issues early and cheaply. When systems misbehave in production, AI tools can assist in analyzing logs, correlating symptoms, and narrowing down possible causes. What might have taken a human hours of digging can now be accelerated with a well-constructed prompt. None of these tools replace expertise, but they accelerate a wide range of tasks significantly.
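As an illustration, such a log-triage helper can be little more than a thin wrapper around a model API with a carefully constructed prompt. The sketch below uses the Anthropic Python SDK; the model name and prompt wording are placeholders to adapt to your own stack and incident process.

```python
# Minimal sketch of prompt-driven log triage, assuming the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def triage(alert: str, log_excerpt: str) -> str:
    """Ask the model for ranked root-cause hypotheses, citing supporting log lines."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whichever model your team uses
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "You are helping debug a production incident.\n"
                f"Alert: {alert}\n\n"
                f"Relevant log excerpt:\n{log_excerpt}\n\n"
                "List the most likely root causes, ranked, cite the log lines that "
                "support each hypothesis, and say what to check next."
            ),
        }],
    )
    return response.content[0].text
```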
Risks and Challenges
But it's not all fun and games. AI will make mistakes – often with unsettling confidence. These range from inventing non-existent packages to producing inefficient or flat-out incorrect implementations to generating code that lacks architectural coherence. This becomes especially problematic for large systems and teams, where consistency and clarity are critical. Full-on vibe-coding might be fun in a solo weekend project, but it is not a reasonable practice for larger teams that need to coordinate their work, share knowledge, and ensure consistency. The last thing you want is a model stitching together random patterns and practices it's seen across the internet and injecting them into your codebase. The result is neither reliable nor maintainable.
Avoiding that outcome requires strong guardrails. Providing AI the tools, context, and direction it needs to be effective is the essential first step. That calls for new skills and techniques like context engineering, which must be established across the entire organization and requires new infrastructure and processes. Human oversight is equally critical. Developers need to review AI output with the same scrutiny they apply to human-written code, or more. They also need to know where they can trust the AI and where to double-check extra rigorously. Without this human-in-the-loop approach, teams really do risk ending up with exactly what the skeptics predict: an unmaintainable pile of garbage and, eventually, no way out.
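In practice, context engineering can start as simply as making sure every request to the model carries the team's conventions and the relevant source files. The sketch below illustrates that idea; the file paths and prompt structure are hypothetical and would differ from one organization to the next.

```python
# Minimal sketch of assembling project context for a coding request.
# The conventions file path and prompt layout are illustrative placeholders.
from pathlib import Path

def build_context(task: str, relevant_files: list[str]) -> str:
    """Bundle team conventions, relevant sources, and the task into one prompt."""
    conventions = Path("docs/engineering-conventions.md").read_text()
    sources = "\n\n".join(
        f"### {name}\n{Path(name).read_text()}" for name in relevant_files
    )
    return (
        "You are working in our codebase. Follow these conventions strictly:\n"
        f"{conventions}\n\n"
        f"Relevant files:\n{sources}\n\n"
        f"Task: {task}"
    )

# prompt = build_context("add retry logic to the webhook handler",
#                        ["billing/webhooks.py", "billing/retries.py"])
```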
Another serious challenge is intellectual property and data security. AI models don't clearly distinguish between inspiration and duplication. They can output snippets that violate licenses or reproduce copyrighted material. At the same time, teams risk leaking their own intellectual property to AI providers, training models that might eventually reproduce the same code elsewhere. That could mean giving away competitive secrets or exposing sensitive user data. It's essential to have systems in place to sanitize prompts, guardrails that prevent sensitive data from leaving secure environments, and the ability to trace and audit AI outputs.
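A first building block for such guardrails is a prompt sanitizer that redacts obvious secrets before anything leaves the secure environment. The sketch below is deliberately simple and its patterns are illustrative, not exhaustive; real setups combine secret scanning, PII detection, allow-lists, and audit logging.

```python
# Minimal sketch of prompt sanitization before text is sent to an AI provider.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "BEARER_TOKEN": re.compile(r"(?i)bearer\s+[a-z0-9._~+/=-]{20,}"),
    "PRIVATE_KEY": re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"
    ),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive spans with placeholders and report what was redacted."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt, findings

clean, findings = sanitize_prompt(
    "Debug this: token=Bearer abc123def456ghi789jkl0, reported by jane@example.com"
)
print(clean)     # sensitive values replaced with placeholders
print(findings)  # ['EMAIL', 'BEARER_TOKEN'], kept for audit logs
```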
Then there’s the challenge of tooling complexity and fragmentation. The ecosystem is moving fast. There are multiple competing approaches to everything from code generation to test automation. Teams trying to integrate too many tools at once find themselves in a mess of conflicting workflows and overlapping features. Some developers use one tool, others use another—making it hard to share knowledge, align on practices, or build shared infrastructure. The result is low consistency and increased complexity.
Finally, if code production is in fact accelerated successfully through AI, the engineering infrastructure must be able to keep up with the changes coming in. More code being written and merged faster only helps if the delivery pipeline is ready for it and can in fact ship all that code to production systems efficiently and reliably. If deployments are slow (or maybe even still manual 😱) or if testing isn't stable and comprehensive, accelerated code production doesn't translate to accelerated value creation – in fact, the opposite: larger, more complex releases that are riskier and result in more production bugs and rework. Velocity drops instead of accelerating.
All of that shows that AI only increases the importance of what has always distinguished great engineering teams from the rest: a strong engineering culture and the infrastructure and tooling that engineers thrive on. The companies that will struggle with adopting AI will be the same ones that have struggled with quality, velocity, and consistency before AI. AI just increases the pressure – if practices are weak, systems brittle, and reviews inconsistent, AI will expose and amplify that, pushing teams that are already at a competitive disadvantage even further behind.
Guiding Teams Through Adopting AI
As an engineering consultancy that's (seemingly) paid to write code for clients, we've thought quite a bit about what the rise of AI-accelerated engineering means for us – and not without worrying about the future. Yet over time we realized more and more that our work has never been just about writing code: it's been about shaping the systems and processes that reliably and efficiently turn ideas into running software. We have spent years helping clients build strong engineering cultures and the infrastructure needed to support them. As explained above, that work is more important than ever, now with an added dimension: enabling teams to harness AI in ways that genuinely accelerate their ability to deliver value – sustainably, and at scale.
We are prepared to guide clients through adopting AI in their engineering organizations—so they can accelerate value creation in a sustainable, practical way:
- First, we assess the status quo, covering team size and composition, experience level, tooling and infrastructure, development practices, testing and release processes, observability, etc. The goal is to understand the organization’s maturity and identify blockers that might get in the way of accelerated value delivery.
- Together with our clients, we define what success looks like. For some teams, that might mean adopting agentic coding to dramatically increase feature velocity. Others may choose a more measured approach, starting with AI-assisted workflows that keep humans in the driver's seat. Alongside this, we establish tracking for key metrics – such as velocity, lead time for changes, and change failure rate – so progress is visible, measurable, and grounded in real outcomes (see the sketch after this list for one such metric).
- If necessary, we remove any delivery impediments our clients might face, e.g. by fixing broken automation, infrastructure, and observability, or by adding what's missing. As noted earlier: any pain caused by a weak delivery pipeline will only get worse once AI increases the volume and speed of code production.
- Once the necessary foundation is in place, we roll out tools incrementally along with the necessary guardrails, supporting resources, and mentoring of engineers. Our team will work with our clients' teams as we've done for many years, introducing them to the new ways of working as teammates.
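As an example of the metrics mentioned above, lead time for changes can be tracked with nothing more than commit and deployment timestamps. The sketch below uses made-up sample data; in practice, the timestamps would come from the version control system and the deployment pipeline.

```python
# Minimal sketch of tracking one delivery metric: lead time for changes,
# i.e. the time from commit to that commit running in production.
# The sample data is made up for illustration.
from datetime import datetime
from statistics import median

changes = [
    # (commit timestamp, production deploy timestamp)
    (datetime(2024, 5, 6, 9, 30), datetime(2024, 5, 6, 15, 10)),
    (datetime(2024, 5, 6, 11, 0), datetime(2024, 5, 7, 10, 45)),
    (datetime(2024, 5, 7, 14, 20), datetime(2024, 5, 8, 9, 5)),
]

lead_times_hours = [
    (deployed - committed).total_seconds() / 3600 for committed, deployed in changes
]

print(f"median lead time for changes: {median(lead_times_hours):.1f} h")
```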
Adopting AI to accelerate a software engineering team’s output isn’t just about rolling out another tool—it’s a fundamental shift in how teams work. AI won’t replace developers. But teams that learn to use it effectively will outpace those that don’t. We’re here to guide you through that transition as teammates, so you don’t have to navigate it alone.
