Yesterday I gathered our entire tech team for a conversation I'd been putting off. I kept delaying because the ground wouldn't stop moving: I was still rewriting slides the night before as new announcements dropped.
What pushed me was a set of numbers I couldn't ignore anymore. 46% of code on GitHub is no longer written by humans. Claude Code alone accounts for 4% of all public commits, after growing 42,000x in 13 months. Anthropic's Head of Claude Code casually mentioned shipping 22 PRs in a single day, all of them AI-generated. These aren't predictions. They're a normal workday in 2026.
The adoption numbers look great on paper. 80% of organizations use generative AI somewhere. 84% of developers use or plan to use AI tools. Half of them daily. But only 6% of organizations manage to extract real business value from these tools.
That 6% isn't because the tools are bad. I've seen what they can do; we all have. It's because of how we use them. Everyone on our team had their own setup: some swore by Copilot, others by Claude, others toggled between Cursor and ChatGPT depending on the task. No shared metrics. No common language for what "good" looks like. No way to tell whether we were getting better or just going faster in the wrong direction.
I put together a slide of CEO predictions from the last three years. Back in July 2023, Emad Mostaque said there'd be no programmers within five years. Matt Garman from AWS predicted most developers wouldn't be coding within 24 months. Dario Amodei suggested we're 6-12 months from AI doing most of what software engineers do. Ryan Dahl declared the era of humans writing code is over.
The deadlines passed. The developers are still here. There are actually more of us now than in 2023. And every one of these companies kept hiring engineers the whole time.
What the data supports is augmentation, not extinction. Charity Majors put it simply: writing code is the easiest part.
This is what worried me enough to call the meeting. A Veracode study from early 2026 tested LLMs on 80 coding tasks across four languages and four vulnerability types. The syntax pass rate has climbed to near-perfect for recent models. The security pass rate? Flat. Around 50%.
Cross-site scripting: 15% pass rate. Log injection: 13%. Meanwhile, Gartner predicts a 2500% increase in software defects by 2028.
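To make one of those numbers concrete: log injection is what happens when raw user input lands in a log line, letting an attacker forge entries your audit tooling will trust. Here's a minimal Python sketch of the miss and the fix; the function names are mine, for illustration only.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Vulnerable: raw user input goes straight into the log line. A username
# like "alice\n2026-02-01 INFO login ok for user: admin" forges a fake
# log record that audit tooling will happily ingest.
def log_login_unsafe(username: str) -> None:
    logger.info("login attempt for user: %s", username)

# Safer: escape control characters before logging, so one log call can
# only ever produce one log line.
def log_login_safe(username: str) -> None:
    sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
    logger.info("login attempt for user: %s", sanitized)

log_login_safe("alice\nFAKE ENTRY: admin login ok")
```

The fix is two lines. That's exactly why the flat pass rate stings: the models aren't failing hard problems, they're skipping cheap habits.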
The prompt is a security surface now. And you can't fix that by having the same AI agent review its own homework.
I laid out four things I think we need. Not a mandate, a starting framework we'd fill in together.
Redefining roles. Developers become full-stack problem solvers and AI orchestrators. The analyst, developer, and tester roles are converging. Tech leads become the people who design the workflows between humans and AI, and who structure documentation and context so models can actually use them. The most valuable thing you can invest in isn't learning new tools. It's going deeper on your domain. AI commoditizes writing code. It can't commoditize knowing the problem.
Governance. I used an F1 analogy in the talk: when you increase engine power, the first thing you upgrade is the brakes. We need human review protocols, quality gates at each phase, and metrics that track value (cycle time, defects, test coverage) instead of lines of code.
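To make the quality-gate idea concrete, here's a minimal sketch: a CI step that fails the build on value metrics instead of volume. The file names, thresholds, and defects.json format are assumptions for illustration, not our actual pipeline; coverage.json is the report coverage.py emits with `coverage json`.

```python
import json
import sys

MIN_COVERAGE = 80.0    # percent; the threshold here is illustrative
MAX_NEW_DEFECTS = 0    # no new static-analysis findings allowed on the diff

def main() -> int:
    # Total coverage from coverage.py's JSON report.
    with open("coverage.json") as f:
        coverage = json.load(f)["totals"]["percent_covered"]
    # Hypothetical export from whatever analyzer runs on the diff.
    with open("defects.json") as f:
        new_defects = len(json.load(f)["new_findings"])

    failures = []
    if coverage < MIN_COVERAGE:
        failures.append(f"coverage {coverage:.1f}% is below {MIN_COVERAGE}%")
    if new_defects > MAX_NEW_DEFECTS:
        failures.append(f"{new_defects} new defect(s) introduced")

    if failures:
        print("quality gate FAILED: " + "; ".join(failures))
        return 1
    print("quality gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Note what's absent: nothing in the gate counts lines of code or commits. More output from the engine only matters if it clears the brakes.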
AI native security. Security constraints go into the prompt, not just the CI pipeline. No AI code hits production without human review. Full stop.
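What "security constraints in the prompt" could look like in practice: a preamble of non-negotiable rules wrapped around every code-generation request before it reaches the model. A minimal sketch, assuming the team routes generation through one shared helper; the rule list is illustrative, not a complete policy.

```python
# Illustrative preamble prepended to every code-generation request.
SECURITY_PREAMBLE = (
    "Non-negotiable constraints for any code you generate:\n"
    "- Use parameterized queries; never build SQL by string concatenation.\n"
    "- Encode all user-controlled output rendered in HTML (prevents XSS).\n"
    "- Sanitize user input before writing it to logs (prevents log injection).\n"
    "- Never hardcode secrets; read them from the environment.\n"
)

def build_prompt(task: str) -> str:
    """Wrap a coding task with the security preamble before it reaches
    whatever model client the team standardizes on."""
    return f"{SECURITY_PREAMBLE}\nTask: {task}"

print(build_prompt("Write an endpoint that stores a user comment."))
```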
Clear policies. What data can go into a prompt. What never leaves our perimeter. Who owns the output (a human, always). How we communicate AI use to clients. Only 1 in 5 companies has mature governance for AI agents right now. We want to be in that fifth.
We're not starting from zero. The experiments, the workarounds, the failed prompts of the past few months: all of that was learning. Now we turn it into a system.
I keep coming back to this: whatever the CEOs say on earnings calls, the developer role isn't vanishing. It's changing shape. The people who'll thrive aren't necessarily the ones writing the cleverest prompts. They're the ones who understand the problem well enough to make the AI's output actually worth something.
Gianluca Nastasi
Co-Founder & CTO at Hyntelo