The Pragmatic Engineer, one of the most respected software engineering newsletters in the industry, published its 2026 AI tooling survey on March 7. The results paint a picture of an industry that has fully embraced AI tools — while simultaneously growing less confident in them.
The headline numbers: 95% of the 906 engineers surveyed use AI tools at least weekly. 75% use AI for at least half their work. And 56% report using AI for 70% or more of their engineering tasks.
This isn’t a survey of curious early adopters. The median respondent has 11–15 years of experience. These are senior engineers and engineering leaders, mostly in Europe and the US, and they’ve made AI a non-negotiable part of their daily workflow.
The tools they actually use
The biggest surprise: Claude Code is now the #1 most-used AI coding tool, overtaking both GitHub Copilot and Cursor. It launched in May 2025. Eight months later, it’s on top. As newsletter author Gergely Orosz noted, it’s now “nearly as widespread as GitHub Copilot was in the spring 2023 survey” — showing just how fast the AI tools landscape moves.
The satisfaction gap is even more dramatic. When asked which tool they “most love,” 46% said Claude Code. Cursor came in at 19%. GitHub Copilot, despite being the most entrenched enterprise option, scored just 9%.
Most engineers don’t pick just one tool. 70% use 2–4 AI tools simultaneously, and 15% use five or more. The average experienced developer juggles 2.3 AI coding tools.
Company size determines tool choice more than anything else. At startups, 75% use Claude Code and 42% use Cursor. At companies with 10,000+ employees, 56% default to GitHub Copilot — largely because Microsoft enterprise contracts make it the path of least resistance.
The trust paradox
Here’s the tension: while AI adoption is approaching 100%, trust is heading in the opposite direction.
Stack Overflow’s 2025 developer survey found that trust in AI accuracy dropped from 40% to 29% year over year — an 11-percentage-point decline. 46% of developers actively distrust AI tool accuracy, compared to 33% who trust it. Only 3% say they “highly trust” AI output.
The top frustration, cited by 66% of respondents: AI solutions that are “almost right, but not quite.” And 45% say they spend significant time debugging AI-generated code that looked correct at first glance.
There’s also a productivity perception gap. Developers self-report 25–39% productivity gains from AI tools. But controlled studies, like the METR study, found that experienced developers can actually be slower once review time is accounted for. Developers believed they worked 20% faster even when they were objectively slower. AI may change how productive you feel more than how productive you actually are.
Overall positive sentiment toward AI tools dropped from 72% to 60% in a single year. The industry uses AI more than ever, trusts it less than ever, and can’t stop using it anyway.
The “agent shift” is the real story
The Pragmatic Engineer survey confirms what practitioners already feel: the defining trend of 2025–2026 isn’t just AI adoption — it’s the shift from autocomplete to agents.
55% of respondents now regularly use AI agents, up from near-zero 18 months ago. Staff-level and senior engineers lead adoption at 63.5%, while regular engineers are at 49.7%. And agent users are far more likely to be positive about AI tools than non-users: 61% versus 36%.
The old workflow: you type code, AI autocompletes the next line. The new workflow: you describe what you want, an agent plans an approach, edits multiple files, runs tests, iterates on failures, and presents completed work for your review. The developer’s role shifts from writer to reviewer.
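To make the writer-to-reviewer shift concrete, here is a minimal sketch of the loop an agentic tool runs on your behalf. It is illustrative only: propose_and_apply_edits is a hypothetical stand-in for whichever model backend a given tool uses, and the pytest command is an assumption about the project’s test runner.

```python
import subprocess

def propose_and_apply_edits(task: str) -> None:
    """Hypothetical placeholder for the agent's model backend: plan an approach
    and write multi-file edits to the working tree. Each real tool (Claude Code,
    Cursor agents, etc.) implements this step in its own way."""
    raise NotImplementedError("illustrative placeholder, not a real API")

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture its output.
    Assumes pytest; swap in whatever test runner the project actually uses."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(task: str, max_attempts: int = 3) -> bool:
    """The agentic workflow in miniature: plan and edit, test, iterate on failures,
    then hand the result to a human reviewer."""
    for attempt in range(1, max_attempts + 1):
        propose_and_apply_edits(task)    # model plans and edits multiple files
        passed, output = run_tests()     # agent verifies its own work
        if passed:
            return True                  # done: present the diff for human review
        # feed the failure output back so the next attempt can iterate on it
        task = f"{task}\n\nAttempt {attempt} failed tests:\n{output}"
    return False                         # give up and escalate to the human
```

The part that stays human is the review at the end, which is exactly where the “almost right, but not quite” failures from the trust data tend to surface.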
Autocomplete-style tools like basic Copilot are table stakes now. The differentiation — and the satisfaction — is in agentic workflows: tools that can plan, execute across files, run tests, and iterate. That’s why Claude Code (terminal-native, agent-first) and Cursor (IDE with background agents) are winning developer love, while basic autocomplete tools are losing it.
What this means if you’re entering the field
The data has a sobering implication for new developers. Junior hiring at big tech companies collapsed from 32% of new hires in 2019 to just 7% today. Entry-level tech positions saw a 73% hiring drop in the past year. Companies still post junior roles — postings grew 47% — but they quietly fill them with experienced engineers.
The bar for entry has risen. Employers expect new hires to be “AI-native”: fluent in AI tools, capable of system-level thinking, and able to evaluate agent output. Toy portfolio projects don’t cut it. What matters now: demonstrated ability to build real systems, integrate AI tools into professional workflows, and think at a level above individual lines of code.
The paradox for aspiring developers: fewer junior positions exist, but AI-specific entry-level roles are growing fast for those who can demonstrate the right skills. The doors are narrower, but the ones that are open lead somewhere very valuable.
What AI Uni teaches about this
AI Uni’s curriculum is designed for exactly this moment: building real AI-powered projects from day one, learning to evaluate and direct agent output, and graduating with a portfolio that demonstrates systems-level thinking. The AI Software Development and AI Product & Business majors directly address the skills this survey shows employers demanding.
Try 2 Free Sessions