Here’s a growth curve that would make most SaaS founders weep: Cursor, the AI-powered code editor built by startup Anysphere, hit $2 billion in annualized recurring revenue in February 2026. That’s double what it was three months earlier. Bloomberg broke the news on March 2, and the company is already in discussions for a new funding round at a $50 billion valuation.
But the revenue number isn’t even the real story. Three days after Bloomberg’s report, Cursor launched a feature called Automations — and it signals something much bigger than a single product update.
What Automations actually does
Until now, AI coding tools worked like a very smart assistant. You asked a question, it helped. You prompted, it generated. The human was always in the driver’s seat, always typing the next instruction.
Automations flips that model. Instead of waiting for a developer to prompt it, Cursor’s agents now trigger automatically based on events: a new GitHub pull request, a Slack message, a PagerDuty incident, a Linear issue, or a simple timer. No human prompt required.
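The trigger-fires-agent-runs model described above is essentially an event dispatcher. Here is a minimal sketch of that idea; the event names, payload shape, and handler registration are invented for illustration and are not Cursor's actual API:

```python
# Hypothetical event-to-agent dispatch, illustrating the
# "trigger fires, agent runs, no human prompt" model.
# Event names and handler signatures are invented.

from typing import Callable

HANDLERS: dict[str, list[Callable[[dict], None]]] = {}

def on(event_type: str):
    """Register an agent task to run whenever this event fires."""
    def register(fn):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

def dispatch(event_type: str, payload: dict) -> int:
    """Fire every agent registered for this event; return how many ran."""
    handlers = HANDLERS.get(event_type, [])
    for fn in handlers:
        fn(payload)
    return len(handlers)

@on("github.pull_request.opened")
def security_review(payload: dict) -> None:
    # A real agent would audit the diff; here we just log the intent.
    print(f"auditing PR #{payload['number']} for vulnerabilities")
```

The key shift is that `dispatch` is called by infrastructure (a webhook, a timer, an incident system), not by a developer typing a prompt.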
Here’s what that looks like in practice:
- Security review: Every new pull request gets automatically audited for vulnerabilities. The agent skips issues already discussed in the PR, and only posts high-risk findings to Slack for human attention.
- Incident response: When PagerDuty fires, an agent investigates logs, examines recent code changes, and sends the on-call engineer both a diagnosis and a proposed fix — as a ready-to-review pull request.
- Risk classification: PRs get assessed by blast radius and complexity. Low-risk changes get auto-approved. Complex ones get routed to the right human reviewer.
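The risk-classification workflow in the last bullet can be sketched as a simple scoring-and-routing function. Everything here is a hypothetical illustration of the idea, assuming a crude "blast radius times complexity" heuristic; the fields, thresholds, and route names are not from Cursor:

```python
# Illustrative sketch of risk-based PR routing: score a change
# by size and blast radius, then auto-approve or escalate.
# All fields and thresholds are invented for this example.

from dataclasses import dataclass

@dataclass
class PullRequest:
    files_changed: int
    lines_changed: int
    touches_core: bool  # does the diff touch core/shared modules?

def risk_score(pr: PullRequest) -> int:
    """Crude blast-radius-times-complexity score."""
    score = pr.files_changed + pr.lines_changed // 50
    if pr.touches_core:
        score *= 3
    return score

def route(pr: PullRequest) -> str:
    """Auto-approve low-risk changes; escalate the rest."""
    score = risk_score(pr)
    if score < 5:
        return "auto-approve"
    elif score < 20:
        return "standard-review"
    return "senior-review"
```

In a real deployment the scoring step is where the agent earns its keep: a model can weigh semantics (is this a config tweak or a schema migration?) rather than raw line counts.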
Cursor reports hundreds of automations running per hour across its user base. Its automated code review bot, Bugbot, now reviews more than 2 million pull requests per month across enterprise customers.
The thesis statement from Cursor’s own blog puts it bluntly: “The best software engineering teams of 2027 will not be the ones with the best programmers. They will be the ones with the best agents, and the best humans managing them.”
The enterprise money is real
What makes Cursor’s growth trajectory unusual isn’t just the top-line number — it’s where the money is coming from. In late 2024, when Cursor was at $400M ARR, corporate buyers represented about 25% of revenue. By November 2025 ($1B ARR), that had grown to 45%. Today, at $2B ARR, enterprise accounts for roughly 60% of revenue.
Named customers include Rippling, Stripe, Discord, Airtable, Brex, and Samsara. Rippling has been the most aggressive adopter: their engineers built automations that aggregate meeting notes, Slack threads, and GitHub activity into deduplicated dashboards — plus automated on-call handoffs and incident triage.
The typical enterprise deployment pattern: companies ran 3–6 month pilots through mid-2025, then signed organization-wide contracts in Q4, locking in 500 to 5,000+ seats at $40/month per developer.
The trust paradox
Here’s where it gets interesting. While 84% of developers now use AI tools in their workflows, trust in AI accuracy has actually fallen from 40% to 29% year over year, according to Stack Overflow’s 2025 survey. More developers actively distrust AI output (46%) than trust it (33%).
The number one frustration, cited by 45% of respondents: “AI solutions that are almost right, but not quite.” And 66% of developers say they spend significant time fixing “almost-right” AI-generated code.
And now Cursor is asking developers to let these agents run autonomously in the background, triggered by events, verifying their own work?
That’s a significant leap of faith. Cursor addresses it by designing agents that produce diffs and comments for human review rather than merging code directly, and by building in self-verification — agents run tests before alerting humans. But the trust gap is real, and it’s widening, not closing.
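The self-verification design described above (run the tests, only then surface the fix to a human) can be captured in a short loop. This is a minimal sketch of the pattern, not any documented Cursor interface; the callbacks and attempt limit are assumptions:

```python
# Minimal sketch of agent self-verification: propose a fix,
# run the test suite, and only alert a human once tests pass
# (or after the retry budget is exhausted).
# apply_fix, run_tests, and alert_human are caller-supplied
# callbacks; nothing here is a real Cursor API.

from typing import Callable

def verify_then_alert(
    apply_fix: Callable[[int], None],
    run_tests: Callable[[], bool],
    alert_human: Callable[[str], None],
    max_attempts: int = 3,
) -> bool:
    """Try up to max_attempts fixes; notify a human only with a verified result."""
    for attempt in range(1, max_attempts + 1):
        apply_fix(attempt)
        if run_tests():
            alert_human(f"fix passed tests on attempt {attempt}")
            return True
    alert_human("no passing fix; manual triage needed")
    return False
```

The design choice worth noting is that the human is the last step, not the first: by the time an engineer sees anything, the agent has already spent its retry budget producing evidence (a passing test run) rather than just a plausible-looking diff.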
What this means for your career
Whether you’re bullish or skeptical on always-on agents, the direction is clear: the value of a developer is shifting from writing code to managing the systems that write code. Cursor CEO Michael Truell believes that within 5–10 years, “a completely new, higher-level, and more efficient way of building software will emerge.”
The skill set that matters isn’t going away — it’s evolving. You still need to understand systems, architecture, and what good code looks like. But increasingly, you need those skills so you can evaluate and direct agents, not so you can type every line yourself.
This is also why Truell has warned that “vibe coding” — the casual, prompt-and-pray approach — builds “shaky foundations” that “start to crumble.” The people who thrive in an agent-driven world will be the ones who understand the fundamentals deeply enough to spot when agents get it wrong.
What AI Uni teaches about this
AI Uni’s AI Software Development major teaches exactly this progression: from understanding core programming concepts to architecting AI-powered workflows and evaluating agent output. Courses like AI Fluency and Workflow Architecture prepare students to manage and direct AI tools — not just prompt them, but build real systems with them.
Try 2 Free Sessions