@igorcosta
Created May 7, 2025 04:28

What would you do if you had a chance to sneak a peek at the year 2030 as a Hubber?

After the last git commit

How we got here

It's 7th of May 2025, and I'm leaving. Before I go, I'd like to share my vision of the future. If you're reading this in 2030, you probably wish you'd read it five years earlier. Consider this a time capsule from a future that arrived faster than expected.

Let’s rewind. It’s hard to believe developers once worked without AI. In domains like embedded systems, mining infrastructure, and medical software, adoption was slower due to regulation and legacy constraints. But everywhere else, across browsers, APIs, simulations, and cloud environments, AI coding agents took over.

Why did AI take over coding so quickly? One word: economics. In 2021, a junior developer in San Francisco earned over USD $260,000 to build basic prototypes. Managing 100 of them cost over $26 million per year. By 2025, an AI-powered team could deliver equivalent output at less than six percent of that cost. Quality improved, fatigue vanished, iteration cycles collapsed. Capital flooded in. AI agents didn’t replace software engineering - they redefined it.

In mid-2021, GitHub Copilot launched in technical preview, powered by OpenAI’s Codex model, a descendant of GPT-3. It was revolutionary. In the years that followed, TabNine, JetBrains AI Assistant, Claude.ai, OpenAI Canvas, and Windsurf crowded the space. Cursor also emerged as a leaner, faster, more opinionated tool, designed for developers frustrated by constant context switching.

By 2024, browser-based tools exploded. Base44, Lovable, Bolt.new, and Canva AI Builder gave non-developers the ability to create without code. Meanwhile, GitHub announced its ambition to empower one billion “developers.” Most thought it meant onboarding new users. In hindsight, it meant unleashing autonomous agents. Not more coders, but expanded cognitive capacity.

GitHub Copilot still led by users: 15 million by 2025, $1.8 billion in revenue. But its product strategy splintered. Multiple product lines such as Workspace, Spark, Business, Enterprise, and Free fragmented the team’s focus. Innovation slowed. Cursor soared. To stall the momentum, GitHub dropped Copilot’s price to zero for targeted segments, prioritising platform control over ARR.

The competitive landscape

This year, three players dominate 78 percent of the market:

  • Cursor: Developer-first, fast, deeply integrated into coding environments. Think opinionated workflows, hot-reload reasoning, and model swap freedom.
  • OpenAI Code: Fully vertical, tightly integrated with cloud infrastructure and enterprise compliance. Default for large regulated organisations.
  • GitHub Copilot: Widely used, heavily adopted in corporate GitHub environments, but seen as slower to adapt. Broad, not sharp.

Low-code platforms like Cline, Lovable, and Base44 serve a vast range of solopreneurs and small teams who prioritise outcomes over syntax.

How AI agents work now

The “code completion” paradigm died five years ago. AI agents now handle 96 percent of engineering automation in Fortune 500 teams. They read repos, manage dependencies, simulate outcomes, test edge cases, and maintain long-running memory threads. Seventy-two percent of all code commits in fast-cycle teams originate from agents.

These agents operate independently. Each has a role: testing, infrastructure patching, observability, compliance. They work in cycles, responding to change in minutes. In regulated sectors, AI-generated compliance layers are considered more accurate than human-written specs.
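To make that division of labour concrete, here is a minimal sketch of role-scoped agents reacting to repository events. The Agent and RepoEvent types and the role names are illustrative assumptions, not any real framework's API.

```python
# Role-scoped agents reacting to repository events (illustrative only).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RepoEvent:
    kind: str        # e.g. "dependency_update", "failing_test", "policy_change"
    payload: dict

@dataclass
class Agent:
    role: str                              # "testing", "infra", "observability", "compliance"
    handles: set[str]                      # event kinds this agent reacts to
    act: Callable[[RepoEvent], str]        # produces a short action summary
    memory: list[str] = field(default_factory=list)   # long-running memory thread

    def on_event(self, event: RepoEvent) -> str | None:
        if event.kind not in self.handles:
            return None
        result = self.act(event)
        self.memory.append(result)
        return result

# Example: a compliance agent that re-validates controls when dependencies change.
compliance = Agent(
    role="compliance",
    handles={"policy_change", "dependency_update"},
    act=lambda e: f"re-validated controls after {e.kind}",
)
print(compliance.on_event(RepoEvent("dependency_update", {"package": "openssl"})))
```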

The modern SDLC has morphed into what we call Adaptive Continuous Engineering (ACE). It’s always-on, event-driven, and learning-based. Pipelines never “start” - they persist. Releases are no longer treated as milestones. They are metadata, embedded in an ongoing stream.

```mermaid
flowchart LR
    A[Intent Captured] --> B[Agent Planning]
    B --> C[Auto-Coding]
    C --> D[Test and Simulate]
    D --> E[Contextual QA]
    E --> F[Live Deploy]
    F --> G[Runtime Feedback Loop]
    G --> H[Agent Memory Update]
    H --> B
```

This is ACE in motion: continuous feedback, continuous deployment, continuous reasoning. It is not CI/CD - it is CI/RL/CM (Continuous Integration, Reinforcement Learning, Context Management).
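As a rough illustration of that loop, not any vendor's actual pipeline, here is a minimal event-driven sketch. Every stage is a stub, and the None sentinel exists only so the demo terminates.

```python
# A minimal sketch of the ACE / CI-RL-CM loop described above; all stage logic is stubbed.
import queue

def run_ace(intents: "queue.Queue[str | None]") -> list[dict]:
    memory: list[dict] = []                   # context management: the "CM" in CI/RL/CM
    while True:                               # the pipeline persists; it never "starts"
        intent = intents.get()                # event-driven: wait for the next captured intent
        if intent is None:                    # sentinel so this demo can end
            return memory
        plan = {"intent": intent, "steps": ["code", "test", "qa", "deploy"]}  # agent planning
        artifact = f"patch for: {intent}"                                     # auto-coding
        tests_passed = True                                                   # test and simulate (stubbed)
        if not tests_passed:
            memory.append({"intent": intent, "outcome": "failed, replanning"})
            continue                                                          # feedback loops into planning
        feedback = {"errors": 0, "latency_ok": True}                          # runtime feedback loop
        memory.append({"plan": plan, "deployed": artifact, "signal": feedback})  # agent memory update

q: "queue.Queue[str | None]" = queue.Queue()
q.put("add retry logic to the payment webhook")
q.put(None)
print(run_ace(q))
```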

The developer’s role

Developers remain critical, but the skillset has evolved. They no longer write most of the code. Instead, they:

  • Direct intent
  • Tune agent objectives
  • Interpret emergent behaviour
  • Manage security boundaries
  • Coordinate exception handling

In practice, developers act as cognitive directors. They command six to ten AI agents per day. They no longer review code line by line. They evaluate agent-generated decisions and ensure alignment with intent.
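One way to picture "directing intent" is as data rather than code. The IntentSpec shape below is a hypothetical sketch, not an existing tool's schema.

```python
# A hypothetical intent spec: the developer states the goal, tunable objectives,
# hard security boundaries, and the exceptions that must come back to a human.
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    goal: str                                                      # what, not how
    objectives: dict[str, float] = field(default_factory=dict)     # tunable agent objectives
    security_boundaries: list[str] = field(default_factory=list)   # limits agents must respect
    escalate_when: list[str] = field(default_factory=list)         # exceptions routed to humans

checkout_latency = IntentSpec(
    goal="Reduce p95 checkout latency below 300ms",
    objectives={"latency_weight": 0.7, "cost_weight": 0.3},
    security_boundaries=["no changes to payment-service auth", "no new external dependencies"],
    escalate_when=["schema migration required", "projected cost increase above 10%"],
)
print(checkout_latency.goal)
```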

GitHub’s 2029 telemetry showed 94 percent of enterprise commits originated in agent-authored pull requests. Average latency from intent to deploy dropped to 23 minutes. Engineers now spend just 12 percent of their time editing code and 88 percent managing flow, risk, and goals.

What we learned

AI didn’t eliminate complexity. It shifted it: from syntax to semantics, from files to flows, from control to coordination.

Trust didn’t disappear - it shifted. The new debt isn’t tech debt. It’s trust drift, prompt rot, policy gaps, agent misalignment.

The question isn’t “Can we ship?” It’s “Should this be shipped - and why did the agent decide that way?”

Reclaiming momentum: the Copilot pivot

By 2026, GitHub made a strategic pivot. Copilot evolved from assistant to platform. It integrated into VSCode, Edge DevTools, GitHub Projects, and GitHub Actions. GitHub dropped its internal Model Capability Program, embracing community-led governance and benchmarking.

The shift wasn’t immediate. Internally, it sparked tension between scale and focus, and between experimentation and stability. But leadership held the line, betting that enterprise trust would matter more than being first.

Key innovations include:

  • Copilot Control Plane: Enterprise boundary setting and policy tuning (a hedged configuration sketch follows this list)
  • Intent-Aware Reviews: Justify, not just generate
  • Agent Summons: Invite agents into threads, like teammates
  • Copilot Open Models: 8B, 32B, and 270B models released in 2028 to spark open innovation
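As a purely speculative illustration of what Control Plane-style boundary setting and intent-aware review policy could look like as configuration, here is a sketch. Every key and model name is invented; none of this is a real Copilot API.

```python
# Entirely hypothetical configuration; invented keys and model names, not a real Copilot API.
control_plane_policy = {
    "org": "example-corp",
    "agents": {
        "allowed_roles": ["testing", "docs", "dependency-patching"],
        "blocked_paths": ["infra/prod/**", "secrets/**"],
        "max_autonomous_prs_per_day": 20,
    },
    "reviews": {
        "require_intent_justification": True,    # "justify, not just generate"
        "human_approval_above_risk": 0.7,        # route high-risk changes to a person
    },
    "models": {
        "default": "copilot-open-32b",           # hypothetical open-model name
        "fallback": "copilot-open-8b",
    },
}
```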

These changes didn't just reclaim market momentum - they redefined how teams operate. GitHub stopped chasing indie hackers and speculative novelty. It doubled down on what large organisations needed: control, scale, predictability. Copilot transformed from a feature into foundational infrastructure - embedded deep in the workflows of the world's most complex systems. Invisible. Boring. Essential. And quietly, the industry started building on top of it.

The 2030 marketplace

Before we look ahead, we need to understand what customers demand today; the bar is higher than it was in 2025. AI tooling is no longer about productivity - it’s about dependability and control. The marketplace has matured. Buyers are focused on scaling engineering systems, not experiments.

They want:

  • Agent governance: Clear ownership, explainability, and policy compliance
  • Intent traceability: Agent output linked directly to planning and prioritisation
  • Latency resilience: Fast, predictable execution with built-in redundancy
  • Multi-agent alignment: Consistency across roles, models, and layered AI stacks

Customers are no longer buying standalone tools. They’re assembling interoperable ecosystems.

What’s next

Looking toward 2035, the horizon is shifting again. The next competitive edge won’t come from better completions - but from better coordination.

  • Open ecosystems will win by integrating into existing decisions, not rewriting them
  • Smaller, faster models will power more secure and localised workflows
  • Auditable agents will be essential for regulated and mission-critical applications
  • Hybrid orchestration will become standard - combining open source agents, vendor logic, and internal copilots

The goal isn’t just more automation - it’s alignment. Systems that adapt, respond, and justify their choices in context.

The real story of the 2030s may not be about Copilot at all. It may be about what comes next - coordination without configuration.

The next challenge isn’t just better automation - it’s making agents trustworthy, accountable, and adaptive.

The future isn’t about replacing developers - it’s about enabling them to scale decisions, not just output.

The 2030s may not belong to any one platform. They may belong to whatever ecosystem best aligns humans and machines at scale.

The biggest opportunity ahead? Make agents feel like teammates.

One final note

This isn’t a prediction. It’s a reflection. The shift already happened. If you’re still managing backlogs manually or assigning dev tasks line-by-line, you’re five years behind.

But here’s the good news: the next shift is happening now. And this time, you’re early.

@igorcosta (Author)

Update 8th May 2025

  • Embeddings will reshape agent latency

Thinking out loud: Based on what I’ve written, I believe GitHub’s next major infrastructure step, following Git LFS, will be native support for code embeddings. By 2026 or 2027, GitHub will likely index code embeddings natively. This will allow agents and developers to query repositories semantically, not just syntactically. Every integration could benefit from this foundation, and there’s a real opportunity to build a business around it.

A key bottleneck in agent autonomy is the high-latency, high-bandwidth nature of cloud-based context exchange. Today, most LLM-driven agents send prompts, source files, and system state to remote APIs, resulting in round trips of 150ms or more, especially for large or distributed repositories.

The natural evolution is embedding code directly at the source. In this model, every commit is converted into a vector embedding and stored alongside the repo. When a developer clones a repository, they also clone its embeddings. This allows local-first agent reasoning, with access latencies reduced to 2–5ms.

Compared with those 150ms round trips, that is roughly a 30 to 75x reduction in access latency, with corresponding gains in responsiveness and contextual accuracy over current approaches.

Early projects like vectorvfs are exploring this direction. With the right team, this architecture could be implemented at scale within 6 to 9 months.
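To make the idea concrete, here is a minimal local-only sketch. The embed() function is a placeholder producing random unit vectors, and the idea of storing vectors next to the repository (for example under .git/embeddings) is an assumption; nothing here is an existing Git or GitHub feature.

```python
# A sketch of "embeddings cloned with the repo" for local-first semantic queries.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))   # placeholder, not a real model
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)                              # unit-norm vector

class LocalEmbeddingIndex:
    """Per-file vectors kept alongside the working tree for local-first queries."""
    def __init__(self) -> None:
        self.paths: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, path: str, source: str) -> None:
        self.paths.append(path)
        self.vectors.append(embed(source))

    def query(self, question: str, k: int = 3) -> list[str]:
        q = embed(question)
        scores = np.stack(self.vectors) @ q    # cosine similarity (vectors are unit-norm)
        return [self.paths[i] for i in np.argsort(-scores)[:k]]

index = LocalEmbeddingIndex()
index.add("src/auth.py", "def login(user, password): ...")
index.add("src/billing.py", "def charge(card, amount): ...")
# With a real embedding model the query below would rank src/billing.py first;
# with the random placeholder it simply returns the nearest stored vector.
print(index.query("where is payment handled?", k=1))
```

The query path is entirely local, which is where the 2 to 5ms access figure would come from.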

@igorcosta (Author)

Update 18th May 2025

I wish Copilot had shipped a feature like this back in 2024. It is still not too late. We should avoid building many first-party integrations and extensions; the community can lead that work. Our job is to back them with solid tools and support.

Copilot Open Models: 8B, 32B, and 270B models released in 2028 to spark open innovation

Windsurf shipped its own models three days ago. The new models are smaller and faster, yet their accuracy is still below GPT-4.1. Windsurf aims to handle 98 percent of the tasks a developer can do. See the latest benchmark: Wave 9 SWE-1.

cc: @ashtom @lostintangent

@igorcosta (Author)

Nov 2025

The industry has just accepted that MCP is neither safe nor scalable. I've won!

@igorcosta (Author)

30th May 2025

Sakana AI Labs just released the Darwin Gödel Machine (https://sakana.ai/dgm/), which is built on a concept similar to ACE.

@igorcosta (Author)

23rd June 2025

The strategy is solidifying. The paradigm shift seems to be accelerating by about 10 percent, so I may have to revise some of the 18-month goals down to less than 5 months by the end of this year.

A few updates.

  • Wix acquired Base44, improving the credibility of its platform niche as it moves from templates to recipes to landing pages.
  • My good friend @karpathy discussed the future of software in public at Y Combinator 2025; we had a great discussion about what that would look like as we reviewed this piece.

@igorcosta (Author)

31st July 2025

A lot has changed. Some of my early predictions were spot on, but too soon. Here is what happened in the past 30 days.

  • On 25th June, two days after my last update, Google launched Gemini CLI (an open source CLI tool for developers), led by my good friends Ryan and Scott. It spread like wildfire, and I think they have a good shot, mainly because it is built in the open; trust can't be eroded if you know what you're using.

  • Google also launched Jules, a vibe-coding cloud-based IDE (https://jules.google/).

  • Google DeepMind is working on a diffusion model for coding, targeting roughly 400 tokens/s or more.

  • Claude Code CLI launched to compete with Google Gemini CLI; it turns out there are a lot of cool new ideas around this.

  • The deal OpenAI was working on to acquire Windsurf fell through, and Google DeepMind hired away their CEO and founding engineers.

  • Vibe coding is this year's new flavour and will fade away in 2026. Lovable is dominant in this sector; they raised $200M.

New vibe-coding tools competing in this sector include Copilot Spark, Google, and:

  • http://getmocha.com/
  • http://bolt.new/
  • http://rocket.new/
  • http://softgen.ai/
  • http://v0.dev/
  • http://lovable.dev/
  • http://orchids.app/
  • http://dualite.dev/

  • AWS launched Kiro.dev to compete with the other players.
  • Chinese labs are dominating dense and large open source models for coding and general-purpose use, such as Kimi K2 and GLM-4.5.

And so many more. The definitive goal this year, across all these tools, is to genuinely increase developer productivity and quality. Maybe that is a signal the market will shift from serving tokens to outcome-based consumption? A new kind of SaaS business model?

@bibryam commented Nov 12, 2025

I loved reading this. No further predictions or reflections?
