
Future of Coding With Graphite CEO Merrill Lutsky | 5YF #38

The 100X Engineer, Vibe Coding Bottlenecks, Agents As Evaluators, AI Roll-ups

Future Of Software Creation: Scaling Review

Hi there,

Happy release day! Today we dive into the future of software development in a world of vibe coding. Tune in here 🎧

I sit down with Merrill Lutsky, CEO and co-founder of Graphite — a New York startup bringing AI-acceleration and automation to code review. Founded in 2020 and funded to the tune of $70M, Graphite has become a key part of the developer ecosystem with over 45,000 developers on its platform.

While the rest of the industry chases code generation with copilots and vibe coding, Graphite is focused on what happens after the pull request is created. As more code is generated with AI, they enable developers to scale the evaluation, testing, and review process before it is released. That focus targets a growing bottleneck in a world where the majority of code is generated by AI.

What role do developers play in an era of vibe coding? How can we scale ourselves to keep up with the pace of code being generated to ensure its quality and impact? We dive into these and more.

LLMs are great at generating code. They’re also really, really, good now at reviewing it, understanding it, and improving it.

Was this forwarded to you? Subscribe to get the next one in your inbox!

My 5 Year Outlook:

  • Agent-Led Code Generation By Default

    We know it's coming: the move from AI-assisted coding to AI-led.

  • The Multi-Disciplinary Developer

    The shift to higher order tasks has developers owning more of the outcomes.

  • Adaptive Software Interfaces

    UI and UX dynamically personalized to each user.

Curious? Read on as I unpack each below 👇🏼

Agent-Led Code Generation By Default

Just as GitHub Copilot made autocomplete table stakes, the next wave is full agentic development: software that builds itself based on high-level specs, product requirements, or even natural language prompts.

A modality shift happens in the next couple of years, where most development goes from happening locally to happening remotely — and becomes primarily agent-driven.

The runaway adoption of vibe coding tools like Cursor, Windsurf, and Lovable is accelerating this shift. These tools don’t just help write code—they capture workflows, metadata, and developer behavior that improve the performance of foundation models like Claude and GPT-4o. It’s a compounding feedback loop: more usage leads to better models, which lead to more capable agents.

And it’s not limited to autocomplete. A new generation of agents is targeting more complex engineering workflows. Tools like Devin—the autonomous software engineer from Cognition Labs—are now capable of handling multi-layered tasks: building full-stack apps, configuring CI/CD pipelines, resolving bugs across unknown codebases, and managing entire repositories from prompt to production.

Devin has demonstrated the ability to operate independently across multiple sessions, scoping, building, testing, and shipping usable code—all without human assistance. It’s not just producing snippets; it’s executing entire projects. That shows how close we are to handing off higher-order engineering work to AI agents.

Further up the frontier, Poolside AI (featured on 5YF #20) is going one step further: building its own foundation model, trained entirely on code. Unlike Devin, which relies on general-purpose models, Poolside is betting on verticalization—designing a model from the ground up with coding as the native language. The goal? Go beyond assistance and eliminate the developer loop entirely: request in, deployable software out. CEO Jason Warner argues the future will be owned by the foundation models, not the apps built on top.

Agent-led code generation is no longer theoretical. The speed, efficiency, and iteration cycles enabled by AI already point to a world where models drive most software creation. As inference costs drop and capabilities rise, the default dev environment may look less like an IDE—and more like managing an autonomous software factory.

But as generation accelerates, the bottleneck shifts. Reviewing, validating, and deploying code becomes the real challenge. That’s where tools like Graphite’s Diamond come in—an agent that automates pull request review and is already used by startups to merge code without human intervention. Programmatic refactoring, batch upgrades, and GitHub-native agent workflows are becoming the norm. In this new era, developers won’t just write code—they’ll review, direct, and orchestrate AI output at scale.
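
To make that concrete, here is a minimal sketch of what an agent-in-the-loop review queue could look like. It is illustrative only: every helper function, type, and threshold below is an assumption made up for this sketch, not Graphite’s Diamond API or any vendor’s SDK. The point is the shape of the workflow: an agent reviews every open pull request, merges only what it is highly confident about, and escalates the rest to a human.

```typescript
// Illustrative sketch only — all helpers here are hypothetical stand-ins,
// not Graphite's Diamond API or any specific vendor's SDK.

interface PullRequest {
  id: number;
  title: string;
  diff: string;
}

interface AgentReview {
  verdict: "approve" | "request_changes";
  confidence: number;   // 0..1, how sure the agent is in its verdict
  comments: string[];   // findings to post back on the PR
}

// In a real setup these would wrap your VCS host's API and an LLM call;
// here they are stubbed so the sketch runs end to end.
async function fetchOpenPullRequests(repo: string): Promise<PullRequest[]> {
  return [
    { id: 101, title: "Fix null check in billing", diff: "+ if (!invoice) return;" },
    { id: 102, title: "Rewrite auth middleware", diff: "~400 changed lines" },
  ];
}

async function runReviewAgent(pr: PullRequest): Promise<AgentReview> {
  // Placeholder heuristic: small diffs look "safe", big rewrites need a human.
  const small = pr.diff.length < 80;
  return {
    verdict: small ? "approve" : "request_changes",
    confidence: small ? 0.95 : 0.4,
    comments: small ? [] : ["Large change surface; please scope and split."],
  };
}

async function mergePullRequest(pr: PullRequest): Promise<void> {
  console.log(`auto-merged #${pr.id}: ${pr.title}`);
}

async function requestHumanReview(pr: PullRequest, review: AgentReview): Promise<void> {
  console.log(`escalated #${pr.id} to a human reviewer`, review.comments);
}

// The orchestration loop: review every open PR, auto-merge only high-confidence
// approvals, and escalate everything else.
async function reviewQueue(repo: string, autoMergeThreshold = 0.9): Promise<void> {
  for (const pr of await fetchOpenPullRequests(repo)) {
    const review = await runReviewAgent(pr);
    if (review.verdict === "approve" && review.confidence >= autoMergeThreshold) {
      await mergePullRequest(pr);
    } else {
      await requestHumanReview(pr, review);
    }
  }
}

reviewQueue("example/repo");
```

The interesting knob in a setup like this is the confidence threshold: it is the dial a team turns as trust in the agent grows, which is exactly the kind of judgment call that stays with the human orchestrator.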

Merrill Lutsky, CEO of Graphite

Graphite is bringing AI-acceleration and automation to code review. Founded in 2020 out of New York, Graphite has become a key part of the developer ecosystem — as more code is generated with AI, they enable developers to scale the evaluation, testing, and review process before it is released, a bottleneck that has become increasingly important. The startup has raised over $70M from leading VCs such as Accel, A16Z, and Menlo, as well as a strategic investment from model provider Anthropic. Last year Graphite grew its revenue 20X and is trusted by over 45,000 developers at top engineering organizations such as Shopify and Figma.

Merrill Lutsky is co-founder and CEO of Graphite, his second startup. He has helped develop and manage software products for high-output engineering companies such as Square, Oscar Insurance, and SelfMade, and holds a degree in Applied Math and Economics from Harvard.

The Multi-Disciplinary Developer

In a world of AI-generated code, the role of the software developer shifts from creator to conductor. The edge will no longer go to the fastest coder—but to the engineer who can manage agents, review intelligently, and bring in architecture, design, and cross-functional thinking.

We’ll likely have fewer of what we call software engineers today. The remaining ones will be those with deep architectural understanding, strong AI fluency, and the ability to bring cross-disciplinary skills to bear.

In this world, coding is not the bottleneck — judgment is. Developers will increasingly own product specs, user experience, AI prompting, and domain-specific decisions. A feature doesn’t start with lines of code, it starts with defining what should be built — and the best developers will be those who understand the user’s needs, data flows, security concerns, and business model impact.

The most valuable developers will be the ones who know how to work with agents, bring broader context, and differentiate the product in ways code alone can’t.

The 10X engineer of the future may be the one who knows how to scope the right feature, not write the code themselves. One who appreciates the intent and overall outcome more than any model can.

We’ve seen this shift before in other industries. Designers who can write frontend code tend to ship better, tighter user experiences. Marketers who understand data and run SQL queries unlock better targeting and insights. In a world where AI is taking over execution, the differentiator becomes synthesis—blending skills across product, design, data, and domain expertise. The best developers in five years may not look like engineers today—they may be full-stack thinkers with a toolkit that spans AI orchestration, interface logic, user research, and strategic execution.

Adaptive Software Interfaces

As more of the software lifecycle gets automated, what unlocks is a radically different end-user experience. UI and UX can become personalized, dynamic, and responsive to the user’s context — even adjusting features on the fly.

More customization, even user-modifiable software. The software extends itself based on what the user needs.

We already see this type of personalization in media, social, and content platforms. Netflix, YouTube, and TikTok all adapt what you see based on tight feedback loops of engagement. But the interface—the actual software design—remains static. That’s starting to change.

Imagine onboarding flows that adapt to your experience level, enterprise dashboards that prioritize features based on usage patterns, or creative tools like Figma that present different UI for power users vs. beginners. AI-native interfaces could shift layout, complexity, and control surfaces based on your goals, behaviors, or even role.
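
As a rough sketch of the mechanics, an adaptive interface boils down to deriving a UI configuration from usage signals and role rather than shipping one static layout to everyone. Everything below (the signal fields, layout names, and thresholds) is an assumption made up for illustration, not any specific product’s API; an AI-native version would swap the hand-written heuristic for a model, but the shape is the same.

```typescript
// Illustrative sketch only — signal fields, layout names, and thresholds are
// assumptions for illustration, not any specific product's API.

interface UsageSignals {
  sessionsCompleted: number;        // how often the user has used the tool
  advancedFeaturesUsed: string[];   // e.g. shortcuts, scripting, plugins
  role: "viewer" | "editor" | "admin";
}

interface InterfaceConfig {
  layout: "guided" | "standard" | "dense";
  showOnboardingHints: boolean;
  exposedPanels: string[];
}

// Derive an interface configuration from behavior and role instead of
// rendering one static UI for every user.
function adaptInterface(signals: UsageSignals): InterfaceConfig {
  const isPowerUser =
    signals.sessionsCompleted > 50 || signals.advancedFeaturesUsed.length > 3;

  if (signals.sessionsCompleted < 5) {
    return { layout: "guided", showOnboardingHints: true, exposedPanels: ["basics"] };
  }
  if (isPowerUser || signals.role === "admin") {
    return {
      layout: "dense",
      showOnboardingHints: false,
      exposedPanels: ["basics", "automation", "admin"],
    };
  }
  return {
    layout: "standard",
    showOnboardingHints: false,
    exposedPanels: ["basics", "automation"],
  };
}

// A brand-new user gets the guided layout with onboarding hints.
console.log(adaptInterface({ sessionsCompleted: 2, advancedFeaturesUsed: [], role: "editor" }));
```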

Industries like healthcare, finance, and B2B SaaS are moving in this direction with adaptive dashboards, modular interfaces, and embedded AI copilots. Companies like Salesforce (Einstein), HubSpot, and Notion are evolving their products into systems that learn from users and respond dynamically—blurring the line between static UI and conversational interface.

In five years, software won’t just react to user input—it will proactively reshape itself to fit the user.

The boundary between human intention and machine execution is blurring. In the future, developers won’t just write software; they’ll direct living systems that learn, adapt, and build alongside them. The future of creation belongs to those who embrace AI not as a tool, but as a collaborator in shaping what comes next.

Collaborate and create.