Craft and quality beat speed and scale, with or without agents

Linear is a tool for planning and building products that streamlines issues, projects, and product roadmaps. Connect with Tom on Twitter. This episode’s shoutout goes to user ozz, who won a Populist badge for their answer to “Column width not working in DataTables bootstrap.”

### Transcript

**Ryan Donovan:** Edge AI is changing the way we live, work, and interact with the world. Create future-ready devices with Infineon PSOC Edge, the next generation of machine learning and Edge AI microcontrollers. Learn more at infineon.com/psocedge.

*[Intro Music]*

**Ryan Donovan:** Hello, ladies and gentlemen, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. My name is Ryan Donovan, I’m your host, and today we are talking about AI agents — but how productive are they really? We’ve had some survey data that has shown mixed results. We’ve heard a lot from others that maybe people aren’t using them much, or maybe they have an impact, but we’re gonna find out from somebody who is actually using them in the field.

So, my guest today is Tom Moor, who is head of engineering at Linear. Welcome to the show, Tom.

**Tom Moor:** Hey, Ryan. Thanks for having me.

**Ryan Donovan:** Of course. At the top of the show, we like to get to know our guests a little bit. How did you get into software and technology?

**Tom Moor:** Oh man, okay, we’re going way back. Interestingly, I got into technology through music, which probably isn’t that rare. Very early on, I bought physical music magazines every month, and one of them had a tutorial on how to make a website for your music. It was literally HTML printed on paper, which you’d type out by hand into Windows Notepad.

Then came the magical moment: seeing the page render in Internet Explorer and being able to tweak it. That was a foundational experience for me. From there, I got into building games, like a lot of engineers do. I made a bunch of Flash games and even managed to sell a few for sponsorship to early Flash portals.

Eventually, I built some Android games and gradually worked my way into the startup scene. I was attracted to the idea of separating income from time. After teaching myself to code and gaining skills, I moved to San Francisco, joined the founding team of a startup that got into an accelerator, and stayed there for a decade navigating the startup world.

**Ryan Donovan:** Obviously, in that time, technology and software development lifecycles have changed quite a bit. You were probably there at the start of the whole cloud-native movement?

**Tom Moor:** Yeah, I’d say it was taking off. Our first startup was around 2010-2011. I remember hosting our own MongoDB because there was no platform offering it at the time. That was the source of many challenges; scaling wasn’t figured out yet. When I started, people were still proudly coding websites in Notepad.

**Ryan Donovan:** That’s right. Or dropping marquee tags all over the place.

**Tom Moor:** Yeah, definitely.

**Ryan Donovan:** Today everyone’s talking about AI agents, the big hot technology. We’ve seen survey data where about 50% say AI agents bring productivity gains, but we’ve also noticed that the more people use AI, the less they tend to trust it. Are AI agents really delivering the productivity gains promised?

**Tom Moor:** The decreasing trust surprised me, though the productivity gains didn’t. One thing I’m curious about is the definition of agents in your survey — are you referring to agents that live in your editor and are controlled carefully, or cloud-based agents acting like team members? I wonder if people conflate these.

**Ryan Donovan:** That’s an open question. Sometimes an agent is the cursor doing things for you, or sometimes it’s proactive. What do you mean when you talk about agents?

**Tom Moor:** Let me give some context by introducing Linear. Linear is a purpose-built platform for building software. It’s very end-to-end — ideas come in from customers and software ships out the other side.

At its core, it’s a really nice issue tracker where work is defined, contextualized, and assigned. Three years ago, all work was assigned to humans. Now a percentage is assigned to agents — mostly cloud-based ones, because it’s a team product.

These agents get triggered by assigning them work in Linear. They respond with pull requests or answers. While the best agents today might be those in your editor because you can steer them, cloud-based agents act like teammates handling tasks asynchronously.

**Ryan Donovan:** So, you see agents living in editors as enhanced autocomplete tools, while cloud agents are independent contributors?

**Tom Moor:** Exactly. Autocomplete AI like GitHub Copilot arrived around 2021. Then ChatGPT came along, leading to agent-like tools such as Cursor and Claude Code. These tools add some reasoning and tool use — chain-of-thought thinking — which is what qualifies them as “agents.”

When I use agents, I give them some initial code, method names, or patterns to follow. To get the best results, you want to write a small spec.

**Ryan Donovan:** So the more constraints and information you provide, the fewer mistakes they make.

**Tom Moor:** Right. I think some folks try agents without providing that context and get frustrated. Think of agents like junior engineers — you can’t just say “fix this” without some details. They need your mental context encoded in the prompt or issue description.

The great thing about remote agents is that they get all the context available in Linear — issue descriptions, stack traces, support comments, and so forth. This leads to much better results.
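
For illustration, a small spec of the kind Tom describes might look like the following hypothetical issue description; every name and detail in it is invented, not a Linear template.

```markdown
<!-- Hypothetical example: all names and details are invented for illustration. -->
## Fix: due dates ignore the workspace timezone

**Context:** Reported by three customers via support (threads linked).
The stack trace points at `formatDueDate()` in the issue detail view.

**Expected:** Due dates render in the workspace’s configured timezone.
**Actual:** Dates render in the browser’s local timezone.

**Constraints:**
- Follow the existing `useWorkspaceSettings()` hook pattern.
- Add a unit test next to the existing date-formatting tests.
```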

**Ryan Donovan:** Having a central place that aggregates all the context and feeds it to agents is critical. That’s something we’re working on internally at Stack Overflow too.

You’ve been using agents and building them into your product. What do you think drives hesitancy among others?

**Tom Moor:** Often, a few bad experiences make people check out. They spend 5-10 minutes using an agent, feel it’s a waste of time, and decide to solve things themselves. That’s understandable.

But it might help to introspect on why it failed — was the prompt insufficient? Tweaking the description a little can save a lot of downstream effort: checking out branches, pushing commits, opening code reviews, and so on. Those small steps add up on professional teams dealing with hundreds of bugs.

**Ryan Donovan:** Sounds like people haven’t yet figured out how to standardize workflows for automation using agents. I heard a quote from a friend: “never spend six minutes doing something by hand where you can spend six hours failing to automate it.”

**Tom Moor:** Exactly. Also, some negativity around AI comes from external factors — environmental concerns, opinions about prominent figures in the industry, and so on. That can color perceptions of the technology itself. But AI is an incredible tool, and we’re only beginning to harness it.

**Ryan Donovan:** People in tech are skeptical, having been through many hype cycles.

**Tom Moor:** Yeah, nothing compares to AI hype, even crypto.

**Ryan Donovan:** Every time Sam Altman speaks, he says we might be building God — so there’s a lot to be cautious about.

How do you think teams can ground their AI agent workflows in reality?

**Tom Moor:** Be aware of an agent’s capabilities and tools. For example, at Linear, we feed agents small bugs — even spelling mistakes. This lets support folks, who may not even have access to the codebase, assign small fixes to agents. Developers just review and merge the resulting pull requests.

That reduces friction. In some companies, spelling errors sit for months because fixing them involves too much effort or process. Now they get resolved in about 30 minutes.

**Ryan Donovan:** You mentioned craft and quality being more important than speed and scale, which is often the focus in AI discussions. How do you maintain craft and quality?

**Tom Moor:** There are a couple of aspects.

First, offloading some work to agents frees up time for developers to focus on refining features and the product itself.

Second, craft in software is about shipping constantly and iterating rapidly. At Linear, we deploy hundreds of times a day, all continuous deployment. This allows us to iterate with design partners, beta users, and internally in many small cycles.

To polish features, you need hundreds of refinement cycles. If you move slowly, that’s impossible.

Agents help accelerate coding and catch bugs before they ship, enabling faster, higher-quality iteration.

**Ryan Donovan:** So tighter feedback cycles lead to both speed and quality — not a new concept but AI accelerates it further.

**Tom Moor:** Exactly.

**Ryan Donovan:** AI code review is another interesting area. Most developers think humans should still do code review to avoid mistakes. How do you have AI code review without it causing problems?

**Tom Moor:** The human is still key — humans review the AI’s review. It’s a double layer of scrutiny.

AI has caught logical mistakes in complex code where humans might overlook things. It’s great at spotting security issues, like path traversal or unsanitized inputs, which are hard to hold in your head all at once.

Of course, we also use traditional non-AI tools. The best approach is a combination for thorough coverage.
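
As a concrete instance of the path traversal class Tom mentions, here is a minimal TypeScript sketch of the vulnerable pattern and one common mitigation; the directory and function names are invented for illustration.

```typescript
import path from "node:path";

const UPLOADS_DIR = path.resolve("/var/app/uploads"); // hypothetical root

// Vulnerable: a filename like "../../etc/passwd" escapes the uploads directory.
function resolveUploadUnsafe(filename: string): string {
  return path.join(UPLOADS_DIR, filename);
}

// Mitigated: resolve the full path, then verify it stays inside the root.
function resolveUploadSafe(filename: string): string {
  const resolved = path.resolve(UPLOADS_DIR, filename);
  if (!resolved.startsWith(UPLOADS_DIR + path.sep)) {
    throw new Error("path traversal attempt blocked");
  }
  return resolved;
}
```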

**Ryan Donovan:** I’ve noticed people get more comfortable with AI tools and there’s a push toward non-generative AI or heuristic layers combined with LLMs.

**Tom Moor:** Yes, when working daily with AI, you have to know when to rely on heuristics or conditionals before escalating to a large language model. It’s essential not to overuse AI as a hammer for everything.
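
A minimal sketch of that “heuristics first, model second” pattern, assuming an injected LLM call rather than any particular vendor SDK:

```typescript
type Label = "spam" | "bug" | "feature-request" | "unknown";

// Route cheap, deterministic checks first; escalate only ambiguous
// inputs to the (injected, hypothetical) LLM classifier.
async function classifyTicket(
  text: string,
  llmClassify: (text: string) => Promise<Label>, // assumption: caller supplies this
): Promise<Label> {
  if (/unsubscribe|buy now|giveaway/i.test(text)) return "spam";
  if (/stack trace|exception|crash|error code/i.test(text)) return "bug";
  if (/feature request|would be great if|please add/i.test(text)) return "feature-request";

  // Heuristics didn't match: pay for a model call only here.
  return llmClassify(text);
}
```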

**Ryan Donovan:** That makes sense why trust decreases over time — people learn AI’s actual capabilities and limitations and adjust their reliance accordingly.

**Tom Moor:** Right. AI is ultimately still a word predictor. Knowing that helps calibrate your trust.

**Ryan Donovan:** So human software engineering fundamentals are more important than ever to use these tools effectively.

**Tom Moor:** Absolutely. The better your fundamentals, the better you are at using and verifying AI.

**Ryan Donovan:** If junior engineers’ work is massively automated by agents, how do we cultivate senior engineers?

**Tom Moor:** We’re not replacing juniors, just augmenting them. Juniors today are often more adept at using these tools and help teach seniors. Apprenticeship flows both ways.

**Ryan Donovan:** So the learning dynamic evolves but continues.

**Tom Moor:** Exactly.

**Ryan Donovan:** You also integrate Linear with Cursor — one of the more talked-about AI coding agents. How does that plugin work?

**Tom Moor:** Linear is an issue tracker at heart, so we created a developer platform that brings agents into teams as teammates. Agent developers can register their apps via our APIs, and Cursor is one of the biggest agents in the system.

An issue arrives in Linear, you assign it to Cursor, which then begins work remotely. It can ask for more input if needed. You retain ultimate responsibility and can review its progress.

When Cursor is ready, you get a branch with a pull request to review and merge or pull locally and tweak. Agents handle the first chunk, making it easier to continue.

We also support custom agents, giving teams flexibility.
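
For a sense of the mechanics, here is a hypothetical webhook handler for that assignment flow; the endpoint, payload fields, and helper are illustrative assumptions, not Linear’s actual agent API.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Illustrative only: the field names below are assumptions, not Linear's schema.
app.post("/webhooks/linear", (req, res) => {
  const event = req.body;

  // React when an issue is assigned to our registered agent.
  if (event.type === "Issue" && event.data?.assignee?.isAgent) {
    const { id, title, description } = event.data;
    // Kick off asynchronous work; the agent later reports back with
    // progress comments and, eventually, a branch and pull request.
    queueAgentRun({ issueId: id, title, description });
  }

  res.sendStatus(200); // acknowledge fast, do the real work asynchronously
});

// Hypothetical helper that enqueues the coding task for the agent runtime.
function queueAgentRun(task: { issueId: string; title: string; description?: string }) {
  console.log("queued agent run for issue", task.issueId);
}

app.listen(3000);
```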

**Ryan Donovan:** So Linear, Slack, GitHub, all connect to these agents, which can work across tools?

**Tom Moor:** Yes, agents live in the cloud like human teammates, accessible from all tools — Slack, Teams, GitHub, Linear. You can start work in Slack, finish in Linear, and code-review in GitHub with the same agent.

We treat agents as first-class citizens on the platform. Everything is collaborative and visible, with real-time updates.

**Ryan Donovan:** That sounds like orchestration on top of orchestration — agents are platforms themselves.

**Tom Moor:** Indeed. They orchestrate tools and workflows.

**Ryan Donovan:** What kinds of AI features have you baked directly into Linear?

**Tom Moor:** We added a “product intelligence” layer focused on triage.

Issues coming in from support, Slack, etc., go into an inbox where an agent researches and labels them immediately. It finds related issues, suggests projects, and even potential assignees.

On large teams, this helps you find who knows the relevant code — no more spreadsheets or guesswork.

We show why an assignee is suggested — for example, due to recent work on similar issues.
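
Conceptually, that triage step could be sketched like this, using embedding search for related issues and a simple frequency-based assignee ranking; every helper here is a hypothetical stand-in, not Linear’s implementation.

```typescript
interface RelatedIssue {
  id: string;
  resolvedBy: string; // who fixed the similar issue
}

interface Suggestion {
  relatedIssueIds: string[];
  suggestedAssignee?: string;
  reason?: string;
}

// Hypothetical dependencies: an embedding model and a vector index.
declare function embed(text: string): Promise<number[]>;
declare function nearestIssues(v: number[], k: number): Promise<RelatedIssue[]>;

async function triage(issueText: string): Promise<Suggestion> {
  const vector = await embed(issueText);          // vectorize the new issue
  const related = await nearestIssues(vector, 5); // find similar past issues

  // Count how often each person resolved a similar issue.
  const counts = new Map<string, number>();
  for (const issue of related) {
    counts.set(issue.resolvedBy, (counts.get(issue.resolvedBy) ?? 0) + 1);
  }
  const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
  if (ranked.length === 0) return { relatedIssueIds: [] };

  const [assignee, count] = ranked[0];
  return {
    relatedIssueIds: related.map((i) => i.id),
    suggestedAssignee: assignee,
    reason: `resolved ${count} similar issues recently`,
  };
}
```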

**Ryan Donovan:** That’s a huge productivity boost — instead of hunting for the right person, just get the info instantly.

**Tom Moor:** Exactly. To users, it looks like a simple panel of suggestions, but behind the scenes it’s doing several minutes of deep research.

**Ryan Donovan:** That’s the future — not constantly checking inboxes but having prioritized, actionable queues.

**Tom Moor:** At Linear, we have a triage rotation called “goalie” — engineers on rotation focus on researching incoming issues, passing fixes off to agents, or handling them directly.

Eventually, the system should suggest assigning certain bugs directly to agents based on past success.

**Ryan Donovan:** Ladies and gentlemen, it’s time for our Stack Overflow shoutout! Congratulations to ozz, who earned a Populist badge for an outstanding answer to “Column width not working in DataTables bootstrap.” You can find the link in the show notes.

I’m Ryan Donovan, editor of the blog and host of the podcast here at Stack Overflow. If you have questions, comments, or topics for us, email me at [email protected], or reach out via LinkedIn.

**Tom Moor:** Thanks for listening! I’m Tom Moor, head of engineering at Linear. You can find me on Twitter at @TomMoor.

**Ryan Donovan:** And you can find Linear at linear.app.

Thank you for joining us!