TL;DR
- Sometimes bugs just need to be fixed ASAP
- Having AI fix them with monitoring data straight from production is very fast and satisfying
- You should try Scout's error monitoring together with its MCP server
On bugs
Once, when I was a very junior developer, I was discussing a bug with a very senior developer (let's call him Burt). Satisfied with the fix, I said something like "oh, that was a great bug". He looked at me as if his eyes were going to fall out of his head. Clearly, this enraged him. He briefly went off about how there are no great bugs, there are only bugs to squash – and that’s all.
Looking back, I’m sure the experience taught me something about event listeners or whatever classic jQuery bug was in that app. In fact, I still think there’s some truth to what I said. But yes, that bug wasn’t exactly great. The great thing was actually the path of knowledge it sent me down. Now, in late 2025, I’ve found that these paths still exist, and they are just as satisfying to traverse. Sometimes, we need to walk down them.
But, at this stage of the game, I can sympathize a lot more with Burt’s frustration in that moment. So, if we take it for granted that there are indeed only bugs to be squashed out there, we might as well do it as fast as we can. Enter our MCP server!
Developing Scout’s MCP server
Let me share a bit more about this experience; in short, working on our MCP server has been a blast. (You should definitely check it out by the way, and you can start here).
We had been making rapid improvements to our API so it could serve as the basis for much of the MCP server's interactions. One of those improvements was exposing what we consider "insights" in our app:
- N+1 queries (a classic example follows this list)
- slow queries that consume more than a certain share of total request time, or more than an absolute number of seconds
- and memory bloat
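For anyone who hasn't hit one lately, here's the classic N+1 shape in ActiveRecord (the models are hypothetical, purely for illustration):

```ruby
# Hypothetical models: Post belongs_to :author. Illustration only.
# One query loads the posts...
posts = Post.limit(20)
posts.each do |post|
  # ...then each iteration fires another query for the author: 1 + N total.
  puts post.author.name
end

# Eager loading collapses the author lookups into a single additional query.
posts = Post.includes(:author).limit(20)
posts.each do |post|
  puts post.author.name # no per-row queries
end
```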
We were working on ways to keep these generated on demand, while also not overwhelming our infrastructure due to what will obviously be the appropriately huge adoption of our MCP server and new API.
As it turns out, one of those insight generation and caching mechanisms had a bug.
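To give a sense of the shape of such a mechanism, here's a simplified, hypothetical sketch (not our actual implementation) assuming a Rails-style cache:

```ruby
# Hypothetical sketch of on-demand insight generation with caching.
# The names (InsightGenerator, generate_insights) are illustrative,
# not Scout's actual code.
class InsightGenerator
  CACHE_TTL = 10.minutes # requires ActiveSupport

  def self.for_app(app_id)
    # Rails.cache serializes cached values (typically via Marshal) on write,
    # which is exactly the boundary where an unserializable value blows up.
    Rails.cache.fetch("insights/#{app_id}", expires_in: CACHE_TTL) do
      generate_insights(app_id)
    end
  end

  def self.generate_insights(app_id)
    # Expensive analysis over recent traces: N+1s, slow queries, memory bloat.
    # Returns a data structure that must survive serialization.
  end
end
```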
Happily, we had just reached the point where errors were exposed, and it was time to start dogfooding the MCP server with them. The first real task we gave it was this insight error.
```
can't dump hash with default proc
```
This was the kind of error that, when it came in, elicited an unprofessional groan, because I knew that someone was going to spend toil time stepping through stacks to connect the dots. No one was going to learn anything from this one.
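If you haven't hit this one before: it comes from Ruby's Marshal, which refuses to serialize a hash that carries a default proc, since procs can't be dumped. A console one-liner reproduces the general failure mode, though not necessarily our exact call site:

```ruby
# A hash with a default proc (a common trick for auto-initializing values)...
counts = Hash.new { |hash, key| hash[key] = 0 }

# ...can't pass through Marshal, which many Ruby caches use under the hood.
Marshal.dump(counts)
# => TypeError: can't dump hash with default proc
```

The usual fix is copying the data into a plain hash before it crosses a serialization boundary, e.g. `{}.merge(counts)`.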
Firsthand experience with Scout’s MCP server
There were no deeper secrets or insights to be gained except the exact location where something was creating an unserializable hash. Further, we couldn’t immediately reproduce it in development, which always increases the challenge. When the debugger doesn't have anything to latch onto, it’s just you and the source code.
At some point, I casually suggested that we ask the MCP server to figure this out, and one of our engineers listened. He came back a little later saying it had found the bug in one shot. Actually, here's the direct quote from Slack:
> Well... It found the line of code which was super nested and would have taken a while to find. I asked it to explain itself and got a reproducible way to do it in the console and it traced the call. ... 9.5/10 would do it again
Not only did that save time, it was fun. And I think this is an underappreciated part of working with LLMs (or at least, under-discussed).
Seeing the machine perform a relatively complex codebase analysis, with the help of information (errors, in this case) from the actual running system, is pretty amazing. If I crack open my crusty, cynical shell of software disillusionment for a minute, I actually find myself rooting for it to succeed, which is weirdly wholesome.
Still satisfying, just in a different way
There are days when the psychic energy required to start investigating a bug feels huge. People use "productive procrastination" to avoid that kind of thing all the time; I do it. Even having a stack trace that points at the problem line can feel like a very sparse map, like being handed a moped for what might be a transcontinental journey. Not great. By contrast, it's so satisfying to kick off an agent, grab a coffee, and come back to a solution!
Plugging errors into AI assistants that will tirelessly step through code and can implement testable solutions shifts the entire experience.
Do you sometimes need to pound through call sites and across files to learn things? Maybe. Although I'd argue that doing it the "hard way" is less useful now, when AI can go further and quickly summarize the situation. When the bug is actually just a bug to be squashed, let a robot do it – and go do something more productive.