
Getting Scout Data Into Your AI Workflow


If you’ve spent any time in developer tooling lately, you’ve probably noticed a pattern: every product is rushing to add a chatbot, an AI summary, or some kind of “magic” button. We get it — it’s tempting. But at Scout, we’ve been deliberately taking a different approach.

Instead of building AI into our product first, we’ve focused on making Scout’s data accessible to the AI tools you’re already using. Your coding assistant can read your codebase, understand your architecture, and propose changes — but it’s blind to how your application actually behaves in production. That’s where we come in. Scout bridges that gap: real performance data, real errors, real traces, delivered wherever your agent needs them.

Over the past few months, we’ve shipped a series of updates across our API, MCP servers, and a brand new CLI — all with this philosophy in mind. Here’s what’s new and why we built it.

API: Filling the Gaps

Everything starts with the API. Scout’s API has been around for a while, but it had some notable blind spots. If you were trying to get a full picture of your application’s performance programmatically, you’d hit walls pretty quickly.

We’ve addressed that. The API now includes background job data — Sidekiq, Resque, Celery, delayed_job, whatever your queue processor of choice is — with the same level of detail you’d expect from our endpoint monitoring: throughput, execution time, latency, error rates, and full trace support. We also added usage and billing endpoints, so you (or your agent) can check transaction counts, node usage, and plan limits without leaving the terminal.
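To make the shape of this concrete, here's a minimal sketch of calling the API from Python. The base URL, endpoint path, and parameter names below are illustrative assumptions, not the documented API surface — check Scout's API reference for the real routes and auth scheme.

```python
# Hypothetical sketch of an authenticated Scout API request.
# BASE_URL, the path, and the query parameters are assumptions for
# illustration only.
import urllib.request

BASE_URL = "https://scoutapm.com/api/v1"  # assumed base URL


def build_request(path, api_key, params=None):
    """Build an authenticated GET request for a Scout API path."""
    query = ""
    if params:
        query = "?" + "&".join(f"{k}={v}" for k, v in params.items())
    return urllib.request.Request(
        f"{BASE_URL}{path}{query}",
        headers={"Authorization": f"Bearer {api_key}"},
    )


# e.g. background-job metrics for an app (path is hypothetical):
req = build_request("/apps/6/background_jobs", "SCOUT_API_KEY",
                    {"from": "2024-01-01"})
print(req.full_url)
```

The same pattern would cover the usage and billing endpoints — only the path changes, which is exactly why a thin helper like this is enough for most scripts.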

These aren’t flashy features, but they’re foundational. Every other tool we’ve built sits on top of this API, so closing these gaps means the entire ecosystem gets better.

MCP Servers: Two Ways In

For developers using AI coding assistants like Claude Code, Cursor, or VS Code Copilot, we offer a Model Context Protocol (MCP) server that puts Scout data directly into your assistant’s context. Ask “what are the slowest endpoints in my app?” or “show me recent N+1 query insights” and get answers grounded in live telemetry.

We’ve built two options here:

Hosted MCP Server — Zero infrastructure on your end. Authenticate via OAuth and you’re connected. This is the fastest path to getting Scout data into your AI workflow.

Local MCP Server — A Python package (also available as a Docker image) that runs entirely on your machine. It ships with an interactive setup wizard (npx @scout_apm/wizard) that auto-detects your platform and walks you through configuration.
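For reference, MCP clients are typically wired up through a JSON config block like the one below. The server name, command, and environment variable here are illustrative assumptions — the setup wizard (npx @scout_apm/wizard) generates the correct entry for your platform, so treat this only as a sketch of what it produces:

```json
{
  "mcpServers": {
    "scout": {
      "command": "scout-apm-mcp",
      "env": {
        "SCOUT_API_KEY": "your-api-key"
      }
    }
  }
}
```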

Both options expose the same set of tools — 17 in total — covering apps, endpoints, traces, errors, insights, background jobs, and usage data. The local server also bundles setup guides for 14 frameworks, so your assistant can help you configure Scout instrumentation without you having to dig through docs.

We added OAuth support alongside API key authentication because we want the setup experience to feel modern and secure, especially for teams. Whether you’re running the hosted server or the local one, you can choose whichever auth method fits your workflow.

A New CLI, Built for Developers and Agents

We also shipped a brand new CLI, written in Go and available via Homebrew.

The Scout CLI gives you terminal access to everything: app metrics, endpoint performance, distributed traces, error groups, performance insights (N+1 queries, memory bloat, slow queries), background jobs, and transaction usage. It renders data with human-friendly tables and inline ASCII charts, making it genuinely useful for quick investigations without leaving the terminal.

But here’s where it gets interesting for agentic workflows. The CLI supports a --toon flag that outputs data in TOON format — a token-efficient structured format designed specifically for LLM consumption. When you pipe the CLI’s output (say, scout endpoints list --app 6 | llm "which endpoints are slowest?"), it automatically switches to TOON format. No flags needed, no extra configuration. Your agent gets structured, efficient data by default.

It also supports a --json flag for tooling or scripts that prefer raw JSON. The idea is simple: the same data, in whatever format the consumer needs.
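The "auto-switch when piped" behavior is a classic pattern: check whether stdout is a terminal and pick the format accordingly. Here's a minimal Python sketch of the idea — the real CLI is written in Go, and the function names and the JSON stand-in for TOON are invented for illustration:

```python
# Sketch of TTY-aware output-format selection (illustrative only):
# render a table for humans, switch to a machine format when piped.
import json
import sys


def render(rows, force=None):
    """Pick an output format: an explicit flag wins; otherwise a pipe
    (non-TTY stdout) gets machine-readable output."""
    fmt = force or ("table" if sys.stdout.isatty() else "machine")
    if fmt == "machine":
        return json.dumps(rows)  # stand-in for TOON output
    # simple fixed-width table for interactive use
    headers = list(rows[0].keys())
    lines = ["  ".join(f"{h:<12}" for h in headers)]
    for row in rows:
        lines.append("  ".join(f"{str(row[h]):<12}" for h in headers))
    return "\n".join(lines)


print(render([{"endpoint": "/users", "p95_ms": 412}]))
```

Running this interactively prints a table; piping it into another command flips the output to the machine format with no flag required — the same ergonomics the CLI describes.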

Why Options Matter

We’ve intentionally built multiple paths to the same data: an API, two MCP servers, and a CLI. That might seem redundant, but we think it reflects reality. The AI tooling landscape is moving incredibly fast. Six months ago, most developers hadn’t heard of MCP. Today, it’s becoming a standard interface for AI assistants. Tomorrow, the preferred integration pattern might be something else entirely.

Rather than betting on a single interface, we’re meeting developers — and their agents — wherever they are. Prefer a CLI you can pipe into scripts? We’ve got that. Want your coding assistant to have direct access to production metrics? MCP. Building your own integration? The API is there.

Looking Ahead

This is just the beginning. We have more integrations on the way, designed to embed Scout data even deeper into the tools and workflows developers are already using.

We’re also bringing more intelligence into the core Scout product itself. But we’re doing so deliberately by building features that we, as developers, would actually want to use in our day-to-day work. Not AI for the sake of AI. We started with data access because that’s where developers and their agents need us right now: providing the production context that no amount of static code analysis can replace. The rest follows naturally from there.

The way we see it, we’re never going to compete with your coding assistant when it comes to understanding your codebase. That’s not our job. Our job is to make sure that assistant has the performance data it needs to make better decisions. That’s what we’re focused on, and we think it’s the right place to start.

Give the new tools a try — we’d love to hear what you think.

Cheers, The Scout Team