
Adding FastMCP support to the Scout Python Agent

Background

Like many developers, I’ve spent the last year adjusting to a world where I spend more keystrokes arguing with, or, “explaining” to an agent the code I’d like written. No longer must we toil, writing every line of code by hand like our great-grandparents probably did.


We know our users (fellow developers with deadlines) are also spending an increasing amount of time in prompt windows. Hence, Scout recently published our first Model Context Protocol (MCP) server. I’ll spare the details of what MCP servers do, but the end result is that a user can ask an agent questions about their app’s performance, and that agent, wielding its new Scout-MCP-enabled powers, can gather and analyze interesting and relevant performance data and metrics from Scout.

Building this MCP server was straightforward, thanks largely to the FastMCP Python library.

We think FastMCP is great, and suspect our users do as well. Now, what about monitoring the performance of those fancy new MCP servers? 

Adding FastMCP Instrumentation

When we talk about “instrumenting” a library, we mean helping an application that uses that library report relevant information about its performance back to us. In the case of FastMCP, we’d like to report info about each MCP tool call: specifically its execution time, invocation parameters, and any other relevant context.

Library instrumentation happens through a couple of mechanisms in the Scout agent. We use monkey-patching with wrapt.decorator for direct method replacement in libraries like Redis, or we hook into dispatch/routing functions for frameworks like Flask and Bottle.
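As a rough illustration of the monkey-patching mechanism (the real agent uses wrapt.decorator; this self-contained sketch uses the stdlib’s functools.wraps instead, and every name in it is hypothetical):

```python
import functools
import time

class FakeRedisClient:
    """Stand-in for a third-party client we want to instrument."""
    def execute_command(self, name):
        return f"ran {name}"

def instrument_execute(original):
    @functools.wraps(original)
    def wrapper(self, name):
        start = time.monotonic()
        try:
            return original(self, name)
        finally:
            # A real agent would record a span here; we just stash the timing.
            wrapper.last_elapsed = time.monotonic() - start
    return wrapper

# Patch the method in place -- existing callers pick up the wrapper automatically.
FakeRedisClient.execute_command = instrument_execute(FakeRedisClient.execute_command)

client = FakeRedisClient()
result = client.execute_command("GET key")
```

Because the patch replaces the method on the class itself, application code needs no changes to become instrumented.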

In this case, however, I was pleased to see that FastMCP recently added Middleware (https://gofastmcp.com/servers/middleware) as a built-in way to hook into the request/response cycle. Awesome! Middleware can do just about anything you’d want to an incoming request, very much including inspecting it and handling errors. That’s all we need!

The lowest level hook FastMCP provides is the __call__ method, which would give us complete control over every message coming through the server. However, this is overkill for our purposes, and FastMCP makes things easier by providing specialized hooks that we can override. 

Tool Calls

For our initial instrumentation, we’re only going to support tool calls. This is far and away the most widely used and supported feature of MCP servers, and the one where performance insights matter most. The on_call_tool hook is the one we want in this case.

We start by defining a new class, FastMCP.ScoutMiddleware, following the naming convention of our other middleware classes. The first thing we want to do is create a new TrackedRequest with the name of the tool being called.

tracked_request = TrackedRequest.instance()
tracked_request.is_real_request = True

# Get tool name from execution context
tool_name = getattr(context.message, "name", "unknown")
operation = f"Controller/{tool_name}"
tracked_request.operation = operation

We prepend the operation name with Controller/ because this is one of the two operation types that Scout supports, the other being Job/ for background jobs. We decided that while MCP tool calls are not technically controllers in the classic sense (think Django controllers or Flask endpoints), they’re closer to a controller than to a job.

Now, the minimal step here is to create a span and await call_next, which executes the next middleware or handler in the chain.

with tracked_request.span(operation=operation, should_capture_backtrace=False):
    result = await call_next(context)
    return result
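To make the chaining concrete, here is a self-contained sketch of the call_next pattern with FastMCP’s types stubbed out as plain Python (all names here are hypothetical, not FastMCP’s actual classes):

```python
import asyncio

async def handler(context):
    """Innermost handler -- stands in for the actual tool execution."""
    return f"tool result for {context['tool']}"

class TimingMiddleware:
    async def on_call_tool(self, context, call_next):
        # Do work before the call, then delegate down the chain.
        context["seen_by_middleware"] = True
        result = await call_next(context)
        # Work after the call (e.g. stopping a span) would go here.
        return result

async def run():
    mw = TimingMiddleware()
    return await mw.on_call_tool({"tool": "slow_query_report"}, handler)

result = asyncio.run(run())
```

Each middleware sees the request on the way in, awaits the rest of the chain, and sees the result on the way out, which is exactly the shape we need for timing.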

And just like that, we’re done with the bare minimum instrumentation for FastMCP, thanks to that helpful middleware method! 

Adding Error Monitoring

Scout offers free error monitoring for the first 5000 errors received through our managed errors service. To send these errors, we just need to wrap the call_next in a try/except. So, we modify the above to be:

try:
    result = await call_next(context)
    return result
except Exception as exc:
    tracked_request.tag("error", "true")
    ErrorMonitor.send(
        (type(exc), exc, exc.__traceback__),
        custom_controller=operation,
        custom_params={"tool": tool_name, "arguments": arguments},
    )
    raise

Just like that, any exception that would have bubbled up from this call now gets sent to Scout. We then re-raise it with a bare raise, which preserves the original traceback and leaves the application’s behavior unchanged.
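The three-tuple we pass along is the standard Python exc_info shape of (type, value, traceback). A quick stdlib-only sketch of capturing it (the risky function is hypothetical; we format the traceback here instead of re-raising so the example runs to completion):

```python
import traceback

def risky():
    raise ValueError("tool blew up")

try:
    risky()
except Exception as exc:
    # The same (type, value, traceback) triple the agent hands to the error service.
    exc_info = (type(exc), exc, exc.__traceback__)
    # A bare `raise` here would re-raise with the original traceback intact.
    formatted = "".join(traceback.format_exception(*exc_info))
```

The formatted string contains the full stack trace, including the frame where risky raised.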

Collecting Arguments

A tracked request with the associated tool name is cool, but becomes far more useful when we can see the exact arguments that the tool was called with. The context.message object contains just that, so we can attach the arguments to our request with:

arguments = getattr(context.message, "arguments", {})
if arguments:
    filtered_args = filter_element("", arguments)
    tracked_request.tag("arguments", str(filtered_args))

That filter_element method automatically filters out potentially sensitive information such as keys, tokens, and passwords.
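filter_element is part of the Scout agent’s internals; a simplified, hypothetical version of that kind of recursive redaction might look like:

```python
SENSITIVE_MARKERS = ("key", "token", "password", "secret")

def redact(value):
    """Recursively replace values whose keys look sensitive (hypothetical helper)."""
    if isinstance(value, dict):
        return {
            k: "[FILTERED]" if any(m in k.lower() for m in SENSITIVE_MARKERS) else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value

args = {"query": "slow endpoints", "api_key": "sk-123", "options": {"auth_token": "t"}}
filtered = redact(args)
```

Redacting by key name rather than by value means the tool call’s shape stays visible in the trace without leaking credentials.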

Adding more context

We have the tool calls being tracked and the errors handled, but FastMCP also supports adding custom keys and properties to a user’s tools, and these are useful to include in the tracked request’s context. To get the called tool’s attributes and metadata, FastMCP recommends using the fastmcp_context object like so:
tool = await context.fastmcp_context.fastmcp.get_tool(tool_name)

Now, we can add tool attributes to the tracked request with:

if hasattr(tool, "description") and tool.description:
    tracked_request.tag("tool_description", str(tool.description)[:200])

if hasattr(tool, "tags") and tool.tags:
    tracked_request.tag("tool_tags", ",".join(sorted(tool.tags)))

We also added support for tool annotations (https://gofastmcp.com/servers/tools#param-annotations).


Conclusion

Adding FastMCP support to the Scout Python Agent shows how easily middleware can unlock powerful performance insights. With just a few lines of code, we’re able to track tool call execution time, arguments, errors, and context metadata, giving developers visibility into how their MCP servers behave in production.

This same approach can be applied to any custom middleware or framework method: wrap key operations in a TrackedRequest, tag relevant context, and let Scout do the rest. Whether it’s FastMCP, your own async handler, or a home-grown service layer, the pattern remains simple and effective.

We’re excited to continue expanding support to MCP resources and prompts using FastMCP’s on_read_resource and on_get_prompt hooks. If you’re experimenting with your own instrumentation, we’d love to see it; feel free to open a pull request in the Scout APM Python Agent repository.

The more visibility developers have into their agents, the better everyone’s applications perform.

Ready to Optimize Your App?

Join engineering teams who trust Scout Monitoring for hassle-free performance monitoring. With our 3-step setup, powerful tooling, and responsive support, you can quickly identify and fix performance issues before they impact your users.