If you’ve spent any time debugging slow endpoints in a Phoenix application, you know the pattern: something is slow, you open your APM, you see a big chunk of time attributed to “Ecto” or maybe just “Controller,” and then you go make coffee while you grep through logs trying to figure out what actually happened. With v2.0 of the Scout APM Elixir agent, we’re closing that gap. Elixir applications now have full access to Scout’s Database Monitoring and External Services views, giving you per-query breakdowns of your Ecto database calls and per-domain visibility into outbound HTTP requests through Finch, Req, and Tesla.
Both of these features share a theme: visibility into the things your application calls out to. Your code is the middle layer in a sandwich between users and dependencies. Every once in a while you have to watch for some moldy bread. Database mold is the worst. We should probably move on.
Database Monitoring for Elixir
Previously, our Ecto integration told you that a query happened and how long it took. That’s like a mechanic telling you “your car made a noise.” Helpful, but you’d really like to know which noise and where.
Now, Elixir applications get the same Database Monitoring view that our Ruby and Python agents have had. Every Ecto query is broken down by table and operation — you’ll see entries like Execution#Insert, StepLog#Insert, and ObanJob#Update — each ranked by time consumed, with throughput, mean duration, and 95th percentile latency. A stacked bar chart shows how database time is distributed across queries over time, and a Database Events sidebar highlights usage spikes so you can spot anomalies at a glance.
This happens automatically. The agent identifies the table and command type from each Ecto query, so if you already have our EctoTelemetry integration attached, you’re done:
# In your Application.start/2 (you probably already have this)
:ok = ScoutApm.Instruments.EctoTelemetry.attach(MyApp.Repo)
No configuration changes. Deploy the latest agent and the data starts flowing.
Why Per-Query Visibility Matters
Knowing that an endpoint spends 400ms in Ecto is a starting point. Knowing that it’s User#Select running 47 times via PageController.index is a diagnosis — that’s an N+1 query, and you can fix it with a preload.
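The fix itself is a one-liner in Ecto. A hedged sketch, with User, :posts, and MyApp.Repo standing in for your own schema and repo names:

```elixir
# Illustrative only: User, :posts, and MyApp.Repo are assumed names.

# N+1: one query for the users, then one more query per user.
users = MyApp.Repo.all(User)
Enum.map(users, fn user -> MyApp.Repo.all(Ecto.assoc(user, :posts)) end)

# Fixed: two queries total, no matter how many users come back.
users = User |> MyApp.Repo.all() |> MyApp.Repo.preload(:posts)
```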
The Database Monitoring view shows you exactly which queries consume the most time across your application. You can sort by rank, throughput, or percentile latency to find the worst offenders. Click into any query row to see the transaction traces where that query appears, and expand individual spans to see the full SQL text.
Here’s what the agent extracts from each query result:
# Under the hood, we pull from the Ecto result:
def extract_result_info(%{result: {:ok, result}}) do
  command = Map.get(result, :command)    # :select, :insert, :update, :delete
  num_rows = Map.get(result, :num_rows)  # integer row count
  {normalize_command(command), num_rows}
end
The command type determines how queries are grouped in the Database view (e.g., Execution#Insert vs Execution#Update), and the full parameterized SQL is available in trace detail when you need to see exactly what ran.
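As a rough illustration of that grouping (this helper is hypothetical, not the agent's actual code), the label is just the model name joined with the capitalized command:

```elixir
# Hypothetical helper: builds a Database view label like "Execution#Insert"
# from a model name and an Ecto command atom.
defmodule QueryGrouping do
  def group_name(model, command) when is_atom(command) do
    "#{model}##{String.capitalize(Atom.to_string(command))}"
  end
end

QueryGrouping.group_name("Execution", :insert)
# => "Execution#Insert"
```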
External Service Instrumentation
Most production Elixir apps talk to external services. Payment processors, email providers, search APIs, webhooks, that internal microservice your team swears will be deprecated by Q3 (it won’t be). These HTTP calls are frequently the slowest part of a request, but they’ve historically been invisible in traces unless you manually wrapped them with timing code.
Elixir applications now get Scout’s External Services view, which groups outbound HTTP calls by domain. You can see at a glance which third-party services consume the most time, their throughput, and mean/95th percentile latency — just like the Database view does for queries. We support automatic instrumentation for the two most popular HTTP client patterns in the Elixir ecosystem: Finch (which also covers Req) and Tesla.
Finch and Req
Finch is the HTTP client that powers Req, and it emits telemetry events that we hook into. One line in your application startup is all it takes:
def start(_type, _args) do
  ScoutApm.Instruments.FinchTelemetry.attach()

  children = [
    # your supervision tree...
  ]

  Supervisor.start_link(children, strategy: :one_for_one)
end
Every Finch HTTP request now appears as an HTTP/{method} span in your traces. Since Req uses Finch under the hood, all your Req.get! and Req.post! calls are automatically instrumented too. No changes to your Req code needed.
Each span captures:
- Operation: HTTP/GET, HTTP/POST, etc.
- URL: the full request URL (sanitized, more on that below)
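Under the hood, this rides on the standard telemetry events that recent Finch versions emit. A simplified, hypothetical sketch of a stop-event handler (the agent's real handler does more, and the handler id here is made up):

```elixir
:telemetry.attach(
  "demo-finch-handler",
  [:finch, :request, :stop],
  fn _event, %{duration: duration}, %{request: request}, _config ->
    # Finch reports durations in native time units.
    ms = System.convert_time_unit(duration, :native, :millisecond)
    IO.puts("HTTP/#{request.method} to #{request.host} took #{ms}ms")
  end,
  nil
)
```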
Tesla
Tesla uses a middleware architecture, so it takes one extra step: include the Tesla.Middleware.Telemetry plug in your client module:
defmodule MyApp.PaymentClient do
  use Tesla

  plug Tesla.Middleware.Telemetry
  plug Tesla.Middleware.BaseUrl, "https://api.stripe.com"
  plug Tesla.Middleware.JSON
end
Then attach our handler at startup:
ScoutApm.Instruments.TeslaTelemetry.attach()
One thing to note: placement matters. Tesla middleware executes top-to-bottom, and Tesla.Middleware.Telemetry times everything below it in the stack, so put it near the top if you want the measurement to wrap the downstream middleware as well as the actual HTTP call.
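For example, with a client that retries failed requests (MyApp.SearchClient and the base URL are made-up names; Tesla.Middleware.Retry is standard Tesla middleware), putting Telemetry first keeps the retries inside the measured span:

```elixir
defmodule MyApp.SearchClient do
  use Tesla

  # Telemetry first, so its timing wraps everything below it,
  # including retries and the HTTP call itself.
  plug Tesla.Middleware.Telemetry
  plug Tesla.Middleware.Retry, max_retries: 3
  plug Tesla.Middleware.BaseUrl, "https://api.example.com"
  plug Tesla.Middleware.JSON
end
```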
URL Sanitization
We strip query strings from URLs before sending them to Scout. Your GET https://api.example.com/users?token=secret123&page=2 becomes GET https://api.example.com/users. This is a deliberate decision. Query strings frequently contain API keys, tokens, session identifiers, and other values that have no business living in a monitoring tool. We’d rather lose some debugging convenience than accidentally store your credentials.
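The mechanics are simple enough to sketch with the standard library's URI module (this is an illustrative stand-in, not the agent's actual implementation):

```elixir
# Hypothetical sketch: drop the query string (and fragment) from a URL
# before it leaves the process.
defmodule UrlSanitizer do
  def sanitize(url) do
    url
    |> URI.parse()
    |> Map.put(:query, nil)
    |> Map.put(:fragment, nil)
    |> URI.to_string()
  end
end

UrlSanitizer.sanitize("https://api.example.com/users?token=secret123&page=2")
# => "https://api.example.com/users"
```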
Self-Exclusion
The agent automatically filters out requests to Scout’s own endpoints (apm.scoutapp.com, errors.scoutapm.com, otlp.scoutotel.com, etc.). You won’t see monitoring overhead cluttering your traces. It’s a small thing, but noisy instrumentation data is surprisingly annoying when you’re trying to diagnose a real problem.
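Conceptually, the filter is just a host check against a known list. A minimal sketch (the module, and the exact hosts and matching logic the agent uses, are assumptions here):

```elixir
# Hypothetical self-exclusion predicate: is this URL one of Scout's own
# endpoints?
defmodule SelfExclusion do
  @scout_hosts MapSet.new(["apm.scoutapp.com", "errors.scoutapm.com", "otlp.scoutotel.com"])

  def scout_host?(url) do
    case URI.parse(url) do
      %URI{host: host} when is_binary(host) -> MapSet.member?(@scout_hosts, host)
      _ -> false
    end
  end
end
```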
What Shows Up in Scout
In the External Services view, calls are grouped by domain — so if your app talks to api.stripe.com and www.wikipedia.org, you’ll see each as a separate row with its own throughput, time consumed, and latency metrics. A chart at the top shows total time consumed by call vs. throughput over time, making it easy to spot when an external dependency starts slowing down.
In individual traces, HTTP calls appear as HTTP#GET, HTTP#POST, etc. with a clickable URL badge that reveals the full sanitized URL. This lets you see exactly which endpoint your app called and how long it took relative to the rest of the transaction.
The Full Picture
With these additions, Elixir gets the same service-level visibility that Scout users expect from our Ruby and Python agents. The Database Monitoring view shows you which queries consume the most time across your entire application. The External Services view does the same for outbound HTTP calls. And individual traces tie it all together — a trace for a typical background job might show an HTTP#GET call to an external API taking 763ms, followed by an Execution#Insert at 1.3ms and a couple of StepLog#Insert calls, with clickable SQL and URL badges for the full details.
That’s the kind of breakdown that turns “this job is slow” into “this job spends 99% of its time waiting on wikipedia.org.” Combine that with Scout’s N+1 detection and error monitoring, and you have a fairly complete picture of where time goes in an Elixir application. We’re not done, but the gaps are getting smaller.
Getting Started
Update your scout_apm dependency to the latest version and add the attach calls to your Application.start/2:
def start(_type, _args) do
  # Ecto (you likely already have this)
  :ok = ScoutApm.Instruments.EctoTelemetry.attach(MyApp.Repo)

  # HTTP clients (add whichever you use)
  ScoutApm.Instruments.FinchTelemetry.attach()
  ScoutApm.Instruments.TeslaTelemetry.attach()

  children = [
    # ...
  ]

  Supervisor.start_link(children, strategy: :one_for_one)
end
If you’re not already using Scout APM with your Elixir application, our Elixir documentation walks through the full setup. The agent is lightweight, the integration is straightforward, and you can be looking at real trace data within a few minutes.