October 1, 2024

Preventing and solving memory issues is at the heart of good memory management in Ruby – and of course, at Scout Monitoring, we also know that solid monitoring can be the X factor that makes all the difference.

But what exactly are we looking for when we load up something like Scout’s monitoring platform? We’ll discuss that in this post as we run through some common Ruby memory issues and their causes.

This is the second part of the series, so if you’re just jumping in, feel free to jump back to part one if you need a refresher – or move on ahead to part three if you’re ready to get into solutions and Scout monitoring.

Common cases include memory bloat, memory fragmentation, and memory leaks. Let’s take a look at each one of these in detail. Then, in the final part of the series, we’ll talk about some prevention methods and solutions for these, as well as how monitoring solutions can augment these principles.

Memory fragmentation

Memory fragmentation is a common problem that results from the unmanaged use of memory blocks over long periods of time. More specifically, memory is allocated to objects in small chunks, and when part of that memory is freed, only that particular chunk or slot is released.

This means that even if you have empty slots spread across a heap page, there’s no guarantee they form a contiguous block of free memory that can be allocated to a new object.

To illustrate, let’s say that 7 of the 8 available slots on a heap page have been allocated for a certain program:

Each color above represents a block of memory allocated for various requests. Now, let’s imagine that, during the course of the program, some of these blocks were freed:

At this point, if we attempted to allocate memory for a request 4 slots in length, we’d be unable to do so. We do have 4 open slots – but they’re scattered non-contiguously across the heap page, so they can’t satisfy the request. This scenario captures the essence of memory fragmentation.

Let’s look at this more closely in the context of Ruby: after a few cycles of garbage collection, it’s rare for an entire heap page to be completely emptied. As a result, partially used pages can’t be returned to the pool of available memory, even if they contain some free slots. This means that, although there may be empty slots within Ruby’s memory, the memory allocator still considers these pages as “in use” because they haven’t been fully freed.
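You can get a rough sense of this from GC.stat. Here’s a small sketch (the exact keys can vary a little between Ruby versions):

stat = GC.stat

# Pages Ruby has allocated for its object heap, and how their slots are used.
puts "heap pages: #{stat[:heap_allocated_pages]}"
puts "live slots: #{stat[:heap_live_slots]}"
puts "free slots: #{stat[:heap_free_slots]}"

# Eden pages still hold at least one live object; tomb pages are completely
# empty and are the only ones that can be handed back.
puts "eden pages: #{stat[:heap_eden_pages]}"
puts "tomb pages: #{stat[:heap_tomb_pages]}"

A heap with lots of free slots but very few tomb pages is a fragmented heap: the free space exists, but it’s scattered across partially used pages that Ruby can’t release.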

This issue is even more pronounced at the kernel level because the memory allocator can’t release an OS page unless all allocations on that page are freed. This means there could be plenty of free slots within the heap, but very few fully freed OS pages. 

In a worst-case scenario, you might have scattered free slots across memory but not enough contiguous space to meet a new allocation request. When this happens, the system allocates an entirely new OS page, adding even more unused slots and leading to further inefficiency.
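One rough way to observe this is to compare what Ruby thinks it’s holding with what the operating system has actually handed the process. Here’s a sketch that works on Linux only, since it reads /proc (the rss_kb helper is just for illustration):

# Resident set size (RSS) of the current process, as reported by the kernel.
def rss_kb
  File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i
end

puts "live Ruby objects: #{GC.stat[:heap_live_slots]}"
puts "process RSS: #{rss_kb} kB"

If RSS keeps climbing while the live object count stays roughly flat, the difference is sitting in partially used pages (and malloc’d buffers) that can’t be returned to the OS.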

As we mentioned in the previous part of this series, although Ruby’s garbage collection is quite effective and has also improved over the years, developers still need to be mindful. 

Memory leaks

A memory leak happens when allocated memory slots are never freed after they’ve been used, so more and more slots are allocated as the code continues to run. In practice, memory leaks show up in cases like these: caching user sessions but never expiring them, storing uploaded file data in memory instead of temporary storage, or maintaining ever-growing lists of background job results.

Another sneaky example might be when event listeners or callbacks are repeatedly attached to objects but never cleaned up. 
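For instance, here’s a hypothetical sketch of that pattern (EventBus and handle_request are made-up names for illustration):

# An in-process event bus that quietly accumulates listeners.
class EventBus
  @listeners = []

  def self.subscribe(&block)
    @listeners << block   # every subscriber is retained forever
  end

  def self.publish(event)
    @listeners.each { |listener| listener.call(event) }
  end
end

# If every request subscribes a fresh callback and nothing ever unsubscribes,
# the listeners array (and everything each block captures) grows without bound.
def handle_request(user)
  EventBus.subscribe { |event| puts "notify #{user.id}: #{event}" }
end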

Let’s look at a simple Ruby example of a memory leak in action to better understand this issue:


arr = []      # long-lived array that keeps a reference to every string we create
stats = {}    # reusable hash for GC.stat, so the stats call itself doesn't allocate a new one

loop do
  sleep(1)
  20_000.times { arr << "apm" }           # add 20,000 new strings every second
  puts GC.stat(stats)[:heap_live_slots]   # number of live object slots on Ruby's heap
end

The above code creates 20,000 strings every second and prints the live object count:


285051
295052
305053
315054
325055
335056
345057

Woah! The count keeps rising because the garbage collector can’t collect the strings being added to the array: they remain reachable through arr, so they never become eligible for collection. (If they weren’t referenced, the garbage collector would have been able to collect them and the numbers would stay within the expected range.)
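We can see the difference with a small variation on the example (just a sketch): if the strings aren’t retained by a long-lived reference like arr, each garbage collection cycle can reclaim them, and the live slot count levels off instead of climbing.

stats = {}

loop do
  sleep(1)
  # The strings are created but not stored anywhere long-lived, so the
  # garbage collector can reclaim them and heap_live_slots stabilizes.
  20_000.times { "apm" }
  puts GC.stat(stats)[:heap_live_slots]
end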

In languages with manual memory management, the classic leak is a dynamically allocated block that is no longer referenced (and so unreachable) but is never freed. Garbage collectors handle that case automatically by recycling memory that isn’t referenced, assuming it is no longer needed. In Ruby, leaks therefore tend to come from the opposite situation: objects that are unintentionally kept reachable, like the strings held by arr above.

Memory bloat

Memory leaks are about unfreed objects piling up over time, while memory bloat is about unplanned spikes in memory allocation. Bloat doesn’t come from the runtime environment or bad memory management, but from too many objects being allocated memory at once.

This is how a typical memory bloat incident appears on a memory vs. time graph:

This little spike might seem harmless at first, but the pace at which memory is freed rarely matches the pace at which it is allocated: due to constraints like fragmentation, deallocation is generally much slower than allocation. The result is an app that uses an abnormally large amount of memory throughout its life cycle – greatly affecting performance.

A Rails app will usually recover quickly after serving a slow request: a single slow request doesn’t have a long-lasting impact. That’s not the case for memory-hungry requests: just one allocation-heavy request can have a long-lasting impact on your Rails app’s memory usage.
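Here’s a rough illustration of that long-lasting impact (the numbers are arbitrary, and the exact behavior depends on your Ruby version and allocator):

pages_before = GC.stat[:heap_allocated_pages]

burst = Array.new(200_000) { "x" * 50 }   # one allocation-heavy "request"
burst = nil                               # nothing references the strings anymore
GC.start                                  # ...so they can all be collected

pages_after = GC.stat[:heap_allocated_pages]
puts "heap pages before: #{pages_before}, after: #{pages_after}"

The objects themselves are gone, but the heap pages (and malloc’d memory) acquired to hold them often stay with the process, so its footprint remains elevated long after the burst.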

Memory bloat is frequently caused by power users: controller actions that work fine for most users will often buckle under the weight of a power user’s data. These “power user” requests can use 10-100x more memory than your baseline requests. For instance, a single request that renders 1,000 ActiveRecord objects instead of 10 will trigger far more allocations and have a long-term impact on your app’s memory usage.
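As a sketch of that difference inside a hypothetical Rails app (Report and ReportsController are made-up names):

class ReportsController < ApplicationController
  def index
    # For a power user, rendering this relation can materialize 1,000+
    # ActiveRecord objects (plus the strings, arrays, and hashes behind them)
    # in a single request.
    @reports = Report.where(user: current_user)

    # One way to keep per-request allocations closer to the baseline is to
    # cap how much a single request can load, e.g.:
    # @reports = Report.where(user: current_user).limit(10)
  end
end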

A number of memory increases also happen when a Rails application starts up: Ruby loads libraries dynamically, so some libraries aren’t loaded until the first requests that need them are processed. It’s important to filter these early requests out of your analysis.

Scouting out deeper solutions

We’ve covered the basics of memory fragmentation, leaks, and bloating, as well as why they occur. In the final part of this series, we’ll look at the best practices for preventing these issues, how to identify and distinguish them, and how to use Scout like a pro to take your memory mastery to the next level. 

And one last note: understanding Ruby’s memory system is an awesome way to gain expertise with the language, but actually dealing with this in practice is challenging. If you’re ready, Scout is here to help monitor your memory usage in real time and offer practical insights without the need to track everything yourself. 

So, check out your options for Scout (including our free plan) and our guide to getting up and running in 3 minutes in Ruby – and be sure to join our Discord community!
