<aside> 📘 Series:

  1. Garbage Collection In Go : Part I - Semantics
  2. Garbage Collection In Go : Part II - GC Traces
  3. Garbage Collection In Go : Part III - GC Pacing

Introduction

In the first post, I took the time to describe the behavior of the garbage collector and show the latencies that the collector inflicts on your running application. I shared how to generate and interpret a GC trace, showed how the memory on the heap changes over time, and explained the different phases of the GC and how they affect latency cost.

The final conclusion of that post was: if you reduce stress on the heap, you will reduce the latency costs and therefore increase the application’s performance. I also made the point that it’s not a good strategy to decrease the pace at which collections start by finding ways to increase the time between any two collections. A consistent pace, even if it’s quick, will be better at keeping the application running at top performance.
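To make “stress on the heap” concrete, here is a small sketch (not code from the post) that compares how many collections run when a slice is grown by repeated appends versus sized up front. The names `build` and `gcCount` are my own for illustration; growing the backing array forces extra allocations and copies, which is exactly the kind of heap stress that drives up collection counts.

```go
package main

import (
	"fmt"
	"runtime"
)

// build allocates a slice of n ints. When prealloc is true, the backing
// array is sized up front; otherwise it is grown by repeated appends.
func build(n int, prealloc bool) []int {
	var s []int
	if prealloc {
		s = make([]int, 0, n)
	}
	for i := 0; i < n; i++ {
		s = append(s, i)
	}
	return s
}

// gcCount reports how many collections ran while f executed.
func gcCount(f func()) uint32 {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	f()
	runtime.ReadMemStats(&after)
	return after.NumGC - before.NumGC
}

func main() {
	grow := gcCount(func() {
		for i := 0; i < 100; i++ {
			build(1_000_000, false)
		}
	})
	pre := gcCount(func() {
		for i := 0; i < 100; i++ {
			build(1_000_000, true)
		}
	})
	fmt.Printf("collections: growing=%d preallocated=%d\n", grow, pre)
}
```

The exact counts depend on the machine and Go version, but the growing version allocates roughly twice the bytes and should trigger noticeably more collections.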

In this post, I will walk you through running a real web application and show you how to generate GC traces and application profiles. Then I will show you how to interpret the output from these tools so you can find ways to improve the performance of your applications.

Running The Application

Take a look at the web application that I use in the Go training.

Figure 1

https://github.com/ardanlabs/gotraining/tree/master/topics/go/profiling/project

Figure 1 shows what the application looks like. This application downloads three sets of RSS feeds from different news providers and allows the user to perform searches against them. After building the web application, it is started.

Listing 1

$ go build
$ GOGC=off ./project > /dev/null

Listing 1 shows how the application is started with the GOGC variable set to off, which turns garbage collection off. The logs are redirected to the /dev/null device. With the application running, requests can be posted to the server.

Listing 2

$ hey -m POST -c 100 -n 10000 "http://localhost:5000/search?term=topic&cnn=on&bbc=on&nyt=on"

Listing 2 shows how 10k requests are run through the server over 100 concurrent connections using the hey tool. Once all the requests have been processed, hey produces the following results.

Figure 2