Building a high-agility microservice app with Go, Redis, and Fargate

I'm currently building a high-scale app to serve high-demand clients. Here's what I'm doing.

One day you walk into (or log onto) work in the summer of 2022 to find that you need to build a highly dependable, scalable, agile, and performant application to serve a wide variety of users and process potentially hundreds of thousands of entries. Oh, and you're the only developer on the project. And you have a verbal deadline of the end of the year. If you were a sane person, you would have quit your job. But if you're insane like me, you take on the task.

Now the question is: what in the name of fresh hell are you going to do? You research, lots o-HAH, are you kidding? You just go for it!!! Okay, maybe not so fast. There are a lot of questions about what you can use for the task and how to get it done. This is how I'm currently building a high-scale app to serve the masses.

Starting with Go

I used to not like Go much, but now that I've been building more and more software with it, I really do love it, and it's a no-brainer for the app I'm going to be building. Its simplicity allows for rapid development and easy deployment across a fleet of infrastructure (I'll get into this more later).

In real-world practice, I used Go to decouple an API from our main Laravel application. When handling large sets of data for large clients, we saw high response times (up in the tens of seconds). This was due to a variety of factors:

  • Querying MongoDB too much
  • Non-concurrent array looping (with intensive operations)
  • No caching

Could we have solved these problems within the Laravel API? Sure. However, when you're trying to architect your app around microservices, it's best to build something from scratch, follow best practices, and take advantage of the power of Go. In a few weeks I had a basic API running that absolutely destroyed the existing one, bringing those extremely long response times down to under 1 second. Not bad for handling thousands of ops.
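
Concurrency did a lot of the heavy lifting there. As a minimal sketch (the item type and the processItem helper are made up for illustration), fanning intensive per-record work out across a bounded set of goroutines looks something like this:

package main

import (
	"fmt"
	"sync"
)

// processItem stands in for the intensive per-record work the old
// synchronous loop was doing (hypothetical helper).
func processItem(id int) string {
	return fmt.Sprintf("processed %d", id)
}

func main() {
	items := []int{1, 2, 3, 4, 5, 6, 7, 8}
	results := make([]string, len(items))

	// Bound concurrency so a huge batch doesn't spawn unlimited goroutines.
	sem := make(chan struct{}, 4)
	var wg sync.WaitGroup

	for i, id := range items {
		wg.Add(1)
		sem <- struct{}{}
		go func(i, id int) {
			defer wg.Done()
			defer func() { <-sem }()
			results[i] = processItem(id)
		}(i, id)
	}
	wg.Wait()
	fmt.Println(results)
}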

To framework or not to framework

Choosing whether to use Go's default HTTP server or a framework such as Gin or Fiber can come down to preference. Each option has its benefits and drawbacks, and not all use cases are created equal.

I've chosen to use Go Fiber to build this application. My main reasons for this are:

  • Zero memory allocation, courtesy of fasthttp
  • Express-like syntax makes onboarding new developers easier
  • Rapid development turnaround
  • Built for fasthttp's use case: many small-to-medium requests with consistent response times

It's not all sunshine and roses, however. There is one major issue with Fiber (and fasthttp): incompatibility with net/http. This means you will have to forego things like gqlgen or any server-related package built on the net/http ecosystem. For some this is a deal breaker, and in that case you should use Gin instead. However, when building my API replacement, Fiber served well and runs like a champ, so I will stay true to it for this project.
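
To show what I mean by Express-like syntax, here's a minimal Fiber server (the /health route and port are just placeholders):

package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

func main() {
	app := fiber.New()

	// Handlers take a *fiber.Ctx and return an error, much like Express middleware.
	app.Get("/health", func(c *fiber.Ctx) error {
		return c.JSON(fiber.Map{"status": "ok"})
	})

	log.Fatal(app.Listen(":3000"))
}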

You likely don't need that Go package

There's an old saying that if you can solve the problem with a few lines of JS, then you don't need that npm package. The same applies to Go packages. If it's something you can build yourself and put in the app, do it that way.

You should only be using go get for things that are absolute musts (frameworks, database handlers, etc.). Every added dependency just bloats your application. Keep it plain jane.

Structure over chaos

The biggest issue I've come across when working with Go is the language's strictness about circular dependencies. This isn't like JS, where you can just import a module and be done with it. You need to make sure whatever you're importing doesn't rely on the package you're importing it into. A circular dependency indicates bad design in your application's structure, and that's something you need to resolve.

To do so, you will need a good directory structure that isn't like a Windows 98 maze screensaver. A "three-tier" architecture, separating things where possible, is the way to go; however, this structure can be confusing to those just starting out. The best middle ground I can advise is: don't let a package get so bloated (too many files) that it eventually has to be imported somewhere that creates a circular dependency. Split packages up, along the lines sketched below.
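
For illustration, a layout along these lines keeps packages small and the import graph flowing one way (all names are placeholders):

myapp/
├── cmd/
│   ├── api/main.go       # backend entrypoint
│   └── worker/main.go    # worker entrypoint
├── internal/
│   ├── handler/          # HTTP handlers; imports service
│   ├── service/          # business logic; imports repository
│   ├── repository/       # database/Redis access; imports model
│   └── model/            # shared types; imports nothing
└── go.mod

Dependencies only point downward (handler → service → repository → model), so a cycle can't form.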

The microservice architecture

I am designing this application around the microservices architecture. This means that most functions of the app will be separated from each other. At a high level, this will include the following:

  • Frontend UI (SvelteKit)
  • Backend (Go Fiber)
  • Worker Nodes (Go)
  • Aurora & ElastiCache

The backend will handle the API and other primary functions, while things like jobs and automated tasks will be handled by worker nodes. This keeps the app decoupled and ensures that if one part of the app is having issues, the whole thing doesn't come crashing down.

The microservice architecture may cost more to deploy and adds additional complexity; however, at scale it holds its own against its monolithic counterparts. It's a no-brainer if you are building a high-resilience, scalable application. In some ways it gets more complex to architect, but in other ways it gets much easier to work with.

Leveraging Redis

Redis is a pretty special database engine where everything is stored in memory. This allows for extremely fast reads and writes. When developing an intensive app that is going to be distributed across multiple instances, keeping track of things with microsecond-level latency is critical. Because we're pretty deep into the AWS ecosystem, I'm using ElastiCache, which takes care of the infrastructure question. Redis will be used to manage our queue and any job data.
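
The workers talk to ElastiCache through a standard Redis client. Here's a minimal sketch with go-redis; the key is a placeholder, and the address points at a local Redis so it runs anywhere (in production it would be the ElastiCache primary endpoint):

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Swap in the ElastiCache primary endpoint for real deployments.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	if err := rdb.Set(ctx, "job:123:status", "queued", 0).Err(); err != nil {
		panic(err)
	}
	status, err := rdb.Get(ctx, "job:123:status").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(status) // queued
}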

Our job data is formatted in JSON. This gets parsed and processed by our separate "worker" nodes. The question then became how to efficiently have the worker nodes make changes to a key without collisions.

Here's what we need to achieve:

  1. The backend sends the job data and what to process.
  2. A worker node receives this job data, batches where necessary (to send to other workers), and begins processing the task.
  3. As the worker goes along, it saves the results of the task in Redis.
  4. Once a worker knows it's handling the last task, after finishing it will package the output into a client-deliverable format (CSV or JSON) and upload it to S3. We can then mark the job as finished in our primary database, delete the job from Redis, and notify the backend that the job is done.

The bad way

Originally, to get a single-server example working, I processed the data this way on the completion of each task:

1. Call redis.Get on the job data
2. Unmarshal the JSON data
3. Apply the changes the task made and increment the tasks-completed integer
4. Remarshal the updated JSON
5. Call redis.Set on the job data
6. Check the tasks-completed integer to know when all tasks are completed
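
In Go terms, that read-modify-write cycle looks roughly like this (the Job shape and key name are illustrative):

package worker

import (
	"context"
	"encoding/json"

	"github.com/redis/go-redis/v9"
)

// Job mirrors the JSON job document stored in Redis (illustrative shape).
type Job struct {
	Output         []string `json:"output"`
	TasksCompleted int      `json:"tasksCompleted"`
}

// completeTaskBadly runs the full read-modify-write cycle for every task.
func completeTaskBadly(ctx context.Context, rdb *redis.Client, key, result string) error {
	// 1-2. Get the whole job document and unmarshal it.
	raw, err := rdb.Get(ctx, key).Result()
	if err != nil {
		return err
	}
	var job Job
	if err := json.Unmarshal([]byte(raw), &job); err != nil {
		return err
	}

	// 3. Apply the task's changes and bump the counter.
	job.Output = append(job.Output, result)
	job.TasksCompleted++

	// 4-5. Remarshal and overwrite the whole document. Two workers doing
	// this at the same time will clobber each other's updates.
	updated, err := json.Marshal(job)
	if err != nil {
		return err
	}
	return rdb.Set(ctx, key, updated, 0).Err()
}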

If you're running a single worker that handles jobs synchronously, this could be seen as "good enough", but in multiple ways it is not. There's no logical reason to repeat this cycle for every task, adding overhead each time. It also completely falls apart once you distribute your tasks, because concurrent read-modify-write cycles let workers overwrite each other's data.

The better way (with RedisJSON)

RedisJSON is a Redis module that makes working with JSON data easier. While it's not required, its features help immensely with what we're trying to achieve.

1. Have an "output" section in the job data
2. Calculate how many batches of tasks there will be in a job
3. Use JSONArrAppend to add the finished data
4. Either calculate how many are in the array or use JSONNumIncrBy to add to a "total completed" value, to know when all tasks are completed

By moving as much as we can to appends, we eliminate the risk of overwrites and the overhead of marshalling/unmarshalling JSON. For a non-JSON approach, you could leverage Redis sets or use the native IncrBy/Append commands just as you would their RedisJSON equivalents.
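
Here's a minimal sketch of that completion path, issuing the RedisJSON commands through go-redis's generic Do call (the key name and paths are illustrative):

package worker

import (
	"context"
	"encoding/json"

	"github.com/redis/go-redis/v9"
)

// completeTask appends one task's result and bumps the completed counter
// on the Redis side, so concurrent workers never overwrite each other.
func completeTask(ctx context.Context, rdb *redis.Client, key string, result any) (int64, error) {
	payload, err := json.Marshal(result)
	if err != nil {
		return 0, err
	}

	// JSON.ARRAPPEND pushes the result onto the job's output array in place.
	if err := rdb.Do(ctx, "JSON.ARRAPPEND", key, "$.output", string(payload)).Err(); err != nil {
		return 0, err
	}

	// JSON.NUMINCRBY bumps the completed counter and returns the new value;
	// a legacy-style path (no leading $) makes RedisJSON reply with a plain
	// number. The worker that sees the batch total knows it's the last one.
	return rdb.Do(ctx, "JSON.NUMINCRBY", key, ".tasksCompleted", 1).Int64()
}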

As for how I access the JSON data, I use JSONPath (https://redis.io/docs/stack/json/path/) in the append and get queries. You can also use the RediSearch plugin with indexes, which I'm looking at as well.

Queues & Message Broker

Alright, so I've got how I want the queue to be managed. Now how do I signal to the workers that I want to process something? This is where a queue system and message broker come into play. For Go, there are a few libraries that handle this task for us on top of Redis and other supported queue systems, most notably Machinery and Asynq.

I've had a positive experience with both Machinery and Asynq; however, with Asynq in particular there is a concern you need to be aware of:

Asynq is not yet feature complete, and an update can require major changes to your code, depending on what is changed. This can mean more frequent code changes in your production environment, with more chances to go wrong. With that said, Asynq does work well and you shouldn't exclude it from your arsenal. But if you're building something for high scale, you should consider Machinery instead if you want to rely on a library for most of the tasks.
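
To give a feel for the library route, here's a minimal Asynq sketch. The task type name and payload are made up, and in a real deployment the producer (backend) and consumer (worker) would be separate binaries:

package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

const typeProcessJob = "job:process" // hypothetical task type

func main() {
	redisOpt := asynq.RedisClientOpt{Addr: "localhost:6379"}

	// Producer side (the backend): push a task onto the Redis-backed queue.
	client := asynq.NewClient(redisOpt)
	defer client.Close()
	if _, err := client.Enqueue(asynq.NewTask(typeProcessJob, []byte(`{"jobId":"123"}`))); err != nil {
		log.Fatal(err)
	}

	// Consumer side (a worker node): register a handler and start processing.
	srv := asynq.NewServer(redisOpt, asynq.Config{Concurrency: 10})
	mux := asynq.NewServeMux()
	mux.HandleFunc(typeProcessJob, func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing %s", t.Payload())
		return nil
	})
	if err := srv.Run(mux); err != nil {
		log.Fatal(err)
	}
}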

If you're more of the DIY type, you can create your own queue service with RabbitMQ, AWS SQS, Kafka, Pub/Sub, etc. It's less complicated than it sounds, and there's plenty of tutorials online to guide you through the process.

Serverless with Fargate

My idea for the worker nodes was to have seamless, instant, automatic scaling. If the workers are overloaded with tasks, spin up new ones that can join the queue and handle the jobs with ease. For this, I have selected AWS Fargate, a serverless compute engine. Because Go is a compiled language, we can simply build the app and run it anywhere. No fuss or dependency installs required. Neat.

Before we can deploy our app on Fargate, we first need to package it up in a neat Docker container. There are various ways to package up your Go app in Docker, but this is the simplest way, via a multi-stage Dockerfile:

FROM golang:1.19-alpine AS builder

WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 gives a static binary that runs cleanly on a bare Alpine image
RUN CGO_ENABLED=0 go build -o app

FROM alpine:latest AS prodapp
RUN apk add --no-cache ca-certificates
# Copy only the compiled binary out of the builder stage
COPY --from=builder /app/app .

CMD ["./app"]

In my case, I'm going to be building the image for the ARM64 architecture. Why ARM, you may ask? Building for ARM lets us use AWS's Graviton2 processors, which AWS says deliver up to 40% better price performance than comparable x86 instances. If we ever need to do things on-premises, we could also use Raspberry Pis for cheap hardware.

docker build --platform linux/arm64 --target prodapp -t worker-app:latest .

Once that's done, the built image gets pushed to ECR. This is AWS's version of Docker Hub, not much to say here. Then a deployment is run to replace the Fargate instances with the new image.
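
For reference, the push-and-redeploy amounts to a few commands; the region, account ID, and cluster/service names below are placeholders:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag worker-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/worker-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/worker-app:latest

# Roll the Fargate service onto the freshly pushed image
aws ecs update-service --cluster worker-cluster --service worker-service --force-new-deployment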

The great thing about Fargate is the ability to just spawn new workers on the fly, instantly. You don't need to spin up EC2 instances, wait for updates, deploy/run the app, etc. Because it's all neatly packed into a Docker container, it can spin up and be ready in record time. And with Fargate, all the instances are managed by AWS. There's no need to worry about OS-level things, because it's not in your control.
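
As a sketch of what those scaling rules can look like (cluster and service names are placeholders), ECS services on Fargate hook into Application Auto Scaling, for example target-tracking on average CPU:

aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/worker-cluster/worker-service \
  --min-capacity 1 --max-capacity 20

aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/worker-cluster/worker-service \
  --policy-name worker-cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{"TargetValue": 70.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"}}'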

Once development in the project gets to a later stage, the build and deploy process will be automated with things like pipelines and CodeDeploy.

Conclusion

Building apps for high agility and performance is no easy task, and I'm not even finished with the entire architecture. There's more that went into it, but the meat of how I'm getting a rapidly-developed product ready to market is all here. As time goes on things will get more refined, and I'll be sure to update this post on what I've learned about maintaining high-scale applications.

In addition, when building high-scale apps, language and stack aren't everything. Your app will only be as performant as you make it. If your code is slow by design, you will inevitably run into scalability issues. I know that when shipping an MVP you want to get something out rapidly, but you need a baseline for what you foresee the product becoming. With that said, things can happen. One day you launch to 10 users; overnight you're being smacked with millions of requests from hundreds of thousands of users all over the world. No matter the case, you always need to be ready and build your application to withstand the heat.

I don't have too much experience in this category, but I'm learning more every day, and I feel confident enough to share some of what I've been doing while building an application on a well-architected foundation. I hope this gave you some food for thought, and for those in a similar situation to mine: I salute you, and best of luck making your app succeed at scale.