Software Engineer Theo Gregory shares how we use serverless to speed up deployment.
Our stack at Freetrade is for the most part serverless functions, as we’ve discussed before.
Being serverless has enabled all of us to share the DevOps hat and play a role in how our code makes it out into the open.
This means that, until a recent hire, we achieved all of this without the aid of a Site Reliability Engineer (SRE). Having a dedicated SRE won't stop us from getting our hands dirty; instead, we'll do so with greater foresight, experience and technical know-how on our side.
When we began with only 2 engineers on our team, we chose Firebase as the tool to get our product off the ground.
At the time, it was the perfect tool for the job.
Today, we maintain 317 serverless functions, each with a crucial part to play in powering your app behind the scenes.
As the number of functions grew, we clocked that Firebase’s gears were beginning to groan.
Deployment times were climbing in tandem with the pace of output from our growing crack team of devs. Dev cycles were slowing, and hotfixes and releases were taking longer to reach production.
In this codebase, which we will refer to as the monolith, we label functions that we want to be deployed using a selection of Firebase SDK helper methods. When we call `firebase deploy`, our whole codebase is bundled up and, for each occurrence of these helper methods, deployed as a Google Cloud Function.
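As a rough sketch of that pattern (the function names here are hypothetical, not our actual functions), the monolith's entry point declares each function with a `firebase-functions` helper, and `firebase deploy` turns every such export into a Cloud Function:

```typescript
// functions/src/index.ts — each export declared via a firebase-functions
// helper becomes its own Cloud Function, but every one of them ships
// with the entire bundled codebase.
import * as functions from 'firebase-functions';

// An HTTPS-triggered function.
export const healthCheck = functions.https.onRequest((req, res) => {
  res.status(200).send('ok');
});

// A Firestore-triggered function. It only needs this handler, yet the
// whole bundle is deployed alongside it.
export const onOrderCreated = functions.firestore
  .document('orders/{orderId}')
  .onCreate(async (snapshot, context) => {
    const order = snapshot.data();
    console.log(`New order ${context.params.orderId}`, order);
  });
```

This is what makes each "independent" function carry all of our source code, as described below.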
Each “independent” function contained all of our source code.
Then in February, having been hard at work implementing our solution (detailed below), we had our first experience of Firebase timing out part-way through our deployment in UAT (the User Acceptance Testing version of the app). We had anticipated that this day might come, and thanks to that foresight we were braced and prepared.
A monorepo is a single repository used to store multiple independent projects. Using a monorepo is not so much the solution itself as a means to support it. While monorepos are the subject of much debate in engineering circles, as you'll see, one provides us with precisely what we need right now.
Using Firebase was a great approach for a fledgling startup, but the control that it abstracted away left us unable to resolve our issue with our current tooling. Further, though slightly tangentially, features available in Google Functions were often not supported through Firebase, such as declaring whether a function should retry on failure.
We decided that each function should be wholly responsible for its own deployment flow, yet we should be able to easily develop code that could be shared between functions.
To develop across multiple packages at the same time would be costly were these hosted in different repositories, and so the choice to use a monorepo became obvious.
Given we were using a monorepo to develop primarily serverless functions, the term mono-function was coined.
While much of our code is written in TypeScript and hosted on Google Functions, any mono-function has the independence to choose to be written in Python and hosted on AWS as an EC2 instance.
Getting our mono-functions out into production can be split into three main parts: development, build, and deployment.
“With great power comes great responsibility” - The late, great Stan Lee.
When writing code, it’s important we have a quick feedback loop should we wish to make incremental changes in many packages at once, and remain sure our code can compile. For this, we use Lerna.
Lerna saves each local package from having to copy every change from its neighbours into its `node_modules` folder at build time. Instead, each package and mono-function need only be built once, and Lerna will manage symlinks between each dependency and its dependents to make them appear as installed.
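A minimal sketch of that setup, assuming a layout with shared packages and mono-functions side by side under `packages/` (the names are illustrative, not our actual structure), is a `lerna.json` at the repository root:

```json
{
  "version": "independent",
  "npmClient": "yarn",
  "packages": ["packages/*"]
}
```

With this in place, `lerna bootstrap` symlinks a shared package such as `packages/shared-utils` into the `node_modules` of each mono-function that depends on it, and `lerna run build` builds all packages in topological order, so a change to a shared package is immediately visible to its dependents without any copying.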
Rollup also provides us with tree-shaking, which means we can deploy the most streamlined code possible. Of course, with each mono-function being its own project, even if we were to deploy the whole codebase into the function it still wouldn't be very large. Tree-shaking is more useful when the mono-function depends on a bulky external library.
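A hedged sketch of what a per-mono-function Rollup config might look like (the paths and plugin selection are assumptions, not our exact setup):

```javascript
// rollup.config.js for a single mono-function.
import typescript from '@rollup/plugin-typescript';
import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  input: 'src/index.ts',
  output: { file: 'dist/index.js', format: 'cjs' },
  // Tree-shaking is on by default: only the parts of each dependency
  // that this function actually imports end up in dist/.
  plugins: [resolve(), commonjs(), typescript()],
};
```

Because each mono-function bundles only what it imports, a bulky library pulled in by one function never inflates the artifacts of the others.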
Finally, now we have code that's ready to work its magic, Terraform steps up to the plate to deploy it to the cloud. Of all the perks that Terraform provides here, the greatest is its ability to determine which mono-functions have changed since the last deployment and which have not.
Whereas with the monolith we are forced to deploy every function every time, not only are deployments faster, but they are skipped entirely for those that haven’t changed.
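One way to get that change detection, sketched here with illustrative names rather than our production config, is to zip each mono-function's build output and derive the uploaded object's name from its hash, so Terraform only sees (and redeploys) a function whose built artifact has actually changed:

```hcl
# Zip the built output of one mono-function.
data "archive_file" "fn_send_receipt" {
  type        = "zip"
  source_dir  = "${path.module}/dist"
  output_path = "${path.module}/fn-send-receipt.zip"
}

# The object name embeds the archive's hash, so an unchanged build
# produces an unchanged name and Terraform skips the deployment.
resource "google_storage_bucket_object" "fn_send_receipt" {
  name   = "fn-send-receipt-${data.archive_file.fn_send_receipt.output_md5}.zip"
  bucket = "my-functions-bucket"
  source = data.archive_file.fn_send_receipt.output_path
}

resource "google_cloudfunctions_function" "fn_send_receipt" {
  name                  = "fn-send-receipt"
  runtime               = "nodejs14"
  entry_point           = "handler"
  trigger_http          = true
  source_archive_bucket = google_storage_bucket_object.fn_send_receipt.bucket
  source_archive_object = google_storage_bucket_object.fn_send_receipt.name
}
```

On `terraform apply`, functions whose archive hash is unchanged produce no diff at all, which is exactly the skipping behaviour described above.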
What a beauty ✨
Our deployment speeds now hark back to the early days when we were just in Alpha. Take a look at some recent figures:
That’s around 8.67x faster! And while these figures may not wow you on the face of it, remember that this magnitude also applies every time our devs want to deploy to test something (which as you might imagine, happens quite a lot).
If we take a conservative estimate and say that each of our engineers deploys twice a week for a year...
24 engineers * 1h 1m 29s * 2 deploys * 52 weeks = 9,207,744 seconds. That's 106.57 days of effort saved per year!
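The arithmetic above checks out (treating the new, much faster deploy time as negligible):

```typescript
// Back-of-envelope check of the deployment-time saving quoted above.
const engineers = 24;
const oldDeploySeconds = 1 * 3600 + 1 * 60 + 29; // 1h 1m 29s = 3689s
const deploysPerWeek = 2;
const weeksPerYear = 52;

const secondsSaved =
  engineers * oldDeploySeconds * deploysPerWeek * weeksPerYear;
const daysSaved = secondsSaved / (60 * 60 * 24);

console.log(secondsSaved); // 9207744
console.log(daysSaved.toFixed(2)); // 106.57
```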
Here at Freetrade, we’re proud of what we work on, but we take great care to not let that blind us from lessons that could be learned from what we have done, and uncovering where improvements can be made.
This approach isn't perfect in every way. For example, it demands a fair amount of extra code every time we wish to create a new function. The tradeoff between replicated boilerplate and customisability is a fine line to walk, and we're ever experimenting with where we feel that line should be.
In the meantime, to save ourselves the torment of manually writing such boilerplate, we knocked together some Yeoman scripts to generate it all for us.
It is important to also note that we do not hold all of our serverless functions within one project. If you have followed our previous blog posts, you’ll know that on December 12th 2019 we launched our Invest platform, whose code is held in its own independent project.
Our first project at Freetrade, the Client Platform, is responsible for all other client data, and is the entry point for any interaction from the apps. It is important that these concerns are separated, so that we may more easily reason about the state and behaviour of our systems.
As domains grow, we will create new projects to ensure that one does not become the single point of failure for all our teams.
In the not too distant future, we may see the introduction of our very own Growth or Discover platforms, to more easily separate the concerns that are owned by these two talented teams.
Last week we held a retro on mono-functions, and have now aggregated the pain points that we have uncovered over time. This way we can tackle the remaining chinks in the armour one by one, and keep track of any progress made upon them.
With the upcoming introduction of Freetrade Time, where engineers can take each Friday afternoon to tackle projects that bring that little extra zing to our customers, it’s hard to imagine such a hitlist will survive long!
P.S. did we mention we're hiring?
The views expressed above are those of community members and do not reflect the views of Freetrade. It is not investment advice and we always encourage you to do your own research.