Building Freetrade

A day in the life of a Freetrade QA engineer

Achchu Ganeshanathan gives an inside glimpse into what makes the Freetrade app run smoothly.

Tuesday mornings always bring lots of coffee and a flurry of Slack messages. That’s how ‘release day’ (when we release our updated apps and backend software to our production environments) begins for a QA engineer at Freetrade.

This year has brought about some unexpected changes at the company, with accelerated growth partly due to GameStop-inspired events outside of our control. 

Tuesdays and Thursdays are our release days. Tuesday is the more prominent of the two, as we release our platforms as well as our apps on this day; on Thursdays we release only our platforms, with new software changes going to our production environments.

Work for this is split among the QA team members. At the moment, there are only two of us at Freetrade in this function (but we’re hiring!). 

I take responsibility for the platform release, covering our client platform and Cloud Composer releases. My first pit stop is to check whether our overnight deployments to the test environments have gone as planned.

Here’s what an ideal day looks like: 


A sea of green deploy messages on Slack tells me my morning coffee may be stale, but the deploy jobs are piping hot and ready to go.

Once I’ve confirmed the overnight release branch was deployed successfully to our test environment, I can check the status of the automation runs. The automation tests are essentially a suite of regression test cases that run periodically on different OSes and devices. The suite consists of tests we deem crucial to the functioning of our applications: logging in, signing up, searching for stocks, buying and selling stocks (including different order types), top-ups, withdrawals and so on.
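As an illustrative sketch (not Freetrade’s actual tooling), a release gate over a suite like this might simply check that every critical-path test passed in the latest run. The test names and result format here are hypothetical:

```python
# Hypothetical release gate: block the release if any critical-path
# regression test did not pass. Test names are examples only.

CRITICAL_TESTS = {
    "test_login",
    "test_signup",
    "test_search_stocks",
    "test_buy_market_order",
    "test_sell_limit_order",
    "test_top_up",
    "test_withdrawal",
}

def release_blocked(results):
    """Return the critical tests that did not pass in this run.

    `results` maps test name -> status string; a test missing from
    the run counts as a blocker too.
    """
    return sorted(t for t in CRITICAL_TESTS
                  if results.get(t, "missing") != "passed")
```

An empty return value means the critical path is green and the run can be noted as passing in the release documents.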

The automation runs help us gain confidence that the code merged since the last release has not broken any crucial functionality. (More detail on automation is coming soon in a post by our senior test automation engineer, so stay tuned.)

The automation test runs are checked in detail to confirm they’ve run successfully. There are different types of automation runs based on market timings, e.g. UK market hours tests and US market open hours tests. If the tests have all passed, I note this in our release documents and prep them for sign-off. On top of the automation runs, I also manually run through some basic checks on both the iOS and Android platforms to ensure the impending release is safe to go out to customers.
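Splitting runs by market timing can be sketched as a simple scheduling check. The window boundaries below are assumptions for illustration (LSE and NYSE/Nasdaq hours, roughly in UTC), not Freetrade’s real schedule, and real code would need to handle daylight saving properly:

```python
# Illustrative only: pick which market-hours suites to run based on
# the current UTC time. Windows are approximations, not a real config.
from datetime import time

UK_MARKET = (time(8, 0), time(16, 30))    # LSE trading hours (approx UTC)
US_MARKET = (time(14, 30), time(21, 0))   # NYSE/Nasdaq hours (approx UTC)

def suites_to_run(now):
    """Return the names of the market-hours suites active at `now`."""
    suites = []
    if UK_MARKET[0] <= now <= UK_MARKET[1]:
        suites.append("uk_market_hours")
    if US_MARKET[0] <= now <= US_MARKET[1]:
        suites.append("us_market_open_hours")
    return suites
```

Mid-afternoon, when both markets are open, both suites would be due to run; outside market hours, neither is.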

After sign-off from me, our on-call engineer, and the compliance and operations teams, the release is ready for the on-call engineer to deploy to our production environments. In other words, the kraken is ready to be released.

As we all know, perfection is a fallacy and a 100% ideal day is near impossible. Below are the events, and the actions QA takes, when a Tuesday morning brings more issues than just stale coffee.


Realistic Tuesday events:


Sometimes the initial issue is that the release branch failed to deploy to the test environment the night before. This rarely happens, and when it does there is a genuine issue that needs to be investigated. As our deployments run automatically, the Slack message from the deploy bot is the first indication that the on-call engineer needs to look into the problem. Once the issue has been resolved, we deploy the release candidate again and start our release process a bit later than usual.

The next potential problem is the failure of an integration test. When this occurs, the QA team looks into the failure reason and then re-runs the integration tests. 90% of the time this resolves the problem.
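That “re-run first, escalate if it still fails” step can be sketched as a small retry wrapper. This is an illustrative stand-in, not Freetrade’s real test runner; `run_test` is whatever callable actually executes a single test:

```python
# Illustrative sketch of the re-run-before-escalating flow described
# above. `run_test` is a hypothetical callable returning True on pass.

def run_with_retry(test_id, run_test, max_attempts=2):
    """Return True if the test passes within max_attempts runs."""
    for attempt in range(1, max_attempts + 1):
        if run_test(test_id):
            return True
        print(f"{test_id} failed on attempt {attempt}")
    # Still failing after retries: treat as a potentially genuine
    # issue and investigate the code (or the test itself).
    return False
```

A flaky test passes on the second attempt and the release rolls on; a persistent failure drops through to the deeper investigation described next.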

If it doesn’t, we investigate the failure further to identify whether it is a genuine issue with the code or whether the test itself is faulty. If the test is faulty, we document it and attempt to fix it for the next release, while treating the area where the test failed with caution and running more targeted manual testing there. For example, if an integration test failed between two functions sending order information to create a contract note, and the test itself was deemed faulty, the QA engineer would run some manual tests in that area to ensure the functionality is intact.

If the release branch and integration tests are looking good, we move on to our automation tests. This is where we attempt to catch bugs using our end-to-end acceptance tests. If an automation test fails, we try to pinpoint where the failure occurred by looking at the test failure output logs, alongside the videos of the tests recorded in BrowserStack.

For instance, we may find an issue on an older device running an older OS version, caused by a change we have made to the UI: the test passes on all the newer devices with newer OS versions but fails on the older one. This is the kind of issue we attempt to catch with the automation tests before our new releases get into the hands of our customers.
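Spotting that kind of OS-version pattern across a device matrix can be reduced to a one-line grouping over the run results. The devices and result format below are made-up examples, not a real test matrix:

```python
# Illustrative sketch: group device-matrix failures by OS version to
# spot "only the older OS fails" patterns. Example data is hypothetical.

def failing_os_versions(run_results):
    """Return the set of OS versions with at least one failed run."""
    return {r["os_version"] for r in run_results if not r["passed"]}

runs = [
    {"device": "Pixel 7",    "os_version": "13", "passed": True},
    {"device": "Galaxy S22", "os_version": "12", "passed": True},
    {"device": "Pixel 3",    "os_version": "9",  "passed": False},
]
```

If only one old OS version shows up in the failure set, that narrows the investigation straight to the UI change on that platform.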

If the failure is genuine, we create a bug ticket for either the apps or the platform and, depending on the severity of the issue, ensure it is fixed before the release can be signed off. This could mean a release is delayed, but it’s better to do that and make sure the release is a high-quality one, as our priority is always to build the best product possible for our customers. Quality is one of our key company objectives, along with delivering great new features and other deliverables!
