Take a look at this chart. Normally when charts go down like that, it’s not good. But when you’re looking at server response times, it most definitely is. The lower the response times, the faster the app.
So, how did that happen?
Well, the explanation is a bit technical, so I sat down with one of our super-talented engineers, Kevin, to find out more about how he and the team did it. You can find the full video interview below, with details of the migration itself, but what follows is a brief overview.
When we first developed Resource Guru back in 2012, we decided to use Ruby on Rails. We made that choice for a number of reasons but, primarily, because of its great reputation when it comes to speed of development. We’ve achieved a lot with Rails over the years and continue to use it but, just recently, Kevin realised that we needed to break away from Ruby to speed up certain tasks. The weapon of choice he settled on was Go (golang). The key benefit of Go over Ruby is parallelism – where multiple tasks genuinely run at the same time (e.g., multiple threads on a multicore processor). This is not to be confused with concurrency, where tasks are merely interleaved on the CPU and only appear to run simultaneously.
It’s worth noting that Ruby MRI does actually support native threads but, due to the Global Interpreter Lock (GIL), parallelism isn’t possible in practice. The GIL is built directly into the interpreter and ensures that two threads belonging to the same Ruby process can never execute Ruby code in parallel – they can only take turns.
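To make the parallelism point concrete, here’s a minimal, self-contained Go sketch – the function names and numbers are our own illustration, not code from Resource Guru. It splits a CPU-bound sum across several goroutines, which the Go runtime schedules onto all available cores, something MRI’s GIL rules out for pure Ruby threads:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sum is an illustrative CPU-bound task: add up the integers in [lo, hi).
func sum(lo, hi int) int {
	total := 0
	for i := lo; i < hi; i++ {
		total += i
	}
	return total
}

func main() {
	// Goroutines are scheduled across every available core by default.
	fmt.Println("cores available:", runtime.NumCPU())

	const workers = 4
	results := make([]int, workers)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			// Each worker sums its own quarter of the range, in parallel
			// with the others; each writes to its own slice slot.
			results[w] = sum(w*1000000, (w+1)*1000000)
		}(w)
	}
	wg.Wait() // block until all workers have finished

	grand := 0
	for _, r := range results {
		grand += r
	}
	fmt.Println(grand) // prints 7999998000000
}
```

On a multicore machine the four workers really do run at the same time; the equivalent `Thread.new` code in MRI would interleave them on a single core.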
OMG – we need to change cloud provider!
After doing some gymnastics to get Go running on our existing cloud platform, Engine Yard with Amazon Web Services (AWS), Kevin realised that we really needed to move somewhere we could run Go in a native environment and generally get more control over our stack.
Moving cloud provider is not to be taken lightly. There’s a huge amount to consider and mistakes can be extremely costly. But, we knew this was a vital move to ensure Resource Guru performs as well as it can and is well-positioned for the future.
After a lot of careful consideration, we settled on Google Cloud Platform (GCP) which allows us to run applications in containers using Docker and Kubernetes. It’s this technology that enabled us to set up a native Go application and get the massive speed gains that you see in the chart above.
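To give a flavour of what running a Go app in a container involves, here’s a minimal, illustrative Dockerfile – the image versions, paths and binary name are examples, not our actual build configuration:

```dockerfile
# Build stage: compile a statically linked Go binary.
FROM golang:1.8 AS builder
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app .

# Runtime stage: ship only the compiled binary in a tiny base image.
FROM alpine:3.6
COPY --from=builder /app /app
CMD ["/app"]
```

Because the binary is self-contained, the resulting image is only a few megabytes, and Kubernetes can then schedule and scale copies of it across the cluster.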
So far, we have only switched a small part of our app over to Go but it’s an ongoing process. A lot of the bigger, more noticeable gains will be appearing in the coming weeks. We’re really excited about this move and the gains we’re making!
Why Google Cloud Platform?
Everything at Google, from Search to Gmail, runs in a Linux container. Each week they launch more than 2 billion container instances across their global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient scalability. What’s amazing about GCP is that we can take advantage of that same infrastructure.
Google also has a global network footprint, with over 75 points of presence across more than 33 countries, ensuring low latency and responsiveness. It’s recognised as one of the most technically competent companies in the world. So, this move makes Resource Guru more robust, more reliable, more secure and a heck of a lot faster.
What’s more, we were accepted onto the Google Cloud Platform for Startups programme which comes with $100,000 in credit and a whole bunch of other goodies. Google – we really can’t thank you enough! :)
For a more in-depth discussion check out Episode 35 of the ZADevChat podcast, where Kenneth Kalmer and Kevin McKelvin discuss the ins and outs of the migration.