Dan Achim

MagicBook migration from Heroku to AWS

The premise

In the last quarter of 2019, 56k.cloud and MagicBook started a conversation about how we could help them. They were looking to improve their cloud architecture as their launch date drew closer.

The decision was made to move from a hybrid cloud architecture, split between Heroku and AWS, to AWS only. The assumption was that the hybrid approach would only become more difficult to manage over time, and it was already becoming quite expensive for what it offered.

Finding the right solution

AWS environment setup

The two main requirements for the AWS environment and accounts were:

  • Fully automated. For this, Terraform was the clear choice, given 56k.cloud's experience with it and past successful implementations.
  • Set up as a multi-account AWS organisation, following Amazon's best practices.

The framework that best fit these two requirements is Gruntwork's Reference Architecture. Since 56k.cloud has a long-running partnership with Gruntwork, it was proposed to MagicBook and accepted.
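
To give a feel for how the multi-account organisation is used day to day, below is a minimal Python (boto3) sketch of assuming an IAM role in one of the member accounts. The account ID, role name and region are placeholders for illustration, not MagicBook's actual values, and the real roles are of course created by the Terraform code.

```python
import boto3

# Placeholder values -- the real account IDs and role names are defined
# in the Terraform (Gruntwork) configuration, not hard-coded like this.
MEMBER_ACCOUNT_ID = "123456789012"
ROLE_NAME = "deployer"


def session_for_account(account_id: str, role_name: str,
                        region: str = "eu-west-1") -> boto3.Session:
    """Assume an IAM role in a member account of the AWS organisation
    and return a boto3 session scoped to that account."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
        RoleSessionName="cross-account-example",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
        region_name=region,
    )


if __name__ == "__main__":
    session = session_for_account(MEMBER_ACCOUNT_ID, ROLE_NAME)
    print(session.client("sts").get_caller_identity()["Arn"])
```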

Application architecture

For the application architecture, MagicBook's team had some ideas in mind. The application is split into multiple services, all based on the Django web framework. But it made sense to split them even further, into two categories:

  • Django based, long-running web applications.
  • Short-lived tasks that can run as AWS Lambda functions (a minimal handler sketch follows below). This provides greater flexibility and cost optimisation given the on-demand nature of Lambda.
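
To illustrate the second category, an AWS Lambda application is essentially a small Python handler that AWS invokes on demand, so nothing runs (or costs money) between invocations. The event shape and task below are hypothetical, not one of MagicBook's actual services.

```python
import json


def handler(event, context):
    """Hypothetical short-lived task invoked on demand
    (for example through API Gateway or an SNS message)."""
    payload = json.loads(event.get("body") or "{}")
    page_id = payload.get("page_id")

    # ... perform the one-off piece of work here ...

    return {
        "statusCode": 200,
        "body": json.dumps({"page_id": page_id, "status": "processed"}),
    }
```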

For the services that fall into the first category, the recommendation was to wrap them in Docker containers and run them on AWS ECS Fargate. This proved to be a good choice in terms of ease of operation and auto-scaling.

To tie everything together, AWS API Gateway seemed to be the best fit. It would sit in front of both the ECS and Lambda applications, exposing them as a single API for the desktop and mobile applications.

Behind the scenes, all the data would be stored and cached in AWS RDS PostgreSQL and AWS ElastiCache Redis clusters. AWS SES and SNS provide the inter-application communication, and AWS Cognito takes care of user authentication.
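
On the Django side this is mostly a matter of configuration. A simplified settings.py excerpt could look like the one below; the environment variable names are placeholders, and the built-in Redis cache backend assumes Django 4.0 or newer.

```python
# settings.py (excerpt) -- endpoints and credentials are placeholders and
# would normally be injected through the environment by ECS or Lambda.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ["DB_NAME"],
        "USER": os.environ["DB_USER"],
        "PASSWORD": os.environ["DB_PASSWORD"],
        "HOST": os.environ["DB_HOST"],   # RDS PostgreSQL endpoint
        "PORT": "5432",
    }
}

CACHES = {
    "default": {
        # Built-in Redis backend (Django 4.0+)
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": os.environ["REDIS_URL"],  # ElastiCache Redis endpoint
    }
}
```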

Of course, everything needs monitoring, which is provided by AWS CloudWatch Logs and Alarms. These are integrated with Grafana for nicer graphs and with Slack for alarm notifications.
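
As a rough idea of what such an alarm looks like, the snippet below creates one with boto3. The metric, thresholds, names and SNS topic are purely illustrative; in the actual project the alarms are managed by Terraform rather than ad-hoc scripts.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical alarm: notify the team when an ECS service runs hot.
cloudwatch.put_metric_alarm(
    AlarmName="api-service-high-cpu",
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "example-cluster"},
        {"Name": "ServiceName", "Value": "example-api-service"},
    ],
    Statistic="Average",
    Period=300,               # five-minute windows
    EvaluationPeriods=2,      # two consecutive breaches
    Threshold=80.0,           # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    # The SNS topic fans the notification out, e.g. to Slack.
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:alerts"],
)
```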

Last but not least, the "static" websites would be hosted on AWS S3 with CloudFront as the CDN.
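
Deploying such a site then boils down to uploading the built files to S3 and invalidating the CloudFront cache. A minimal sketch, with placeholder bucket and distribution names:

```python
import time
import mimetypes
from pathlib import Path

import boto3

BUCKET = "example-static-site"        # placeholder bucket name
DISTRIBUTION_ID = "E1234567890ABC"    # placeholder CloudFront distribution

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Upload the built site, preserving content types so browsers
# render the HTML/CSS/JS correctly.
for path in Path("build").rglob("*"):
    if path.is_file():
        key = str(path.relative_to("build"))
        content_type, _ = mimetypes.guess_type(str(path))
        s3.upload_file(
            str(path), BUCKET, key,
            ExtraArgs={"ContentType": content_type or "binary/octet-stream"},
        )

# Invalidate the CDN cache so CloudFront serves the new version.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
```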

Automation

As mentioned, automation was a key requirement of the project. The main ingredient is GitLab with its self-hosted GitLab runners. All the code repositories, including the Terraform ones, are hosted on MagicBook's GitHub account. They are mirrored into MagicBook's GitLab account, to which several GitLab runners are connected.

A runner is deployed into each AWS account (environment) using Packer to build a custom AMI and Terraform to deploy it. Whenever a git commit is pushed to GitHub, it is mirrored into GitLab. This triggers a pipeline on the appropriate runner, depending on the environment.

There are three different types of CI/CD processes defined on the GitLab runners:

  • Infrastructure pipelines. Any commit to the Terraform (Gruntwork) code triggers a special pipeline that applies it to the proper AWS environment. This way infrastructure changes are applied in a centralised and accountable manner.
  • Container pipelines. Commits to the Django web applications build new Docker images. Upon a successful build, a new deployment on AWS ECS is started to pick up the new container version (see the sketch below).
  • Lambda pipelines. These use a framework to manage the Lambda applications on AWS. The framework packages the code into the proper format, uploads it and restarts the app.
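
For the container pipelines, the "start a new deployment on AWS ECS" step essentially comes down to a single API call. A minimal boto3 sketch, with placeholder cluster and service names:

```python
import boto3

ecs = boto3.client("ecs")

# Force ECS to start a new deployment of the service so the tasks are
# replaced with ones running the freshly pushed container image.
response = ecs.update_service(
    cluster="example-cluster",      # placeholder cluster name
    service="example-api-service",  # placeholder service name
    forceNewDeployment=True,
)
print(response["service"]["deployments"][0]["status"])
```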

Implementation

Everything mentioned above was deployed using Terragrunt, a thin wrapper around Terraform built by Gruntwork and used throughout their Reference Architecture.

Some changes had to be made to the architecture to simplify it a bit and adapt it to MagicBook's use case. A few custom modules also had to be written for resources Gruntwork does not provide, for example Grafana and the GitLab runners.

With these great tools at hand, the whole project was finished in approximately four months, with just one 56k.cloud engineer working full-time on it!

Conclusion

The end result is an automated and flexible infrastructure that can safely run MagicBook's applications.

As estimated at the beginning, costs are lower compared to Heroku. This is thanks to the on-demand nature of Lambda and to ECS's auto-scaling capabilities. The migration to AWS also allowed the MagicBook team to optimise the application code, further reducing costs.

To put it in numbers, the monthly invoices are 35% lower.

But we consider the decisive factor in this project's success to be the openness and trust the MagicBook team showed us. This led to great collaboration and communication, and we thank them for it!


Find out more about 56K.Cloud

We love Cloud, IoT, Containers, DevOps, and Infrastructure as Code. If you are interested in chatting, connect with us on Twitter or drop us an email at info@56K.Cloud. We hope you found this article helpful. If there is anything you would like to contribute or if you have questions, please let us know!


Dan Achim

Software Engineer