MagicBook migration from Heroku to AWS

AWS May 27, 2020


The premise

In the last quarter of 2019, 56k.cloud and MagicBook started a conversation about how we could help them. They were looking to improve their cloud architecture as their launch date drew closer.

The decision was made to move from a hybrid cloud architecture, split between Heroku and AWS, to AWS only. The assumption was that the hybrid approach would only become harder to manage over time. It was also becoming quite expensive for what it offered.


Finding the right solution

AWS environment setup

The two main requirements for the AWS environment and accounts were:

  • Fully automated. For this, Terraform was the clear choice, given 56k.cloud's experience with it and past successful implementations.
  • Set up as a multi-account AWS organisation that follows Amazon's best practices.

The framework that best fit these two requirements is Gruntwork's Reference Architecture. Since 56k.cloud has a long-running partnership with Gruntwork, it was proposed to MagicBook and accepted.

Application architecture

For the application architecture, MagicBook's team had some ideas in mind. The application is split into multiple services, all based on the Django web framework. But it made sense to split them even further, into two categories:

  1. Django-based, long-running web applications.
  2. One-off photo-processing tasks, whose code would be moved to AWS Lambda functions. This provides greater flexibility and cost optimisation, given the on-demand nature of Lambda.

For the services in the first category, the recommendation was to wrap them in Docker containers and run them on AWS ECS Fargate. This proved to be a good choice in terms of both ease of operation and auto-scaling.
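As an illustration, the container image for one of these Django services could look roughly like the following. This is a minimal sketch: the project name `magicbook`, the port, and the use of gunicorn are assumptions for illustration, not details from the actual project.

```dockerfile
# Minimal sketch of a Django service image for ECS Fargate.
# "magicbook" as the Django project name and gunicorn as the
# application server are assumptions, not the project's real values.
FROM python:3.8-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Fargate routes traffic to the container port declared in the task definition.
EXPOSE 8000
CMD ["gunicorn", "magicbook.wsgi:application", "--bind", "0.0.0.0:8000"]
```

Keeping the dependency install in its own layer makes CI builds noticeably faster, since the expensive `pip install` step is only re-run when `requirements.txt` changes.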

To tie everything together, AWS API Gateway seemed to be the best fit. It would sit in front of both the ECS and Lambda applications, exposing them as a single API to the desktop and mobile applications.

Behind the scenes, all the data would be stored and cached in Amazon RDS for PostgreSQL and Amazon ElastiCache for Redis clusters. AWS SES and SNS provide the inter-application communication, and AWS Cognito takes care of user authentication.

Of course, everything needs monitoring, which is provided by AWS CloudWatch Logs and Alarms. They are integrated with Grafana for nicer graphs and with Slack for alarm notifications.

Last but not least, the "static" websites would be hosted on AWS S3 with CloudFront as the CDN.
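In Terraform terms, the static-site setup boils down to something like the sketch below. It is simplified, and the bucket and origin names are hypothetical placeholders:

```hcl
# Simplified sketch: a static website on S3 served through CloudFront.
# Bucket and origin names are hypothetical.
resource "aws_s3_bucket" "static_site" {
  bucket = "magicbook-static-site"
}

resource "aws_cloudfront_distribution" "cdn" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket.static_site.bucket_regional_domain_name
    origin_id   = "s3-static-site"
  }

  default_cache_behavior {
    target_origin_id       = "s3-static-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

A real deployment would add a custom domain and an ACM certificate, but the shape stays the same: S3 holds the files, CloudFront caches and serves them.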

Automation

As mentioned, automation was a key requirement of the project. The main ingredient is GitLab with its self-hosted GitLab runners. All the code repositories, including the Terraform ones, are hosted on MagicBook's GitHub account. They are mirrored into MagicBook's GitLab account, to which several GitLab runners are connected.

A runner is deployed into each AWS account (environment), using Packer to build a custom AMI and Terraform to deploy it. Whenever a git commit is pushed to GitHub, it is mirrored into GitLab, which triggers a pipeline on the appropriate runner depending on the environment.
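The runner AMI build can be sketched as a minimal Packer template along these lines. The region, instance type, and base image are placeholders, not the project's actual values:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "instance_type": "t3.small",
      "ssh_username": "ubuntu",
      "ami_name": "gitlab-runner-{{timestamp}}",
      "source_ami_filter": {
        "filters": {
          "name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"
        },
        "owners": ["099720109477"],
        "most_recent": true
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash",
        "sudo apt-get install -y gitlab-runner"
      ]
    }
  ]
}
```

Terraform then launches an instance from the resulting AMI in each account, and the runner registers itself against MagicBook's GitLab project.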

There are three different types of CI/CD processes defined on the GitLab runners:

  1. Infrastructure pipelines. Any commit to the Terraform (Gruntwork) code triggers a special pipeline that applies the changes to the proper AWS environment. This way infrastructure changes are applied in a centralised and accountable manner.
  2. ECS application pipeline. Commits to the application code trigger a pipeline that packages the app into a Docker container and pushes it to AWS ECR. Upon a successful build, a new deployment on AWS ECS is started to pick up the new container version.
  3. Lambda application pipeline. These use the Serverless Framework to manage the Lambda applications on AWS. The framework packages the code into the proper format, uploads it, and updates the functions.

Implementation

Everything mentioned above was deployed using Terragrunt, a thin wrapper around Terraform built by Gruntwork, which their Reference Architecture is designed around.
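In practice, each component lives in its own folder per account and environment, with a small `terragrunt.hcl` pointing at a shared module, roughly like this (the module source, version, and inputs are illustrative placeholders):

```hcl
# Illustrative terragrunt.hcl for one component in one environment.
# The module source, version pin and inputs are placeholders.
terraform {
  source = "git::git@github.com:gruntwork-io/terraform-aws-ecs.git//modules/ecs-service?ref=v0.20.0"
}

# Pull in settings shared across the whole account: remote state
# backend, provider configuration, and so on.
include {
  path = find_in_parent_folders()
}

inputs = {
  service_name            = "magicbook-web"
  desired_number_of_tasks = 2
}
```

This layout keeps the Terraform code itself DRY: the same versioned module is reused across environments, and only the small per-environment input blocks differ.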

Some changes needed to be made to the architecture to simplify it a bit and adapt it to MagicBook's use case. A few custom modules also had to be written for resources that Gruntwork didn't cover, for example Grafana and the GitLab runners.

Having these great tools handy, the whole project was finished in approximately four months, with just one 56k.cloud engineer working full-time on it!


Conclusion

The end result is an automated and flexible infrastructure that can safely run MagicBook's applications.

As estimated at the beginning, the costs are lower compared to Heroku. This is largely due to the on-demand nature of Lambda and the use of ECS's auto-scaling capabilities. The migration to AWS also allowed the MagicBook team to optimise the application code, further reducing costs.

To put it in numbers, the monthly invoices are 35% lower.


But we consider the decisive factor in this project's success to be the openness and trust that the MagicBook team showed us. This led to great collaboration and communication, and we thank them for it!

"Migrating all our services to the AWS Gruntwork reference architecture allowed us to make our AI-based photo book design system even faster while reducing its infrastructure costs."
-- Jean-Pierre Gehrig, VP of Engineering at MagicLabs.ai


Find out more about 56K.Cloud

We love Cloud, IoT, Containers, DevOps, and Infrastructure as Code. If you are interested in chatting, connect with us on Twitter or drop us an email: info@56K.Cloud. We hope you found this article helpful. If there is anything you would like to contribute, or you have questions, please let us know!

Dan Achim

Site Reliability Engineer
