Migrating "WiSaw" to serverless AWS Lambda.
Originally, the "WiSaw" backend was built on KOA2. See earlier blog posts about how it was done. At the time, it was an excellent choice.
Here are a few things the project benefited from by being built on KOA2:
It was super easy to get going.
KOA2 is a traditional middleware framework which implements the now-classical MVC (Model-View-Controller) architectural pattern. So, if you have ever used anything like Ruby on Rails, Django, or Express (there are plenty to pick from), you should at least be familiar with the high-level architecture, which by itself is a huge benefit.
It supported the new ES6/7 standards, specifically the async/await construct, which eliminates "callback hell" very elegantly (this is the feature we were most interested in).
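To illustrate (with a toy example, not code from the WiSaw backend), here is the same two-step fetch written in callback style and then with async/await:

```javascript
// Callback style: each dependent step nests one level deeper ("callback hell").
function getUserCb(id, cb) {
  setImmediate(() => cb(null, { id, name: 'user' + id }))
}
function getPostsCb(user, cb) {
  setImmediate(() => cb(null, [user.name + ':post1']))
}

getUserCb(1, (err, user) => {
  if (err) throw err
  getPostsCb(user, (err, posts) => {
    if (err) throw err
    console.log(posts)
  })
})

// async/await style: the same flow reads top-to-bottom, with plain try/catch
// available for error handling instead of per-callback err checks.
const getUser = id => Promise.resolve({ id, name: 'user' + id })
const getPosts = user => Promise.resolve([user.name + ':post1'])

async function main() {
  const user = await getUser(1)
  const posts = await getPosts(user)
  return posts
}
```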
It was easy to autoscale by hosting it on AWS Elastic Beanstalk.
Here are a few things we didn't like too much and thought would be nice to improve eventually:
The runtime cost: even when there was no traffic to the application at all, we still had to pay for at least 1 AWS instance. This is how Elastic Beanstalk is designed to work: it always maintains at least 1 instance, adds more instances as needed based on demand, and there is nothing you can do about it. Maybe it would not be such a big deal if we were "burning" VC cash, but since we are a non-profit, looking for cost optimization opportunities is a big deal for us.
After all, KOA2 is still a traditional server-side monolith. We wanted to be able to deploy different parts of the application without doing a complete system build, which turned out to be somewhat problematic in KOA.
The world keeps moving on, and we realized -- it's time to rewrite our backend.
We wanted to utilize some new design and run-time concepts: a micro- or maybe even nano-services architecture. There are plenty of solutions offered by different vendors. Google, Microsoft, Amazon -- they all have something to offer in this arena. In our case, since the rest of the infrastructure is hosted on AWS, and since we are already most familiar with Amazon Web Services, the choice was natural -- "AWS Lambda".
Using "AWS Lambda" raw is perfectly fine; however, even deploying a simple "Hello World" function on AWS Lambda requires a lot of jumping through hoops. The two most popular frameworks built on top of AWS Lambda, which do a lot of the orchestration for you so that you do not have to worry about low-level stuff, are:
Claudia and Serverless.com.
"Claudia" is open source and very well built. However, "Serverless" seems to be catching up on things a little quicker. Perhaps because "Serverless" operates like a business (yes, they do have job openings listed on their web site), while "Claudia" has more of the feel of a nice effort backed by a bunch of enthusiasts. We actually ended up trying both solutions, and while "Claudia" was certainly usable and very helpful, everything in "Serverless" seemed to be just a tiny bit better. So, we will be building on "Serverless".
Here is a really excellent "Serverless" manual, which is well written and covers a lot of different things: https://s3.amazonaws.com/anomaly/ServerlessStack/ServerlessStack-v2.0.pdf If you read it, building a service on AWS Lambda will be a snap.
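For context, the whole deployment is driven by a single serverless.yml file. A minimal sketch (service and handler names here are hypothetical, not taken from the actual WiSaw repo) looks roughly like this:

```yaml
# serverless.yml -- illustrative sketch, not the actual WiSaw configuration
service: wisaw-api

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'test'}   # deploy with: serverless deploy --stage production
  region: us-east-1

functions:
  getPhotos:
    handler: handlers/photos.list
    events:
      - http:
          path: photos
          method: get
```

Running `serverless deploy` turns this into the CloudFormation stack, API Gateway routes, and Lambda functions described below.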
Things we’ve learned while moving the backend to "serverless":
Serverless (as a concept) is much more strict about things. For instance, the payload size is limited to 10MB. This limit is actually imposed by AWS API Gateway. As a result:
We used to upload images and store them in the DB as a byte array. Obviously, this was quite unconventional by the classical nano-services definition (although it was very quick to build initially). What we changed it to is actually much better, faster, cooler.
We had to move the images to S3, which is really the way to go.
The image upload now has to be done in 2 API calls (instead of one). The first invocation is a very small call to the API, which basically describes the resource we are about to upload; the response contains a secure token, which can be used in the subsequent request to upload the resource straight to S3.
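The flow can be sketched like this (names are illustrative; a real handler would use the AWS SDK's `s3.getSignedUrl('putObject', ...)` to produce the upload URL, which we stand in for here with an injectable signer so the sketch runs offline):

```javascript
// Call 1: the client describes the resource; the API returns a short-lived
// presigned upload URL (the "secure token") instead of accepting the bytes.
function requestUpload({ contentType }, signUrl) {
  const key = 'photos/' + Date.now() + '.jpg' // server picks the S3 key
  return {
    key,
    uploadUrl: signUrl('my-bucket', key, contentType), // presigned PUT URL
  }
}

// Call 2 then happens client-side: PUT the image bytes directly to uploadUrl,
// bypassing the 10MB API Gateway payload limit entirely.

// A fake signer, standing in for s3.getSignedUrl:
const fakeSign = (bucket, key, type) =>
  `https://${bucket}.s3.amazonaws.com/${key}?X-Amz-Signature=stub&content-type=${type}`
```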
We used to download up to 100 images in one payload and had no choice but to change that. Now we make an API call which returns up to 100 URLs in one payload. This is tiny compared to what it was before (only a few KB), which also makes it super fast. Then every image thumbnail is downloaded straight from S3 as needed (when it becomes visible on the screen).
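A sketch of what such a list endpoint boils down to (the bucket name and key layout are assumptions for illustration, not the actual WiSaw scheme):

```javascript
// Assumed thumbnail bucket; the real one lives in the WiSaw config.
const THUMB_BUCKET_URL = 'https://my-thumbs-bucket.s3.amazonaws.com'

// Instead of streaming image bytes, return up to 100 thumbnail URLs;
// the client fetches each from S3 lazily as it scrolls into view.
function listPhotoUrls(photoIds) {
  return photoIds
    .slice(0, 100) // cap the payload at 100 items, a few KB of JSON
    .map(id => `${THUMB_BUCKET_URL}/thumbs/${id}.jpg`)
}
```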
Image processing (resizing, generating thumbnails) had to move to an asynchronous service.
Triggered invocation of an AWS Lambda function is actually very cool. Basically, the function is invoked automagically every time a new resource is added to S3; you do not have to invoke this AWS Lambda function explicitly. This makes it possible to just upload a full-size image, and after a few short moments a thumbnail magically appears on S3 as well.
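A minimal sketch of such a triggered handler (the event shape is the standard S3 notification record; the actual resize step is stubbed out here, where a real handler might use a library such as sharp together with s3.getObject/putObject):

```javascript
// photos/abc.jpg -> thumbs/abc.jpg (naming convention assumed for illustration)
function thumbKeyFor(srcKey) {
  return srcKey.replace(/^photos\//, 'thumbs/')
}

// Lambda calls this automatically whenever an object lands in the bucket.
function handleS3Event(event, resize) {
  return event.Records.map(r => {
    // S3 URL-encodes keys in the event, so decode before using them.
    const srcKey = decodeURIComponent(r.s3.object.key.replace(/\+/g, ' '))
    return resize(srcKey, thumbKeyFor(srcKey)) // stub: produce the thumbnail
  })
}
```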
Using "Serverless", you do not have to worry about lower-level AWS services like CloudFormation and AWS API Gateway. You just run the "deploy" command, and "Serverless" orchestrates quite a complex configuration for you automagically. Why would you even want to use something like AWS API Gateway, and why should you care whether your service is using it or not? Here is just one example of why it makes sense: AWS API Gateway has geo-based edge distribution (Akamai style). Suppose your service is deployed in one of the US zones (obviously, if you are hosting your RDS instance in the same zone, it makes sense to colocate your API hosting there), but your mobile client is calling your service from, let's say, Australia. In this scenario, the request enters the AWS network as close to where your client is located as possible, wherever it may be in the world, and after that it is routed to the USA through dedicated AWS pipelines, which, needless to say, are blazing fast. We actually tested our service's performance from Japan and did not notice much of a difference in responsiveness.
We've built and deployed 15 functions in the production environment (and 15 functions in test):
We initially started with the default memory allocation for all of our services, which is 128MB per function. At some point we noticed that the "ThumGenerate" function would occasionally fail with an "out of memory" exception. We also learned that the way to make a function execute quicker is to bump up the memory size: allocating more memory to your function also allocates more CPU to every invocation, which makes everything much faster. Funny how it works: by allocating more memory you consume more resources, but the execution time shrinks roughly proportionally, so you end up paying almost the same amount of money per invocation. Based on this logic, we ended up bumping the memory size to the maximum of 3GB for all of the services.
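In serverless.yml, the memory bump is a one-line setting per function (handler and function names here are illustrative):

```yaml
functions:
  thumbGenerate:
    handler: handlers/thumbs.generate
    memorySize: 3008   # per-function maximum at the time (~3GB); CPU scales with it
```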
One of our goals in migrating to AWS Lambda was cost savings. Here are the "before and after" numbers (quite obvious -- we achieved what we were aiming for):
And that is all. Obviously, much more effort went into the migration process. We had to figure out how to configure deployment to different environments (test, production), and how to pass DB configuration to the runtime instances so that they can connect to RDS without exposing our runtime configs in the public repo. If anyone is curious, all of the code is available in an open-source GitHub repository: https://github.com/echowaves/WiSaw.serverless
The open-source example project is called What I Saw.