THE SERVERLESS SERIES — Mistakes You Should Avoid

Nicolas Dao
Published in Neap For Startups
Apr 27, 2018 · 5 min read


In this post, I will share the lessons I learned over the past year while using Serverless to build mobile and web apps for a tech consultancy in Sydney. For each drawback, I will also recommend one or more solutions.

1. FaaS — Connection Pooling Limitation

Conversations about FaaS do not mention this limitation very often. Cloud providers market FaaS as a solution that can scale infinitely. While this may be true of the function itself, most of the resources your function depends on are not infinitely scalable. The number of concurrent connections your relational database supports is one of those limited resources. What makes this problem such a big deal is how unfriendly FaaS is towards connection pooling. Indeed, as we mentioned before, each instance of your function lives in its own isolated, stateless environment, which means that when it connects to a relational database (e.g., PostgreSQL, MySQL, Oracle), it should most probably use a connection pool to avoid reconnecting on every request. But each instance then holds its own pool, and your relational database can only manage a limited number of concurrent connections (often only around 100 by default). Spawn enough instances of your function and you will quickly exhaust your database connections, preventing other systems from accessing it. For that reason, I recommend avoiding any FaaS if your function needs to communicate with a relational DB through a connection pool. If you do need a connection pool, then a few options are available (see the sketch after the list below):

  • Use a BaaS instead. We’ll talk more about this in the next post about Google App Engine, but in a nutshell, you can configure GAE to scale up to a specific maximum number of instances, which prevents it from exhausting all the DB connections. In that next post, we will also detail how to architect microservices using that configuration.
  • Some relational databases like PostgreSQL offer connection poolers (e.g., PgBouncer) that can solve this problem by multiplexing a large number of client connections over a smaller number of database connections.
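To make the per-instance pooling issue concrete, here is a minimal sketch, assuming a Node.js AWS Lambda written in TypeScript with the pg client (the connection string and query are placeholders). The pool is created once per instance, outside the handler, and capped at a single connection, so N concurrent instances hold at most N database connections.

  // Minimal sketch: cap the number of DB connections each function instance can hold.
  // Assumes the `pg` package; DATABASE_URL and the query are placeholders.
  import { Pool } from 'pg'

  // Created once per instance (i.e., per cold start) and reused across invocations.
  const pool = new Pool({
    connectionString: process.env.DATABASE_URL,
    max: 1 // at most one connection per instance; N instances => at most N connections
  })

  export const handler = async () => {
    const { rows } = await pool.query('SELECT id, name FROM products LIMIT 10')
    return { statusCode: 200, body: JSON.stringify(rows) }
  }

Even with max: 1, a burst of instances can still blow past the database’s connection limit, which is why the options above (a BaaS with a capped instance count, or a pooler such as PgBouncer) are usually the safer route.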

2. FaaS — No Support For WebSocket

This one is kind of obvious, but for those who think they can have their cake and eat it too: you can’t hope to maintain a WebSocket on a system that is by design ephemeral. If you’re looking for a Serverless WebSocket, then use a BaaS like Google App Engine in flexible mode or Knative instead (more about this in a future post). Alternatively, if you’re attempting to create a Serverless GraphQL API, it is possible to use Subscriptions (which rely on WebSocket) through AWS AppSync. A great article that explains this use case in greater detail is Running a scalable & reliable GraphQL endpoint with Serverless.

3. FaaS — Cold Start

FaaS solutions like AWS Lambda have demonstrated huge gains when solving Map-Reduce challenges (e.g., Leveraging AWS Lambda for Image Compression at Scale). However, if you’re trying to provide a fast response to events like HTTP requests, you’ll need to take into account the time required by the function to warm up. Your function lives inside a virtual environment that needs to be spawned before it can scale up with the traffic it receives (something you do not control). This spawning process can take a few seconds, and once your function has been idle for a while due to low traffic, its environment is torn down and will need to be spawned again.
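As an illustration (a sketch, not the fix this post ends up using), expensive initialisation can at least be moved outside the handler so it runs only once per cold start and is reused while the instance stays warm; loadReportTemplates below is a hypothetical stand-in for any slow setup.

  // Sketch: slow setup runs once when the instance is spawned, not on every invocation.
  // `loadReportTemplates` is a hypothetical stand-in for any expensive initialisation
  // (opening DB clients, reading large files, warming caches, ...).
  const loadReportTemplates = async (): Promise<string[]> => {
    return ['weekly-sales', 'inventory']
  }

  // Top-level code executes at cold start only; warm invocations reuse the resolved promise.
  const templatesPromise = loadReportTemplates()

  export const handler = async () => {
    const templates = await templatesPromise // already resolved on warm invocations
    return { statusCode: 200, body: JSON.stringify({ templates }) }
  }

This shortens warm responses, but the very first request after an idle period still pays the full spawning cost.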

I learned this the hard way when deploying a relatively complex reporting REST API on Google Cloud Functions. That API was part of a microservice refactoring effort to break down our big monolithic web API. I started with a low-traffic endpoint, which meant the function was often idle. The reports powered by that microservice became slow the first time they were accessed. To fix that issue, I moved our microservice from Google Cloud Functions (FaaS) to Google App Engine in standard mode. That migration allowed us to keep at least one instance up at all times.

4. FaaS — Long-Lived Processes, Don’t Bother!

At the time of writing, AWS Lambda and Google Cloud Functions can run for no longer than 5 and 9 minutes respectively. If your business logic is a long-running task, you will have to move to a BaaS like Google App Engine, Knative, or Google Kubernetes Engine.
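If you decide to stay on FaaS anyway, one common workaround (not what this post recommends, and sketched here only to show the hard cap in action) is to checkpoint progress and stop before the runtime kills the function. processNextChunk below is a hypothetical placeholder, while getRemainingTimeInMillis is part of the Node.js Lambda context.

  // Sketch: stop and checkpoint before the execution cap kills the function.
  // `processNextChunk` is a hypothetical placeholder for one bounded piece of work.
  type LambdaContext = { getRemainingTimeInMillis: () => number }

  const processNextChunk = async (cursor: number): Promise<number | null> =>
    cursor < 1000 ? cursor + 100 : null // return the next cursor, or null when the job is done

  export const handler = async (event: { cursor?: number }, context: LambdaContext) => {
    let cursor: number | null = event.cursor ?? 0
    while (cursor !== null) {
      if (context.getRemainingTimeInMillis() < 10_000) {
        // Less than 10 seconds left: bail out and tell the caller where to resume.
        return { done: false, resumeFrom: cursor }
      }
      cursor = await processNextChunk(cursor)
    }
    return { done: true }
  }

In practice this splits one logical job across many invocations, which is exactly the kind of complexity the BaaS options above let you avoid.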

For more details on FaaS limitations, please refer to AWS Lambda quotas and Google Cloud Functions quotas.

5. BaaS & FaaS — Losing Infrastructure Control

That’s Serverless for you. If your product requirements necessitate some degree of control over your infrastructure, then Serverless will most likely leave you up the creek. Examples of such problems include:

  • Microservices deployment orchestration. Ending up with a myriad of Serverless microservices will quickly become a deployment nightmare, especially if they need to be versioned together or by domain.
  • Controlling the lifecycle of each server to save on costs.
  • Having long-running tasks on multiple servers.
  • Controlling the exact version of the underlying server OS, or installing specific libraries required by your app.
  • Controlling the exact geo-replication of your app or data to ensure consistent and fast performance globally (there are ways to overcome this in some scenarios; check out Build a serverless multi-region, active-active backend solution in an hour).

Serverless may fall short in all of the above use cases. However, as we discussed in our previous post, Serverless is just an extension of PaaS. If you want to keep as much focus as possible on writing code rather than worrying about the scalability and reliability of the underlying infrastructure, leveraging the latest PaaS containerization strategies (e.g., Google Kubernetes Engine) can get you very close to what Serverless has to offer.

6. BaaS & FaaS — Compliance & Security

Serverless attracts all the usual concerns related to the cloud. You are giving up control of your infrastructure to one or more 3rd parties. Depending on the vendor, Serverless may or may not provide the right SLA and security levels for your business case. Whether Serverless is a go or no-go from a compliance and security point of view depends highly on your particular case. Many articles discuss this topic in great detail (e.g., The state of serverless security).

Conclusion

Serverless is not a silver bullet. The gains you can obtain from it depend on how well you understand it. The good news is that the barrier to entry is so low that you should be proficient in no time.

COMING NEXT…

Of course Serverless has limitations; all tech does. The question now is how to overcome them. The next post covers the suggestions my team and I developed to deal with those limitations: Why We Love Zeit Now & When To Use It Over FaaS.

Follow me on Medium — Nicolas Dao — if you’re interested in what’s coming next.


Focused on nurturing happiness in tech and in life. Co-founder of Neap (https://neap.co), a tech consultancy based in Sydney.