Serverless Antipatterns: What Not to Do With AWS Lambda
In my career as a technology consultant, I’ve observed that design patterns are the good guys and anti-patterns are the bad guys. And sometimes the good guys turn bad. It happens in software engineering, just like in Hollywood movies!
To describe this problem, there’s the theory of the “golden hammer”: once we learn to use a complex tool (the golden hammer, in our case), we suddenly see golden nails everywhere. Relatable enough?
And that’s what is happening with serverless. Functions are amazing, and since they are presumably the good guys, cloud architects use them at every possible occasion. However, I’ve observed many counterproductive patterns that we need to contemplate before adopting this new model of cloud computing.
This article is a discussion on serverless antipatterns that I have found to be persistent and their probable solutions. Here’s what not to do with AWS Lambda:
What is an anti-pattern?
The term anti-pattern was coined by Andrew Koenig, who describes it as follows:
“An antipattern is just like a pattern, except that instead of a solution it gives something that looks superficially like a solution but isn’t one.”
By formally describing this repeated behaviour, we can recognise what leads to the adoption of an antipattern, and we can learn to avoid it or learn from the broken patterns of others.
What not to Do with AWS Lambda: Serverless Antipatterns
#1. Serverless Async Calls
Asynchronous communication is commonly used in server-based architecture for building scalable solutions since services are decoupled and can work autonomously.
When Service A requires Service B to perform part of its task, A makes a call to B. While Service B is working, its parent Service A is kept waiting. Since serverless architecture is billed on the basis of resources consumed, this waiting is an antipattern.
This gets worse when you chain your functions. For example, if a downstream function calls a physical database, or anything that isn’t on the same platform or cloud, you risk a slow response, especially under load. At that point there are two possible outcomes:
- Your function times out (the AWS Lambda maximum was 300 seconds at the time; it has since been raised to 900 seconds) and the task is terminated.
- Your cost increases significantly, because the parent function is billed for the entire time it spends waiting.
We use serverless architecture so that we don’t pay for idle, but it isn’t that simple when it comes to asynchronous calls. Each function is allocated a specific amount of resources, and when the function is invoked, you’ll be billed for the entire time it is running. Gotcha: it doesn’t matter whether you are actually using those resources or not.
With asynchronous calls, you don’t pay for the idle, but you do pay for the wait! If you are writing asynchronous code inside your functions instead of keeping them single-threaded, you are using FaaS as a platform to build servers.
How to avoid it?
One potential solution is to make sure that your asynchronous requests resolve within your function’s active lifetime. If they take longer than that, you may be welcoming an antipattern.
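As a sketch of the alternative, the snippet below hands work off instead of blocking on a response. The FakeLambdaClient class and the process-order function name are illustrative stand-ins so the example runs anywhere; real code would use boto3’s Lambda client, whose invoke call accepts InvocationType='Event' for fire-and-forget invocation.

```python
import json

class FakeLambdaClient:
    """Stand-in for boto3's Lambda client so this sketch runs anywhere."""
    def __init__(self):
        self.calls = []

    def invoke(self, FunctionName, InvocationType, Payload):
        self.calls.append((FunctionName, InvocationType))
        # 'Event' means fire-and-forget: the service accepts the request
        # immediately (HTTP 202), so the caller is not billed for the
        # time the callee spends running.
        return {"StatusCode": 202 if InvocationType == "Event" else 200}

def handler(event, client):
    # Antipattern: InvocationType='RequestResponse' would block here,
    # billing this function for the entire wait.
    resp = client.invoke(
        FunctionName="process-order",   # hypothetical downstream function
        InvocationType="Event",         # async: hand off and return
        Payload=json.dumps(event),
    )
    return resp["StatusCode"]

client = FakeLambdaClient()
status = handler({"order_id": 42}, client)
print(status)  # 202: accepted, no waiting
```

With a real boto3 client the shape of the call is the same; only the stand-in class changes.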
#2. Shared Code/Logic
Development teams are now expected to dissect business logic and build code blocks that are highly decoupled and are independently managed.
However, this expectation might not play out as planned, because you will come across scenarios where multiple functions require the same business logic.
But when you cross the boundaries between functions, even though everything looks the same, the functions live in different contexts, are implemented by different code, and probably use different data stores.
Consider what happens when there is a major change in the libraries shared across your Lambdas. You’d be required to change dozens of endpoints that depend on this shared core logic, and this isn’t something you accounted for in your software development cycle.
Also, by continuing with the shared-logic attitude, you are crossing the isolation barrier, reducing the effectiveness of your serverless architecture and hampering its scalability.
How to avoid it?
A common suggestion here is to adhere to the DRY (Don’t Repeat Yourself) principle, since copy-and-paste is bad. Going back to the roots, ‘The Pragmatic Programmer’ describes DRY as “Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.” Taken to the extreme, this pushes you toward nanoservices.
If you are dealing with just a handful of Lambdas, this might be the best approach. With more than a handful, however, it becomes a nightmare in practice.
What now? You may follow the Mama Bear approach (not too hot, not too cold): define a limited set of Lambda functions that maps logically onto how client callers will consume them. This still leaves you with the problem of shared logic, but rather than solving it through application design, you solve it through your development workflow.
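One workflow-level sketch of this: keep the shared logic in one authoritative place, but publish it as a separately versioned package (or a Lambda Layer) that each function pins independently. The validate_order helper and the package name below are hypothetical illustrations, not a prescribed API.

```python
# Hypothetical shared logic. In practice this would live in its own
# versioned package, e.g. in each function's requirements.txt:
#     order-validation==1.2.0
# so one function can upgrade without forcing all the others to.

def validate_order(order):
    """Single authoritative implementation of the shared rule (DRY)."""
    return isinstance(order.get("id"), int) and order.get("qty", 0) > 0

# Two independent functions consuming the same pinned logic:
def create_order_handler(event):
    return {"ok": validate_order(event)}

def refund_order_handler(event):
    return {"refundable": validate_order(event)}

print(create_order_handler({"id": 1, "qty": 2}))   # {'ok': True}
print(refund_order_handler({"id": 1, "qty": 0}))   # {'refundable': False}
```

The point of the version pin is that a breaking change in the shared package becomes an explicit, per-function upgrade decision rather than an implicit change to every deployment.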
“The evils of too much coupling between services are far worse than the problems caused by code duplication.” - Sam Newman, Building Microservices, page 59
#3. Distributed Monoliths
I’ve often come across developers who assume that putting everything into a library means they never have to worry about functions using a wrong or outdated implementation, because every function just needs to update its dependency to the latest version.
Whenever you change some behaviour consistently across all your functions by updating them all to the same new version of a library, you are moving towards strong coupling.
With this, you lose one major benefit of serverless architecture: loose coupling, the ability to have all your functions evolve and be managed independently of each other.
Eventually, you may develop a system where a change in one function mandates a change in all of them.
If you think you won’t be able to keep your functions independent, maybe it’s time for you to consider another architecture approach.
3 questions to ask yourself to spot a distributed monolith:
- Does a change to one function often require a change to another function?
- Does deploying one function require other functions to be deployed at the same time?
- Are your functions overly chatty, communicating with each other too much?
How to avoid it?
One preventive measure is to keep business functionality that isn’t relevant to a function out of that function.
Functional separation is vital for the overall AWS Lambda performance, agility and scalability of your application. Also, follow the DRY principle.
#4. Complex Processing
Undoubtedly, serverless is amazing when you want to execute smaller chunks of code. But due to its inherent limitations, executing a complex computation is an antipattern. For example, image processing can run smoothly, but the same isn’t true of video processing.
The problem here isn’t whether the language can handle it; it’s the limit on computing power for a single function.
At present, computing resources are quite restricted, so you’ll need to be aware of your serverless platform’s limitations. Check each provider’s documented limits on memory, execution time and payload size before committing.
How to avoid it?
- Restrict the amount of data a function needs to process by reducing the size of the data package.
- Find out how your functions use their allocated RAM, and choose the right data structures to avoid unnecessary allocations.
- In AWS Lambda, use the /tmp directory, non-persistent file storage the function can read and write.
Just as you would optimise your application at the component level, in a serverless architecture you’ll need to do this at the function level.
Note: Most serverless platforms offer temporary read/write storage on a per-function basis, erased once the function ends. Effective use of it is handy for storing intermediate results and performing memory-intensive tasks.
#5. Serverless Big Data ETL Pipeline
As we move towards the serverless architecture, the process of handling the data and its security is becoming a critical concern.
Given that serverless functions are ephemeral and stateless, everything a function needs in order to do its processing must be provided at runtime. Typically, the task payload is the primary mechanism: data is either passed in directly or pulled in from a queue, database or other data source.
Although serverless providers may let you process and pass huge chunks of data via the payload, it isn’t the wise thing to do. It not only reduces efficiency but also increases the surface area for data breaches. Less data means less storage and less transmission, which leads to more secure systems.
Moreover, in systems where functions call other functions, message queues are often thrown in to buffer the work. Architects need to be acutely aware of the level of recursion this can create; recursive invocation loops are more prevalent than one might think.
How to avoid it?
Since you’re dealing with serverless architecture, you need to critically analyse what data is passed and how much. A function should only receive the data it needs for its execution: send the relevant part, not the whole dataset.
This practice might suffice when you are dealing with a small amount of data. However, when you’re dealing with large and/or unstructured data, it’s wiser to transmit data IDs rather than the data itself.
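A minimal sketch of “send the ID, not the data”. FAKE_DB stands in for a real data store (DynamoDB, S3, and so on); in production the lookup would be a scoped get call that fetches only the fields this function needs. All names here are hypothetical.

```python
# Stand-in for a real data store keyed by record ID.
FAKE_DB = {
    "user-17": {"name": "Ada", "plan": "pro", "history": ["..."]},
}

def handler(event):
    # The event carries only a reference, keeping the payload tiny
    # and shrinking the amount of data exposed in transit.
    record = FAKE_DB[event["user_id"]]
    # Pull only the field this function actually needs:
    return {"name": record["name"]}

print(handler({"user_id": "user-17"}))  # {'name': 'Ada'}
```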
#6. Real-time Communication: IoT
Real-time communication is the backbone of the IoT systems and hence serverless seems to be the best choice. But is it?
Consider this example: AWS IoT costs $5 per million requests, and DynamoDB (under the legacy provisioned model) costs $0.0065 per hour for every 10 write capacity units and every 50 read capacity units. When you’re dealing with a small architecture and few requests, your monthly bill might come to around $15 for DynamoDB and $150 for AWS IoT, not counting the cost of AWS Lambda, API Gateway and storage.
That seems almost too good to be true: around $200 per month to run your IoT system. But imagine a case where there are thousands of devices making millions of requests per minute. Would you still pay for it?
For example, if you have 10k devices each sending one request per second, your monthly bill will be more than $136k; with 100k devices sending one request per second, it will be more than $1.36M.
Serverless cost aside, many IoT use cases are highly latency-sensitive: e-commerce, advertising, online gaming and gambling, sentiment analysis and much more. Lambda startup times are too high for web use cases where delays of even a few seconds can’t be tolerated or hidden by UI tricks.
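As a back-of-the-envelope check on those figures, using the prices quoted above and the simplifying assumption that each device request triggers one DynamoDB write:

```python
DEVICES = 10_000
REQ_PER_SEC = 1
SECONDS_PER_MONTH = 60 * 60 * 24 * 30   # ~30-day month

# AWS IoT: $5 per million requests
requests = DEVICES * REQ_PER_SEC * SECONDS_PER_MONTH
iot_cost = requests / 1_000_000 * 5.0

# DynamoDB (legacy provisioned): $0.0065/hour per 10 write capacity
# units; assume one write per request, so 10,000 WCU provisioned.
wcu = DEVICES * REQ_PER_SEC
ddb_cost = wcu / 10 * 0.0065 * 24 * 30

print(round(iot_cost))             # 129600
print(round(ddb_cost))             # 4680
print(round(iot_cost + ddb_cost))  # 134280
```

That lands in the same ballpark as the six-figure monthly bill described above, before Lambda, API Gateway or storage are even counted.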
How to avoid it?
When you’re dealing with complex IoT systems, there is no quick rule to avoid the cost and latency issues.
The first thing you can do is to figure out the requirements of your approach. Serverless might be the best approach in the following cases:
- You’re not worried about vendor lock-in for your IoT system
- You need to validate your ideas fast and have a short time to market
- You have a small number of requests, which keeps the cost low, and latency can be ignored
#7. Long Processing Tasks
The configuration of long-running tasks significantly impacts the overall performance of the app. Even with auto-scaling, these long-running tasks can prevent other tasks from completing within their stipulated time frames, which is why your functions have runtime limits.
Consider AWS Lambda: it has a timeout limit of 300 seconds (since raised to 900), while API Gateway times out after 29 seconds. Since the frontend calls an API and Lambda sits behind it, Lambda’s longer runtime is effectively useless here: API Gateway will time out after 29 seconds regardless.
Our major goal, then, is to respond to the request as fast as possible and push the long-running work into the background.
How to avoid it?
The resource limits offered by most serverless platforms are sufficient for an application’s basic needs. However, if your needs go beyond them, opt for asynchronous processing: acknowledge the request quickly and hand the heavy work to a background worker.
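A sketch of that split: the API-facing function validates and enqueues within API Gateway’s 29-second window, and a separate worker function with a longer timeout drains the queue. The in-memory list stands in for SQS (with boto3 the enqueue would be sqs.send_message(QueueUrl=..., MessageBody=...)); all other names are hypothetical.

```python
import json

queue = []  # stand-in for an SQS queue

def api_handler(event):
    # Validate quickly, enqueue the heavy work, return immediately
    # so the API Gateway call finishes well under 29 seconds.
    job = {"job_id": event["job_id"], "payload": event["payload"]}
    queue.append(json.dumps(job))
    return {"statusCode": 202, "body": json.dumps({"queued": event["job_id"]})}

def worker_handler():
    # A separate function, triggered by the queue, does the long-running
    # work under its own (much longer) timeout.
    job = json.loads(queue.pop(0))
    return f"processed {job['job_id']}"

resp = api_handler({"job_id": "j-1", "payload": "..."})
print(resp["statusCode"])   # 202
print(worker_handler())     # processed j-1
```

The client polls a status endpoint or receives a callback when the worker finishes, rather than holding the original connection open.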
#8. Grains of Sand: High Granularity of Functions
Are your functions too small and too many? You’re not alone. Numerous people unknowingly over-engineer their serverless architecture and end up with extra-granular services, eventually leading to problems in monitoring, security practices, and AWS Lambda trigger and function management.
Since functions can be deployed at minimal cost, developers have a general tendency to create functions they don’t need.
I’ve observed that one of the major challenges people face with serverless architecture is defining boundaries. They end up with coupling that is either too loose or too tight.
However, the main aim should be to evaluate and design an architecture which provides the right balance.
At Cloud Foundry Summit 2017, Alex Sologub of Altoros pointed out that engineers need to take responsibility based on design and facts, not on their gut feelings. If you don’t, here are some of the threats you run into:
- Complexity in architecture
- Wastage of human resource
- Slower development and deployment
- Complexity in function management
- Increasing the attack surface area
How to avoid it?
Before you architect your functions, here are three points you should take care of:
- UX Analytics: The way your users interact with your application helps you decide whether you need a function for a given feature or not.
- Performance: Understanding the distribution of load across your functions is critical. This data helps you sort out function responsibilities and map out factors pertaining to availability and scalability.
- Use Cases: Functions are usually built around bounded contexts, i.e. business capabilities, as part of a larger system designed to facilitate specific business needs. Understanding those needs thoroughly helps you decide which functions are required, and whether they belong to a permanent system or a temporary one.
#9. Excessive Communication Protocols
To have a standard method of communication between functions, your preferred communication protocols should be language- and platform-independent.
Both of these points are easy to accomplish. What’s hard is finding the right balance of synchronous and asynchronous protocols. Most development teams do not consider this beforehand and fall into the trap of excessive communication protocols, which results in complex systems.
How to avoid it?
First things first, avoid committing to any protocol before you have a good understanding of how your functions will operate in serving users.
Secondly, classify your functions into internal and external services. External services should expose a widely available HTTP interface for smooth interoperability, while internal services can make do with the rich capabilities of message brokers.
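A sketch of that external/internal split: external clients get a plain HTTP-style response, while internal services consume events from a broker instead of being called over HTTP. The in-memory list stands in for an SNS topic or queue, and every name here is a hypothetical illustration.

```python
import json

broker = []  # stand-in for a message broker topic (SNS/SQS/etc.)

def publish(topic, message):
    # Internal services subscribe to the broker rather than being
    # called synchronously over HTTP, keeping them loosely coupled.
    broker.append((topic, json.dumps(message)))

def external_api(event):
    # External service: a widely understood HTTP-shaped interface.
    order_id = event["order_id"]
    publish("order.created", {"order_id": order_id})
    return {"statusCode": 201, "body": json.dumps({"order_id": order_id})}

resp = external_api({"order_id": "o-9"})
print(resp["statusCode"])  # 201
print(broker[0][0])        # order.created
```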
Whether you’re building custom software or refactoring an existing system, keeping AWS Lambda best practices and these antipatterns in mind will help you design a better, cleaner architecture, something we obsess over at Simform.
This post is a way to avoid making functions the next golden hammer. I’d love to hear your suggestions and experiences. Reach out on Twitter at @RohitAkiwatkar or drop me a mail at firstname.lastname@example.org