Serverless Architecture: What It Is, Benefits, Limitations & Use Cases
One of the most talked-about technologies in recent times has been Artificial Intelligence (AI). ChatGPT, for instance, is an AI-based product developed by OpenAI. However, few people talk about the kind of technology that supports such services behind the scenes: serverless computing.
From powering AI-driven services to propelling business solutions, serverless architecture has been a driving force. It allows organizations to focus on their business while third-party services manage the computing resources.
So, what exactly is serverless architecture? And how does it reduce resource management? We will cover its key aspects, including benefits, challenges, and real-world examples. Let’s begin with the basics!
What is Serverless Architecture?
Serverless architecture is a way to build and run applications without managing physical servers. The cloud provider executes your code, allocating resources and scaling the infrastructure on demand. Examples of serverless computing platforms include AWS Lambda, Azure Functions, and Google Cloud Functions.
Serverless offerings are divided into two main categories.
- Backend as a Service (BaaS) lets developers focus on the front end of an application and frees them from backend tasks such as cloud storage hosting and database management.
- Function as a Service (FaaS) is an event-driven execution model that runs small code modules (functions). The platform triggers a function when a specific event occurs in the application.
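The FaaS model above can be sketched as a minimal handler in the AWS Lambda style; the event fields here are illustrative, not a specific platform's schema:

```python
import json

# A minimal, hypothetical FaaS handler in the AWS Lambda style:
# the platform invokes handler(event, context) whenever the
# configured event (an HTTP call, a queue message, etc.) fires.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can simulate an invocation; in production the cloud
# provider supplies the event and context objects.
print(handler({"name": "serverless"}, None))
```

The function itself holds no state and manages no server; it simply reacts to the event it is handed.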
Now that we know what serverless is, here is “what isn’t serverless?”
PaaS is not the same as serverless FaaS.
If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless. https://t.co/S3YzvqFYLR
— Adrian Cockcroft (@adrianco) May 28, 2016
Most PaaS apps are not event-driven, while FaaS apps are. You can configure auto-scaling in PaaS, but configuring it for each request without a specific traffic profile is tricky. Hence, serverless FaaS is different from PaaS.
How does serverless architecture work?
Serverless architecture shifts the resource management to third-party service providers and saves time otherwise spent on updating, patching, and managing servers.
The traditional approach couples the frontend logic, backend logic, security, and database without separating concerns. However, businesses need an architecture that can scale as per requirements and is flexible and reliable.
Serverless architecture helps with the separation of concerns. The frontend logic is decoupled from backend logic, database, and security services. All the backend services communicate with frontend logic through APIs.
A standard serverless architecture includes:
- Lambda functions (FaaS) implement the individual services in a serverless architecture. For example, you can build login and data-access services as Lambda functions that read and write data in the database and return JSON responses.
- A security token service (STS) generates temporary credentials that allow the client application to invoke Lambda functions.
- A user authentication service ensures that data access is granted based on the user’s identity and pre-defined privileges. For example, Amazon Cognito lets you add sign-up and authentication across devices, issue temporary credentials, and authenticate users for data access.
- A fully managed NoSQL database provides enhanced flexibility, scalability, and availability for your web applications in a serverless architecture.
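As a rough sketch of the first component above, a data-access Lambda that returns JSON might look like this. The in-memory dict stands in for a managed NoSQL table (e.g. DynamoDB queried via boto3), and the field names are assumptions:

```python
import json

# Hypothetical data-access Lambda: in a real deployment, USERS would
# be a managed NoSQL table and the lookup a database query.
USERS = {"u1": {"name": "Ada", "role": "admin"}}

def get_user(event, context):
    # API Gateway-style events carry path parameters like /users/{id}.
    user_id = event.get("pathParameters", {}).get("id")
    user = USERS.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(user)}
```
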
Why adopt serverless architecture?
Serverless architecture, with its event-driven approach, is popular among organizations due to its benefits such as cost efficiency, flexible scaling, improved app performance, high development speed, etc.
A survey indicates that the adoption of an event-driven approach is the primary reason businesses use serverless architecture.
Business benefits of serverless architecture
- Reduced operational cost: The serverless pattern minimizes resource management and offers auto-scaling, which lowers operational costs. You also benefit from shared infrastructure and reduced labor costs.
- Reduced development cost: Standard functionalities like push notifications and version-specific messages would otherwise require a shared codebase. Instead, you can use BaaS services rather than writing code for each service. For example, if you need an authentication service, you can use Auth0.
- Pay per use: With FaaS platforms, you pay only for the computing resources you actually consume. Consider a server that takes about one minute to process a request while several apps run on the same machine; to keep performance acceptable, you would have to pay for additional instances. FaaS removes this issue by allocating resources on demand.
- More agility: Serverless computing lets development teams focus on building core products and designing scalable, reliable systems instead of handling infrastructure. This reduces time-to-market and enables quicker deliveries.
- Better utilization of resources: Serverless architecture, with its built-in fault tolerance, eliminates the heavy lifting of scaling and managing servers, so developers spend less time on boilerplate code and infrastructure components.
- Better user experience: With serverless applications, you can ship new features daily. Customers get the features and bug fixes they ask for without long waits, which makes for a better user experience.
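The pay-per-use point above can be made concrete with back-of-the-envelope arithmetic. The prices below are assumed placeholders for illustration, not current vendor rates:

```python
# Back-of-the-envelope pay-per-use cost model. Both prices are
# ASSUMED placeholders, not current rates from any provider.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed FaaS compute price
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed per-request price

def monthly_faas_cost(requests: int, avg_duration_s: float,
                      memory_gb: float) -> float:
    # Compute cost scales with GB-seconds actually consumed.
    compute = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    request_fees = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + request_fees

# 1M requests/month at 200ms each with 512MB of memory:
print(round(monthly_faas_cost(1_000_000, 0.2, 0.5), 2))
```

The key contrast with an always-on instance is that an idle month costs nothing: with zero requests, the bill is zero.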
Examples of serverless architecture
From leading companies like Coca-Cola to Netflix and many others, serverless architecture has been the go-to solution.
Coca-Cola went from $13,000 to $4,500 per year with serverless
Coca-Cola has several vending machines across the world serving beverages continuously. But how do they manage to keep the vending machines full of beverages all the time?
The answer is a communication system connecting every machine back to Coca-Cola HQ. Until 2016, running it on EC2 t2.medium instances cost Coca-Cola about $13,000 a year.
However, in 2018, Coca-Cola switched to a serverless architecture built on AWS services, and the annual cost of the communication infrastructure dropped to $4,500. The calculation was based on the 30 million requests Coca-Cola received in 2018.
So, how does Coca-Cola use serverless?
Whenever a customer buys a drink from the vending machine, a call is made to the payment gateway for purchase verification. The verification process triggers an AWS Lambda service that handles all the business logic behind the transaction.
If a customer initiates the transaction from mobile, the system sends a push notification to the device for submitting data to Android Pay or Apple Pay.
All the communication between the vending machine, payment gateway, and mobile device happens within seconds.
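The purchase flow described above might be sketched as a single event-triggered function. All names and event fields here are illustrative reconstructions, not Coca-Cola's actual implementation:

```python
import json

# Simplified reconstruction of the vending-machine flow: the payment
# gateway's verification event triggers this function, which then
# runs the business logic behind the transaction.
def handle_purchase(event, context):
    if not event.get("payment_verified", False):
        return {"statusCode": 402, "body": json.dumps({"status": "declined"})}

    actions = ["record_transaction", "dispense_drink"]
    # Mobile-initiated purchases also trigger a push notification so
    # the device can submit data to Android Pay or Apple Pay.
    if event.get("source") == "mobile":
        actions.append("send_push_notification")
    return {"statusCode": 200, "body": json.dumps({"actions": actions})}
```
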
Netflix used AWS Lambda for event-driven and rule-based management
Netflix manages large volumes of data to provide a smoother streaming experience. Earlier, Netflix used to have on-premise data centers, which were becoming obsolete with an increasing user base. So, it migrated to the AWS infrastructure in 2016.
However, Netflix needed to ensure there were no performance bottlenecks when it switched to a cloud-native infrastructure.
So, Netflix uses rule-based, event-driven AWS Lambda functions to implement its serverless architecture. Incoming content from production houses is broken down into 5-minute streams.
More than 60 streams of processes run in parallel and are repackaged for deployment. This lets Netflix process millions of data points in real time and deliver an enhanced user experience.
Signeasy chose AWS serverless to build their SaaS dashboard
Signeasy provides SaaS solutions for document transaction management and cloud-based eSignature.
Earlier, on customers’ demand, Signeasy used to compile all the data and reports and email them back to the businesses. It was a long, tedious process that caused delays.
So, to free themselves of this monotonous and performance-hampering process, Signeasy decided to build self-serve dashboards with usage reports on a serverless architecture.
Using AWS Serverless, Signeasy reduced infrastructure management costs and augmented existing SaaS apps with self-serve usage reports.
Signeasy’s users can log in to the portal and authenticate their identity. The portal uses a combination of tenant ID and user ID in JSON Web Tokens (JWT) to distinguish between profiles. Documents are stored in Amazon S3, and user actions are stored in Amazon RDS.
Further, user actions are also placed in the message queue on Amazon SQS. It uses this queue to loosely couple existing microservices on Amazon EKS, which allows Signeasy to process messages asynchronously and reduce delays.
Report writer services on AWS Lambda process the messages. Whenever a tenant uses the dashboard to access usage reports, the request is sent from the web client on the browser as an API call to Amazon API Gateway.
On each request, API Gateway serves report data from a backend reports service running in a separate Lambda function. The report service verifies the user ID before sending data back to the web client. Serverless architecture helped Signeasy reduce infrastructure costs and cut the wait time for report generation by 8 hours.
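The tenant/user check described above can be sketched as follows. This decodes the JWT payload with the standard library only; production code would also verify the signature (e.g. with a library such as PyJWT), and the claim names are assumptions:

```python
import base64
import json

# Sketch of the tenant/user check: the reports function reads the
# tenant ID claim from the JWT before returning any data.
# NOTE: signature verification is deliberately omitted here.
def decode_claims(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorize(token: str, requested_tenant: str) -> bool:
    # Deny access when the token's tenant doesn't match the request.
    claims = decode_claims(token)
    return claims.get("tenant_id") == requested_tenant
```
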
Serverless architecture best suits organizations wanting to reduce resource management and focus on business aspects. However, there are specific use cases for which you can choose serverless architecture.
When to use a serverless architecture?
Here are the use cases showing the scenarios where serverless architecture can be implemented in your applications.
- To implement asynchronous message processing in applications
- To process data to enable powerful machine learning insights
- To build low-latency, real-time applications like multimedia apps that need automatic memory allocation and complex data processing
- To serve unpredictable workloads driven by rapidly changing development needs, customer demands, feature additions, and other complex scalability needs
- To dynamically resize images or transcode video and simplify multimedia processing for different devices
- To build a shared delivery dispatch system
- For Internet of Things(IoT)-based applications and smart devices
- For live video broadcasting application modules
- For event-triggered computing, such as scenarios where multiple devices access various file types
- To implement stream processing at scale
- For the orchestration of microservice workloads
- To perform security checks
- To support multi-language service integrations that meet the demands of modern software
- To implement Continuous Integration (CI) and Continuous Delivery (CD)
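As a sketch of the image-resizing use case above, an upload event could trigger a function that computes a thumbnail size preserving aspect ratio. The actual pixel work would use an imaging library such as Pillow; this shows only the sizing logic:

```python
# Sketch of the resize use case: an object-storage upload event
# triggers a function that scales image dimensions down to fit a
# bounding box while preserving aspect ratio.
def thumbnail_size(width: int, height: int, max_side: int = 256) -> tuple:
    # Images already within bounds are left untouched.
    if max(width, height) <= max_side:
        return (width, height)
    scale = max_side / max(width, height)
    return (round(width * scale), round(height * scale))

# A 1024x512 upload becomes a 256x128 thumbnail.
print(thumbnail_size(1024, 512))
```
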
No doubt, serverless reduces administrative overhead: it takes server maintenance off developers’ plates and lowers overall server costs. But it also has some limitations.
When not to use a serverless architecture
Serverless shouldn’t be considered when:
- You need control over the hardware
- You need a deep feature set
- Your workloads require a high level of security
- You can find a cost-effective, high-performing alternative solution
Challenges of serverless architecture
Serverless architecture has several advantages. Still, implementing it has many challenges, such as cold start, vendor lock-ins, and more.
#1. Cold starts
Cold starts are the delays that occur when a serverless function is invoked after sitting idle for a long time. The function can take a few extra seconds to start running, which hurts performance. One key cause is the function's dependencies.
For example, when you invoke a serverless function, all of its dependencies are imported into the container, increasing latency. Another cause of cold starts is large functions that require more setup time.
#2. Vendor lock-ins
Vendor lock-ins are a significant challenge for organizations opting for serverless architecture at different levels like API, cloud services, and more.
For example, many organizations have tightly coupled APIs with serverless infrastructure, causing issues when they want to move away from a specific vendor.
Similarly, many companies use specific cloud services and couple them with serverless architecture. For example, AWS cloud services work best with their serverless offerings.
However, if you want to use cloud services from other vendors, tightly coupled serverless architecture becomes a bottleneck.
#3. Opinionated application design
Serverless architecture has specific requirements you must fulfill while designing your applications. For example, your app design must be cloud compatible. In other words, some app components need to be deployable on the cloud.
Further, serverless architecture works best with the microservices approach. You can add monolithic services to the app with serverless architecture, but performance and efficiency will see some impact.
#4. Stateless executions
Serverless functions execute statelessly, which can cause issues because local caching may hold only part of the required information.
In other words, no state is shared between invocations, so you must design your apps so each function has all the information it needs before it executes.
Further, every piece of external state must be fetched at the beginning of an execution and exported back out before the function finishes.
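The fetch-at-start, export-at-end pattern above might look like this in a handler. The module-level dict stands in for an external store such as DynamoDB or Redis; the function itself keeps no state between invocations:

```python
import json

# Stand-in for an external store (DynamoDB, Redis, etc.). The
# function must not rely on anything surviving between invocations.
STORE = {}

def increment_counter(event, context):
    key = event["counter_id"]
    value = STORE.get(key, 0)   # 1. fetch external state first
    value += 1                  # 2. pure, local computation
    STORE[key] = value          # 3. export state before returning
    return {"statusCode": 200, "body": json.dumps({"count": value})}
```
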
#5. Debugging insights
Serverless architecture inherently limits debugging visibility. The entire infrastructure lives in the serverless environment offered by the cloud provider.
So, you don’t have visibility into the complete execution context for debugging; you are limited to function logs and the specific components the provider exposes.
Criticisms of serverless architecture
Despite being popular for its scalability, affordability, and flexibility, serverless architecture is not without its detractors. Here are some of the potential drawbacks of serverless architecture to be considered.
- Long-running workloads cost more on serverless than on dedicated servers
- Cold-start delays when executing functions after idle periods
- Heavy dependency on your provider for debugging and monitoring tools, with limited control over the platform’s architecture and availability
- Increasing complexity and sprawl as the number of functions grows
- Difficulty conducting integration testing across a group of deployed functions due to their small size
Best Practices for implementing a serverless architecture
Starting with a well-designed solution is the best approach. The inherent simplicity of serverless architectures makes your goal of a decoupled, stateless application achievable.
#1. Manage code repositories
You can use frameworks such as the AWS Serverless Application Model (SAM) to break functionality into smaller services with separate code repositories, because managing the entire application logic in a single repository becomes difficult as the application scales over time.
For small serverless applications, you can even create one repository per function. In most cases, separate repositories help define individual functions clearly, especially in apps whose microservices consist of multiple smaller functions with shared resources.
#2. Use fewer libraries
Since functions incur cold starts and warm starts, use fewer libraries in your functions. AWS Lambda, for example, experiences longer cold starts when a large number of libraries must be initialized on invocation.
More libraries mean more time to initialize the function, resulting in longer cold starts. So the best approach is to keep your serverless functions’ dependencies to a minimum.
#3. Use platform-agnostic programming languages
One of the best ways to reduce dependencies and vendor lock-ins is to leverage platform-agnostic programming languages. Rather than using multiple programming languages, stick to the one that most platforms support.
#4. Analyze instances and memory requirements
Knowing the number of active instances and their related costs is essential before designing a serverless application. Also, know how much memory is required to execute the functions.
This will help you develop scalable, less complex serverless applications.
#5. Write single-purpose codes
Single-purpose code is easier to test, deploy, explain, and understand. It also limits how much work each invocation performs, saving costs and reducing bugs and dependencies.
Writing single-purpose functions keeps your serverless apps less complex and more agile, increasing development speed.
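A sketch of the single-purpose style: each function below does exactly one thing, so it can be tested, deployed, and scaled independently. The order fields and routes are illustrative:

```python
import json

# Two single-purpose handlers instead of one multi-purpose function
# that branches on an "action" field. Each maps to its own route and
# can be deployed and scaled on its own.
def create_order(event, context):
    order = {"id": event["order_id"], "status": "created"}
    return {"statusCode": 201, "body": json.dumps(order)}

def cancel_order(event, context):
    order = {"id": event["order_id"], "status": "cancelled"}
    return {"statusCode": 200, "body": json.dumps(order)}
```
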
#6. Develop and deploy powerful frontends
Designing strong frontends lets you execute complex functionality on the client side, reducing costs by minimizing function calls and execution time.
This matters when users expect immediate, seamless responses to their actions: they can access application features faster, improving performance and user experience.
The big picture
Simform built a serverless architecture solution for Food Truck Spaces, a food truck and property management platform.
It needed solutions to manage a scalable directory across multiple locations, including the data of food entrepreneurs, event organizers, and property owners. Further, it needed automation features for booking confirmations, records management, cancellations, payments, etc.
Simform built a responsive single-page application integrated with AWS Lambda to automate booking confirmation and manage online transactions. Users can now book slots for food trucks, manage records, send notifications, and confirm bookings.
Consequently, Food Truck Spaces saw a 30% increase in revenue. By leasing properties in different time slots to truck owners, it is able to make more profit.
So, if you want to go serverless, sign up for a 30-minute session with our experts for a fantastic ROI!
I'm curious how serverless would apply to stateful, connection-persistent services like live video streaming using RTSP or RTMP. As I understand it, the service usually needs to establish a persistent connection to a camera, start transcoding incoming packets/frames, and then serve them to clients (the serving part can be stateless). I can't wrap my head around how to turn this scenario into a fully serverless setup. If you could shed some light on a general direction, I'd greatly appreciate it.
For me a very critical factor is the cold/warm issue. I don't know about AWS, but with Azure Functions it can take as long as 5 seconds to get a response in a cold start. As of today, we work around this problem with a "ping" to the functions to keep them alive.
True, cold-start time, concurrency, and identifying the optimum memory size are critical for your serverless application. So far we have observed good performance with AWS Lambda, the worst startup time being 3000ms. There are other solutions to this problem, such as minimizing the function's deployment package size and leveraging container reuse by lazily loading variables. I have discussed some more challenges in this blog: https://www.simform.com/serverless-performance/