Serverless Security: What are the Security Risks & Best Practices?
Since you’ve landed on this blog, you’re either already using a serverless architecture or want to learn about the security of this cloud computing model. Many cloud providers offer robust services through it and bundle strong security features with their offerings. Regardless of the architectural pattern used for data storage, however, web and software applications still face plenty of security threats.
A recent incident of such a security threat occurred in March 2019 at Norsk Hydro. This unfortunate debacle cost the company a steep $71 million. They had to switch suddenly from automated to manual operations for a period and were forced to stop production at 170 plants. Eventually, Microsoft came to their rescue and helped put the units back into production.
Another example is a DDoS attack reported by Amazon Web Services in February 2020. At 2.3 Tbps, it was the largest on record, meaning up to 2.3 terabits per second of web traffic hit AWS servers at its peak. Thankfully, AWS Shield staff managed to mitigate the risk. The attackers used hijacked Connection-less Lightweight Directory Access Protocol (CLDAP) web servers as reflectors. According to ZDNet, the targeted customer is unidentified, but the event did cause three days of “elevated threat” for the AWS Shield employees.
These examples show that it’s part of our due diligence to learn best practices and protect our systems from security risks. The serverless approach does lessen the development team’s security burden, but it doesn’t eliminate it altogether. That’s why it’s best to understand its impacts, risks, and best practices before adopting the model.
Here’s what this post will cover:
- How does the serverless model protect an application without a physical server?
- What do we mean by serverless security?
- Who provides function as a service?
- What are some common security risks?
- What are the security best practices one could follow?
Overview: What is Serverless Security?
Though organizations don’t need to worry about infrastructure-related operations in a serverless model, specific security concerns still need to be addressed.
Here’s why serverless architecture/model needs protection:
- The serverless model doesn’t use firewalls or intrusion detection system (IDS) tools.
- The architecture doesn’t have instrumentation agents or protection methods such as file transfer protocols or key-based authentication.
In this architecture, the data requested by a user is stored on the client side. Take Twitter, for example: when you load more tweets, the page refreshes on the client side, and those tweets are cached on the device. It is the client side that holds this data. That’s why the focus is on permissions, behavioral protection, data security, and strong code that shields applications from coding and library vulnerabilities.
This means that organizations will focus more on…
- Core functionalities of the product
- Development practices
- Increasing productivity
- Reduction of time to market
- Customer satisfaction
- Improvement in quality
And will focus less on…
- Operating system
- Runtime environment
- Infrastructure operability and complexity
It’s important to assess the security of modern technology in spite of its advanced features. Cloud computing has taken over the entire world of software, mobile, and web apps, but how does it protect your apps?
Before we explore the questions listed at the start of this post, however, it’s essential to understand that cloud providers offer serverless models in the form of two distinct services: (i) Functions as a Service (FaaS) and (ii) Backend as a Service (BaaS).
For more information on FaaS and BaaS models, please refer to serverless architecture.
Organizations must take precautions to secure serverless apps because this architecture decomposes software into even smaller pieces than microservices: independent, miniature functions that interact through multiple APIs, which become publicly reachable once exposed via the cloud provider. This mechanism creates a serious security loophole where attackers could probe those APIs, posing challenges even for the security of complex enterprise software.
Here’s the graphical representation of how the serverless model works in comparison with traditional virtual machines:
What are Some Serverless Security Risks & Challenges?
- Insecure configuration: Cloud service providers offer multiple out-of-the-box settings and features. These defaults are meant to provide reliable, authenticated offerings, but if configurations are left unattended, they can turn into major security threats.
- Function permissions: The serverless ecosystem has many independent functions, and each function has its own services and responsibilities for a particular task. Interaction among these functions is extensive, and it can leave a function overprivileged with permissions or rights, say, a function meant only to send messages being granted access to the database inventory.
- Event-data injection: Input arrives from many different event sources, each with its own message format, and multiple parts of these messages may contain untrusted input that needs careful assessment.
Here’s how you can avoid this security risk:
- Keep your data separate from commands and queries.
- Make sure that the code is running with minimum permissions required for successful execution.
- Use a safe API to invoke your function, which avoids the use of the interpreter entirely.
- Use SELECT LIMIT and related SQL clauses (if your function is dealing with a SQL database) to prevent mass disclosure of records in case of SQL injection. The general syntax is:

[ORDER BY expression [ ASC | DESC ]]
LIMIT number_rows [ OFFSET offset_value ];
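The two tips above (a safe, parameterized API plus a LIMIT cap) can be sketched together. This is a minimal, hedged example using Python’s built-in sqlite3 module; the orders table and its columns are made up for illustration:

```python
import sqlite3

# In-memory database standing in for the function's real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1000)])

def fetch_orders(customer, limit=100):
    """Parameterized query: user input never reaches the SQL interpreter,
    and LIMIT caps how many rows a single (possibly malicious) call can pull."""
    cur = conn.execute(
        "SELECT id FROM orders WHERE customer = ? ORDER BY id ASC LIMIT ?",
        (customer, limit),
    )
    return [row[0] for row in cur.fetchall()]

print(fetch_orders("user1"))        # one matching row: [1]
print(fetch_orders("' OR '1'='1"))  # treated as data, not SQL: []
```

Because the placeholder binding treats the attacker-controlled string as plain data, the classic `' OR '1'='1` payload matches nothing instead of dumping the table.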
- Insecure storage: Sometimes developers keep application secrets in plain-text configuration, making the application storage environment insecure. These tiny flaws can grow into bigger security threats in serverless hosting.
- Function monitoring & logging: Cloud providers may not provide adequate logging and monitoring facilities for applications, which introduces risk at the application layer.
Here are some of the reports you should generate on a regular basis for effective monitoring and logging, as recommended by the SANS Essential Categories of Log Reports. You can also set alarms in Amazon CloudWatch so that suspicious activity in any of the below-mentioned reports is notified to you:
- A report containing all login failures and successes by user, system and business unit
- Login attempts (success & failures) to disabled/service/non-existing/default/suspended accounts
- All logins after office hours or “off” hours
- User authentication failures by the count of unique attempted systems
- VPN authentication and other remote access logins
- Privileged account access
- Multiple login failures followed by success via the same account
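As an illustration, the last report in the list (multiple login failures followed by a success) can be computed from raw login events. This is a minimal sketch; the `(account, outcome)` event format and the threshold are assumptions for illustration:

```python
from collections import defaultdict

def flag_suspicious_accounts(events, threshold=3):
    """Flag accounts where `threshold` or more consecutive login failures
    are immediately followed by a success (a classic brute-force pattern)."""
    streak = defaultdict(int)   # consecutive failures per account
    flagged = set()
    for account, outcome in events:  # events assumed in chronological order
        if outcome == "failure":
            streak[account] += 1
        else:  # success resets the streak, flagging if it was long enough
            if streak[account] >= threshold:
                flagged.add(account)
            streak[account] = 0
    return flagged

events = [
    ("alice", "failure"), ("alice", "failure"), ("alice", "failure"),
    ("alice", "success"),                    # 3 failures then success
    ("bob", "failure"), ("bob", "success"),  # a single typo-style failure
]
print(flag_suspicious_accounts(events))  # {'alice'}
```

In practice the events would come from centralized logs (e.g. CloudWatch Logs), and the flagged set would feed an alarm rather than a print statement.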
- Broken authentication: Unlike a traditional application, your serverless application is accessible to everyone once it’s published on the cloud. It also promotes an even more fine-grained design than microservices and hence can contain thousands of functions. So you must apply robust authentication for end-user access and carefully orchestrate the many functions involved.
Incorporating an extensive authentication system that exercises control over these APIs is highly critical. Below is how you can make sure every endpoint is authenticated:
- Store users’ active tokens in the database and authenticate them against every request. This lets you enforce token authentication, invocation counts, and expiration time limits. If you’re just starting with serverless architecture, Amazon Cognito is an easy solution that will suffice.
- If user authentication is not an option, use secure API keys or SAML assertions instead.
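A minimal sketch of the token-store idea above, using an in-memory dict in place of a real database (the names and limits are illustrative; a managed service such as Amazon Cognito would replace this in practice):

```python
import time

# Hypothetical in-memory token store standing in for a real database.
active_tokens = {}

def issue_token(token, user, ttl_seconds=3600, max_invocations=100):
    """Register a token with an expiry and an invocation budget."""
    active_tokens[token] = {
        "user": user,
        "expires_at": time.time() + ttl_seconds,
        "invocations_left": max_invocations,
    }

def authenticate(token):
    """Validate on every request: the token must exist, be unexpired,
    and still have invocations left; otherwise the request is rejected."""
    entry = active_tokens.get(token)
    if entry is None or time.time() >= entry["expires_at"]:
        return None
    if entry["invocations_left"] <= 0:
        return None
    entry["invocations_left"] -= 1
    return entry["user"]

issue_token("abc123", "alice", ttl_seconds=60, max_invocations=3)
print(authenticate("abc123"))   # 'alice'
print(authenticate("missing"))  # None
```

The point is the check-on-every-request discipline: each function invocation re-validates the token rather than trusting earlier calls.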
- Automatic resource allocation: A serverless application doesn’t become blocked or unavailable when bombarded with fake requests as part of a cyber attack. Instead, the cloud provider automatically scales up resources under the “autoscaling” and “pay for what you use” model. This is unlike traditional architectures, where applications stop serving requests and become unavailable during such attacks; here, the risk shifts to a rapidly inflating bill instead.
- Poor handling of verbose error messages: Sometimes developers forget to clean up code when moving applications to production, and verbose error messages remain as they are. This leaks secret information about serverless functions, which must be withheld within the system as confidential data to protect the apps from attackers.
- System complexity: The system’s overall complexity increases when handling multiple functions and third-party dependencies compared to traditional infrastructure. Malicious packages become difficult to detect because behavioral security controls can’t be applied.
- Lack of security testing: Standard apps are comparatively easy to test; serverless apps add the complexity of third-party integrations with database services, back-end cloud services, and other dependencies. As a result, security testing often falls short.
- Complex attack surface: A serverless application has an expanded attack surface because of its vast range of event sources and smaller application parts. Its functions are triggered through multiple interactions: API gateway commands, cloud-storage events, and many others. That makes eliminating the risk of malicious event-data injection difficult.
This is what you do about dependencies in serverless applications:
- Remove unnecessary dependencies, unused features, components, files, and documentation. Continuously monitor the versions of frameworks and libraries and their dependencies on both the client and server side.
- Components should be obtained from official sources with signed packages to reduce the chance of malicious components.
- Apply security patches to older versions of libraries and their components.
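As a toy illustration of monitoring dependency versions, here’s a sketch that flags Python requirement lines without an exact version pin; unpinned dependencies can silently pull in a newer, possibly compromised release. Real projects would lean on a dedicated scanner (snyk.io, pip-audit, npm audit) rather than this:

```python
def unpinned_requirements(lines):
    """Return requirement lines that don't pin an exact version with '=='."""
    flagged = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            flagged.append(line)
    return flagged

reqs = ["requests==2.31.0", "boto3", "flask>=2.0", "# dev tools below"]
print(unpinned_requirements(reqs))  # ['boto3', 'flask>=2.0']
```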
Adnan Rahic, developer advocate at Dashbird, says that the practice of serverless security tends to come with time. When you’re young and eager to build new software, security takes a back seat. You build the app and hope you don’t get hacked. As a developer matures, this mindset changes.
The first step is to learn standard programming best practices. Validating input is by far the most important: any input to your system can be malicious, regardless of who sends it. Parse it correctly; don’t cut corners here.
Once this is done, the next crucial steps are permissions and secrets. AWS KMS is a must if you want to manage secrets and keys efficiently, while permissions can be managed with ease through IAM users and roles. Bear in mind, this is most crucial for S3 buckets: secure your buckets immediately, because they are the most vulnerable part of the whole system.
Once you have these key steps taken care of, the rest is keeping it all working flawlessly. This is where monitoring tools come into play: learn how to use Dashbird, CloudWatch, IOpipe, Thundra, or any similar tool.
Serverless Security Best Practices & How to Better Secure Serverless Functions
When you go serverless, you don’t need to secure a server the way you would with a server-hosted application. However, you still share responsibility with your provider for securing the application. And since it’s a serverless architecture, you need practices that protect the serverless apps themselves, rather than just detecting intrusions at a firewall as in server-hosted apps.
These are several best practices organizations undertake to secure their serverless apps:
Limited access of permissions
The serverless ecosystem is made up of various functions. Organizations may mistakenly grant functions overly broad permissions, giving them access to multiple parts of the application. Organizations also need to limit employees’ access to functions while working closely on projects.
So, they’d be setting permissions and limiting access for two entities:
i) Give limited rights to employees about accessing functions:
You have to limit permissions to specific roles. For example, if you’re using AWS or Microsoft Azure as your cloud-service vendor, you can set limits or do a role-based setup for accessing function features. Organizations have to define the roles for their functions and distribute them among employees based on their responsibilities.
ii) Give functions limited access to confidential information:
This is about restricting resources per function. Much like limiting access to a team member, you could also regulate what part of your application a serverless function can access.
It sets a barrier for functions that could cause security breaches. It especially helps when attackers succeed in compromising one function but fail to reach the rest of the system.
When enterprise systems are in danger, this segmentation of functions becomes a safety net that prevents disasters. The permissions granted to a particular function also influence its runtime behavior; thus, you should consider limiting them.
How to create roles and segregate functions to avoid these pitfalls?
- Create custom roles as per the needs and apply them to functions and on individual accounts or to a group.
- Set limits for one account or multiple accounts.
- Create identity-based roles and separate permissions for one user or apply a set of permissions for different roles.
Listed below are some resources that can come in handy:
- Serverless “Least Privilege” Plugin by PureSec
- AWS Identity & Access Management Best Practices Guide
- Microsoft Azure Identity Management and Access Control Best Practices Guide
- Leverage principle of least privilege
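To make the principle of least privilege concrete, here’s a hedged sketch that builds an AWS IAM policy document scoped to read-only access on a single DynamoDB table; the table ARN and the chosen actions are illustrative, not a prescription:

```python
import json

def read_only_table_policy(table_arn):
    """Least-privilege policy document: the function may only GetItem/Query
    one specific table; no wildcard actions or resources."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
        }],
    }

policy = read_only_table_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/orders")
print(json.dumps(policy, indent=2))
```

Compare this with a `"Action": "*"` / `"Resource": "*"` policy: if the function is compromised, the attacker gets two read operations on one table rather than the whole account.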
Monitoring serverless functions
You should regularly assess all functions. Tracing them end-to-end enhances your visibility, helps you detect problems quickly, and keeps the focus on actionable insights. Security teams should also audit network logs at regular intervals.
These are some ways you can manage security logs:
- Keep track of the number of failing executions.
- Track the number of functions executed.
- Assess the performance of functions based on the time taken for execution.
- Measure the concurrency based on the number of times a particular function is executed.
- Measure the amount of provisioned concurrency in use.
- Centralize logs from multiple accounts for real-time analysis.
Automate security controls
Security teams should automate configuration and test-driven checks. Automating these checks saves you from the complexity and larger attack surface of serverless architecture. You can integrate tools for continuous monitoring and access management; for example, dependency scanners such as snyk.io. Development teams can also write code that automates scanning for confidential information and checking permissions.
Refer to this potential checklist to automate security controls:
- Check whether function permissions allow excessive privileges an attacker could exploit.
- Involve security analysis in CI/CD pipeline; automate continuous checking for vulnerabilities.
- Educate development teams on moving to infrastructure as code.
- Implement audit-logging policy and network configuration to detect compromised functions.
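One way to wire the CI/CD check from the list above into a pipeline: a sketch assuming GitHub Actions with the pip-audit and bandit tools for a Python codebase (the workflow name and `src/` path are illustrative):

```yaml
# .github/workflows/security.yml - run dependency and static checks on every push
name: security-checks
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pip-audit bandit
      - run: pip-audit -r requirements.txt   # known-vulnerable dependencies
      - run: bandit -r src/                  # common insecure code patterns
```

A failing step blocks the merge, so vulnerable dependencies are caught continuously rather than during an occasional manual review.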
Check the reliability of third-party components
Developers often pull components from various third-party platforms. It’s a best practice to check the reliability of those sources, and whether the links they refer to are secure; it saves you from unexpected vulnerabilities. What’s more, remember to check for the latest versions of components taken from open-source platforms. Because developers mostly use open-source components in modern applications, it’s hard to trace vulnerabilities in that code or detect issues. Getting timely updates and the latest versions keeps you ahead and avoids unplanned threats.
How to do it?
- Regularly check updates on development forums.
- Avoid using third-party software with too many dependencies.
- Use automated dependency scanner tools.
Secure sensitive credentials
It’s recommended to store sensitive credentials, such as database credentials, in safe places and keep access to them extremely limited. Furthermore, be extra careful with critical credentials like API keys. Use environment variables for settings evaluated at runtime, and configuration files for settings fixed at deployment time. It can be a nightmare if the same configuration file is shared across multiple functions and you have to redeploy services whenever deploy-time variables change.
What are some ideal practices?
- First and foremost, rotate the keys on a regular basis. Even if you’re hacked, this ensures that access to hackers is cut-off.
- Every developer, project, and component should have separate keys.
- Encrypt environment variables and sensitive data using encryption helpers.
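A minimal sketch of reading a secret from an environment variable rather than hardcoding it; the variable name is hypothetical, and in production the value would be injected by your provider (ideally encrypted via its encryption helpers) rather than set in code as done here for demo purposes:

```python
import os

def get_secret(name):
    """Read a secret from the environment instead of hardcoding it,
    failing loudly if it was never injected."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# Demo only: normally the platform injects this; never commit real values.
os.environ["PAYMENTS_API_KEY"] = "dummy-value-for-demo"
print(get_secret("PAYMENTS_API_KEY"))  # dummy-value-for-demo
```

Failing loudly on a missing variable is deliberate: a silent default is exactly the kind of insecure fallback that ends up in production.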
Deployment at a specific time
You should look for the best time to deploy an application module so users get a bug-free platform. Ensure that you deploy modules when users are not engaged with the platform in real time and your platform is experiencing less traffic. To protect applications against attackers, you could also restrict deployments to certain time intervals.
What should be an ideal deployment practice?
- Deploy at idle times.
- Avoid rush hours like Black Friday.
- Avoid huge system updates in real-time; set specific time slots for upgrades.
Developers should also keep certain geographic considerations in mind while deploying app modules. Code deployed from different geographic locations can create deployment issues. For example, a developer located in New York targets the us-east-1 region, while a developer in Asia targets a different region in their deployment settings.
- Use a single region or a suitable time zone for deployment.
- Use Safeguards when working with the Serverless Framework to avoid unexpected problems in continuous development and to manage work dependencies.
What to do Next?
Serverless architecture introduces a new paradigm of application development. And there’s no denying that new opportunities do have unique challenges. However, it could also mean incredible benefits in return––such as cost-efficiency, scalability, and ease of managing infrastructure.
As this architecture shifts an organization’s focus from infrastructure operations to developing quality code, it also requires extra care when handing cloud providers the responsibility for your application infrastructure. One has to play it carefully and sensibly while adopting security best practices for this technology.
What’s more, with companies increasingly going ‘serverless’, cloud providers are sure to keep strengthening the security features they offer. Do you have experience with serverless security and best practices for this architecture? Share it with me in the comments section or join me on Twitter @RohitAkiwatkar to take the discussion further.