With unprecedented demand for innovation from business units and customers, enterprises are optimizing their core technologies for greater intelligence, speed, and agility. Businesses are building scalable web applications that offer high-quality service, excellent availability, strong performance, robust security, and good customer support. Amazon Web Services (AWS) tops the chart of cloud computing providers, having retained its position as the leading on-demand cloud computing platform. Having pioneered the cloud IaaS market in 2006, AWS has become the platform of choice for businesses looking to stay ahead of the competitive curve.
Overview of AWS services
Since its inception in 2006, AWS has built a commendable system of services that helps make your infrastructure secure, cost-efficient, and scalable. The broad categories into which AWS services are divided are Deployment, Administration and Security Services, Application Services, Data Services, and Infrastructure. AWS gives you a global footprint of Regions and Availability Zones, enabling you to place workloads close to your customers and shift them wherever you want. With the vast infrastructure available on AWS and its many services, enterprises can capitalize on what is offered. Designing and building a scalable application starts with a few vital components: Amazon Machine Images (AMI), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Route 53.
Apart from infrastructure, AWS also provides several application services. These are generic building blocks that help you cut costs. They include queueing services, email services, data warehouse services, caching services, notification services, and database services, among many other things.
Building a scalable application
Now that you’ve set up the infrastructure and have a good variety of services available to build a decent application, where do you begin? The start can be as simple as deploying an application in a single box. AWS makes it easy to create virtual machines by providing ready-made technology stacks in the form of Amazon Machine Images (AMI).
Amazon Machine Images (AMI)
AWS makes it simpler to set up virtual machines by providing the most commonly used technology stacks in the form of AMIs. An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance. An AMI includes a template for the instance's root volume, launch permissions that control which AWS accounts can use the AMI to launch instances, and a block device mapping that specifies the volumes to attach to the instance when it is launched.
Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud provides scalable computing capacity in the AWS cloud. This eliminates the need to invest in hardware up front, so you can develop and deploy applications faster. Amazon EC2 can be used to launch as few or as many servers as you need, configure security, and manage storage easily. Amazon EC2 also provides a firewall, in the form of security groups, that lets you specify the protocols, ports, and source IP ranges that can reach your instances.
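The way a security group filters traffic can be pictured with a small sketch. This is a conceptual model only, not AWS's implementation: the rule fields (protocol, port range, source CIDR) mirror the parameters of a real ingress rule, but the rule values and IPs below are hypothetical examples.

```python
import ipaddress

def rule_allows(rule, protocol, port, source_ip):
    # A rule matches when protocol, port range, and source CIDR all match.
    return (
        rule["protocol"] == protocol
        and rule["from_port"] <= port <= rule["to_port"]
        and ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"])
    )

# Hypothetical ingress rules: HTTPS from anywhere, SSH only from inside the VPC.
ingress_rules = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "10.0.0.0/16"},
]

def is_allowed(protocol, port, source_ip):
    # Security groups are default-deny: traffic passes only if some rule matches.
    return any(rule_allows(r, protocol, port, source_ip) for r in ingress_rules)
```

With these rules, HTTPS from any address is accepted, while SSH is accepted only from the 10.0.0.0/16 range.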
Amazon Virtual Private Cloud (Amazon VPC)
Amazon Virtual Private Cloud lets you launch AWS resources in a virtual network that you define. It gives you complete control over the virtual networking environment, including selection of the IP address range, subnet creation, and configuration of route tables and network gateways. Network configurations can be easily customized, and multiple layers of security, such as security groups, can be leveraged.
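The address planning behind a VPC can be sketched with Python's standard `ipaddress` module. The CIDR ranges here are hypothetical; this only illustrates the kind of subnet carving you would do before creating a VPC and its subnets.

```python
import ipaddress

# A hypothetical VPC address range.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 VPC range into /24 subnets, one per tier or Availability Zone.
subnets = list(vpc.subnets(new_prefix=24))
public_subnet_a = subnets[0]   # 10.0.0.0/24, e.g. web tier in one AZ
private_subnet_a = subnets[1]  # 10.0.1.0/24, e.g. database tier in the same AZ

# Every subnet must fall inside the VPC's range.
assert public_subnet_a.subnet_of(vpc) and private_subnet_a.subnet_of(vpc)
```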
Amazon Route 53
Amazon Route 53 is a highly available and scalable cloud DNS web service. It is designed to give developers an extremely dependable and cost-efficient way to route end users to Internet applications by translating domain names into the numeric IP addresses that computers use to connect to one another. Amazon Route 53 effectively connects user requests to infrastructure running in AWS, such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets.
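The name-to-address translation can be sketched in a few lines, here with weighted answers in the spirit of Route 53's weighted routing policy. The record names, IPs, and weights are made up, and the selection logic is illustrative, not the service's implementation.

```python
# Hypothetical DNS records: two fleets behind one name, weighted 3:1.
records = {
    "www.example.com": [
        {"value": "203.0.113.10", "weight": 3},  # primary fleet
        {"value": "203.0.113.20", "weight": 1},  # canary fleet
    ]
}

def resolve(name, seed):
    # Pick an answer in proportion to its weight; `seed` stands in for
    # the randomness a real resolver would use.
    answers = records[name]
    total = sum(r["weight"] for r in answers)
    point = seed % total
    for r in answers:
        if point < r["weight"]:
            return r["value"]
        point -= r["weight"]
```

Over many resolutions, roughly three quarters of clients land on the primary fleet and one quarter on the canary.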
Users: ~100
The original architecture is fine until your traffic ramps up. Once you approach around 100 users, you need to consider how to scale, and at this stage you can scale vertically to address the application's growing demands.
Vertical scaling basically means giving your existing machines more power, i.e. more CPU and RAM. In vertical scaling, the load is spread across the resources of a single machine, its RAM and CPU; scaling happens through multiple cores, and all the data lives on a single node.
Vertical scaling is limited by the capacity of a single machine: scaling beyond it is difficult, involves downtime, and has a hard upper limit. A good example of vertical scaling is Amazon RDS, the managed cloud version of MySQL, which makes it easy and smooth to move from a small machine to a bigger one, albeit with some downtime.
The best way to prepare for scaling is to separate the tiers and size each of them according to its resource needs; decoupling the application tiers makes the process much easier. When the servers are stateless, you can scale by adding more instances to a tier and load balancing incoming requests across EC2 instances using Elastic Load Balancing (ELB). ELB distributes incoming application traffic across EC2 instances. It is itself horizontally scaled, imposes no bandwidth limit, supports SSL termination, and performs health checks so that only healthy instances receive traffic.
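What ELB does for a stateless tier can be sketched as round-robin distribution that skips unhealthy targets. This is a conceptual model with hypothetical instance IDs, not the load balancer's actual algorithm.

```python
import itertools

# Hypothetical fleet: instance id -> current health-check status.
instances = {"i-aaa": True, "i-bbb": True, "i-ccc": False}
_cycle = itertools.cycle(sorted(instances))

def route_request():
    # Walk the rotation until a healthy instance is found, mirroring the
    # way health checks keep traffic away from failed instances.
    for _ in range(len(instances)):
        target = next(_cycle)
        if instances[target]:
            return target
    raise RuntimeError("no healthy instances")
```

Here `i-ccc` has failed its health check, so successive requests alternate between `i-aaa` and `i-bbb` only.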
When you scale your database instance up or down, your storage size remains the same and is not affected by the change. Make sure you have the correct licensing in place for commercial engines (SQL Server, Oracle), especially if you Bring Your Own License (BYOL); one important thing to call out is that for commercial engines you are restricted by the license, which is tied to the CPU cores. DB instances can easily be modified to increase the allocated storage space or to improve performance by changing the storage type. Vertical scaling of the database requires minimal downtime because the standby database gets upgraded first, and then a failover occurs to the newly sized database.
Users: ~1,000
Horizontal scaling essentially involves adding machines to the pool of existing resources. When users grow to 1,000 or more, vertical scaling alone can no longer handle the requests, and horizontal scaling is required. In horizontal scaling we add more machines, unlike vertical scaling where we add more resources to a single machine. Horizontal scaling relies on partitioning the data: each node holds only a part of it. Horizontal scaling is an easier process because you can scale fairly dynamically by simply adding machines to the pool. Examples of horizontally scalable databases are MongoDB and Cassandra. Horizontal scaling requires a distributed approach, and the application tiers should be decoupled. It helps maximize EC2 utilization and avoids over-provisioning.
The biggest advantage of horizontal scaling is that it lets administrators increase capacity as and when needed, which makes it one of the most attractive approaches in IT and the one most recommended for cloud computing: managers grow systems simply by adding hardware. Redundant data storage can be achieved by using additional pieces of hardware, which in turn reduces the chance of partial system failure. Horizontal scaling enables IT managers to build big systems by networking low-cost commodity hardware and adding more of it to the system when necessary.
Horizontal scaling affords the ability to scale with traffic: multiple hardware or software entities, such as servers, are connected so that they function as a single logical unit. The performance of a read-heavy database can be improved by scaling it horizontally with read replicas. Horizontal scaling is a cost-effective way to handle high-volume traffic, since you are not relying on a few expensive servers to handle all requests. RDS MySQL, PostgreSQL, and MariaDB can have up to 5 read replicas, and Amazon Aurora can have up to 15 read replicas.
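The read-replica pattern boils down to routing writes to the primary and spreading reads across replicas. The endpoint strings below are hypothetical placeholders, and the query classification is deliberately simplistic; this sketches the routing idea, not a production driver.

```python
import itertools

# Hypothetical endpoints: one writable primary, two read replicas.
PRIMARY = "mydb-primary.example.us-east-1.rds.amazonaws.com"
REPLICAS = [
    "mydb-replica-1.example.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.example.us-east-1.rds.amazonaws.com",
]
_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(query):
    # Writes must go to the primary; reads can fan out across replicas.
    # Replicas are asynchronously replicated, so reads may be slightly stale.
    if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return PRIMARY
    return next(_replica_cycle)
```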
Users: > 10k
When users increase to around 10k, database deployment is the first thing that needs to be addressed. It is a challenging task for the IT department, and it takes a lot of effort and patience to execute without hiccups. It involves both updating existing databases and deploying new ones. IT departments often fail to plan the deployment process properly: it covers the initial deployment, allotting time for necessary fixes, structuring the database, and carrying out configuration during upgrades. It is best not to leave deployment planning until development is done; instead, plan the entire process carefully, design and script it, and automate as much as you can, including the configuration system, so that when the time comes to deploy, you are ready.
There are two general approaches to deploying a database on AWS. The first is to use a managed database service such as Amazon Relational Database Service (Amazon RDS) or Amazon DynamoDB, and the second is to host your own database software on Amazon EC2.
Amazon Relational Database Service (Amazon RDS)
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It lets you enhance application performance while providing strong security and compatibility.
Amazon DynamoDB
Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistent, single-digit-millisecond latency. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, IoT, and much more.
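DynamoDB's key-value model can be pictured with a tiny in-memory sketch: items are addressed by a partition key (and, optionally, a sort key). The table, keys, and attributes below are made-up examples; a plain dict only illustrates the access pattern, not the service.

```python
# In-memory stand-in for a DynamoDB table keyed by (partition key, sort key).
table = {}

def put_item(pk, sk, attrs):
    table[(pk, sk)] = attrs

def get_item(pk, sk):
    # Direct key-based lookups are what give this model its predictable,
    # low-latency performance at scale.
    return table.get((pk, sk))

# Hypothetical item: one user's profile.
put_item("user#42", "profile", {"name": "Ana", "plan": "pro"})
```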
Users: > 500k
With a huge number of users, up to 500k, it is quite difficult to manage all the requests without automation and monitoring services; auto scaling comes into the picture when users climb toward this scale. Auto Scaling gets more efficiency out of your infrastructure once you have shifted workload components to the appropriate AWS services and decoupled your application. Auto Scaling is a web service designed to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks. It helps ensure that you have the exact number of Amazon EC2 instances needed to handle your application's load. Collections of EC2 instances are organized into Auto Scaling groups, and you can customize the size of each group; a group never grows above its maximum size. One of the best things about scaling on AWS is that it offers a set of building blocks to facilitate automation, spread across a spectrum of convenience versus control. AWS services that facilitate automation include AWS Elastic Beanstalk, AWS OpsWorks, AWS CloudFormation, and Amazon EC2.
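The core scaling decision can be sketched as follows: compare a fleet metric to thresholds and clamp the result to the group's minimum and maximum size. The CPU thresholds and group sizes are hypothetical values, not AWS defaults.

```python
# Hypothetical Auto Scaling group bounds.
MIN_SIZE, MAX_SIZE = 2, 10

def desired_capacity(current, avg_cpu):
    # A simple policy sketch: scale out when the fleet runs hot,
    # scale in when capacity is wasted.
    if avg_cpu > 70:
        current += 1
    elif avg_cpu < 30:
        current -= 1
    # The group never shrinks below MIN_SIZE or grows above MAX_SIZE.
    return max(MIN_SIZE, min(MAX_SIZE, current))
```

For example, a fleet of 4 instances at 85% average CPU would grow to 5, while a fleet already at the maximum of 10 stays at 10 regardless of load.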
AWS Elastic Beanstalk
AWS Elastic Beanstalk is a service that allows users to deploy code written in Java, .NET, PHP, Node.js, Python, Ruby, and Go, as well as Docker containers, on familiar servers such as Apache, NGINX, Passenger, and IIS. Environments and versions are the two primary concepts when using Elastic Beanstalk. An environment represents the infrastructure that is automatically provisioned to run your application. A version represents a specific iteration of your application code, which can be stored in either Amazon S3 or GitHub.
AWS OpsWorks
AWS OpsWorks provides a unique approach to application management. When a lifecycle event occurs, built-in or custom recipes can be executed. Additionally, AWS OpsWorks auto-heals your application stack, provides scaling based on time or workload demand, and generates metrics to facilitate monitoring. AWS OpsWorks gives you finer-grained control over application deployment than AWS Elastic Beanstalk.
AWS CloudFormation
AWS CloudFormation provisions resources using a template in JSON format. You can choose from a collection of sample templates to get started on common tasks. With CloudFormation, environments can be modified in a controlled way. Of all the AWS deployment services, CloudFormation gives the most control and granularity.
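A minimal template gives a feel for the format. The sketch below declares a single EC2 instance; the `ImageId` is a placeholder you would replace with a real AMI ID for your region, and the description text is an invented example.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: a single EC2 web server",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}
```

Because the template is declarative, updating it and re-applying the stack is how environments get modified in a controlled way.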
AWS CodeDeploy
AWS CodeDeploy is a platform service for automating code deployments to Amazon EC2 instances and to instances running on premises. Deployments can be launched, stopped, and monitored through the AWS Management Console, the AWS Command Line Interface, the APIs, or the SDKs. CodeDeploy supports the concept of deployment health: the minimum number of instances that need to remain healthy can be set.
Service Oriented Architecture
Users: > 1 million
Service Oriented Architecture (SOA) comes into the picture when the number of users is more than 1 million. The benefit of SOA is the ability to scale each service independently; the web and application tiers will have different resource requirements and different services. AWS provides a host of generic services to help you build the infrastructure quickly. They are:
Amazon Simple Queue Service (SQS)
Amazon Simple Queue Service (SQS) is a fully managed message queuing service used to decouple and scale microservices, distributed systems, and serverless applications. It is a simple and cost-effective way to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume.
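The decoupling SQS provides can be sketched with an in-memory queue: the producer returns as soon as the message is enqueued, and a worker processes it later. Python's `queue.Queue` stands in for the managed SQS queue here, and the order IDs are made-up examples.

```python
import queue

# In-memory stand-in for an SQS queue between a web tier and a worker tier.
order_queue = queue.Queue()

def submit_order(order_id):
    # The web tier returns immediately after enqueueing; it does not
    # wait for, or even know about, the worker tier.
    order_queue.put({"order_id": order_id})

def process_next_order():
    # A worker pulls one message and handles it, like an EC2 worker
    # polling SQS for the next message.
    msg = order_queue.get()
    return f"processed {msg['order_id']}"
```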
Amazon Simple Notification Service (SNS)
Amazon Simple Notification Service (SNS) is a flexible pub/sub messaging and mobile notifications service for coordinating message delivery to subscribing clients. With SNS you can send messages to a large number of subscribers. The benefits are easy setup, smooth operation, and high reliability in sending notifications to all endpoints.
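The pub/sub fan-out that distinguishes SNS from a queue can be sketched as follows: one publish call delivers a copy of the message to every subscriber of a topic. The topic name and handlers are hypothetical, and real SNS endpoints (email, SMS, SQS, HTTP) are reduced here to plain callables.

```python
from collections import defaultdict

# topic name -> list of subscriber callbacks (stand-ins for SNS endpoints).
subscriptions = defaultdict(list)

def subscribe(topic, handler):
    subscriptions[topic].append(handler)

def publish(topic, message):
    # Unlike a queue, where one consumer takes each message,
    # every subscriber receives its own copy.
    for handler in subscriptions[topic]:
        handler(message)
```

Two subscribers to the same topic each receive every published message, which is what makes SNS suitable for broadcasting events to many components at once.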
Cloud architects use AWS because of its rich ecosystem and the breadth of its services and APIs. Software development using cloud computing requires vision to be successful. This blog lays out a path for using AWS to grow an application from a single user to a million users. For a single user the application infrastructure is very simple, consisting of an AMI, a VPC, and EC2. As the number of users increases, the cloud architecture becomes more complex, adding vertical scaling, horizontal scaling, databases, auto scaling, and a service-oriented architecture.