Angular Performance Tuning: 15 Ways to Build Sophisticated Web Apps
Angular is, by default, a powerful and high-performing front-end framework. Yet unexpected challenges are bound to surface when you’re building mission-critical web apps that are content-heavy and architecturally complex.
When a performance crisis hits hard, there is a common pattern in how PMs usually try to solve it: browsing their way through a few conference talks and implementing every tip they come across. Does this solve all the issues? No, because it approaches the problem backwards. In other words, finding a solution and applying it to a problem you have yet to identify will lead you to a dead end.
If you’ve been there before, this blog is for you. This article will help you systematically identify bottlenecks and discuss comprehensive Angular performance optimization techniques. Before we jump into the solutions, let’s discuss how to get started.
How to Optimize Angular Performance: Where to Begin?
Performance issues manifest themselves in ways that directly affect the end-users’ experience. A decline in traffic, decreasing engagement, and a high bounce rate are some of the signals that can quickly tell you your application is struggling with performance.
In cases like these, our goal should be to identify the issue and then work on optimizing it. However, before we start with the Angular performance optimization, let’s understand a few common performance bottlenecks and ways to improve them.
Some of the common problems faced by applications are:
- Unnecessary server usage
- Slow page response
- Unexpected app crash
- Periodic slowdown
- Not meeting expected results with the migrated technology
- Unexpected errors due to a real-time data stream
These problems can certainly be rectified using Angular optimization techniques. But before you get there, the first thing you need to ensure is that your app is adhering to the principles of clean coding architecture. From our first-hand experience at Simform, it has helped us be consistent with our code quality and build code that’s easily readable, testable, and maintainable.
Coming back to the common problems we discussed above, here are some solutions you can apply to boost your app’s performance. But let us tell you: these are quick fixes.
A few solutions to rectify fundamental Angular performance issues:
- Remove unnecessary change detection that slows down the application.
- Apply the OnPush change detection strategy where appropriate.
- Batch slow HTTP requests (use a service aggregator such as GraphQL).
- Optimize the hosting (cache static content, use a PWA).
- Remove unnecessary mathematical recomputations.
- Reduce the size of the bootstrap logic.
Check out how we built a highly performant Angular app for the food truck industry.
Tips/Methods to Optimize Angular Performance
Angular does a great job of providing high-performing apps out of the box. But if you develop a large-scale app that performs multiple heavy computations, you may start to see performance bottlenecks, and these issues aren’t solved completely with basic quick fixes.
In this section, we discuss some prominent performance issues in detail and a few ways to deal with them. This doesn’t mean you have to apply every one of these solutions; knowing a few basic techniques can improve your web app’s performance significantly.
This is why I love @angular Not only you get this great framework that’s evolving and improving everyday, but also other amazing libraries that works with it to make better web apps. I just learned about angular flex-layout and I loved it. #AngularConnect
— Bünyamin Coşkuner (@bnymncoskuner) November 7, 2017
Let’s start by understanding Ahead of Time Compilation, which is a part of both run time and load time performance.
Ahead-of-Time (AoT) Compilation
The user impact of AoT compilation is significant because it directly affects the total time required to bootstrap the application. The Angular AoT compiler pre-compiles the HTML and TypeScript code before the browser downloads it.
Because the application is compiled during the build process and arrives pre-compiled, rendering is faster and bootstrap time drops significantly. Hence, user interactions become incredibly smooth and quick.
Main performance trade-offs between just-in-time (JIT) and ahead-of-time (AOT) compilation. With @graalvm you have both options.
— Thomas Wuerthinger (@thomaswue) July 1, 2019
To give you a fair idea of the comparison between the bootstrap time for AoT and JIT compilation, let’s take a look at this graph:
It demonstrates the difference in time-to-interactive between two Angular applications: One built with AoT and the other without it.
It was observed that the total JS payload in JIT mode was 89% higher, a drastic difference from the application that used AoT. So how did this happen?
The Angular AoT compiler pre-compiled the HTML and TypeScript code before the browser downloaded it, which resulted in faster rendering and, in turn, a significantly lower bootstrap time.
Additionally, the shorter turnaround time of the AoT-driven version was attributed to its smaller payload: only 101 KB of JS (gzipped) against 190 KB (gzipped) for the non-AoT application.
What Ahead-of-Time Compilation Offers
- Ships a pre-compiled version of the application, giving faster rendering times
- Reduces the number of asynchronous requests, which translates into a smoother user experience
- Reduces the application payload by pre-compiling the application
- Inlines external HTML templates and CSS, so the application loads with its styles and animations in place
- Detects template errors earlier, at build time instead of at runtime
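For reference, here is how a production build with AoT can be requested from the Angular CLI. In Angular 9 and later, AoT is already the default for production builds, and exact flags vary by CLI version, so treat this as a sketch:

```shell
# Production build with AoT compilation (default since Angular 9):
ng build --prod --aot
```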
Runtime Performance Optimization
Chrome offers an array of dev tools that help you evaluate the performance of your application or web page, reviewing its responsiveness, idle time, and faults, among other things. Although these tools help rectify basic shortcomings, your app still needs some tweaks for the major problems.
Let’s discuss some ways to increase Angular runtime performance.
Change Detection Optimization
Change detection in Angular is a powerful mechanism that can help you optimize performance. Yet it often requires tuning, since one solution doesn’t solve all problems.
Change detection detects new entries and updates. It reflects the component data changes and automatically re-renders the application to reflect those changes in the view.
However, when it comes to large-scale, complex applications, change detection can become a challenge, because frequent change detection cycles strain the browser’s main thread.
For instance, every check starts from the main app component and walks down to verify changes across the sub-components.
In such cases, we can optimize the process by applying three explicit change detection strategies: OnPush & immutability, pipes instead of methods, and detaching change detection.
Now, these methods are not going to alter the purpose of change detection but filter the actions made by change detection. Let’s take a brief look at them:
OnPush & Immutability
OnPush narrows down where change detection is applied. With the default strategy, change detection re-checks the entire tree, from the root component down to the smallest subtree. This slows down the process, resulting in longer waits and a worse user experience.
OnPush restricts change detection to the specific branch it is applied to; it does not trigger re-checks of the root component or other subtrees. Whenever a component tree doesn’t need to be checked, it can be skipped. For example, given two component trees, one accepting prime-number entries and one accepting non-prime entries, you can check the entered value only in the prime-number tree.
There is no need to check a sub-component tree that a single-purpose operation, entering a prime-number value in this case, does not touch. So, wherever sub-component trees can be skipped, OnPush is the best way to scope change detection.
As described above, applying OnPush does a good job of minimizing change detection. But there are cases where even that needs a bit of optimization. Let’s see how.
In some cases Angular has to recalculate values repeatedly, and you can use immutability of objects to reduce this work. An immutable update returns a new object reference, which change detection can spot cheaply and use to re-render the affected part of the DOM. A mutable object, by contrast, keeps the same reference when its values change, so OnPush change detection is not triggered and the DOM is not re-rendered. If you want richer immutable data structures, you can use immutable.js types, for instance List instead of Array.
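As a minimal sketch (the component and model names are illustrative, not from the article), an OnPush component combined with immutable updates might look like this:

```typescript
import { ChangeDetectionStrategy, Component, Input } from '@angular/core';

// Illustrative model; the shape of Todo is an assumption.
interface Todo {
  id: number;
  label: string;
}

@Component({
  selector: 'app-todo-list',
  // Only re-check this subtree when an @Input reference changes.
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<ul><li *ngFor="let todo of todos">{{ todo.label }}</li></ul>`,
})
export class TodoListComponent {
  @Input() todos: Todo[] = [];
}

// In the parent component, update immutably so the input reference changes:
//   this.todos = [...this.todos, newTodo]; // new array -> OnPush re-renders
// A plain mutation keeps the old reference and OnPush will miss it:
//   this.todos.push(newTodo);              // same array -> no re-render
```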
Detach change detection
For heavy computations, it can be better to skip change detection altogether, since checking a large number of component instances adds significant load. When there are too many events for a heavy app to handle, change detection becomes an expensive approach to manage, and even after applying OnPush, rendering can remain costly for the child elements in some applications.
You can minimize change detection by detaching it and re-checking manually only when the data is actually set, which shortens the turnaround time for user actions. This prevents checks from cascading deep down the tree, limiting the re-check to the given component only.
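A minimal sketch of this technique, assuming a hypothetical component that receives large data sets (all names here are illustrative):

```typescript
import { ChangeDetectorRef, Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-heavy-panel',
  template: `<p>{{ rows.length }} rows loaded</p>`,
})
export class HeavyPanelComponent implements OnInit {
  rows: number[] = [];

  constructor(private cdr: ChangeDetectorRef) {}

  ngOnInit(): void {
    // Detach this component from the change detection tree:
    // Angular stops checking it automatically.
    this.cdr.detach();
  }

  setRows(rows: number[]): void {
    this.rows = rows;
    // Run one manual check now that the data is actually set.
    this.cdr.detectChanges();
  }
}
```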
Using Pipes instead of methods
Change detection can also be optimized using Angular’s pipes mechanism. Pipes come in two types: pure and impure.
Pure pipes avoid recalculating values and complicated computations: a pure pipe is re-invoked only when its input differs from the previous invocation. Hence, you can minimize the use of functions or methods in templates by replacing them with pipes, because a pipe is called only when its input values change, while a function or method is called on every change detection cycle.
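As a sketch, a pure pipe replacing a template method might look like this (the pipe name and computation are illustrative):

```typescript
import { Pipe, PipeTransform } from '@angular/core';

// Pipes are pure by default: transform() runs only when the input
// value or arguments change, not on every change detection cycle.
@Pipe({ name: 'power' })
export class PowerPipe implements PipeTransform {
  transform(base: number, exponent = 2): number {
    return Math.pow(base, exponent);
  }
}

// Template usage: {{ value | power:3 }} instead of {{ cube(value) }},
// where cube() would be re-invoked on every change detection pass.
```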
Web Workers for a Non-Blocking User Interface
Processes like data encryption and image resizing occupy the main thread, which freezes the user interface, and users find a frozen application annoying to use. Web workers move these complex processes onto a separate thread, keeping the main thread out of the background work and the user interface running smoothly.
Here are some types of use cases/apps that benefit from web workers:
- Complex calculations
- Real-time content formatting
- Progressive web app
- Extensive data update on a database
- Image filtering
3 Common Issues that Applications Struggle with & How Web workers Convert them into a Success
These three common problems are image resizing, ray tracing, and data encryption. What do they have in common? All involve heavy computations the CPU must perform in fractions of a second in the background. Handing this work to web workers keeps your app components trouble-free and frees up the user interface.
Let’s understand in detail each case:
Use case 1:
Resizing images is where web workers come into the picture: they distribute the computation across separate threads, reducing the burden on the main thread. This does not block the UI; the work continues in the background without hindering the user experience.
Use case 2:
Ray tracing is a rendering technique that relies on CPU-intensive mathematical computation. It traces light as pixels to generate an image and simulates lighting effects like reflection and refraction. Performed on the main thread, all these computations block the user interface; to keep the UI running effortlessly, we need a separate thread that works only on ray tracing.
Web workers split the image rendering between several workers, and between CPUs as required. As a result, the background processes become lightweight and do not block the user interface. Though web workers are not commonly used, they play an important role when developing features with massive computations.
Use case 3:
Let’s take the example of banking and other financial transactions that require a high level of encryption. With web workers, these transactions can be managed behind a sleek user interface: even while executing hundreds of transactions simultaneously, the UI interaction stays completely polished.
Performing end-to-end encryption of sensitive data requires concrete business logic that justifies the time, coding effort, and user experience; for larger projects with giant data sets, it becomes time-consuming and adds complexity.
Web workers manage these processes well since they are the backbone of CPU-intensive operations: they free the main thread and continue the work in the background, solely processing the calculations and running the encryption algorithms. This is a major advantage whenever you need to perform such complex computations.
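To make this concrete, here is a sketch: the heavy computation is kept as a plain function (the calculation itself is illustrative), and the worker wiring follows the pattern the Angular CLI generates with `ng generate web-worker`:

```typescript
// The CPU-heavy work, kept as a pure function so it can live in a worker.
export function sumOfSquares(n: number): number {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i * i;
  }
  return total;
}

// app.worker.ts (sketch): runs off the main thread.
//   addEventListener('message', ({ data }) => {
//     postMessage(sumOfSquares(data));
//   });
//
// Component side: the UI thread stays free while the worker computes.
//   const worker = new Worker(new URL('./app.worker', import.meta.url));
//   worker.onmessage = ({ data }) => console.log('result:', data);
//   worker.postMessage(1_000_000);
```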
Minimize Additional Checks with enableProdMode
You can enable production mode to stop the extra checks made by change detection: it turns off Angular’s development-mode assertions, including the second verification pass that change detection runs in dev mode.
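This is conventionally done in `main.ts`; the sketch below follows the standard CLI-generated bootstrap file:

```typescript
// main.ts (sketch): enable production mode before bootstrapping so Angular
// skips dev-only assertions and the extra change detection verification pass.
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

if (environment.production) {
  enableProdMode();
}

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .catch(err => console.error(err));
```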
Optimize Events for Faster DOMs
Optimizing events helps avoid unnecessary loading and server requests, and trimming the business logic behind events results in a faster DOM. A slow DOM delays the processing of click events and so delivers a poor user experience.
In the worst, unoptimized cases, components take a long time to service click events and depend on other workers to do so; change detection cannot complete until the controller returns from the task. So optimize events by restructuring their business logic to have the fewest possible dependencies and take the shortest possible path.
Optimize DOM Manipulation for Better Performance
DOM manipulation has a major influence on an application’s speed and performance. Angular’s ngFor directive, a structural directive for iteration, is used to render arrays of iterable objects.
Say an application needs to add ten usernames, and each addition re-renders the entire list in the DOM. This approach is not feasible for projects with 10,000 or more entries; even with as few as 100 entries, the app’s speed and performance will suffer.
Re-rendering the entire DOM negatively impacts network performance (the browser must recompute the positioning and styling of nodes and ship lots of unused bytes) as well as runtime performance, all of which slows down the page response.
The best solution is trackBy, which puts an end to this unnecessary creation and destruction of DOM nodes by identifying each item with a unique key. Angular can then limit DOM creation and destruction to only the items that actually changed.
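A minimal sketch (the `User` model and its `id` field are illustrative assumptions):

```typescript
// Illustrative model: any field that uniquely identifies a row works.
interface User {
  id: number;
  name: string;
}

// trackBy function: returns a stable identity for each item so Angular
// reuses existing DOM nodes instead of destroying and recreating them.
export function trackByUserId(index: number, user: User): number {
  return user.id;
}

// Template usage (sketch):
//   <li *ngFor="let user of users; trackBy: trackByUserId">
//     {{ user.name }}
//   </li>
```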
angular dom search is super fast. this is a killer framework.
— Nils-Holger Nägele (@nilsnagele) March 21, 2014
Load Time Performance Optimization
Load time performance directly affects the business side of an application. It reflects how the application behaves after a user clicks or performs an action while interacting with its modules; if the application takes longer than usual to respond to user actions, that signals poor load time performance.
Let’s discuss several ways to optimize load time performance in Angular applications.
Code Splitting for Improved Time To Interactive (TTI)
Code splitting helps a website become fully interactive and score well on the TTI metric (you can measure TTI with Lighthouse, the open-source web-quality tool available through Chrome DevTools). Code splitting can be performed at different levels, such as entry points, dynamic imports, and preventing duplication.
Lazy Loading for Better Efficiency of Program’s Operation
Building a large-scale application involves meticulous details that should not be ignored. Such applications usually contain a large number of feature modules, yet not all of them need to be loaded at once.
Loading only the necessary modules at the initial load not only reduces the bundle size but also decreases load time. This design pattern is called lazy loading: modules are loaded only when they are needed. Ideally, an application’s initial load time should be short, so it is recommended to lazy load the components that are not needed at first.
Let’s take the example of an eCommerce app. It demonstrates which components can be lazy loaded and released over time, improving app performance considerably by sparing the website long waiting times.
The modules are lazy loaded before release:
The modules are released on click:
Lazy loading brings significant performance improvements, such as:
- Decreased size of code bundles
- Modules organized according to their functionality
- Code modules downloaded only when their routes are navigated
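Lazy loading is configured in the router; in this sketch the module and path names are illustrative:

```typescript
import { Routes } from '@angular/router';

// Each loadChildren callback is compiled into a separate bundle that the
// browser downloads only on first navigation to the matching route.
const routes: Routes = [
  {
    path: '',
    loadChildren: () => import('./home/home.module').then(m => m.HomeModule),
  },
  {
    path: 'orders',
    loadChildren: () => import('./orders/orders.module').then(m => m.OrdersModule),
  },
];
```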
Save Your Build from Memory Leak
Neglecting minor things in web app development can lead to significant setbacks; memory leaks, for instance. A memory leak occurs when your application fails to release resources it no longer uses. When the application consumes more and more memory without any new files (text, images, etc.) being added, it is likely headed for major performance degradation.
Memory leaks commonly appear when developers fail to unsubscribe from observables or declare extraneous global variables; such objects stay alive for as long as the window or tab is open, even when unused. To keep Angular performance healthy, unsubscribe from observables and delete globals as soon as they are no longer needed.
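A common way to guarantee unsubscription is the `takeUntil` pattern; this sketch assumes RxJS and an illustrative interval subscription:

```typescript
import { Component, OnDestroy, OnInit } from '@angular/core';
import { interval, Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

@Component({ selector: 'app-ticker', template: `<span>{{ ticks }}</span>` })
export class TickerComponent implements OnInit, OnDestroy {
  ticks = 0;
  private destroy$ = new Subject<void>();

  ngOnInit(): void {
    interval(1000)
      .pipe(takeUntil(this.destroy$)) // completes when destroy$ emits
      .subscribe(() => this.ticks++);
  }

  ngOnDestroy(): void {
    // Without this, the interval subscription would keep the component
    // alive in memory after it is removed from the view.
    this.destroy$.next();
    this.destroy$.complete();
  }
}
```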
Remove Unused Code Using Tree-Shaking
Tree-shaking eliminates code that is never referenced from the final bundle. The Angular CLI performs it automatically in production builds, but it works best when you use ES module imports and avoid pulling in an entire library for a single function.
Preload & Prefetch for Instant Engagement
Preload fetches resources needed for the initial content of a web page while the page itself is loading, whereas prefetch fetches content likely to be needed after the page has already loaded in the browser. Both attributes are used for static resources. It is important to greet users waiting for the site with some instant content instead of a blank page, and these hints do a nice job of loading the essential content as quickly as possible.
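As a sketch, these hints go in `index.html`; the resource names here are purely illustrative:

```html
<!-- Preload: fetch immediately, needed for the initial render -->
<link rel="preload" href="assets/hero-banner.jpg" as="image">

<!-- Prefetch: fetch at idle priority, likely needed after load -->
<link rel="prefetch" href="assets/reviews-widget.js" as="script">
```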
Removal of Third-Party Packages for a Smaller Build
Audit your dependencies regularly: every third-party package adds to the bundle, and a heavy package used for a single utility can often be replaced with native browser or Angular features, shrinking the build considerably.
Server Rendering (using Angular Universal)
Considering that 53% of users don’t wait for a page that takes more than 3 seconds to load, server-side rendering is crucial for web development. Angular Universal renders an application to HTML on a server, instead of in the browser as typical Angular applications do.
In the typical scenario, when a user sends a request, the browser first shows a blank page. Server rendering improves on that by instantly displaying useful information to the user.
We often see this with social media posts, where links render as small summary cards generated from server-rendered HTML.
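As a setup sketch (these commands and npm scripts match Angular 9-era tooling; verify them against your CLI version):

```shell
# Add Angular Universal with the Express engine
ng add @nguniversal/express-engine

# Build the browser and server bundles, then serve the rendered app
npm run build:ssr && npm run serve:ssr
```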
Cache Static Content for Lightweight User Experience
Users can’t install a separate native app for every eCommerce platform they shop on. As an easier approach, a Progressive Web App that caches its static content lets them keep a simple shortcut on the home screen and still enjoy a native-like experience.
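In Angular, static-content caching via a service worker can be added with a single schematic; this is a sketch, so verify it against your CLI version:

```shell
# Adds @angular/service-worker and a default ngsw-config.json that
# caches the app shell and static assets
ng add @angular/pwa
```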
Earlier versions of Angular didn’t rank high on performance metrics. With the release of Angular 9, however, many performance optimization problems were sorted out thanks to the framework’s improved capability to deal with them.
Angular #ivy is using classical programming concepts and patterns to move the framework forward. This is why I love #angular, engineering we can predict instead of hacks and conventions. #angularconnect
— Michael (@masimplo) November 7, 2018
Speaking from our Angular team’s experience at Simform, Angular 9 decreased build size by as much as 35%, which is quite significant. What’s more, Google’s Lighthouse shows Angular 9 at 99% on performance metrics against 95% for Angular 8, on average.
The future holds a fine balance between clean code and performance, one that makes both you and your end-users happy. If you need help implementing these techniques, want clarification, or spot something wrong, feel free to connect with me on Twitter or drop me a line at email@example.com.