Micro-frontend architecture has gained a lot of popularity lately, and rightly so. It breaks a frontend monolith into smaller, decoupled, and easily manageable micro-frontends, much like microservices do on the backend.
The decomposed micro-frontends provide an extremely cost-effective way for growing organizations to develop web applications and scale quickly. However, companies often don’t start with a micro-frontend architecture right away.
When the frontend codebase is small, changes are easier and consistency is easier to enforce. But as the monolith grows, even small UI changes become high-risk releases because coordination and regression testing grow non-linearly. Over time, UI drift shows up across pages and flows as different teams optimize locally without a clean composition model.
A micro-frontend approach doesn’t just reduce coordination pain. It usually shows up as clearer UI ownership boundaries, fewer cross-team merge conflicts, independent releases for isolated surfaces, and faster iteration without requiring a full-site redeploy each time. You can learn more about these benefits in our detailed article on micro-frontend architecture.
In this article, we share real examples of how organizations compose micro-frontends based on their delivery and governance needs. Before we get to those examples, here’s a quick rundown of the three main composition techniques teams use to assemble micro-frontends into a single user experience.
Server-side composition
In server-side composition, micro-frontends are assembled on the server and then delivered to the browser as a composed experience. The advantage is predictable initial render performance, fewer “blank screen” moments, and simpler control over critical content assembly—especially for high-traffic entry pages.
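As a rough sketch, server-side composition can be thought of as the server stitching independently produced HTML fragments into one response before anything reaches the browser. The fragment names and renderers below are hypothetical; in production each fragment would typically come from a separate team's service over HTTP.

```javascript
// Hypothetical fragment renderers; in a real system these would be
// HTTP calls to each team's independently deployed fragment service.
const fragments = {
  header: () => "<header>Site header</header>",
  catalog: () => "<main>Product catalog</main>",
  footer: () => "<footer>Site footer</footer>",
};

// Assemble the page on the server so the browser receives a fully
// composed document, which is what makes initial render predictable.
function composePage(slots) {
  const body = slots.map((name) => fragments[name]()).join("\n");
  return `<!doctype html><html><body>\n${body}\n</body></html>`;
}

console.log(composePage(["header", "catalog", "footer"]));
```

Because the server owns assembly, it can also decide what is critical (render inline) versus deferrable, which is hard to control once composition moves to the client.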
Build-time integration
In build-time integration, each micro-frontend typically lives in its own repository, but the final application is assembled during a build step. This often improves performance and dependency management, but it can reduce release independence if teams don’t manage shared contracts, versioning, and CI coordination carefully.
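A minimal sketch of build-time integration, assuming each micro-frontend ships as a versioned package that the host imports statically; the package shapes and render functions here are invented for illustration. The key property is that composition happens when the host is bundled, so changing any part means rebuilding and redeploying the host.

```javascript
// In a real setup each micro-frontend would be a versioned npm package
// (e.g. a hypothetical "@acme/checkout") statically imported by the host
// and bundled together at build time. These objects stand in for them.
const search = { version: "1.4.2", render: () => "<div>Search</div>" };
const checkout = { version: "2.1.0", render: () => "<div>Checkout</div>" };

// The host composes the final app during the build step. The bundler can
// deduplicate shared dependencies here, which is why this approach tends
// to produce smaller bundles than run-time composition.
function buildApp(parts) {
  return parts.map((p) => p.render()).join("");
}

console.log(buildApp([search, checkout])); // <div>Search</div><div>Checkout</div>
```

The flip side is visible in the version fields: a bump in any package requires a coordinated rebuild, which is the release-independence cost the paragraph above warns about.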
Run-time integration
Run-time integration assembles micro-frontends in the browser. Teams deliver UI fragments independently and compose them using patterns like iframes, JavaScript-based integration, or web components. This often increases release independence, but requires strong discipline around performance budgets, shared dependencies, and cross-frontend contracts.
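One common JavaScript-based pattern is a shell-side registry: independently deployed scripts register a render function when they load, and the shell mounts fragments by name at run time. This is a simplified sketch with hypothetical fragment names, not any specific framework's API.

```javascript
// Shell-side registry for run-time composition. Each independently
// deployed script calls registerFragment() when it loads in the browser.
const registry = new Map();

function registerFragment(name, render) {
  registry.set(name, render);
}

// The shell mounts by name; a missing fragment degrades gracefully
// instead of breaking the whole page.
function mount(name) {
  const render = registry.get(name);
  return render ? render() : `<!-- fragment "${name}" not loaded -->`;
}

// Simulating two fragments arriving from separate deployments.
registerFragment("recommendations", () => "<section>Recommendations</section>");
registerFragment("cart", () => "<section>Cart</section>");

console.log(mount("cart"));    // <section>Cart</section>
console.log(mount("reviews")); // placeholder comment, fragment not loaded
```

Note that nothing here prevents two fragments from loading conflicting dependency versions; that is exactly the discipline around shared dependencies the paragraph above calls for.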
Now that you are familiar with the different integration techniques, let’s jump right into the examples.
Bit Sped Up Development With Build-Time Integration
Bit, the popular component-sharing platform, enables more than 100,000 developers to organize, manage, and collaborate on shared components. Since their decisions affect the work of so many teams and developers, they consistently choose what enables better collaboration during product development and promotes team autonomy.
To realize the benefits of micro-frontends, they keep a well-defined component model in place and provide shared infrastructure. This allows teams to build components independently and later integrate them into efficient applications.
To keep teams from stepping on each other’s toes, they split and decoupled their release pipelines. Build-time integration helped ensure that release processes don’t become coupled, as often happens with iframes. As a result, they achieved a 50X faster component-driven CI process.
DAZN Got Predictability With Build-Time Integration
Faster product development is merely one of many reasons why you should consider looking beyond run-time integration. In the case of DAZN, they wanted more predictable outcomes.
DAZN is an OTT streaming service that delivers live and on-demand content to subscribers across several countries. The nature of their service puts them in a unique position: they deliver user experiences not just on the web but on smart TVs, consoles, and set-top boxes as well.
As a growing organization that also wanted to scale its frontend, micro-frontends were the obvious architectural choice. The approach let them build smaller, autonomous teams to improve delivery speed and maintain code quality.
Whenever they build a micro-frontend, they ask themselves which composition method to use.
Build-time compilation allows them to verify the application’s performance and run all end-to-end tests before serving it. This strategy not only puts their predictability concerns to rest, but it also serves well for frontend scaling.
Vonage Leveraged All The Run-time Integration Techniques
Bit and DAZN found that build-time integration made more sense for them. Like Bit, Vonage also collaborates with many third-party developers, roughly half a million of them.
They moved to the micro-frontend architecture because maintaining code was getting harder, with teams overwriting each other’s work in shared repositories. Bugs became unavoidable, and scalability started to seem like a distant dream.
Once they adopted the micro-frontend architecture, they next had to choose an integration method. They ruled out build-time integration because they believed it suited organizations that standardize the frontend at the company level.
So they once again had three ways to go about the run-time integration—using web components, iframes, and JavaScript-based techniques. Ultimately, they decided to proceed with a combination of all three based on their specific requirements.
Iframes give them 100% component isolation but don’t allow sharing dependencies. With JavaScript-based techniques, communication between the parent application and the injected component becomes much simpler, but they provide no isolation.
When they used web components for integration, they found them to be technology agnostic but could not find useful documentation for most use cases.
With the combination of all three run-time integrations, they can now provide isolation, communicate efficiently, and stay technology agnostic whenever needed.
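The trade-offs Vonage weighed can be caricatured as a small decision helper: isolation pushes toward iframes, technology agnosticism toward web components, and simple parent-child communication toward plain JavaScript injection. The function and its flags are purely illustrative, not anything Vonage ships.

```javascript
// Toy mapping of the requirements discussed above onto the three
// run-time integration techniques. Flag names are hypothetical.
function pickIntegration({ needsIsolation, needsTechAgnostic, needsEasyCommunication }) {
  if (needsIsolation) return "iframe";          // full isolation, no shared deps
  if (needsTechAgnostic) return "web-component"; // framework-neutral boundary
  if (needsEasyCommunication) return "javascript"; // simple parent/child calls
  return "javascript"; // reasonable default for same-stack teams
}

console.log(pickIntegration({ needsIsolation: true }));    // "iframe"
console.log(pickIntegration({ needsTechAgnostic: true })); // "web-component"
```

In practice a page can mix all three, which is exactly what Vonage ended up doing: the choice is per fragment, not per application.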
Smallcase Tackled Performance Issues With Build-Time Composition
Not everyone has a clear picture from the get-go. Smallcase had to jump through many hoops to discover what worked best for them. They switched to the micro-frontend architecture to remove a bottleneck in their subscription-flow management.
For integration, they started with run-time composition. Its most significant advantage was that it finally made atomic deployment possible: they didn’t need to upgrade microsites to change the widget script, which spared them the hassle of deploying 50+ microsites.
But the method had its downsides as well, most notably weaker dependency management and larger bundles that hurt performance.
Hence, they decided to switch over to build-time composition. This method gave them a much-needed performance boost thanks to its superior dependency management, which led to smaller bundles. Along with that, they could also continue independent deployment.
Even though build-time composition isn’t ideal for all scenarios, it’s working in favor of Smallcase for the time being. So I’d like to reiterate what I said at the beginning: make choices based on your specific use case.
Choose build-time integrations if you want to take advantage of components and retain all the benefits of micro frontends. But if you wish to stay technology agnostic, exercise independent deployment, and improve communication between the parent app and the component, then run-time integrations are your safest bet.
No matter which composition technique you choose, micro-frontends only work when teams align on boundaries, shared contracts, and operational discipline (testing, observability, and release practices). The “right” approach is typically the one that matches how independent your releases need to be and how much you can standardize shared dependencies.
If you’re evaluating micro-frontends because UI delivery and ownership are starting to strain, Simform can help assess fit, define boundaries, and implement a composition approach that preserves speed without creating a fragile UI mesh.

