Taking micro-frontends to the next level


The micro-frontends concept has been out there for quite a while. We’ve been using this architecture at Wix since around 2013, long before it was even given this name. It was also a key factor in enabling our gradual migration from AngularJS to React in 2016. We’ve been evolving it and gathering tons of experience with it for many years. In this article I’d like to share some of the things we did in order to evolve the concept of developing large-scale micro-frontends (at the time of writing, we have 700 developers working on this architecture).

Introduction to micro-frontends

A lot has been written about micro-frontends, and this article tries to focus on more advanced topics, so I will keep the intro very short. When a team becomes very big, it becomes very difficult for many people to work on one monolithic application:

  • The codebase becomes very big, hard to maintain and riddled with unwanted complexity.
  • Builds become very long and involve a lot of moving parts that most developers don’t know how to deal with when issues arise.
  • Deployments contain too many changes, which means totally unrelated changes might block people from deploying or force rolling back versions.
  • The list goes on. Monoliths are hard to maintain in big teams; if you are here, I guess you know this.

This is why in big teams it is a good idea to try to break applications into smaller independent pieces, each of which can be developed in a separate project, built and deployed separately from the others. I keep stressing that this is a smart way to go for big teams. If you have a small team, don’t do this, it will only make your life harder. At Wix we started with this approach only when we had around 100 developers working on the frontend application.

The simplest example I’ve seen which demonstrates this approach goes like this:

<html>
  <head>
    <script src="https://shipping.example.com/shipping-service.js"></script>
    <script src="https://profile.example.com/profile-service.js"></script>
    <script src="https://billing.example.com/billing-service.js"></script>
    <title>Parent Application</title>
  </head>
  <body>
    <shipping-service></shipping-service>
    <profile-service></profile-service>
    <billing-service></billing-service>
  </body>
</html>

So, what we have here is three independent bundles. Each one can be developed, built and deployed separately and each one registers a custom element which the parent application can eventually render. Don’t get me wrong, you don’t have to use custom elements in order to have a micro-frontend architecture; those bundles could have registered React components to some global Map which the parent application uses in order to reference them while rendering (this is essentially what we do at Wix), but the custom element example is a cool way to show it without needing to go into those details. Not to go too much off-topic, but registering to a global Map for later referencing is exactly what customElements.define() does under the hood anyway.
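To make the global-Map idea concrete, here is a minimal sketch of how bundles could register components for the host to look up. All names here are illustrative; this is not Wix’s actual API:

```typescript
// A tiny component registry: each micro frontend bundle calls
// registerComponent() at load time, and the host calls getComponent()
// when it is ready to render.
type ComponentFactory = () => unknown; // would return a React component in practice

const componentRegistry = new Map<string, ComponentFactory>();

function registerComponent(name: string, factory: ComponentFactory): void {
  if (componentRegistry.has(name)) {
    throw new Error(`Component "${name}" is already registered`);
  }
  componentRegistry.set(name, factory);
}

function getComponent(name: string): ComponentFactory {
  const factory = componentRegistry.get(name);
  if (!factory) {
    throw new Error(`Component "${name}" was not registered by any bundle`);
  }
  return factory;
}

// Each bundle registers itself, much like customElements.define() would:
registerComponent("shipping-service", () => "<ShippingService />");
registerComponent("profile-service", () => "<ProfileService />");
```

The upside of owning the Map yourself (instead of relying on custom elements) is that you control the error handling, typing and lookup semantics.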

Micro-frontends at Wix

I’m going to dive into many details of our internal architecture and the tooling we built over the years, so I’d like to take a moment and do a quick overview of the Wix platforms. Don’t worry, you don’t have to know a lot about Wix to get it.

Wix is a platform where people can create sites for their business using a WYSIWYG editor and manage their business using a big variety of business management tools. This means that business owners can, for example, create a site with an online store where their customers can add items to the cart, view the cart, checkout, see the status of their orders, etc., all through a completely customizable site. And on the other hand, the business owners can manage the store on Wix, which means they have management pages where they edit the catalog, see orders and analytics, browse their customers and manage their inventory; all within one big application which we call the business manager.

But it doesn’t end there. Sites can also have a blog, a forum, and signup/login screens, or they can be sites for restaurants where customers can order food or book a table, or sites for a conference or a concert where people can buy tickets, choose where they’d like to sit, or maybe even watch it online from within the site. On the business manager side, business owners can also send newsletters to customers, manage online campaigns for their business, create automations (for example, send customers a feedback email 5 days after delivery) and even chat with people currently on their site.

You get the picture. There is tons of functionality both on the site and in the business manager, which is why we decided that both the viewer, which renders all of the sites, and the business manager would be micro-frontend hosts. In this article we will focus on those two platforms and their different needs.

To be honest, we actually have two more micro-frontend hosts at Wix: the editor (where all of the sites are created) and our mobile app (the only micro-frontend React Native app I’m aware of). However, each of those deserves its own article, so we won’t cover them here.

A few screenshots to make it all connect:

[Image: Business Manager (1)]
[Image: Business Manager (2)]
[Image: Business Manager (3)]

In a nutshell, micro frontends that run in the business manager can be either a full page experience hosted next to the sidebar, or a widget added to the top bar, or a widget which is hosted inside a different micro frontend.

[Image: Viewer (1)]
[Image: Viewer (2)]
[Image: Viewer (3)]

The viewer, in a nutshell, translates a big JSON, which was created by the editor, into a dynamic React component tree, where each component is taken from the micro frontend which owns it and is passed the settings and design parameters that are also stored in that JSON.

Pluggable micro frontends

So after going over the screenshots carefully, let’s discuss the first challenge we had when developing the viewer and business manager. Unlike many applications, whose feature set is predetermined, our applications load with a completely different feature set depending on the context:

  • Each business has different extensions installed and each extension can register a page in the business manager sidebar and router so the user can navigate to it.
  • Each extension can also register other types of business components such as the tabs displaying information about a specific contact which you’ve seen in the screenshot business manager (3).
  • Each page in the site has different widgets placed by the user when editing the page and each widget might be located differently in the DOM tree depending on the parent where the user attached the widget.

These requirements are not special to Wix. They are pretty classic characteristics of a pluggable system, and that is exactly what we created in order to solve them. Moreover, micro frontends are a classic building block of a pluggable system and were a big part of our solution. Let’s discuss the building blocks of a pluggable micro frontend.

First of all, we need a place which stores all the information about the extensions, pages and widgets in existence. For example, we need to know that the ecommerce extension includes the products manager and orders manager pages, which should appear in the business manager, and that each one has a dedicated sidebar entry and route inside the business manager. The ecommerce extension also includes the orders contact tab, which should appear in the contact view with some tab title. Finally, the ecommerce extension includes the products gallery, product and cart widgets, which the user can place on some pages of the site (we actually create pages automatically and place those widgets in them ourselves, so the user only needs to deal with customizing them, but as I mentioned before, the editor architecture is out of scope for this article).

So where do we store all of this information? We have a service we call the dev center, where each developer in Wix can define a new extension. An extension can have multiple components; each component has a type and data related to that type. So if we take the ecommerce extension example, it includes the following components:

  1. Products manager. Type: Business manager page. Data: Bundle URL, sidebar label, route path.
  2. Orders manager. Type: Business manager page. Data: Bundle URL, sidebar label, route path.
  3. Orders tab. Type: Contact tab. Data: Bundle URL, tab title.
  4. Products gallery widget. Type: Viewer widget. Data: Bundle URL, a bunch of editor-related data we won’t go into.
  5. Product widget. Type: Viewer widget. Data: Bundle URL, a bunch of editor-related data we won’t go into.
  6. Cart widget. Type: Viewer widget. Data: Bundle URL, a bunch of editor-related data we won’t go into.
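To make the dev center model concrete, here is one possible way to describe such typed components in TypeScript. The field names, URLs and helper are invented for illustration; they are not the real dev center schema:

```typescript
// Illustrative model of an extension with typed components.
type BusinessManagerPage = {
  type: "business-manager-page";
  bundleUrl: string;
  sidebarLabel: string;
  routePath: string;
};
type ContactTab = { type: "contact-tab"; bundleUrl: string; tabTitle: string };
type ViewerWidget = { type: "viewer-widget"; bundleUrl: string };

type ExtensionComponent = BusinessManagerPage | ContactTab | ViewerWidget;

interface Extension {
  id: string;
  components: ExtensionComponent[];
}

const ecommerce: Extension = {
  id: "ecommerce",
  components: [
    { type: "business-manager-page", bundleUrl: "https://cdn.example.com/products-manager.js", sidebarLabel: "Products", routePath: "/products" },
    { type: "business-manager-page", bundleUrl: "https://cdn.example.com/orders-manager.js", sidebarLabel: "Orders", routePath: "/orders" },
    { type: "contact-tab", bundleUrl: "https://cdn.example.com/orders-tab.js", tabTitle: "Orders" },
    { type: "viewer-widget", bundleUrl: "https://cdn.example.com/cart-widget.js" },
  ],
};

// A host queries components by type, e.g. to build the sidebar:
function componentsOfType<T extends ExtensionComponent["type"]>(
  ext: Extension,
  type: T
): Extract<ExtensionComponent, { type: T }>[] {
  return ext.components.filter(
    (c): c is Extract<ExtensionComponent, { type: T }> => c.type === type
  );
}
```

The discriminated union gives each host compile-time knowledge of exactly which data fields come with the component types it cares about.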
[Image: Dev Center (1)]
[Image: Dev Center (2)]

So now, when the business manager renders, it goes through the following process:

  1. Check what extensions are installed in this site (we have a service which remembers which extensions are installed in each site, and an app market where the user installs extensions; all the metadata for the app market is stored in the dev center as well, but we won’t go into that).
  2. For the extensions that are installed, get the list of all components of type business manager page from the dev center.
  3. Dynamically add all of the links to the sidebar with the correct routes according to the components data.
  4. Dynamically configure the React Router with the routes from the components data, so that when the user navigates to a page we dynamically import the bundle URL of the correct micro frontend component which can render that page.
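The four steps above could be sketched roughly like this. The data shapes, extension names and URLs are all made up for illustration:

```typescript
// A condensed sketch of the business manager boot flow.
interface PageComponent { sidebarLabel: string; routePath: string; bundleUrl: string }

// Stand-in for the dev center's answer, keyed by extension id.
const devCenter: Record<string, PageComponent[]> = {
  ecommerce: [{ sidebarLabel: "Products", routePath: "/products", bundleUrl: "https://cdn.example.com/products.js" }],
  bookings: [{ sidebarLabel: "Calendar", routePath: "/calendar", bundleUrl: "https://cdn.example.com/calendar.js" }],
  blog: [{ sidebarLabel: "Posts", routePath: "/posts", bundleUrl: "https://cdn.example.com/posts.js" }],
};

function buildRoutes(installedExtensions: string[]) {
  // Steps 1+2: keep only the pages that belong to installed extensions.
  const pages: PageComponent[] = [];
  for (const ext of installedExtensions) pages.push(...(devCenter[ext] ?? []));
  // Step 3: sidebar entries derived from the components data.
  const sidebar = pages.map((p) => ({ label: p.sidebarLabel, route: p.routePath }));
  // Step 4: a route table; the real host would lazily `import(p.bundleUrl)`
  // only when the route is actually visited.
  const routes = pages.map((p) => ({ path: p.routePath, load: () => p.bundleUrl }));
  return { sidebar, routes };
}
```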

When the contact manager page in the business manager renders, the exact same process happens. The only differences are that it queries for components of type contact tab instead, and that it dynamically adds the tab titles to the tabs selector according to the components data, similarly to what the business manager host did in the sidebar. Theoretically, if we wanted, the components data for contact tabs could even contain a route, and the contact page could configure a nested route in the host’s React Router.

The viewer goes through a similar process, with some minor differences:

  1. Check what widgets are needed in the structure of the page we’re about to render (we have a service which provides information about the page structure).
  2. For those widgets, get the components data from the dev center.
  3. Dynamically import the bundle URLs of all of the widgets on the page and render a dynamic React tree according to the page structure.
  4. Repeat the process on each page navigation in the site (note that in Wix the first navigation to a site uses SSR and later navigations inside the site happen on the client side like an SPA, so this process can happen both on the client and on the server).

In the end, we can define a general pattern: a pluggable micro frontend host needs to find out the bundle URLs of the things it needs to render, dynamically render its UI with the things that are installed, and dynamically import the correct bundles when the time is right to download them. It is very important not to eagerly download bundles unless it really makes sense to, and to optimize those flows with server-side caching wherever possible; otherwise performance quickly becomes a bottleneck for this kind of application.
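As a sketch of the "download only when the time is right" part, here is a tiny illustrative loader that dynamically imports a bundle URL at most once. This is not the real host code, just the shape of the idea:

```typescript
// Caches the import promise per URL, so concurrent callers share one fetch.
const bundleCache = new Map<string, Promise<unknown>>();

function loadBundle(
  url: string,
  // Injectable importer makes the loader testable; defaults to dynamic import.
  importer: (url: string) => Promise<unknown> = (u) => import(u)
): Promise<unknown> {
  let pending = bundleCache.get(url);
  if (!pending) {
    pending = importer(url); // only the first caller triggers the network fetch
    bundleCache.set(url, pending);
  }
  return pending; // later callers reuse the in-flight or resolved promise
}
```

Caching the promise (rather than the resolved module) is the detail that prevents duplicate downloads when two widgets request the same bundle at the same time.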

Integrating micro frontends

So far we’ve discussed mainly the case of micro frontends supplying components that are rendered by the host application or by other micro frontends. This is done by each micro frontend registering its components in a global Map which is available to the host and to the other micro frontends, where they can look up components by their component type, as described in the dev center, along with their component data, which can be used to render links in the sidebar, configure routers, etc.

But what if some micro frontend wants to invoke some functionality in a different micro frontend? For example as can be seen in screenshot viewer (2), clicking on the cart icon will open the mini cart panel. Actually, it is possible for any viewer widget, even ones that are not part of the ecommerce extension, to open the mini cart. This is possible because similarly to how components are registered in the global Map, it is also possible to register APIs there.

Here’s another example, this time from the business manager, where the registered contacts API is invoked by the tasks micro frontend:

[Image: Business Manager (4)]

We used the pluggable architecture described in the section above to enable this. Similarly to how an extension can have a component in the dev center with a component type of business manager page, contact tab or viewer widget, it can also have a component type of business manager API provider or viewer API provider. This means that the bundle for that API provider micro frontend simply registers an API in the global Map instead of registering a component. We call that global Map the Module Registry.
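Here is an illustrative sketch of the API-provider side of the Module Registry. The names and shapes are assumptions, not the real registry API:

```typescript
// APIs are registered in the same kind of global Map as components.
const moduleRegistry = new Map<string, unknown>();

function registerApi<T>(name: string, api: T): void {
  moduleRegistry.set(name, api);
}

function getApi<T>(name: string): T {
  const api = moduleRegistry.get(name);
  if (api === undefined) throw new Error(`API "${name}" is not registered`);
  return api as T;
}

// The ecommerce extension's API provider bundle registers its API:
interface CartApi { openMiniCart(): string }
registerApi<CartApi>("ecom.cart", { openMiniCart: () => "mini cart opened" });

// Any other viewer widget, from any extension, can now invoke it:
const result = getApi<CartApi>("ecom.cart").openMiniCart();
```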

Performance

Performance is a huge pitfall for micro frontends. When you have separately deployed bundles, each developed and maintained by a different team, duplications quickly arise:

  • Each bundle contains the same basic utilities needed by every micro frontend (BI logger, monitoring library, HTTP client, UI components, i18n library, date formatting utils, state management utils, polyfills; the list goes on).
  • Each micro frontend fetches similar contextual information from the server side, like information about the site owner, the site visitor, the site itself, or some settings.

There are two approaches we use to avoid these problems:

(1) Externalize and centralize things at the host level: Do not bundle the BI logger in the micro frontends; load it only once at the host level and make it available to the micro frontends. Similarly, don’t make requests to fetch common context from the server in the micro frontends. Instead, fetch the needed context in the host and make it available to the micro frontends as well. This kind of solution has a heavy cost, though, with quite a few caveats:

  • Essentially, we are creating a contract between the host and the micro frontend. The micro frontends count on the host to make these libraries and contextual data available to them. Breaking this contract in any way will break the micro frontends, which means it needs to stay backward compatible forever, or we face very complicated migrations.
  • This is why we use this method, for example, for our BI logger, where we have a stable API and complete control over it, but not for external libraries such as MobX, whose API might change between versions; we do not want version upgrades to become a very complicated task.
  • Some things might be needed only by a few of the micro frontends, which means it might not be cost-effective to externalize them to the host if they are not very likely to be used.

(2) Smart cache sharing at the host level: When the first approach’s caveats are too painful, it is better to let the first micro frontend which needs some library or data from the server fetch it, and make sure that the next micro frontend which requires it uses the client cache instead of fetching a duplicate. We use two technologies to do this:

  • Webpack Module Federation: We configure potential duplicates as shared dependencies, which means Webpack will make sure each dependency is downloaded only by the first micro frontend that needs it and reused by the next ones. This also allows us to have multiple versions of such dependencies without conflicts, although we always strive to align all the micro frontends to use the same versions to ensure reuse of this cache.
  • React Query: Our communication layer uses React Query under the hood, in a way which shares the cache between all of the micro frontends.
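For reference, declaring shared dependencies with Module Federation looks roughly like this. This is a sketch of a webpack.config.ts; the remote name and version ranges are illustrative, not our actual configuration:

```typescript
import { container } from "webpack";

const config = {
  plugins: [
    new container.ModuleFederationPlugin({
      name: "productsManager",
      filename: "remoteEntry.js",
      // Only the first micro frontend that needs react/react-dom downloads
      // them; the rest reuse the already-loaded copy, as long as the
      // requested version ranges are satisfied.
      shared: {
        react: { singleton: true, requiredVersion: "^18.0.0" },
        "react-dom": { singleton: true, requiredVersion: "^18.0.0" },
      },
    }),
  ],
};

export default config;
```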

Both Module Federation and React Query are pretty new and experimental in our stack; we are still in the integration phase, and I’ll update this article once we gain more experience with them.

Developer experience

When we started using micro frontends we immediately realized that if creating a new micro frontend were a difficult task, people would surely opt out of creating new modules and instead keep adding widgets and functionality to existing micro frontends. This is why we put a lot of effort into making sure that tasks such as creating a new micro frontend for a business manager page or for a viewer widget are incredibly trivial.

When a developer at Wix wants to create a new project, they run a tool we call create-yoshi-app (more about what yoshi is soon), which asks them a few questions about whether they’d like to develop a business manager page, a viewer widget, etc. and generates the needed code. It also configures their newly generated extensions and components in the dev center which was described earlier. A newly generated project just needs to be pushed to GitHub; it can then be added to our CI systems, deployed and made available to users by clicking a few buttons.


Our main design guideline for generated projects is to have absolutely zero boilerplate and absolutely zero configuration in order to work. We take a lot of inspiration from Next.js, which means that an extension which contains business manager pages, for example, contains only one .ts file per page, which exports the page component; our build tool takes care of building it into a bundle which contains all of the relevant code needed to register it with the business manager, along with other related tasks. Similarly, a viewer widget is also just a .ts file exporting the widget. In the past we allowed generating JavaScript code as well, but in the last two years we’ve moved to work exclusively with TypeScript.

Our build tool which takes care of building those projects is called yoshi. Under the hood it runs a carefully configured Webpack which ensures that things such as ES modules and CSS modules work properly (both of which are super important and useful for micro-frontend isolation; micro frontends must not pollute the global namespace, otherwise very bad and hard-to-debug problems happen).

Yoshi takes care of tons of complexity, some of it related to Webpack and some of it related to our platforms, and abstracts all of it from the developer with very few customization options. Over the years this has been the source of many complaints from developers at Wix who wanted to tweak their Webpack configuration a bit differently, but eventually this is what allows us to add things such as Module Federation and make sure we do not bundle things which should be externalized. It has also helped us bump 3 Webpack versions already in hundreds of projects with almost zero effort. In addition, yoshi provides tons of functionality which will be described in the next sections.

One cool example is that when running in CI, yoshi knows how to produce Webpack stats and upload them to a service we call dumbledore. This means that developers can connect to dumbledore and see the Webpack stats of their bundles using Webpack Bundle Analyzer over time, in each commit and each PR. They can see analytics of how their bundles grew over time, and they can even see visual diffs between two versions showing what changed in the stats.

[Image: Dumbledore (1)]
[Image: Dumbledore (2)]

Local development

When developing a standalone application locally, we are used to the experience of running npm start and having a development server where you can open a browser pointing to localhost and see the application running. With modern build tools we are also used to the experience of every code change being reflected immediately in the browser without needing to refresh. This is commonly referred to as HMR (hot module replacement).

The thing is that micro frontends are not standalone applications. Micro frontends need a host to render them. Over the years we tried multiple ways to solve this:

  • We started by having an HTML page that simply rendered your hosted component as if it were a standalone application, but then everyone needed to write mocks for all of the integrations they had with the host and other modules.
  • We later had a host test kit, provided by the platform team, which actually ran a local version of the host; configuration pointing to the local extension was passed to the test kit instead of the host getting it from the dev center. It was very hard to maintain and still didn’t solve the integration with other modules well enough.
  • Eventually, the approach that ended up being the easiest to support, while giving everybody tons of confidence in their integrations, is running the real production host with special parameters which override the dev center configuration.

Let’s elaborate a bit on how the final approach works. As we described before, the hosts (business manager and viewer) decide at render time the components data of the micro frontends that need to be included in the rendered page. The components data includes the bundle URLs and also other metadata which is needed by the host UI, such as sidebar links and routes. So, when we want to see our business manager pages or viewer widgets in action while coding, we actually open the real host and pass a special query param which tells the host: hey, here’s a JSON with some components data, plz merge it with the components data you fetched from the dev center as if it came from the dev center. This way we can change the bundle URL of an existing component to point to localhost.
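A sketch of that merge step might look like this. The param format, data shapes and port are invented for illustration; the real host's override contract is internal:

```typescript
// Merge components-data overrides (from a query param) over the dev center data.
interface ComponentData { id: string; bundleUrl: string }

function mergeOverrides(
  fromDevCenter: ComponentData[],
  overridesParam: string | null
): ComponentData[] {
  if (!overridesParam) return fromDevCenter;
  const overrides: ComponentData[] = JSON.parse(decodeURIComponent(overridesParam));
  // Any component with a matching id is replaced by its override,
  // e.g. swapping a CDN bundle URL for a localhost dev server URL.
  return fromDevCenter.map((c) => overrides.find((o) => o.id === c.id) ?? c);
}

const prod = [{ id: "products", bundleUrl: "https://cdn.example.com/products.js" }];
const param = encodeURIComponent(
  JSON.stringify([{ id: "products", bundleUrl: "https://localhost:3200/products.js" }])
);
```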

This whole process is managed by yoshi without developers needing to understand how it works under the hood. Developers just run npm start; yoshi starts the dev server, opens a page where the developer can select from a list the site for which we want to open the business manager or viewer (depending on what we develop), and then opens it with the correct params, taking care of HMR transparently.

[Image: Local development with Yoshi]

But the yoshi magic doesn’t end here. If you recall, we mentioned earlier that the viewer performs SSR when rendering the first page, which means that the server needs to fetch the bundle from your local machine, which is not as trivial as putting a URL pointing to localhost in the script tag. The nice thing is that yoshi actually opens an HTTP tunnel which allows the rendering server to fetch the bundle from your machine. This is also very useful if you need to view your changes on a mobile device rather than the device you are coding on.

Deploy preview

We use the same mechanism which allows us to override the components data, replacing the bundle URL to point to our dev server, in order to preview PRs or any commit as if they were deployed. This is quite a game changer for us: every PR for any micro frontend is automatically uploaded to the CDN by our build system and an automatic comment is added to the PR with a link. When the developer reviewing the PR clicks this link, a screen appears where the reviewer can select a site from a list, and once a site is selected we are automatically redirected to the business manager or viewer with the required override parameters.

[Image: Deploy Preview]

We also have a Chrome extension we call Wix Insiders, where developers can choose to preview any commit or PR for any of the micro frontends on the page they are viewing. This is especially useful when trying to find in which version some problem started; we can really easily preview past commits until we find the first one where it occurred.

[Image: Wix Insiders (1)]
[Image: Wix Insiders (2)]

End to End Testing

The same deploy preview mechanism is also something we rely on heavily in our e2e tests. Each micro frontend has its own e2e tests, which actually run against the real production host and pass the right parameters in order to render the version of the micro frontend under test. This means that with every build, our CI system creates a deploy preview and then runs those tests against it. We also have a similar process which allows us to create a deploy preview and run the e2e tests from the local developer machine.

This is actually a pretty controversial approach. The classic approach is to run e2e tests on an isolated instance of the system which was spun up for the purpose of running tests on it. However, after many years of trying similar approaches, our conclusion is that it is simply not worth the effort, since eventually this path results in mocking some part of the system, which means it is not really testing things end to end.

At Wix we believe in having a good suite of e2e tests. We don’t rely solely on them, of course, and try to keep the balance with component tests and unit tests, but we have quite a few. This is why we created a tool we call Sled, which runs e2e tests in parallel on AWS Lambdas, which means our e2e tests run very, very fast. Because they run very fast, we can also employ elaborate retry mechanisms in Sled, which brings much more stability to those tests. In other words, Sled tries to solve the two main disadvantages of e2e tests, which are performance and flakiness (the third is debuggability, and we have plans to introduce a really cool solution there as well soon).

Another very interesting problem with testing micro frontends which Sled tries to solve is dependencies. For example, changes in the hosts might look good in the host’s e2e tests but actually break one of the micro frontends. Also, a change in some micro frontend might pass its own tests but break a different micro frontend which uses its API or is hosted inside it. The way we solve this is that any micro frontend (A) can declare some of its e2e tests as tests which verify its integration with a different micro frontend (B). Sled saves this information, and then when B has some changes, Sled runs not only the e2e tests of B, but also the e2e tests of A which test their integration, so if those tests fail then the build of B will fail.
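The test-selection logic this requires could be sketched like so. The data shapes and suite names are illustrative, not Sled's actual model:

```typescript
// A suite belongs to one micro frontend and can optionally declare that it
// verifies an integration with another micro frontend.
interface Suite { owner: string; name: string; verifiesIntegrationWith?: string }

const suites: Suite[] = [
  { owner: "ecommerce", name: "checkout-flow" },
  { owner: "contacts", name: "contacts-crud" },
  { owner: "tasks", name: "tasks-use-contacts-api", verifiesIntegrationWith: "contacts" },
];

// When `changed` has new commits, run its own suites plus every suite that
// declared an integration with it.
function suitesToRunFor(changed: string): string[] {
  return suites
    .filter((s) => s.owner === changed || s.verifiesIntegrationWith === changed)
    .map((s) => s.name);
}
```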

Finally, Sled also allows us to run benchmarks using a tool we call Perfer. Perfer runs some scenarios multiple times in parallel and checks how long they took, comparing the results to benchmarks from previous commits and failing if they degraded (we compare many performance KPIs, such as TTI, JavaScript coverage, number of requests, bytes transferred and Lighthouse score). This is very important for micro frontends, since with so many things happening on one page it is practically impossible to understand in retrospect where some degradation originated. We have to find degradations immediately when the change occurs, which means that all micro frontends must have such benchmarks.

[Image: Perfer Report]

Monitoring

One of the main problems with many micro frontends running on the same page is that when something goes wrong, and the page fails to load or some exception starts occurring, it is hard to know in which micro frontend it originated and which team needs to handle it.

In order to make sure monitoring errors go to the right team, we have a different Sentry dashboard for each extension, and the hosts add an error boundary around their components and report errors to the correct dashboard. The identifier for the Sentry dashboard of a micro frontend is actually configured in the components data in the dev center.

The same “error boundary” behavior is also used in our communication layer in order to catch asynchronous flows which cause errors due to HTTP requests from some micro frontend. We keep working on other ways to determine which dashboard an error should be sent to, like taking a look at the call stack, but there’s no real way to cover everything, and eventually we need to live with the situation that some errors will be reported to the host Sentry dashboard, which means the host development team needs to do the triage and contact the offending team once they understand the root cause.

We also don’t rely on error monitoring alone to identify the offending micro frontend automatically. We have an internal monitoring system we call FedOps, which relies on proactive events from widgets. Take, for example, the product widget in the viewer, which has an “add to cart” button. When the “add to cart” button is clicked, the widget reports a “start add to cart” event to FedOps, and once the asynchronous flow completes it reports a “done add to cart” event. Since this kind of operation should never fail, the FedOps monitor will send an alert to the ecommerce dashboard once the success rate drops even a little. Many times FedOps alerts are triggered when Sentry alerts didn’t reach the threshold or didn’t reach the right destination.
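The alerting rule behind such start/done events can be sketched as follows. The threshold and shapes are illustrative; the real FedOps pipeline is far more involved:

```typescript
// Ratio of flows that reported "done" out of those that reported "start".
function successRate(events: Array<"start" | "done">): number {
  const starts = events.filter((e) => e === "start").length;
  const dones = events.filter((e) => e === "done").length;
  return starts === 0 ? 1 : dones / starts;
}

// "add to cart" should never fail, so even a small drop triggers an alert.
function shouldAlert(events: Array<"start" | "done">, threshold = 0.99): boolean {
  return successRate(events) < threshold;
}
```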

[Image: FedOps Dashboard]

Gradual rollout

At Wix we have a gradual rollout system we call Ark, which gradually opens a new version to users while listening to alerts both from the monitoring dashboards of the micro frontend extension and from those of the micro frontend host. As long as there’s no alert, Ark gradually opens the new version to more and more users. If an alert is triggered, Ark will automatically roll back to the last known good version. Ark is actually pretty smart and knows how to identify users, which means it can automatically open the latest commit to Wix employees, so when employees use Wix they always see the latest commit and can report problems even before it rolls out to real users.

But if you recall from previous sections, the bundle URL of the micro frontends comes from the components data in the dev center. So how can Ark control the version that is being served? Actually, I simplified this a bit in the previous explanations. In reality, the URL in the dev center looks something like: https://cdn.domain.com/ecom-products-gallery/${version('ecom-products-gallery')}/bundle.js

This means that once the host gets this templated URL, it needs to pass it to some mechanism which returns the real URL with the correct version of ecom-products-gallery. This mechanism is a library provided by the Ark team which is used in all of the hosts. Under the hood, the library subscribes to rollout events in order to know which micro frontend versions are rolling out and to what population, so that it can fill in the right version, but we will not go into those details in this article.
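A sketch of what such version substitution could look like: the template format matches the URL above, but the function and regex are my own illustration, not the Ark library's API:

```typescript
// Replace every ${version('<artifact>')} placeholder in a templated URL
// with the version that the rollout system decided on for this user.
function resolveBundleUrl(
  templatedUrl: string,
  versionOf: (artifact: string) => string
): string {
  return templatedUrl.replace(
    /\$\{version\('([^']+)'\)\}/g,
    (_match: string, artifact: string) => versionOf(artifact)
  );
}

const url = resolveBundleUrl(
  "https://cdn.domain.com/ecom-products-gallery/${version('ecom-products-gallery')}/bundle.js",
  () => "1.2453.0" // stand-in for the rollout system's decision
);
```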

Enforcing standards
Link to this section

While working with micro frontends gives teams a lot of independence, which is something we are super happy with, independence also comes with additional challenges. When we need to make a cross-cutting change, like deprecating an old version of some library or starting to use some new mechanism, we can’t simply make the change and push it to git as we would with a monolith.

This is why we created an internal tool we call CI Police. In CI Police we can define rules which run during the build of every micro frontend and check whether it conforms to some standard we want to enforce. A rule is simply a JavaScript function which can do whatever is needed to validate the rule. Usually rules look for something in package.json or in the bundle, but they can do just about anything.

Some of our more elaborate rules test, for example, that you have at least one Sled benchmark test on each widget, and that you have a bundle size limitation check on all your bundles. We also use CI Police to ensure we have no more than 2 different versions of libraries which are defined as shared in module federation, in order for the sharing to be effective.
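Since the article says a rule is just a JavaScript function, here is a minimal sketch of what that could look like. The `Rule` shape, the `Project` type, and the example rule are hypothetical, not the internal CI Police API.

```typescript
// Hypothetical sketch of a CI Police-style rule (not the internal API).
interface Project {
  packageJson: { dependencies?: Record<string, string> };
}

// A rule is just a function; the strings it returns are violations.
type Rule = (project: Project) => string[];

// Example rule: flag the deprecated 'request' package in dependencies.
const noDeprecatedRequestLib: Rule = (project) => {
  const deps = project.packageJson.dependencies ?? {};
  return "request" in deps
    ? ["'request' is deprecated, use fetch instead"]
    : [];
};

// The build step runs every rule and collects all violations;
// a non-empty result fails (or warns on) the build.
function runRules(project: Project, rules: Rule[]): string[] {
  return rules.flatMap((rule) => rule(project));
}
```

Because each rule is a plain function over the project, the same rule set can run in notification-only mode first and only later start failing builds, which matches the grace-period flow described below.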

In order not to disrupt teams’ day-to-day work too much, CI Police supports mechanisms such as sending Slack notifications to offending teams before it starts breaking their builds, and it allows teams to ask for a grace period. CI Police has a very easy-to-use dashboard where we can see how many offending projects we have, follow how projects start to conform to rules and, most importantly, see the status of a rule before we start sending notifications, in order to make sure the rule behaves correctly.

(CI Police — Dashboard)
(CI Police — Slack notification)

In the near future we will also provide a codemod along with each rule, so CI Police can not only notify offending projects, but also open an automatic PR offering a fix.

Third party micro frontends
Link to this section

Wix is an open platform, which means we want external developers who do not work at Wix to be able to create business manager pages and viewer widgets as well. That said, we still need to keep the security of our users in mind, so external micro frontends work “the old school way”: they are sandboxed in an iframe. The way this works is that we have a special micro frontend business manager page which can host an iframe of an externally developed page and bridges all of the business manager’s API to it using post messages.

The same goes for viewer widgets, but for the viewer this is not a solution we can live with for long, because it brings many performance, SEO and UX issues with it. We are working on a much better set of solutions for creating external viewer widgets; more on that in later articles.

In order to make the external developer experience work, we’ve opened the dev center publicly (dev.wix.com), which means external developers can define their own extension with their own components. Only instead of creating a business manager page, they create a business manager iframe, and instead of providing a bundle URL they provide an iframe URL. The rest is pretty much the same: their entries are added to the business manager sidebar and router, and when they get rendered the business manager knows to load the “iframe container” micro frontend, which renders their iframe and bridges their API calls.
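The request/response core of such a bridge can be sketched transport-agnostically. The message shape and API names below are hypothetical; a real postMessage bridge would also enforce origin checks, timeouts and error propagation, which are omitted here for brevity.

```typescript
// Hypothetical sketch of bridging a host API over postMessage-style
// request/response messages (not the real business manager bridge).
type BridgeRequest = { id: number; method: string; args: unknown[] };
type BridgeResponse = { id: number; result: unknown };

// Host side: dispatches incoming requests to the real API implementation.
function createHostDispatcher(
  api: Record<string, (...args: unknown[]) => unknown>
) {
  return (req: BridgeRequest): BridgeResponse => ({
    id: req.id,
    result: api[req.method](...req.args),
  });
}

// Iframe side: turns method calls into messages and awaits the response.
// `send` would wrap window.postMessage plus a matching "message" listener.
function createRemoteApi(
  send: (req: BridgeRequest) => Promise<BridgeResponse>
) {
  let nextId = 0;
  return {
    call: async (method: string, ...args: unknown[]) => {
      const res = await send({ id: nextId++, method, args });
      return res.result;
    },
  };
}
```

Keeping the transport behind a single `send` function is what makes the same container work for any externally developed page: the iframe only ever sees the message protocol, never the host’s internals.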

Summary
Link to this section

As is apparent by now, we worked a lot to solve the many problems introduced by micro frontends, because we believe this solution gives tons of velocity and independence to teams working on a problem of huge scale. We hope to one day release some of those tools publicly and believe many people will find them useful. Many of the solutions we’ve created are very useful for monoliths as well, but there’s no question that micro frontends come with a cost.

Over the last 2 years we’ve invested a lot in creating a build system we call Falcon, which we believe can smartly build very big monorepos (yes, we’ve looked into Bazel, but currently we are going with an in-house solution for reasons which are out of scope for this article). This is very important for us since even in the micro frontend world we want to be able to have a monorepo per extension, but it will also allow us to experiment with alternative solutions of having one huge thing which is built and deployed as a whole.

I’m not saying that we’ve given up on micro frontends; that is very far from the truth. We are actually very happy with the architecture and keep working on it. However, I believe it is very important not to be completely in love with one solution, and to keep trying to see if other things can work.

My most important takeaway from this whole article is that it has been an incredible experience to solve so many infrastructure challenges which only engineering organizations living on the bleeding edge of these technologies can tackle, and I believe we have tons more such challenges in our future.



About the author

Shahar Talmi