
Here's an unexpected issue I've run into with Webpack code splitting in the wild. Imagine this scenario:

  1. The user loads a React app with Webpack code splitting and a few bundle chunks are loaded
  2. A deploy happens and the contents of any future chunks that the user might receive from the server are updated (note: the previous chunks get deleted on the server during a deploy)
  3. The user clicks on a link and loads a new route which triggers more bundle chunks to load. Except these new chunks are incompatible with the ones the user's browser has already loaded and the app breaks because of a runtime error

How can this scenario be prevented?

One possible solution would be to maintain multiple versioned sets of chunks but I'm wondering if there's a simpler solution being used by large-scale apps.

If preload-webpack-plugin is used, all chunks can be prefetched but they will only stay cached for a short time (5 minutes in Chrome).

  • I'm having the same considerations. Did you find a solution? Thank you! Commented Feb 26, 2018 at 10:27
  • @JordanEnev we ended up maintaining multiple versions of deploys so that the old chunks wouldn't get invalidated. Commented Feb 26, 2018 at 18:55
  • Thank you! If I find something simpler, I'll write back. Commented Feb 26, 2018 at 20:58
  • Interested in a solution for this. Would it be possible to have the Webpack loader code respond with some sort of alert that asks the user to reload? Commented Jan 23, 2019 at 13:22
  • A simple solution: push via WebSocket after a deploy and show a notification that the page should be reloaded (window.location.reload()), or use long polling and check the latest version from somewhere, for example from the latest index.html. Commented Sep 3, 2020 at 14:41
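
A minimal sketch of the long-polling idea from the last comment, assuming the deploy publishes a small version endpoint (the /version.json URL and the injected BUILD_ID are assumptions, not anything Webpack provides out of the box):

// Assumes the deploy writes the current build id to /version.json and the
// same id is baked into the bundle at build time (e.g. via DefinePlugin).
const CURRENT_VERSION = process.env.BUILD_ID;

async function checkForNewVersion() {
  const res = await fetch('/version.json', { cache: 'no-store' });
  const { version } = await res.json();
  if (version !== CURRENT_VERSION) {
    // Let the user decide when to pick up the freshly deployed bundles.
    if (window.confirm('A new version is available. Reload now?')) {
      window.location.reload();
    }
  }
}

setInterval(checkForNewVersion, 60 * 1000); // poll once a minute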

4 Answers


As Max Stoiber writes on spectrum.chat:

ServiceWorkers come in really handy when doing code splitting!

We use the excellent offline-plugin by @nekr to cache all the current bundles locally, so no matter if the server updates the files or not the ServiceWorker will always serve the files from the local cache. Every hour it will check the server for updates and, if an update is available, download all the fresh bundles from the remote server and cache them locally. The next time the user restarts the app the new version of the app is used! 💯

https://github.com/NekR/offline-plugin
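
A minimal sketch of wiring the plugin in, roughly following its README (options such as the update strategy and caching scope are omitted; check the repo for current usage):

// webpack.config.js
const OfflinePlugin = require('offline-plugin');

module.exports = {
  // ...existing config...
  plugins: [
    // The README recommends adding OfflinePlugin last so it sees
    // every emitted asset, including the split chunks.
    new OfflinePlugin(),
  ],
};

// In the app's entry point: install the ServiceWorker runtime.
require('offline-plugin/runtime').install();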

This solution means your app downloads all the chunks up front, which defeats the purpose of code splitting in terms of bandwidth, but at least you still retain the benefit of only parsing the chunks you need to load the app, which for me is significant on slow devices. Also, browser refreshes/caching now involves the Service Worker lifecycle (see "Waiting" at https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle).


3 Comments

I disagree that it's a solution. Better to say it's a workaround for the problem. I don't understand why developers hide problems like this.
Yes, I agree, but I haven't found a better solution yet. And adding a service worker complicates things a lot.
I wouldn't say it's a workaround per se. The problem comes from the requirement to load only what the user needs; by doing that, we have to make sure an update doesn't break users who aren't on the new version yet. Another approach could be to have the service worker "listen" for app changes somehow and update the dynamic code accordingly, but that might be frustrating and complicated to set up.

This problem is extremely well stated.

I will add, though, that "deletion" might not be the right name for what's happening, depending on the setup.

My initial response was that this was a caching problem: old chunk files were being picked up instead of the new ones. That's close to what was happening; at least in my case I had the following:

index.js

import { lazy } from 'react';

const Page1 = lazy(() => import('./page/Page1'));
const Page2 = lazy(() => import('./page/Page2'));

const main = () => {
  // Look up the lazy route component for the current path and "render" it
  // (stand-in for some render/router implementation).
  ({
    '/page1': Page1,
    '/page2': Page2,
  })[window.location.pathname]();
};
  1. V1 Deployed at (https://my-domain/distribution_folder/*)
  2. User would load V1 index.js
  3. V2 Deployed at (https://my-domain/distribution_folder/*)
  4. User (who hadn't refreshed) would dynamically load a chunked route using their cached V1 index.js file.
  5. Request would be sent to (https://my-domain/distribution_folder/{page_name}.{chunk_hash}.js)
  6. A chunk error would occur because that unique chunk would no longer be there.

It's interesting because the provider being used was migrating traffic to the new version, so I thought that would be the end of it. What I wasn't realizing was that any user could still be using a previously deployed version. How would they know? They're already using the application; the browser has already downloaded it (index.js).

The solution really depends on where you're dynamically importing these chunks. In the case above, since they're page routes, we can do a hard refresh when the user requests a different page and we can't find a chunk (sketched below, after the cache-header notes). This assumes, however, that your Cache-Control headers are set up correctly. For example:

  • index.js -> Cache-Control: no-store
  • page/{page_name}.{chunk_hash}.js -> Cache-Control: public,max-age=31536000,immutable

We can make these chunks immutable because sometimes they don't change between releases, and if they haven't changed, why not use the cached version? However, index.js cannot be cached, because it is the "router" that dynamically loads the content, and it will change on every release.
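
One way to implement the "hard refresh when a chunk is missing" behavior is to wrap the dynamic import so a rejected chunk request falls back to a full reload. A rough sketch, where lazyWithReload is a hypothetical helper rather than anything React or Webpack provide:

import { lazy } from 'react';

// Wrap React.lazy so a rejected chunk request (e.g. Webpack's ChunkLoadError
// after a deploy removed the old hashed file) triggers a full reload, which
// re-fetches the new index.js and its matching chunks.
const lazyWithReload = (importFn) =>
  lazy(() =>
    importFn().catch((error) => {
      window.location.reload();
      // Re-throw so the failure is still visible if the reload is blocked.
      throw error;
    })
  );

const Page1 = lazyWithReload(() => import('./page/Page1'));
const Page2 = lazyWithReload(() => import('./page/Page2'));

In practice you would probably also guard against reload loops (for example with a sessionStorage flag) so a persistently missing chunk doesn't refresh the page forever.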

Pros

  • No more chunk load errors
  • We don't need to load everything on first page load
  • Less complexity by not having a service worker

Cons

  • This approach forces a refresh for users



If the chunk filenames are hashed, wouldn't an old route link to the old hashed chunk (which presumably would still be available) and load everything fine?

3 Comments

But in this scenario, a deploy will erase the old hashed chunks so they would no longer be available.
You could preserve the old chunks by hosting them on a CDN / in S3.
If they're hashed and uploaded to the same spot, yes, this is a totally valid approach, as long as those files are truly immutable :)

https://webpack.js.org/guides/caching/#output-filenames

A simple way to ensure the browser picks up changed files is by using output.filename substitutions. The [hash] substitution can be used to include a build-specific hash in the filename, however it's even better to use the [chunkhash] substitution which includes a chunk-specific hash in the filename.
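
For reference, a minimal output configuration along the lines of that guide (names and paths are illustrative):

// webpack.config.js
const path = require('path');

module.exports = {
  output: {
    // A chunk-specific hash means a chunk's URL only changes when
    // its content changes, so unchanged chunks stay cacheable.
    filename: '[name].[chunkhash].js',
    chunkFilename: '[name].[chunkhash].js',
    path: path.resolve(__dirname, 'dist'),
  },
};

As the comment below notes, hashing alone only helps if previously deployed chunks are kept available (for example on an immutable CDN/S3 path) instead of being deleted on each deploy.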

1 Comment

But in this scenario, a deploy will erase the old hashed chunks so they would no longer be available.
