Performance degradation over time (starts at around 100 ms and grows to over 400 ms).
🐛 Bug Report
Over time, extracting chunks leads to the following graph:
[Graph: per-request chunk-extraction time, starting at around 100 ms and growing to over 400 ms]
To Reproduce
For each request (we use express), we set up server-side rendering as follows:
import { readFileSync } from 'fs';
import * as path from 'path';
import { ChunkExtractor } from '@loadable/server';
import { Request } from 'express';

const statsFile = path.resolve('./build/client/bundles/chunks.json');
// Parse the stats file once at module load and reuse the object per request:
// https://github.com/gregberge/loadable-components/issues/560
const stats = JSON.parse(readFileSync(statsFile).toString('utf8'));

export const getChunkExtractor = (request: Request) => {
  const { cdnHost } = request.user.siteSettings;
  const cdnUrl = cdnHost ? `https://${cdnHost}` : '';
  const publicPath = `${cdnUrl}${request.pluginOptions.settings.portalPrefix}/bundles/`;

  return new ChunkExtractor({
    stats,
    publicPath,
  });
};
// Per-request setup: everything SSR needs is attached to the express request.
request.pluginOptions = Object.assign({}, options);
request.applicationContext = getApplicationContext(request);
request.i18nContext = getI18nContext(request);
request.helmetContext = {};
request.routerContext = {};
request.shouldDoServerSideRendering = shouldDoServerSideRendering(request);
request.apolloCache = getApolloCache(request);
request.apolloLink = getApolloLink(request);
request.apolloClient = getApolloClient(request);
request.chunkExtractor = getChunkExtractor(request);
request.styleExtractor = new ServerStyleSheet();

// Manual cleanup, run after the response is sent: null out everything we
// attached to the request so it can be garbage collected.
const cleanup = async () => {
  request.styleExtractor.seal();
  await request.apolloClient.clearStore();
  await request.apolloCache.reset();
  request.apolloClient.link = null;
  request.apolloClient.cache = null;

  const keysToGarbageCollect = [
    'pluginOptions',
    'applicationContext',
    'i18nContext',
    'helmetContext',
    'routerContext',
    'shouldDoServerSideRendering',
    'apolloCache',
    'apolloLink',
    'apolloClient',
    'chunkExtractor',
    'styleExtractor',
  ];

  keysToGarbageCollect.forEach(key => {
    const subKeys = Object.keys(request[key]) || [];
    subKeys.forEach(subKey => {
      request[key][subKey] = null;
    });
    request[key] = null;
  });
};
await prePopulateCacheWithData(request);
const Application = request.pluginOptions.getApplication(request);

// First pass: render to HTML and collect styled-components styles.
request.performance.startMeasure('ssr-extract-styles');
const noSsrHtml = renderToString(
  <StyleSheetManager sheet={request.styleExtractor.instance}>
    <Application />
  </StyleSheetManager>
);
request.performance.stopMeasure('ssr-extract-styles');

// Second pass: collect loadable chunks (the step whose timing degrades).
request.performance.startMeasure('ssr-extract-chunks');
request.chunkExtractor.collectChunks(<Application />);
renderToString(
  <ChunkExtractorManager extractor={request.chunkExtractor}>
    <Application />
  </ChunkExtractorManager>
);
request.performance.stopMeasure('ssr-extract-chunks');

if (request.routerContext.url) {
  return response.redirect(request.routerContext.url);
}

if (request.routerContext.statusCode && request.routerContext.statusCode !== 200) {
  return next(
    new Boom(`Response code not 200 - ${request.routerContext.statusCode}`, {
      statusCode: request.routerContext.statusCode,
    })
  );
}

response.type('html');
sendPreloadHeaders(request, response);
sendHeadHtml(request, response);
sendBodyHtml(request, response, noSsrHtml);
response.end();
cleanup();
Expected behavior
I’ve set up a manual cleanup process to make sure we are cleaning up everything. While putting this bug report together, I also noticed that we are collecting chunks twice. I’m aware that we could theoretically combine collecting styles and chunks into a single render pass, but I have kept them separate to help pinpoint the issue. I’m also aware of #560, which is why I am passing stats rather than statsFile.
Any idea on what I could check next?
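For reference, the single-pass variant mentioned above could look roughly like the sketch below. It is only an illustration against the documented @loadable/server and styled-components APIs, reusing the stats, publicPath, and Application values from the code above; it is not the code we are currently running:

import { renderToString } from 'react-dom/server';
import { ChunkExtractor } from '@loadable/server';
import { ServerStyleSheet } from 'styled-components';

// collectChunks() wraps the element in a ChunkExtractorManager, so a single
// renderToString pass can feed both the chunk extractor and the style sheet.
const extractor = new ChunkExtractor({ stats, publicPath });
const sheet = new ServerStyleSheet();

const html = renderToString(
  sheet.collectStyles(extractor.collectChunks(<Application />))
);

const scriptTags = extractor.getScriptTags(); // loadable chunk <script> tags
const styleTags = sheet.getStyleTags();       // styled-components <style> tags
sheet.seal();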
Top GitHub Comments
Hello!
So I changed my codebase to the following and will run it in production for a while to see what’s what. Will report back later in the day.
It all depends on the way you made that leak, but usually that’s almost impossible to trace unless you have a test which does roughly the same thing you did - trying to render something 100 times and checking that everything is still fine. There are plenty of tools to debug memory issues, and that’s a bigger part of the Rust ideology, but we have to work with JavaScript. Keep calm and carry on.
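As an illustration of the kind of repro test described in that comment, a minimal sketch might look like the following. It assumes the same stats object and Application component as in the report above; the iteration count, publicPath, and names are only illustrative:

import { renderToString } from 'react-dom/server';
import { ChunkExtractor } from '@loadable/server';

const ITERATIONS = 100;

for (let i = 0; i < ITERATIONS; i++) {
  // A fresh extractor per iteration, mirroring the per-request setup above.
  const extractor = new ChunkExtractor({ stats, publicPath: '/bundles/' });

  const start = process.hrtime.bigint();
  renderToString(extractor.collectChunks(<Application />));
  const ms = Number(process.hrtime.bigint() - start) / 1e6;

  const heapMb = process.memoryUsage().heapUsed / 1024 / 1024;
  // If either number keeps growing run after run, something is being retained.
  console.log(`render ${i}: ${ms.toFixed(1)} ms, heap ${heapMb.toFixed(1)} MB`);
}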