Memory usage seems to increase when a table.insert() error occurs
I am the original reporter of issue #1313. Unfortunately, I am still facing the issue. The cause of the problem seems to lie elsewhere, and I was able to reproduce it, so I'd like you to check it out.
As olavloite mentioned in #1313, memory usage is stable in his script (https://github.com/googleapis/nodejs-spanner/issues/1313#issuecomment-790019913). However, memory appears to leak slightly when table.insert() throws an error, and I think this is the root cause of our problem.
The results below compare two cases: table.upsert(), and table.insert() causing an error.
Environment details
- OS: macOS
- Node.js version: v14.17.1
- npm version: 7.20.5
- @google-cloud/spanner version: 5.13.1
Steps to reproduce
- Cause an error in table.insert() (for example, an ALREADY_EXISTS error from inserting a duplicate key)
- Repeat many times (50,000 iterations in the script below)
My test script:
const {Spanner} = require('@google-cloud/spanner');

const spanner = new Spanner();
const instance = spanner.instance('my-instance');
const database = instance.database('my-database');
const table = database.table('Singers');

main().then(() => console.log('[INFO] Finished'));

async function main() {
  for (let row = 1; row <= 50000; row++) {
    const newRow = {
      SingerId: 1,
      FirstName: 'firstname',
      LastName: 'lastname',
    };
    // await table.upsert(newRow);
    try {
      await table.insert(newRow);
    } catch (err) {
      // Ignore gRPC ALREADY_EXISTS (code 6); see the note after the script.
      if (err.code !== 6) {
        throw err;
      }
    }
    if (row % 25 === 0) {
      console.log(`[INFO] Rows updated so far: ${row}`);
      // global.gc is only defined when Node.js is started with --expose-gc.
      if (global.gc) {
        global.gc();
        const used = process.memoryUsage().heapUsed / 1024 / 1024;
        console.log(`[INFO] Memory usage: ${used} MB`);
      }
    }
  }
}
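As a side note, the err.code check above relies on the numeric gRPC status code 6, which is ALREADY_EXISTS. A hypothetical helper that expresses the same check with the named constant, assuming @grpc/grpc-js is resolvable (it ships as a transitive dependency of the Spanner client):

// Hypothetical helper: insert a row and swallow ALREADY_EXISTS errors.
const {status} = require('@grpc/grpc-js');

async function insertIgnoringDuplicates(table, row) {
  try {
    await table.insert(row);
  } catch (err) {
    // status.ALREADY_EXISTS === 6, so this matches the `err.code !== 6` check in the script.
    if (err.code !== status.ALREADY_EXISTS) {
      throw err;
    }
  }
}

This is only meant to make the intent of the error check explicit; the reproduction does not depend on it.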
Output (with table.upsert()):
[INFO] Rows updated so far: 25
[INFO] Memory usage: 17.850181579589844 MB
[INFO] Rows updated so far: 50
[INFO] Memory usage: 17.832420349121094 MB
[INFO] Rows updated so far: 75
[INFO] Memory usage: 17.85736846923828 MB
[INFO] Rows updated so far: 100
[INFO] Memory usage: 17.891868591308594 MB
[INFO] Rows updated so far: 125
[INFO] Memory usage: 17.89563751220703 MB
[INFO] Rows updated so far: 150
[INFO] Memory usage: 17.9027099609375 MB
...
[INFO] Rows updated so far: 16150
[INFO] Memory usage: 18.864845275878906 MB
[INFO] Rows updated so far: 16175
[INFO] Memory usage: 18.887779235839844 MB
[INFO] Rows updated so far: 16200
[INFO] Memory usage: 18.87274932861328 MB
[INFO] Rows updated so far: 16225
[INFO] Memory usage: 18.86701202392578 MB
...
[INFO] Memory usage: 18.895301818847656 MB
[INFO] Rows updated so far: 49925
[INFO] Memory usage: 18.900909423828125 MB
[INFO] Rows updated so far: 49950
[INFO] Memory usage: 18.90435028076172 MB
[INFO] Rows updated so far: 49975
[INFO] Memory usage: 18.941001892089844 MB
[INFO] Rows updated so far: 50000
[INFO] Memory usage: 18.904075622558594 MB
Output (with table.insert() causing an error):
[INFO] Rows updated so far: 25
[INFO] Memory usage: 17.84894561767578 MB
[INFO] Rows updated so far: 50
[INFO] Memory usage: 17.83715057373047 MB
[INFO] Rows updated so far: 75
[INFO] Memory usage: 17.863059997558594 MB
[INFO] Rows updated so far: 100
[INFO] Memory usage: 17.921218872070312 MB
[INFO] Rows updated so far: 125
[INFO] Memory usage: 17.93242645263672 MB
[INFO] Rows updated so far: 150
[INFO] Memory usage: 17.938697814941406 MB
...
[INFO] Rows updated so far: 16150
[INFO] Memory usage: 20.609634399414062 MB
[INFO] Rows updated so far: 16175
[INFO] Memory usage: 20.611862182617188 MB
[INFO] Rows updated so far: 16200
[INFO] Memory usage: 20.61297607421875 MB
[INFO] Rows updated so far: 16225
[INFO] Memory usage: 20.616310119628906 MB
[INFO] Rows updated so far: 16250
[INFO] Memory usage: 20.620315551757812 MB
...
[INFO] Memory usage: 23.255882263183594 MB
[INFO] Rows updated so far: 49925
[INFO] Memory usage: 23.259368896484375 MB
[INFO] Rows updated so far: 49950
[INFO] Memory usage: 23.264991760253906 MB
[INFO] Rows updated so far: 49975
[INFO] Memory usage: 23.25994873046875 MB
[INFO] Rows updated so far: 50000
[INFO] Memory usage: 23.261062622070312 MB
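A rough comparison of the two runs, using the first and last heap samples shown above, suggests roughly five times more heap is retained per iteration when the insert fails:

// Back-of-the-envelope comparison of the two runs above
// (first and last heap samples, 50,000 iterations each).
const rows = 50000;
const upsertGrowthMb = 18.904 - 17.850; // ~1.05 MB with table.upsert()
const insertGrowthMb = 23.261 - 17.849; // ~5.41 MB with failing table.insert()

console.log(`upsert: ~${(upsertGrowthMb * 1024 / rows).toFixed(3)} KB retained per row`);       // ~0.022
console.log(`insert error: ~${(insertGrowthMb * 1024 / rows).toFixed(3)} KB retained per row`); // ~0.111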
Top GitHub Comments
Today, I deployed our functions with nodejs-spanner v5.x to our production environment. The session pool settings are the same as in my previous comment (a sketch of how such settings are passed follows below). The results are as shown in the image below, and they look good, with no trend of increasing memory utilization.
So, I am going to close this issue.
I appreciate your cooperation.
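For context, session pool settings in nodejs-spanner are passed when obtaining the database handle. A minimal sketch, where the option values are placeholders and not the reporter's actual configuration:

const {Spanner} = require('@google-cloud/spanner');

const spanner = new Spanner();
const instance = spanner.instance('my-instance');

// Session pool options are supplied as the second argument to instance.database().
// The values below are assumptions for illustration only.
const database = instance.database('my-database', {
  min: 25,   // minimum number of sessions kept open
  max: 100,  // maximum number of sessions the pool may create
});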
Yes. It means gcloud functions deploy ... --max-instances=1.
There is an error below. I don't think it was garbage collection; rather, the Cloud Functions instance restarted due to the error.
As you said, it seems to work well when there are enough instances, so I will check this with our actual workload.