
[BUG] Memory leak when creating “issues” with TV shows that have large season numbers

See original GitHub issue

Description

Hi!

Whenever you create an issue for a TV show that has high season numbers (Season 2003, Season 2004, and so on) and then try to view the created issue, Overseerr/Node.js will consume huge amounts of memory within seconds.

Version

v1.28.0 (snapcraft)

Steps to Reproduce

16/02/2022 - Slight update

  1. Log in to Overseerr
  2. Find a show with a large season number (in my case, MythBusters)
  3. Create an issue for Season 2003
  4. Select All Episodes and Issue: Video
  5. Press Submit
  6. Either press “View Issue” immediately after creating it, or press the “Issues” tab to view all created issues
  7. Watch as your RAM usage explodes

Screenshots

Creating the issue https://imgbox.com/dwZoteT7

Trying to access the Issue tab after running a sudo snap restart overseerr https://imgbox.com/VFTii82t

(updated the links, imgur is buggy atm)

Logs

No response

Platform

Overseerr is running on Ubuntu 21.10/impish

Device

Desktop

Operating System

Windows 11

Browser

Firefox

Additional Context

No relevant logs this time. Started from this thread: https://discord.com/channels/783137440809746482/789002037529804833/937119741376626719

Ended up creating this GH issue instead.

Edit - 16/02/2022

  • If you run into the same issue and are wondering how to edit your .db file, here’s a short guide for Linux. I don’t use my root user, so I just use sudo with my “local”/“home” user:

Quick guide to fix the DB file: to clean the DB I used a simple DB viewer/editor. In my case, I just chose the first one I found: https://github.com/sqlitebrowser/sqlitebrowser

The .db file usually lives in the /root/snap/overseerr folder if you’re using Linux. In my case, it lives in /root/snap/overseerr/common/db.

To be safe, shut down the Overseerr instance first: sudo snap stop overseerr

Just download all the files in there, or copy them: sudo cp -R /root/snap/overseerr/common/db /home/user/path/db, then sudo chown -R user:user /home/user/path/db if you don’t use your root user.

Open the db.sqlite3 with your DB viewer. If you use the one above, follow these steps to find the issue you want to delete (the one with the large season number):

  1. Open the db.sqlite3 with DB Browser for SQLite
  2. Select the Browse Data tab
  3. In the Table dropdown, select issue
  4. Find the issue with the large season number, and delete it
  5. Save the db file. You’ll be left with just the db.sqlite3 file in the folder you edited it in.

Then purge/delete the old .sqlite3* files in /root/snap/overseerr/common/db, copy the edited db.sqlite3 back with sudo cp db.sqlite3 /root/snap/overseerr/common/db, and remember to chown it back to the root user with sudo chown -R root:root /root/snap/overseerr/common/db.

Start/restart Overseerr again with sudo snap start overseerr, and rejoice that you can go into your Issues section again without everything crashing.
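The DB Browser steps above can also be done from the command line with the sqlite3 client. Below is a self-contained sketch against a throwaway database: the issue table name comes from the guide above, but the problemSeason column is an assumption for illustration, not Overseerr’s real schema — on the real file you would inspect the table first and delete by the row’s actual id.

```shell
# Toy model of the cleanup: create an "issue" table, then delete the row
# with the runaway season number -- the same operation the GUI steps perform.
# NOTE: "problemSeason" is a made-up column name for this demo.
sqlite3 demo.sqlite3 "CREATE TABLE issue (id INTEGER PRIMARY KEY, problemSeason INTEGER);"
sqlite3 demo.sqlite3 "INSERT INTO issue (id, problemSeason) VALUES (1, 3), (2, 2003);"
sqlite3 demo.sqlite3 "DELETE FROM issue WHERE problemSeason > 100;"
sqlite3 demo.sqlite3 "SELECT COUNT(*) FROM issue;"
rm demo.sqlite3
```

On the real database, stop the service first (sudo snap stop overseerr), back up db.sqlite3, and run the DELETE against the backed-up copy before putting it back.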

Code of Conduct

  • I agree to follow Overseerr’s Code of Conduct

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 11

Top GitHub Comments

1 reaction
sbrown7792 commented, Jun 22, 2022

In case this gets re-opened when someone is able to reproduce it: I was hit by the same symptoms described here, except the trigger was a request for a TV show with 176 seasons: House Hunters International (and yes, they’re numbered 1–176 on TMDb 🙄; the “why” is beyond me…). And since requests appear on the main page (under “Recent Requests”), I didn’t have to browse to any other page to trigger the memory “explosion” and subsequent crash:

<--- Last few GCs ---> 
[28:0x7f06f676c340]   255995 ms: Mark-sweep (reduce) 4066.0 (4143.8) -> 4065.1 (4143.8) MB, 2019.3 / 0.0 ms  (average mu = 0.708, current mu = 0.006) allocation failure scavenge might not succeed
[28:0x7f06f676c340]   257923 ms: Mark-sweep (reduce) 4066.3 (4143.8) -> 4065.5 (4144.0) MB, 1916.4 / 0.0 ms  (average mu = 0.556, current mu = 0.006) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
error Command failed with signal "SIGABRT".
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
yarn run v1.22.18
$ NODE_ENV=production node dist/index.js

I am running version 1.29.1 in Docker on Linux.
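The stack trace above is V8 hitting its default old-space limit and aborting. The real fix is removing the offending rows, but as a stopgap (assumption: it only delays the crash, the leak still grows) you can raise Node’s heap ceiling with the standard V8 flag --max-old-space-size; the value is in MB, and 4096 below is an arbitrary example:

```shell
# Stopgap only -- this enlarges the heap so the SIGABRT takes longer to hit.
# NODE_OPTIONS is read by Node at startup; 4096 MB is an example value.
export NODE_OPTIONS="--max-old-space-size=4096"
echo "$NODE_OPTIONS"
# then launch Overseerr as usual, e.g.: node dist/index.js
```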

I was able to delete the relevant media_request and season_request rows by using the sqlite3 db client, which then let me browse the overseerr GUI without it crashing. Unfortunately I wasn’t able to reproduce after re-adding the request (after clearing data for the show), or adding an issue relating to the series (after doing a full Plex scan).
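A hedged sketch of that cleanup: the media_request and season_request table names come from the comment above, but the linking column (requestId) and the sample id are assumptions, shown here against a throwaway database rather than Overseerr’s real schema. Deleting the season_request children before the media_request parent avoids leaving orphaned rows:

```shell
# Toy model: remove a request and its per-season rows, children first.
# NOTE: "requestId" is an assumed foreign-key column for this demo.
sqlite3 demo.sqlite3 "CREATE TABLE media_request (id INTEGER PRIMARY KEY);"
sqlite3 demo.sqlite3 "CREATE TABLE season_request (id INTEGER PRIMARY KEY, requestId INTEGER);"
sqlite3 demo.sqlite3 "INSERT INTO media_request (id) VALUES (42);"
sqlite3 demo.sqlite3 "INSERT INTO season_request (requestId) VALUES (42), (42), (42);"
sqlite3 demo.sqlite3 "DELETE FROM season_request WHERE requestId = 42;"
sqlite3 demo.sqlite3 "DELETE FROM media_request WHERE id = 42;"
sqlite3 demo.sqlite3 "SELECT COUNT(*) FROM season_request;"
rm demo.sqlite3
```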

1 reaction
TheCatLady commented, Feb 10, 2022

Seems the #2523 fix has been merged into develop. So, after the next stable release, if I wipe my Overseerr setup and scan it all over, will these series be scanned in with the TMDb IDs instead (at least those that have an available TMDb match/ID)?

They will only get picked up if they are added with the TMDb season numbers in Plex. Otherwise, you’ll need to manually mark them as available. (If you want to discuss this issue further, Discord may be a better place to do so, so we can keep this issue on-topic.)
