Is DATA_UPLOAD_MAX_MEMORY_SIZE actually used?
Hi!
I have been using SRegistry on my Synology NAS without issue for a couple of months now, which is really great. However, I am starting to manipulate significantly larger containers (about 3 GB, whereas so far I used 700 MB ones). My NAS has only 2 GB of RAM (yes, I should definitely increase it, but under lockdown conditions that is impossible for now), and pushing my large containers fails with a 403 error (in addition to disrupting the other services running on the NAS). The client receives the 403 error, but the SRegistry logs show things like this:
```
[error] 6#6: *3392 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 172.17.0.1, server: localhost, request: "PUT /v2/push/imagefile/261/SHA_REDACTED HTTP/1.1", upstream: "uwsgi://172.17.0.4:3031", host: "REDACTED"
```
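For context, the "110: Operation timed out" above is nginx giving up while waiting for the uwsgi backend to respond. A minimal sketch of the standard nginx directives that govern this path is below; the values, listen port, and upstream address are illustrative guesses, not taken from the actual SRegistry nginx configuration:

```nginx
server {
    listen 80;

    # nginx rejects request bodies larger than client_max_body_size
    # (default 1 MB) with a 413, so large pushes need a much higher limit.
    client_max_body_size 4096M;

    location / {
        include uwsgi_params;
        uwsgi_pass 172.17.0.4:3031;

        # "upstream timed out (110)" is logged when the backend takes longer
        # than uwsgi_read_timeout (default 60s) to return a response header;
        # long-running uploads may need these raised.
        uwsgi_read_timeout 600s;
        uwsgi_send_timeout 600s;
    }
}
```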
In `config.py`, I set `DATA_UPLOAD_MAX_MEMORY_SIZE = 500`, but it does not seem to help.
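As a side note on the setting itself: in stock Django, `DATA_UPLOAD_MAX_MEMORY_SIZE` is measured in bytes, so a minimal sketch of the relevant defaults looks like the following (whether SRegistry actually forwards `config.py` values into Django settings is an assumption here, not something I have verified):

```python
# Sketch of the stock Django upload settings (values are in bytes).

# Max request body size (excluding file-upload data) before Django raises
# RequestDataTooBig; the default is 2.5 MB.
DATA_UPLOAD_MAX_MEMORY_SIZE = 2621440  # 2.5 MB

# Size above which an uploaded file is streamed to disk instead of being
# kept in memory; the default is also 2.5 MB.
FILE_UPLOAD_MAX_MEMORY_SIZE = 2621440  # 2.5 MB

# So "= 500" would mean 500 bytes, not 500 MB; None disables the check.
# DATA_UPLOAD_MAX_MEMORY_SIZE = None
```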
When I search the repository on GitHub, `DATA_UPLOAD_MAX_MEMORY_SIZE` seems to appear only once (in `config.py`). So I am wondering whether this setting is actually used, and whether there is a way to improve the situation on my side?
Note: copying the same containers to the same NAS via NFS works without issue, so this is not a network capacity problem.
I would be very glad to hear any suggestion on this.
Thanks a lot in advance.
Best regards and stay safe!
Hi @vsoch, don’t be sorry for poking me; I am very happy to see that you have an idea that could solve my problem. I have been very busy over the last few days, but I will try the PR ASAP!
I am testing your PR, but it will take some time before I can report back, as each test takes dozens of minutes. More soon.