
Trouble with curl post when file is big

See original GitHub issue

Context

  • torchserve version: 0.0.1b20200409
  • torch version: 1.5.0
  • torchvision version [if any]: 0.6.0
  • torchtext version [if any]: 0.6.0
  • torchaudio version [if any]: no
  • java version: openjdk 11.0.5
  • Operating System and version: Ubuntu 16.04.6 LTS
  • Environment: conda env with Python 3.7.7

  • Installed using source? [yes/no]: no
  • Are you planning to deploy it using docker container? [yes/no]: yes
  • Is it a CPU or GPU environment?: gpu
  • Using a default/custom handler? [If possible upload/share custom handler/model]: no
  • What kind of model is it e.g. vision, text, audio?: vision
  • Are you planning to use local models from model-store or public url being used e.g. from S3 bucket etc.?: local models from model-store
  • Provide config.properties, logs [ts.log] and parameters used for model registration/update APIs:
  • Link to your project [if any]:

when data.pkl is 3.1MB:

config.properties: default

command: curl -X POST http://127.0.0.1:8080/predictions/lungcls -T data.pkl -v

ts_log.log:

2020-05-12 15:47:53,081 [INFO ] main org.pytorch.serve.ModelServer -
TS Home: /home/gaozebin/.conda/envs/torchserver/lib/python3.7/site-packages
Current directory: /data/gaozb/server_lung/detsegandcovid19/covId2019
Temp directory: /tmp
Number of GPUs: 8
Number of CPUs: 56
Max heap size: 30688 M
Python executable: /home/gaozebin/.conda/envs/torchserver/bin/python
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Model Store: /data/gaozb/server_lung/detsegandcovid19/covId2019/model_store
Initial Models: lungcls=lungcls.mar
Log dir: /data/gaozb/server_lung/detsegandcovid19/covId2019/logs
Metrics dir: /data/gaozb/server_lung/detsegandcovid19/covId2019/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 8
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
2020-05-12 15:47:53,087 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: lungcls.mar
2020-05-12 15:47:55,701 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model lungcls
2020-05-12 15:47:55,701 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model lungcls
2020-05-12 15:47:55,701 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model lungcls loaded.
2020-05-12 15:47:55,701 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: lungcls, count: 8
2020-05-12 15:47:55,719 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2020-05-12 15:47:55,793 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9003
2020-05-12 15:47:55,793 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9006
2020-05-12 15:47:55,796 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35467
2020-05-12 15:47:55,796 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35464
2020-05-12 15:47:55,797 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,797 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,797 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,797 [DEBUG] W-9006-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9006-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,797 [DEBUG] W-9003-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,798 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,802 [INFO ] W-9003-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9003
2020-05-12 15:47:55,802 [INFO ] W-9006-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9006
2020-05-12 15:47:55,812 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9007
2020-05-12 15:47:55,813 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35461
2020-05-12 15:47:55,813 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,813 [DEBUG] W-9007-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9007-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,814 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,814 [INFO ] W-9007-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9007
2020-05-12 15:47:55,821 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9000
2020-05-12 15:47:55,821 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9002
2020-05-12 15:47:55,822 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35462
2020-05-12 15:47:55,822 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35463
2020-05-12 15:47:55,823 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,823 [DEBUG] W-9000-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,823 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,823 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,823 [INFO ] W-9000-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2020-05-12 15:47:55,823 [DEBUG] W-9002-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,823 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,823 [INFO ] W-9002-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9002
2020-05-12 15:47:55,825 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9005
2020-05-12 15:47:55,825 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9001
2020-05-12 15:47:55,826 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35466
2020-05-12 15:47:55,826 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,827 [DEBUG] W-9005-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9005-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,827 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35460
2020-05-12 15:47:55,827 [INFO ] W-9005-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9005
2020-05-12 15:47:55,827 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,827 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,827 [DEBUG] W-9001-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,827 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,827 [INFO ] W-9001-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2020-05-12 15:47:55,828 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /tmp/.ts.sock.9004
2020-05-12 15:47:55,828 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]35465
2020-05-12 15:47:55,829 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2020-05-12 15:47:55,894 [DEBUG] W-9004-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-lungcls_1.0 State change null -> WORKER_STARTED
2020-05-12 15:47:55,894 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.7.7
2020-05-12 15:47:55,894 [INFO ] W-9004-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9004
2020-05-12 15:47:55,898 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2020-05-12 15:47:55,898 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2020-05-12 15:47:55,900 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2020-05-12 15:47:55,900 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9006.
2020-05-12 15:47:55,901 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9003.
2020-05-12 15:47:55,900 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9004.
2020-05-12 15:47:55,901 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9001.
2020-05-12 15:47:55,901 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9007.
2020-05-12 15:47:55,902 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9002.
2020-05-12 15:47:55,903 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9005.
2020-05-12 15:47:55,903 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /tmp/.ts.sock.9000.
2020-05-12 15:48:05,463 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7f763aa18d10>.
2020-05-12 15:48:05,463 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,463 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:0.
2020-05-12 15:48:05,463 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,463 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:05,463 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,464 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:05,464 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:05,467 [INFO ] W-9007-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 9531
2020-05-12 15:48:05,467 [DEBUG] W-9007-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9007-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7f11991ffc90>.
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:2.
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 9588
2020-05-12 15:48:05,524 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:05,525 [DEBUG] W-9001-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:05,525 [INFO ] W-9001-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:05,658 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7fd5758c8d90>.
2020-05-12 15:48:05,658 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:3.
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 9723
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:05,659 [DEBUG] W-9002-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:05,659 [INFO ] W-9002-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:06,161 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7f8dd3ce3d90>.
2020-05-12 15:48:06,161 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:06,161 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:1.
2020-05-12 15:48:06,161 [INFO ] W-9000-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 10225
2020-05-12 15:48:06,161 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:06,162 [DEBUG] W-9000-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:06,162 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:06,162 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:06,162 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:06,162 [INFO ] W-9000-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7fc0e1ab3d90>.
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:5.
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,560 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:09,561 [INFO ] W-9004-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 13625
2020-05-12 15:48:09,561 [INFO ] W-9004-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:09,561 [DEBUG] W-9004-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9004-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:09,679 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7fbb695b4d10>.
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:6.
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 13744
2020-05-12 15:48:09,680 [INFO ] W-9005-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:09,680 [DEBUG] W-9005-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9005-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:09,685 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7fdb67825d50>.
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:7.
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:09,686 [INFO ] W-9006-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 13750
2020-05-12 15:48:09,686 [DEBUG] W-9006-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9006-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7f479042cd10>.
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:cuda:4.
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model dir:/tmp/models/4476426a5c31f0793b9586111df91fc843a56494.
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - resGRU Model init done.
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Model file 0.9869522036002483_0.9273959341723137_0.9733333333333334_0.840782122905028_PG.pkl loaded successfully
2020-05-12 15:48:10,120 [INFO ] W-9003-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 14183
2020-05-12 15:48:10,120 [DEBUG] W-9003-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-lungcls_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2020-05-12 15:48:57,295 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Device Id:<ts.context.Context object at 0x7f763aa18d10>.
2020-05-12 15:48:57,295 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -
2020-05-12 15:48:57,295 [INFO ] W-9007-lungcls_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 3212
2020-05-12 15:48:57,295 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - <class 'bytearray'>
2020-05-12 15:48:57,295 [INFO ] W-9007-lungcls_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - 21
2020-05-12 15:48:57,299 [DEBUG] W-9007-lungcls_1.0 org.pytorch.serve.wlm.Job - Waiting time: 1, Backend time: 3223

Expected Behavior

Return the prediction probabilities.

Current Behavior

When data.pkl is 7.1MB, the request fails with 413 Request Entity Too Large:

*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> POST /predictions/lungcls HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 7957803
> Expect: 100-continue
>
< HTTP/1.1 413 Request Entity Too Large
< content-length: 0
* HTTP error before end of send, stop sending
<
* Closing connection 0

Possible Solution

Split the data, or raise the server's maximum request size.
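Rather than splitting the payload, the more direct fix is suggested by the startup log itself: Maximum Request Size is 6553500 bytes (about 6.5MB), which a 7.1MB upload exceeds, hence the 413. A sketch of a config.properties raising the limits, assuming TorchServe's documented max_request_size / max_response_size properties (values in bytes):

```
# Raise the ~6.5MB defaults to 100MB (values are in bytes)
max_request_size=104857600
max_response_size=104857600
```

Then restart the server pointing at the file, e.g. torchserve --start --ts-config config.properties, and confirm the new limits appear in the "Maximum Request Size" line of the startup log.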

Steps to Reproduce

Run curl -X POST http://127.0.0.1:8080/predictions/lungcls -T data.pkl -v with a data.pkl larger than the 6553500-byte request limit.

Description:

I have a custom handler.py, and the server starts normally. When I upload a file smaller than 6MB with curl -X POST http://127.0.0.1:8080/predictions/modelname -T data.pkl, it works well; with a bigger file, such as 7.5MB, I get nothing back, no response at all.

BTW,

When I send the request from Python like this:

import requests

with open('data.pkl', 'rb') as f:
    r = requests.post(url, files={'file': f})

I get EOFError: Ran out of input on the server side, and r comes back with status 503. data.pkl was saved with the pickle module and can be read with imgs = pickle.load(io.BytesIO(f)) when I use curl with a small file.
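One plausible explanation for the EOFError (an assumption, not confirmed in the thread): curl -T sends the file as the raw request body, while requests' files= wraps it in a multipart/form-data envelope, so a handler that calls pickle.load directly on the body sees non-pickle or empty bytes. A minimal sketch of the roundtrip the handler presumably performs:

```python
import io
import pickle

# What `curl -T data.pkl` transmits: the raw pickle bytes as the body.
imgs = [[0.1, 0.2], [0.3, 0.4]]
payload = pickle.dumps(imgs)
assert pickle.load(io.BytesIO(payload)) == imgs

# pickle.load on an empty or truncated body is exactly what raises
# "EOFError: Ran out of input".
try:
    pickle.load(io.BytesIO(b""))
except EOFError:
    print("EOFError: Ran out of input")
```

With requests, sending the raw bytes instead, e.g. requests.post(url, data=f.read()), mirrors what curl -T does, assuming the handler reads the request body directly.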

Issue Analytics

  • State:closed
  • Created 3 years ago
  • Comments:9 (2 by maintainers)

Top GitHub Comments

1 reaction
maaquib commented, Jan 19, 2021

@RickyGunawan09 You can find details on how to set config properties here

0 reactions
RickyGunawan09 commented, Jan 20, 2021

> @RickyGunawan09 You can find details on how to set config properties here

@maaquib thank you
