Maximum number of server active requests exceeded. Error while uploading data using EC2.
🐛 Bug Report
Not able to upload data using EC2.
⚗️ Current Behavior
See the screenshot below.
Input Code
```python
import json
import time

import numpy as np
from PIL import Image
from tqdm import tqdm

import hub
from hub import Dataset, schema
from hub.schema import Tensor

mpii_schema = {
    "image": schema.Image(
        shape=(None, None, 3),
        max_shape=(10000, 10000, 3),
        dtype='uint8'),
    "isValidation": Tensor(max_shape=(100,), dtype='float64'),
    "img_paths": Tensor(max_shape=(100,), dtype='float64'),
    "img_width": Tensor(max_shape=(100,), dtype='float64'),
    "img_height": Tensor(max_shape=(100,), dtype='float64'),
    "objpos": Tensor(max_shape=(100,), dtype='float64'),
    "joint_self": Tensor(
        shape=(None, None),
        max_shape=(100, 100),
        dtype='float64'),
    "scale_provided": Tensor(max_shape=(100,), dtype='float64'),
    "annolist_index": Tensor(max_shape=(100,), dtype='float64'),
    "people_index": Tensor(max_shape=(100,), dtype='float64'),
    "numOtherPeople": Tensor(max_shape=(100,), dtype='float64'),
}


def get_anno(jsonfile):
    with open(jsonfile) as f:
        instances = json.load(f)
    annotations = []
    for i in range(len(instances)):
        annotations.append(instances[i])
    return annotations, len(annotations)


def upload_data(tag, schema, anno_shape, img_path, annotations):
    ds = Dataset(tag,
                 mode='w+',
                 schema=schema,
                 shape=(anno_shape,),
                 storage_cache=None)
    for i in tqdm(range(anno_shape)):
        ds["image", i] = np.array(Image.open(img_path + annotations[i]['img_paths']))
        ds["isValidation", i] = annotations[i]['isValidation']
        ds["img_paths", i] = annotations[i]['img_paths']
        ds["img_width", i] = annotations[i]['img_width']
        ds["img_height", i] = annotations[i]['img_height']
        ds["objpos", i] = np.array(annotations[i]['objpos'])
        ds["joint_self", i] = np.array(annotations[i]['joint_self'])
        ds["scale_provided", i] = annotations[i]['scale_provided']
        ds["annolist_index", i] = annotations[i]['annolist_index']
        ds["people_index", i] = annotations[i]['people_index']
        ds["numOtherPeople", i] = annotations[i]['numOtherPeople']
        if i % 1000 == 0:
            print(i, "instances uploaded.")
    ds.commit()


if __name__ == "__main__":
    tag = input("Enter tag (username/dataset_name): ")
    jsonfile = input("Enter json file path: ")
    img_path = input("Enter path to images: ")
    annotations, anno_shape = get_anno(jsonfile)
    t1 = time.time()
    upload_data(tag, mpii_schema, anno_shape, img_path, annotations)
    print("Time taken to upload:", (time.time() - t1), "sec")
```
Expected behavior/code
The upload stalls after roughly 11k instances (last checked). The same code works fine on Colab and stored all 25k instances (59 GB of data) locally.
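Since the error message suggests the server rejects writes once too many requests are in flight, one possible mitigation (not from the original report) is to wrap each per-sample write in a retry with exponential backoff, so transient "too many requests" failures don't abort the whole upload. This is a minimal stdlib sketch; `with_backoff` and `flaky_upload` are hypothetical names introduced here for illustration, not part of the hub API:

```python
import time
import random


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # sleep base, 2*base, 4*base, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))


# Usage: wrap a flaky upload step (simulated here by failing twice).
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:          # fail the first two attempts
        raise ConnectionError("server busy")
    return "ok"

result = with_backoff(flaky_upload, base_delay=0.01)
```

In the script above, each `ds[..., i] = ...` assignment (or the whole loop body for index `i`) could be wrapped this way, so a burst of server-side rejections delays the upload instead of killing it.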
⚙️ Environment
- Python version: 3.8.6 - Clang 12.0.0 (clang-1200.0.32.27)
- OS: [e.g. Ubuntu 18.04, OSX 10.13.4, Windows 10]
- IDE: [Vim, VS-Code, PyCharm]
- Packages: [Tensorflow==2.1.2 - latest]
🧰 Possible Solution (optional)
🖼 Additional context/Screenshots (optional)
Issue Analytics
- Created 3 years ago
- Comments: 6 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’ve had this issue before with boto. It might be boto’s limitation - I’ll see if there is a known workaround.

Finally reproduced the error. In forums, some point to Wasabi not being stable.