open cannot open an S3 URL whose key contains ':@'
Code to reproduce:
from boto3 import client
c = client('s3')
c.put_object(Bucket='oss-playground', Key=':@', Body=b'')
from smart_open import open
f = open('s3://oss-playground/:@')  # first attempt: raw key
from urllib import parse
f = open('s3://oss-playground/' + parse.quote(':@'))  # second attempt: percent-encoded key
Expected result:
The file can be opened.
Actual result:
The first attempt raises: ValueError: not enough values to unpack (expected 2, got 1)
The percent-encoded attempt raises: ValueError: '%3A%40' does not exist in the bucket 'oss-playground', or is forbidden for access
There should be at least some way to access 's3://oss-playground/:@'
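For what it's worth, the standard library already separates the bucket and key in this URL unambiguously, so a parser built on it could support such keys. A minimal sketch (this is not smart_open's actual parsing code):

```python
from urllib.parse import urlsplit

# urlsplit puts everything after the authority into .path, so a ':' or '@'
# in the key does not confuse the bucket/key split.
parts = urlsplit('s3://oss-playground/:@')
bucket = parts.netloc          # 'oss-playground'
key = parts.path.lstrip('/')   # ':@'
```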
Issue Analytics
- Created 4 years ago
- Comments: 7 (1 by maintainers)
I would think it's a design decision. If you want to support things like credentials embedded in the URL, you need to accept URL-encoded URLs, which means the bucket/key parts must be percent-encoded.
For example,
s3://user/name:pass/word@buc:ket/ke@y
needs to be encoded as
s3://user/name:pass/word@buc%3Aket/ke%40y
This would be a breaking change, though: a user who is accustomed to writing
s3://bucket/100%25
to access the key
100%25
would now get the key
100%
instead, because %25 decodes to % (urllib.parse.unquote('%25') == '%'). Alternatively, you could just drop support for the username/password/host/port parts, since they can be configured another way.
I think URL-encoding is the way to go.
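As a sketch of what that could look like: a parser, assuming the encoding convention above, that supports optional user:secret@ credentials while still allowing ':' and '@' in the bucket and key once they are written as %3A and %40. The name parse_s3_url and its return shape are illustrative, not smart_open's API:

```python
from urllib.parse import urlsplit, unquote

def parse_s3_url(url):
    """Illustrative parser: split the URL first, percent-decode each part after."""
    parts = urlsplit(url)
    if parts.scheme != 's3':
        raise ValueError('not an s3:// URL: %r' % url)
    # Everything before the last '@' in the authority is treated as credentials.
    userinfo, sep, host = parts.netloc.rpartition('@')
    user = password = None
    if sep:
        user, _, password = userinfo.partition(':')
        user, password = unquote(user), unquote(password)
    bucket = unquote(host)
    key = unquote(parts.path.lstrip('/'))
    return user, password, bucket, key

parse_s3_url('s3://oss-playground/%3A%40')
# (None, None, 'oss-playground', ':@')
parse_s3_url('s3://user:pass@buc%3Aket/ke%40y')
# ('user', 'pass', 'buc:ket', 'ke@y')
```

Note that slashes inside the credentials (as in the user/name example above) would also have to be encoded as %2F, because urlsplit ends the authority component at the first unencoded '/'.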
The AWS CLI allows a literal @ in URL paths, but that breaks the RFC, so we shouldn't follow its example. Could you please confirm whether this reasoning is correct?