Can't list buckets
See original GitHub issue

I have some AWS credentials for which I can't list buckets. I almost never have bucket-level permissions, but I do have permissions on prefixes. So something like
import s3fs

fs = s3fs.S3FileSystem(profile_name='user')
fs.ls('s3://home/sseabold/prefix')
won't work, because I can't list any buckets (S3Client.list_buckets fails), and I also don't generally have permissions at the bucket level.
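To make the failure mode concrete, here is a minimal sketch of the fallback behavior the issue is asking for: ignore a denied account-wide bucket listing and go straight to listing the requested prefix. The names `list_buckets`, `list_prefix`, `denied`, and `prefix_listing` are hypothetical stand-ins for the real S3 calls (e.g. S3Client.list_buckets / list_objects_v2), not s3fs internals.

```python
# Hypothetical sketch: fall back to a prefix listing when the
# account-wide bucket listing is denied. `list_buckets` and
# `list_prefix` are injected callables standing in for real S3 calls.
class AccessDenied(Exception):
    pass

def ls(path, list_buckets, list_prefix):
    """Try listing all buckets first; if the credentials lack that
    permission, list the requested prefix directly instead of failing."""
    try:
        list_buckets()          # fails for prefix-scoped credentials
    except AccessDenied:
        pass                    # ignore: we may still read the prefix
    return list_prefix(path)

# Stubs mimicking prefix-only credentials:
def denied():
    raise AccessDenied("ListBuckets not permitted")

def prefix_listing(path):
    return [path + "/file1", path + "/file2"]

print(ls("home/sseabold/prefix", denied, prefix_listing))
# -> ['home/sseabold/prefix/file1', 'home/sseabold/prefix/file2']
```

The point is that a denied ListBuckets call should not abort the whole operation when the caller asked about a specific prefix.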
Similar issues to those here [1, 2] for the default (old) boto behavior.
[1] https://github.com/conda/conda/pull/2126 [2] https://github.com/blaze/odo/pull/448
Issue Analytics
- State:
- Created: 7 years ago
- Comments: 19 (7 by maintainers)
Top GitHub Comments
Yes, agreed: we should check ls('') first, and if the bucket is not found there, try ls(bucket); if that succeeds, the bucket exists.
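The two-step check described above can be sketched as follows. This is not the actual s3fs implementation; `bucket_exists` is a hypothetical helper, and `FakeFS` is a stub mimicking credentials that cannot call ListBuckets but can list one specific bucket.

```python
# Sketch of the suggested fallback: try the root listing (all buckets)
# first; if the target bucket isn't visible there, try listing the
# bucket itself - success means it exists even though ListBuckets
# hid it or was denied. `fs` is any object with an s3fs-like ls().
def bucket_exists(fs, bucket):
    try:
        if bucket in fs.ls(""):      # root listing: all visible buckets
            return True
    except Exception:
        pass                         # ListBuckets may be denied outright
    try:
        fs.ls(bucket)                # direct listing succeeds -> exists
        return True
    except Exception:
        return False

class FakeFS:
    """Mimics credentials that can't ListBuckets but can list 'home'."""
    def ls(self, path):
        if path == "":
            raise PermissionError("ListBuckets denied")
        if path == "home":
            return ["home/sseabold"]
        raise FileNotFoundError(path)

print(bucket_exists(FakeFS(), "home"))    # True
print(bucket_exists(FakeFS(), "other"))   # False
```

Swallowing the root-listing error is the key design choice: a denied ListBuckets is expected for prefix-scoped credentials, so only the direct bucket listing decides the answer.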
https://github.com/dask/s3fs/blob/bfd5de29270a0063935889ce089f84b3f803012b/s3fs/core.py#L791
Would it make more sense to change this line to
self.ls(bucket)
? Some credentials have limited ACLs and cannot see other buckets.