
Support setting a different endpoint for COPY TO / FROM for DNS-style S3 bucket addresses

See original GitHub issue

Use case: I want to be able to export data from CrateDB using COPY TO / FROM with (AWS) S3-compatible technologies like Minio, so that I don’t rely on AWS for my storage setup.

Feature description: With the AWS SDK, the endpoint defaults to <bucket>.s3.amazonaws.com. To use my Minio S3 bucket, I can enter my own endpoint in the URI for COPY FROM / TO commands. The ability to use AWS S3 is not affected.
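
For illustration, a minimal sketch of the requested usage against a self-hosted MinIO endpoint; the hostname, credentials, bucket, and table names are hypothetical, and the assumption that the S3 URI accepts an optional host:port segment reflects the request rather than the exact syntax CrateDB ended up shipping:

    -- Sketch only: export to and import from a hypothetical MinIO endpoint
    COPY quotes TO DIRECTORY 's3://minioadmin:minioadmin@minio.example.internal:9000/my-bucket/quotes/';
    COPY quotes FROM 's3://minioadmin:minioadmin@minio.example.internal:9000/my-bucket/quotes/*';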


Origin: https://community.crate.io/t/export-table-using-copy-to-command-to-alternate-s3-endpoint/246/4

S3 Client Helper: https://github.com/crate/crate/blob/b8f5bfc1ae1d34b0426f2a01217fde663505225b/server/src/main/java/io/crate/external/S3ClientHelper.java

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Reactions: 7
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

3 reactions
emres commented, Sep 16, 2021

It would be nice to be able to rely on S3-compatible storage. Google Cloud Platform storage has S3 compatibility, and Digital Ocean, Linode, and Vultr offer S3-compatible object storage as well.

For now, I am using s3fs as a band-aid, together with file://.
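
A minimal sketch of that workaround, assuming the bucket has already been mounted via s3fs at a hypothetical path such as /mnt/minio-bucket on every CrateDB node:

    -- Sketch only: with the bucket mounted locally through s3fs, plain file:// URIs work
    COPY quotes TO DIRECTORY 'file:///mnt/minio-bucket/quotes/';
    COPY quotes FROM 'file:///mnt/minio-bucket/quotes/*';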

In addition to the above, there are other S3-compatible storage systems such as Nutanix Object Storage, which I'm using at one of my clients (so far it works as expected with utilities such as s5cmd and Cyberduck).

It would therefore be very good if CrateDB provided enough flexibility to work with different S3-compatible object storage systems.

2 reactions
jeeminso commented, Feb 2, 2022

Closing this issue; all the necessary changes are now merged.

The scope of this issue was to delegate the provided S3 endpoints to the AWS SDK. There could still be problems between the SDK and the S3-compatible storage, e.g. #12094. So, please try this feature with the latest nightly build.

The latest nightly build can be found at https://cdn.crate.io/downloads/releases/nightly/ and the steps to install and run it are at https://crate.io/docs/crate/tutorials/en/latest/install.html#ad-hoc-unix-macos-windows. There are also a few sample COPY FROM / TO statements, here and here.

Thank you!

Read more comments on GitHub.

Top Results From Across the Web

  • Virtual hosting of buckets - Amazon Simple Storage Service
    Virtual hosting is the practice of serving multiple websites from a single web server. One way to differentiate sites in your Amazon S3...
  • s3cmd S3 Endpoint and DNS-Style bucket is missing on ...
    Got it fixed due to older version on the s3cmd 1.6.1, as done upgrade to s3cmd 2.02.
  • How to access bucket using s3cmd? - WEkEO
    Access to the bucket can be realized with the help of various tools that ... Region: US S3 Endpoint: <dpi_url> DNS-style bucket+hostname:port template...
  • How to use S3cmd to manage your Object Storage - UpCloud
    UpCloud Object Storage is fully S3-compliant meaning any existing S3 ... First, create a new bucket, then copy the example.txt into it.
  • Amazon S3 Tools: S3cmd Usage
    S3cmd command line usage, options and commands. S3cmd is a tool for managing objects in Amazon S3 storage. It allows for making and...
