No way to specify wildcard Origin on resumable upload initiate session
Following what was described in issue #2755, I would like to be able to add a wildcard to UploadObjectOptions.Origin. To be clear, I don't know whether this is a bug report or a feature request.
Environment details
- .NET Core version: 3.1.200
- Google.Cloud.Storage.V1 - 2.5.0 and 3.0.0-beta03
My situation is this:
- I have an API deployed on GAE. This API is used to process huge files (around 4 GB each).
- I have an app, also deployed on GAE, that consumes the API.
- When the app needs to upload a file, it gets a resumable upload URI from the API.
- Due to the XMLHttpRequest file-size limitation (around 80 MB), it uploads in chunks of 8 MB (similar to what Google Cloud Storage does when uploading directly to it).
What is happening is that on the last chunk, when the received status is 200, I get the following error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://storage.googleapis.com/upload/storage/v1/b/… (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
On the API, I am creating the URI like this:
var options = new UploadObjectOptions {
    PredefinedAcl = PredefinedObjectAcl.PublicRead,
    Origin = "*"
};

// Create a temporary uploader so the upload session can be manually initiated without actually uploading.
var tempUploader = storageClient.CreateObjectUploader(
    bucketName,
    filePath,
    contentType,
    new MemoryStream(),
    options);
Uri uri = await tempUploader.InitiateSessionAsync();
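The API returns this URI to the app, which then performs the chunked uploads shown in the JavaScript further down.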
These are the response headers when the status is 308:
HTTP/2 308 Permanent Redirect
content-type: text/plain; charset=utf-8
x-guploader-uploadid: [OMITTED]
range: bytes=0-41943039
x-range-md5: aec2f597fb80999005874b9e79b39ea3
access-control-allow-origin: http://localhost:8080
access-control-allow-credentials: true
access-control-expose-headers: Access-Control-Allow-Credentials, Access-Control-Allow-Origin, Access-Control-Expose-Headers, Content-Length, Content-Type, Date, Range, Server, Transfer-Encoding, X-GUploader-UploadID, X-Google-Trace, X-Range-MD5
content-length: 0
date: Thu, 23 Apr 2020 08:24:08 GMT
server: UploadServer
alt-svc: h3-Q050=":443"; ma=2592000, h3-Q049=":443"; ma=2592000, h3-Q048=":443"; ma=2592000, h3-Q046=":443"; ma=2592000, h3-Q043=":443"; ma=2592000, h3-T050=":443"; ma=2592000
X-Firefox-Spdy: h2
This is the last response I get, when the status is 200 and the error happens:
HTTP/2 200 OK
x-guploader-uploadid: [OMITTED]
etag: CIe+pMmP/ugCEAE=
content-type: application/json; charset=UTF-8
date: Thu, 23 Apr 2020 08:24:11 GMT
vary: Origin
vary: X-Origin
cache-control: no-cache, no-store, max-age=0, must-revalidate
expires: Mon, 01 Jan 1990 00:00:00 GMT
pragma: no-cache
content-length: 2745
server: UploadServer
alt-svc: h3-Q050=":443"; ma=2592000, h3-Q049=":443"; ma=2592000, h3-Q048=":443"; ma=2592000, h3-Q046=":443"; ma=2592000, h3-Q043=":443"; ma=2592000, h3-T050=":443"; ma=2592000
X-Firefox-Spdy: h2
Note that this last response does not include the access-control-allow-origin header.
The behaviour is the same if I don't set anything on UploadObjectOptions.Origin. However, if I set it directly to http://localhost:8080, it works fine.
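In other words, only an exact origin is honoured. A sketch, identical to the code above except for the Origin value:

var options = new UploadObjectOptions {
    PredefinedAcl = PredefinedObjectAcl.PublicRead,
    // An exact origin works: it is echoed back in Access-Control-Allow-Origin.
    Origin = "http://localhost:8080"
};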
This is my JavaScript code:
const xhr = new XMLHttpRequest();
xhr.open("PUT", uploadUri, true);
xhr.setRequestHeader('Content-Type', `${contentType}`);
// Content-Range uses inclusive byte offsets: bytes <first>-<last>/<total>.
xhr.setRequestHeader('Content-Range', `bytes ${startBytes}-${endBytes - 1}/${totalBytes}`);
xhr.send(slicedFile);
My problem is that I don't have a single origin; I have multiple origins. My current workaround is to have the app send its origin to the API, which then sets Origin to that value. This is ugly.
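For reference, the workaround looks roughly like this (the origin parameter and the CreateUploadUriAsync wrapper are illustrative names of mine, not library API):

public async Task<Uri> CreateUploadUriAsync(
    string bucketName, string filePath, string contentType, string origin)
{
    var options = new UploadObjectOptions {
        PredefinedAcl = PredefinedObjectAcl.PublicRead,
        // Forwarded from the calling app instead of a wildcard.
        Origin = origin
    };
    var tempUploader = storageClient.CreateObjectUploader(
        bucketName, filePath, contentType, new MemoryStream(), options);
    return await tempUploader.InitiateSessionAsync();
}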
I would like to be able to accept any origin using a wildcard, just as I already defined in the bucket CORS configuration (Set Bucket CORS); a sketch of that configuration follows.
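For comparison, this is roughly how the wildcard is set at the bucket level with the same library (a sketch; the methods, response headers, and max age shown are just examples):

// Bucket and Bucket.CorsData come from Google.Apis.Storage.v1.Data.
var bucket = storageClient.GetBucket(bucketName);
bucket.Cors = new List<Bucket.CorsData>
{
    new Bucket.CorsData
    {
        Origin = new[] { "*" }, // a wildcard is accepted here
        Method = new[] { "GET", "PUT", "OPTIONS" },
        ResponseHeader = new[] { "Content-Type", "Content-Range" },
        MaxAgeSeconds = 3600
    }
};
storageClient.UpdateBucket(bucket);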
If you need any further information, let me know.
Comments
I would prefer not to do that in the library documentation, as that means it would become inaccurate if the team accepts your feature request. I think it would be better to ask for the service team to update the service documentation if they don’t want to implement the feature.
I’ll have a closer look at this when I get a chance, but it’s not clear to me whether this is a client library issue or a Storage service issue. Are you able to capture the requests (from .NET) as well as the responses to see if we’re sending the headers you expect to send? If the library is behaving as expected, we’ll need to pass this over to the Storage team - I’m afraid the engineers working in this repo don’t have in-depth knowledge of every API represented. But obviously if it is a client library issue, we should be able to fix that 😃