Unexpected timeout using BlobClient.Upload
Describe the bug
BlobClient.Upload throws an error after 100 secs when uploading a large file on a slow connection (file size vs. connection speed should be chosen so that the upload takes longer than 100 secs). This error also kicks in even if I set BlobClientOptions.Retry.NetworkTimeout higher than 100 secs, yet the value is respected when the timeout is set lower, e.g. to 5 secs!?
Expected behavior
No exception and no unexpected timeout after 100 secs, or at least the option to set this timeout higher than 100 secs.
Actual behavior (include Exception or Stack Trace)
0.3 secs 0 bytes transferred 0.0%
0.3 secs 131072 bytes transferred 0.1%
0.4 secs 262144 bytes transferred 0.3%
0.6 secs 393216 bytes transferred 0.4%
1.2 secs 524288 bytes transferred 0.5%
...
99.1 secs 35782656 bytes transferred 35.7%
99.4 secs 35913728 bytes transferred 35.8%
99.8 secs 36044800 bytes transferred 35.9%
100.1 secs 36175872 bytes transferred 36.0%
Azure.RequestFailedException: The request was aborted: The request was canceled.
at Azure.Core.Pipeline.HttpWebRequestTransport.<ProcessInternal>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Core.Pipeline.HttpWebRequestTransport.Process(HttpMessage message)
at Azure.Core.Pipeline.HttpPipelineTransportPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.RequestActivityPolicy.<ProcessNextAsync>d__10.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Threading.Tasks.ValueTask.ThrowIfCompletedUnsuccessfully()
at Azure.Core.Pipeline.RequestActivityPolicy.<ProcessAsync>d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Core.Pipeline.RequestActivityPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.ResponseBodyPolicy.<ProcessAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Core.Pipeline.ResponseBodyPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.LoggingPolicy.<ProcessAsync>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Core.Pipeline.LoggingPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.RetryPolicy.<ProcessAsync>d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Azure.Core.Pipeline.RetryPolicy.<ProcessAsync>d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Core.Pipeline.RetryPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelinePolicy.ProcessNext(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.Process(HttpMessage message, ReadOnlyMemory`1 pipeline)
at Azure.Core.Pipeline.HttpPipeline.Send(HttpMessage message, CancellationToken cancellationToken)
at Azure.Storage.Blobs.BlobRestClient.BlockBlob.<UploadAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Threading.Tasks.ValueTask`1.get_Result()
at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1.ConfiguredValueTaskAwaiter.GetResult()
at Azure.Storage.Blobs.Specialized.BlockBlobClient.<UploadInternal>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Storage.Blobs.Specialized.BlockBlobClient.<>c__DisplayClass48_0.<<GetPartitionedUploaderBehaviors>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Storage.PartitionedUploader`2.<UploadInternal>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Storage.Blobs.BlobClient.<StagedUploadInternal>d__29.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Azure.Storage.Blobs.BlobClient.Upload(String path, BlobHttpHeaders httpHeaders, IDictionary`2 metadata, BlobRequestConditions conditions, IProgress`1 progressHandler, Nullable`1 accessTier, StorageTransferOptions transferOptions, CancellationToken cancellationToken)
at AzureBlobStorageTest2.Program.UploadFile(String filename) in C:\Users\BKlaiber\Source\Repos\AzureBlobStorageTest2\AzureBlobStorageTest2\Program.cs:line 51
To Reproduce
using System;
using System.Diagnostics;
using System.Globalization;
using System.IO;
using System.Threading;
using Azure.Storage;
using Azure.Storage.Blobs;

namespace AzureBlobStorageTest2
{
    class Program
    {
        private const string AccessKey = "/EYwq...YOUR KEY...XY5iw==";
        private const string StorageAccountName = "YOURSTORAGEACCOUNTNAME";
        private const string ContainerName = "YOURCONTAINER";
        private const string LargeFilename = "D:\\Temp\\LargeFile.dat";

        static void Main(string[] args)
        {
            Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;
            Thread.CurrentThread.CurrentUICulture = CultureInfo.InvariantCulture;
            UploadFile(LargeFilename);
            Console.ReadLine();
        }

        public static void UploadFile(string filename)
        {
            Uri containerUri = new Uri($"https://{StorageAccountName}.blob.core.windows.net/{ContainerName}");
            var key = new StorageSharedKeyCredential(StorageAccountName, AccessKey);
            //BlobServiceClient client = new BlobServiceClient(accountUri, key);
            ////var container = client.GetBlobContainerClient(ContainerName);
            var container = new BlobContainerClient(containerUri, key);
            var uploadClient2WithoutKey = container.GetBlobClient(Path.GetFileName(filename));
            var blobClientOptions = new BlobClientOptions()
            {
                Retry = { MaxRetries = 0, NetworkTimeout = TimeSpan.FromHours(5) }
            };
            var uploadClient2 = new BlobClient(uploadClient2WithoutKey.Uri, key, blobClientOptions);
            var sw = Stopwatch.StartNew();
            var totalBytes = new FileInfo(filename).Length;
            try
            {
                uploadClient2.Upload(filename, null, null, null,
                    new Progress<long>(l => Console.WriteLine($"{sw.Elapsed.TotalSeconds,5:##0.0} secs {l,10} bytes transferred {(double)l / totalBytes * 100,4:0.0}%")));
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }
    }
}
Environment: Win10 1709, VS 2019
<package id="Azure.Core" version="1.6.0" targetFramework="net472" />
<package id="Azure.Storage.Blobs" version="12.7.0" targetFramework="net472" />
<package id="Azure.Storage.Common" version="12.6.0" targetFramework="net472" />
Issue Analytics
- Created 3 years ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
@seanmcc-msft Ok, I see. Maybe the implementation can adapt the block size, assuming that someone who wants to upload a huge file will also have a fast connection. Probably not possible for streams if they don't know the final length!? Anyway, I would rather blame the 100-sec timeout than the block size. Does anyone expect a timeout of 100 secs? Unfortunately everything is fine during developer tests (low load, good connections), but problems will arise in the real world. So modifying the timeout might also be a solution.
@tg-msft, @kasobol-msft, @jaschrep-msft, @amnguye
@BerndK, this is a good question, we’ve been debating lowering the defaults on and off for some time. The reason for the high defaults is that the SDK calls Put Blob or Put Block and Put Block List for the Upload API. There is a limit on the number of blocks a Block Blob can have, and we didn’t want to limit the size of a blob that had been created from the SDK.
That being said, lots of customers have run into this issue, and we will continue discussing lowering the defaults.
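For anyone hitting this before the defaults change, here is a minimal workaround sketch (not an official fix) based on the discussion above: pass a StorageTransferOptions that lowers the single-shot threshold and block size, so the upload goes through the staged Put Block / Put Block List path and each individual request finishes well within the default network timeout even on a slow link. The property names (InitialTransferSize, MaximumTransferSize, MaximumConcurrency) are taken from Azure.Storage.Common 12.6 as I understand them, and the 1 MiB / concurrency-of-2 values are purely illustrative; verify both against the installed package and your connection speed.

using System;
using Azure.Storage;
using Azure.Storage.Blobs;

static class SmallBlockUploadSketch
{
    // Sketch only: force the staged upload path and keep each block small enough
    // to complete within the default per-request network timeout on a slow connection.
    public static void Upload(BlobClient blobClient, string filename)
    {
        var transferOptions = new StorageTransferOptions
        {
            // Content larger than InitialTransferSize is uploaded as staged blocks
            // (Put Block + Put Block List) instead of a single Put Blob request.
            InitialTransferSize = 1 * 1024 * 1024, // illustrative: 1 MiB single-shot threshold
            MaximumTransferSize = 1 * 1024 * 1024, // illustrative: 1 MiB per block
            MaximumConcurrency = 2                 // illustrative: two parallel block uploads
        };

        // Same Upload overload the repro uses; only transferOptions differs.
        blobClient.Upload(
            filename,
            httpHeaders: null,
            metadata: null,
            conditions: null,
            progressHandler: new Progress<long>(l => Console.WriteLine($"{l} bytes transferred")),
            accessTier: null,
            transferOptions: transferOptions);
    }
}

With small blocks, the per-request timeout applies to each block rather than to the whole file, at the cost of more requests and, for very large files, running sooner into the block-count limit the maintainers mention above.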