IOException with AWSBatch
I'm having some problems running the AWS Batch examples from the AWS Open Data page - https://docs.opendata.aws/genomics-workflows/aws-batch/configure-aws-batch-start/. As far as I can tell I have set the permissions correctly, but when I run the examples I get the error below. It looks like the output files aren't being written to the S3 bucket.
I'm probably missing something obvious, but I can't work out what I'm doing wrong. Would anyone have suggestions on where would be a good place to look for errors?
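As a quick sanity check on the permissions side (a diagnostic sketch only, not part of the original setup; the role ARN below is a placeholder for whatever role the Batch instances and Cromwell actually use), the IAM policy simulator can confirm whether that role is allowed to read and write the results bucket:

# Sanity check: is the role that Batch/Cromwell runs under allowed to
# read and write the results bucket? The role ARN is a placeholder.
import boto3

iam = boto3.client("iam")

resp = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/my-batch-instance-role",  # placeholder
    ActionNames=["s3:GetObject", "s3:PutObject"],
    ResourceArns=["arn:aws:s3:::concr-genomics-results/*"],
)

for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])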
Caused by: java.io.IOException: Could not read from s3://concr-genomics-results/cromwell-execution/wf_hello/b7e4cdce-ff14-4509-aec3-b226ed31043c/call-hello/hello-rc.txt: s3://s3.amazonaws.com/concr-genomics-results/cromwell-execution/wf_hello/b7e4cdce-ff14-4509-aec3-b226ed31043c/call-hello/hello-rc.txt
at cromwell.engine.io.nio.NioFlow$$anonfun$withReader$2.applyOrElse(NioFlow.scala:146)
at cromwell.engine.io.nio.NioFlow$$anonfun$withReader$2.applyOrElse(NioFlow.scala:145)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:34)
at scala.util.Failure.recoverWith(Try.scala:232)
at cromwell.engine.io.nio.NioFlow.withReader(NioFlow.scala:145)
at cromwell.engine.io.nio.NioFlow.limitFileContent(NioFlow.scala:154)
at cromwell.engine.io.nio.NioFlow.$anonfun$readAsString$1(NioFlow.scala:98)
at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:85)
at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:336)
at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:357)
at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:303)
at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.nio.file.NoSuchFileException: s3://s3.amazonaws.com/concr-genomics-results/cromwell-execution/wf_hello/b7e4cdce-ff14-4509-aec3-b226ed31043c/call-hello/hello-rc.txt
at org.lerch.s3fs.S3FileSystemProvider.newInputStream(S3FileSystemProvider.java:350)
at java.nio.file.Files.newInputStream(Files.java:152)
at better.files.File.newInputStream(File.scala:337)
at cromwell.core.path.BetterFileMethods.newInputStream(BetterFileMethods.scala:240)
at cromwell.core.path.BetterFileMethods.newInputStream$(BetterFileMethods.scala:239)
at cromwell.filesystems.s3.S3Path.newInputStream(S3PathBuilder.scala:156)
at cromwell.core.path.EvenBetterPathMethods.mediaInputStream(EvenBetterPathMethods.scala:94)
at cromwell.core.path.EvenBetterPathMethods.mediaInputStream$(EvenBetterPathMethods.scala:91)
at cromwell.filesystems.s3.S3Path.mediaInputStream(S3PathBuilder.scala:156)
at cromwell.engine.io.nio.NioFlow.$anonfun$withReader$1(NioFlow.scala:145)
at cromwell.util.TryWithResource$.$anonfun$tryWithResource$1(TryWithResource.scala:14)
at scala.util.Try$.apply(Try.scala:209)
at cromwell.util.TryWithResource$.tryWithResource(TryWithResource.scala:10)
... 14 more
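The NoSuchFileException at the bottom of the trace suggests Cromwell can reach the bucket but the hello-rc.txt object simply isn't there, i.e. the task never delivered its outputs back to S3. One way to confirm that (a diagnostic sketch; the bucket and prefix are copied from the error above) is to inspect the call directory directly:

# Diagnostic sketch: does the rc file exist, and what (if anything) did the
# call actually write under its S3 prefix? Bucket/prefix taken from the error above.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "concr-genomics-results"
prefix = "cromwell-execution/wf_hello/b7e4cdce-ff14-4509-aec3-b226ed31043c/call-hello/"

# 1. Does the rc file exist at all?
try:
    s3.head_object(Bucket=bucket, Key=prefix + "hello-rc.txt")
    print("hello-rc.txt exists")
except ClientError as err:
    print("hello-rc.txt not found:", err.response["Error"]["Code"])

# 2. What did the call actually write under its prefix?
listing = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])

If the prefix is empty, the job ran but never wrote its outputs back to S3, which points at the Batch compute environment or AMI rather than at Cromwell itself.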
Top GitHub Comments
OK, I think I got it working.
It turns out the custom AMI I created was incorrect. When making a custom AMI using the CloudFormation stacks as described here, the AMI type needs to be specified as 'cromwell' and the scratch mount point needs to be specified as
/cromwell_mount
This information is stated elsewhere, but perhaps this page should be updated to include it?
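For reference, a sketch of how those two settings might be passed when launching the custom-AMI stack with boto3. The stack name, template URL, and parameter keys ("AMIType", "ScratchMountPoint") are placeholders for illustration - check the genomics-workflows template for the actual keys; only the two values ("cromwell" and "/cromwell_mount") come from the comment above.

# Illustrative only: the stack name, template URL, and parameter keys are
# placeholders, not the actual names used by the genomics-workflows templates.
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="genomics-custom-ami",  # placeholder
    TemplateURL="https://example.com/aws-genomics-ami.template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "AMIType", "ParameterValue": "cromwell"},
        {"ParameterKey": "ScratchMountPoint", "ParameterValue": "/cromwell_mount"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)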
Because the stack has changed a lot over time, I wouldn’t assume the bug discussed in this thread is the same as your bug. I’ll provide some more detail in the thread you created.