Failed to evaluate job outputs - IOException: Could not read from s3...
While testing cromwell-36 with AWS Batch I was able to reproduce this error:
2019-02-25 09:38:52,508 cromwell-system-akka.dispatchers.engine-dispatcher-24 ERROR - WorkflowManagerActor Workflow b6b9322c-3929-4b72-9598-45d97dfb858d failed (during ExecutingWorkflowState): cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'print_nach_nachman_meuman.out': [Attempted 1 time(s)] - IOException: Could not read from s3://nrglab-cromwell-genomics/cromwell-execution/run_multiple_tests/b6b9322c-3929-4b72-9598-45d97dfb858d/call-test_cromwell_on_aws/shard-61/SingleTest.test_cromwell_on_aws/f8ecf673-ed61-4b06-b1d6-c20f7efe986e/call-print_nach_nachman_meuman/print_nach_nachman_meuman-stdout.log: Cannot access file: s3://s3.amazonaws.com/nrglab-cromwell-genomics/cromwell-execution/run_multiple_tests/b6b9322c-3929-4b72-9598-45d97dfb858d/call-test_cromwell_on_aws/shard-61/SingleTest.test_cromwell_on_aws/f8ecf673-ed61-4b06-b1d6-c20f7efe986e/call-print_nach_nachman_meuman/print_nach_nachman_meuman-stdout.log
at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:867)
The error occurs when running many sub-workflows within a single wrapping workflow. The environment is configured correctly, and the test usually passes when running fewer than 30 sub-workflows.
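Note the `[Attempted 1 time(s)]` in the log: the engine gives up on the S3 read after a single attempt. As an illustration of the kind of mitigation that helps with transient read failures under high concurrency (this is NOT Cromwell's actual code; `read_with_retries` and `flaky_read` are hypothetical names), a generic retry-with-backoff wrapper looks like:

```python
import time

def read_with_retries(read_fn, attempts=3, base_delay=0.1):
    """Call read_fn(), retrying with exponential backoff on IOError.

    read_fn is any zero-argument callable that performs the read
    (e.g. an S3 GET wrapped in a lambda) -- hypothetical here.
    """
    for attempt in range(1, attempts + 1):
        try:
            return read_fn()
        except IOError:
            if attempt == attempts:
                raise  # out of attempts, propagate the last error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a read that fails twice, then succeeds:
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("Could not read from s3://...")
    return "nach nachman meuman"

print(read_with_retries(flaky_read))  # succeeds on the third attempt
```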
Here are the workflows:
run_multiple_test.wdl

import "three_task_sequence.wdl" as SingleTest

workflow run_multiple_tests {
    scatter (i in range(30)) {
        call SingleTest.three_task_sequence {}
    }
}

three_task_sequence.wdl

workflow three_task_sequence {
    call print_nach
    call print_nach_nachman {
        input:
            previous = print_nach.out
    }
    call print_nach_nachman_meuman {
        input:
            previous = print_nach_nachman.out
    }
    output {
        Array[String] out = print_nach_nachman_meuman.out
    }
}

task print_nach {
    command {
        echo "nach"
    }
    output {
        Array[String] out = read_lines(stdout())
    }
    runtime {
        docker: "ubuntu:latest"
        maxRetries: 3
    }
}

task print_nach_nachman {
    Array[String] previous
    command {
        echo ${sep=' ' previous} " nachman"
    }
    output {
        Array[String] out = read_lines(stdout())
    }
    runtime {
        docker: "ubuntu:latest"
        maxRetries: 3
    }
}

task print_nach_nachman_meuman {
    Array[String] previous
    command {
        echo ${sep=' ' previous} " meuman"
    }
    output {
        Array[String] out = read_lines(stdout())
    }
    runtime {
        docker: "ubuntu:latest"
        maxRetries: 3
    }
}
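Every task's output is `Array[String] out = read_lines(stdout())`, which means the engine itself must fetch the task's stdout log from S3 at output-evaluation time; that is exactly the read that fails in the log above. As a rough sketch of the semantics only (this is not Cromwell's internal implementation), WDL's `read_lines` behaves like:

```python
import os
import tempfile

def read_lines(path):
    """Emulate WDL's read_lines(): return the file's lines without newlines."""
    with open(path) as f:
        return [line.rstrip("\n") for line in f]

# A local stand-in for the task's stdout log (in the real run it lives on S3):
with tempfile.NamedTemporaryFile("w", suffix="-stdout.log", delete=False) as f:
    f.write("nach nachman meuman\n")
    log_path = f.name

print(read_lines(log_path))  # ['nach nachman meuman']
os.unlink(log_path)
```

If the S3 object is not yet readable when output evaluation runs, the whole call fails even though the job itself succeeded.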
Here is the Cromwell config:
// aws.conf
include required(classpath("application"))

webservice {
    port = 8001
    interface = 0.0.0.0
}

aws {
    application-name = "cromwell"
    auths = [{
        name = "default"
        scheme = "default"
    }]
    region = "us-east-1"
}

engine {
    filesystems {
        s3 { auth = "default" }
    }
}

backend {
    default = "AWSBATCH"
    providers {
        AWSBATCH {
            actor-factory = "cromwell.backend.impl.aws.AwsBatchBackendLifecycleActorFactory"
            config {
                root = "s3://nrglab-cromwell-genomics/cromwell-execution"
                auth = "default"
                numSubmitAttempts = 3
                numCreateDefinitionAttempts = 3
                concurrent-job-limit = 100
                default-runtime-attributes {
                    queueArn: "arn:aws:batch:us-east-1:66:job-queue/GenomicsDefaultQueue"
                }
                filesystems {
                    s3 {
                        auth = "default"
                    }
                }
            }
        }
    }
}

system {
    job-rate-control {
        jobs = 1
        per = 1 second
    }
}
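For scale, the scatter above generates 30 sub-workflows x 3 tasks = 90 jobs, and with `job-rate-control` at 1 job per second, submissions alone take at least ~90 seconds of wall time (task chaining spreads them out further); `concurrent-job-limit = 100` never throttles this run. A quick sanity check of that arithmetic, using only the numbers from the WDL and config above:

```python
# Values taken from the workflow and config above.
subworkflows = 30          # scatter (i in range(30))
tasks_per_subworkflow = 3  # print_nach, print_nach_nachman, print_nach_nachman_meuman
jobs_per_second = 1        # system.job-rate-control: 1 job per 1 second
concurrent_job_limit = 100

total_jobs = subworkflows * tasks_per_subworkflow
min_submit_seconds = total_jobs / jobs_per_second

print(total_jobs)                           # 90
print(min_submit_seconds)                   # 90.0
print(total_jobs <= concurrent_job_limit)   # True
```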
I would appreciate help on this. Has Cromwell ever been tested with many parallel sub-workflows running on AWS?
Thanks!
Issue Analytics
- Created 5 years ago
- Comments: 23 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Still getting this error today.
Hmmm, still stuck on this - any updates from your guys’ end? I tried cloning and resubmitting, still getting the same error.