Segmentation fault in Picard MarkDuplicates on an RNA-seq BAM
See the original GitHub issue: segmentation fault in Picard MarkDuplicates on an RNA-seq BAM.
Bug Report
Affected tool(s)
MarkDuplicates
Affected version(s)
- 2.14.1 and the latest 2.15.0 are affected
- 2.10.3 is OK
Description
Picard MarkDuplicates crashes with a segmentation fault when run on an RNA-seq BAM. Although we no longer use the duplicate-removed BAM downstream, we still run MarkDuplicates to see how many duplicates the file contains.
Steps to reproduce
[Tue Nov 14 10:52:58 CST 2017] MarkDuplicates INPUT=[/biowrk/refseq.Homo_sapiens.108/bam.star/SAMEA1968189/Aligned.sortedByCoord.out.bam] OUTPUT=/biowrk/refseq.Homo_sapiens.108/bam.star/SAMEA1968189/md.bam METRICS_FILE=/biowrk/refseq.Homo_sapiens.108/bam.star/SAMEA1968189/md.metrics REMOVE_DUPLICATES=true VERBOSITY=WARNING COMPRESSION_LEVEL=0 CREATE_INDEX=true MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP=50000 MAX_FILE_HANDLES_FOR_READ_ENDS_MAP=8000 SORTING_COLLECTION_SIZE_RATIO=0.25 TAG_DUPLICATE_SET_MEMBERS=false REMOVE_SEQUENCING_DUPLICATES=false TAGGING_POLICY=DontTag CLEAR_DT=true ADD_PG_TAG_TO_READS=true ASSUME_SORTED=false DUPLICATE_SCORING_STRATEGY=SUM_OF_BASE_QUALITIES PROGRAM_RECORD_ID=MarkDuplicates PROGRAM_GROUP_NAME=MarkDuplicates READ_NAME_REGEX=<optimized capture of last three ':' separated fields as numeric values> OPTICAL_DUPLICATE_PIXEL_DISTANCE=100 MAX_OPTICAL_DUPLICATE_SET_SIZE=300000 QUIET=false VALIDATION_STRINGENCY=STRICT MAX_RECORDS_IN_RAM=500000 CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false
[Tue Nov 14 10:52:58 CST 2017] Executing as root@T620 on Linux 3.10.0-693.5.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 1.8.0_151-b12; Deflater: Intel; Inflater: Intel; Picard version: 2.15.0-SNAPSHOT
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f8b2a6da58b, pid=29322, tid=0x00007f830d745700
#
# JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build 1.8.0_151-b12)
# Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode linux-amd64 )
# Problematic frame:
# V [libjvm.so+0x62e58b]
#
# Core dump written. Default location: /usr/hpc-bio/bam.align/core or core.29322
#
# An error report file with more information is saved as:
# /usr/hpc-bio/bam.align/hs_err_pid29322.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
Issue Analytics
- Created 6 years ago
- Comments: 6 (1 by maintainers)
Top GitHub Comments
@lbergelson
- USE_JDK_DEFLATER=true USE_JDK_INFLATER=true => OK
- USE_JDK_DEFLATER=true => OK
- USE_JDK_INFLATER=true => still segfaults
In my test, this is fixed in the Picard 2.16.0 release.
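Based on the comments above, a minimal workaround sketch for affected versions is to pass USE_JDK_DEFLATER=true and USE_JDK_INFLATER=true so Picard bypasses the Intel native compression libraries. The jar path and BAM file names below are hypothetical placeholders, not taken from the original report; the script only prints the command rather than executing it, since picard.jar may not be present.

```shell
#!/bin/sh
# Workaround sketch: force the pure-Java (JDK) deflater and inflater so the
# Intel native deflater/inflater implicated in the crash are not loaded.
# PICARD_JAR, IN_BAM, and OUT_BAM are placeholder names.
PICARD_JAR=picard.jar
IN_BAM=Aligned.sortedByCoord.out.bam
OUT_BAM=md.bam

CMD="java -jar $PICARD_JAR MarkDuplicates \
  INPUT=$IN_BAM OUTPUT=$OUT_BAM METRICS_FILE=md.metrics \
  REMOVE_DUPLICATES=true \
  USE_JDK_DEFLATER=true USE_JDK_INFLATER=true"

# Print the assembled command line instead of running it.
echo "$CMD"
```

Upgrading to Picard 2.16.0 or later, where the commenter reports the crash is fixed, avoids the need for this flag.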