What steps will reproduce the problem?
1. # CHIMERASCAN-0.4.5 setup
export PYTHONPATH=/shared/app/chimerascan-0.4.5/lib64/python2.6/site-packages:$PYTHONPATH
export PATH=/shared/app/chimerascan-0.4.5/chimerascan/:$PATH
python /shared/app/chimerascan-0.4.5/chimerascan/chimerascan_run.py -p 8 \
    /shared/app/BOWTIE/indexes/CHIMERASCAN_INDEXES $a1 $a2 \
    /home/hazards/Project_DW/Sample_99/Nov14_CHIMERASCAN_OUT
2. $a1 and $a2 are shell variables pointing to the paired-end FASTQ files for the sample.
3. The input is human lung-derived RNA-Seq FASTQ data generated by an Illumina sequencer; the run is submitted as an LSF batch job (see the submission sketch after this list).
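Since the .lsbatch path in the error below shows the run goes through LSF, here is a hedged sketch of one way the job could be submitted with an explicit memory reservation, so the scheduler both places it on a 24 GB node and reserves that memory for it. The wrapper script name and memory values are placeholders, and the units for -M and rusage[mem=] are site-configurable in LSF, so they may need adjusting:

# Hypothetical submission wrapper; run_chimerascan.sh would contain the exports
# and chimerascan_run.py command from step 1 above.
# -n 8 matches the -p 8 thread count; span[hosts=1] keeps all slots on one node;
# rusage[mem=24000] (MB by default) reserves ~24 GB; -M sets a per-process
# memory limit (KB by default in LSF 7.x).
bsub -n 8 -R "span[hosts=1] rusage[mem=24000]" -M 24000000 \
     -o chimerascan.%J.out -e chimerascan.%J.err \
     sh run_chimerascan.sh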
What is the expected output? What do you see instead?
chimeras.bedpe was expected
Here's what I see:
.
.
.
2013-11-15 09:18:55,808 - root - WARNING - Could not extract sequence of length >101 from 3' partner at gene_uc002ect.2:0-101, only retrieved sequence of length 94
2013-11-15 09:18:56,438 - root - WARNING - Could not extract sequence of length >101 from 3' partner at gene_uc010vft.1:0-101, only retrieved sequence of length 90
2013-11-15 09:18:56,771 - root - WARNING - Could not extract sequence of length >101 from 5' partner at gene_uc011mtj.1:0-96, only retrieved sequence of length 96
2013-11-15 09:19:06,915 - root - WARNING - Could not extract sequence of length >101 from 3' partner at gene_uc010zpm.1:1741-1842, only retrieved sequence of length 99
2013-11-15 09:19:09,269 - root - INFO - Filtering encompassing chimeras with few supporting reads
2013-11-15 09:57:34,874 - root - INFO - Extracting breakpoint sequences from chimeras
2013-11-15 10:16:30,598 - root - INFO - Building bowtie index of breakpoint spanning sequences
2013-11-15 10:35:32,445 - root - INFO - Extracting encompassing reads that may extend past breakpoints
2013-11-15 10:51:26,686 - root - INFO - Separating unmapped and single-mapping reads that may span breakpoints
2013-11-15 11:05:38,719 - root - INFO - Extracting single-mapped reads that may span breakpoints
/home/hazards/.lsbatch/1384447987.22825.shell: line 42: 10199 Killed    python /shared/app/chimerascan-0.4.5/chimerascan/chimerascan_run.py -p 8 /shared/app/BOWTIE/indexes/CHIMERASCAN_INDEXES
What version of the product are you using? On what operating system?
ChimeraScan 0.4.5
Bowtie 1.0.0
Python 2.6.6
RHEL 6.2 Linux
LSF 7.2
Please provide any additional information below.
The log and output files below suggest that the run progresses through isize_dist.txt, breakpoint_bowtie_index.log, and tmp_singlemap_seqs.txt (which is left empty, 0 bytes):
-rw-r--r-- 1 hazards hazards 1446 Nov 15 07:23 runconfig.xml
-rw-r--r-- 1 hazards hazards 5449339058 Nov 15 09:33 aligned_reads.bam
-rw-r--r-- 1 hazards hazards 7386269125 Nov 15 10:45 sorted_aligned_reads.bam
-rw-r--r-- 1 hazards hazards 9578704 Nov 15 10:48 sorted_aligned_reads.bam.bai
-rw-r--r-- 1 hazards hazards 6613 Nov 15 10:48 isize_dist.txt
drwxr-xr-x 2 hazards hazards 104 Nov 16 20:22 log
drwxr-xr-x 2 hazards hazards 8192 Nov 16 21:38 tmp
Sample_86/Nov14_CHIMERASCAN_OUT/log:
total 24
-rw-r--r-- 1 hazards hazards 465 Nov 15 09:33 bowtie_alignment.log
-rw-r--r-- 1 hazards hazards 457 Nov 15 15:07 bowtie_trimmed_realignment.log
-rw-r--r-- 1 hazards hazards 12663 Nov 16 20:53 breakpoint_bowtie_index.log
Sample_86/Nov14_CHIMERASCAN_OUT/tmp:
total 110869964
-rw-r--r-- 1 hazards hazards 5797564944 Nov 15 08:03 reads_2.fq
-rw-r--r-- 1 hazards hazards 5797564944 Nov 15 08:03 reads_1.fq
-rw-r--r-- 1 hazards hazards 1792743679 Nov 15 09:32 unaligned_1.fq
-rw-r--r-- 1 hazards hazards 1792743679 Nov 15 09:32 unaligned_2.fq
-rw-r--r-- 1 hazards hazards 193024 Nov 15 09:32 maxmulti_1.fq
-rw-r--r-- 1 hazards hazards 193024 Nov 15 09:32 maxmulti_2.fq
-rw-r--r-- 1 hazards hazards 4876081081 Nov 15 15:07 realigned_reads.bam
-rw-r--r-- 1 hazards hazards 3120178382 Nov 15 23:38 gene_paired_reads.bam
-rw-r--r-- 1 hazards hazards 2072258813 Nov 15 23:38 genome_paired_reads.bam
-rw-r--r-- 1 hazards hazards 635025773 Nov 15 23:38 unmapped_reads.bam
-rw-r--r-- 1 hazards hazards 934624 Nov 15 23:38 complex_reads.bam
-rw-r--r-- 1 hazards hazards 295327399 Nov 15 23:45 discordant_reads.bedpe
-rw-r--r-- 1 hazards hazards 295327399 Nov 15 23:45 discordant_reads.srt.bedpe
-rw-r--r-- 1 hazards hazards 49046649769 Nov 16 17:34 encompassing_chimeras.txt
-rw-r--r-- 1 hazards hazards 17552465613 Nov 16 19:26 encompassing_chimeras.filtered.txt
-rw-r--r-- 1 hazards hazards 17552465613 Nov 16 19:47 encompassing_chimeras.breakpoint_sorted.txt
-rw-r--r-- 1 hazards hazards 603862877 Nov 16 20:22 breakpoints.fa
-rw-r--r-- 1 hazards hazards 652818329 Nov 16 20:22 breakpoints.txt
-rw-r--r-- 1 hazards hazards 140401371 Nov 16 20:23 breakpoints.4.ebwt
-rw-r--r-- 1 hazards hazards 25023302 Nov 16 20:23 breakpoints.3.ebwt
-rw-r--r-- 1 hazards hazards 234699054 Nov 16 20:39 breakpoints.1.ebwt
-rw-r--r-- 1 hazards hazards 70200692 Nov 16 20:39 breakpoints.2.ebwt
-rw-r--r-- 1 hazards hazards 234699054 Nov 16 20:53 breakpoints.rev.1.ebwt
-rw-r--r-- 1 hazards hazards 70200692 Nov 16 20:53 breakpoints.rev.2.ebwt
-rw-r--r-- 1 hazards hazards 3878595 Nov 16 21:27 encomp_spanning_reads.fq
-rw-r--r-- 1 hazards hazards 212324326 Nov 16 21:32 unaligned_spanning_reads.fq
-rw-r--r-- 1 hazards hazards 649644709 Nov 16 21:38 singlemap_reads.srt.bam
-rw-r--r-- 1 hazards hazards 5238976 Nov 16 21:38 singlemap_reads.srt.bam.bai
-rw-r--r-- 1 hazards hazards 0 Nov 16 21:38 tmp_singlemap_seqs.txt
I run the program on a local research cluster whose nodes have either 16 GB or 24 GB of RAM. Restricting the job to the 24 GB machines has no effect; the program still runs and then gets killed at the same point. I would assume there is a memory or size limitation, but some samples with input files as large as 5-7 GB have completed while others fail as described. Re-running the failed jobs reproduces the failure.
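The message alone does not say whether the kernel OOM killer or an LSF limit terminated the process; a hedged sketch of how one might check, assuming standard RHEL and LSF 7.2 tooling and a placeholder job ID:

# On the execution host: look for kernel OOM-killer messages around the failure time.
dmesg | grep -i -E 'out of memory|killed process'

# From LSF: job history and accounting report the termination reason
# (e.g. TERM_MEMLIMIT) and the maximum memory the job actually used.
# <jobid> is a placeholder for the failed job's ID.
bhist -l <jobid>
bacct -l <jobid>

If the reported maximum memory approaches the 24 GB node limit at the "Extracting single-mapped reads that may span breakpoints" step, that would support the size-limitation hypothesis.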
Original issue reported on code.google.com by [email protected] on 18 Nov 2013 at 9:59