Hi all,
For some input BED files the peaks tool works nicely. For others (all formatted the same way) I get the error below. I am running this with 16 CPUs and 60 GB of available memory, which seemed like it should be sufficient. Additionally, it takes ~17-20 hours to get to the point where it throws the error. Any thoughts or help would be appreciated.
Executing the following command: iCount peaks gencode.v21.annotation.segment.gtf input.bed iCountPeaks.bed --scores iCountPeaks_scores.tsv
Input parameters for function 'run' in iCount.analysis.peaks
annotation: gencode.v21.annotation.segment.gtf
sites: input.bed
peaks: iCountPeaks.bed
scores: iCountPeaks_scores.tsv
features: None
group_by: gene_id
merge_features: False
half_window: 3
fdr: 0.05
perms: 100
rnd_seed: 42
report_progress: False
Loading annotation file...
60155 out of 2581788 annotation records will be used (2521633 skipped).
Loading cross-links file...
Calculating intersection between annotation and cross-link file...
Processing intersections...
Traceback (most recent call last):
File "/share/PI/bertozzi/users/raflynn/tools/iCount/iCount/cli.py", line 436, in main
result_object = func(**args)
File "/share/PI/bertozzi/users/raflynn/tools/iCount/iCount/analysis/peaks.py", line 515, in run
processed = _process_group(hits, group_size, half_window, perms)
File "/share/PI/bertozzi/users/raflynn/tools/iCount/iCount/analysis/peaks.py", line 371, in _process_group
random_ = get_avg_rnd_distrib(group_size, sum_scores, half_window, perms=perms)
File "/share/PI/bertozzi/users/raflynn/tools/iCount/iCount/analysis/peaks.py", line 269, in get_avg_rnd_distrib
rnd_ps = numpy.zeros((perms, total_hits + 1))
MemoryError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "iCount", line 11, in <module>
load_entry_point('iCount', 'console_scripts', 'iCount')()
File "/share/PI/bertozzi/users/raflynn/tools/iCount/iCount/cli.py", line 444, in main
exception_message = exception.args[0]
IndexError: tuple index out of range
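For context on where the memory goes: the failing line allocates a 2-D array of shape (perms, total_hits + 1), so a single gene group with a very large number of cross-link hits can need tens of gigabytes on its own. A minimal sketch of the arithmetic, assuming a hypothetical total_hits value (the real value depends on the group being processed):

```python
import numpy as np

# perms comes from the run parameters above (100); total_hits is the number
# of cross-link hits in the current group -- the value below is only a
# hypothetical illustration, not taken from this run.
perms = 100
total_hits = 50_000_000  # hypothetical very large group

bytes_needed = perms * (total_hits + 1) * np.dtype(np.float64).itemsize
print(f"{bytes_needed / 1024**3:.1f} GiB")  # ~37 GiB for this example
```

The secondary IndexError in cli.py appears to be a side effect of the MemoryError being raised with an empty args tuple, so exception.args[0] fails while the original error is being reported.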