
better handle output #1560

@skorzennik

Description


The way our users run SPAdes creates millions of temporary(?) files, which causes problems on some filesystems (like GPFS) that have built-in limits on the number of files per directory. Deleting these temp files when the analysis is done is also time- and resource-consuming. This seems like a design flaw to me.

1- Are you considering changing the software to avoid creating such a huge number of files, and/or to delete files that are no longer needed as the analysis proceeds?

2- Is there a way to tell SPAdes to clean up this mess when it is done?
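For context, here is the kind of workaround we resort to today. It is only a sketch: the function name is mine, and the assumption that the temporaries sit in `tmp/` subdirectories under the output directory is inferred from our runs, not from SPAdes documentation.

```shell
# Hypothetical cleanup helper (spades_cleanup is my name, not a SPAdes tool).
# Assumes, based on our runs, that temporaries live in tmp/ subdirectories
# of the SPAdes output directory; adjust the pattern if your layout differs.
spades_cleanup() {
  outdir=$1
  # Report the file count first, to gauge the scale of the problem.
  find "$outdir" -type f | wc -l
  # Remove the tmp/ directories; -prune stops find from descending into a
  # directory once it is scheduled for removal, and -exec ... + batches the
  # rm calls instead of spawning one process per file.
  find "$outdir" -type d -name tmp -prune -exec rm -rf {} +
}
```

Even with batching, this traversal is slow on GPFS at millions of files, which is why a built-in option to clean up (or not create) these files would be much better.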

Thanks,
Sylvain Korzennik, Ph.D. HPC analyst, Smithsonian Institution.

Labels: incomplete (Lacking logs, reproducers, etc.)