Workflow Type: Nextflow
Maturity: Stable

Introduction

ebi-metagenomics/biosiftr is a bioinformatics pipeline that generates taxonomic and functional profiles for low-yield (shallow shotgun: < 10 M reads) short raw reads, using MGnify biome-specific genome catalogues as a reference.

The biome selection includes all the biomes available in the MGnify genome catalogues.

The main sections of the pipeline include the following steps:

  1. Raw-reads quality control (fastp)
  2. Decontamination of HQ reads against human, phiX, and host genomes (bwa-mem2)
  3. QC report of decontaminated reads (FastQC)
  4. Integrated quality report of reads before and after decontamination (MultiQC)
  5. Mapping of HQ clean reads with Sourmash and, optionally, bwa-mem2
  6. Taxonomic profile generation
  7. Functional profile inference

The final output includes a species relative abundance table, Pfam and KEGG Orthologs (KO) count tables, a KEGG modules completeness table, and optional DRAM-style visuals. In addition, the pipeline integrates the taxonomic and functional tables of all the samples in the input samplesheet.

Installation

This workflow was built using Nextflow and follows nf-core good practices. It is containerised, so users can use either Docker or Apptainer/Singularity to run the pipeline. At the moment, it doesn't support Conda environments.

The pipeline requires Nextflow and a container technology such as Apptainer/Singularity or Docker.
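A minimal installation sketch (assuming a Java runtime is already available and that ~/bin is on your PATH; adapt the paths to your system) is:

    # Install Nextflow
    curl -s https://get.nextflow.io | bash
    mv nextflow ~/bin/

    # Fetch the pipeline from GitHub
    nextflow pull ebi-metagenomics/biosiftr

Docker or Apptainer/Singularity should be installed separately following their own documentation.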

Required Reference Databases

The first time you run the pipeline, it will download the required MGnify genome catalogue reference files and the human_phiX bwa-mem2 index. If you select a different host for decontamination, you must provide the index yourself.

Running the pipeline with bwa-mem2 is optional. To use this option, set --download_bwa true so the corresponding index is downloaded. Depending on the biome, this database can occupy considerable storage on your system.
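For example (a sketch only; the remaining required arguments are described under Usage below), the bwa-mem2 reference can be fetched and used on the first run by adding the relevant flags to the minimal command:

    nextflow run ebi-metagenomics/biosiftr \
        --biome <biome> \
        --input samplesheet.csv \
        --dbs <path_to_databases> \
        --decontamination_indexes <path_to_decontamination_indexes> \
        --download_bwa true \
        --run_bwa true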

In addition, instructions to generate the databases from custom catalogues can be found in the BioSIFTR paper's repository.

Usage

Prepare a samplesheet with your input data that looks as follows:

samplesheet.csv:

sample,fastq_1,fastq_2
paired_sample,/PATH/test_R1.fq.gz,/PATH/test_R2.fq.gz
single_sample,/PATH/test.fq.gz

Each row represents a fastq file (single-end) or a pair of fastq files (paired-end), where 'sample' is a unique identifier for each dataset, 'fastq_1' is the path to the first FASTQ file, and 'fastq_2' is the path to the second FASTQ file for paired-end data.
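If your files follow a consistent naming scheme, the samplesheet can be generated with a small shell loop. This is only a sketch and assumes paired files named <sample>_R1.fq.gz and <sample>_R2.fq.gz under a reads/ directory:

    # Write the header, then one row per read pair
    echo "sample,fastq_1,fastq_2" > samplesheet.csv
    for r1 in reads/*_R1.fq.gz; do
        sample=$(basename "$r1" _R1.fq.gz)
        echo "${sample},${r1},${r1%_R1.fq.gz}_R2.fq.gz" >> samplesheet.csv
    done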

Now you can run the pipeline with the minimum set of arguments:

nextflow run ebi-metagenomics/biosiftr \
   --biome <biome> \
   --input samplesheet.csv \
   --outdir <outdir> \
   --dbs <path_to_databases> \
   --decontamination_indexes <path_to_decontamination_indexes>

The --outdir option defaults to `results`. The central location for the databases can be set in the config file.
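Because Nextflow maps each --option on the command line to params.option, a small custom config can pin the database locations so they do not have to be typed on every run. A minimal sketch (the file name and paths below are only examples):

    // dbs.config: central database locations for this installation
    params {
        dbs                     = '/data/biosiftr_dbs'
        decontamination_indexes = '/data/decontamination_indexes'
    }

It can then be supplied with nextflow run ebi-metagenomics/biosiftr -c dbs.config --biome <biome> --input samplesheet.csv.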

Optional arguments include:

--run_bwa    default = `false`  # Generate results using bwa-mem2 in addition to Sourmash
--core_mode  default = `false`  # Use core functions instead of pangenome functions
--run_dram   default = `false`  # Generate DRAM results

Use --core_mode true for large catalogues like human-gut to avoid over-prediction caused by the large number of accessory genes in the pangenome. The Nextflow option -profile can be used to select a configuration suitable for your computational resources; profile files can be added to the config directory (see the sketch below). The Nextflow option -resume can be used to re-run the pipeline from the last successfully finished step.
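As an illustration, a profile file added to the config directory could look like the sketch below; the executor, queue, and resource values are assumptions to adapt to your own cluster:

    // conf/my_cluster.config: example compute profile
    profiles {
        my_cluster {
            process.executor       = 'slurm'
            process.queue          = 'standard'
            process.cpus           = 4
            process.memory         = '16 GB'
            singularity.enabled    = true
            singularity.autoMounts = true
        }
    }

It would then be selected with -profile my_cluster, and an interrupted run continued from the last completed step with -resume.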

Available biomes

This can be any of the MGnify catalogues for which shallow-mapping databases are currently available:

Biome                Catalogue version
chicken-gut          v1.0.1
cow-rumen            v1.0.1
human-gut            v2.0.2 ⚠️
human-oral           v1.0.1
human-vaginal        v1.0
honeybee-gut         v1.0.1
marine               v2.0
mouse-gut            v1.0
non-model-fish-gut   v2.0
pig-gut              v1.0
sheep-rumen          v1.0
zebrafish-fecal      v1.0

⚠️ Note for human-gut:

The human-gut shallow-mapping database was created manually by re-running Panaroo to reconstruct the pangenomes. This is likely to have caused discrepancies in the pangenomes, so please bear that in mind.

Test

To test the installed tool with your downloaded databases, you can run the pipeline on the small test dataset. Even if there are no hits with the biome you are interested in, the pipeline should finish successfully. Add the -profile option if you have set up a config profile for your compute resources.

cd biosiftr/tests
nextflow run ../main.nf \
    --input test_samplesheet.csv \
    --biome <biome> \
    --dbs <path_to_databases> \
    --decontamination_indexes <path_to_decontamination_indexes>

Credits

The ebi-metagenomics/biosiftr pipeline was originally written by @Ales-ibt.

We thank the following people for their extensive assistance in the development of this pipeline: @mberacochea.

Version History

v1.2.0 (earliest), created 17th Jun 2025 at 15:26 by Alejandra Escobar (frozen at commit 89f88f4)
  - Delete workflows/shallowmapping.nf
Citation

Escobar, A. (2025). BioSIFTR. WorkflowHub. https://doi.org/10.48546/WORKFLOWHUB.WORKFLOW.1735.1