MGnify genomes analysis pipeline
Version 1

Workflow Type: Nextflow

A pipeline to perform taxonomic and functional annotation and to generate a catalogue from a set of isolate and/or metagenome-assembled genomes (MAGs), using the workflow described in the following publication:

Gurbich TA, Almeida A, Beracochea M, Burdett T, Burgin J, Cochrane G, Raj S, Richardson L, Rogers AB, Sakharova E, Salazar GA and Finn RD. (2023) MGnify Genomes: A Resource for Biome-specific Microbial Genome Catalogues. J Mol Biol. doi:

Detailed information about existing MGnify catalogues is available on the MGnify website.

Tools used in the pipeline

| Tool/Database | Version | Purpose |
| --- | --- | --- |
| CheckM | 1.1.3 | Determining genome quality |
| dRep | 3.2.2 | Genome clustering |
| Mash | 2.3 | Sketch for the catalogue; placement of genomes into clusters (update only); strain tree |
| GUNC | 1.0.3 | Quality control |
| GTDB-Tk | 2.1.0 | Assigning taxonomy; generating alignments |
| GTDB | r207_v2 | Database for GTDB-Tk |
| Prokka | 1.14.6 | Protein annotation |
| IQ-TREE | 2 | Generating a phylogenetic tree |
| Kraken 2 | 2.1.2 | Generating a Kraken database |
| Bracken | 2.6.2 | Generating a Bracken database |
| MMseqs2 | 13.45111 | Generating a protein catalogue |
| eggNOG-mapper | 2.1.3 | Protein annotation (eggNOG, KEGG, COG, CAZy) |
| InterProScan | 5.57-90.0 | Protein annotation (InterPro, Pfam) |
| CRISPRCasFinder | 4.3.2 | Annotation of CRISPR arrays |
| AMRFinderPlus | 3.11.4 | Antimicrobial resistance gene annotation; virulence factor, biocide, heat, acid, and metal resistance gene annotation |
| AMRFinderPlus DB | 3.11 2023-02-23.1 | Database for AMRFinderPlus |
| SanntiS | - | Biosynthetic gene cluster annotation |
| Infernal | 1.1.4 | RNA predictions |
| tRNAscan-SE | 2.0.9 | tRNA predictions |
| Rfam | 14.6 | Identification of SSU/LSU rRNA and other ncRNAs |
| Panaroo | 1.3.2 | Pan-genome computation |
| Seqtk | 1.3 | Generating a gene catalogue |
| VIRify | - | Viral sequence annotation |
| MoMofy | 1.0.0 | Mobilome annotation |
| samtools | 1.15 | FASTA indexing |



The pipeline is implemented in Nextflow.


Reference databases

The pipeline needs the following reference databases and configuration files (roughly 150 GB in total):


This pipeline requires Singularity or Docker as the container engine to run the pipeline.

The containers are hosted on BioContainers and in the project's container repository.

It's possible to build the containers from scratch using the following script:

cd containers && bash

Running the pipeline

Data preparation

  1. You need to pre-download your data to directories and make sure that the genomes are uncompressed. Scripts to fetch genomes from ENA and NCBI are provided; they need to be executed separately from the pipeline. If you have downloaded genomes from both ENA and NCBI, put them into separate folders.

  2. When genomes are fetched from ENA using the script, a CSV file with contamination and completeness statistics is also created in the same directory where the genomes are saved. If you are downloading genomes using a different approach, the CSV file needs to be created manually (each line should contain the genome accession, % completeness, and % contamination). The ENA fetching script also pre-filters genomes to satisfy the QS50 cut-off (QS = % completeness - 5 * % contamination).

  3. You will need the following information to run the pipeline:

  • catalogue name (for example, zebrafish-faecal)
  • catalogue version (for example, 1.0)
  • catalogue biome (for example, root:Host-associated:Human:Digestive system:Large intestine:Fecal)
  • min and max accession numbers to be assigned to the genomes (MGnify-specific). max - min should equal the total number of genomes (NCBI + ENA)
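The QS50 cut-off described above can be applied to a manually created stats CSV with a one-line filter. This is a sketch only: the file name `checkm.csv` and the accessions are illustrative, and it assumes the three-column layout (accession, % completeness, % contamination) with no header row.

```shell
# Example input (one line per genome): accession,completeness,contamination
printf 'GUT_GENOME000001,98.5,0.3\nGUT_GENOME000002,70.0,10.0\n' > checkm.csv

# Keep only genomes that pass QS50: completeness - 5 * contamination >= 50
awk -F',' '($2 - 5 * $3) >= 50' checkm.csv > checkm_filtered.csv
cat checkm_filtered.csv
```

Here the first genome passes (QS = 98.5 - 5 * 0.3 = 97) and the second is dropped (QS = 70 - 5 * 10 = 20).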


The pipeline is built in Nextflow and uses containers to run the software (we don't support conda at the moment). To run the pipeline, the user needs to create a profile that suits their needs; there is an ebi profile in nextflow.config that can be used as a template.
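As an illustration of what a custom profile might contain (the profile name, executor, queue, and file name below are placeholders, not the actual ebi profile), you could write a minimal site config like this:

```shell
# Write a minimal Nextflow profile to a local file (all values are
# placeholders; adapt them to your scheduler and storage).
cat > my-site.config <<'EOF'
profiles {
    my_site {
        process {
            executor = 'slurm'
            queue    = 'standard'
        }
        singularity {
            enabled    = true
            autoMounts = true
        }
    }
}
EOF
```

The file can then be passed to the pipeline with `-c my-site.config -profile my_site`.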

After downloading the databases and adjusting the config file:

nextflow run EBI-Metagenomics/genomes-pipeline -c <your-config> -profile <your-profile> \
--genome-prefix=MGYG \
--biome="root:Host-associated:Fish:Digestive system" \
--ena_genomes=<path-to-ena-genomes> \
--ena_genomes_checkm=<path-to-checkm-csv> \
--mgyg_start=0 \
--mgyg_end=10 \
--catalogue_name=zebrafish-faecal \
--catalogue_version="1.0" \
--ftp_name="zebrafish-faecal" \
--ftp_version="v1.0"
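Before launching, it can be worth sanity-checking that the accession range matches the inputs: `--mgyg_end` minus `--mgyg_start` should equal the number of genomes you downloaded. A quick count (the folder names and `.fa` extension here are hypothetical; adapt them to your layout):

```shell
# Count the uncompressed genome FASTA files from both sources; this total
# should equal --mgyg_end minus --mgyg_start.
total=$(find ena-genomes ncbi-genomes -name '*.fa' 2>/dev/null | wc -l)
echo "total genomes: ${total}"
```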


Development

Install development tools (including pre-commit hooks to run Black code formatting).

pip install -r requirements-dev.txt
pre-commit install

Code style

Use Black; it is configured automatically if you install the pre-commit hooks as above.

To run it manually: black .


Tests

This repo has two sets of tests: Python unit tests for some of the most critical Python scripts, and nf-test scripts for the Nextflow code.

To run the Python tests, first install the test dependencies:

pip install -r requirements-test.txt

To run the Nextflow tests, the databases have to be downloaded manually; we are working to improve this.

nf-test test tests/*

Version History

v2.0.0 (earliest) Created 28th Apr 2023 at 10:36 by Martin Beracochea

generate_summary_json .. fix empty file comparison

Frozen v2.0.0 8fa9134