This repository contains the workflow used to find and characterize the HI sources in the data cube of the SKA Data Challenge 2. It was developed to process a simulated SKA data cube, but can be adapted to process clean HI data cubes from other radio observatories.
The workflow is managed and executed with the Snakemake workflow management system. It uses the spectral-cube library (https://spectral-cube.readthedocs.io/en/latest/), built on the Dask parallelization tool (https://dask.org/) and the Astropy suite (https://www.astropy.org/), to divide the large cube into smaller pieces. On each subcube we run SoFiA-2 (https://github.com/SoFiA-Admin/SoFiA-2) to mask the subcube, find sources, and characterize their properties. Finally, the individual catalogs are cleaned and concatenated into a single catalog, and duplicates from the overlapping regions are eliminated. Some diagnostic plots are produced using a Jupyter notebook.
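The two catalog-level steps described above (tiling the cube with overlaps, then removing duplicate detections from the overlap regions) can be sketched as follows. This is a minimal illustration, not the workflow's actual code: the tile size, overlap, and matching tolerance are hypothetical parameters, and real tiling is done per spectral-cube slab while real cross-matching would use sky coordinates.

```python
# Hypothetical sketch of the tiling and duplicate-removal logic.
# Assumptions: pixel-based tiles and a simple Euclidean matching radius;
# the real workflow operates on WCS/sky coordinates via spectral-cube.

def tile_axis(length, tile, overlap):
    """Yield (start, stop) pixel ranges covering [0, length) with overlap."""
    step = tile - overlap
    slices = []
    start = 0
    while start < length:
        stop = min(start + tile, length)
        slices.append((start, stop))
        if stop == length:
            break
        start += step
    return slices

def tile_cube(nx, ny, tile, overlap):
    """All 2D spatial tiles for an (ny, nx) image plane."""
    return [(ys, xs)
            for ys in tile_axis(ny, tile, overlap)
            for xs in tile_axis(nx, tile, overlap)]

def deduplicate(catalog, tol):
    """Keep the first source of each group of positions closer than tol."""
    kept = []
    for (x, y) in catalog:
        if all((x - kx) ** 2 + (y - ky) ** 2 > tol ** 2 for (kx, ky) in kept):
            kept.append((x, y))
    return kept
```

For example, `tile_axis(10, 4, 1)` yields three ranges, `(0, 4)`, `(3, 7)` and `(6, 10)`, so a source near pixel 3 or 6 appears in two subcube catalogs and is reduced to a single entry by `deduplicate`.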
The documentation can be found on the Documentation page. The workflow and its results can be cited via the Zenodo record.
Version History

Version 1 (earliest), created 9th Aug 2021 at 21:25 by Javier Moldon. Added/updated 2 files (branch: master, commit: c49b2b9).