Workflows


MMV Im2Im Transformation


A generic Python package for deep-learning-based image-to-image transformation in biomedical applications

The main branch will continue to be developed so that the latest state-of-the-art techniques and methods can be used in the future. To reproduce the results of our manuscript, we refer to the branch ...

Type: Python

Creator: Justin Sonneck

Submitter: Justin Sonneck

DOI: 10.48546/workflowhub.workflow.626.1

Name: Matrix Multiplication

Contact Person: support-compss@bsc.es

Access Level: public

License Agreement: Apache2

Platform: COMPSs

Description

Matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix.

If A is an n×m matrix and B is an m×p matrix, their product AB is an n×p matrix, defined only if the number of columns m of A equals the number of rows m of B. When multiplying A and B, the elements of the ...
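The definition above can be sketched as a naive triple loop in plain Python (for illustration only; the actual workflow parallelizes this with COMPSs):

```python
def matmul(A, B):
    """Multiply an n×m matrix A by an m×p matrix B, returning an n×p matrix."""
    n, m = len(A), len(A[0])
    if len(B) != m:  # columns of A must equal rows of B
        raise ValueError("incompatible shapes")
    p = len(B[0])
    # C[i][j] is the dot product of row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

For a 2×2 example, `matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` yields `[[19, 22], [43, 50]]`.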

Type: COMPSs

Creators: Jorge Ejarque, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing)

Submitter: Raül Sirvent

DOI: 10.48546/workflowhub.workflow.484.1

Name: Matrix multiplication with Files

Contact Person: support-compss@bsc.es

Access Level: public

License Agreement: Apache2

Platform: COMPSs

Description

Matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix.

If A is an n×m matrix and B is an m×p matrix, their product AB is an n×p matrix, defined only if the number of columns m of A equals the number of rows m of B. When multiplying A and B, the elements ...

Type: COMPSs

Creators: Javier Conejero, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Raül Sirvent

DOI: 10.48546/workflowhub.workflow.485.1

This workflow takes as input an SRA manifest from the SRA Run Selector and generates one FASTQ file, or one FASTQ file pair, per experiment (concatenating multiple runs if necessary). Outputs are relabelled to match the column specified by the user.
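The per-experiment grouping step can be sketched as follows (a minimal sketch: the column names `Experiment` and `Run`, and the manifest-as-list-of-dicts shape, are illustrative assumptions, not the workflow's exact Galaxy inputs):

```python
from collections import defaultdict

def group_runs(manifest_rows, label_column):
    """Group SRA run accessions by experiment and pick the user-chosen label.

    manifest_rows: list of dicts, one per row of the SRA Run Selector manifest.
    label_column: name of the manifest column used to relabel the output.
    """
    groups = defaultdict(lambda: {"runs": [], "label": None})
    for row in manifest_rows:
        exp = row["Experiment"]
        groups[exp]["runs"].append(row["Run"])   # runs to concatenate later
        groups[exp]["label"] = row[label_column]  # output name for this group
    return dict(groups)
```

Each group's `runs` would then be concatenated into a single FASTQ (or pair) named after `label`.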

Type: Galaxy

Creators: Lucille Delisle, Pierre Osteil, Wolfgang Maier

Submitter: WorkflowHub Bot

Stable

A demonstration workflow for Reduced Order Modeling (ROM) within the eFlows4HPC project, implemented using Kratos Multiphysics, EZyRB, COMPSs, and dislib.

Work-in-progress

rquest-omop-worker-workflows

Source for workflow definitions for the open source RQuest OMOP Worker tool developed for Hutch/TRE-FX

Note: ARM workflows are currently broken. x86 ones work.

Inputs

### Body

Sample input payload:

```json
{
  "task_id": "job-2023-01-13-14:20:38-",
  "project": "",
  "owner": "",
  "cohort": {
    "groups": [
      {
        "rules": [
          {
            "varname": "OMOP",
            "varcat": "Person",
            "type": "TEXT",
            "oper": "=",
            "value": "8507"
          }
        ],
        "rules_oper": "AND"
      }
    ],
    "groups_oper": "OR"
  },
  "collection":
  ...
```
Stable

Summary

The data preparation pipeline contains tasks for two distinct scenarios: leukaemia, with microarray data for 119 patients, and ovarian cancer, with next-generation sequencing data for 380 patients.

The disease outcome prediction pipeline offers two strategies for this task:

Graph kernel method: It starts by generating personalized networks for ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Stable

Summary

This pipeline contains the following functions: (1) data processing to handle the transformations needed to obtain the original pathway scores of the samples according to single-sample GSEA; (2) model training based on the disease and healthy sample pathway scores, to classify them; (3) scoring-matrix weight optimization according to a gold-standard list of drugs (those that went to clinical trials or are approved for the disease). It tests the weights in a range of 0 to 30 (you ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Stable

Summary

The PPI information aggregation pipeline starts by retrieving all datasets in the GEO database whose material was generated using expression profiling by high-throughput sequencing. From each dataset identifier, it extracts the supplementary files containing the counts table. After the download step, it identifies which tables were already normalized and which contained raw counts to be normalized. It also identifies and maps the gene IDs to UniProt (the IDs found usually ...
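The raw-vs-normalized check could be approximated with a simple heuristic (an assumption for illustration, not the pipeline's actual rule):

```python
def looks_like_raw_counts(table):
    """Heuristic (assumed, not the pipeline's real test): a counts table
    holding only integer values is treated as raw counts that still need
    normalization; fractional values suggest it is already normalized."""
    return all(float(v).is_integer() for row in table for v in row)
```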

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Stable

Summary

The major goal of this pipeline is to provide a tool for the formalization and standardization of protein-protein interaction (PPI) prediction data using the OntoPPI ontology. The pipeline is split in two parts: (i) a part to prepare data from three main sources of PPI data (HINT, STRING, and PredPrin) and create the standard files to be processed ...

Type: Python

Creator: Yasmmin Martins

Submitter: Yasmmin Martins

Powered by (v.1.14.1)
Copyright © 2008 - 2023 The University of Manchester and HITS gGmbH