COMPSs GPU Cache DNN Distributed Training
Version 1

Workflow Type: COMPSs

Name: Dislib Distributed Training - Cache ON
Contact Person:
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

PyTorch distributed training of a CNN on GPUs, leveraging the COMPSs GPU cache to speed up deserialization.
Launched on 32 GPUs (16 nodes).
Dataset: ImageNet
dislib version: 0.9
PyTorch version: 1.7.1+cu101
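The deserialization speedup comes from keeping already-deserialized objects resident on the worker, so repeated tasks on the same data skip the unpickling step. The toy sketch below illustrates that idea with a plain in-memory dictionary; it is a hypothetical illustration, not the actual COMPSs GPU cache implementation, and the `DeserializationCache` class and its counters are invented for this example.

```python
import pickle


class DeserializationCache:
    """Toy illustration of the idea behind the COMPSs GPU cache:
    keep already-deserialized objects in memory so that repeated
    tasks on the same worker skip the pickle.loads step.
    (Hypothetical sketch, not the actual COMPSs implementation.)"""

    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key, serialized):
        # Deserialize only on the first access; later tasks reuse
        # the in-memory object directly.
        if key in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            self._cache[key] = pickle.loads(serialized)
        return self._cache[key]


# A training batch serialized once (e.g. produced by an earlier task).
blob = pickle.dumps(list(range(1000)))
cache = DeserializationCache()

# 32 tasks reuse the same batch; only the first pays the decode cost.
for _ in range(32):
    batch = cache.get("batch-0", blob)

print(cache.misses, cache.hits)  # 1 miss, 31 hits
```

In the real workflow the cached objects are tensors held in GPU memory, so the avoided cost is both deserialization and host-to-device transfer.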

Average task execution time: 36 seconds


Version History

Version 1 (earliest) Created 25th Mar 2024 at 11:27 by Cristian Tatu

No revision comments

Frozen: Version 1 (887f42c)

Creators and Submitter

Additional credit

The Workflows and Distributed Computing Team

Tatu, C. (2024). COMPSs GPU Cache DNN Distributed Training. WorkflowHub.

Views: 371

Created: 25th Mar 2024 at 11:27

Annotated Properties
Topic annotations
Tags

This item has not yet been tagged.

Attributions


Total size: 202 KB