COMPSs GPU Cache DNN Distributed Training
Version 1

Workflow Type: COMPSs
Stable

Name: Dislib Distributed Training - Cache ON
Contact Person: cristian.tatu@bsc.es
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

PyTorch distributed training of a CNN on GPUs, leveraging the COMPSs GPU Cache for deserialization speedup.
Launched using 32 GPUs (16 nodes).
Dataset: ImageNet
dislib version: 0.9
PyTorch version: 1.7.1+cu101

Average task execution time: 36 seconds
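The speedup comes from caching deserialized objects on the worker: when successive tasks consume the same serialized data (e.g. a tensor batch), the cache skips repeated deserialization. The sketch below is a conceptual, pure-Python illustration of that idea under assumed semantics; it is not COMPSs internals, and the class and method names are hypothetical.

```python
import pickle

class DeserializationCache:
    """Hypothetical sketch of a worker-side object cache: once a payload
    has been deserialized, later tasks reuse the in-memory object instead
    of calling pickle.loads again."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key, payload):
        # Cache hit: return the already-deserialized object.
        if key in self._store:
            self.hits += 1
            return self._store[key]
        # Cache miss: pay the deserialization cost once, then store.
        self.misses += 1
        obj = pickle.loads(payload)
        self._store[key] = obj
        return obj

# Simulate four tasks that all consume the same serialized batch.
payload = pickle.dumps(list(range(1000)))
cache = DeserializationCache()
for _ in range(4):
    batch = cache.get("batch-0", payload)

print(cache.hits, cache.misses)  # → 3 1
```

With GPU tasks, the cached objects would live in GPU memory, so reuse also avoids repeated host-to-device transfers.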


Version History

Version 1 (earliest) Created 25th Mar 2024 at 11:27 by Cristian Tatu

No revision comments

Frozen Version 1, commit 887f42c
Creators and Submitter
Creator
Additional credit

The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter
Citation
Tatu, C. (2024). COMPSs GPU Cache DNN Distributed Training. WorkflowHub. https://doi.org/10.48546/WORKFLOWHUB.WORKFLOW.802.1
Activity

Views: 296

Created: 25th Mar 2024 at 11:27

Annotated Properties
Topic annotations
Tags

This item has not yet been tagged.

Attributions

None

Total size: 202 KB