Workflow Type: COMPSs
Status: Frozen, Stable
Name: Dislib Distributed Training - Cache ON
Contact Person: cristian.tatu@bsc.es
Access Level: public
License Agreement: Apache-2.0
Platform: COMPSs
Machine: Minotauro-MN4
PyTorch distributed training of a CNN on GPUs, leveraging the COMPSs GPU Cache to speed up deserialization of task parameters.
Launched using 32 GPUs (16 nodes).
Dataset: ImageNet
dislib version: 0.9
PyTorch version: 1.7.1+cu101
Average task execution time: 36 seconds
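The deserialization speedup described above comes from keeping task parameters resident on the worker instead of reloading them from serialized files for every task. The following is a minimal conceptual sketch of that idea using only the standard library; the class and method names are illustrative and are not the COMPSs GPU Cache API:

```python
import os
import pickle
import tempfile

class WorkerCache:
    """Illustrative worker-side cache: deserialize a serialized task
    parameter once, then reuse the in-memory object for later tasks."""
    def __init__(self):
        self._store = {}
        self.misses = 0

    def get(self, path):
        if path not in self._store:
            self.misses += 1  # cold access: pay the deserialization cost
            with open(path, "rb") as f:
                self._store[path] = pickle.load(f)
        return self._store[path]  # warm access: no deserialization

# Simulate a serialized task parameter written by a previous task.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".pkl")
pickle.dump(list(range(1000)), tmp)
tmp.close()

cache = WorkerCache()
first = cache.get(tmp.name)   # first task on this worker: deserializes
second = cache.get(tmp.name)  # subsequent tasks: cache hit, same object
assert first is second and cache.misses == 1
os.remove(tmp.name)
```

In the actual workflow the cached objects are GPU tensors, so a hit also avoids a host-to-device transfer, which is where the per-task time savings come from.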
Version History
Version 1 (earliest) Created 25th Mar 2024 at 11:27 by Cristian Tatu
No revision comments
Status: Frozen, tag Version-1, commit 887f42c
Creators and Submitter
Creator: Cristian Tatu
Additional credit: The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)
Submitter
Citation
Tatu, C. (2024). COMPSs GPU Cache DNN Distributed Training. WorkflowHub. https://doi.org/10.48546/WORKFLOWHUB.WORKFLOW.802.1
Activity
Views: 907 Downloads: 168
Created: 25th Mar 2024 at 11:27
Attributions
None