Name: Dislib Distributed Training - Cache ON
Contact Person: cristian.tatu@bsc.es
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4
PyTorch distributed training of a CNN on GPUs, leveraging the COMPSs GPU cache to speed up object deserialization.
Launched on 32 GPUs (16 nodes).
Dataset: Imagenet
dislib version: 0.9
PyTorch version: 1.7.1+cu101
Average task execution time: 36 seconds
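The training itself is expressed as PyCOMPSs tasks constrained to GPUs. The sketch below is a minimal, illustrative reconstruction of that pattern under stated assumptions, not the workflow's actual code: the ResNet-50 model, the parameter-averaging step, and the `load_imagenet_partitions` helper are hypothetical names introduced for the example.

```python
import torch
import torch.nn as nn
import torchvision
from pycompss.api.task import task
from pycompss.api.constraint import constraint
from pycompss.api.api import compss_wait_on


@constraint(processors=[{"processorType": "GPU", "computingUnits": "1"}])
@task(returns=1)
def train_partition(model_state, x_chunk, y_chunk, lr=0.01):
    """Run one optimization step over a data partition on the task's GPU."""
    model = torchvision.models.resnet50()  # illustrative CNN, not necessarily the workflow's model
    model.load_state_dict(model_state)
    device = torch.device("cuda")
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = criterion(model(x_chunk.to(device)), y_chunk.to(device))
    loss.backward()
    optimizer.step()
    # Return weights on CPU so COMPSs can serialize (and cache) them between tasks.
    return {k: v.cpu() for k, v in model.state_dict().items()}


@task(returns=1)
def average_states(*states):
    """Average the model weights produced by the parallel training tasks."""
    return {k: sum(s[k].float() for s in states) / len(states)
            for k in states[0]}


if __name__ == "__main__":
    state = torchvision.models.resnet50().state_dict()
    partitions = load_imagenet_partitions()  # hypothetical helper yielding (x, y) tensor chunks
    for epoch in range(10):
        futures = [train_partition(state, x, y) for x, y in partitions]
        state = compss_wait_on(average_states(*futures))
```

Since the weights returned by each task are consumed again in the next epoch, enabling the COMPSs worker cache lets workers reuse already deserialized objects instead of deserializing them anew for every task, which is the kind of speedup the description above refers to.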
Version History
Version 1 (earliest): created 25th Mar 2024 at 11:27 by Cristian Tatu. Status: Frozen. Tag: Version-1. Commit: 887f42c
Creator: Cristian Tatu
Additional credit: The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)
Created: 25th Mar 2024 at 11:27