Evaluation of PyTorch as a Data-Parallel Programming API for GPU Volume Rendering

Abstract
Data-parallel programming (DPP) has attracted considerable interest from the visualization community, fostering major software initiatives such as VTK-m. However, there has been relatively little recent investigation of data-parallel APIs in higher-level languages such as Python, which could help developers sidestep the need for low-level application programming in C++ and CUDA. Moreover, machine learning frameworks exposing data-parallel primitives, such as PyTorch and TensorFlow, have exploded in popularity, making them attractive platforms for parallel visualization and data analysis. In this work, we benchmark data-parallel primitives in PyTorch and investigate its application to GPU volume rendering using two distinct DPP formulations: a parallel scan and reduce over the entire volume, and repeated application of data-parallel operators to an array of rays. We find that most relevant DPP primitives exhibit performance similar to a native CUDA library. However, our volume rendering implementation reveals that PyTorch is limited in expressiveness compared to other DPP APIs. Furthermore, while render times are sufficient for an early "proof of concept", memory usage acutely limits scalability.
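
To give a concrete sense of the ray-wise, data-parallel formulation the abstract refers to, below is a minimal PyTorch sketch of front-to-back emission-absorption compositing expressed with a scan (torch.cumprod) and a reduce (sum). The function name, tensor shapes, and sampling scheme are illustrative assumptions, not the paper's implementation.

import torch

def composite_rays(colors, alphas):
    # colors: (num_rays, num_samples, 3) per-sample RGB
    # alphas: (num_rays, num_samples)    per-sample opacity in [0, 1]
    #
    # Exclusive scan of transmittance along each ray: T_i = prod_{j < i} (1 - alpha_j)
    trans = torch.cumprod(1.0 - alphas, dim=1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=1)
    # Weighted sum (reduce) of sample colors along each ray.
    weights = (alphas * trans).unsqueeze(-1)   # (num_rays, num_samples, 1)
    return (weights * colors).sum(dim=1)       # (num_rays, 3)

# Example: composite 2 rays with 4 samples each, on the GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
colors = torch.rand(2, 4, 3, device=device)
alphas = torch.rand(2, 4, device=device)
print(composite_rays(colors, alphas))

Because every operation above is a whole-array primitive, the same code runs unchanged on CPU or GPU; this is the style of formulation the paper evaluates against hand-written CUDA.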
Citation
@inproceedings{10.2312:pgv.20211041,
  booktitle = {Eurographics Symposium on Parallel Graphics and Visualization},
  editor    = {Larsen, Matthew and Sadlo, Filip},
  title     = {{Evaluation of PyTorch as a Data-Parallel Programming API for GPU Volume Rendering}},
  author    = {Marshak, Nathan X. and Grosset, A. V. Pascal and Knoll, Aaron and Ahrens, James and Johnson, Chris R.},
  year      = {2021},
  publisher = {The Eurographics Association},
  ISSN      = {1727-348X},
  ISBN      = {978-3-03868-138-0},
  DOI       = {10.2312/pgv.20211041}
}