Algorithmic and HPC Challenges in Parallel Tensor Computations
Oguz Kaya
27 March 2018, 10:30 Room/Building: 465/PCRI-N
Contact:
Research activities: High-performance computing
Abstract:
Tensors, or multi-dimensional arrays, have seen increasing use in many application domains, including signal processing, quantum chemistry, data analysis, and machine learning. Tensor decompositions, generalizations of matrix decompositions such as the SVD to higher dimensions, are employed to find a low-rank representation of the data in these applications. This in turn enables finding feasible solutions to the problem at hand through a proper interpretation of this compact representation. Specifically, sparse tensor decompositions are used in data analysis involving recommender systems, graph analytics, and anomaly detection in order to predict missing entries of the tensor, while dense tensor decompositions are heavily employed in signal processing for the detection of unknown signal sources. Recently, another promising use case for tensors has arisen in the solution of linear systems and eigenvalue problems in high dimensions, where matrices and vectors can be expressed using low-rank tensor decompositions and all matrix-vector operations can be carried out in this compressed form with tremendous computational and memory gains. In all these applications, computing tensor decompositions efficiently is indispensable for rendering tensor methods practical on data of massive scale. This talk focuses on the challenges encountered in accelerating the computation of sparse and dense tensor decompositions using effective shared- and distributed-memory parallelization, partitioning, data structures, and algorithms.
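
To make the notion of a tensor decomposition concrete, here is a minimal sketch of the classical alternating least squares algorithm (CP-ALS) for computing a rank-R CP decomposition of a dense 3-way tensor. This is a textbook illustration in Python/NumPy, not the speaker's implementation; the function names, iteration count, and toy sizes are assumptions made for the example.

    import numpy as np

    def khatri_rao(U, V):
        # Column-wise Kronecker product: (m*n, R) from (m, R) and (n, R).
        m, R = U.shape
        n, _ = V.shape
        return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

    def cp_als(X, rank, n_iter=50, seed=0):
        # Rank-`rank` CP decomposition of a dense 3-way tensor X by
        # alternating least squares: fix two factors, solve for the third.
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))
        # Mode-n unfoldings (matricizations) of X, consistent with C ordering.
        X1 = X.reshape(I, J * K)
        X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
        X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
        for _ in range(n_iter):
            # Each update is an MTTKRP followed by a small R x R solve.
            A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Usage: recover a random rank-3 tensor up to a small residual.
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (10, 11, 12))
    X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(X, rank=3)
    print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C))
          / np.linalg.norm(X))

The matricized-tensor-times-Khatri-Rao product (MTTKRP) in each update dominates the cost; for sparse tensors, parallelizing and partitioning this kernel efficiently is precisely the kind of challenge the talk addresses.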
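
The abstract's claim that matrix-vector products can be carried out in compressed form can be illustrated with the simplest structured case: a matrix given as a Kronecker product of small factors, applied to a vector reshaped as a tensor, without ever forming the full matrix. This is a hedged sketch of the general idea (low-rank tensor formats such as tensor train generalize it); the function name and sizes are illustrative assumptions.

    import numpy as np

    def kron_matvec(factors, x):
        # Apply (A_1 kron A_2 kron ... kron A_d) to x without forming the
        # full matrix: reshape x into a d-way tensor and apply one small
        # factor per mode. Storage drops from prod(n_i)^2 to sum(n_i^2).
        dims = [A.shape[1] for A in factors]
        X = x.reshape(dims)
        for i, A in enumerate(factors):
            X = np.moveaxis(np.tensordot(A, X, axes=(1, i)), 0, i)
        return X.reshape(-1)

    # Usage: agrees with the explicit Kronecker product on a small example.
    rng = np.random.default_rng(0)
    As = [rng.standard_normal((n, n)) for n in (2, 3, 4)]
    x = rng.standard_normal(2 * 3 * 4)
    full = np.kron(np.kron(As[0], As[1]), As[2])
    print(np.allclose(kron_matvec(As, x), full @ x))

For d modes of size n each, the explicit matrix has n^(2d) entries while the factored form stores only d*n^2, which is the source of the computational and memory gains mentioned in the abstract.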