Doctoral thesis

High-performance computing approach to approximate Bayesian inference

  • 2024

PhD: Università della Svizzera italiana

There is a growing demand for large-scale Bayesian inference, driven by greater data availability and higher-dimensional model parameter spaces. The methodology of integrated nested Laplace approximations (INLA) provides a popular and reliable paradigm for performing inference over a large subclass of additive Bayesian hierarchical models. This thesis is dedicated to the development and integration of high-performance computational methods for the INLA framework. The focus is twofold. The first objective is to accelerate the computational bottleneck operations, which consist of Cholesky factorizations, solving linear systems, and selected matrix inversions. We present two numerical solvers to handle these operations: a sparse CPU-based library and a novel blocked GPU-accelerated approach. Second, we establish parallelization strategies that target multi-core architectures (single node), making use of nested thread-level parallelism. For particularly large-scale applications, which arise in the context of spatio-temporal phenomena, we additionally put forward a performant distributed-memory variant (multi-node), capable of handling models with millions of latent parameters. We demonstrate the accuracy and performance of the proposed methods on synthetic as well as real-world applications.
  • English
Computer science and technology
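The three bottleneck operations named in the abstract can be illustrated with a small, self-contained sketch. This is not the thesis's solver: the actual work concerns sparse and GPU-accelerated implementations, whereas the dense NumPy/SciPy example below (with a hypothetical 5x5 precision matrix `Q`) only shows what "Cholesky factorization, linear solve, and selected inversion" compute, namely the quantities INLA-type methods need from the precision matrix of a latent Gaussian field.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical small symmetric positive-definite precision matrix Q,
# standing in for the (sparse) precision matrix of a latent Gaussian model.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T + 5.0 * np.eye(5)
b = rng.standard_normal(5)

# (1) Cholesky factorization Q = L L^T
c, low = cho_factor(Q)

# (2) Solve the linear system Q x = b via two triangular solves
x = cho_solve((c, low), b)

# (3) "Selected inversion": in INLA only selected entries of Q^{-1}
# (e.g. the diagonal, giving marginal variances) are required.
# Here we form the full inverse purely for illustration; specialized
# selected-inversion algorithms avoid this.
Q_inv = cho_solve((c, low), np.eye(5))
marginal_vars = np.diag(Q_inv)
```

In the large, sparse setting targeted by the thesis, step (3) is the expensive part: forming the full inverse is infeasible, which is why dedicated selected-inversion routines and blocked GPU variants matter.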