A high-performance computing project on GPU-based Monte Carlo simulation for exotic options. The project emphasized pseudorandom number generation, pricing accuracy, and large-scale throughput for Asian, barrier, and lookback-style options.
Available materials: GitHub project, paper (PDF)
Many Monte Carlo methods are embarrassingly parallel because individual samples are independent, and they are often computationally dense, so they benefit greatly from GPU architectures. A canonical subproblem is the generation of pseudorandom numbers, in particular samples from the normal distribution, which are a key component of financial simulations.
Various non-standard methods exploit fully programmable GPUs for efficiency, including the Ziggurat method, the Wallace method, and related hybrid generators. Standard methods such as Box–Muller remain robust in the GPU setting, but speed–accuracy trade-offs still need to be considered.
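As a concrete reference point, the Box–Muller transform mentioned above can be sketched in a few lines. This is a minimal sequential Python version for illustration, not the project's CUDA implementation; the function names and batching scheme here are assumptions for the sketch.

```python
import math
import random

def box_muller(u1, u2):
    # Transform two independent uniforms in (0, 1] into two
    # independent standard normal samples.
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def normal_samples(n, rng=random.random):
    # Draw n standard normal samples two at a time.
    out = []
    while len(out) < n:
        # 1 - rng() maps [0, 1) onto (0, 1], avoiding log(0).
        z0, z1 = box_muller(1.0 - rng(), rng())
        out.extend((z0, z1))
    return out[:n]
```

On a GPU the same transform is typically applied per thread to independent uniform streams, which is why its speed–accuracy behavior, rather than its correctness, is the main design question.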
This project investigates the transferability of Monte Carlo methods from sequential to parallel execution, and then studies the pricing of exotic options such as Asian, lookback, and barrier contracts as a representative application. The emphasis is on the CUDA platform, with discussion of implementation details, experiments, and numerical results within that framework.
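To make the application concrete, the following is a plain sequential Monte Carlo pricer for an arithmetic-average Asian call under Black–Scholes dynamics. It is a sketch of the computation the GPU version parallelizes across paths, not the project's actual code; the parameter names and the choice of an arithmetic average are assumptions.

```python
import math
import random

def price_asian_call(s0, strike, rate, sigma, maturity,
                     n_steps, n_paths, seed=0):
    """Arithmetic-average Asian call priced by plain Monte Carlo
    over geometric Brownian motion paths."""
    rng = random.Random(seed)
    dt = maturity / n_steps
    drift = (rate - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        avg = 0.0
        for _ in range(n_steps):
            # One Euler step of exact GBM: s *= exp(drift + vol * Z).
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            avg += s
        avg /= n_steps
        payoff_sum += max(avg - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * payoff_sum / n_paths
```

Because each path is independent, the outer loop maps directly onto GPU threads; barrier and lookback payoffs differ only in the per-path statistic accumulated (a crossing flag or a running extremum instead of the average).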
Monte Carlo methods map naturally onto GPUs for high-performance financial applications. The Wallace method and hybrid generators provide reasonable alternatives to traditional PRNGs on GPU-based systems, though contingencies related to architecture and algorithm design must be considered, and speed–accuracy trade-offs remain crucial. The advantages of GPUs were demonstrated through numerical experiments across various methods, problems, and hardware on NVIDIA's CUDA platform, and results from the literature were presented and expanded upon. The efficiency and reliability of methods in CUDA libraries were also discussed and evaluated.