Academic Paper

Point Process-based Monte Carlo estimation
Document Type
Working Paper
Source
Subject
Computer Science - Computational Engineering, Finance, and Science
Statistics - Computation
Language
Abstract
This paper addresses the problem of estimating the expectation of a real-valued random variable of the form $X = g(\mathbf{U})$, where $g$ is a deterministic function and $\mathbf{U}$ can be a random finite- or infinite-dimensional vector. Using recent results on rare event simulation, we propose a unified framework for both probability and mean estimation of such random variables, \emph{i.e.} linking algorithms such as the Tootsie Pop Algorithm (TPA) or the Last Particle Algorithm with nested sampling. In particular, it extends nested sampling as follows: first, the random variable $X$ no longer needs to be bounded; it gives the principle of an ideal estimator with an infinite number of terms that is unbiased and always better than a classical Monte Carlo estimator -- in particular, it has finite variance as soon as there exists $k > 1$ such that $\operatorname{E}[X^k] < \infty$. Moreover, we address the issue of nested sampling termination and show that a random truncation of the sum can preserve unbiasedness while increasing the variance by a factor of at most 2 compared to the ideal case. We also build an unbiased estimator with a fixed computational budget that supports a Central Limit Theorem, and we discuss a parallel implementation of nested sampling, which can dramatically reduce its computational cost. Finally, we extensively study the case where $X$ is heavy-tailed.
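For context, the classical Monte Carlo baseline against which the abstract's ideal estimator is compared simply averages i.i.d. evaluations of $g(\mathbf{U})$. A minimal sketch is below; the choice of a uniform $U$ and of $g(u) = u^2$ is purely illustrative and not taken from the paper:

```python
import random

def monte_carlo_mean(g, sample_u, n=100_000, seed=0):
    """Classical Monte Carlo estimate of E[g(U)]:
    average g over n i.i.d. draws of U."""
    rng = random.Random(seed)
    return sum(g(sample_u(rng)) for _ in range(n)) / n

# Illustrative example: U uniform on [0, 1] and g(u) = u**2,
# so the true expectation is E[U**2] = 1/3.
estimate = monte_carlo_mean(lambda u: u * u, lambda rng: rng.random())
```

This estimator is unbiased with variance $\operatorname{Var}(X)/n$; the paper's point-process construction aims to do strictly better, requiring only $\operatorname{E}[X^k] < \infty$ for some $k > 1$ to achieve finite variance.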
Comment: 13 pages + 4 pages of appendix, 7 figures