
Bayesian methods for inverse problems - lecture 2

Formal Metadata

Title
Bayesian methods for inverse problems - lecture 2
Number of Parts
9
License
CC Attribution - NonCommercial - NoDerivatives 2.0 Generic:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
We consider the inverse problem of recovering an unknown parameter from a finite set of indirect measurements. We start by reviewing the formulation of the Bayesian approach to inverse problems. In this approach the data and the unknown parameter are modelled as random variables, the distribution of the data is given, and the unknown is assumed to be drawn from a given prior distribution. The solution, called the posterior distribution, is the probability distribution of the unknown given the data, obtained via Bayes' rule. We will discuss the conditions under which this formulation leads to well-posedness of the inverse problem at the level of probability distributions. We then discuss the connection of the Bayesian approach to inverse problems with variational regularization. This will also help us to study the properties of the modes of the posterior distribution as point estimators for the unknown parameter. We will also briefly discuss Markov chain Monte Carlo methods in this context.
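As an illustration of the setting described in the abstract, the following sketch sets up a small linear inverse problem with Gaussian noise and a Gaussian prior, and samples the resulting posterior with a random-walk Metropolis algorithm (one of the simplest Markov chain Monte Carlo methods). The forward map, dimensions, noise level, and step size are all illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward map A and synthetic data y = A u + noise.
n, m = 3, 10                        # dimension of unknown, number of measurements
A = rng.normal(size=(m, n))         # forward operator (assumed, for illustration)
u_true = np.array([1.0, -0.5, 2.0]) # "true" parameter used to generate data
sigma = 0.1                         # noise standard deviation
y = A @ u_true + sigma * rng.normal(size=m)

def log_posterior(u):
    # log p(u | y) up to an additive constant:
    # Gaussian likelihood for the data plus a unit Gaussian prior on u.
    log_lik = -0.5 * np.sum((y - A @ u) ** 2) / sigma**2
    log_prior = -0.5 * np.sum(u**2)
    return log_lik + log_prior

# Random-walk Metropolis: propose a local move, accept with the Metropolis rule.
u = np.zeros(n)
samples = []
step = 0.05
for _ in range(20000):
    proposal = u + step * rng.normal(size=n)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(u):
        u = proposal
    samples.append(u)

# Posterior mean as a point estimate, discarding an initial burn-in period.
posterior_mean = np.mean(samples[5000:], axis=0)
print(posterior_mean)  # close to u_true when the noise is small
```

With small noise and a weakly informative prior, the posterior concentrates near the true parameter, so the posterior mean approximately recovers `u_true`; the posterior mode (the MAP estimator) corresponds to the minimizer of a Tikhonov-type variational functional, which is the connection to variational regularization mentioned above.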