Time evolution of a Bayesian posterior

8 votes · Stack Exchange · Cross Validated · Original
Posted on 1/1/2017 · UQ ID: 465 · Science

I have a question regarding the time evolution of a quantity related to a Bayesian posterior.

Suppose we have a binary parameter space $\{s_1, s_2\}$ with prior $(p, 1-p)$. The data generating processes are Brownian motions with parameter-determined drift:

\begin{align*} dX_t = \theta_i(t)\, dt + dB_t , \qquad i = 1, 2, \end{align*}

for deterministic functions $\theta_1$, $\theta_2$.

By Girsanov's theorem, the likelihood ratio, conditional on $s_1$, as a stochastic process is $e^{\gamma_t}$, where

\begin{align*} \gamma_t = \int_0^t \big(\theta_1(s) - \theta_2(s)\big)\, dB_s - \frac{1}{2} \int_0^t \big(\theta_1^2(s) - \theta_2^2(s)\big)\, ds . \end{align*}

So the posterior probability of $s_1$ is then given by

\begin{align*} \pi_t = \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } . \end{align*}
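
For completeness, here is a short sketch of why the likelihood ratio enters this way (my addition, not part of the original post): by Bayes' rule, with both likelihoods taken relative to the $s_2$-law,

\begin{align*} P(s_1 \mid \mathcal{F}_t) = \frac{ p \, \frac{dP_{s_1}}{dP_{s_2}}\big|_{\mathcal{F}_t} }{ p \, \frac{dP_{s_1}}{dP_{s_2}}\big|_{\mathcal{F}_t} + (1 - p) } = \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } . \end{align*}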

Question: Can one describe explicitly how $E\left[ \frac{ p e^{\gamma_t} }{ (p e^{\gamma_t} + (1 - p))^2 } \right]$ evolves?

Direct calculation, by Itô's lemma, shows that

\begin{align*} d \left( \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } \right) = (1 - p) \frac{ p e^{\gamma_t} }{ (p e^{\gamma_t} + (1 - p))^2 } (\theta_1^2(t) - \theta_2^2(t))\, dt - (1 - p) \frac{ (p e^{\gamma_t})^2 }{ (p e^{\gamma_t} + (1 - p))^3 } (\theta_1^2(t) - \theta_2^2(t))\, dt + (\text{local martingale terms}). \end{align*}

So

\begin{align*} \frac{ d\, E\left[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } \right] }{dt} &= (1 - p)\, E\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\right] (\theta_1^2(t) - \theta_2^2(t)) \\ &\quad - (1 - p)\, E\left[\frac{( p e^{\gamma_t})^2 }{(p e^{\gamma_t} + (1 - p) )^3}\right] (\theta_1^2(t) - \theta_2^2(t)) , \end{align*}

i.e.

\begin{align*} \frac{ d\, E\left[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } \right]}{dt} &= (1 - p)\, E\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\left(1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right)\right] \cdot (\theta_1^2(t) - \theta_2^2(t)) \\ &= (1 - p) \Big[ E\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\right] E\left[1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right] \\ &\quad + \mathrm{Cov}\left(\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2},\, 1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right) \Big] \cdot (\theta_1^2(t) - \theta_2^2(t)) \\ &= 0. \end{align*}

The last equality follows from the fact that successive Bayesian posteriors (each serving as the next prior) form a martingale: let $p(s)$ be the prior, $q(s)$ the posterior, and $p(x|s)$ the conditional densities; then

\begin{align*} E[q(s)] &= E[ \frac{ p(x|s)p(s) }{ \int p(x|s')p(s') ds' }] \\ &= \int \frac{ p(x|s)p(s) }{ \int p(x|s')p(s') ds' } \int p(x|s')p(s') ds' dx\\ &= \int p(x|s) p(s) dx \\ &= p(s). \end{align*}
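
In the present binary setting this specializes as follows (this restatement is my addition): since $\pi_t = E[\mathbf{1}_{\{s = s_1\}} \mid \mathcal{F}_t]$ under the mixture measure, the tower property gives

\begin{align*} E[\pi_t] = E\big[ E[\mathbf{1}_{\{s = s_1\}} \mid \mathcal{F}_t] \big] = P(s = s_1) = p \quad \text{for all } t \ge 0, \end{align*}

which is exactly the $\frac{d}{dt} E[\pi_t] = 0$ invoked above.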

So, dividing out the factor $(1 - p)(\theta_1^2(t) - \theta_2^2(t))$ (assuming $\theta_1^2(t) \neq \theta_2^2(t)$), the whole thing boils down to

\begin{align*} 0 &= E\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\left(1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right)\right] \\ &= E\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\right] E\left[1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right] \\ &\quad + \mathrm{Cov}\left(\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2},\, 1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right). \end{align*}

The question is then whether one can solve for $E\left[ \frac{ p e^{\gamma_t} }{ (p e^{\gamma_t} + (1 - p))^2 } \right]$ from the above, by computing the time evolution of the covariance term. Or is there another way to get at this?
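
As a quick empirical probe, one can estimate both expectations by Monte Carlo simulation under the mixture of the two hypotheses. This sketch is my addition: the drifts, prior, horizon, and discretization below are arbitrary illustrative choices, and $\gamma_t$ is read as a functional of the observed path $X$ (one way to interpret the post).

import numpy as np

# Monte Carlo probe of E[pi_t] and E[ p e^{gamma_t} / (p e^{gamma_t} + (1-p))^2 ].
rng = np.random.default_rng(0)

p = 0.3                                  # prior probability of s1 (arbitrary)
T, n_steps, n_paths = 1.0, 200, 20_000
dt = T / n_steps
t = dt * np.arange(1, n_steps + 1)

theta1 = lambda u: 1.0 + 0.0 * u         # drift under s1 (arbitrary example)
theta2 = lambda u: -0.5 + 0.0 * u        # drift under s2 (arbitrary example)

# Draw the true state per path, then simulate dX_t = theta_s(t) dt + dB_t.
is_s1 = rng.random(n_paths) < p
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
drift = np.where(is_s1[:, None], theta1(t), theta2(t))
dX = drift * dt + dB

# gamma_t = int_0^t (th1 - th2) dX_s - 1/2 int_0^t (th1^2 - th2^2) ds.
delta = theta1(t) - theta2(t)
gamma = np.cumsum(delta * dX - 0.5 * (theta1(t) ** 2 - theta2(t) ** 2) * dt, axis=1)

L = np.exp(gamma)
pi = p * L / (p * L + (1 - p))           # posterior probability of s1
A = p * L / (p * L + (1 - p)) ** 2       # the quantity in question

for k in (0, n_steps // 2, n_steps - 1):
    print(f"t={t[k]:.3f}  E[pi_t]~{pi[:, k].mean():.4f}  E[A_t]~{A[:, k].mean():.4f}")

The $E[\pi_t]$ column should hover near $p$ (the martingale property), while the $E[A_t]$ column traces the expectation the question asks about.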

Follow Up Question: Does $\frac{ p e^{\gamma_t} }{ (p e^{\gamma_t} + (1 - p))^2 }$ have a Bayesian interpretation? Why should it be orthogonal to $1 - \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }$, the posterior probability of the other state $s_2$, at all times $t$?

Keywords & Tags
probability
bayesian
mathematical-statistics
stochastic-processes
posterior
Model Answers (9)
Submitted answers from various AI models
Unsolved

K2-Think

Pending UQ Validation
Needs Human Verification
MBZUAI-IFM
Qwen2.5-32B
Sep 18, 2025, 01:50 PM

To address the question of how $E\left[\frac{p e^{\gamma_t}}{(p e^{\gamma_t} + (1-p))^2}\right]$ evolves, we start by analyzing the posterior probability $\pi_t = \frac{p e^{\gamma_t}}{p e^{\gamma_t} + (1-p)}$, where $\gamma_t$ is the log-likelihood ratio. Let $c = 1-p$, so $\pi_t = \frac{p M_t}{p M_t + c}$ with $M_t = e^{\gamma_t}$ (the likelihood ratio).
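
Before the truncated continuation, it may help to record the Itô dynamics of $M_t$ implied by the question's definition of $\gamma_t$ (this sketch is my addition, not part of the submitted answer): with $\delta(t) := \theta_1(t) - \theta_2(t)$,

\begin{align*} dM_t &= M_t \, d\gamma_t + \tfrac{1}{2} M_t \, d\langle \gamma \rangle_t \\ &= M_t \left( \delta(t)\, dB_t - \tfrac{1}{2}\big(\theta_1^2(t) - \theta_2^2(t)\big)\, dt \right) + \tfrac{1}{2} M_t\, \delta(t)^2 \, dt \\ &= \delta(t)\, M_t \big( dB_t - \theta_2(t)\, dt \big). \end{align*}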

Evolution o...


GPT-5

Failed UQ Validation
Needs Human Verification
OpenAI
Sep 2, 2025, 06:56 AM

Write Δ(t) = θ1(t) − θ2(t) and let L_t = exp{∫_0^t Δ(s) dX_s − 1/2 ∫_0^t Δ(s)^2 ds}, so that the posterior of s1 given the observation filtration Ft = σ(Xs: s ≤ t) is π_t = P(s1 | Ft) = p L_t / (p L_t + 1 − p).

  1. A simpler target quantity

Your quantity can be rewritten in terms of the posterior itself: π_t(1 − π_t) = p(1 − p) L_t / (p L_t + 1 − p)^2, so E[ p e^{γ_t} / (p e^{γ_t} + 1 − p)^2 ] = ...
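
The claimed identity is quick to verify directly (my check, in the answer's notation):

\begin{align*} \pi_t (1 - \pi_t) = \frac{p L_t}{p L_t + 1 - p} \cdot \frac{1 - p}{p L_t + 1 - p} = \frac{p (1 - p) L_t}{(p L_t + 1 - p)^2}, \end{align*}

so the target expectation equals $E[\pi_t(1 - \pi_t)]/(1 - p)$.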


Claude Opus 4

Failed UQ Validation
Needs Human Verification
Anthropic
Aug 1, 2025, 12:00 AM

This is a fascinating question about the time evolution of a specific functional of the Bayesian posterior in a continuous-time setting. Let me analyze this step by step.

Key Observations

First, let me clarify what you've shown. You've demonstrated that:

T...


o3 Pro

Failed UQ Validation
Needs Human Verification
OpenAI
Aug 1, 2025, 12:00 AM

Let

 π_t := P(s = s_1 | ℱ_t) = p e^{γ_t} / (p e^{γ_t} + 1 − p)  (1)

be the (continuous-time) posterior probability of the first hypothesis, where

 γ_t = ∫_0^t (θ_1 − θ_2)(s) dB_s − ½ ∫_0^t (θ_1² − θ_2²)(s) ds  (2)

and δ(t) := θ_1(t) − θ_2(t).
We work under the "mixture" probability P under which the true state is s_1 with probability p and s_2 with probability 1...
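
For context, the standard filtering result this setup leads to (my addition, quoted from the general theory rather than from this truncated answer) is that under the observation filtration the posterior solves

\begin{align*} d\pi_t = \pi_t (1 - \pi_t)\, \delta(t)\, d\bar W_t, \qquad d\bar W_t := dX_t - \big( \pi_t \theta_1(t) + (1 - \pi_t)\, \theta_2(t) \big)\, dt, \end{align*}

where $\bar W$ is the innovation Brownian motion; in particular $\pi_t$ is a martingale with respect to the observations.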


Gemini 2.5 Pro

UQ Validated
Failed Human Verification
Google
Aug 1, 2025, 12:00 AM

Excellent question. This delves into the dynamics of Bayesian learning in a continuous-time setting, a core topic in filtering theory. Your approach using Itô's lemma is the right one, but there seems to be a subtle error in your derivation that led to the confusing conclusion about "orthogonality".

Let's break down the problem and address your questions systematically.

First, let's establish th...


o4 mini

Failed UQ Validation
Needs Human Verification
OpenAI
Aug 1, 2025, 12:00 AM

First, a small notational re-interpretation will help expose what you are really after:

Let
 q_t := P[s = s_1 | ℱ_t]
be the usual posterior, so
 q_t = p e^{γ_t} / (p e^{γ_t} + (1 − p)),

and denote your "mysterious" quantity by
 a_t := p e^{γ_t} / (p e^{γ_t} + (1 − p))² = q_t(1 − q_t)/(1 − p).

We will do three t...


Claude 3.7 Sonnet

Failed UQ Validation
Needs Human Verification
Anthropic
Aug 1, 2025, 12:00 AM

Time Evolution of a Bayesian Posterior

This is an interesting question about the time evolution of a specific expectation related to a Bayesian posterior in a continuous-time setting with Brownian motion.

Analyzing the Problem

Let me start by reframing the key elements:

  • Binary parameter space {s₁, s₂} with prior (p, 1-p)
  • Data generating process: dXₜ = θᵢ(t)dt + dBₜ
  • Posterior probabil...

DeepSeek R1

Failed UQ Validation
Needs Human Verification
DeepSeek
deepseek-ai/DeepSeek-R1
Aug 1, 2025, 12:00 AM
<think> We are given a Bayesian posterior probability for a binary state space {s1, s2} with prior (p, 1-p). The data generating process is a Brownian motion with drift that depends on the state. The likelihood ratio is given by an exponential of a process γ_t, and the posterior for s1 is:
π_t = [p * exp(γ_t)] / [p * exp(γ_t) + (1-p)]

The question asks for an explicit description of the evol...


o3

Failed UQ Validation
Needs Human Verification
OpenAI
Aug 1, 2025, 12:00 AM

Below I denote
 π_t = P{s = s_1 | 𝔽_t^X} = p e^{γ_t} / (p e^{γ_t} + 1 − p)   (𝔽_t^X is the σ-field generated by the observations up to t)

and write δ(t) = θ_1(t) − θ_2(t).
Your object of interest is

 A_t := p e^{γ_t} / (p e^{γ_t} + 1 − p)².  (1)

1 A simple algebraic reduction

Using π_t we can put

 A_t = π_t (1 − π_t)/(1 − p). ...