Time evolution of a Bayesian posterior
I have a question regarding the time evolution of a quantity related to a Bayesian posterior.
Suppose we have a binary parameter space $\{s_1, s_2\}$ with prior probabilities $(p, 1-p)$, and observations
$$ dX_t = \theta_i(t)\, dt + dB_t $$
for deterministic functions $\theta_1, \theta_2$, where $B$ is a standard Brownian motion. By Girsanov's theorem, the likelihood ratio, conditional on $\mathcal{F}_t = \sigma(X_s : s \le t)$, is $e^{\gamma_t}$, where
$$ \gamma_t = \int_0^t (\theta_1 - \theta_2)(s)\, dB_s - \frac{1}{2} \int_0^t (\theta_1^2 - \theta_2^2)(s)\, ds. $$
So the posterior probability (of $s_1$) is
$$ \frac{p e^{\gamma_t}}{p e^{\gamma_t} + (1-p)}. $$
Question: Can one describe explicitly how
$$ E\Big[ \frac{p e^{\gamma_t}}{(p e^{\gamma_t} + (1-p))^2} \Big] $$
evolves in time?
Direct calculation, by Itô's lemma and taking expectations (so the stochastic integral drops out), shows that
\begin{align*} \frac{d}{dt} E\Big[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big] &= (1 - p)\, E\Big[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\Big] (\theta_1^2(t) - \theta_2^2(t)) \\ &\quad - (1 - p)\, E\Big[\frac{( p e^{\gamma_t})^2 }{(p e^{\gamma_t} + (1 - p) )^3}\Big] (\theta_1^2(t) - \theta_2^2(t)), \end{align*}
i.e.
\begin{align*} \frac{d}{dt} E\Big[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big] &= (1 - p)\, E\Big[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\Big(1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big)\Big] \cdot (\theta_1^2(t) - \theta_2^2(t)) \\ &= (1 - p) \Big[\, E\Big[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\Big]\, E\Big[1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big] \\ &\quad + \mathrm{Cov}\Big(\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2},\, 1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big) \Big] \cdot (\theta_1^2(t) - \theta_2^2(t)) \\ &= 0. \end{align*}
The last equality follows from the fact that successive Bayesian posteriors form a martingale: let $q(s)$ denote the posterior density of $s$ after observing $x$, i.e. $q(s) = \frac{p(x|s)p(s)}{\int p(x|s')p(s')\, ds'}$. Then, averaging over the marginal density of $x$,
\begin{align*} E[q(s)] &= E[ \frac{ p(x|s)p(s) }{ \int p(x|s')p(s') ds' }] \\ &= \int \frac{ p(x|s)p(s) }{ \int p(x|s')p(s') ds' } \int p(x|s')p(s') ds' dx\\ &= \int p(x|s) p(s) dx \\ &= p(s). \end{align*}
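This martingale property can be sanity-checked numerically. Below is a small Monte Carlo sketch (not from the original post; the constant signals $\theta_1 = 1$, $\theta_2 = 0$, prior $p = 0.3$, and horizon $T = 1$ are illustrative choices) verifying that $E[\pi_t]$ stays at $p$:

```python
import numpy as np

# Monte Carlo sanity check of the martingale property E[pi_t] = p.
# Illustrative constants (not from the question): theta1=1, theta2=0, p=0.3.
rng = np.random.default_rng(0)
p, theta1, theta2 = 0.3, 1.0, 0.0
T, n_steps, n_paths = 1.0, 200, 200_000
dt = T / n_steps

# Draw the true state for each path: s1 with probability p, else s2.
theta = np.where(rng.random(n_paths) < p, theta1, theta2)
delta = theta1 - theta2

gamma = np.zeros(n_paths)  # log-likelihood ratio gamma_t
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    dX = theta * dt + dB                      # observation increment
    gamma += delta * dX - 0.5 * (theta1**2 - theta2**2) * dt

pi_T = p * np.exp(gamma) / (p * np.exp(gamma) + (1 - p))
print(abs(pi_T.mean() - p))  # should be small (Monte Carlo error only)
```

The sample mean of $\pi_T$ agrees with $p$ up to Monte Carlo noise, as the tower property predicts.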
So the whole thing boils down to
\begin{align*} 0 &= E\Big[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\Big(1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big)\Big] \\ &= E\Big[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\Big]\, E\Big[1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big] \\ &\quad + \mathrm{Cov}\Big(\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2},\, 1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\Big). \end{align*}
The question is then whether one can solve for $E\big[ \frac{p e^{\gamma_t}}{(p e^{\gamma_t} + (1-p))^2} \big]$ from this relation.
Follow-up question: Does
K2-Think
To address the question of how $E\left[\frac{p e^{\gamma_t}}{(p e^{\gamma_t} + (1-p))^2}\right]$ evolves, we start by analyzing the posterior probability $\pi_t = \frac{p e^{\gamma_t}}{p e^{\gamma_t} + (1-p)}$, where $\gamma_t$ is the log-likelihood ratio. Let $c = 1-p$, so $\pi_t = \frac{p M_t}{p M_t + c}$ with $M_t = e^{\gamma_t}$ (the likelihood ratio).
Evolution o...
GPT-5
Write Δ(t) = θ1(t) − θ2(t) and let L_t = exp{∫_0^t Δ(s) dX_s − 1/2 ∫_0^t Δ(s)^2 ds}, so that the posterior of s1 given the observation filtration Ft = σ(Xs: s ≤ t) is π_t = P(s1 | Ft) = p L_t / (p L_t + 1 − p).
- A simpler target quantity
Your quantity can be rewritten in terms of the posterior itself: π_t(1 − π_t) = p(1 − p) L_t / (p L_t + 1 − p)^2, so E[ p e^{γ_t} / (p e^{γ_t} + 1 − p)^2 ] = ...
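The algebraic identity behind this rewriting, π_t(1 − π_t) = p(1 − p) L_t / (p L_t + 1 − p)², can be machine-checked; a minimal sympy sketch (symbol names are mine):

```python
import sympy as sp

# Treat the likelihood ratio L as a free positive symbol.
p, L = sp.symbols('p L', positive=True)
pi = p * L / (p * L + 1 - p)               # posterior pi_t
lhs = pi * (1 - pi)
rhs = p * (1 - p) * L / (p * L + 1 - p)**2
print(sp.simplify(lhs - rhs))  # prints 0
```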
Claude Opus 4
This is a fascinating question about the time evolution of a specific functional of the Bayesian posterior in a continuous-time setting. Let me analyze this step by step.
Key Observations
First, let me clarify what you've shown. You've demonstrated that:
T...
o3 Pro
Let
π_t := P(s = s₁ | ℱ_t) = p e^{γ_t} / (p e^{γ_t} + 1 − p)   (1)
be the (continuous-time) posterior probability of the first hypothesis, where
γ_t = ∫₀ᵗ (θ₁ − θ₂)(s) dB_s − ½ ∫₀ᵗ (θ₁² − θ₂²)(s) ds   (2)
and δ(t) := θ₁(t) − θ₂(t).
We work under the “mixture” probability P under which the true state is s₁ with probability p and s₂ with probability 1...
Gemini 2.5 Pro
Excellent question. This delves into the dynamics of Bayesian learning in a continuous-time setting, a core topic in filtering theory. Your approach using Itô's lemma is the right one, but there seems to be a subtle error in your derivation that led to the confusing conclusion about "orthogonality".
Let's break down the problem and address your questions systematically.
First, let's establish th...
o4 mini
First, a small notational re‐interpretation will help expose what you are really after:
Let
qₜ := P[s = s₁ | ℱₜ]
be the usual posterior, so
qₜ = p e^{γₜ} / (p e^{γₜ} + (1−p)),
and denote your “mysterious” quantity by
aₜ := p e^{γₜ} / (p e^{γₜ} + (1−p))² = qₜ(1−qₜ)/(1−p).
We will do three t...
Claude 3.7 Sonnet
Time Evolution of a Bayesian Posterior
This is an interesting question about the time evolution of a specific expectation related to a Bayesian posterior in a continuous-time setting with Brownian motion.
Analyzing the Problem
Let me start by reframing the key elements:
- Binary parameter space {s₁, s₂} with prior (p, 1-p)
- Data generating process: dXₜ = θᵢ(t)dt + dBₜ
- Posterior probabil...
DeepSeek R1
π_t = [p * exp(γ_t)] / [p * exp(γ_t) + (1-p)]
The question asks for an explicit description of the evol...
o3
Below I denote
π_t = P{s = s₁ | 𝔽_t^X} = p e^{γ_t} / (p e^{γ_t} + 1 − p)   (𝔽_t^X is the σ-field generated by the observations up to t)
and write δ(t) = θ₁(t) − θ₂(t).
Your object of interest is
A_t ≔ p e^{γ_t} / (p e^{γ_t} + 1 − p)².   (1)
1 A simple algebraic reduction
Using π_t we can put
A_t = π_t (1 − π_t)/(1 − p). ...
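Given this reduction, one can at least watch E[A_t] numerically. A Monte Carlo sketch under illustrative assumptions (constant θ₁ = 1, θ₂ = 0, prior p = 0.3; none of these come from the question): since x ↦ x(1 − x) is concave and π_t is a martingale, π_t(1 − π_t) is a supermartingale, so the estimate of E[A_t] should decay from A_0 = p toward zero as the posterior concentrates.

```python
import numpy as np

# Monte Carlo sketch of E[A_t] = E[pi_t(1 - pi_t)]/(1 - p) over time.
# Illustrative constants (not from the question): theta1=1, theta2=0, p=0.3.
rng = np.random.default_rng(1)
p, theta1, theta2 = 0.3, 1.0, 0.0
T, n_steps, n_paths = 2.0, 400, 100_000
dt = T / n_steps

# True state per path: s1 with probability p, else s2.
theta = np.where(rng.random(n_paths) < p, theta1, theta2)
delta = theta1 - theta2

gamma = np.zeros(n_paths)          # log-likelihood ratio gamma_t
A_mean = []                        # running Monte Carlo estimates of E[A_t]
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    dX = theta * dt + dB           # observation increment
    gamma += delta * dX - 0.5 * (theta1**2 - theta2**2) * dt
    pi = p * np.exp(gamma) / (p * np.exp(gamma) + (1 - p))
    A_mean.append((pi * (1 - pi)).mean() / (1 - p))

# pi_t(1 - pi_t) is a supermartingale (concave function of a martingale),
# so the estimate decreases from roughly A_0 = p toward 0.
print(A_mean[0], A_mean[-1])
```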