Ito's formula proof - why can we assume $u(t,\omega)$, $v(t,\omega)$ are elementary?
My question is about a simplification made in the proof of Ito's formula.

In the proof of Ito's formula in my textbook, it says that since $\int_0^T f \, dB_s$ is defined as
$$\int_0^T f \, dB_s = \lim_{n\to \infty} \int_0^T f_n \, dB_s$$
(with the limit in probability), where $f_n$ is a sequence of step functions such that
$$\int_0^t (f-f_n)^2 \, ds \to 0$$
in probability, then if $X_t$ is an Ito process of the form
$$X_t = X_0 + \int_0^t u(s,\omega)\,ds + \int_0^t v(s,\omega)\,dB_s,$$
then to prove Ito's formula we can assume $u$ and $v$ are elementary functions, i.e. they have the form
$$f(s,\omega) = \sum_j e_j (\omega)\, \mathbf{1}_{[t_j, t_{j+1})}(s).$$
So I understand that if $X_t^n$ is defined by
$$X_t^n = \int_0^t f_n \, dB_s,$$
where $f_n$ is a step function, then clearly $f_n$ is an elementary function. So the fact that we can assume $u,v$ are elementary is due to the fact that $X_t$ is the limit of integrals of step functions. Where I am confused is how this limit can be interchanged with the function $g \in C^2$. That is, why is it the case that
$$g(X_t) = g(\lim_n X_t^n) = \lim_n g(X_t^n)?$$
I suspect it is because:

- $u,v$ are assumed to be almost surely integrable (definition of an Ito process);
- $g$ is a continuous mapping.

And so it follows by some form of the continuous mapping theorem? But I am still unsure of the exact justification.

I am guessing this is the reason we need $g(t,x)$ to be twice continuously differentiable: so that we can take the limit out of all of
$$g,\ \frac{\partial g}{\partial t},\ \frac{\partial g}{\partial x},\ \frac{\partial^2 g}{\partial x^2}$$
using the continuous mapping theorem? Is this correct?
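As a numerical sanity check of the question's setup (not from the textbook), the sketch below approximates an Ito integral by integrals of elementary (piecewise-constant) integrands and watches $g(X_T^n)$ approach $g(X_T)$ for a smooth $g$. The concrete choices $f(s,\omega) = B_s$ and $g = \tanh$ are hypothetical, made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 2**14                     # horizon and fine-grid size
dt = T / N
dB = rng.normal(0.0, np.sqrt(dt), N)  # Brownian increments on the fine grid
B = np.concatenate([[0.0], np.cumsum(dB)])

f = B[:-1]             # left-endpoint values of f(s) = B_s on the fine grid
X_T = np.sum(f * dB)   # fine-grid Ito integral, used as the reference value

g = np.tanh            # a smooth (C^2) test function

for k in [2, 4, 6, 8]:
    step = N // 2**k                  # freeze f on 2**k coarse intervals
    f_n = np.repeat(f[::step], step)  # elementary approximation of f
    X_n = np.sum(f_n * dB)            # integral of the elementary integrand
    print(2**k, abs(g(X_n) - g(X_T)))
```

As the number of coarse intervals grows, $\int_0^T (f-f_n)^2\,ds$ shrinks and $|g(X_T^n) - g(X_T)|$ shrinks with it, which is exactly the interchange the question asks about, seen pathwise for one sample path.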
probability probability-theory stochastic-processes stochastic-calculus stochastic-integrals
Hi Xiaomi, does your textbook define stochastic integrals as limits in probability (as opposed to limits in $L^2$)?
– AddSup, Dec 19 '18 at 13:05

Yes. The limit in $L^2$ is used for the introduction, but the text then switches to the limit in probability to relax the assumptions on the class of functions.
– Xiaomi, Dec 19 '18 at 13:09
asked Dec 19 '18 at 9:44 by Xiaomi (edited Dec 19 '18 at 10:03)
1 Answer
If $X_t$ is of the form
$$
X_t = \int_0^t f(s,\omega)\,dB_s(\omega),\quad t\geq 0,
$$
then by a localization technique we may assume $X_t$ is an $L^2$-bounded martingale. In that case, there is a sequence of elementary processes $\{f_n\}$ such that
$$
E\left[\int_0^T |f_n(s,\cdot)-f(s,\cdot)|^2\,ds\right]\to 0
$$
for all $T>0$. Let $X_{n,t} = \int_0^t f_n(s,\omega)\,dB_s(\omega)$. Then by the martingale maximal inequality, we have
$$
E\left[\sup_{t\in [0,T]}|X_t -X_{n,t}|^2\right]\leq C\, E\left[\int_0^T |f_n(s,\cdot)-f(s,\cdot)|^2\,ds\right]\to 0.
$$
This implies that $X_{n,t} \to_p X_t$ uniformly on every compact interval $[0,T]$. We may further assume that $g\in C^2_0$ via the localization method. Then
$$
g(X_{n,t}),\ g'(X_{n,t}),\ g''(X_{n,t})
$$
converge in probability to
$$
g(X_t),\ g'(X_t),\ g''(X_t)
$$
locally uniformly.
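One way to spell out the final continuity step (a sketch, using only the uniform convergence obtained above): since $g\in C^2_0$, the functions $g, g', g''$ are uniformly continuous, so for every $\varepsilon>0$ there is a $\delta>0$ with $|g(x)-g(y)|<\varepsilon$ whenever $|x-y|<\delta$; hence
$$
P\Bigl(\sup_{t\in[0,T]}|g(X_{n,t})-g(X_t)|\ge\varepsilon\Bigr)
\le P\Bigl(\sup_{t\in[0,T]}|X_{n,t}-X_t|\ge\delta\Bigr)
\xrightarrow{\,n\to\infty\,} 0,
$$
and the same bound applies with $g$ replaced by $g'$ or $g''$.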
answered Dec 19 '18 at 10:43 by Song