Plotting absorbing state probabilities from state 1
I have the following transition matrix:
\[ScriptCapitalP] = DiscreteMarkovProcess[1, {{0., 0.5, 0., 0., 0.5, 0., 0., 0., 0., 0.}, {0., 0., 0.5, 0., 0., 0.5, 0., 0., 0., 0.}, {0., 0., 0., 0.5, 0., 0., 0.5, 0., 0., 0.}, {0., 0., 0., 1., 0., 0., 0., 0., 0., 0.}, {0., 0., 0., 0., 0., 0.5, 0., 0.5, 0., 0.}, {0., 0., 0., 0., 0., 0., 0.5, 0., 0.5, 0.}, {0., 0., 0., 0., 0., 0., 1., 0., 0., 0.}, {0., 0., 0., 0., 0., 0., 0., 0., 0.5, 0.5}, {0., 0., 0., 0., 0., 0., 0., 0., 1., 0.}, {0., 0., 0., 0., 0., 0., 0., 0., 0., 1.}}]
Looking at the transition graph of this process, I can easily identify the absorbing states, and I can compute the probability of reaching any one absorbing state from state 1 individually. For example, from state 1 to state 9:
PDF[\[ScriptCapitalP][∞], 9]
However, this manual process is hardly practical for larger matrices.
What I would like instead is to compute, automatically, the probability of reaching each absorbing state from state 1, so that I can then plot these probabilities.
How might that be achieved?
Tags: plotting, markov-chains, markov-process
asked Dec 1 at 14:38 by user120911, edited Dec 1 at 16:32 by kglr
1 Answer
You can use MarkovProcessProperties:
absorbingStateProbs1[p_] :=
  Extract @@ (MarkovProcessProperties[p, #] & /@ {"ReachabilityProbability", "AbsorbingClasses"});
absorbingStateProbs1@\[ScriptCapitalP]
{0.125, 0.375, 0.375, 0.125}
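The probabilities above come back in the same order as the absorbing classes, so you can pair them with their state labels, e.g. with Thread (a small usage sketch, not part of the original answer; it assumes \[ScriptCapitalP] as defined in the question):
Thread[Flatten[MarkovProcessProperties[\[ScriptCapitalP], "AbsorbingClasses"]] -> absorbingStateProbs1[\[ScriptCapitalP]]]
which for this chain should give rules like {4 -> 0.125, 7 -> 0.375, 9 -> 0.375, 10 -> 0.125}.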
Alternatively,
absorbingStateProbs2[p_] :=
  PDF[p[∞], #] & /@ Flatten[MarkovProcessProperties[p, "AbsorbingClasses"]]
absorbingStateProbs2@\[ScriptCapitalP]
{0.125, 0.375, 0.375, 0.125}
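Since the ultimate goal is a plot, here is a minimal plotting sketch built on the same properties (the helper name absorbingStatePlot is illustrative, not from the original answer; it assumes \[ScriptCapitalP] as defined in the question):
absorbingStatePlot[p_] := Module[{states, probs},
  (* absorbing states of the process *)
  states = Flatten[MarkovProcessProperties[p, "AbsorbingClasses"]];
  (* probability of ending up in each absorbing state, given the process's initial state *)
  probs = PDF[p[∞], #] & /@ states;
  BarChart[probs,
    ChartLabels -> ("state " <> ToString[#] & /@ states),
    PlotLabel -> "Absorbing state probabilities"]]
absorbingStatePlot[\[ScriptCapitalP]]
This should produce a labeled bar chart of the four probabilities shown above.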
answered Dec 1 at 16:44 by kglr