Markov Chain with Unique Stationary Distribution [closed]












Why does a Markov chain on the non-negative integers, with transition probability matrix (t.p.m.) $P$ given by $p_{i,i+1} = p$ and $p_{i,0} = 1 - p$, have a unique stationary distribution $\pi$?










probability probability-theory statistics markov-chains markov-process






asked Nov 27 at 4:53 by Note


closed as off-topic by Kavi Rama Murthy, Saad, Gibbs, Did, Christopher Nov 27 at 10:37


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Kavi Rama Murthy, Saad, Gibbs, Did, Christopher

If this question can be reworded to fit the rules in the help center, please edit the question.




  • You are expected to show your effort. – Kavi Rama Murthy, Nov 27 at 5:57














2 Answers


















Let $\pi$ denote a stationary distribution for the given Markov chain, assuming for the moment that such a distribution exists. We may write $\pi$ as a vector, thus:

$\pi = (\pi_0, \pi_1, \pi_2, \ldots, \pi_k, \pi_{k + 1}, \ldots ); \tag 1$

also, $\pi$ must satisfy the normalization condition

$\displaystyle \sum_0^\infty \pi_k = 1, \tag 2$

which ensures that $\pi$ is in fact a probability distribution on the non-negative integers

$\Bbb Z_{\ge} = \{ z \in \Bbb Z, \; z \ge 0 \}. \tag 3$

Now it will be recalled that the equations

$p_{i, i + 1} = p, \; p_{i, 0} = 1 - p \tag 4$

define the conditional transition probabilities 'twixt the specified states ($i$ and $i + 1$, $i$ and $0$, according to the indices); thus the stationary probabilities $\pi_k$ must satisfy the balance equations

$\pi_{i + 1} = p \pi_i, \tag 5$

and

$\pi_0 = \displaystyle \sum_{i = 0}^\infty (1 - p) \pi_i. \tag 6$
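
Editorial aside (not part of the original answer): the following minimal Python sketch shows how (5) and (6) arise as the component-wise form of $\pi P = \pi$ on a finite truncation of the chain. The values $p = 0.3$ and $N = 8$ states are arbitrary illustrative choices, and the $p$-transition out of the last truncated state is simply dropped.

```python
import numpy as np

# Truncated transition matrix: p_{i,0} = 1 - p for every i, p_{i,i+1} = p.
p, N = 0.3, 8
P = np.zeros((N, N))
P[:, 0] = 1 - p
for i in range(N - 1):
    P[i, i + 1] = p

# For an arbitrary non-negative vector pi, the components of pi P are exactly
# the right-hand sides of (5) and (6).
rng = np.random.default_rng(0)
pi = rng.random(N)
lhs = pi @ P
assert np.allclose(lhs[1:], p * pi[:-1])          # (pi P)_{i+1} = p * pi_i, cf. (5)
assert np.isclose(lhs[0], (1 - p) * pi.sum())     # (pi P)_0 = (1 - p) * sum_i pi_i, cf. (6)
print("balance equations (5) and (6) recovered from the truncated matrix")
```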



Applying (5) repeatedly, starting with $i = 0$, we find

$\pi_1 = p \pi_0, \tag 7$

$\pi_2 = p\pi_1 = p^2 \pi_0, \tag 8$

and it is easy to see this pattern leads to

$\pi_k = p^k \pi_0, \; k \in \Bbb Z_{\ge}; \tag 9$

substituting this into the normalization equation (2) yields

$\pi_0 \displaystyle \sum_0^\infty p^k = \sum_0^\infty p^k \pi_0 = \sum_0^\infty \pi_k = 1, \tag{10}$

and since

$\displaystyle \sum_0^\infty p^k = \dfrac{1}{1 - p}, \; 0 < p < 1, \tag{11}$

we may solve (10) and obtain

$\pi_0 = 1 - p, \tag{12}$

and hence from (9),

$\pi_k = p^k(1 - p). \tag{13}$
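
As an editorial check (with arbitrary $p = 0.3$ and a truncation to $N = 60$ states), the geometric formula (13) can be verified numerically to satisfy $\pi P = \pi$ up to a truncation error of order $p^N$:

```python
import numpy as np

# Candidate stationary distribution pi_k = (1 - p) p^k from (13), checked
# against pi P = pi on a truncated version of the chain.
p, N = 0.3, 60
P = np.zeros((N, N))
P[:, 0] = 1 - p
for i in range(N - 1):
    P[i, i + 1] = p

pi = (1 - p) * p ** np.arange(N)
print("mass kept by truncation:", pi.sum())            # 1 - p^N, essentially 1
print("max |pi P - pi|:", np.max(np.abs(pi @ P - pi)))  # ~ p^N, essentially 0
```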



As a consistency check, we may use (13) in the balance equation (6) for $\pi_0$:

$\pi_0 = \displaystyle \sum_0^\infty (1 - p)\pi_k = \sum_0^\infty (1 - p)(1 - p)p^k = (1 - p)^2 \sum_0^\infty p^k = (1 - p)^2 \dfrac{1}{1 - p} = 1 - p. \tag{14}$

Now since a distribution $\pi$ as in (1) is given by (13), we see that a stationary distribution indeed exists; furthermore, it is clear that it is uniquely determined by (9)-(13). Thus the given Markov chain defined by (4) has a unique stationary distribution.
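
To see this unique stationary distribution emerge dynamically, here is an editorial sketch (again with arbitrary $p = 0.3$ and a finite truncation) that iterates an initial point mass at state $0$ under $P$ and watches it approach the geometric distribution (13):

```python
import numpy as np

# Power iteration mu <- mu P from a point mass at state 0; the iterates
# approach the geometric distribution pi_k = (1 - p) p^k of (13).
p, N, steps = 0.3, 60, 200
P = np.zeros((N, N))
P[:, 0] = 1 - p
for i in range(N - 1):
    P[i, i + 1] = p

target = (1 - p) * p ** np.arange(N)
mu = np.zeros(N)
mu[0] = 1.0
for _ in range(steps):
    mu = mu @ P
print("max |mu - pi| after", steps, "steps:", np.max(np.abs(mu - target)))
```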



Nota Bene: Of course we should note that the above derivation applies subject to the condition

$0 < p < 1; \tag{15}$

if $p = 1$, then (6) and (9) show that

$\pi_k = 0, \; k \in \Bbb Z_{\ge}, \tag{16}$

which, since such a $\pi$ can't satisfy (2), means that no stationary distribution exists when $p = 1$. If $p = 0$, the chain jumps to state $0$ with probability one at every step, and formula (13) (with the convention $0^0 = 1$) reduces to the point mass at $0$, which is indeed the unique stationary distribution in that case. We adopt (15) above so that the geometric-series step (11) applies in its usual form. End of Note.
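
A tiny editorial check of the edge cases discussed in the note, evaluating formula (13) at the endpoint values of $p$:

```python
import numpy as np

# Evaluate pi_k = (1 - p) p^k at p = 1 and p = 0 (note that 0**0 == 1).
k = np.arange(6)
for p in (1.0, 0.0):
    pi = (1 - p) * p ** k
    print(f"p = {p}: first entries {pi}, partial sum {pi.sum()}")
# p = 1 gives the zero vector, which cannot be normalized (no stationary law);
# p = 0 gives the point mass at state 0.
```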






answered Nov 27 at 8:18 by Robert Lewis

  • You are God sent. Now I understand clearly. :) – Note, Nov 27 at 8:40
  • @Note: thanks for the kind words!!! – Robert Lewis, Nov 27 at 9:17



















Algebraically you are trying to solve $\pi P = \pi$ (or equivalently $\pi(P-I) = 0$). Write out a couple of the implied equations and you will see how to derive the solution and that it is unique...
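
Editorial illustration of this suggestion (not part of the original answer): one can solve $\pi(P - I) = 0$ numerically on a finite truncation and compare with the geometric distribution derived in the other answer. Here $p = 0.3$, $N = 40$, and the wrap-around used to keep the truncated matrix stochastic are all assumptions of the sketch.

```python
import numpy as np

# Finite truncation of the chain; the last state is sent back to 0 with
# probability 1 so that the truncated matrix stays stochastic.
p, N = 0.3, 40
P = np.zeros((N, N))
P[:, 0] = 1 - p
for i in range(N - 1):
    P[i, i + 1] = p
P[N - 1, 0] = 1.0

# Left eigenvector of P for eigenvalue 1, i.e. a solution of pi (P - I) = 0.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()                        # normalize to a probability vector

print(np.allclose(pi, (1 - p) * p ** np.arange(N)))   # True: matches (1 - p) p^k
```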






answered Nov 27 at 4:57 by gt6989b

  • The $i$th row of $P$ looks like $\begin{bmatrix} 1-p & 0 & 0 & \dots & p & 0 & \dots & 0 \end{bmatrix}$, where the $p$ is in the $(i+1)$th position. The $i$th column then looks like $\begin{bmatrix} 0 \\ 0 \\ \dots \\ p \\ 0 \\ \dots \\ 0 \end{bmatrix}$, where now the $p$ is in the $(i-1)$th position, for $i>0$; for $i=0$ all the entries of the column are $1-p$. So the equations for the stationary distribution, for $i>0$, are $\pi_i = p \pi_{i-1}$ (see the sketch after these comments). – Note, Nov 27 at 6:58
  • So how do I proceed from here? – Note, Nov 27 at 7:07
  • Does $\pi = (p/(1-p), 1, 1)$? – Note, Nov 27 at 7:14
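
The row/column pattern described in the first comment can be printed directly; a tiny editorial sketch (with arbitrary $p = 0.3$, $N = 6$, $i = 3$):

```python
import numpy as np

# Build a small truncation of P and print one row and one column to make the
# pattern described in the comment visible.
p, N, i = 0.3, 6, 3
P = np.zeros((N, N))
P[:, 0] = 1 - p
for j in range(N - 1):
    P[j, j + 1] = p

print(f"row {i}:   ", P[i])       # 1 - p in column 0, p in column i + 1
print(f"column {i}:", P[:, i])    # p in row i - 1, zeros elsewhere (i > 0)
print("column 0:  ", P[:, 0])     # every entry equals 1 - p
```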

















