markov-chains questions

3,065 markov-chains questions.

I recently started to learn about Markov chains and had a problem regarding the expected time to absorption: Problem: Markov has an untrained mouse that he places in a maze. The mouse may move ...
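
A minimal numerical sketch of the standard absorption-time computation this kind of problem calls for: restrict the transition matrix to the transient states and solve the fundamental-matrix system. The maze below is hypothetical, not taken from the problem.

```python
import numpy as np

# Hypothetical example: Q is the transition matrix restricted to the
# transient (non-absorbing) states of the maze; the entries are
# illustrative, not taken from the original problem.
Q = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0],
              [0.5, 0.0, 0.0]])

# The expected steps to absorption t satisfy (I - Q) t = 1,
# i.e. t = (I - Q)^{-1} 1 (the fundamental matrix applied to all-ones).
t = np.linalg.solve(np.eye(len(Q)) - Q, np.ones(len(Q)))
print(t)
```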

Problem 2 in section 6.1 of Grimmett and Stirzaker (G&S) asks: a die is rolled repeatedly; which of the following are Markov chains? Supply the transition matrix for those that are: a.) the ...
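
For concreteness, a sketch of the transition matrix for one part, assumed here (as in the usual version of this exercise) to be the running maximum of the rolls, which is a Markov chain:

```python
import numpy as np

# Running maximum M_n of the first n die rolls (assumed to be one of
# the listed processes): from state i the maximum stays at i with
# probability i/6 and jumps to each j > i with probability 1/6.
P = np.zeros((6, 6))
for i in range(1, 7):
    P[i - 1, i - 1] = i / 6          # roll no larger than the current max
    for j in range(i + 1, 7):
        P[i - 1, j - 1] = 1 / 6      # roll a new, larger maximum j
print(P.sum(axis=1))                  # each row sums to 1
```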

Let M be a reversible Markov chain, with stationary distribution Q. Suppose we construct a Metropolis-Hastings proposal in order to get a chain N with stationary distribution P. I'd like to bound the ...

I got the first part of this question answered; see the link below: Markov Chain problem from Grimmett and Stirzaker Ex 6.1.2.a. There is however a second part that I still don't understand: The original ...

Suppose I have a transition density (matrix) $K$, and I want to calculate the covariance between random variables at two different times. \begin{align*} Cov_{\mu_0}(X_n,X_{n+k})&=E_{\mu_0}(X_n-E_{...
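
Assuming a finite state space whose states are real numbers, and writing $\mu_n := \mu_0 K^n$ for the law of $X_n$ (row-vector convention), the expansion presumably continues as:

```latex
\begin{align*}
\operatorname{Cov}_{\mu_0}(X_n, X_{n+k})
  &= E_{\mu_0}[X_n X_{n+k}] - E_{\mu_0}[X_n]\, E_{\mu_0}[X_{n+k}] \\
  &= \sum_{i,j} i\, j\, \mu_n(i)\, K^k(i,j)
   - \Big(\sum_i i\,\mu_n(i)\Big)\Big(\sum_j j\,\mu_{n+k}(j)\Big),
   \qquad \mu_n := \mu_0 K^n .
\end{align*}
```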

I have found an exercise in an old exam without a solution. Let $X_{0},X_{1},...$ be a Markov chain with state space $S$ and $N$ a stopping time. Let $\mathbb{E}_{x}(\cdot)$ denote the conditional ...

The ergodic theorem says that for an irreducible and positive-recurrent Markov chain $P$, any initial distribution $\lambda$, and $(X_n)_{n\geq 0} \sim \text{Markov}(\lambda, P)$, it follows that for any bounded ...
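
For reference, the statement being invoked (under the usual hypotheses of the ergodic theorem) is:

```latex
% For an irreducible, positive-recurrent chain with stationary
% distribution \pi, and any bounded f : S \to \mathbb{R},
\[
\frac{1}{n}\sum_{k=0}^{n-1} f(X_k)
  \;\xrightarrow[n\to\infty]{\text{a.s.}}\;
  \sum_{x\in S} \pi(x)\, f(x).
\]
```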

I just read on Wikipedia that a way to check whether a Markov chain is ergodic is to compute the eigenvalues of the transition matrix, and if those are all (except for the eigenvalue $1$) less than $1$ in modulus, then the chain ...
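
A sketch of that check (the matrix is hypothetical). One caveat worth making explicit: eigenvalues of a transition matrix can be complex, so the condition should be read as all eigenvalues other than $1$ having modulus strictly less than $1$.

```python
import numpy as np

# Hypothetical 3-state chain; replace P with your own transition matrix.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

eigvals = np.linalg.eigvals(P)
# A stochastic matrix always has eigenvalue 1.  For an irreducible,
# aperiodic (hence ergodic) finite chain, every other eigenvalue has
# modulus strictly less than 1; the check is on |lambda|, since
# eigenvalues may be complex.
print(sorted(np.abs(eigvals), reverse=True))
```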

I have a problem setup that reduces to finding the stationary state of a discrete-time, continuous-state stationary Markov process. Is that state unique, as it is in the discrete-state case? ...

I am a bit confused with the following: Suppose that we have a Markov Chain and a state $i$ that is transient, that is $f_i<1$. Since $i$ is transient, we cannot define the average time of ...

Our course material gives us the following equation without proving it : Let's consider a continuous-time Markov chain. Its discrete state-space is $\epsilon$ and its infinitesimal generator is $\...

A few weeks ago, I came across the following problem. Alice and Bob are planning to gamble against each other. Before they start to play the game, Bob wants to calculate his chance of winning to ...
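
If the game reduces to the classical gambler's ruin (an assumption here: Bob starts with $i$ units out of a total capital of $N$ and wins each independent round with probability $p$, $q = 1-p$), the chance Bob wants is the standard formula:

```latex
\[
P(\text{Bob reaches } N \text{ before } 0 \mid X_0 = i)
  = \begin{cases}
      \dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & p \neq \tfrac12,\\[2ex]
      \dfrac{i}{N}, & p = \tfrac12.
    \end{cases}
\]
```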

Let $\{X_n:n=0,1,2,\ldots\}$ be a Markov chain with transition probabilities as given below: Determine the period of each state. The answer is "The only state with period $> 1$ is $1$, which ...

A Markov chain is ergodic if it is irreducible, positive-recurrent, and aperiodic. Given its state space $S$ and transition matrix $T$ of size $|S|\times|S|$, the stationary distribution for the chain can be seen as the ...

How does one show that any finite-state, time-homogeneous Markov chain has at least one stationary distribution in the sense of $\pi = \pi Q$, where $Q$ is the transition matrix and $\pi$ is the ...
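
Existence is usually proved via Brouwer's fixed-point theorem or Perron-Frobenius; numerically, a sketch that finds such a $\pi$ as a left eigenvector (the matrix is hypothetical):

```python
import numpy as np

Q = np.array([[0.5, 0.5, 0.0],      # hypothetical transition matrix
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

vals, vecs = np.linalg.eig(Q.T)     # left eigenvectors of Q
k = np.argmin(np.abs(vals - 1))     # pick the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi /= pi.sum()                      # normalise to a probability vector
print(pi, pi @ Q)                   # pi and pi Q should agree
```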

Taken from “Introduction to Linear Algebra” by Gilbert Strang: I am getting stuck on this proof, specifically at the inequality reasoning in the area I’ve highlighted. Why is the strict inequality ...

A mixture of Markov chains (MMC) on a set $U$ is a $2$-tuple $(F,\mu)$ such that $F$ is a set of Markov chains on $U$ and $\displaystyle\sum_{f\in F}\sum_{x\in U}\mu^f(x)=1$. Let $U$ be any set and $F$ a ...

It is very well known that expected time for a standard Brownian motion to exit from interval $[a,b]$ (where $a<0$ and $b>0$) is $-ab$. In one of my projects, I wanted to calculate the similar ...
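
For context, a sketch of where $-ab$ comes from: $u(x) = E_x[\tau]$ solves the Poisson problem for the generator of Brownian motion with zero boundary values,

```latex
\[
\tfrac12 u''(x) = -1, \quad u(a) = u(b) = 0
\;\;\Longrightarrow\;\;
u(x) = (x - a)(b - x), \qquad u(0) = (-a)\, b = -ab .
\]
```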

Suppose a computer has $s$ processors with identical independent exponential processing times with rate $\mu$. Instructions are processed on a first-come, first-served basis as soon as a processor ...

Define a steady-state vector for a transition matrix $T$ as a probability vector $v$ such that $Tv = v$ ($1$ is the eigenvalue for $v$). Define a transition matrix $T$ as regular if there exists a ...
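
A power-iteration sketch under the question's convention ($T$ acts on column probability vectors, so $T$ is column-stochastic); for a regular $T$, $T^k v$ converges to the unique steady-state vector from any starting probability vector:

```python
import numpy as np

T = np.array([[0.8, 0.3],           # hypothetical column-stochastic matrix
              [0.2, 0.7]])

v = np.array([1.0, 0.0])            # any probability vector works
for _ in range(100):
    v = T @ v                       # repeated application of T
print(v)                            # approximates the steady state (0.6, 0.4)
```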

I'm stuck with a seemingly easy relation which I have not been able to find in the standard probability literature. This is surprising since the result exists for much more general processes (for Lévy ...

When we have a countably infinite state space, what can we use to define the dynamics of our process, and how can we use this to then give a definition of a process in an equilibrium distribution /and ...

Axelrod's model is a grid of agents who all have features that are randomly assigned. The probability they interact is determined by their similarity in traits. The grid will eventually converge and ...

Consider a state $x_t$ that probabilistically evolves over time according to a controlled Markov chain, i.e., according to known probabilities $$\mathbb P(x_{t+1}=x' \,|\, x_t=x,a_t=a)$$ where $...

The maternity ward has 2 beds. Admissions are made at the beginning of the day. Each day, there is a probability of $1/2$ that no admission will arrive and probability $1/2$ that only one potential ...

Consider a queueing system accepting two types of arrivals: some customers arrive alone, according to a Poisson process of rate $\lambda > 0$, whereas other customers arrive two by two, with ...

Can someone please provide a reference to a detailed proof of the CLT for Markov chains? I have the paper MC's for MCMC'ists, but I have a problem understanding a crucial part: namely, the transition from line 2 to ...

Let $X_0,X_1,\ldots$ be a Markov chain with state space $\mathbb{Z}$. Now I found in a textbook that we have $$ \mathbb{P}(X_{n+1} \in A \mid X_n = i, (X_{0},X_1,\ldots,X_{n-1}) \in B) = \mathbb{P}(X_{...

Consider the random walk $S_{n}$ for $n \geq 1$. Specifically, let $X_{1},X_{2},..$ be Independent with $$ \mathbb{P}(X_{n}=1) = p,~~\mathbb{P}(X_{n}=-1) = 1- p =:q $$ and $S_{n} = \sum_{k=1}^{n}X_{k}...
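
A quick simulation sketch of this walk ($p$ and the horizon are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.6, 1000
steps = rng.choice([1, -1], size=n, p=[p, 1 - p])   # the X_k
S = np.cumsum(steps)                                # S_n = X_1 + ... + X_n
print(S[-1], S.max(), S.min())
```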

I learned that if a Markov chain is ergodic (irreducible, aperiodic and positive-recurrent), then it is guaranteed that a limiting distribution exists (ref: http://www.columbia.edu/~ks20/stochastic-I/...

The Springfield Maternity Ward contains two beds. Admissions are made only at the beginning of the day. Each day, there is a probability $\dfrac12$ that no admission will arrive, and probability $\...

Let $(G_n)$ be a sequence of expander graphs with maximal degree $\Delta$. Find a number $\beta(\Delta)$ such that for $\beta>\beta(\Delta)$, the relaxation time for Glauber dynamics for the Ising ...

I know that a reversible distribution is a stationary distribution, but is there a case where we have a stationary distribution that is not a reversible distribution? In other words, every stationary ...

I have some trouble solving this problem: "Two independent walkers move on a discrete ring with N states (N odd) and periodic boundary conditions, with transition probability: $$P_{n\rightarrow n\...

Let $X_n$ be a simple, symmetric random walk on $\mathbb{Z}$ with $X_0=0$. Let $$ T=\inf\{n\ge 1 : X_n=0\} $$ Compute $\mathbb{E}(s^T)$ for fixed $s\in(0,1)$. Apologies if this has been asked ...
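
For comparison, the standard first-step/generating-function computation (as in Grimmett and Stirzaker) gives:

```latex
\[
E\!\left[s^{T}\right] = 1 - \sqrt{1 - s^{2}}, \qquad 0 < s < 1,
\]
% so P(T < \infty) = \lim_{s \uparrow 1} E[s^T] = 1, while E[T] = \infty
% because the derivative blows up as s \to 1.
```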

Let $\mathbb{S}$ be countable and $\Omega$ be the set of all right-continuous functions $\omega:[0,\infty)\rightarrow \mathbb{S}$. Let $X_t(\omega) = \omega(t)$ denote a continuous-time Markov chain. ...

Consider the following Markov chain, which has an infinite number of states $s_1, s_2, \ldots$, with transition probabilities $$P_{1,1} = \frac 1 2,\quad P_{1,2} = \frac 1 2,\quad P_{j,1} = \frac{1}{j+...

I am trying to get some intuition on Harris recurrence in Markov chains. Define state space $\mathcal S$ comprising a single communication class, $f_{ii}^{(n)}=P(X_n=i, X_{n-1}\ne i,\ldots X_1\ne i\...

I am studying the fastest-mixing Markov chain in Stephen Boyd & Lieven Vandenberghe's Convex Optimization, but I am stuck trying to understand the following derivation. How does it calculate the ...

I am not understanding how the transition probability matrix of the following example is constructed. Suppose that whether or not it rains today depends on previous weather conditions through the ...
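
This looks like Ross's classic example (an assumption here, since the excerpt is cut off). The trick is to enlarge the state to the weather on the last two days; with Ross's numbers the matrix comes out as below.

```python
import numpy as np

# State encodes the last two days: 0 = rain today and yesterday,
# 1 = rain today only, 2 = rain yesterday only, 3 = rain on neither day.
# Ross's rain probabilities from states 0..3 are 0.7, 0.5, 0.4, 0.2;
# note how each transition shifts "today" into tomorrow's "yesterday".
P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])
print(P.sum(axis=1))                # each row is a distribution
```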

I came across this question while studying for a probability midterm, and I can't figure out whether the statement is true or not. The main point of confusion I am having is if a stationary ...

My domain is $[0,1000]$, and for one special case I have $p(v=0)=0.99$, $p(v>0)=0.01$. When I start from a random value and do MCMC with a normal-distribution proposal centered on the old value, I found the ...
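
A minimal random-walk Metropolis sketch in this setting (target and proposal scale are illustrative). One point worth noting: a continuous normal proposal lands exactly on $v=0$ with probability zero, so a target with an atom at $0$ needs a proposal that itself proposes $v=0$ with positive probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def logp(v):
    # Hypothetical continuous target on [0, 1000]; a point mass at 0
    # cannot be handled this way (see the note above).
    return -v / 100.0 if 0 <= v <= 1000 else -np.inf

v, chain = 500.0, []
for _ in range(10_000):
    prop = v + rng.normal(scale=10.0)            # random-walk proposal
    if np.log(rng.uniform()) < logp(prop) - logp(v):
        v = prop                                 # accept
    chain.append(v)
print(np.mean(chain))
```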

Given $X_1,X_2,\dots$ i.i.d. $\mathbb{R}^d$-valued random variables, define $S_0 := 0$, $S_n := X_1 + \dots + X_n$ to be a random walk starting at $0$. Suppose that for some $x \in \mathbb{R}^d$ there ...

Consider a periodic Markov process on a ring of size $n$, with probability $p$ of rotating clockwise and probability $q$ of rotating counterclockwise. Write a program in Python and simulate that Markov chain. I ...
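
Since the question explicitly asks for a Python simulation, a minimal sketch (ring size, bias, and step count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, steps = 10, 0.7, 100_000      # ring size, clockwise bias, horizon

state, visits = 0, np.zeros(n)
for _ in range(steps):
    state = (state + (1 if rng.uniform() < p else -1)) % n
    visits[state] += 1
# The chain is doubly stochastic, so the empirical occupation
# frequencies should approach the uniform distribution 1/n.
print(visits / steps)
```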

Say we have two i.i.d. continuous-time Markov chains $X(t)$ and $Y(t)$ whose state space is multidimensional and discrete. Also, $X(0) = Y(0)$. I am trying to understand the following: $$P(X(t) = Y(t))...

Suppose that the weather tomorrow depends only on the weather today and that there are $3$ states for the weather: Sun, Cloudy, Rainy. Suppose today is day $1$ and the weather is Sun; what's the probability of ...
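
A sketch of the usual computation, with a hypothetical transition matrix since the actual probabilities are cut off above: the distribution on day $k$, starting from Sun on day $1$, is the Sun row of $P^{k-1}$.

```python
import numpy as np

# Rows/columns ordered Sun, Cloudy, Rainy; entries are illustrative.
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])

k = 3                                        # ask about day k
dist = np.linalg.matrix_power(P, k - 1)[0]   # row for "Sun"
print(dict(zip(["Sun", "Cloudy", "Rainy"], dist)))
```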
