# What makes a Markov chain a Markov chain

## What are Markov chains?

I am currently reading some articles on Markov chain lumping and see no difference between a Markov chain and simply a weighted directed graph.

For example, in the article Optimal State-Space Lumping in Markov Chains, the authors give the following definition of a CTMC (continuous-time Markov chain):

We consider a finite CTMC $(S, Q)$ with state space $S = \{x_1, x_2, \ldots, x_n\}$ and transition rate matrix $Q : S \times S \to \mathbb{R}^{+}$.

They don't mention the Markov property at all, and in fact, if the weights on the edges are probabilities, I believe the Markov property is trivial, since the probability depends only on the current state of the chain and not on the path that led to it.

In another article, on relational properties of lumpability, Markov chains are defined similarly:

A Markov chain is a triple $(S, P, \pi)$, where $S$ is the finite set of states, $P$ is the transition probability matrix, which gives the probability of moving from one state to another, and $\pi$ is the initial probability distribution, which gives the probability with which the system starts in a particular state.

Again, no mention of past or future or independence.

There is a third paper, Simple O(m log n) Time Markov Chain Lumping, in which not only is it never stated that the weights on the edges are probabilities, but even:

In many applications the values $W(s, s')$ are non-negative. We do not assume this, however, as there are also applications where $W(s, s)$ is intentionally defined as $-W(s, S \setminus \{s\})$, which normally makes it negative.

It also states that lumping should be a way of reducing the number of states while maintaining the Markov property (by merging "equivalent" states into one larger state). However, to me it looks like it is simply adding up probabilities, and that shouldn't even guarantee that the resulting probabilities of the transitions to/from the aggregated states lie in $[0, 1]$. So what does lumping actually preserve?
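To pin down what I mean by "adding up": here is a small sketch (my own example, not taken from any of the papers) of lumping two states of a DTMC by summing transition probabilities into a block. At least in this example the summed entries do stay in $[0, 1]$, since each row of a stochastic matrix is non-negative and sums to 1:

```python
import numpy as np

# A 3-state DTMC transition matrix (made-up values); rows sum to 1.
P = np.array([
    [0.0, 0.5, 0.5],
    [0.3, 0.2, 0.5],  # states 1 and 2 will be lumped together
    [0.3, 0.6, 0.1],
])

# Lump states {1, 2} into one block: probabilities INTO a block are summed.
blocks = [[0], [1, 2]]
P_lumped = np.zeros((2, 2))
for I, bi in enumerate(blocks):
    for J, bj in enumerate(blocks):
        # Take the first state of block I as a representative row; this is
        # only valid when the chain is actually lumpable, i.e. all states
        # of block I have the same total probability into each block J.
        P_lumped[I, J] = P[bi[0], bj].sum()

print(P_lumped)
print(P_lumped.sum(axis=1))  # rows of the lumped matrix still sum to 1
```

Here states 1 and 2 both move into block {1, 2} with total probability 0.7 and into block {0} with probability 0.3, so the representative-row trick is consistent.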

So I see two options:

• I don't understand what a Markov chain is
• The use of the term Markov chain in these papers is incorrect

Could someone clear up the situation?

It really looks like there are different communities using this term, and they mean very different things. From these three articles, the Markov property looks either trivial or useless, whereas it looks fundamental in any other kind of paper.

Both definitions involve an $N \times N$ matrix. However, the first paper defines notation that corresponds to a continuous-time Markov chain (sometimes called a Markov process), while the second paper defines notation that corresponds to a discrete-time Markov chain: the entries of $P$ lie in $[0, 1]$, each row of $P$ sums to $1$, and the entries of $\pi$ sum to $1$.

I cannot read the third paper; it is paywalled. If the entries in each column of the matrix must sum to 1, then they are probabilities and it is a discrete-time Markov chain. If the entries in each column can sum to any number, then the entries represent rates, not probabilities, and it is a continuous-time Markov chain.
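If you can get hold of the matrix itself, that check is mechanical. A rough sketch (using the column convention from the paragraph above; both matrices are made up):

```python
import numpy as np

def classify(M, tol=1e-9):
    """Heuristic from the paragraph above: columns summing to 1 with entries
    in [0, 1] suggest probabilities (DTMC); anything else suggests rates."""
    col_sums = M.sum(axis=0)
    if np.all(np.abs(col_sums - 1.0) < tol) and np.all((M >= 0) & (M <= 1)):
        return "discrete-time (stochastic matrix)"
    return "continuous-time (rate matrix)"

P = np.array([[0.9, 0.5],
              [0.1, 0.5]])       # columns sum to 1: probabilities
Q = np.array([[-2.0, 1.0],
              [ 2.0, -1.0]])     # columns sum to 0: a rate (generator) matrix

print(classify(P))
print(classify(Q))
```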


For both continuous-time and discrete-time Markov chains, the Markov property is implied by the edge weights being constant (or, equivalently, by the entries of the transition matrix being constant).

Markov chains come in two flavors: continuous time and discrete time.

Both continuous-time Markov chains (CTMCs) and discrete-time Markov chains (DTMCs) are represented as directed weighted graphs.

With DTMCs, transitions always take one unit of "time". As a result, there is no choice in how much weight to put on an arc: you put the probability of going to $j$, given that you are at $i$.
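As a sketch of that, one step of a DTMC just samples the next state with the arc probabilities (the matrix below is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Row-stochastic transition matrix (hypothetical values):
# P[i, j] = probability of going to state j, given we are at state i.
P = np.array([
    [0.1, 0.6, 0.3],
    [0.5, 0.0, 0.5],
    [0.2, 0.2, 0.6],
])

def dtmc_step(i):
    # One "time" unit: draw the next state using row i as the weights.
    return int(rng.choice(len(P), p=P[i]))

state = 0
path = [state]
for _ in range(5):
    state = dtmc_step(state)
    path.append(state)
print(path)  # a length-6 trajectory of states in {0, 1, 2}
```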

For CTMCs, the transition time between any two states is necessarily given by an exponential random variable. This is the main difference between CTMCs and DTMCs: DTMCs always have a unit transition time, while CTMCs have a random transition time.

The general convention for a CTMC is to weight an arc with the rate of the exponential random variable for going from the source to the destination. That is, the convention is to put rates on the arcs, not probabilities.
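A sketch of that convention: with rates on the arcs, the next transition out of a state is the winner of a race between independent exponential clocks (the rates below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# rates[i][j] = rate of the exponential clock on the arc i -> j
# (0 means no arc; all values are hypothetical).
rates = np.array([
    [0.0, 2.0, 1.0],
    [3.0, 0.0, 0.5],
    [1.0, 1.0, 0.0],
])

def ctmc_step(i):
    # Each outgoing arc carries an independent Exp(rate) clock;
    # the first clock to ring decides the next state.
    times = np.full(len(rates), np.inf)
    for j, r in enumerate(rates[i]):
        if r > 0:
            times[j] = rng.exponential(1.0 / r)  # numpy takes scale = 1/rate
    j = int(np.argmin(times))
    return j, times[j]  # next state and the random holding time

state, clock = 0, 0.0
for _ in range(3):
    state, dt = ctmc_step(state)
    clock += dt
print(state, clock)  # after 3 transitions the elapsed time is random,
                     # whereas a DTMC would have used exactly 3 time units
```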

Negative rates

Although all of the CTMCs I remember seeing were presented with positive rates on the edges, negative rates do appear in CTMC analysis.

Suppose we are in state A, which is connected to B, C and D as below:

A -> B (at some rate $q_{AB}$)
A -> C (at some rate $q_{AC}$)
D -> A (at some rate $q_{DA}$)

In the generator-matrix convention, the rate of A "to itself" is defined as the negative of its total outgoing rate, $q_{AA} = -(q_{AB} + q_{AC})$, so it is negative.
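In matrix form, the generator convention sets each diagonal entry to minus the total outgoing rate of that state, which is exactly the $W(s, s) = -W(s, S \setminus \{s\})$ definition quoted from the third paper. A sketch with made-up rates for states A, B, C, D:

```python
import numpy as np

# Off-diagonal entries: transition rates between states A, B, C, D
# (hypothetical values). The diagonal starts at zero.
Q = np.array([
    #  A    B    C    D
    [0.0, 2.0, 1.0, 0.0],   # A -> B, A -> C
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [4.0, 0.0, 0.0, 0.0],   # D -> A
], dtype=float)

# Generator convention: Q[s, s] = -(sum of rates out of s).
np.fill_diagonal(Q, -Q.sum(axis=1))

print(np.diag(Q))     # diagonal entries are negative (or zero)
print(Q.sum(axis=1))  # each row of the generator now sums to 0
```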

This is probably not quite what your paper is referring to. I bring it up to show that negative weights aren't necessarily absurd when someone is working with an appropriate convention.

Markov property

For DTMCs you are right: the Markov property is trivially satisfied. For CTMCs, the Markov property is satisfied because the transitions are given by exponential random variables (which are "memoryless"). If the transitions were not given by exponential random variables (say, they were uniform instead), then we would be talking about "semi-Markov chains" or "semi-Markov processes".
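That memorylessness can be checked numerically: for an exponential holding time $T$, $P(T > s + t \mid T > s) = P(T > t)$, while a uniform holding time fails this. A quick Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
s, t = 0.5, 0.7

# Exponential holding times are memoryless.
exp_samples = rng.exponential(1.0, n)   # Exp(1) samples
cond = exp_samples[exp_samples > s]     # condition on surviving past s
lhs = np.mean(cond > s + t)             # P(T > s+t | T > s)
rhs = np.mean(exp_samples > t)          # P(T > t)
print(round(lhs, 3), round(rhs, 3))     # nearly equal

# A uniform "holding time" is not memoryless.
uni = rng.uniform(0.0, 2.0, n)
cond_u = uni[uni > s]
lhs_u = np.mean(cond_u > s + t)         # P(U > s+t | U > s)
rhs_u = np.mean(uni > t)                # P(U > t)
print(round(lhs_u, 3), round(rhs_u, 3)) # clearly different
```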
