1) Markov analysis is a technique that deals with the probabilities of future occurrences by analyzing currently known probabilities.
2) In the matrix of transition probabilities, Pij is the conditional probability of being in state i in the future, given the current state j.
3) In Markov analysis it is assumed that states are both mutually exclusive and collectively exhaustive.
4) In Markov analysis, the transition probability Pij represents the conditional probability of being in state i in the future given the current state of j.
5) The probabilities in any column of the matrix of transition probabilities will always sum to one.
6) The vector of state probabilities for any period is equal to the vector of state probabilities for the preceding period multiplied by the matrix of transition probabilities.
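As a quick sketch of statement 6 (the two-state transition matrix below is hypothetical, chosen only for illustration):

```python
import numpy as np

# P[i, j] = probability of moving from state i to state j in one period;
# each row sums to 1 (see statement 14).
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

pi_0 = np.array([1.0, 0.0])  # vector of state probabilities in period 0
pi_1 = pi_0 @ P              # next period: pi(1) = pi(0) P
print(pi_1)                  # [0.8 0.2]
```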
7) An equilibrium condition exists if the state probabilities for a future period are the same as the state probabilities for a previous period.
8) Equilibrium state probabilities may be estimated by using Markov analysis for a large number of periods.
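Statements 7 and 8 can be illustrated by iterating that same hypothetical matrix over many periods (a sketch, not a general-purpose solver):

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([1.0, 0.0])

# Repeatedly apply pi(n+1) = pi(n) P. When the vector stops changing,
# the equilibrium condition of statement 7 has been reached.
for _ in range(100):
    pi = pi @ P
print(pi)  # approximately [0.6 0.4] for this matrix, from any starting vector
```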
9) Creating the fundamental matrix requires a partition of the matrix of transition probabilities.
10) When absorbing states exist, the fundamental matrix is used to compute equilibrium conditions.
11) For any absorbing state, the probability that a state will remain unchanged in the future is one.
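A sketch of the fundamental-matrix computation behind statements 9–11, assuming the transition matrix has been partitioned with the absorbing states first, P = [[I, 0], [A, B]]; the three-state chain below is hypothetical:

```python
import numpy as np

# State 0 is absorbing (its row is [1, 0, 0]); states 1 and 2 are not.
P = np.array([[1.0, 0.0, 0.0],
              [0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5]])

A = P[1:, :1]   # transitions from nonabsorbing to absorbing states
B = P[1:, 1:]   # transitions among nonabsorbing states

F = np.linalg.inv(np.eye(2) - B)   # fundamental matrix F = (I - B)^(-1)
print(F @ A)    # probability of eventual absorption from each nonabsorbing
                # state (each row is 1 here, since state 0 is the only
                # absorbing state)
```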
12) The four basic assumptions of Markov analysis are:
1. There are a limited or finite number of possible states.
2. The probability of changing states remains the same over time.
3. A future state is predictable from the previous state and the matrix of transition probabilities.
4. The size and makeup of the system are constant during analysis.
13) π(n + 1) = π(n)P
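As a quick check of this formula (the numbers are hypothetical): with π(0) = (1, 0) and P = [[0.8, 0.2], [0.3, 0.7]], one step gives π(1) = π(0)P = (1·0.8 + 0·0.3, 1·0.2 + 0·0.7) = (0.8, 0.2).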
14) In Markov analysis, the row elements of the transition matrix must sum to 1.
15) “Events” are used to identify all possible conditions of a process or a system.
16) Once a Markov process is in equilibrium, it stays in equilibrium.
17) In Markov analysis, initial-state probability values determine equilibrium conditions.
18) Markov analysis assumes that there are a limited number of states in the system.
19) Markov analysis assumes that while a member of one state may move to a different state over time, the overall makeup of the system will remain the same.
20) The vector of state probabilities gives the probability of being in particular states at a particular point in time.
21) The matrix of transition probabilities gives the conditional probabilities of moving from one state to another.
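Read row by row, statement 21 also says how to simulate one transition (a sketch reusing the illustrative matrix from above):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

current = 0
# Row `current` of P is the conditional distribution of the next state,
# so simulating one transition is a single draw from that row.
next_state = rng.choice(len(P), p=P[current])
print(next_state)
```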
22) Collectively exhaustive means that a system can be in only one state at any point in time.
23) If you are in an absorbing state, you cannot go to another state in the future.
24) A Markov process could be used as a model of how a disease progresses from one set of symptoms to another.
25) One of the problems with using the Markov model to study population shifts is that we must assume that the reasons for moving from one state to another remain the same over time.
26) Markov analysis is a technique that deals with the probabilities of future occurrences by
A) using the simplex solution method.
B) analyzing currently known probabilities.
C) statistical sampling.
D) the minimal spanning tree.
E) None of the above
27) Markov analysis might be effectively used for
A) market share analysis.
B) university enrollment predictions.
C) machine breakdowns.
D) bad debt prediction.
E) All of the above
28) Which of the following is not an assumption of Markov processes?
A) The state variable is discrete.
B) There are a limited number of possible states.
C) The probability of changing states remains the same over time.
D) We can predict any future state from the previous state and the matrix of transition probabilities.
E) The size and the makeup of the system do not change during the analysis.
29) In Markov analysis, we also assume that the states are
A) collectively exhaustive.
B) mutually exclusive.
C) independent.
D) A and B
E) A, B, and C
30) The probability that we will be in a future state, given a current or existing state, is called
A) state probability.
B) prior probability.
C) steady state probability.
D) joint probability.
E) transition probability.