In mathematics, a Markov chain, named after Andrey Markov, is a discrete-time stochastic process with the Markov property. Having the Markov property means that, given the present state, future states are independent of the past states. In other words, the present state description fully captures all the information that can influence the future evolution of the process. Thus, given the present, the future is conditionally independent of the past.
At each time instant the system may change its state from the current state to another state, or remain in the same state, according to a certain probability distribution. The changes of state are called transitions, and the probabilities associated with various state-changes are termed transition probabilities. Also see sequential analysis.
Formal definition
A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that, given the present state, the future and past states are independent. Formally,

\Pr(X_{n+1} = x \mid X_n = x_n, \ldots, X_1 = x_1) = \Pr(X_{n+1} = x \mid X_n = x_n).
The possible values of Xi form a countable set S called the state space of the chain.
Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to the other states.
Variations
Continuous-time Markov processes have a continuous index.
Time-homogeneous Markov chains (or Markov chains with time-homogeneous transition probabilities) are processes where

\Pr(X_{n+1} = x \mid X_n = y) = \Pr(X_n = x \mid X_{n-1} = y)

for all n.
A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process where

\Pr(X_n = x_n \mid X_{n-1} = x_{n-1}, \ldots, X_1 = x_1) = \Pr(X_n = x_n \mid X_{n-1} = x_{n-1}, \ldots, X_{n-m} = x_{n-m})

for all n > m. It is possible to construct a chain (Yn) from (Xn) which has the 'classical' Markov property as follows: let Yn = (Xn, Xn−1, ..., Xn−m+1), the ordered m-tuple of X values. Then Yn is a Markov chain with state space Sm and has the classical Markov property. A sketch of this construction appears below.
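As an illustration, here is a minimal Python sketch of this lifting for m = 2; the order-2 transition rule on states {0, 1} is a hypothetical example, not taken from the article.

```python
from itertools import product

# Hypothetical order-2 (m = 2) transition rule on states {0, 1}:
# p2[(x_prev, x_curr)][x_next] = Pr(X_n = x_next | previous two states).
p2 = {
    (0, 0): {0: 0.9, 1: 0.1},
    (0, 1): {0: 0.4, 1: 0.6},
    (1, 0): {0: 0.5, 1: 0.5},
    (1, 1): {0: 0.2, 1: 0.8},
}

# Lifted chain Y_n = (X_n, X_{n-1}): a Y-transition moves (curr, prev)
# to (next, curr), so a pair state always records the last two X values.
lifted = {}
for prev, curr in product((0, 1), repeat=2):
    for nxt, prob in p2[(prev, curr)].items():
        lifted[((curr, prev), (nxt, curr))] = prob

print(lifted[((1, 0), (0, 1))])  # Pr(next X = 0 | prev = 0, curr = 1) = 0.4
```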
Example
A finite state machine can be used as a representation of a Markov chain. If the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.
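A minimal simulation sketch makes this concrete; the weather-like states and probabilities below are illustrative assumptions, not taken from the article.

```python
import random

# Assumed three-state chain: each row of transition probabilities sums to 1.
P = {
    "sunny":  {"sunny": 0.8, "cloudy": 0.15, "rainy": 0.05},
    "cloudy": {"sunny": 0.4, "cloudy": 0.4,  "rainy": 0.2},
    "rainy":  {"sunny": 0.2, "cloudy": 0.4,  "rainy": 0.4},
}

def step(state):
    """Draw the next state using only the current state (Markov property)."""
    targets, weights = zip(*P[state].items())
    return random.choices(targets, weights=weights)[0]

state = "sunny"
path = [state]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```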
A thorough development and many examples can be found in the online monograph Meyn & Tweedie 2005.[1] The appendix of Meyn 2007,[2] also available online, contains an abridged version of Meyn & Tweedie.
Properties of Markov chains
Define the probability of going from state i to state j in n time steps as

p_{ij}^{(n)} = \Pr(X_n = j \mid X_0 = i)

and the single-step transition as

p_{ij} = \Pr(X_1 = j \mid X_0 = i).

The n-step transition satisfies the Chapman-Kolmogorov equation: for any k such that 0 < k < n,

p_{ij}^{(n)} = \sum_{r \in S} p_{ir}^{(k)} \, p_{rj}^{(n-k)}.

The marginal distribution \Pr(X_n = x) is the distribution over states at time n. The initial distribution is \Pr(X_0 = x). The evolution of the process through one time step is described by

\Pr(X_{n+1} = x) = \sum_{r \in S} p_{rx} \Pr(X_n = r).
The superscript (n) is intended as an integer-valued label only; however, if the Markov chain is time-homogeneous, the superscript can also be interpreted as raising the transition matrix to the n-th power, as discussed further below.
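A short numerical check; the two-state matrix is an assumed example, not from the article.

```python
import numpy as np

# Assumed two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

n = 4
Pn = np.linalg.matrix_power(P, n)   # entries are the n-step probabilities p_ij^(n)
print(Pn[0, 1])                     # Pr(X_4 = 1 | X_0 = 0)

mu0 = np.array([1.0, 0.0])          # initial distribution Pr(X_0 = x)
mu1 = mu0 @ P                       # one step of the marginal distribution
print(mu1)                          # [0.9 0.1]
```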
Reducibility
A state j is said to be accessible from a state i (written i → j) if, given that we are in state i, there is a non-zero probability that at some time in the future we will be in state j. Formally, state j is accessible from state i if there exists an integer n ≥ 0 such that

\Pr(X_n = j \mid X_0 = i) = p_{ij}^{(n)} > 0.
Allowing n to be zero means that every state is defined to be accessible from itself.
A state i is said to communicate with state j (written i ↔ j) if it is true that both i is accessible from j and that j is accessible from i. A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C. (It can be shown that communication in this sense is an equivalence relation). A communicating class is closed if the probability of leaving the class is zero, namely that if i is in C but j is not, then j is not accessible from i.
Finally, a Markov chain is said to be irreducible if its state space is a communicating class; this means that, in an irreducible Markov chain, it is possible to get to any state from any state.
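A minimal sketch of finding communicating classes; the three-state matrix is an assumed example, not from the article. States i and j communicate exactly when each is reachable from the other through positive-probability transitions.

```python
import numpy as np

# Assumed example: states 0 and 1 communicate; state 2 reaches them,
# but nothing returns to 2, so {2} is a communicating class by itself.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

# reach[i, j] != 0 iff j is accessible from i (n = 0 is allowed, so the
# diagonal starts nonzero); repeated squaring composes longer paths.
reach = np.eye(n, dtype=int) | (P > 0)
for _ in range(n):
    reach = ((reach @ reach) > 0).astype(int)

communicate = (reach > 0) & (reach.T > 0)        # i <-> j
classes = {frozenset(map(int, np.flatnonzero(communicate[i])))
           for i in range(n)}
print(classes)   # {frozenset({0, 1}), frozenset({2})}
```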
Periodicity
A state i has period k if any return to state i must occur in multiples of k time steps. For example, if it is only possible to return to state i in an even number of steps, then i is periodic with period 2. Formally, the period of a state is defined as

k = \gcd\{\, n : \Pr(X_n = i \mid X_0 = i) > 0 \,\}
(where "gcd" is the greatest common divisor). Note that even though a state has period k, it may not be possible to reach the state in k steps. For example, suppose it is possible to return to the state in {6,8,10,12,...} time steps; then k would be 2, even though 2 does not appear in this list.
If k = 1, then the state is said to be aperiodic; otherwise (k>1), the state is said to be periodic with period k.
It can be shown that every state in a communicating class must have the same period.
A finite state irreducible Markov chain is said to be ergodic if its states are aperiodic.
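A minimal sketch of computing a period; the deterministic two-cycle is an assumed example, not from the article. The gcd is taken over return times found by scanning matrix powers up to a finite bound, which suffices for small chains.

```python
import numpy as np
from math import gcd
from functools import reduce

# Assumed example: a deterministic 2-cycle, so every return takes an
# even number of steps and the period is 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def period(P, i, n_max=50):
    """gcd of { n <= n_max : p_ii^(n) > 0 }, or 0 if no return is seen."""
    returns = []
    Pn = np.eye(P.shape[0])
    for n in range(1, n_max + 1):
        Pn = Pn @ P                 # Pn now holds the n-step probabilities
        if Pn[i, i] > 0:
            returns.append(n)
    return reduce(gcd, returns) if returns else 0

print(period(P, 0))                  # 2
```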
Recurrence
A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable Ti be the first return time to state i (the "hitting time"):

T_i = \inf\{\, n \ge 1 : X_n = i \mid X_0 = i \,\}.

Then, state i is transient if and only if

\Pr(T_i = \infty) > 0.

If a state i is not transient (it has finite hitting time with probability 1), then it is said to be recurrent or persistent. Although the hitting time is then finite with probability 1, it need not have a finite average. Let Mi be the expected (average) return time,

M_i = E[T_i] = \sum_{n=1}^{\infty} n \cdot \Pr(T_i = n).
Then, state i is positive recurrent if Mi is finite; otherwise, state i is null recurrent (the terms non-null persistent and null persistent are also used, respectively).
It can be shown that a state is recurrent if and only if

\sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty.

A state i is called absorbing if it is impossible to leave this state. Therefore, the state i is absorbing if and only if

p_{ii} = 1 \quad \text{and} \quad p_{ij} = 0 \ \text{for} \ j \ne i.
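As a small illustration, the gambler's-ruin chain below (an assumed example, not from the article) has absorbing states 0 and 4, and simulation estimates the probability of being absorbed at each boundary.

```python
import random

# Assumed gambler's-ruin chain: 0 and 4 are absorbing (p_ii = 1); from
# states 1-3 the chain moves one step up or down with equal probability.
def absorb_from(state, lo=0, hi=4):
    while state not in (lo, hi):          # loop until an absorbing state
        state += random.choice((-1, 1))
    return state

runs = 100_000
wins = sum(absorb_from(2) == 4 for _ in range(runs))
print(wins / runs)   # ~0.5: starting midway, both boundaries are equally likely
```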
Ergodicity
A state i is said to be ergodic if it is aperiodic and positive recurrent. If all states in a Markov chain are ergodic, then the chain is said to be ergodic.
Steady-state analysis and limiting distributions
If the Markov chain is time-homogeneous, so that the process is described by a single, time-independent matrix p_{ij}, then the vector π is a stationary distribution (also called an equilibrium distribution or invariant measure) if its entries πj sum to 1 and satisfy

\pi_j = \sum_{i \in S} \pi_i p_{ij}.

An irreducible chain has a stationary distribution if and only if all of its states are positive recurrent. In that case π is unique and is related to the expected return time:

\pi_j = \frac{1}{M_j}.

Further, if the chain is both irreducible and aperiodic, then for any i and j,

\lim_{n \to \infty} p_{ij}^{(n)} = \frac{1}{M_j}.
Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins.
If a chain is not irreducible, its stationary distribution need not be unique: every closed communicating class has its own stationary distribution, and each of these extends to a stationary distribution for the overall chain when the probability outside the class is set to zero. However, if a state j is aperiodic, then

\lim_{n \to \infty} p_{jj}^{(n)} = \frac{1}{M_j},

and for any other state i, letting f_{ij} be the probability that the chain ever visits state j if it starts at i,

\lim_{n \to \infty} p_{ij}^{(n)} = \frac{f_{ij}}{M_j}.
Markov chains with a finite state space
If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)-th element of P equal to

p_{ij} = \Pr(X_{n+1} = j \mid X_n = i).

P is a stochastic matrix: each of its rows sums to 1. Further, when the Markov chain is time-homogeneous, so that the transition matrix P is the same at every step n, the k-step transition probability can be computed as the k-th power of the transition matrix, P^k.
The stationary distribution π is a (row) vector which satisfies the equation

\pi = \pi P.
In other words, the stationary distribution π is a normalized left eigenvector of the transition matrix associated with the eigenvalue 1.
Alternatively, π can be viewed as a fixed point of the linear (hence continuous) transformation on the unit simplex associated to the matrix P. Since any continuous transformation of the unit simplex into itself has a fixed point, a stationary distribution always exists, but it is not guaranteed to be unique in general. However, if the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. In addition, P^k converges to a rank-one matrix in which each row is the stationary distribution π, that is,

\lim_{k \to \infty} P^k = \mathbf{1} \pi,

where \mathbf{1} is the column vector with all entries equal to 1. This is stated by the Perron-Frobenius theorem. It means that as time goes by, the Markov chain forgets where it began (its initial distribution) and converges to its stationary distribution.
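A minimal sketch of both views; the two-state matrix is an assumed example, not from the article. The stationary distribution is extracted as a left eigenvector for eigenvalue 1 and cross-checked against a high matrix power, whose rows converge to π.

```python
import numpy as np

# Assumed example matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

w, v = np.linalg.eig(P.T)            # left eigenvectors of P = eigenvectors of P^T
pi = np.real(v[:, np.argmax(np.isclose(w, 1.0))])
pi = pi / pi.sum()                   # normalize so the entries sum to 1
print(pi)                            # [0.8333... 0.1666...]

print(np.linalg.matrix_power(P, 50)) # every row ~ pi (Perron-Frobenius)
```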
Reversible Markov chain
The idea of a reversible Markov chain comes from the ability to "invert" a conditional probability using Bayes' rule:

\Pr(X_n = i \mid X_{n+1} = j) = \frac{\Pr(X_n = i) \, \Pr(X_{n+1} = j \mid X_n = i)}{\Pr(X_{n+1} = j)}.

It now appears that time has been reversed. Thus, a Markov chain is said to be reversible if there is a π such that

\pi_i p_{ij} = \pi_j p_{ji}.
This condition is also known as the detailed balance condition.
Summing over i gives

\sum_{i \in S} \pi_i p_{ij} = \pi_j,

so for reversible Markov chains, π is always a stationary distribution.
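A quick numerical check of detailed balance; the symmetric random walk on a three-state path is an assumed example, not from the article.

```python
import numpy as np

# Assumed example: lazy symmetric walk on a 3-state path.
P = np.array([[0.5,  0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0,  0.5, 0.5]])
pi = np.array([0.25, 0.5, 0.25])        # its stationary distribution

balance = pi[:, None] * P               # entry (i, j) is pi_i * p_ij
print(np.allclose(balance, balance.T))  # True: detailed balance holds
print(np.allclose(pi @ P, pi))          # and hence pi is stationary
```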
Bernoulli scheme
A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent even of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.
Markov chains with general state space
Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The main idea is to see whether there is a point in the state space that the chain hits with probability one. Generally, this is not true for a continuous state space; however, we can define sets A and B along with a positive number ε and a probability measure ρ such that

- If \tau_A = \inf\{n \ge 0 : X_n \in A\}, then \Pr(\tau_A < \infty \mid X_0 = z) > 0 for all z.
- If x \in A and C \subseteq B, then \Pr(X_1 \in C \mid X_0 = x) \ge \varepsilon \rho(C).

Then we can collapse the sets into an auxiliary point α, and a recurrent Harris chain can be modified to contain α. Lastly, the collection of Harris chains is a comfortable level of generality: it is broad enough to contain a large number of interesting examples, yet restrictive enough to allow for a rich theory.
Applications
Physics
Markovian systems appear extensively in physics, particularly statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.
Testing
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets (samples) as a replacement for exhaustive testing. MCSTs also have uses in temporal state-based networks; Chilukuri et al.'s paper "Temporal Uncertainty Reasoning Networks for Evidence Fusion with Applications to Object Detection and Tracking" (ScienceDirect) gives an excellent background and case study for applying MCSTs to a wider range of applications.
Queueing theory
Markov chains can also be used to model various processes in queueing theory and statistics.[2] Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealised models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy coding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. The world's mobile telephone systems depend on the Viterbi algorithm for error correction, while hidden Markov models are extensively used in speech recognition and in bioinformatics, for instance for coding region/gene prediction. Markov chains also play an important role in reinforcement learning.
Internet applications
The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i links, then it has transition probability (1 − q)/k_i + q/N for all pages that are linked to, and q/N for all pages that are not linked to. The parameter q is taken to be about 0.15.
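A minimal sketch of this construction; the tiny three-page link graph is a hypothetical example, not from the article. Power iteration converges to the stationary distribution, i.e. the PageRanks.

```python
import numpy as np

# Hypothetical link graph on three pages.
links = {0: [1, 2], 1: [2], 2: [0]}
N, q = 3, 0.15

# Build the transition matrix: (1 - q)/k_i to each linked page, plus a
# uniform q/N "teleport" probability to every page (rows sum to 1).
P = np.full((N, N), q / N)
for page, outs in links.items():
    for target in outs:
        P[page, target] += (1 - q) / len(outs)

rank = np.full(N, 1.0 / N)            # start from the uniform distribution
for _ in range(100):                  # power iteration: rank <- rank P
    rank = rank @ P
print(rank)                           # stationary probabilities = PageRanks
```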
Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.
Statistical
Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionised the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.
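A minimal MCMC sketch under stated assumptions (not from the article): random-walk Metropolis-Hastings, whose Markov chain has the target density as its stationary distribution. The target here is a standard normal, known only up to a constant.

```python
import math
import random

def target(x):
    """Unnormalized standard normal density."""
    return math.exp(-0.5 * x * x)

x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + random.gauss(0.0, 1.0)    # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)).
    if random.random() < target(proposal) / target(x):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
print(mean, var)    # roughly 0 and 1 for the standard normal target
```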
Mathematical biology
Markov chains also have many applications in biological modelling, particularly for population processes, which are useful in modelling processes that are (at least) analogous to biological populations. The Leslie matrix is one such example, though some of its entries are not probabilities (they may be greater than 1). Another important example is the modeling of cell shape in dividing sheets of epithelial cells. The distribution of shapes (predominantly hexagonal) was a long-standing mystery until it was explained by a simple Markov model in which a cell's state is its number of sides. Empirical evidence from frogs, fruit flies, and hydra further suggests that this stationary distribution of cell shape is exhibited by almost all multicellular animals.[1]
Gambling
Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).
Music
Markov chains are employed in algorithmic music composition, particularly in software such as Csound or Max. In a first-order chain, the states of the system are note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see the first table below). An algorithm is constructed to produce output note values based on the transition matrix weightings; these could be MIDI note values, frequencies (Hz), or any other desirable metric.
First-order transition matrix:

| Note | A | C# | Eb |
|---|---|---|---|
| A | 0.1 | 0.6 | 0.3 |
| C# | 0.25 | 0.05 | 0.7 |
| Eb | 0.7 | 0.3 | 0 |

Second-order transition matrix:

| Notes | A | D | G |
|---|---|---|---|
| AA | 0.18 | 0.6 | 0.22 |
| AD | 0.5 | 0.5 | 0 |
| AG | 0.15 | 0.75 | 0.1 |
| DD | 0 | 0 | 1 |
| DA | 0.25 | 0 | 0.75 |
| DG | 0.9 | 0.1 | 0 |
| GG | 0.4 | 0.4 | 0.2 |
| GA | 0.5 | 0.25 | 0.25 |
| GD | 1 | 0 | 0 |
A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to "group" particular notes together while "breaking off" into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the "aimless wandering" produced by a first-order system.[3] A generation sketch based on the first-order table appears below.
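A minimal sketch of generating a note sequence from the first-order table above; the choice of starting note and sequence length are assumptions for illustration.

```python
import random

# First-order transition table from the article, keyed by current note.
transitions = {
    "A":  {"A": 0.1,  "C#": 0.6,  "Eb": 0.3},
    "C#": {"A": 0.25, "C#": 0.05, "Eb": 0.7},
    "Eb": {"A": 0.7,  "C#": 0.3,  "Eb": 0.0},
}

note = "A"                # assumed starting note
melody = [note]
for _ in range(15):       # assumed length
    nxt, weights = zip(*transitions[note].items())
    note = random.choices(nxt, weights=weights)[0]
    melody.append(note)
print(" ".join(melody))
```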
Baseball
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain framework when the number of runners and outs is considered: for each half-inning there are 24 possible combinations of runners and outs. Markov chain models can be used to evaluate runs created for individual players as well as for a team.[4]
Markov parody generators
Markov processes can also be used to generate superficially "real-looking" text given a sample document; they are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V Shaney).[5][6]
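A minimal word-level sketch of such a generator (the sample text and sequence length are assumptions): states are words, and transition frequencies come from the bigrams of the sample.

```python
import random
from collections import defaultdict

sample = "the cat sat on the mat and the cat ran off the mat"
words = sample.split()

chain = defaultdict(list)
for a, b in zip(words, words[1:]):
    chain[a].append(b)               # duplicates encode transition frequencies

word = random.choice(words)
out = [word]
for _ in range(12):
    followers = chain.get(word)
    if not followers:                # dead end: restart at a random word
        word = random.choice(words)
    else:
        word = random.choice(followers)
    out.append(word)
print(" ".join(out))
```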
History
Andrey Markov produced the first results (1906) for these processes, purely theoretically. A generalization to countably infinite state spaces was given by Kolmogorov (1936). Markov chains are related to Brownian motion and the ergodic hypothesis, two topics in physics which were important in the early years of the twentieth century, but Markov appears to have pursued the subject out of a mathematical motivation, namely the extension of the law of large numbers to dependent events. In 1913 he applied his findings for the first time, to the first 20,000 letters of Pushkin's Eugene Onegin.
See also
- Hidden Markov model
- Examples of Markov chains
- Markov process
- Markov chain Monte Carlo
- Semi-Markov process
- Variable-order Markov model
- Markov decision process
- Shift of finite type
- Mark V Shaney
- Phase-type distribution
- Markov chain mixing time
- Quantum Markov chain
- Markov network
- Belief propagation
- Factor graph
- Recurrence period density entropy
References
- ↑ S. P. Meyn and R.L. Tweedie, 2005. Markov Chains and Stochastic Stability. Second edition to appear, Cambridge University Press, 2008.
- ↑ 2.0 2.1 S. P. Meyn, 2007. Control Techniques for Complex Networks, Cambridge University Press, 2007.
- ↑ Curtis Roads (ed.) (1996). The Computer Music Tutorial, MIT Press. ISBN 0262181584.
- ↑ Pankin, Mark D. Markov Chain Models: Theoretical Background. URL accessed on 2007-11-26.
- ↑ Kenner, Hugh; O'Rourke, Joseph (November 1984), "A Travesty Generator for Micros", BYTE 9 (12): 129-131, 449-469
- ↑ Hartman, Charles (1996), The Virtual Muse: Experiments in Computer Poetry, Hanover, NH: Wesleyan University Press, ISBN 0819522392
- A.A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp 135-156, 1906.
- A.A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971.
- Classical Text in Translation: A. A. Markov, An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains, trans. David Link. Science in Context 19.4 (2006): 591-600. Online: http://journals.cambridge.org/production/action/cjoGetFulltext?fulltextid=637500
- Leo Breiman. Probability. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 7.)
- J.L. Doob. Stochastic Processes. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.
- S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. London: Springer-Verlag, 1993. ISBN 0-387-19832-6. online: http://decision.csl.uiuc.edu/~meyn/pages/book.html . Second edition to appear, Cambridge University Press, 2008.
- S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007. ISBN-13: 9780521884419. Appendix contains abridged Meyn & Tweedie. online: http://decision.csl.uiuc.edu/~meyn/pages/CTCN/CTCN.html
- Booth, Taylor L. (1967). Sequential Machines and Automata Theory, 1st, New York: John Wiley and Sons, Inc.. Library of Congress Card Catalog Number 67-25924. Extensive, wide-ranging book meant for specialists, written for both theoretical computer scientists as well as electrical engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines, Markov processes, and undecidability. Excellent treatment of Markov processes pp.449ff. Discusses Z-transforms, D transforms in their context.
- Kemeny, John G.; Hazleton Mirkil, J. Laurie Snell, Gerald L. Thompson (1959). Finite Mathematical Structures, 1st, Englewood Cliffs, N.J.: Prentice-Hall, Inc.. Library of Congress Card Catalog Number 59-12841. Classical text. cf Chapter 6 Finite Markov Chains pp.384ff.
- Metze, F. (2007). Discriminative speaker adaptation using articulatory features: Speech Communication Vol 49(5) May 2007, 348-360.
- Mishler, E. G. (1975). Studies in dialogue and discourse: An exponential law of successive questioning: Language in Society Vol 4(1) Apr 1975, 31-51.
- Monderer, D., & Samet, D. (1995). Stochastic Common Learning: Games and Economic Behavior Vol 9(2) May 1995, 161-171.
- Mooijaart, A., & van Montfort, K. (2007). Latent Markov Models for Categorical Variables and Time-Dependent Covariates. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
- Moulin, H., & Stong, R. (2003). Filling a multicolor urn: An axiomatic analysis: Games and Economic Behavior Vol 45(1) Oct 2003, 242-269.
- Moustaki, I., & Knott, M. (2005). Computational Aspects of the E-M and Bayesian Estimation in Latent Variable Models. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
- Muller, E., Buesing, L., Schemmel, J., & Meier, K. (2007). Spike-frequency adapting neural ensembles: Beyond mean adaptation and renewal theories: Neural Computation Vol 19(11) Nov 2007, 2958-3010.
- Musal, R. M. (2007). Bayesian modeling of group preferences and population utility estimation. Dissertation Abstracts International: Section B: The Sciences and Engineering.
- Myers, J. L., Suydam, M. M., & Heuckeroth, O. (1966). Choice behavior and reward structure: Differential payoff: Journal of Mathematical Psychology Vol 3(2) 1966, 458-469.
- Myerson, R. B. (1997). Dual Reduction and Elementary Games: Games and Economic Behavior Vol 21(1-2) Oct 1997, 183-202.
- Nadabar, S. G. (1993). Markov random field contextual models in computer vision: Dissertation Abstracts International.
- Nafziger, D. H. (1973). A Markov chain analysis of the movement of young men using the Holland occupational classification: Dissertation Abstracts International.
- Natale, M. (1976). A Markovian model of adult gaze behavior: Journal of Psycholinguistic Research Vol 5(1) Jan 1976, 53-63.
- Nelson, L. M. (2005). A multilevel Item Response Theory model for time structured data. Dissertation Abstracts International: Section B: The Sciences and Engineering.
- Nicas, M., & Sun, G. (2006). An Integrated Model of Infection Risk in a Health-Care Environment: Risk Analysis Vol 26(4) Aug 2006, 1085-1096.
- Nicolson, R. I. (1982). Shades of all-or-none learning: A stimulus sampling model: British Journal of Mathematical and Statistical Psychology Vol 35(2) Nov 1982, 162-170.
- Nishii, R., & Eguchi, S. (2006). Image classification based on Markov random field models with Jeffreys divergence: Journal of Multivariate Analysis Vol 97(9) Oct 2006, 1997-2008.
- Nock, H. J., & Young, S. J. (2002). Modelling asynchrony in automatic speech recognition using loosely coupled hidden Markov models: Cognitive Science: A Multidisciplinary Journal Vol 26(3) May-Jun 2002, 283-301.
- Nowotny, S. (1979). Social mobility as a process: Mathematical models: Studia Socjologiczne Vol 3(74) 1979, 59-80.
- Nwe, T. L., Foo, S. W., & De Silva, L. C. (2003). Speech emotion recognition using hidden Markov models: Speech Communication Vol 41(4) Dec 2003, 603-623.
- Ogasawara, H. (1981). Markov simplex structure of the Uchida-Kraepelin Psychodiagnostic Test: Japanese Journal of Psychology Vol 52(3) Aug 1981, 152-158.
- Okamoto, E. (1994). Statistical decision tasks in probability learning situations: Japanese Psychological Review Vol 37(3) 1994, 324-332.
- Okamoto, E. (1994). "Statistical decision tasks in probability learning situations": Reply to Ono's and Nakahara's comments: Japanese Psychological Review Vol 37(3) 1994, 335-336.
- Ormoneit, D., & Sen, S. (2002). Kernel-based reinforcement learning: Machine Learning Vol 49(2-3) Nov-Dec 2002, 161-178.
- Oster, G., Delea, T. E., Huse, D. M., Regan, M. M., & et al. (1996). The benefits and risks of over-the-counter availability of nicotine polacrilex ("nicotine gum"): Medical Care Vol 34(5) May 1996, 389-402.
- Otterpohl, J. R., Emmert-Streib, F., & Pawelzik, K. (2001). A constrained HMM-based approach to the estimation of perceptual switching dynamics in pigeons: Neurocomputing: An International Journal Vol 38-40 Jun 2001, 1495-1501.
- Park, Y. B. (1991). A neural network generalization of a hidden Markov model for continuous speech: Dissertation Abstracts International.
- Patten, S. B., & Lee, R. C. (2004). Epidemiological theory, decision theory and mental health services research: Social Psychiatry and Psychiatric Epidemiology Vol 39(11) Nov 2004, 893-898.
- Patten, S. B., & Lee, R. C. (2004). Towards a dynamic description of major depression epidemiology: Epidemiologia e Psichiatria Sociale Vol 13(1) Jan-Mar 2004, 21-28.
- Patz, R. J., & Junker, B. W. (1999). Applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses: Journal of Educational and Behavioral Statistics Vol 24(4) Win 1999, 342-366.
- Patz, R. J., & Junker, B. W. (1999). A straightforward approach to Markov chain Monte Carlo methods for item response models: Journal of Educational and Behavioral Statistics Vol 24(2) Sum 1999, 146-178.
- Pellom, B. L., & Hansen, J. H. L. (1998). Automatic segmentation of speech recorded in unknown noisy channel characteristics: Speech Communication Vol 25(1-3) Aug 1998, 97-116.
- Pelzer, B., Eisinga, R., & Franses, P. H. (2005). "Panelizing" Repeated Cross Sections: Quality & Quantity: International Journal of Methodology Vol 39(2) Apr 2005, 155-174.
- Pentland, A., & Liu, A. (1999). Modeling and prediction of human behavior: Neural Computation Vol 11(1) Jan 1999, 229-242.
- Pettiway, L. E., Dolinsky, S., & Grigoryan, A. (1994). The drug and criminal activity patterns of urban offenders: A Markov chain analysis: Journal of Quantitative Criminology Vol 10(1) Mar 1994, 79-107.
- Pike, A. R. (1966). Stochastic models of choice behaviour: Response probabilities and latencies of finite Markov chain systems: British Journal of Mathematical and Statistical Psychology Vol 19(1) 1966, 15-32.
- Pla, F., & Molina, A. (2004). Improving part-of-speech tagging using lexicalized HMMs: Natural Language Engineering Vol 10(2) Jun 2004, 167-189.
- Plewis, I. (1981). Using longitudinal data to model teachers' ratings of classroom behavior as a dynamic process: Journal of Educational Statistics Vol 6(3) Fal 1981, 237-255.
- Pollack, I. (1972). Visual discrimination thresholds for one- and two-dimensional Markov spatial constraints: Perception & Psychophysics Vol 12(2-A) Aug 1972, 161-167.
- Pollack, I. (1973). Discrimination of third-order Markov constraints within visual displays: Perception & Psychophysics Vol 13(2) Apr 1973, 276-280.
- Polson, P. G. (1972). Presolution performance functions for Markov models: Psychometrika Vol 37(4, Pt 1) Dec 1972, 453-459.
- Polson, P. G., & Huizinga, D. (1974). Statistical methods for absorbing Markov-chain models for learning: Estimation and identification: Psychometrika Vol 39(1) Mar 1974, 3-22.
- Poltyreva, T. E., & Petrov, E. S. (1982). Behaviour of dogs under probabilistic reinforcement in a choice situation: Zhurnal Vysshei Nervnoi Deyatel'nosti Vol 32(1) 1982, 3-9.
- Porta, J. M., Vlassis, N., Spaan, M. T. J., & Poupart, P. (2006). Point-based value iteration for continuous POMDPs: Journal of Machine Learning Research Vol 7 2006, 2329-2367.
- Predtetchinski, A. (2007). The strong sequential core for stationary cooperative games: Games and Economic Behavior Vol 61(1) Oct 2007, 50-66.
- Rardin, R. L., & Gray, P. (1973). Analysis of crime control strategies: Journal of Criminal Justice Vol 1(4) Win 1973, 339-346.
- Rauschenberger, J., Schmitt, N., & Hunter, J. E. (1980). A test of the need hierarchy concept by a Markov model of change in need strength: Administrative Science Quarterly Vol 25(4) Dec 1980, 654-670.
- Rauschenberger, J. M. (1979). The development and test of a Markov chain model of the need hierarchy concept: Dissertation Abstracts International.
- Reichle, E. D., & Nelson, J. R. (2003). Local vs. Global Covert Visual Attention: Are Two States Necessary? Comment on Liechty et al., 2003: Psychometrika Vol 68(4) Dec 2003, 543-549.
- Reignier, P., Brdiczka, O., Vaufreydaz, D., Crowley, J. L., & Maisonnasse, J. (2007). Context-aware environments: From specification to implementation: Expert Systems: International Journal of Knowledge Engineering and Neural Networks Vol 24(5) Nov 2007, 305-320.
- Reinhard, K., & Niranjan, M. (2002). Diphone subspace mixture trajectory models for HMM complementation: Speech Communication Vol 38(3-4) Nov 2002, 237-265.
- Ren, L. (2006). Parameter estimation for latent mixture models with applications to psychiatry. Dissertation Abstracts International: Section B: The Sciences and Engineering.
- Robertson, C. (1982). Modelling recall order in free recall experiments: British Journal of Mathematical and Statistical Psychology Vol 35(2) Nov 1982, 171-182.
- Robins, G., Pattison, P., & Woolcock, J. (2005). Small and Other Worlds: Global Network Structures from Local Processes: American Journal of Sociology Vol 110(4) Jan 2005, 894-936.
- Rojek, D. G., & Erickson, M. L. (1982). Delinquent careers: A test of the career escalation model: Criminology: An Interdisciplinary Journal Vol 20(1) May 1982, 5-28.
- Rook, A. J., & Penning, P. D. (1991). Stochastic models of grazing behaviour in sheep: Applied Animal Behaviour Science Vol 32(2-3) Nov 1991, 167-177.
- Rost, J. (2002). Mixed and latent Markov models as item response models: Methods of Psychological Research Vol 7(2) 2002, 53-72.
- Roy, N. K., Potter, W. D., & Landau, D. P. (2004). Designing Polymer Blends Using Neural Networks, Genetic Algorithms, and Markov Chains: Applied Intelligence Vol 20(3) May-Jun 2004, 215-229.
- Rumsey, D. J. (1994). Nonresponse models for social network stochastic processes. Dissertation Abstracts International: Section B: The Sciences and Engineering.
- Salvi, G. (2006). Dynamic behaviour of connectionist speech recognition with strong latency constraints: Speech Communication Vol 48(7) Jul 2006, 802-818.
- Samet, D. (1998). Iterated Expectations and Common Priors: Games and Economic Behavior Vol 24(1-2) Jul 1998, 131-141.
- Santharam, G., & Sastry, P. S. (1997). A reinforcement learning neural network for adaptive control of Markov chains: IEEE Transactions on Systems, Man, & Cybernetics Part A: Systems & Humans Vol 27(5) Sep 1997, 588-600.
- Sato, M., Abe, K., & Takeda, H. (1988). Learning control of finite Markov chains with an explicit trade-off between estimation and control: IEEE Transactions on Systems, Man, & Cybernetics Vol 18(5) Sep-Oct 1988, 677-684.
- Saul, L. K., Jordan, M. I., & Smyth, P. (1999). Mixed memory Markov models: Decomposing complex stochastic processes as mixtures of simpler ones: Machine Learning Vol 37(1) Oct 1999, 75-87.
- Schinnar, A. P., & Stewman, S. (1978). A class of Markov models of social mobility with duration memory patterns: Journal of Mathematical Sociology Vol 6(1) 1978, 61-86.
- Schmittmann, V. D., Dolan, C. V., van der Maas, H. L. J., & Neale, M. C. (2005). Discrete Latent Markov Models for Normally Distributed Response Data: Multivariate Behavioral Research Vol 40(4) Oct 2005, 461-488.
- Schulz, U., & Albert, D. (1976). Reproduction from memory and its termination: A class of counter models: Zeitschrift für Experimentelle und Angewandte Psychologie Vol 23(4) 1976, 678-699.
- Schulz, U., & Heuer, H. (1978). A model of avoidance learning: Psychologische Beitrage Vol 20(3) 1978, 390-413.
- Schut, M., Wooldridge, M., & Parsons, S. (2004). The theory and practice of intention reconsideration: Journal of Experimental & Theoretical Artificial Intelligence Vol 16(4) Oct-Dec 2004, 261-293.
- Scott, S. L. (2003). Hidden Markov processes and sequential pattern mining. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
- Sedlak, A. J., & Walton, M. D. (1982). Sequencing in social repair: A Markov grammar of children's discourse about transgressions: Developmental Review Vol 2(3) Sep 1982, 305-329.
- Segura, J. C., Rubio, A. J., Peinado, A. M., Garcia, P., & et al. (1994). Multiple VQ hidden Markov modelling for speech recognition: Speech Communication Vol 14(2) Apr 1994, 163-170.
- Seltzer, M. H., Wong, W. H., & Bryk, A. S. (1996). Bayesian analysis in applications of hierarchical models: Issues and methods: Journal of Educational and Behavioral Statistics Vol 21(2) Sum 1996, 131-167.
- Serow, R. C., & Taylor, R. G. (1990). Values as behavior among postsecondary students: An emphasis on transitional states: College Student Journal Vol 24(1) Spr 1990, 6-13.
- Shahin, I. (2006). Enhancing speaker identification performance under the shouted talking condition using second-order circular hidden Markov models: Speech Communication Vol 48(8) Aug 2006, 1047-1055.
- Sherry, C. J., Barrow, D. L., & Klemm, W. R. (1982). Serial dependencies and Markov properties of neuronal interspike intervals from rat cerebellum: Brain Research Bulletin Vol 8(2) Feb 1982, 163-169.
- Shields, C. G. (1988). Family therapy interaction in the initial interview: Differences in process parameters using a Markov Chain model: Dissertation Abstracts International.
- Shiffrin, R. M., & Thompson, M. (1988). Moments of transition-additive random variables defined on finite, regenerative random processes: Journal of Mathematical Psychology Vol 32(3) Sep 1988, 313-340.
- Shumway, M., Chouljian, T. L., & Hargreaves, W. A. (1994). Patterns of substance use in schizophrenia: A Markov modeling approach: Journal of Psychiatric Research Vol 28(3) May-Jun 1994, 277-287.
- Siegel, J. E., Weinstein, M. C., & Fineberg, H. V. (1991). Bleach programs for preventing AIDS among IV drug users: Modeling the impact of HIV prevalence: American Journal of Public Health Vol 81(10) Oct 1991, 1273-1279.
- Singh, S., & Dayan, P. (1998). Analytical mean squared error curves for temporal difference learning: Machine Learning Vol 32(1) Jul 1998, 5-40.
- Sinharay, S. (2004). Experiences With Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples: Journal of Educational and Behavioral Statistics Vol 29(4) Win 2004, 461-488.
- Sirigos, J., Fakotakis, N., & Kokkinakis, G. (2002). A hybrid syllable recognition system based on vowel spotting: Speech Communication Vol 38(3-4) Nov 2002, 427-440.
- Smithson, M. (1999). Taking exogenous dynamics seriously in public goods and resource dilemmas. New York, NY: Psychology Press.
- Stadje, W. (1999). Joint Distributions of the Numbers of Visits for Finite-State Markov Chains: Journal of Multivariate Analysis Vol 70(2) Aug 1999, 157-176.
- Stankiewicz, B. J., Legge, G. E., Mansfield, J. S., & Schlicht, E. J. (2006). Lost in Virtual Space: Studies in Human and Ideal Spatial Navigation: Journal of Experimental Psychology: Human Perception and Performance Vol 32(3) Jun 2006, 688-704.
- Stein, W. E., & Rapoport, A. (1978). A discrete time model for detection of randomly presented stimuli: Journal of Mathematical Psychology Vol 17(2) Apr 1978, 110-137.
- Stewman, S. (1975). Two Markov models of open system occupational mobility: Underlying conceptualizations and empirical tests: American Sociological Review Vol 40(3) Jun 1975, 298-321.
- Stewman, S. (1976). Markov models of occupational mobility: Theoretical development and empirical support: I. Careers: Journal of Mathematical Sociology Vol 4(2) 1976, 201-245.
- Stewman, S. (1976). Markov models of occupational mobility: Theoretical development and empirical support: II. Continuously operative job systems: Journal of Mathematical Sociology Vol 4(2) 1976, 247-278.
- Still, A. W. (1976). An evaluation of the use of Markov models to describe the behaviour of rats at a choice point: Animal Behaviour Vol 24(3) Aug 1976, 498-506.
- Stokmans, M. (1992). Analyzing information search patterns to test the use of a two-phased decision strategy: Acta Psychologica Vol 80(1-3) Aug 1992, 213-227.
- Sung, M. (2001). Bayesian analyses of Markov chains: Applications to longitudinal data from psychiatric treatment programs. Dissertation Abstracts International: Section B: The Sciences and Engineering.
- Sweillam, A., & Tardiff, K. (1978). Prediction of psychiatric inpatient utilization: A Markov chain model: Administration in Mental Health Vol 6(2) Win 1978, 161-173.
- Symeonaki, M. A., Stamou, G. B., & Tzafestas, S. G. (2002). Fuzzy non-homogeneous Markov systems: Applied Intelligence Vol 17(2) Sep 2002, 203-214.
- Szita, I., Takacs, B., & Lorincz, A. (2003). ε-MDPs: Learning in varying environments: Journal of Machine Learning Research Vol 3(1) Jan 2003, 145-174.
- Tadic, V. B. (2006). Asymptotic analysis of temporal-difference learning algorithms with constant step-sizes: Machine Learning Vol 63(2) May 2006, 107-133.
- Tellegen, B., & Sorbi, M. (1984). The usefulness of the first-order Markov model in the analysis of individual headache patterns: Gedragstherapie Vol 17(3) Sep 1984, 199-208.
- Testerman, R. L. (1975). Threshold determination by titration: A Markoff chain model: IEEE Transactions on Bio-Medical Engineering Vol 22(1) Jan 1975, 53-57.
- Theios, J., & Brelsford, J., Jr. (1966). Theoretical interpretations of a Markov model for avoidance conditioning: Journal of Mathematical Psychology Vol 3(1) 1966, 140-162.
- Theios, J., Leonard, D. W., & Brelsford, J. W. (1977). Hierarchies of learning models that permit likelihood ratio comparisons: Journal of Experimental Psychology: General Vol 106(3) Sep 1977, 213-225.
- Thomas, A. P., Roger, D., & Bull, P. (1983). A sequential analysis of informal dyadic conversation using Markov chains: British Journal of Social Psychology Vol 22(3) Sep 1983, 177-188.
- Tollenaar, N., & Mooijaart, A. (2003). Type I errors and power of the parametric bootstrap goodness-of-fit test: Full and limited information: British Journal of Mathematical and Statistical Psychology Vol 56(2) Nov 2003, 271-288.
- Tomasini, L. M. (1978). Optimal choice of reward levels in an organization: Theory and Decision Vol 9(2) Apr 1978, 195-198.
- Tracey, T. J. (1985). The N of 1 Markov chain design as a means of studying the stages of psychotherapy: Psychiatry: Journal for the Study of Interpersonal Processes Vol 48(2) May 1985, 196-204.
- Trentin, E., & Cattoni, R. (1999). Learning perception for indoor robot navigation with a hybrid hidden Markov model/recurrent neural networks approach: Connection Science Vol 11(3-4) Dec 1999, 243-265.
- Tsukada, M., Terasawa, M., & Hauske, G. (1983). Temporal pattern discrimination in the cat's retinal cells and Markov system models: IEEE Transactions on Systems, Man, & Cybernetics Vol 13(5) Sep-Oct 1983, 953-964.
- van de Pol, F., & Mannan, H. (2002). Questions of a novice in latent Markov modelling: Methods of Psychological Research Vol 7(2) 2002, 1-18.
- van der Linden, W. J. (2006). A lognormal model for response times on test items: Journal of Educational and Behavioral Statistics Vol 31(2) 2006, 181-204.
- Van der Sanden, A. L. (1978). Latent Markov chain analysis of a value conflict in Prisoner's Dilemma games: British Journal of Mathematical and Statistical Psychology Vol 31(2) Nov 1978, 126-143.
- Vermunt, J. K., Langeheine, R., & Bockenholt, U. (1999). Discrete-time discrete-state latent Markov models with time-constant and time-varying covariates: Journal of Educational and Behavioral Statistics Vol 24(2) Sum 1999, 179-207.
- Visser, I., Raijmakers, M. E. J., & Molenaar, P. C. M. (2000). Confidence intervals for hidden Markov model parameters: British Journal of Mathematical and Statistical Psychology Vol 53(2) Nov 2000, 317-327.
- Visser, I., Raijmakers, M. E. J., & Molenaar, P. C. M. (2007). Characterizing sequence knowledge using online measures and hidden Markov models: Memory & Cognition Vol 35(6) Sep 2007, 1502-1517.
- Visser, I., Schmittmann, V. D., & Raijmakers, M. E. J. (2007). Markov Process Models for Discrimination Learning. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
- Vivarelli, F., & Williams, C. K. I. (2001). Comparing Bayesian neural network algorithms for classifying segmented outdoor images: Neural Networks Vol 14(4-5) May 2001, 427-437.
- Waksman, J. A. (1981). Non-stationary contingency table Markov chains: Dissertation Abstracts International.
- Wandell, B. A., Greeno, J. G., & Egan, D. E. (1974). Equivalence classes of functions of finite Markov chains: Journal of Mathematical Psychology Vol 11(4) Nov 1974, 391-403.
- Ward, L. M., Livingston, J. W., & Li, J. (1988). On probabilistic categorization: The Markovian observer: Perception & Psychophysics Vol 43(2) Feb 1988, 125-136.
- Wedel, M., Pieters, R., & Liechty, J. (2003). Evidence for Covert Attention Switching from Eye-Movements. Reply to Commentaries on Liechty et al., 2003: Psychometrika Vol 68(4) Dec 2003, 557-562.
- Wegenkittl, S. (2002). A generalized φ-divergence for asymptotically multivariate normal models: Journal of Multivariate Analysis Vol 83(2) Nov 2002, 288-302.
- Wennekers, T., & Ay, N. (2006). A temporal learning rule in recurrent systems supports high spatio-temporal stochastic interactions: Neurocomputing: An International Journal Vol 69(10-12) Jun 2006, 1199-1202.
- Werts, C. E., Pike, L. W., Rock, D., & Grandy, J. (1981). Applications of quasi-Markov simplex models across populations: Educational and Psychological Measurement Vol 41(2) Sum 1981, 295-307.
- Wheeler, S., Bean, N., Gaffney, J., & Taylor, P. (2006). A Markov analysis of social learning and adaptation: Journal of Evolutionary Economics Vol 16(3) Aug 2006, 299-319.
- Winter, C. S., Taylor, R. G., & Williams, T. C. (1996). A strategy for deciding when interventions should occur in computer assisted instruction: Journal of Instructional Psychology Vol 23(3) Sep 1996, 245-248.
- Woodbury, M. A., Manton, K. G., & Siegler, I. C. (1982). Markov Network Analysis: Suggestions for innovations in covariance structure analysis: Experimental Aging Research Vol 8(3-4) Fal-Win 1982, 135-140.
- Wozniak, M. (2006). Proposition of common classifier construction for pattern recognition with context task: Knowledge-Based Systems Vol 19(8) Dec 2006, 617-624.
- Wright, A. A. (1990). Markov choice processes in simultaneous matching-to-sample at different levels of discriminability: Animal Learning & Behavior Vol 18(3) Aug 1990, 277-286.
- Wu, C.-H., Yan, G.-L., & Lin, C.-L. (2002). Speech act modeling in a spoken dialog system using a fuzzy fragment-class Markov model: Speech Communication Vol 38(1-2) Sep 2002, 183-199.
- Xiang, Y., Lee, J., & Ben-David, S. (2006). Learning decomposable Markov networks in pseudo-independent domains with local evaluation: Machine Learning Vol 65(1) Oct 2006, 199-277.
- Yamada, J. (1991). The discrimination learning of the liquids /r/ and /l/ by Japanese speakers: Journal of Psycholinguistic Research Vol 20(1) Jan 1991, 31-46.
- Yang, J., Xu, Y., & Chen, C. S. (1997). Human action learning via hidden Markov model: IEEE Transactions on Systems, Man, & Cybernetics Vol 27(1) Jan 1997, 34-44.
- Yang, R., & Chen, M.-H. (1995). Bayesian Analysis for Random Coefficient Regression Models Using Noninformative Priors: Journal of Multivariate Analysis Vol 55(2) Nov 1995, 283-311.
- Yang, W.-J., & Wang, H.-C. (1990). Finite register length effects in a Hidden Markov Model speech recognizer: Speech Communication Vol 9(3) Jun 1990, 239-245.
- Yao, K., Paliwal, K. K., & Lee, T.-W. (2005). Generative factor analyzed HMM for automatic speech recognition: Speech Communication Vol 45(4) Apr 2005, 435-454.
- Yao, L., & Boughton, K. A. (2007). A Multidimensional Item Response Modeling Approach for Improving Subscale Proficiency Estimation and Classification: Applied Psychological Measurement Vol 31(2) Mar 2007, 83-105.
- Yao, L., & Schwarz, R. D. (2006). A Multidimensional Partial Credit Model With Associated Item and Test Statistics: An Application to Mixed-Format Tests: Applied Psychological Measurement Vol 30(6) Nov 2006, 469-492.
- Yarnold, P. R., & Soltysik, R. C. (2005). Sequential analyses. Washington, DC: American Psychological Association.
- Yellott, J. I., Jr., & Lofgren, C. R. P. (1984). Teaching Learning Theory: PsycCRITIQUES Vol 29(6) Jun 1984.
- Yi, N., Xu, S., George, V., & Allison, D. B. (2004). Mapping multiple quantitative trait loci for ordinal traits: Behavior Genetics Vol 34(1) Jan 2004, 3-15.
- Yin, G., Zhang, Q., & Badowski, G. (2000). Singularly Perturbed Markov Chains: Convergence and Aggregation: Journal of Multivariate Analysis Vol 72(2) Feb 2000, 208-229.
- Yun, Y.-S., & Oh, Y.-H. (2002). A segmental-feature HMM for continuous speech recognition based on a parametric trajectory model: Speech Communication Vol 38(1-2) Sep 2002, 115-130.
- Zabrodin, I. Y., Petrov, E. S., & Vartanyan, G. A. (1983). Analysis of unrestrained behaviour of animals on the basis of its probabilistic characteristics: Zhurnal Vysshei Nervnoi Deyatel'nosti Vol 33(1) 1983, 71-78.
- Zahn, G. L., & Wolf, G. (1981). Leadership and the art of cycle maintenance: A simulation model of superior-subordinate interaction: Organizational Behavior & Human Performance Vol 28(1) Aug 1981, 26-49.
- Zapan, G. (1971). Application of the theory of chains with complete links to the learning process and, in general, to evolutive systems with preferential events: Revue Roumaine des Sciences Sociales - Serie de Psychologie Vol 15(2) 1971, 149-164.
- Zaric, G. S. (2003). The Impact of Ignoring Population Heterogeneity When Markov Models Are Used in Cost-Effectiveness Analysis: Medical Decision Making Vol 23(5) Sep-Oct 2003, 379-396.
- Zhu, S. C., & Wu, Y. N. (1999). From local features to global perception--A perspective of Gestalt psychology from Markov random field theory: Neurocomputing: An International Journal Vol 26-27 Jun 1999, 939-945.
- Zufryden, F. S. (1986). Multibrand transition probabilities as a function of explanatory variables: Estimation by a least-squares-based approach: Journal of Marketing Research Vol 23(2) May 1986, 177-183.
External links[]
- Markov Chains chapter (PDF) in the American Mathematical Society's introductory probability book
- Generates random parodies in the style of another body of text using a Markov chain algorithm
- A generator that uses Markov Chains to create random words
- Markov Chains
- Class structure on PlanetMath
- Chapter 5: Markov Chain Models
- Generating Text (About generating random text using a Markov chain; a minimal code sketch follows this list.)
- The World's Largest Matrix Computation (Google's PageRank as the stationary distribution of a random walk through the Web.)
- Dissociated Press in Emacs approximates a Markov process
- Markov Chain Example
- Markov Chains for Search Engines
- Steganography proof-of-concept using Markov Chains.
- nth order Markov Chain implementation in Ruby
- Baseball Run Modeler using Markov Chains
- Theory of Markov chains in baseball
- Sequential analysis software for generating visual representations of probability models
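Several of the links above describe generating random text or words with a Markov chain. As a rough illustration only (this is not the code behind any of the linked tools, and the names build_chain and generate are invented for this sketch), the following Python fragment builds a word-level chain of a chosen order from sample text and then performs a random walk over it, sampling each next word from the words observed to follow the current state:

import random
from collections import defaultdict

def build_chain(words, order=2):
    # Map each `order`-word state to the list of words observed to follow it.
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    # Random walk: sample the next word from the followers of the current state.
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(out)):
        followers = chain.get(state)
        if not followers:  # dead end reached; restart from a random state
            state = rng.choice(list(chain))
            followers = chain[state]
        nxt = rng.choice(followers)
        out.append(nxt)
        state = state[1:] + (nxt,)  # shift the state window forward by one word
    return " ".join(out)

words = "the cat sat on the mat and the dog sat on the log and the cat saw the dog".split()
print(generate(build_chain(words, order=2), length=15, seed=1))

The tuple-valued state is the usual device for reducing an order-m chain to an ordinary Markov chain; raising the order makes the output track the source text more closely, at the cost of variety.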
This page uses Creative Commons Licensed content from Wikipedia (view authors).