In a supply chain, the quantity ordered from a downstream supplier or manufacturer is not always completely fulfilled, because of various factors. If the extent of under-fulfillment could be predicted over a time horizon, the shortfall items could be ordered from another, fallback supplier. In this post we will go through a prediction model over a time horizon based on a *Continuous Time Markov Chain* and how it can be used to solve this supply chain problem.

The Spark implementation for CTMC is available in my open source project *avenir*. It involves two Spark jobs: the first calculates the state transition rate, and the second calculates various summary statistics using the output of the first.

## Markov Chain and Continuous Time Markov Chain

We use a *Markov Chain* (MC) to model systems with many states. The basic building block of a *Markov Chain* is the state transition probability matrix. An element *m(i,j)* of the matrix defines the conditional probability of transitioning to state j given the current state i. In a first order Markov Chain, the transition probability depends only on the current state.
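As a quick illustration, here is a minimal Python sketch of how a first order chain propagates a state distribution one step at a time. The 3-state transition probability matrix is purely hypothetical:

```python
# Hypothetical 3-state transition probability matrix; each row sums to 1
# and m[i][j] is the conditional probability of moving to state j from state i.
m = [
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
]

def step(dist):
    """One step of the chain: d'[j] = sum over i of d[i] * m[i][j]."""
    n = len(dist)
    return [sum(dist[i] * m[i][j] for i in range(n)) for j in range(n)]

# Starting with certainty in state 0, the distribution after two steps.
d = [1.0, 0.0, 0.0]
for _ in range(2):
    d = step(d)
```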

Some of the problems that can be solved with a Markov Chain model using sequence data are as follows.

- Probability of a sequence of states
- Classification using sequence training data with labels

In a *Continuous Time Markov Chain* (CTMC), we associate a temporal dimension with a *Markov Chain*. Instead of reasoning with state transition probabilities as in MC, we reason with state transition rates. An element *cm(i,j)* of the state transition rate matrix is the rate of transition to state j, given the current state i.

While the system is in some state, the transition to each of the other states is a process with an exponentially distributed waiting time. For a given elapsed time in the current state, there is a transition probability to each of the other states.
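This view leads to a simple simulation scheme: while in state i, sample an exponential holding time with rate *-cm(i,i)*, then jump to state j with probability *cm(i,j) / -cm(i,i)*. Here is a small Python sketch with a made-up 3-state rate matrix (the rates are purely illustrative):

```python
import random

# Hypothetical 3-state rate matrix (rows sum to zero); off-diagonal
# Q[i][j] is the transition rate from state i to state j, in 1/week.
Q = [
    [-0.4,  0.3,  0.1],
    [ 0.6, -0.8,  0.2],
    [ 0.9,  0.1, -1.0],
]

def simulate(q, start, horizon, rng=random.Random(42)):
    """Simulate one CTMC trajectory up to `horizon`.
    Returns a list of (state, entry_time) pairs."""
    t, s, path = 0.0, start, [(start, 0.0)]
    while True:
        exit_rate = -q[s][s]
        t += rng.expovariate(exit_rate)   # exponential holding time
        if t >= horizon:
            return path
        # jump to state j with probability q[s][j] / exit_rate
        candidates = [(j, r) for j, r in enumerate(q[s]) if j != s]
        u, acc = rng.random() * exit_rate, 0.0
        s = candidates[-1][0]             # fallback guards rounding error
        for j, r in candidates:
            acc += r
            if u < acc:
                s = j
                break
        path.append((s, t))

path = simulate(Q, start=0, horizon=4.0)  # one 4-week trajectory
```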

Here are some of the prediction problems that can be solved for a specified future time horizon.

- Expected amount of time spent in a given state
- Given an initial state, the probability of a future state at the end of the time horizon
- Expected number of state transitions between two given states

*Continuous Time Markov Chain* has been successfully used to solve many prediction problems in healthcare and finance.

## Order Fulfillment

We will consider order fulfillment in a supply chain as a Markov process with 3 states, defined below according to the magnitude of under-fulfillment.

- Completely fulfilled (**F**)
- Fulfillment percentage above 60, but below 100 (**P**)
- Fulfillment percentage below 60 (**L**)

We will assume that the distributor orders once a week from a manufacturer. The manufacturer, for various reasons, may not always be able to fulfill orders 100%. Here are some of the potential reasons.

- Machines may be down on the manufacturing floor
- Issues in the raw material supply chain

We are going to do forecasting for a 4 week time horizon. For the various items that the distributor orders from the manufacturer, the distributor is interested in finding the expected number of days the fulfillment process is below 60% fulfillment, i.e. in state L.

Based on the prediction of the shortfall, the distributor may place additional orders from a fallback manufacturer to mitigate the risk of running low on inventory.

## State Transition Rate

The Spark job *StateTransitionRate* calculates the state transition rate matrices. Each is a 3 x 3 matrix, since there are 3 states. We are doing the computation for 3 products, so there will be 3 such matrices. Here is some sample input. Each record contains *product ID*, *time stamp* and *fulfillment state*.

```
DS86LWL2N563,1413417600000,F
FYB54YDCP418,1413417600000,P
Q05SQLVC40X9,1413417600000,L
DS86LWL2N563,1414022400009,F
FYB54YDCP418,1414022400009,F
Q05SQLVC40X9,1414022400009,F
```

We have used the order data for past 2 years along with fulfillment status as training data to build the state transition rate matrix. The fulfillment states are as defined earlier.
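Under the assumptions stated later (all transitions observed, time stamps at actual transition times), the estimate is the standard maximum likelihood one: the rate from state i to state j is the number of observed i-to-j transitions divided by the total time spent in state i, with the diagonal set to the negative row sum. The sketch below is a simplified single-machine illustration of that calculation, not the actual avenir Spark code:

```python
from collections import defaultdict

def transition_rates(observations, states):
    """Maximum likelihood rate estimate from exactly observed transitions:
    q[i][j] = (number of i -> j transitions) / (total time spent in i),
    with the diagonal set to the negative row sum."""
    time_in = defaultdict(float)   # total time spent in each state
    counts = defaultdict(int)      # transition counts keyed by (from, to)
    for (s1, t1), (s2, t2) in zip(observations, observations[1:]):
        time_in[s1] += t2 - t1
        if s2 != s1:
            counts[(s1, s2)] += 1
    q = [[0.0] * len(states) for _ in states]
    for i, si in enumerate(states):
        for j, sj in enumerate(states):
            if i != j and time_in[si] > 0:
                q[i][j] = counts[(si, sj)] / time_in[si]
        q[i][i] = -sum(q[i])
    return q

# Weekly observations (state, time stamp in weeks) for one product.
obs = [("F", 0), ("F", 1), ("P", 2), ("L", 3), ("F", 4)]
Q = transition_rates(obs, ["F", "P", "L"])
```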

Here is some partial output. The first field in each line is the product ID. The remaining 9 fields are the elements of the 3×3 state transition rate matrix.

```
(Q05SQLVC40X9,-0.293333333,0.253333333,0.040000000,0.719999999,-0.759999999,0.040000000,0.999999994,0.000000000,-0.999999994)
(DS86LWL2N563,-0.416666666,0.150000000,0.266666666,0.590909091,-0.772727273,0.181818182,0.521739129,0.347826086,-0.869565215)
```

Spark used 2 partitions for this job; this is the output for one of them. For any row of the matrix, the diagonal element is the negative of the sum of the off-diagonal values. The unit of time for the state transition rate is a week.

The implementation for the state transition rate is simple, but it is only valid if the following assumptions hold.

- The training data captures all the state transitions in the underlying process.
- The time stamps in the training data correspond to the actual state transition times.

If these assumptions are not valid, a more complex solution, as outlined later, will be required.

## Summary Statistics from State Transition Rate

The second Spark job *ContTimeStateTransitionStats* takes the output of the first job and calculates various statistics based on the state transition rate. The following statistics for a given future time window are supported.

- Expected time spent in a given state
- Expected number of transitions between two given states
- Probability of a given state at the end of the time horizon

For our use case, we will calculate the first statistic, i.e. the expected time spent in a state. I have run this for state L, which is the most severe case, with the lowest level of fulfillment. The input to this Spark job is the following file, containing one line for each product.

```
DS86LWL2N563,L
FYB54YDCP418,L
Q05SQLVC40X9,L
```

Essentially, for each product we are interested in the expected time spent in state L. The output file paths from the first job are provided through a configuration parameter. The files collectively contain all the state transition rate matrices.

Here is some sample output for 2 products. The first field is the product ID and the second field is the time spent, in weeks, in fulfillment state L.

```
(DS86LWL2N563,0.8946329973840808)
(FYB54YDCP418,0.5986323190420122)
```

For example, the product *DS86LWL2N563* is expected to be in fulfillment state L for about 6 days out of a 28 day time horizon.

This kind of forecasting helps the planner provision additional inventory from a different manufacturer. Although products from the first manufacturer may be preferred and more in demand, it's better to have some inventory instead of a shortage.

There are various algorithms for calculating summary statistics from the state transition rate. I have used a technique called *uniformization*. I will discuss these techniques in a later section.

## State Observation Data

For the state training data used by the *StateTransitionRate* Spark job, the assumptions listed earlier must hold.

For our case this is not an issue, and the assumptions hold. Orders are placed exactly once a week, and we immediately get a response from the manufacturer indicating the actual quantity that will be fulfilled for the order. So we know the state for the next week, and we know the precise state transition time.

Let's consider disease progression from the healthcare domain as another use case for CTMC, where we run into more complications. Periodically the patient has a doctor visit, accompanied by some tests. Based on the doctor's assessment and the test results, the patient may be diagnosed to be in a different state of disease progression. However, the actual disease state transition may have happened between doctor visits.

For this second, more complicated scenario, our implementation of *StateTransitionRate* won't be valid. A solution based on maximum likelihood techniques needs to be implemented; two approaches are possible, one based on *Expectation Maximization* (EM) and the other based on *Markov Chain Monte Carlo* (MCMC).

## Algorithms for Summary Statistics

Most summary statistics calculations require computing the state transition probability matrix for some time horizon. It is obtained from a matrix exponential, as below.

*P(t) = e^{Qt}*

where

*P(t)* = state transition probability matrix at time t

*Q* = state transition rate matrix

*t* = time
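One straightforward (if not the most robust) way to evaluate this matrix exponential is scaling and squaring with a truncated Taylor series. The sketch below is a plain Python illustration on a small hypothetical 2-state rate matrix, not production code:

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(q, t, terms=30, squarings=10):
    """P(t) = e^{Qt} via scaling and squaring: evaluate the Taylor series
    for e^{Qt / 2^s}, then square the result s times."""
    n = len(q)
    h = t / (2 ** squarings)
    a = [[q[i][j] * h for j in range(n)] for i in range(n)]   # Q * t / 2^s
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, a)                      # term_k = (Qh)^k / k!
        term = [[x / k for x in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    for _ in range(squarings):
        result = mat_mul(result, result)
    return result

# Hypothetical 2-state rate matrix, unit of time = week.
Q = [[-0.5, 0.5], [1.0, -1.0]]
P = mat_exp(Q, 2.0)   # transition probabilities over a 2 week horizon
```

For a 2-state chain this can be checked against the closed form *P(t)_{00} = (μ + λe^{-(λ+μ)t}) / (λ+μ)*.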

It is also necessary to compute the following integral, which is difficult to evaluate directly.

*I^{ab}_{cd}(t) = ∫_{0}^{t} p_{ac}(u) p_{db}(t-u) du*

where

*a,b* = start and end states

*c,d* = intermediate states

*p* = state transition probability

The direct computation of the exponential and the integral is difficult. The following 3 techniques are available to cope with the problem.

- **Eigenvalue decomposition**: When the rate matrix is diagonalizable, eigenvalue decomposition can be used
- **Uniformization**: The exponential and the integral are expressed as Poisson-weighted sums
- **Exponentiation**: Uses an auxiliary matrix

I have used the *Uniformization* technique. It’s relatively easy to implement and provides good accuracy.
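To give a feel for how uniformization applies to the expected-occupancy statistic, here is a pure Python sketch; it is an illustration of the technique, not the avenir implementation. The rate matrix Q is normalized by a rate Λ at least as large as the largest exit rate, giving a stochastic matrix M = I + Q/Λ, and the integral of P(u) over the horizon becomes a Poisson-weighted sum of powers of M:

```python
import math

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expected_time(q, horizon, tol=1e-10):
    """Expected time spent in each state over [0, horizon], for each
    starting state: integral of P(u) du computed by uniformization as
    sum over k of M^k * (1/Lambda) * (1 - PoissonCDF(k; Lambda*horizon))."""
    n = len(q)
    lam = max(-q[i][i] for i in range(n))            # uniformization rate
    m = [[(1.0 if i == j else 0.0) + q[i][j] / lam for j in range(n)]
         for i in range(n)]                          # stochastic matrix M
    mk = [[float(i == j) for j in range(n)] for i in range(n)]  # M^0
    pmf = math.exp(-lam * horizon)                   # Poisson pmf at k = 0
    cdf = pmf
    coef = (1.0 - cdf) / lam
    e = [[coef * mk[i][j] for j in range(n)] for i in range(n)]
    k = 0
    while coef > tol:
        k += 1
        pmf *= lam * horizon / k
        cdf += pmf
        coef = (1.0 - cdf) / lam
        mk = mat_mul(mk, m)
        e = [[e[i][j] + coef * mk[i][j] for j in range(n)] for i in range(n)]
    return e

# Rate matrix for product DS86LWL2N563, taken from the first job's output.
Q = [
    [-0.416666666,  0.150000000,  0.266666666],
    [ 0.590909091, -0.772727273,  0.181818182],
    [ 0.521739129,  0.347826086, -0.869565215],
]
occupancy = expected_time(Q, 4.0)   # 4 week horizon
```

Here `occupancy[i][j]` is the expected number of weeks spent in state j over the 4 week horizon, starting in state i; each row must sum to the horizon length.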

## Use Cases from Other Domains

I already cited an example of forecasting disease progression from the medical domain, where CTMC can be effective. However, as I alluded to earlier, because it's not possible to capture the exact state transition times for this problem, a maximum likelihood technique will be required to calculate the state transition rate.

CTMC is also an effective tool in the manufacturing domain for predicting machine down time over a future time horizon. A machine has two operational states: it's either up or down. The transition times between the two states are known and can be recorded precisely. In addition to predicting down time, predicting the number of transitions between the up and down states may also be valuable.
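For this two-state up/down case the CTMC quantities have simple closed forms, which makes a nice sanity check. Assuming a failure rate λ (up to down) and a repair rate μ (down to up), with hypothetical rates, a short sketch:

```python
import math

def expected_downtime(fail_rate, repair_rate, horizon):
    """Closed-form expected downtime over [0, horizon] for a two-state
    up/down CTMC, starting in the up state: integral of P(down at t) dt."""
    s = fail_rate + repair_rate
    p_down_inf = fail_rate / s          # steady-state unavailability
    return p_down_inf * horizon - (fail_rate / s ** 2) * (1 - math.exp(-s * horizon))

def expected_failures(fail_rate, repair_rate, horizon):
    """Expected number of up -> down transitions over [0, horizon], starting
    up: the failure rate times the expected time spent in the up state."""
    s = fail_rate + repair_rate
    up_time = (repair_rate / s) * horizon + (fail_rate / s ** 2) * (1 - math.exp(-s * horizon))
    return fail_rate * up_time
```

With, say, 0.1 failures per week and a repair rate of 2 per week, expected downtime plus expected uptime over any horizon must equal the horizon length.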

It can also be a powerful predictive tool in finance. A bank could use it to manage cash and liquidity over some future time horizon.

## Wrapping Up

We have gone through a Spark implementation of Continuous Time Markov Chain and seen how it can be used to forecast supplier fulfillment in a supply chain. Here is a tutorial document with the details of the steps needed to execute the use case.

For commercial support for this solution or other solutions in my github repositories, please talk to ThirdEye Data Science Services. Support is available for *Hadoop* or Spark deployments on the cloud, including installation, configuration and testing.
