This is the first part of a two-part series of blog posts on how to solve and/or optimise complex decision making, illustrated with a simple hypothetical problem.
Decision makers are faced with complex policy decisions where they are expected to maximise immediate utility for stakeholders, but not at the expense of future stakeholders’ interests. The current action, determined through policy, thus contributes both to current utility and to future utility through its effect on the future state of the system. Stochastic dynamic programming (SDP), developed in applied mathematics, is a relevant tool for determining an optimal sequence of decisions over time. The principle of SDP relies on partitioning a complex problem into simpler subproblems across several steps that, once solved, are combined to give an overall solution.
Markov Decision Processes (MDPs) are made of two components: a Markov chain that models the uncertain future states of the system given an initial state, and a decision model. First, an MDP is a Markov chain in which the system undergoes successive transitions from one state to another through time, as in our previous post on discrete-time Markov chains. Second, an MDP involves a decision-making process in which an action is implemented at each sequential state transition.
The process can be broken down into six steps:
- The first step defines the optimization objective of the problem.
- The second step is to define the set of states representing the possible configurations of the system at each time step. Let X~t~ be the state variable of the system at time t.
- In the third step, one needs to define the decision variable, A~t~, which is the component of the system dynamics that one can control to meet the objective.
- The fourth step is to build a transition model describing the system dynamics and its behaviour in terms of the effect of a decision on the state variables.
- In the fifth step, one needs to define the utility function U~t~ at time t, also called the immediate reward.
- Sixth, the final step consists of determining the optimal solution to the optimization problem. The optimal solution, also called the optimal strategy or policy, maximizes our chance of achieving the objective over a time horizon (a small sketch of these components in R is given below).
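To make steps two to five concrete, here is a minimal sketch in R for a toy two-state, two-action problem. All names and numbers are illustrative; they are not taken from the pipe example that follows.

```r
# Minimal sketch of the MDP components for a toy problem (illustrative values).

states  <- c("good", "poor")          # step 2: state variable X_t
actions <- c("do_nothing", "repair")  # step 3: decision variable A_t

# Step 4: one transition matrix per action; rows are the current state,
# columns the next state, and each row sums to 1.
transition <- list(
  do_nothing = matrix(c(0.8, 0.2,
                        0.0, 1.0),
                      nrow = 2, byrow = TRUE, dimnames = list(states, states)),
  repair     = matrix(c(0.8, 0.2,
                        0.9, 0.1),
                      nrow = 2, byrow = TRUE, dimnames = list(states, states))
)

# Step 5: immediate utility U_t of taking each action in each state,
# e.g. the value of a working pipe minus the cost of the action.
utility <- matrix(c( 10,   5,
                    -20, -15),
                  nrow = 2, byrow = TRUE, dimnames = list(states, actions))
```

Step six, finding the policy that maximises utility summed over the time horizon, is the subject of part two.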
Objective
This post will elucidate this process using an example in R, to assist with implementing it when deciding on the optimal policy for a hypothetical problem.
The problem
Let’s continue from our previous problem, where we represent a water-works company (Monopoly ltd.) with a set of pipes that can either be in good or OK condition (states a and b), or poor and about to break (states c and d). These poor-condition pipes represent our backlog of repairs, which cost money to fix or replace.
The backlog makes up 0.2814815 of our estate, or almost a third. That is a concern! Repairing and replacing pipes costs different amounts of money depending on what condition the pipes are in. The condition of the pipes at the next year or timestep is determined by multiplying the current state by the transition matrix.
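As a sketch of that one-step calculation (the transition matrix and pipe counts below are illustrative, not the ones from the previous post):

```r
# One deterioration step: next year's condition distribution is the current
# state vector multiplied by the transition matrix (illustrative values only).

states <- c("a", "b", "c", "d")
tm <- matrix(c(0.6, 0.4, 0.0, 0.0,
               0.0, 0.7, 0.3, 0.0,
               0.0, 0.0, 0.8, 0.2,
               0.0, 0.0, 0.0, 1.0),
             nrow = 4, byrow = TRUE, dimnames = list(states, states))

state_now  <- c(a = 400, b = 300, c = 200, d = 100)  # pipe counts by condition
state_next <- as.vector(state_now %*% tm)            # condition next year
names(state_next) <- states

backlog_pc <- sum(state_next[c("c", "d")]) / sum(state_next)  # proportion in backlog
```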
We can determine the condition of our pipes through time as discussed in our previous post. But what if we made a stand against entropy and used some of our maintenance budget to repair pipes?
Say we could afford to replace five pipes of condition d (thus making them condition a) each year. What would this investment look like compared to no investment in repair? How would it affect the backlog?
Although we could use this approach for a range of policies, we cannot iterate through time very easily, nor do we produce a nice dataframe to hold all the data and observe changes through time. The sequence of events is also important: pipes are replaced after they deteriorate to condition d, as we assume we don’t know in advance which ones are going to decay, so repairs take place on New Year’s Eve.
This function (sketched below) works by creating a dataframe called ‘pipe_df’ and then iterating through ‘timesteps’, where the ‘initialstate’ provides the first row. To get the next row, we multiply the current condition of the pipes by the transition matrix and then apply the ‘action’. That comprises one timestep, and we iterate until finished. We also use ‘dplyr’ to ‘mutate’ some of the variables into new backlog-related variables.
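The code chunk from the original post is not reproduced here, but a sketch of a function along these lines is shown below. The names pipe_df, timesteps, initialstate and action follow the description above; the defaults and column names are assumptions.

```r
library(dplyr)

# Sketch: simulate pipe condition through time, applying a repair action
# after each year's deterioration.
simulate_pipes <- function(initialstate, tm, action = c(0, 0, 0, 0), timesteps = 10) {
  pipe_df <- as.data.frame(matrix(NA_real_, nrow = timesteps + 1, ncol = 4,
                                  dimnames = list(NULL, c("a", "b", "c", "d"))))
  pipe_df[1, ] <- initialstate                            # first row: initial state

  for (t in seq_len(timesteps)) {
    deteriorated <- unlist(pipe_df[t, ]) %*% tm           # deterioration first...
    pipe_df[t + 1, ] <- as.vector(deteriorated) + action  # ...then the repair action
  }

  pipe_df %>%
    mutate(timestep   = 0:timesteps,
           total      = a + b + c + d,
           backlog    = c + d,
           backlog_pc = backlog / total)
}
```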
This function can be used to compare different actions. To compare actions of equivalent cost, we need a metric to quantify their effect. We mentioned backlog earlier, so let’s compare two different policies, enacted by repairing different pipes or maintaining them in different ways, and see whether they can keep the backlog below 50% over ten years.
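Using the sketched function with the illustrative initial state and transition matrix from above, policy one (replace five condition-d pipes each year, returning them to condition a) might be run and checked like this:

```r
# Policy one: each year five condition-d pipes are replaced and become condition a.
policy1 <- simulate_pipes(initialstate = c(400, 300, 200, 100),
                          tm        = tm,
                          action    = c(5, 0, 0, -5),  # +5 to a, -5 from d
                          timesteps = 10)

all(policy1$backlog_pc < 0.5)  # does the backlog stay below 50% over ten years?
```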
Policy one is pretty good and meets our success criteria just in time.
Our second policy could be to use a fancy new nanotechnology magic spray-paint on those pipes that are in condition a or b. This reduces the a-to-b and b-to-c deterioration rates, so we need a new transition matrix, tm2.
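A sketch of what tm2 might look like, assuming the spray-paint halves the a-to-b and b-to-c rates of the illustrative tm above (the actual effect of the treatment is an assumption):

```r
# tm2: same structure as tm, but with slower a -> b and b -> c deterioration.
tm2 <- matrix(c(0.8, 0.2,  0.0,  0.0,
                0.0, 0.85, 0.15, 0.0,
                0.0, 0.0,  0.8,  0.2,
                0.0, 0.0,  0.0,  1.0),
              nrow = 4, byrow = TRUE,
              dimnames = list(c("a", "b", "c", "d"), c("a", "b", "c", "d")))
```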
Thus we can see what happens with the new spray paint and no other action. How does this affect the backlog over ten years compared to policy1?
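Continuing the sketch, the spray-paint scenario is simply the new matrix with no repair action, and the two backlog trajectories can then be compared side by side:

```r
# Policy two: spray-paint changes the transition matrix; no pipes are replaced.
policy2 <- simulate_pipes(initialstate = c(400, 300, 200, 100),
                          tm        = tm2,
                          action    = c(0, 0, 0, 0),
                          timesteps = 10)

# Backlog proportion under each policy, year by year.
data.frame(timestep = policy2$timestep,
           policy1  = policy1$backlog_pc,
           policy2  = policy2$backlog_pc)
```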
This policy fails after about three years, with the backlog exceeding the 50% threshold. The first policy should be preferred: if it is equivalent in cost, it fulfils the management objective of keeping the pipe backlog below 50%. It’s fallacious to consider a new technology better just because it’s new (and vice versa for old or traditional methods; consider blood-letting). Instead, we rely on evidence generated by predictions from a suitable model.
Optimal policy
There are many plausible (possible) policies, so how do we know that we have considered the best ones for this problem? Well, we almost certainly haven’t, and we don’t want to manually check the whole policy space, as this represents an array of Babel. We will continue with how to solve this problem using algorithms next time, in part two of this post. Like a knackered sewerage pipe, I need a break!
References
- Marescot, L., Chapron, G., Chadès, I., Fackler, P. L., Duchamp, C., Marboutin, E., & Gimenez, O. (2013). Complex decisions made simple: A primer on stochastic dynamic programming. Methods in Ecology and Evolution, 4(9), 872–884. http://doi.org/10.1111/2041-210X.12082