Probabilistic planning problems are often modeled as Markov decision
processes (MDPs), which assume that a single action is executed per decision
epoch and that actions take unit time. However, in the real world it is
common to execute several actions in parallel, and the durations of these
actions may differ. We are developing extensions to MDPs that incorporate
these features. In particular, we propose the model of Concurrent MDPs (COMDPs), which allows multiple unit-duration actions to be executed simultaneously at each decision epoch. We further extend this model to handle concurrent durative actions with deterministic as well as stochastic durations.
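To make the basic idea concrete, here is a minimal sketch of a value-iteration-style backup that ranges over sets of concurrently executable actions rather than single actions. It is an illustration only, not the released solver's code: the helpers `non_mutex`, `transition`, and `cost` are hypothetical callables supplied by the caller, and the combined-action transition model is assumed to be given.

```python
import itertools
from typing import Callable, Dict, FrozenSet, Hashable, Iterable

State = Hashable
Action = Hashable

def action_combinations(actions: Iterable[Action],
                        non_mutex: Callable[[Action, Action], bool]
                        ) -> Iterable[FrozenSet[Action]]:
    """Yield every non-empty subset of actions whose members are pairwise compatible."""
    actions = list(actions)
    for r in range(1, len(actions) + 1):
        for combo in itertools.combinations(actions, r):
            if all(non_mutex(a, b) for a, b in itertools.combinations(combo, 2)):
                yield frozenset(combo)

def bellman_backup(s: State,
                   applicable: Iterable[Action],
                   non_mutex: Callable[[Action, Action], bool],
                   transition: Callable[[State, FrozenSet[Action]], Dict[State, float]],
                   cost: Callable[[State, FrozenSet[Action]], float],
                   value: Dict[State, float]) -> float:
    """One cost-minimizing backup over sets of concurrently executable actions.

    transition(s, combo) returns a distribution {s': probability} for executing
    the whole action set, and cost(s, combo) its one-step cost (both assumed).
    """
    best = float("inf")
    for combo in action_combinations(applicable, non_mutex):
        q = cost(s, combo) + sum(p * value.get(s2, 0.0)
                                 for s2, p in transition(s, combo).items())
        best = min(best, q)
    return best
```

Even this naive enumeration makes the central computational issue visible: the number of candidate action sets grows exponentially with the number of applicable actions, so a practical solver must avoid enumerating them exhaustively.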
We have released the code for the COMDP solver described in our AAAI'04 paper; you can download it here.