Learning from Conflict in Multi-Agent, Classical, and Temporal Planning


Conflict-directed learning has been one of the most significant advances in combinatorial optimisation in recent decades, but until very recently it had not been directly applied to planning problems. Iterated compilations to conflict-directed solving technologies have seen significant success in planning, but these compilations do not learn from conflicts encountered during earlier iterations, seriously limiting their scalability and applicability to cost-optimal planning. This thesis introduces a number of novel algorithms that exploit information derived from conflicts across several important variants of the planning problem. These approaches use combinations of conflict-directed encodings, relaxation-refinement procedures, and state-based search. Our results show that conflict-directed reasoning is a highly effective approach to cost-optimal planning when appropriate decompositions and explanations are known. Automatically deriving the most effective of these decompositions and explanations from unannotated problem descriptions will be a fruitful avenue for further work. We also highlight the unique potential for learning strongly generalisable knowledge from conflict within plan-space planning algorithms. This learned knowledge enables our plan-space search algorithms to compete with state-space search on some standard domains, and to outperform it by orders of magnitude in appropriately structured industrial ones.

The University of Melbourne