
Associate Professor in Artificial Intelligence

The University of Melbourne

Biography

I am an Associate Professor in Artificial Intelligence at the School of Computing and Information Systems, The University of Melbourne. I’m a member of the Agent Lab group and the Digital Agriculture, Food and Wine lab.

My research focuses on new approaches to inference in sequential decision problems, as well as applications to autonomous systems in agriculture.

I completed my PhD at the Artificial Intelligence and Machine Learning Group, Universitat Pompeu Fabra, under the supervision of Prof. Hector Geffner. I was then a research fellow for three years under the supervision of Prof. Peter Stuckey and Prof. Adrian Pearce, working on solving mining scheduling problems through automated planning, constraint programming and operations research techniques.

Interests

  • AI planning
  • Search
  • Learning
  • Verification
  • Constraint Programming
  • Operations Research
  • Intention Recognition
  • Sequential Decision Problems
  • Autonomous Systems

Education

  • Graduate Certificate in University Teaching, 2020

    The University of Melbourne

  • PhD in Artificial Intelligence, 2012

    Universitat Pompeu Fabra

  • MEng in Artificial Intelligence, 2007

    Universitat Pompeu Fabra

  • BSc in Computer Science, 2004

    Universitat Pompeu Fabra

Recent News


[8/24] New ECAI-24 Paper on Count-based Novelty Exploration in Classical Planning

[5/24] New AAMAS-24 Best-Student Paper Award on Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability

[3/24] The University of Melbourne, Teaching Video for Algorithms and Data Structures

[2/24] New AAAI-24 Paper on Generalized Planning for the Abstraction and Reasoning Corpus

[1/24] The Guardian newspaper featured Farmbots, flavour pills and zero-gravity beer: inside the mission to grow food in space, also shared through Farm.bot’s official newsletter

[12/23] TV Channel 10 News featured the collaborative projects on Plants for Space, as well as a new short video released by The University of Melbourne

[12/23] Pursuit media article You can’t explore the solar system on an empty stomach, featuring the integration of new sensors over Farm.bots

[11/23] New ICPM-23 Paper on Data-Driven Goal Recognition in Transhumeral Prostheses Using Process Mining Techniques

[10/23] New AIJ Paper on Fast and accurate data-driven goal recognition using process mining techniques

[09/23] New ECAI-23 Paper on Diverse, Top-k, and Top-Quality Planning Over Simulators

Projects

AI Planning Solvers Online

Planning as a Service (PaaS) is an extendable API to deploy planners online on local or cloud servers

Farm.bot at The University of Melbourne

Farm.bot is an open-source robotic platform to explore problems in AI and Automation (Planning, Vision, Learning) for small scale …

Width Based Planning

Width Based Planning searches for solutions through a general measure of state novelty. It performs well over black-box simulators and …
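The core idea behind width-based search can be sketched in a few lines. The following is an illustrative IW(1)-style search, not the project's implementation; the state representation (sets of ground atoms), `successors` function and `is_goal` test are all assumptions for the example:

```python
from collections import deque

def iw1(initial, successors, is_goal):
    """Illustrative IW(1) sketch: breadth-first search that prunes any
    state containing no atom unseen so far (the arity-1 novelty test).
    States are frozensets of ground atoms; `successors` yields
    (action, next_state) pairs."""
    seen_atoms = set(initial)
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            novel = set(nxt) - seen_atoms
            if novel:  # keep only states that make some new atom true
                seen_atoms |= novel
                queue.append((nxt, path + [action]))
    return None  # pruned everything without reaching a goal
```

On problems of width 1, this blind, novelty-pruned search explores only a linear number of states per atom, which is why it remains competitive with heuristic search on many benchmarks.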

Planimation

Planimation is a framework to visualise sequential solutions of planning problems specified in PDDL

Classical Planners

Classical planners awarded for top performance in several International Planning Competitions, 2008 - 2019

Trapper

Invariants, Traps, Un-reachability Certificates, and Dead-end Detection

AI 4 Education

Software to support AI courses at The University of Melbourne and RMIT University (Melbourne, Australia)

Arcade Learning Environment

Classical Planners playing Atari 2600 games as well as Deep Reinforcement Learning

Linear Temporal Logic, Planning and Synthesis

Classical planners computing infinite loopy plans, and FOND planners synthesizing controllers expressed as policies.

LAPKT

Lightweight Automated Planning ToolKiT (LAPKT) to build, use or extend basic to advanced Automated Planners

Recent Publications


Count-Based Novelty Exploration in Classical Planning - Master’s Thesis

Count-based exploration methods are widely employed to improve the exploratory behavior of learning agents over sequential decision problems. Meanwhile, novelty search has achieved success in Classical Planning by recording the first, but not successive, occurrences of tuples. In order to structure the exploration, however, the number of tuples considered needs to grow exponentially as the search progresses. We propose a new novelty technique, classical count-based novelty, which aims to explore the state space with a constant number of tuples, by leveraging the frequency of each tuple’s appearance in a search tree. We then justify the mechanisms through which lower tuple counts lead the search towards novel tuples. We also introduce algorithmic contributions in the form of a trimmed open list that maintains a constant size by pruning nodes with bad novelty values. These techniques are shown to complement existing novelty heuristics when integrated in a classical solver, achieving competitive results in challenging benchmarks from recent International Planning Competitions. Moreover, adapting our solver as the frontend planner in dual configurations that utilize both memory and time thresholds demonstrates a significant increase in instance coverage, surpassing current state-of-the-art solvers, while also maintaining competitive planning time performance. Finally, we introduce two solvers implementing alternative count-based heuristics and provide promising results for future developments of the ideas presented in this study.
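The counting idea in the abstract can be illustrated with a toy scorer. This is a hypothetical sketch, not the paper's planner: states are modeled as sets of ground atoms, tuples of a fixed arity are counted across the search history, and a lower score flags a more novel state:

```python
from collections import Counter
from itertools import combinations

def tuple_counts_novelty(states, arity=2):
    """Illustrative count-based novelty scoring: each state is a
    frozenset of ground atoms; its score is the count of its
    least-frequently-seen tuple, so states carrying rarely-seen
    tuples (score 0) are the most novel."""
    counts = Counter()
    scores = []
    for state in states:
        tuples = list(combinations(sorted(state), arity))
        # Score before updating: the least-seen tuple drives exploration.
        scores.append(min((counts[t] for t in tuples), default=0))
        counts.update(tuples)
    return scores
```

Unlike first-occurrence novelty, which only distinguishes seen from unseen tuples, the counts give a graded signal over a fixed set of tuples, which is the property the thesis exploits to keep memory constant.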

Adaptive goal recognition using process mining techniques

Goal Recognition (GR) is a research problem that studies ways to infer the goal of an intelligent agent based on its observed behavior and knowledge of the environment in which the agent operates. A common assumption of GR is that the environment is static. However, in many real-world scenarios, for example, recognizing customers’ preferences, it is necessary to recognize the goals of multiple agents or multiple goals of a single agent over an extended period. Therefore, it is reasonable to expect the environment to change throughout a series of goal recognition tasks. This paper presents three process mining-based solutions to the problem of adaptive GR in a changing environment implemented as different control strategies of a system for solving standard GR problems. As a standard GR system that gets controlled, we use the system grounded in process mining techniques, as it can adjust its internal GR mechanisms based on data collected while observing the operating agents. We evaluated our control strategies over synthetic and real-world datasets. The synthetic datasets were generated using the extended version of the Goal Recognition Amidst Changing Environments (GRACE) tool. The datasets account for different types of changes and drifts in the environment. The evaluation results demonstrate a trade-off between the GR performance over time and the effort invested in adaptations of the GR mechanisms of the system, showing that few well-planned adaptations can lead to a consistently high GR performance.

Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability

Goal recognition is a fundamental cognitive process that enables individuals to infer intentions based on available cues. Current goal recognition algorithms often take only observed actions as input, but here we use a Bayesian framework to explore the role of actions, timing, and goal solvability in goal recognition. We analyze human responses to goal-recognition problems in the Sokoban domain, and find that actions are assigned most importance, but that timing and solvability also influence goal recognition in some cases, especially when actions are uninformative. We leverage these findings to develop a goal recognition model that matches human inferences more closely than do existing algorithms. Our work provides new insight into human goal recognition and takes a step towards more human-like AI models.
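In its simplest form, the Bayesian framing above weighs per-goal observation likelihoods by priors. A minimal sketch (illustrative only; the likelihood model itself, which the paper builds from actions, timing and goal solvability, is assumed to be given as plain numbers here):

```python
def goal_posterior(likelihoods, priors):
    """Minimal Bayesian goal recognition sketch: given per-goal
    observation likelihoods P(O|G) and priors P(G), return the
    normalized posterior P(G|O) over candidate goals."""
    unnorm = {g: likelihoods[g] * priors[g] for g in priors}
    z = sum(unnorm.values())  # normalizing constant P(O)
    return {g: p / z for g, p in unnorm.items()}
```

For example, with uniform priors over goals A and B and likelihoods 0.8 and 0.2, the posterior simply mirrors the likelihoods; the paper's contribution lies in what evidence (actions, timing, solvability) feeds those likelihoods.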

Reinforcement learning for decision-making under deep uncertainty

Planning under complex uncertainty often asks for plans that can adapt to changing future conditions. To inform plan development during this process, exploration methods have been used to explore the performance of candidate policies given uncertainties. Nevertheless, these methods hardly enable adaptation by themselves, so extra efforts are required to develop the final adaptive plans, hence compromising the overall decision-making efficiency. This paper introduces Reinforcement Learning (RL) that employs closed-loop control as a new exploration method that enables automated adaptive policy-making for planning under uncertainty. To investigate its performance, we compare RL with a widely-used exploration method, Multi-Objective Evolutionary Algorithm (MOEA), in two hypothetical problems via computational experiments. Our results indicate the complementarity of the two methods. RL makes better use of its exploration history, hence always providing higher efficiency and providing better policy robustness in the presence of parameter uncertainty. MOEA quantifies objective uncertainty in a more intuitive way, hence providing better robustness to objective uncertainty. These findings will help researchers choose appropriate methods in different applications.

Lazy Constraint Generation and Tractable Approximations for Large-scale Planning Problems

In our research, we explore two orthogonal but related methodologies of solving planning instances: planning algorithms based on direct but lazy, incremental heuristic search over transition systems and planning as satisfiability. We address numerous challenges associated with solving large planning instances within practical time and memory constraints. This is particularly relevant when solving real-world problems, which often have numeric domains and resources and, therefore, have a large ground representation of the planning instance. Our first contribution is an approximate novelty search, which introduces two novel methods. The first approximates novelty via sampling and Bloom filters, and the other approximates the best-first search using an adaptive policy that decides whether to forgo the expansion of nodes in the open list. For our second work, we present an encoding of the partial order causal link (POCL) formulation of the temporal planning problems into a CP model that handles the instances with required concurrency, which cannot be solved using sequential planners. Our third significant contribution is on lifted sequential planning with lazy constraint generation, which scales very well on large instances with numeric domains and resources. Lastly, we propose a novel way of using novelty approximation as a polynomial reachability propagator, which we use to train the activity heuristics used by the CP solvers.

Students

Current Students

Ph.D.

  • Giacomo Rosa [2024 - current] co-supervised with Prof. Sebastian Sardina and Dr. Jean Honorio, Topic: Exploration methods for Planning

  • Jiajia Song [2024 - current] co-supervised with Prof. Sebastian Sardina and Dr. William Umboh, Topic: What Makes AI Planning Hard? From Complexity Analysis to Algorithm Design

  • David Adams [2024 - current] co-supervised with Dr. Renata Borovica-Gajic, Topic: Exploration Methods for Databases

  • Qingtan Shen [2023 - current] co-supervised with A/Prof. Artem Polyvyanyy and Dr. Timotheus Kampik, Topic: Multi-agent system discovery

  • Muhammad Bilal [2023 - current] co-supervised with Dr. Wafa Johal and Prof. Denny Oetomo, Topic: Towards Interactive Robot Learning for Complex Sequential Tasks

  • Ciao Lei [2022 - current], co-supervised with Dr. Kris Ehinger and A/Prof. Sigfredo Fuentes, Topic: Generalized vision planning problems and their applications in Agriculture

  • Zhiaho Pei [2022 - current], co-supervised with Dr. Angela Rojas, Dr. Fjalar De Haan and Dr. Enayat A. Moallemi, Topic: Robust decision making for complex systems

  • Sukai Huang [2022 - current], co-supervised with Prof. Trevor Cohn, Topic: NLP and sequential decision problems

  • Lingfei Wang [2021 - current], co-supervised with Dr. Maria Rodriguez. Topic: Scheduling and Learning for High Performance Computing (HPC)

  • Guang Hu [2020 - current], co-supervised with Dr. Tim Miller. Topic: Epistemic Planning and Explainability

Alumni

Ph.D.

Masters

Honours and Awards

Distinguished Program Committee - IJCAI-ECAI 2022

The quality of my reviews was ranked in the top 3% out of 3000+ reviewers.

Winner (PROBE planner) and Runner-up (BFWS planner)

Winner - Agile Track | Runner-up - Satisficing Track (BFWS planners)

Winner - Time Track | Runner-Up - Quality and Coverage tracks (LAPKT planners)

Best Dissertation Award (ICAPS)

Text of Award: Nir Lipovetzky takes a new, and very original, look at automated planning: how to reason your way to a plan, instead of searching (blindly or heuristically) for it. First, he has developed a range of novel inference techniques that, combined, produce classical planners that can work with very little backtracking – in many cases none at all – and perform well enough to be awarded at two IPCs. Second, he has invented a novel measure of the hardness of a planning problem, called “width”, and has shown that by properly exploiting it, a simple blind search can do as well as the best-performing heuristic search planners.

Service

Conference Chair

  • International Conference on Automated Planning and Scheduling, ICAPS (2025)

Program Chair

  • International Conference on Automated Planning and Scheduling, ICAPS (2019)

Organizing Committee

  • International Conference on Automated Planning and Scheduling – Publicity co-chair, ICAPS (2010)

  • First Unsolvability International Planning Competition – Co-Organizer, UIPC-1 (2016)

  • Heuristics and Search for Domain-independent Planning – Co-Organizer, ICAPS workshop HSDIP (2015,2016,2017,2018)

  • Demonstration track – Co-Chair AAAI 2023

  • Student Abstract track – Co-Chair, AAAI (2018,2019)

  • Journal Presentation track – Co-Chair ICAPS (2018)

Senior Program Committee

  • Association for the Advancement of Artificial Intelligence, AAAI (2020,2021,2022,2023)
  • International Joint Conferences on Artificial Intelligence, IJCAI (2021,2023)

Program Committee

  • International Joint Conferences on Artificial Intelligence, IJCAI (2011,2013,2015,2017,2018,2020,2022)

  • Association for the Advancement of Artificial Intelligence, AAAI (2013,2015,2016,2017,2018,2019)

  • European Conference on Artificial Intelligence, ECAI (2014,2016)

  • International Conference on Automated Planning and Scheduling, ICAPS (2015,2016,2017,2018,2020)

  • Symposium on Combinatorial Search, SoCS (2020,2021,2022,2023)

Reviewer

  • Journal of Artificial Intelligence Research, JAIR

  • Artificial Intelligence, Elsevier AIJ

Other

  • ICAPS Awards Committee 2024

Teaching

  • Pacman Capture the Flag Inter-University Contest, run for the Unimelb AI course and Hall of Fame contest, 2016 - current

  • AI Planning for Autonomy (Lecturer), at M.Sc. AI specialization, The University of Melbourne, 2016 - current

  • Data Structures and Algorithms (Lecturer), at The University of Melbourne, 2016 - current

  • Software Agents (Lecturer), at M.Sc. Software, The University of Melbourne, 2013, 2014, 2015

  • Autonomous Systems, at M.Sc. Intelligent Interactive Systems, Universitat Pompeu Fabra, 2012

  • Advanced course on AI: workshop on RoboSoccer simulator, at Polytechnic School, Universitat Pompeu Fabra, 2009, 2010, 2011

  • Artificial Intelligence course, at Polytechnic School, Universitat Pompeu Fabra, 2010, 2011

  • Introduction to Data Structures and Algorithms course, at Polytechnic School, Universitat Pompeu Fabra, 2008

  • Programming course, at Polytechnic School, Universitat Pompeu Fabra, 2008, 2009, 2010, 2011

Contact