Learning Temporal Plan Preferences from Examples: An Empirical Study
Valentin Seimetz, Rebecca Eifler, Jörg Hoffmann
In: Zhi-Hua Zhou (Ed.): Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), August 19-27, Montreal, pages 4160-4166, ISBN 978-0-9992411-9-6, International Joint Conferences on Artificial Intelligence Organization, 8/2021.
Temporal plan preferences are natural and important in a variety of applications. Yet users often find it difficult to formalize their preferences. Here we explore the possibility to learn preferences from example plans. Focusing on one preference at a time, the user is asked to annotate examples as good/bad. We leverage prior work on LTL formula learning to extract a preference from these examples. We conduct an empirical study of this approach in an oversubscription planning context, using hidden target formulas to emulate the user preferences. We explore four different methods for generating example plans, and evaluate performance as a function of domain and formula size. Overall, we find that reasonable-size target formulas can often be learned effectively.
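To illustrate the idea in the abstract, the following is a minimal sketch (not the paper's implementation) of learning a temporal preference from good/bad annotated plans: plans are modeled as action sequences, candidate preferences are a few toy LTL-style patterns ("eventually a", "never a", "a before b"), and learning is a brute-force search for the first candidate consistent with all annotations. All names and the pattern set are illustrative assumptions.

```python
# Illustrative sketch of learning a temporal plan preference from
# user-annotated examples. Not the paper's method: real LTL formula
# learning searches a much richer formula space; here we enumerate a
# tiny hypothesis space of hand-picked patterns.
from itertools import permutations

def eventually(a):
    # F(a): action a occurs somewhere in the plan.
    return (f"F({a})", lambda plan: a in plan)

def never(a):
    # G(!a): action a never occurs in the plan.
    return (f"G(!{a})", lambda plan: a not in plan)

def before(a, b):
    # a before b: if b occurs, a occurs earlier.
    def check(plan):
        return b not in plan or (a in plan and plan.index(a) < plan.index(b))
    return (f"{a} before {b}", check)

def candidate_formulas(actions):
    # Enumerate the toy hypothesis space in a fixed order.
    for a in actions:
        yield eventually(a)
        yield never(a)
    for a, b in permutations(actions, 2):
        yield before(a, b)

def learn_preference(good, bad, actions):
    """Return the first candidate satisfied by every good plan
    and violated by every bad plan, or None if none separates them."""
    for name, check in candidate_formulas(actions):
        if all(check(p) for p in good) and not any(check(p) for p in bad):
            return name
    return None

if __name__ == "__main__":
    good = [["load", "drive", "unload"]]
    bad = [["drive", "unload", "load"]]
    print(learn_preference(good, bad, ["load", "drive", "unload"]))
```

Note that, as in the paper's setting, several formulas may be consistent with few examples (here, e.g., "load before drive" and "load before unload" both separate the two plans); more annotated examples narrow down the learned preference.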