
Constraint Modelling with LLMs using In-Context Learning

Kostis Michailidis from KU Leuven in Belgium explains that Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and in the syntactic use of modelling languages. In this paper, we explore the potential of pre-trained Large Language Models (LLMs) as coding assistants that transform textual problem descriptions into concrete, executable CP specifications. We investigate different transformation pipelines with explicit intermediate representations, as well as the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on two datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and on a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
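To make the retrieval-augmented in-context learning idea concrete, the sketch below illustrates one possible pipeline, not the authors' implementation: the k stored (description, model) example pairs most similar to a new problem description are retrieved (here with simple TF-IDF cosine similarity; the paper explores several selection strategies) and assembled into a few-shot prompt asking an LLM to produce an executable model, e.g. in a Python constraint library such as CPMpy. The `call_llm` function and the example data are hypothetical placeholders.

```python
# Minimal sketch of retrieval-augmented in-context learning for
# text-to-CP modelling (illustrative; assumptions noted above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (description, model_code) pairs, e.g. drawn from NL4Opt
# or CP course exercises.
EXAMPLES = [
    ("Assign 4 workers to 4 tasks, one task each, minimising total cost.",
     "# ... CPMpy model for the assignment problem ..."),
    ("Schedule 3 meetings in non-overlapping time slots.",
     "# ... CPMpy model for the scheduling problem ..."),
]

def retrieve_examples(description: str, k: int = 2):
    """Return the k example pairs whose descriptions are most similar."""
    texts = [desc for desc, _ in EXAMPLES] + [description]
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    best = sims.argsort()[::-1][:k]
    return [EXAMPLES[i] for i in best]

def build_prompt(description: str, k: int = 2) -> str:
    """Few-shot prompt: retrieved (problem, model) pairs, then the query."""
    shots = ""
    for desc, code in retrieve_examples(description, k):
        shots += f"Problem:\n{desc}\nModel:\n{code}\n\n"
    return (shots
            + f"Problem:\n{description}\n"
            + "Model:\n# Write an executable CPMpy model for this problem.\n")

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a pre-trained LLM API."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt("Seat 5 guests at a round table so that no two "
                          "rivals sit next to each other.")
    print(prompt)                      # inspect the few-shot prompt
    # model_code = call_llm(prompt)    # would return candidate CP code
```

The paper also studies pipelines with explicit intermediate representations between the textual description and the final model; the sketch above shows only the direct description-to-model variant for brevity.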
This document is subject to a Creative Commons licence: Attribution – NonCommercial – ShareAlike (by-nc-sa), Creative Commons BY-NC-SA 4.0.