Jul 22, 2024
Large Language Models (LLMs, or n-gram models on steroids), originally trained to generate text by repeatedly predicting the next word given a window of previous words, have captured the attention of the AI community and the world at large. Part of the reason is their ability to produce meaningful completions for prompts from almost any area of human intellectual endeavor. This sheer versatility has also led to claims that these predictive text-completion systems may be capable of abstract reasoning and planning. In this tutorial we take a critical look at the ability of LLMs to help with planning tasks, whether in autonomous or assistive modes. The tutorial points out the fundamental limitations of LLMs in generating plans (especially those that normally require resolving subgoal interactions with combinatorial search), and also shows constructive "LLM-Modulo" uses of LLMs as complementary technologies alongside sound planners, plan verifiers, simulators, unit testers, etc. In addition to presenting our own work in this area, we provide a critical survey of many related efforts.

Materials: https://yochan-lab.github.io/tutorial/LLMs-Planning/
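The "LLM-Modulo" arrangement mentioned in the abstract, where an LLM acts as a candidate-plan generator and sound external verifiers check each candidate and back-prompt critiques, can be sketched roughly as follows. This is a minimal illustration only; the function names and the critique format are assumptions for the sketch, not code from the tutorial materials:

```python
from typing import Callable, List, Optional

def llm_modulo_plan(
    generate: Callable[[str], List[str]],          # stands in for an LLM proposing a plan
    verify: Callable[[List[str]], Optional[str]],  # sound verifier: None if valid, else a critique
    task: str,
    max_rounds: int = 5,
) -> Optional[List[str]]:
    """Generate-test loop: the LLM proposes, a sound external verifier disposes.

    Returns a verified plan, or None if no candidate passes within the budget.
    """
    prompt = task
    for _ in range(max_rounds):
        plan = generate(prompt)
        critique = verify(plan)
        if critique is None:
            return plan  # accepted by the external checker, not by the LLM itself
        # back-prompt: fold the verifier's critique into the next generation round
        prompt = f"{task}\nPrevious attempt failed: {critique}"
    return None

# Toy usage with a scripted "LLM" (hypothetical domain: achieve stack(A,B)).
attempts = iter([["pickup(A)"], ["pickup(A)", "stack(A,B)"]])
gen = lambda prompt: next(attempts)
ver = lambda plan: None if plan and plan[-1] == "stack(A,B)" else "plan does not achieve stack(A,B)"
print(llm_modulo_plan(gen, ver, "stack A on B"))  # → ['pickup(A)', 'stack(A,B)']
```

The point of the design is that correctness guarantees come entirely from the verifier; the LLM only narrows the combinatorial search by proposing plausible candidates.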