Dec 15, 2023
AI aims to imitate human intelligence in building efficient decision-making systems, but are we really training machines the way humans learn and make decisions every day? Studies have shown that humans are inherently more comfortable making judgments on a relative scale, comparing alternatives or choosing from a set, and that this often helps us converge on a good decision faster. As we employ more and more AI tools for everyday tasks, it is becoming necessary to align machine behavior with human-like decision-making. Another critical challenge in training user-friendly systems is their appetite for large amounts of human feedback, which is often costly and hard to obtain. The solution lies in learning to train our machines through human preferences!

Our tutorial addresses the need to educate researchers on different types of preference models by exploring real-world problems and showcasing how training systems through preference feedback can provide cutting-edge solutions. We will equip attendees with a comprehensive understanding of diverse preference models and inference techniques. Another goal of the tutorial is to encourage collaboration among the many communities with significant connections to preference-based learning, including bandits, multiagent games, econometrics, social choice theory, RL, optimization, robotics, and more.

We will consider the tutorial a success if it inspires researchers to pursue novel directions in the general area of preference-based learning, drawing attention from different communities to foster dissemination, cross-fertilization, and discussion at scale. Let's learn to train our machines like humans: Machine Learning meets Human Learning through preference feedback!
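To make the idea concrete, here is a minimal sketch of learning from pairwise preference feedback with the Bradley-Terry model, one classical instance of the preference models the tutorial covers. The abstract does not single out a specific model, so this choice, the function name `fit_bradley_terry`, and the toy data are illustrative assumptions. Each item carries a latent utility, a comparison is won with probability given by the sigmoid of the utility gap, and utilities are fit by gradient ascent on the log-likelihood of the observed comparisons.

```python
# Sketch: Bradley-Terry preference learning.
# Each item i has a latent utility theta[i], and
# P(i preferred over j) = sigmoid(theta[i] - theta[j]).
import numpy as np

def fit_bradley_terry(comparisons, n_items, lr=0.1, epochs=200):
    """comparisons: list of (winner, loser) index pairs."""
    theta = np.zeros(n_items)
    for _ in range(epochs):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            # Probability the observed winner beats the loser
            # under the current utilities.
            p = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p   # push winner's utility up
            grad[l] -= 1.0 - p   # push loser's utility down
        theta += lr * grad / len(comparisons)
    return theta

# Toy relative feedback: item 0 beats 1, item 1 beats 2, etc.
prefs = [(0, 1), (0, 1), (1, 2), (0, 2), (1, 2)]
utilities = fit_bradley_terry(prefs, n_items=3)
print(np.argsort(-utilities))  # recovered ranking, e.g. [0 1 2]
```

The same kernel, recovering a scalar utility from "A beats B" judgments rather than absolute labels, is what connects the communities listed above: it appears in dueling bandits, social choice, and the reward-model training behind RLHF.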