Mini-Class

Avoiding Failures in ML-Enabled Systems: Tutorial on Runtime Monitoring and Contingency Planning

This tutorial led by Rohan Sinha, PhD candidate in the Autonomous Systems Lab, will examine runtime monitoring as a paradigm to detect when an autonomous system operates outside its region of competence – its operational design domain (ODD) – and will cover methods to safely transition the system to a minimal risk condition.
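
For a flavor of the idea, here is a minimal, hypothetical sketch of such a monitor: it scores observations with a Mahalanobis distance calibrated on nominal data (just one of many possible out-of-distribution scores, not necessarily the one the tutorial uses) and switches to a fallback maneuver when the score exceeds a threshold.

```python
import numpy as np

class RuntimeMonitor:
    """Flags inputs that fall outside the region covered by calibration data."""

    def __init__(self, calib_features: np.ndarray, quantile: float = 0.99):
        # Fit a simple Gaussian model of in-distribution features.
        self.mean = calib_features.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(calib_features, rowvar=False))
        # Threshold: empirical quantile of calibration distances.
        dists = [self._mahalanobis(x) for x in calib_features]
        self.threshold = np.quantile(dists, quantile)

    def _mahalanobis(self, x: np.ndarray) -> float:
        d = x - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d))

    def in_odd(self, x: np.ndarray) -> bool:
        return self._mahalanobis(x) <= self.threshold


def step(monitor, x, nominal_policy, fallback_policy):
    # Contingency planning in one line: execute the fallback (minimal
    # risk condition) whenever the monitor flags the input as out-of-ODD.
    return nominal_policy(x) if monitor.in_odd(x) else fallback_policy(x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    calib = rng.normal(size=(500, 4))        # stand-in for nominal features
    monitor = RuntimeMonitor(calib)
    nominal = lambda x: "drive"
    fallback = lambda x: "pull over"         # minimal risk condition
    print(step(monitor, rng.normal(size=4), nominal, fallback))        # drive
    print(step(monitor, rng.normal(size=4) + 8.0, nominal, fallback))  # pull over
```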

Transformers for Robotics: Architectural Concepts and Applications

This tutorial led by Edward Schmerling, Robotics Researcher in the Autonomous Systems Lab, aims to answer the following questions at a conceptual level: what are transformers, what problems do they solve, and which applications in robotics, autonomous vehicles, operations research, and numerous other fields beyond their marquee use case of natural language processing may be amenable to their deployment?
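
For intuition, the sketch below implements single-head scaled dot-product self-attention – the core operation of a transformer – in plain NumPy. The shapes and weights are illustrative placeholders, not parameters from any real model.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) embeddings; Wq, Wk, Wv: (d_model, d_head).
    Each output is a data-dependent weighted average of all inputs,
    which is what lets transformers model interactions between arbitrary
    elements of a sequence or set (words, waypoints, agents, ...).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise compatibilities
    return softmax(scores, axis=-1) @ V       # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # e.g., 5 agents, 16-dim features
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```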

Tutorial on Graph Reinforcement Learning: RL, Graph Neural Networks, and Applications to Mobility

This tutorial led by Daniele Gammelli, postdoctoral scholar in the Autonomous Systems Lab, will introduce core concepts in the fields of Deep Reinforcement Learning and Graph Neural Networks from the ground up, towards the ultimate goal of devising learning agents capable of controlling complex network-structured systems through Graph Reinforcement Learning.
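
As a minimal illustration, the sketch below implements one mean-aggregation message-passing layer, the basic building block such a learning agent would stack into a policy network. The four-station mobility network and feature dimensions are hypothetical.

```python
import numpy as np

def gnn_layer(node_feats, adj, W_self, W_neigh):
    """One round of mean-aggregation message passing.

    node_feats: (n, d) per-node features; adj: (n, n) binary adjacency.
    Each node combines its own features with the mean of its neighbors',
    so the same weights apply to networks of any size and topology.
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = (adj @ node_feats) / deg
    return np.maximum(node_feats @ W_self + neighbor_mean @ W_neigh, 0.0)  # ReLU

rng = np.random.default_rng(0)
# Hypothetical 4-station network; features = [idle vehicles, open requests].
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 2))
h = gnn_layer(x, adj, rng.normal(size=(2, 8)), rng.normal(size=(2, 8)))
print(h.shape)  # (4, 8): one embedding per station, ready for a policy head
```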

Going beyond Imitation Learning: Learning from Non-Traditional Sources of Human Data

Prof. Dorsa Sadigh presents a set of techniques to address some of the challenges of learning from non-traditional sources of data such as suboptimal demonstrations, rankings, and play data. Sadigh will first introduce the basics of imitation learning and then discuss her lab's confidence-aware imitation learning approach, which simultaneously estimates a confidence measure over demonstrations and the policy parameters.
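
As a toy stand-in (not the lab's actual algorithm), the sketch below alternates confidence-weighted behavioral cloning of a linear policy with a residual-based confidence update, illustrating the core idea of jointly estimating demonstration confidence and policy parameters. All data and hyperparameters are made up.

```python
import numpy as np

def weighted_bc(states, actions, conf, n_iters=50, lr=0.1):
    """Confidence-weighted behavioral cloning (linear policy, squared loss).

    Demonstrations with higher confidence pull harder on the parameters.
    states: (n, d); actions, conf: (n,).
    """
    theta = np.zeros(states.shape[1])
    for _ in range(n_iters):
        resid = states @ theta - actions
        theta -= lr * states.T @ (conf * resid) / conf.sum()
    return theta

def update_confidence(states, actions, theta, temp=1.0):
    # Re-estimate confidence from how well each demo fits the current
    # policy; alternating with weighted_bc gives an EM-style loop.
    resid = states @ theta - actions
    return np.exp(-resid**2 / temp)

rng = np.random.default_rng(0)
S = rng.normal(size=(200, 3))
A = S @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
A[:50] += 3.0 * rng.normal(size=50)        # first 50 demos are suboptimal
conf = np.ones(200)
for _ in range(5):
    theta = weighted_bc(S, A, conf)
    conf = update_confidence(S, A, theta)
print(theta)                                # close to [1.0, -2.0, 0.5]
```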

Tutorial on Decision Making Under Uncertainty

This tutorial led by Robert Moss, PhD candidate in the Stanford Intelligent Systems Lab, covers how to build and solve sequential decision-making problems in uncertain environments. Splitting the discussion between problem formulation and solution methods, this tutorial focuses on the mathematical framework for optimal sequential decision making – the Markov decision process (MDP) – and will cover online and offline solution methods (such as value iteration, Q-learning, SARSA, and Monte Carlo tree search).
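
For concreteness, here is a minimal NumPy implementation of value iteration, one of the offline methods listed above; the two-state MDP at the end is made up purely for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP by value iteration.

    P: (A, S, S) transition probabilities, R: (S, A) rewards.
    Repeatedly applies the Bellman optimality backup
        V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
    and returns the optimal values and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy 2-state, 2-action MDP (illustrative numbers only).
P = np.array([[[0.9, 0.1],    # action 0: mostly stay put
               [0.1, 0.9]],
              [[0.5, 0.5],    # action 1: coin flip
               [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, pi = value_iteration(P, R)
print("values:", V, "policy:", pi)
```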

Trajectory Planning for Autonomous Vehicles Under Motion and Sensing Uncertainties

In this tutorial, Akshay Shetty, postdoctoral scholar in the Navigation and Autonomous Systems Lab, will begin by looking at traditional tree-based planning algorithms. He will then cover recent extensions that account for motion and sensing uncertainties using tools such as reachability analysis. Finally, he will explore how these tools can be applied to ensure collision safety for neural network-based planners.
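
As a starting point for those tree-based methods, below is a minimal 2D rapidly-exploring random tree (RRT) sketch. The always-true collision checker is a stub; uncertainty-aware extensions of the kind the tutorial covers would replace the straight-segment check with a check over a reachable set around the nominal motion.

```python
import numpy as np

def rrt(start, goal, collision_free, n_iters=3000, step=1.0, goal_tol=1.0, seed=0):
    """Minimal 2D RRT: grow a tree by steering toward random samples."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, dtype=float)
    nodes, parents = [np.asarray(start, dtype=float)], [0]
    for _ in range(n_iters):
        sample = rng.uniform(-10.0, 10.0, size=2)
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)
        if collision_free(nodes[i], new):        # stub: segment-vs-obstacle test
            nodes.append(new)
            parents.append(i)
            if np.linalg.norm(new - goal) < goal_tol:
                path = [len(nodes) - 1]          # walk parent pointers back
                while path[-1] != 0:
                    path.append(parents[path[-1]])
                return [nodes[j] for j in reversed(path)]
    return None

path = rrt([0.0, 0.0], [8.0, 8.0], collision_free=lambda p, q: True)
print("no path" if path is None else f"path with {len(path)} waypoints")
```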

Tutorial on Reachability Analysis for AV and Robotic Applications

This tutorial led by Shreyas Kousik, postdoctoral scholar in the Autonomous Systems Lab, explores reachability analysis, a fundamental tool in the study of safety and performance of uncertain dynamical systems. It starts with an overview of how reachability analysis fits into control and enumerates common reachable set definitions. The overall goal of this tutorial is to introduce reachability as a framework for understanding and implementing safety and performance guarantees for control systems.
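
As a small worked example, the sketch below over-approximates the forward reachable sets of a discrete-time linear system using interval (box) arithmetic. The double-integrator numbers are illustrative; practical tools use richer set representations such as zonotopes, polytopes, or level sets.

```python
import numpy as np

def forward_reachable_boxes(A, B, x_box, u_box, n_steps):
    """Interval over-approximation of reachable sets for x+ = A x + B u.

    x_box, u_box: (lower, upper) bounds on the state and input.
    A linear map is propagated exactly through interval arithmetic by
    splitting matrices into positive and negative parts; the resulting
    boxes still over-approximate the true sets (the wrapping effect).
    """
    lo, hi = (np.asarray(b, dtype=float) for b in x_box)
    u_lo, u_hi = (np.asarray(b, dtype=float) for b in u_box)
    Ap, An = np.clip(A, 0, None), np.clip(A, None, 0)
    Bp, Bn = np.clip(B, 0, None), np.clip(B, None, 0)
    boxes = [(lo, hi)]
    for _ in range(n_steps):
        lo, hi = (Ap @ lo + An @ hi + Bp @ u_lo + Bn @ u_hi,
                  Ap @ hi + An @ lo + Bp @ u_hi + Bn @ u_lo)
        boxes.append((lo, hi))
    return boxes

# Double integrator with bounded acceleration (illustrative numbers).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
boxes = forward_reachable_boxes(A, B, ([0.0, 0.0], [0.1, 0.1]), ([-1.0], [1.0]), 20)
lo, hi = boxes[-1]
print(f"position after 2 s lies in [{lo[0]:.2f}, {hi[0]:.2f}]")
```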

Tutorial on Reinforcement Learning

This tutorial led by Sandeep Chinchali, postdoctoral scholar in the Autonomous Systems Lab, will cover deep reinforcement learning, with an emphasis on deep neural networks as expressive function approximators that scale to problems with large state and action spaces. The second half will describe a case study using deep reinforcement learning for compute model selection in cloud robotics.
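
To make the function-approximation idea concrete, the sketch below runs Q-learning with a linear approximator on a toy chain environment; deep RL replaces the linear features with a neural network but keeps the same temporal-difference update. The environment and all hyperparameters are made up for illustration.

```python
import numpy as np

def q_learning_fa(env_step, featurize, n_actions,
                  episodes=200, alpha=0.05, gamma=0.99, eps=0.1, seed=0):
    """Q-learning with linear function approximation: Q(s, a) ~ w[a] . phi(s)."""
    rng = np.random.default_rng(seed)
    w = np.zeros((n_actions, featurize(0).shape[0]))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            phi = featurize(s)
            q = w @ phi
            if rng.random() < eps:                       # epsilon-greedy
                a = int(rng.integers(n_actions))
            else:                                        # random tie-breaking
                a = int(rng.choice(np.flatnonzero(q == q.max())))
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * np.max(w @ featurize(s2)))
            w[a] += alpha * (target - w[a] @ phi) * phi  # TD update
            s = s2
    return w

# Toy 5-state chain: action 1 moves right, action 0 moves left,
# reward 1 for reaching the right end.
def chain_step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == 4), s2 == 4

one_hot = lambda s: np.eye(5)[s]
w = q_learning_fa(chain_step, one_hot, n_actions=2)
print("greedy actions:", [int(np.argmax(w @ one_hot(s))) for s in range(5)])
```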
