
HAI Fellow Kate Vredenburgh: The Right to an Explanation

Are individuals owed explanations when AI makes decisions that affect their lives? Kate Vredenburgh, HAI and McCoy Family Center for Ethics and Society postdoctoral fellow, discusses what it would mean to implement such a right. If AI developers know they will have to justify their work during implementation, the hope is that this requirement would create incentives for building more morally justifiable algorithms.

How Work Will Change Following the Pandemic

To keep workers safe and continue operations during COVID-19, companies have ramped up remote work, aggressively automated some operations, and explored machine learning (ML). Prof. Erik Brynjolfsson, director of Stanford HAI's Digital Economy Lab, along with scientists from CMU and MIT, has identified the tasks most suitable for ML. While more tasks in lower-wage jobs could be replaced by ML, no occupation is immune.

Stanford scientists discuss what makes big sustainability efforts stick

To understand what it takes for sustainability efforts to deliver lasting benefits, Stanford scientists examined some of the biggest sustainability efforts of the past 25 years to see what makes solutions hold at large scale. They found that strong multi-stakeholder, multi-level coalitions formed around ambitious objectives are needed to create transformation at scale.

Safe and Robust Navigation for Aerial and Ground Autonomous Vehicles recording available

At our May webinar, Grace Gao, assistant professor of aeronautics and astronautics and director of the Stanford Navigation and Autonomous Vehicles Lab, discussed her work on reliable and safe positioning and navigation for autonomous systems. Gao provided examples of model-driven, data-driven, and proof-based approaches for intelligently fusing GPS, LiDAR, vision, and inertial measurements. View the webinar recording here.

Universities and Tech Giants Back National Cloud Computing Project

The National Research Cloud, an initiative proposed by John Etchemendy and Fei-Fei Li, co-directors of the Stanford Institute for Human-Centered Artificial Intelligence, and backed by Stanford, CMU, and Ohio State as well as tech companies including Google, Amazon, Microsoft, and IBM, received bipartisan support in both the House and the Senate.
