Written by Lene Harbott
As a neuroscientist in a lab working on automated vehicles, I am often asked by friends and acquaintances questions such as “Why bother? Cars will be driving themselves in x years” (the value of x depending on age and optimism) and “Aren’t your colleagues making your job obsolete?” You may be wondering something similar…
Regardless of your vision of the future of automobiles 5, 10, or 50 years hence, I think we can all agree that for the foreseeable future humans will still be in the loop, even in highly automated vehicles. Whether the human role is to be in control of the car under certain circumstances (e.g. busy urban areas or adverse weather conditions); to actively monitor the driving environment for hazardous situations; to take control when the car requests that he or she intervene; or to be simply a passive passenger, it is important to design automated driving systems that support, rather than distract or confuse, the human in the loop.
Designing safe and user-friendly automated systems requires a detailed understanding not only of people’s behavior under different driving conditions, but also of the brain processes underlying this behavior: what information the human is capable of processing; where they focus their attention; and which aspects of controlling or monitoring an automated vehicle require more or less mental workload. And for this, I would argue, you need a neuroscientist. Or to be more precise, a collaboration between neuroscientists and automated vehicle engineers, which is exactly what I have been fortunate enough to be part of for the past couple of years, working on the Stanford Cars and Brains (CAB) project.
Imaging brain activity inside a vehicle, in real-world driving situations, presents a particular set of challenges: the recording system must be portable, relatively comfortable, and robust to a lot of head motion. Happily, the neuroimaging technique of fNIRS (functional near infra-red spectroscopy) meets these criteria. Even more happily, the CAB project includes fNIRS experts from CIBSR (Center for Interdisciplinary Brain Sciences Research) here at Stanford, under the direction of Prof. Allan Reiss.
fNIRS measures the same physiological response as fMRI (functional magnetic resonance imaging). When there is increased neuronal activity in a particular part of your brain, the metabolic demand in that area increases, and to meet this demand the local blood vessels dilate, delivering more oxygenated blood to that region. Oxygenated and deoxygenated hemoglobin have different absorption spectra for near infra-red (NIR) light, and so if you place a cap with both NIR emitters and detectors on someone’s head, you can record changes in the local concentration of oxygenated blood as an indirect measure of changes in neural activity.
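The step from measured light attenuation to hemoglobin concentration changes is usually done with the modified Beer-Lambert law: attenuation at each wavelength is a weighted sum of the oxy- and deoxy-hemoglobin changes, so measuring at two wavelengths gives a 2×2 system to invert. A minimal sketch of that inversion — note the extinction coefficients, wavelengths, separation, and pathlength factor below are illustrative placeholders, not values from the project:

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) for oxygenated (HbO)
# and deoxygenated (HbR) hemoglobin at two NIR wavelengths.
# These numbers are placeholders, not a published spectral table.
E = np.array([[0.6, 1.5],   # ~760 nm: [eps_HbO, eps_HbR]
              [1.2, 0.8]])  # ~850 nm: [eps_HbO, eps_HbR]

d = 3.0    # assumed source-detector separation (cm)
dpf = 6.0  # assumed differential pathlength factor

def hemoglobin_changes(delta_od):
    """Invert the modified Beer-Lambert law:
        delta_OD(lambda) = (eps_HbO*dHbO + eps_HbR*dHbR) * d * dpf
    delta_od: optical-density changes at the two wavelengths.
    Returns (dHbO, dHbR) concentration changes in mM.
    """
    return np.linalg.solve(E * d * dpf, np.asarray(delta_od, dtype=float))

# Example: a larger attenuation change at the longer wavelength
# resolves into a positive oxygenated-hemoglobin change.
dhbo, dhbr = hemoglobin_changes([0.01, 0.02])
```

In a real fNIRS pipeline this inversion is applied sample-by-sample to each emitter-detector channel, after filtering out heartbeat and motion artifacts.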
We are using NIRS, along with other physiological measures such as eye-tracking, pupillometry, and heart rate, in a systematic series of studies, from basic computer-based motion control tasks, to sophisticated driving simulator studies, and all the way to on-the-road experiments using our test vehicles. This approach allows us to break down the complex task of controlling or monitoring aspects of driving into simpler component parts, under very controlled experimental conditions (during computer-based tasks), and then build these components back up gradually (in the driving simulator), to the point of real-world driving (on-the-road experiments in our test vehicles), so that we can really understand the cognitive processes involved in driving and monitoring automated vehicles.
One of my favorite aspects of this project so far has also been one of the most complicated to set up and run: simultaneous fNIRS and fMRI recordings during driving simulation. As I mentioned previously, fNIRS and fMRI use the same hemodynamic response to measure brain activity indirectly. fMRI is the gold standard in brain imaging, because of its high spatial resolution compared to fNIRS, but fMRI is very far from portable. Recording fMRI data at the same time as fNIRS, while running our driving simulator software within the MRI, allows us to validate the conclusions we draw from the fNIRS data collected in our real-world driving experiments.
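One simple way to quantify that kind of cross-modal validation is to correlate each fNIRS channel with the fMRI (BOLD) time series from the cortex beneath it, after bringing both signals onto a common time base. A sketch of the idea — the function name and sampling rates here are my own assumptions, not the project's actual analysis code:

```python
import numpy as np

def channel_agreement(fnirs, bold, fs_fnirs, fs_bold):
    """Correlate one fNIRS channel with the BOLD time series from the
    brain region under that channel.

    fNIRS is typically sampled much faster than fMRI, so the fNIRS
    signal is linearly interpolated onto the fMRI sample times before
    computing a Pearson correlation. Both inputs are 1-D arrays.
    """
    t_bold = np.arange(len(bold)) / fs_bold
    t_fnirs = np.arange(len(fnirs)) / fs_fnirs
    fnirs_resampled = np.interp(t_bold, t_fnirs, fnirs)
    # Pearson correlation between the two hemodynamic measures
    return np.corrcoef(fnirs_resampled, bold)[0, 1]
```

If both modalities are tracking the same underlying hemodynamic response, channels over the same region should show strong positive correlations; low correlations flag channels (or regions) where the fNIRS signal should be interpreted with caution.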
Obviously, everything used in an MRI suite must be MR-safe; in other words, no ferrous metal, so one early challenge was to design and fabricate driving controllers without any magnetic components. Three very talented undergraduate students achieved this with liberal use of the 3-D printer, producing a small, finger-operated steering wheel, and accelerator and brake toe pedals (any movement of the head or upper torso invalidates fMRI scans). Running an experimental participant for a simultaneous fNIRS and fMRI scan involves a custom NIRS cap, more than 8 m of fiber optic cables, a specially shielded data acquisition unit, and 3 experimenters. It is the most labor-intensive study I have done to date, and our reward is a huge amount of data that informs basic scientific questions, from which neural processes are involved in speed vs. direction control all the way to whether we can predict someone’s driving performance based on their pattern of brain activity.
Of course, the most exciting (and potentially frustrating!) studies are those that we run in our experimental vehicle: a student-designed and -built drive-by-wire car we call X1. These studies involve three further types of sensor for the participant to wear (in addition to the fNIRS cap), and 6 experimenters, not to mention the complex control algorithm that allows X1 to behave as a fully or partially automated vehicle. Logistically, it is a huge effort to run these studies, but the ability to image someone’s brain activity in real time, as they adapt to an unexpected change in vehicle control (to emulate handover of control by an automated vehicle), or as they sit in the driver’s seat of an automated vehicle without needing to control any aspect of the driving, is (I would argue) extremely cool, not to mention informative, and important for the design of future cars that are both safe and enjoyable.
Our next step will be to extend the CAB project by investigating the effect of providing the driver, passenger, or supervisor with real-time feedback on their neural state. We want to investigate how this neurofeedback affects driver performance, mental workload, and the way the user interacts with the system. One particularly intriguing possibility this raises is the future design of automated vehicles that actually train the operator, which would not only improve performance, but also trust: a hugely important commodity in human-car interaction.
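As a toy illustration of what such a closed loop might look like, here is a sketch that maps a hypothetical mental-workload index (imagine it derived from the ongoing fNIRS signal) to a discrete feedback cue for the vehicle interface. The thresholds, index, and cue names are all invented for illustration:

```python
def feedback_cue(workload, low=0.3, high=0.7):
    """Map an estimated mental-workload index in [0, 1] to a feedback
    cue. Thresholds are illustrative placeholders, not calibrated values.
    """
    if workload < low:
        return "underloaded"  # e.g. prompt the supervisor to re-engage
    if workload > high:
        return "overloaded"   # e.g. simplify displays, delay non-urgent alerts
    return "ok"               # no intervention needed
```

The research questions sit in exactly the parts this sketch leaves out: how to estimate the workload index reliably from noisy neural data, and how the cues themselves change the user's behavior and trust over time.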
Full publication here.