
Study Finds Racial and Gender Discrimination Persists in Peer Transportation Systems

Written by Stephen Zoepf of the Center for Automotive Research at Stanford.

This week Yanbo Ge, Don MacKenzie, Chris Knittel and I released a working paper (http://www.nber.org/papers/w22776) in which we find evidence of racial and gender discrimination by Uber and Lyft drivers. The conclusions we come to are unfortunate but unsurprising in the face of overwhelming evidence of discrimination elsewhere in society. A phrase in the paper captures our key question: “Is a taxi driver’s decision, made in public view, not to stop for an African-American passenger being eliminated? Or is it just being replaced by a TNC driver’s screen swipe, made in private, that has the same effect?”
 
Our findings appear to support the conclusion that discrimination still occurs, with the offenders hiding behind the relative safety of their smartphones. The dramatically higher rate of cancellations for UberX passengers in Boston when using black-sounding names seems particularly damning. However, the density of UberX drivers there is so high that we could not detect any statistically significant difference in average wait times for passengers using Uber and Lyft in that city.
 
Individual prejudice will be present in any non-utopian society, so what are we to make of these results? They seem to indicate that, in general, peer-based transportation systems offer some robustness to bad behavior by a few individuals. That said, being canceled on by a driver is always a frustrating experience. While an “average” passenger might not be affected, for a passenger traveling to the airport or to a time-sensitive meeting, a canceled ride can ratchet up stress levels and introduce additional uncertainty into travel plans. If black passengers are subject to additional cancellations, inevitably their quality of service will suffer.
 
Most peer economy platforms, including Uber (https://www.uber.com/legal/other/non-discrimination-policy/), Lyft (https://help.lyft.com/hc/en-us/articles/214218517-Anti-Discrimination-Policies), and Airbnb (https://www.airbnb.com/help/article/1405/airbnb-s-nondiscrimination-policy--our-commitment-to-inclusion-and-respect), now publish anti-discrimination policies. These policies should prove adequate to rectify blatant instances of racism, such as inflammatory comments in listings or provocative language in communication with a customer. But what about subtler discrimination? How should an individual customer react after being canceled on, rejected, or otherwise marginally affected by another individual’s actions?
 
In common law the principle “For every wrong, the law provides a remedy” (ubi jus ibi remedium) has been widely accepted and cited by the U.S. Circuit Court of Appeals (Leo Feist v. Young, 138 F.2d 972 (7th Cir. 1943)). But what if one individual is incapable of definitively stating whether a wrong has occurred? For any single ride cancellation there are a wide variety of perfectly legitimate reasons why a driver might decline or cancel a ride: the car needs gas, the driver needs a bathroom break, or perhaps the driver accepted the ride accidentally. Only over a large number of events could one observe whether race or gender affects an individual driver’s decision-making.
 
As we and others (Pope & Sydnor, 2009; Edelman et al., 2016) have identified, it is only cumulative evidence that seems to support the conclusion that discrimination persists in the digitized and disaggregated world of the peer economy. Conventional performance metrics, such as an individual driver’s aggregate rate of acceptance or cancellation, are insufficient to protect against discrimination. For example, imagine that Uber or Lyft implements a cancellation limit of 20%, beyond which a driver is banned, and that 10% of riders are black. Every driver could choose to deny service to all black passengers and no white passengers; no single driver would run afoul of the performance restriction, yet system-wide performance for black passengers would be abysmal.
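To make the arithmetic concrete, here is a minimal simulation sketch of that scenario. The rider mix, the cancellation cap, and the ride counts are hypothetical illustration values, not figures from our study:

```python
import random

random.seed(0)

CANCEL_CAP = 0.20       # hypothetical policy: ban any driver whose cancellation rate exceeds 20%
SHARE_BLACK = 0.10      # hypothetical rider mix: 10% of requests come from black riders
N_DRIVERS = 1_000
RIDES_PER_DRIVER = 500

driver_rates = []
black_requests = black_cancels = 0

for _ in range(N_DRIVERS):
    cancels = 0
    for _ in range(RIDES_PER_DRIVER):
        rider_is_black = random.random() < SHARE_BLACK
        cancelled = rider_is_black   # worst case: every driver cancels every black rider and no one else
        cancels += cancelled
        if rider_is_black:
            black_requests += 1
            black_cancels += cancelled
    driver_rates.append(cancels / RIDES_PER_DRIVER)

print(f"highest per-driver cancellation rate: {max(driver_rates):.1%}")                       # well under 20%
print(f"drivers flagged by the cap:           {sum(r > CANCEL_CAP for r in driver_rates)}")   # essentially none
print(f"cancellation rate for black riders:   {black_cancels / black_requests:.1%}")          # 100%
```

No driver trips the threshold, because each driver’s overall cancellation rate hovers around the share of black riders in the request mix, yet every black rider is denied service.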
 
At each transportation provider there is a profound need for new, more sophisticated performance measures that gauge the behavior of individual actors in peer economy systems and detect whether their decisions are indeed “random” or reflect some underlying bias. Since the system-wide performance of peer systems is the product of hundreds or thousands of individual decisions, those individual decisions must be carefully vetted.
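As one illustration of what such a measure might look like, here is a minimal sketch of a per-driver disparity check. It assumes, hypothetically, that the platform can label each request with a rider group and log whether it was cancelled; the function name and data are invented for illustration:

```python
from scipy.stats import fisher_exact

def cancellation_bias_pvalue(rides):
    """rides: list of (rider_group, cancelled) pairs for one driver, where
    rider_group is 'A' or 'B'. Builds a 2x2 contingency table of completed
    vs. cancelled requests by group and returns the p-value of a Fisher
    exact test; a very small p-value flags a driver whose cancellations
    are unlikely to be independent of rider group."""
    table = [[0, 0], [0, 0]]   # rows: group A / group B, cols: completed / cancelled
    for group, cancelled in rides:
        row = 0 if group == "A" else 1
        table[row][1 if cancelled else 0] += 1
    _, p_value = fisher_exact(table)
    return p_value

# Hypothetical driver: cancels 9 of 10 group-B requests but only 5 of 90 group-A requests.
rides = [("A", False)] * 85 + [("A", True)] * 5 + [("B", True)] * 9 + [("B", False)] * 1
print(f"p-value: {cancellation_bias_pvalue(rides):.2e}")   # far below 0.05
```

A test like this examines the pattern of an individual driver’s decisions rather than the aggregate rate, which is exactly the information a simple cancellation cap throws away.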
 
The need for new policy tools at the provider level is reflected and magnified at the metropolitan level. In some metro areas, Uber and Lyft are becoming de facto complements to, or replacements for, other more regulated forms of transportation, including conventional taxis and metro transit. In my opinion it now becomes the responsibility of metropolitan, state and federal decision makers to evaluate whether the aggregate performance of peer transportation alternatives (in some cases aggressively pushed) complies with the Civil Rights Act, which outlaws discrimination based on race, color, religion, sex, or national origin (Pub. L. 88–352, 78 Stat. 241, enacted July 2, 1964). If, on average, a black Uber passenger waits 15 seconds longer for a ride than a white passenger, does that constitute discrimination? What about 30 seconds? Two minutes? At what point do we say that a disaggregated system is inadequate to provide service to our collective communities?
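Answering that question requires, at minimum, estimating the wait-time gap with its uncertainty rather than as a single number. The sketch below does this on simulated wait times; the distributions and the assumed 30-second gap are hypothetical, not results from our fieldwork:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical wait times in seconds for two rider groups.
waits_a = rng.exponential(scale=300, size=800)    # baseline group
waits_b = rng.exponential(scale=330, size=800)    # assumed ~30 s longer on average

gap = waits_b.mean() - waits_a.mean()
se = np.sqrt(waits_a.var(ddof=1) / len(waits_a) + waits_b.var(ddof=1) / len(waits_b))
margin = stats.norm.ppf(0.975) * se               # normal approximation for large samples

print(f"estimated gap: {gap:.0f} s  (95% CI: {gap - margin:.0f} to {gap + margin:.0f} s)")
```

Whether a detectable gap of a given size is legally or morally acceptable is the policy question; the statistics only tell us how confident we can be that the gap exists.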
 
The problems of discrimination in peer transportation are devilishly difficult to unwind. If we turn the screws on all Uber and Lyft drivers so that they are forced to pick up virtually all passengers, discrimination by those drivers who otherwise would have canceled might instead manifest itself as lower star ratings for black passengers. Those passengers would then face a decreased probability of pickup from all Uber drivers, not just those who chose to discriminate based on race. As a result, tightening acceptance requirements might actually worsen performance, not improve it.
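A back-of-the-envelope calculation shows how quickly this could compound. The share of biased drivers, the ratings they give, and the platform’s rating threshold below are all hypothetical:

```python
# Hypothetical parameters: the share of drivers who would have cancelled and now
# express bias through a one-star rating instead, plus ordinary rating behaviour.
BIASED_DRIVER_SHARE = 0.10
ORDINARY_RATING = 4.8     # assumed average rating a driver gives absent bias
BIASED_RATING = 1.0

expected_targeted = (BIASED_DRIVER_SHARE * BIASED_RATING
                     + (1 - BIASED_DRIVER_SHARE) * ORDINARY_RATING)
expected_other = ORDINARY_RATING

print(f"expected average rating, targeted riders: {expected_targeted:.2f}")   # 4.42
print(f"expected average rating, other riders:    {expected_other:.2f}")      # 4.80

# Under a hypothetical platform rule that deprioritises riders rated below 4.5,
# the targeted group now faces reduced pickup probability from every driver.
PICKUP_THRESHOLD = 4.5
print(f"targeted riders deprioritised: {expected_targeted < PICKUP_THRESHOLD}")   # True
```

Even a 10% minority of biased drivers is enough, in this toy model, to drag the targeted group’s average rating below a plausible cutoff and spread the harm across the whole driver pool.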
 
The feedback mechanism is a core component of the peer economy, dating back to the early days of eBay in the 1990s. Previous writers have identified potential manipulations by affluent users against novices. But what if, when provider and user physically see each other, feedback is influenced by a user’s physical appearance rather than by their responsible (or irresponsible) actions on the platform? If a meta-moderating system were implemented, how would a third party disentangle common misunderstandings and indecorous communication from cultural differences and split-second decisions based on socioeconomic factors?
 
One element has become obvious to me: transportation providers must become more open with the data they do, and could, collect. This project was completed as an independent field study of a mere 1,500 observations of companies that have collectively delivered more than 1 billion rides. The study took us nearly a year of fieldwork and perhaps $100k of funding to complete. With the collaboration of transportation providers, we could have focused our efforts on analysis rather than data gathering, and our results would undoubtedly have been more revealing and compelling. Any transportation provider, public or private, should be subject to open-records requests from academic institutions, and should not be able to raise the specter of privacy or business concerns to shield itself from independent audits that address issues of broad social concern.
 
Ultimately, all these policy and strategic measures will take years to resolve. How can you, as an individual actor, advance the collective benefit today? Well, for starters, if you’re a driver or service provider, don’t discriminate! To the degree possible, try to deliver equal quality of service to all customers and to step outside your own preexisting assumptions. And as a customer, be relentless in reporting cancellations or misbehavior, even if it takes an extra few minutes, particularly if you are a minority. Over time, fastidious reporting of possibly misbehaving drivers should slowly alert market-makers, even through the noise of millions of transactions.