Google Maps' Wrong Turn
Its new routing algorithm, designed to reduce carbon emissions, is not the way to stop climate change.
I remember learning what an algorithm is, not in a coding or computer science course, but in high school psychology. Known today more in the context of search engines and social media feeds than social science, an algorithm is simply any step-by-step problem-solving formula. The classic example is long division: If I follow the standard set of rules, I can compute the quotient of any two whole numbers.
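To make the "step-by-step rule" idea concrete, here is a minimal sketch of digit-by-digit long division in Python; the function name and structure are my own, offered purely as an illustration of an algorithm in the textbook sense.

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division of two non-negative whole numbers.

    Returns (quotient, remainder). A toy example of an algorithm:
    a fixed rule followed step by step until the answer appears.
    """
    if divisor == 0:
        raise ValueError("cannot divide by zero")
    quotient, remainder = 0, 0
    for digit in str(dividend):              # "bring down" one digit at a time
        remainder = remainder * 10 + int(digit)
        fits = remainder // divisor          # how many times the divisor fits
        remainder -= fits * divisor
        quotient = quotient * 10 + fits
    return quotient, remainder

print(long_division(1234, 7))  # (176, 2): 7 * 176 + 2 == 1234
```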
Recently, with these memories of psychology class in mind, I read about a forthcoming Google Maps feature that will opt drivers into the most “eco-friendly” routes according to algorithms developed with data and analysis from the National Renewable Energy Laboratory, part of the U.S. Department of Energy. Russell Dicker, a director of product at Google, said that “for around half of routes, we are able to find an option more eco-friendly with minimal or no time-cost tradeoff.”
I paused when I read that. I don’t doubt that Google’s eco-routes often yield minimal time-cost tradeoffs. But that raises the question: What about other tradeoffs? Will eco-friendly routes take me along surface streets with more noise or risks to pedestrians? Will mass reliance on these new routes create more localized pollution along them, mitigating one problem while exacerbating another?
I do not know how the new Google algorithm will traverse these cascading complexities. I suspect the engineers at Google do not, either. And I worry that, as in other walks of life, we are shifting our ecological decision-making onto opaque cloud-based algorithms that obscure the tradeoffs inherent in all our forms of consumption and behavior.
This is not a new phenomenon. Admonitions toward ecologically virtuous activities have typically ignored, omitted, or denied tradeoffs. Most of these have taken the form of heuristics—mental shortcuts designed to speed up and simplify decision-making—rather than algorithms. But heuristics and algorithms are similar in that both obscure tradeoffs, simplifying the truth right out of the equation.
“Eating locally” is a heuristic that elevates the environmental impacts of transporting food over the impacts of producing it when, in fact, the latter is far more significant.
Eliminating single-use plastic is all the rage these days, but wrapping some produce in plastic extends shelf life and reduces food waste.
Reducing personal water consumption is important, but the amount of water that goes to sinks and showers is dwarfed by industrial and agricultural consumption.
I can sympathize with some of these campaigns. It is a pretty reliable rule of thumb, for instance, that eating less beef is good for the climate. But these examples reveal the ways in which our understanding of ecological sustainability is broken. Rules for eco-consumption are often posed as absolutes when, in reality, the best choice of behavior or consumption will balance the environmental, the economic, the social, and the personal. If I replace my hamburgers with chicken sandwiches, I am lowering my carbon footprint at the expense of far more animal lives lost, since one slaughtered cow provides many more meals than one chicken. Tradeoffs!
Paternalistic actions by corporations, often built on these heuristics, can also have unexpected consequences. Epicurious, the food and cooking website, recently decided to stop publishing new recipes with beef to promote sustainability. But performative actions like this can actually create a backlash against the very cause they are promoting. People simply resent being manipulated, which can turn them off to environmental causes more broadly.
New eco-algorithms will obscure complex tradeoffs and backfire in much the same way.
When I follow the eco-friendly route on Google Maps, will I actually reduce the overall carbon footprint of my trip? Maybe! I have little reason to doubt the expertise of scientists and programmers at Google and NREL. But what tradeoffs will I overlook by passing my decision-making over to an algorithm? Recent research shows that the system used by airlines to calculate carbon offsets is deeply flawed. So, when I pay extra to offset the carbon emissions of my flight on Delta Air Lines, what am I really paying for? Similarly, does the Reconomy algorithm, which evaluates organizational sustainability by how often corporate communications use words like “renewables” and “green energy,” tell us anything valuable? What happens when smaller enterprises less exposed to public scrutiny, or outright scammers, deploy their own eco-algorithms?
I don’t want to be a Luddite or a hypocrite. I happily use Twitter and Amazon, not to mention the algorithmic logic of grocery store shelving. I rely on algorithms not just to influence my choices but often to choose for me. But I fear handing over our already messy understanding of ecological problems to impenetrable algorithms. Environmental campaigns have long struggled with a public wary of being manipulated, shamed, or restricted. In a future of opaque eco-algorithms, those pitfalls will remain, undermining environmental action and progress against climate change.
Alex Trembath is deputy director at the Breakthrough Institute, a think tank that focuses on technological solutions to environmental problems.
We need to stop concerning ourselves so much with individual behavior and carbon footprints and start focusing on the small collection of giant corporations (not to mention the U.S. military, the biggest polluting force on the planet) that contribute over 70 percent of all global carbon emissions.
This is a poorly conceived article.
The premise is that when one optimizes for thing X, one is implicitly ignoring everything else that's not explicitly X. Of course this is correct; this underspecification problem is well known in computer science. The issue I have with Trembath is that all those externalities are just as unmeasured in the original algorithm as in the eco-friendly one. Once one notices this, Trembath's piece collapses to an argument that changing things can lead to new negative consequences; the problem is that he gives no justification (and there is essentially none that can be given*) that these consequences will be worse than the original unmeasured ones. For example: the observation that Google is optimizing for speed alone could be used to conclude that Google might often direct people through neighborhoods where the speed limit is too high and therefore dangerous to pedestrians. That consideration applies equally to the speed-only algorithm and to the eco-friendly algorithm.
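A minimal sketch of this point, using invented route data and field names rather than anything from Google's actual system: neither scoring function below ever reads pedestrian risk, so that externality is exactly as unmeasured under the eco objective as under the speed-only one.

```python
# Invented candidate routes; all numbers are made up for illustration.
routes = [
    {"name": "highway",  "minutes": 22, "fuel_liters": 3.1, "pedestrian_risk": 0.1},
    {"name": "arterial", "minutes": 24, "fuel_liters": 2.4, "pedestrian_risk": 0.6},
    {"name": "surface",  "minutes": 28, "fuel_liters": 2.2, "pedestrian_risk": 0.9},
]

def fastest(route):        # the "original" objective: minimize travel time
    return route["minutes"]

def eco_friendly(route):   # the "eco" objective: minimize fuel burned
    return route["fuel_liters"]

# Swapping objectives changes which route wins, not whether the side effect
# is accounted for: pedestrian_risk never enters either calculation.
print(min(routes, key=fastest)["name"])        # highway
print(min(routes, key=eco_friendly)["name"])   # surface
```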
For this reason, I find that the main objective point of the piece is incorrect. The rest is opinion. He brings up some fine points. It's just that they're all organized around an incoherently presented thesis. Here is my condensed version of this article which captures what I find valuable in it: "We should attempt to quantify side-effects of algorithms so we better understand their net utility."
* In order to be friendly to Trembath, I can imagine one making the heuristic argument that the eco-friendly algorithm is pursuing two objectives at once and is therefore more constrained in its solution space; thus, the compromises it makes on unspecified externalities will be more extreme. This is a natural outcome to suspect, but it is hard to quantify, and Trembath certainly makes no effort to do so. In this case, the solution space is so large to begin with that my hunch is that the additional constraint is a drop in the bucket.
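As a rough illustration of the footnote's reasoning (with made-up routes and a made-up 10 percent time cap, not Google's actual criterion): a second objective applied under a cap shrinks the feasible set, and the route it selects can happen to do worse on an unmeasured quantity. Nothing in the setup forces that outcome, which is part of why the hunch is hard to quantify.

```python
# Invented routes and thresholds, purely for illustration.
routes = [
    {"name": "A", "minutes": 22, "fuel_liters": 3.1, "pedestrian_risk": 0.1},
    {"name": "B", "minutes": 24, "fuel_liters": 2.4, "pedestrian_risk": 0.7},
    {"name": "C", "minutes": 28, "fuel_liters": 2.2, "pedestrian_risk": 0.3},
]

fastest = min(routes, key=lambda r: r["minutes"])            # route A
time_cap = 1.10 * fastest["minutes"]                         # "minimal time cost"
feasible = [r for r in routes if r["minutes"] <= time_cap]   # only A and B survive
eco_pick = min(feasible, key=lambda r: r["fuel_liters"])     # route B

# Here the eco pick carries more pedestrian risk (0.7 vs. 0.1), but flip the
# numbers and it could carry less: the data, not the extra constraint itself,
# decides what happens to the unmeasured quantity.
print(fastest["name"], fastest["pedestrian_risk"])    # A 0.1
print(eco_pick["name"], eco_pick["pedestrian_risk"])  # B 0.7
```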