Here at Simply Secure we love to share the good work that others are doing. Our interview series is a way for us to spread current ideas, insights, and experiments to anyone interested in ethical design.
Marcel Schouwenaar is part of the team behind the Transparent Charging Station, a device that visualizes “smart” charging at charging stations. He spoke with us about fair allocation of resources, visualizing algorithms, and the role of transparency in design.
In designing this Transparent Charging Station, what was your underlying assumption?
Electric power is a resource that fluctuates a lot, especially with renewable sources. Sometimes too much power is generated, and sometimes there is scarcity; similarly, depending on the time of day, a lot of power may be needed, and at other times none at all. Grid operators have to constantly decide how to allocate the available energy, and it is no different with electric charging stations around the city. So when there is not enough energy available to charge all the plugged-in cars, the company has to prioritize charging according to certain parameters. The challenge is to build a reliable provision of power, a reliable network, so that everyone’s mobility is guaranteed.
So electric power is a public good, and the question you posed was how to distribute that public good. Was this the goal of the project?
Basically, yes. We have to decide how to prioritize some people over others. But then the question is “which people, and why?” As a design challenge we actually got the problem wrong for weeks. We were building UX for people to express their urgency in this scenario, or some sort of voting system for decision making. But that was all wrong. The problem here isn’t so much deciding on those parameters, but understanding how algorithms are making those decisions in the first place. The design challenge, therefore, is to make that decision process transparent to the public. People will be affected by these decisions, so we need to have a way to communicate. This project is an attempt to visualize resource allocation according to whatever algorithm is in charge.
How did you come up with the idea?
This project was commissioned by a client, a grid operator in the Netherlands that saw the urgency of having a public conversation around algorithmic decision making in smart cities. As a B Corp charged with the task of distributing energy across the Netherlands, they have de facto no competition on the market, but an even greater responsibility to the public. Companies like them are therefore interested in making the process as transparent as possible, and the transparent charging station is just the latest incarnation of that in the “smart city”.
What were some of the challenges in designing this machine?
First of all, humans are not used to algorithmic decision making. We’d like to think that decisions are very singular: that at one particular moment there is a clear set of choices and a determinate decision among them. However, with an algorithm like this, things are very different. It’s many tiny decisions over time, which is much harder to picture.
Secondly, a decision based on parameters is hard to communicate, because it’s very abstract. So we looked for a way for people to experience these parameters. In our prototype, we had two of them: stress and privilege. By interacting with the machine, you can express how much stress you have (“I need this much charge before this hour”), and also identify yourself via an ID that has a level of privilege attached to it. This was our way to explain the abstract idea of parameters to the general public.
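To make this a bit more concrete, here is a minimal, hypothetical sketch of how two parameters like stress and privilege could drive an allocation decision. The names, weights, and the simple proportional split are illustrative assumptions, not the algorithm actually running in the Transparent Charging Station.

```python
# Hypothetical illustration only: a toy allocator that splits the
# available charging power in proportion to a weight derived from
# two parameters, "stress" and "privilege".

from dataclasses import dataclass


@dataclass
class Car:
    name: str
    stress: float     # 0.0-1.0, "I need this much charge before this hour"
    privilege: float  # 0.0-1.0, attached to the driver's ID


def allocate(cars: list[Car], available_kw: float) -> dict[str, float]:
    """Split the available power in proportion to each car's weight."""
    weights = {car.name: car.stress + car.privilege for car in cars}
    total = sum(weights.values()) or 1.0
    return {name: available_kw * w / total for name, w in weights.items()}


cars = [Car("A", stress=0.9, privilege=0.1),
        Car("B", stress=0.5, privilege=1.0)]
print(allocate(cars, available_kw=22.0))
# {'A': 8.8, 'B': 13.2} - change one parameter and the split shifts,
# which is the kind of decision the station tries to make visible.
```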
And how does your machine communicate these decisions?
The machine shows the amount of charge available over the upcoming four hours, as well as who it will be distributed to and why. So if you’re asking yourself why you’re getting so little charge, you might see that the person who’s charging now has more “privilege”. It also records a history of the power distribution. Of course, in practice, nothing goes according to plan, ever. I might see a certain plan for the next four hours, but then someone else might arrive in the meantime and get all the power instead. That’s why the recording is important, too. We thought long and hard about this, because there are many ways to give people something like a digital receipt. However, we felt that another app made by the company or the city would not be as trustworthy. For some time we even experimented with thermal paper that could give a physical (and unalterable) imprint of your charge history. But we settled on the less cumbersome solution: your own phone camera. It’s a more intuitive way to engage with the physical world. So if decisions seem unfair, the idea is that customers can take a quick video of the “replay” to document it for a possible complaint. We call it an actionable perspective.
So will we see machines like this on future street corners?
We don’t think a machine like this will be built, at least not in this form. But it’s a good conversation starter for entities that operate in both the private and public sector. They are eager to prove that they are doing the right thing, and this is, we think, one possible way to do it.
One central idea here is that discrimination is not determined by bad intentions, but only by bad outcomes. You only need to look at the results of a program to see whether discrimination exists or not. What do you think about this approach?
This was also a revelation to our client. For one, modern-day applications are rarely so simple that you can look at a piece of code and “see” the inequality. Systems can become complex and interdependent; you don’t always know what influences what. Also consider the optimization algorithms used in machine learning or artificial intelligence. They are constantly changing, trying to come up with the best possible output, and in doing so they let in lots of bias, of course; this is old news. However, that bias cannot be traced back to one particular input or to one decision. So the scrutiny has to happen on a different level. Transparency is not a solution for data governance, but it is a way to give users of the system some leverage points.
Do you think that this principle of transparency can be applied in other domains also?
We think that similar prototypes could also be developed for, say, internet traffic monitoring. If you’re at home trying to watch Netflix and you realize your internet speed is very slow, a system like ours could help communicate the reasons. Right now there is just no way to see why! It could be dust accumulating in your computer, a website mining Bitcoin in the background, your ISP screwing you over…
I suspect we will see more and more of that in the future. “Smart cities” are just trying to optimize the allocation of different scarcities, be it energy, water, safety, internet traffic, clean streets, and so on. How we allocate resources is and should be a political choice. Just because the IBMs and Ciscos of this world are informing the government on how we do it doesn’t make it any less political, especially when the methods amplify discrimination and bias. The role of transparency here is to make an open, political discussion around these issues possible in the first place.
You’ve been touring Europe with this prototype recently. What was the response from people working in this area?
A lot of the discussion around these topics has been very academic, and I’m proud to have a physical prototype that I can show. There is a lot of commitment from cities, from academia, from industry. They got the message. It doesn’t matter if they discard this prototype, because what matters is that the point got across, and people are actively thinking and designing new stuff. As for the machine: three cities are ready to build smart and transparent charging systems, it has sparked quite a few research projects, and it will be part of an exhibition at the AMS Institute.
Thank you, Marcel!
Marcel Schouwenaar is a designer and strategist at The Incredible Machine, which he co-founded in 2012 together with his partner Harm van Beek. In their work they research opportunities in the realm of internet-connected products and services for clients like LEGO Group, Zodiac Aerospace, Festool, and Alliander. With a strong focus on experimentation and validation, The Incredible Machine is most often involved in the earliest stages of innovation.
It is in these stages that Harm and Marcel encountered ethical dilemmas where the interests of business conflicted with those of people and society. This friction led to the writing of the IoT Design Manifesto – in collaboration with fellow designers and researchers – and the establishment of the Just Things Foundation. Both of these efforts aim to address what it means to create products in a data-driven society and to empower professionals to act in the interests of people, society, and the environment. Marcel is also a co-organizer of ThingsCon Amsterdam.