
Why algorithms should be accompanied by a package insert

Published 27.05.2021

Model of a neck around which a golden chain with a scales pendant hangs

Algorithms are a hot topic of discussion these days and can become positively incendiary when viewed in conjunction with ethical considerations. Lea Strohm is Joint Managing Director at ethix, Lab for innovation ethics, which shines a light on the ethical flashpoints arising out of digital change. In this interview she explains why algorithmic decision-making processes are never objective; what is needed to make algorithms fairer; and why the focus should be more on end users.

Lea Strohm, from an ethical point of view, what should algorithms be allowed to do?

Lea Strohm: When talking about algorithms I wouldn't necessarily use the phrase «be allowed to», as in my opinion this suggests that they are capable of independent action, which is simply not the case. Algorithms are part of a system that has been generated by humans. And it is humans that decide, for a huge array of different subject areas, which tasks should be assigned to an algorithm and how it should be trained to perform them.

There are countless algorithms out there. Which are relevant where ethics are concerned?

Strohm: The relevant algorithms, in particular, are those that relate to people; and they become critical if they make decisions about people. That is to say, when a computer makes a decision that a human has been making up until now. This raises the question of what effect that might have on the decision and the decision-making chain. At the start, in particular, we thought the computer was more objective than a human, because it is not influenced by personal sympathies, experiences and prejudices - which, of course, is just not true.

Why aren't algorithms objective?

Strohm: Algorithms are programmed by people who have their own value concepts. And these value concepts turn up again in the algorithms. This is compounded by the fact that the datasets used to train the algorithms or systems to make automated decisions are frequently not free from distortion, which can lead to discrimination. There is a lot of focus at the moment on how to design, at a technical level, algorithms to be more ethical. We should not forget, however, that it is still the case today that relatively few decisions are made on a wholly automated basis. In the majority of cases, algorithms are used to support humans with their decision-making. And these people themselves have their own prejudices and value concepts, too. That is why it is important that they understand the algorithm and know what its limitations are - but that is frequently not the case.

Can you give an example?

Strohm: In Germany during the refugee crisis there were migration offices that determined what accent a refugee had, where they came from and even whether they were telling the truth, all based on a recording of their voice. Together with other factors, this influenced the decision on whether to grant asylum. On the basis of the recording, the computer spat out a probability figure. So, for example, it stated there was a 67.82 percent probability that the person recorded came from Eastern Syria. This figure suggested to staff that it was underpinned by precise science. But anyone who knows about probability calculations also knows you can only apply such figures with caveats. Data scientists who work with this kind of material day to day know this - but an admin clerk quite possibly doesn't. The example shows that, with algorithms, it's not just the technology that is a problem, but also how it is applied and understood.


The technology will always have its limitations. That is why the processes that lead to the decisions must be made transparent. That in turn leads to greater equity and fairness.

Lea Strohm, Joint Managing Director of ethix, Lab for innovation ethics (Photo: David Bürgisser)

Given this complexity, how do we nevertheless succeed in getting algorithms to work ethically?

Strohm: There are clear criteria when it comes to developing algorithms. They should, for example, conform to certain quality standards in their construction and be logical and easy to understand. But that is just one part of it. The important thing is for the whole process to be covered. So, for example, the end users also need to understand what an algorithm and/or an application can do and what the limitations are. You could almost say that algorithmic decision-making systems should always come with a set of instructions – almost like a medication package insert, stating the risks and side effects. Just as important is monitoring by the organisation or company that is using this sort of application. It is their responsibility to check the quality of the algorithms used and to ensure that quality is maintained.
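To make the «package insert» idea a little more concrete, here is a minimal sketch in Python of what such an accompanying record could contain. The field names and the example values are illustrative assumptions, not part of any standard or of the interview.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmPackageInsert:
    """Minimal 'package insert' accompanying an algorithmic decision system.

    The fields below are an illustrative selection, not a standardised schema.
    """
    name: str
    intended_use: str
    training_data: str  # provenance of the data the system was trained on
    known_limitations: List[str] = field(default_factory=list)
    risks_and_side_effects: List[str] = field(default_factory=list)
    monitoring_responsibility: str = ""  # who checks quality after deployment

# Hypothetical example loosely inspired by the speech-recognition case above.
insert = AlgorithmPackageInsert(
    name="Dialect classifier",
    intended_use="Supporting, not replacing, human assessment of spoken-language origin",
    training_data="Voice recordings from a limited set of regions",
    known_limitations=["Outputs are probabilities, not certainties"],
    risks_and_side_effects=["Misreading precise-looking percentages as established fact"],
    monitoring_responsibility="Deploying organisation",
)
```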

You have talked about algorithms having limitations. What do you see these as being?

Strohm: The limitations can vary immensely; it always depends on the algorithm used. But in most cases we're talking about technical limitations that often only come to light when the data is analysed or the algorithm is examined. For example, in the medical field algorithms are trained using health data from Central European patients. The provenance of this data should, as a matter of course, limit the use of such an algorithm to the same geographical region, as there is no assurance that the algorithm works as well for other regions. Checking this would mean more time and expense for companies or organisations - which is why many of them do not perform the checks as standard. Currently there are no legal requirements to carry out such quality checks. But it's vital that they are carried out; indeed, in many cases, it's absolutely in the company's best interests to do so.
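As an illustration of such a check, the following sketch refuses to apply a model to patients from regions that were not covered by its training data. The `predict` interface, the `region` field and the region labels are assumptions made for the example, not a real system.

```python
from typing import Mapping

# Regions the (hypothetical) model was actually trained and validated on.
TRAINED_REGIONS = {"central_europe"}

def predict_with_provenance_check(patient: Mapping[str, str], model) -> float:
    """Run the model only if the patient comes from a region covered by the training data.

    `model` is assumed to expose a `predict(record) -> float` method.
    """
    region = patient.get("region")
    if region not in TRAINED_REGIONS:
        raise ValueError(
            f"Model has not been validated for region '{region}'; "
            "a separate quality check is required before use."
        )
    return model.predict(patient)
```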

Why are these checks still not being carried out sufficiently?

Strohm: There is a lack of awareness that these limitations exist. Algorithms have existed for more than a hundred years now. The technology has been improved continuously in recent decades, and ever more complex tasks are being delegated to algorithms - without our understanding precisely how the technology works. That is why we need processes that ensure quality post-implementation.

Who should be responsible for regulating this? The state?

Strohm: It needn't necessarily be just the state. Everyone - from the developers to the resellers and even the end users - should shoulder some of the responsibility.

You once said in an interview that we need to teach algorithms to make fair decisions. What do you mean by «fair» in this context?

Strohm: That is a key question that we are also discussing in our Innosuisse project «Algorithmic Fairness», which we are conducting with several partners, including the ZHAW School of Management and Law and the University of Zurich. The project is geared to the technical level - and its primary aim is to establish «fairness criteria» and to translate these into technical terms and actions. In other projects we are considering, together with companies and organisations, how to work with the end users to ensure algorithms become more equitable and deliver high-quality results. Taking the example of the refugee speech recognition program, a small modification could be made that would leave end users better placed to interpret the results. For example, instead of giving straight figures for the probabilities calculated by the algorithm on the basis of the person's accent, levels of probability - high, medium and low - could be used. That gives the users more certainty when interpreting the results, which in turn end up being fairer than they were before. In addition, we must in future place more emphasis on communicating the results that the algorithms produce in the right way. Transparency is the big issue here. This calls for interdisciplinary teams to work on producing the clear translations needed.
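A minimal sketch of the modification described above: raw probabilities from the model are translated into coarse levels before being shown to the caseworker. The cutoff values are purely illustrative assumptions and would have to be set and validated together with domain experts.

```python
def probability_to_level(p: float, low_cutoff: float = 0.4, high_cutoff: float = 0.75) -> str:
    """Map a raw model probability to a coarse confidence level.

    The cutoffs are illustrative placeholders, not validated thresholds.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie between 0 and 1")
    if p >= high_cutoff:
        return "high"
    if p >= low_cutoff:
        return "medium"
    return "low"

# Example: instead of reporting "67.82 %", the interface would show "medium".
print(probability_to_level(0.6782))  # -> medium
```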

Can transparency engender trust in the algorithms?

Strohm: Transparency won't automatically lead to trust, but can create a degree of credibility. But it will not suffice to simply inform people that an algorithm has been used to reach a decision. They must also have the opportunity to have the decision explained to them or to contest the decision if they feel that they have been treated unfairly, for example, in a job application process or over welfare payments. Failing to do so could result in terrible inequities that go unidentified or cannot be reversed. From an ethical point of view, this is extremely dangerous. We have to be aware that technology will always have its limitations. That is why the processes that lead to the decisions must be made transparent. That in turn leads to greater equity and fairness.


Interview: Marion Loher

About the pioneering project ethix, Lab for innovation ethics

The ethix pioneering project – made possible by the Migros Pioneer Fund – is developing online and offline tools that will help highlight the ethical dimensions of innovations in key areas such as digitalisation or artificial intelligence. Find out more about the project ethix, Lab for innovation ethics.

More on the subject of algorithms

Algorithms affect our daily lives - yet so inconspicuously that many people aren't aware of their presence. The organisation AlgorithmWatch Switzerland monitors the problematic aspects of algorithmic decision-making processes. Find out more in the report Small tools with big consequences.

A new project, supported by the Migros Pioneer Fund and coming under its «Technology and Ethics» umbrella, launched recently: by developing trustworthy processes and software, HestiaLabs is creating synergies between various stakeholders and thereby putting our data at the service of social progress. Find out more about the project HestiaLabs.

Photo/stage: Simon Tanner
