At Women in AI we’d like to inspire others by featuring female role models who are making a difference in AI. We aim to empower women working in AI by highlighting their success stories, and in this way we hope to inspire other women and girls to get into STEM-related fields. In our series ‘In the Spotlight’ we shine a light on an expert in the field, and today we’d like you to meet Willy Tadema.
Author: Ingrid van Heuven van Staereling.
Willy is an AI ethics lead and civil servant. She believes in neither an AI dystopia nor an AI utopia. “I do believe that AI can help us solve complex social problems, such as poverty, unequal opportunities in education or employment, and long wait times at court. But only if we uphold human rights, democratic values and the rule of law.” Willy tries to contribute to this from within the government, while looking outward, and she does so with a practical rather than a theoretical view. “And always together with others,” she adds.
Willy is firmly convinced that ethics is not something you learn from a textbook or by reading a legal text. “It is not a standard checklist you can tick off, nor is it something you can outsource to an external party. It is something you only learn by doing.” Ethics, she emphasizes, is very context-specific and must be in the DNA of organizations. This is what drives her in her work. “How do we bridge the gap between theory and practice? How do we operationalize ethical and legal frameworks? And in turn, where do practitioners need more support and clarity from legislators, regulators and policy makers to do their jobs well?”
With this in mind, she and her colleagues have set up a pool of AI experts within the central government that helps governments at the national, regional and local levels use AI responsibly. It took some years to establish, but with this group of people they have been able to conduct human rights impact assessments and bias assessments. This has also allowed them to contribute to the further development of supporting tools such as the Fundamental Rights Algorithm Impact Assessment (FRAIA, or IAMA in Dutch).
Willy is also an active participant in shaping a better way to use AI: “I actively share acquired knowledge and experience, for example by giving presentations, organizing workshops or participating in panel discussions.” She also likes to organize activities that explore the ethical aspects of AI with people in a different, fun way. One example is the ‘80s party’ she helped organize during the hacker camp ‘May Contain Hackers’ (pronounced in English, ‘80s’ sounds like the Dutch word for ‘ethical’). And it doesn’t end there. Willy also contributes to the professionalization of the field as a member of the national standardization body (NEN) for AI and by initiating pilots. She has been part of the Z-Inspection pilot in cooperation with Roberto Zicari’s scientific team and the province of Friesland.
We can only conclude that she is a good addition to any team and takes her responsibilities very seriously: “I try to be a liaison officer between policy makers and practitioners, because I think it is important that the latter group is also at the table when new laws or policies are made.”
“You can realize the value of AI and manage the risks in a transparent, inclusive, democratic and accountable way.”
Willy works in government, which by its nature has a major impact on citizens. As you can imagine, this also applies to the AI systems that the government uses to make decisions or provide services. “Unfortunately, we have seen that this can very much go wrong,” Willy says, alluding to the Dutch childcare benefits scandal. She sees that such situations affect organizations in different ways. “Some governmental organizations still think that something like this cannot happen to them, while others have become so afraid that they stop or delay AI projects.” Together with her team she tries to give them a different point of view. “I show them that you can realize the value of AI and manage the risks in a transparent, inclusive, democratic and accountable way by setting up AI governance and AI assurance.”
Willy is very passionate about what she does. “AI ethics, or Responsible AI, is a very interesting field that offers many opportunities.” But she admits that there are challenges. When it comes to addressing the ethical use of AI, you need to be able to zoom in and out, because details matter. “Never losing sight of the big picture may very well be the most difficult part of my job.” Willy tells us you also need to learn to deal with uncertainty and ambiguity. She has some tips for those who encounter these problems too: “You are at the forefront of a new field; it is not always clear what is expected of you and what you have to do. Don’t see this as a problem, but as an opportunity to show what you’ve got.”
Half of the job is finding out what interests you most and what you want to get better at. “And start a project! It doesn’t have to be grand. The most important thing is that you get started and get moving.” As with many new things, she believes insights will come naturally once you get going. “And connect with other people. Learning is so much better and so much more fun together. Find other people with similar interests. There are some amazing communities out there.” Being able to connect with like-minded people is something we very much recognize within our own community. Other examples of communities she highlights are the study group of R-Ladies Amsterdam, the alumni group of the KU Leuven summer school on the Ethics, Law and Policies of AI, and the community of ForHumanity.
In the end Willy emphasizes that even though it is challenging, it is also very meaningful work, especially within the government. “It is all about making connections: between people, projects and organizations, but also connections in content, such as between policy, academic research and practice.” She encourages others who are interested in the field to always follow their interests and passion. “Listen to your gut feeling. Shake off the imposter syndrome and make your voice heard. Because only by learning together can we discover how we can realize the potential of AI and at the same time uphold human rights, democratic values and the rule of law.”