At Women in AI we’d like to inspire others by featuring female role models who are making a difference in AI. We aim to empower women working in AI by highlighting their success stories, and in doing so we hope to inspire other women and girls to get into STEM-related fields. In our series ‘In the Spotlight’ we shine a light on an expert in the field, and today we’d like you to meet Niya Stoimenova.
Author: Ingrid van Heuven van Staereling.
Niya recently completed her PhD at TU Delft, which focused on shaping the theoretical foundations of common-sense reasoning. In it, she created a model that formally defines how different types of abductive inferences are generated and evaluated. She is also the Responsible AI & Strategic Foresight Lead at DEUS.ai, where she focuses on driving AI product development forward – both internally and for DEUS’ partners. In both of these roles, she draws on her impressive background in ethics (she was a Bulgarian national laureate in Philosophy in 2011) and (organizational) design (a topic she published extensively on before 2018) to ensure that everything she does is aligned with human objectives.
Working with some of the largest European organizations, she and her team help them build more meaningful AI systems by implementing human oversight practices. Their current focus is on conversational AI, AI agents, and accurately measuring the impact of AI on organizational processes and KPIs. “We are on an experimentation and research track within the company where we explore topics like optimal voice-based modalities and interactions, scaling experiments with LLMs, and optimizing software architectures like microservices through the use of generative AI.” They aim to disseminate this knowledge through guides, principles, methods and tools. “Our monthly newsletter, called Iris Insights, is where we bring together strategic, technical and human-centered insights.”
“Our mission is to empower teams and organizations to address the potential unintended outcomes of AI systems (e.g., bias) early on.”
During her PhD, she focused on shaping the theoretical foundations of common-sense reasoning (i.e., abductive reasoning) and its application in aligning human and AI objectives throughout the entire AI lifecycle. In her role as Responsible AI & Strategic Foresight Lead, she and her team are developing platforms and tools that enable organizations to leverage the full potential of (generative) AI. One of their latest tools focuses on automating the documentation tasks of development teams; another provides a systematic overview of, and impact predictions for, deployed AI models. “Our mission is to empower teams and organizations to address the potential unintended outcomes of these systems (e.g., bias) early on.”
One of the most important frontiers of AI research has been endowing models with the capability of abductive, or common-sense, reasoning. This holds the potential to introduce smaller, more sustainable and elegant models that rely on significantly smaller data sets. “As my research demonstrates, these foundational principles also offer a promising avenue to address the pressing AI alignment problem,” Niya says. It is an important and very relevant task, since AI models still sometimes generate responses that are plausible but factually incorrect or entirely fabricated. “With the proliferation of hallucinatory answers and the increasing dominance of a handful of companies able to train current state-of-the-art models, the significance of work similar to mine (both in academia and industry) becomes ever more pronounced.”
This phenomenon is common in generative AI models, which sometimes “hallucinate” details that are not grounded in their training data. Her concern about the concentration of power is equally valid, since it can lead to a lack of diversity in AI research and development. By supporting diverse and independent research in this area, as Niya does, we can create more reliable, ethical, and widely applicable AI systems.
Niya believes that the field of AI needs people with a more holistic perspective, one that focuses on the well-being of people. “While the way we’ve been traditionally socialized has held us up in multiple areas, our attention to and care for other people is precisely what makes us uniquely important for the further development of AI that is aligned with human objectives.” Research has shown that women in tech often place a strong emphasis on ethical considerations and responsible AI practices, which can lead to AI systems that are better aligned with societal values and ethical norms. Having women involved in the design and development of AI can also help reduce gender biases in AI systems, and diverse viewpoints can help identify a broader range of human objectives and ethical considerations that might otherwise be overlooked.
When it comes to working in the field of AI, Niya thinks there are three qualities you need to have:
Prominent women in AI have been at the forefront of advocating for ethical AI, fairness, and inclusivity. Their work highlights the significant contributions women can make in aligning AI with human objectives. But how do you gain the knowledge and skills to become one of these women working in the field of AI? “Surround yourself with people who know more than you do and keep asking stupid questions,” Niya explains. “By definition, AI is an extremely wide field, so focusing only on ML will only narrow down your vision on what might come next.”