What is Responsible AI? Depends on who is asking

Last week I attended a panel discussion about Responsible AI, which, as the panelists pointed out, can still mean different things depending on the company, organization, and even country. The event was organized by Women in AI, a nonprofit global do-tank on a mission to increase female representation and participation in AI.

Author: Victoria Bruno.

The panel was led by Program Lead Aoibhinn Reddington, and the group of experts involved was appreciably diverse, including a lawyer, consultants, and people from different industries.

Regulation vs. innovation in AI

The panel discussion focused on the trade-offs involved in developing and deploying responsible AI. While everyone agrees that certain principles matter for ethical and responsible AI, the panelists noted that, in practice, the many trade-offs mean not everyone can be made happy. One of the challenges highlighted was balancing the need for responsible AI with existing regulations, industry demands, and competitiveness.

The EU AI Act, first introduced as a draft in 2021, was cited as an example of how the law struggles to keep up with rapidly advancing technology: the original draft did not even cover systems like ChatGPT, illustrating how difficult it is to regulate AI effectively.

Organizations, it was noted, are already tied down by many policies and regulations, which raised the question of whether we should instill responsible AI practices as a culture rather than simply wait for regulatory frameworks. On the other hand, it was pointed out that policies, once created and introduced, can shape culture, as demonstrated by the General Data Protection Regulation (GDPR), which became a selling point for some companies once it was rolled out.

The startup perspective was highlighted by the CRO of a startup on the panel, who shared that the pressure to innovate quickly and attract investment sometimes conflicts with the need for responsible AI development. Even so, responsible AI development can pay off in the long run, since innovation that does not include end users may never reach the deployment stage. Adding to this point, a large-company perspective was shared: responsible innovation is essential for long-term success, much as sustainability has become a crucial factor in business models, and different services can be offered with responsible AI in mind, taking into account the potential impacts and benefits.

Data, privacy and ethics

The discussion also touched on examples of ethical concerns and potential biases in AI. The panel shared an example of a well-known Dutch company that developed and rolled out an employee uniform sizing app based on computer vision, which led to privacy concerns and negative media coverage. It was emphasized that what counts as ethical can vary between organizations, and that the definition of responsible AI may keep evolving and may only be settled by the time the AI Act is in force.

Transparency and accountability were also discussed, including the need to balance data privacy against explaining how AI systems work. Defining transparency is complex because it is stakeholder dependent, so we need to think about which information should be known by which stakeholders. The AI Act was mentioned as falling short on accountability and on ways for individuals to monitor whether they are being affected by an AI system. The impact of AI on gender bias and the challenges of complying with both the GDPR and the AI Act were also discussed.

AI models and accountability

The explainability of AI models was another topic of discussion, including the trade-off between explainability and performance. It was noted that explainability should start from the user’s perspective, asking what the user would need and how it can be made explainable, rather than being just a checkbox requirement. Technical and social explainability were differentiated, together with their impact on decision-making in fields such as healthcare. The example used to illustrate this difference was a system predicting cancer with a 60% probability. How would this number affect the surgeon’s decision-making? Who would take accountability? How would patients be impacted by predictions? What if a surgeon, even in a scenario where the AI predicts cancer with 99% probability, still decided to operate based on their own experience? These are all things we need to unravel.

The role of regulations like the GDPR in limiting or supporting AI development was also debated. For now, we have to rely on the GDPR, for example, because AI development moves fast and often leverages training datasets gathered without people’s consent.

The discussion’s final note was that all of these considerations (and most likely more, I will add) need to be brought up at the political and board level to shape the responsible AI landscape.