How might Generative AI redefine society? A deep dive into future scenarios

When you speak of AI today, most people think of ChatGPT, Perplexity, Gemini, Sora and the slew of tools at our disposal that seem to weave magic through words, images and even videos. One of the most prominent questions I get when I tell someone I work in AI is ‘Will AI take our jobs?’, posed increasingly often since the advent of GenAI. With even the Supreme Court of the Netherlands using LLMs (the large language models behind text-based GenAI) to simplify complex legal documents (see previous article), it is no longer possible to ignore GenAI, which will likely push governments to take a harder stance on AI and its societal impact, and to frame regulations accordingly. In such an environment, how does the intersection of societal acceptance of GenAI with regulatory restrictions play out? This foresight study, conceptualised by Sanne Janssen, served as the foundation for a workshop led by Lilian de Jong and Sanne Janssen, and was the highlight of the inaugural Women in AI Oost NL circle event.

A Topic Relevant for Many

When I attended my first WAI Circle event, I was so nervous that I was early by an hour. This time, however, though now seasoned, I was late despite leaving home early. Thank you, traffic jam! Hence, I completely missed the introductions of the Oost NL Circle Lead Parul, of WAI, of Avisi, which was kind enough to host the event, and of the speakers of the day, Lilian and Sanne. The crowd, with participants from IT and data science as well as ethics and philosophy, reflected the multidisciplinary nature of the workshop. It was also a pleasure to bump into Piek Knijff, the speaker at the inaugural event of the Rotterdam Circle, while we were figuring out how to get into the building.

If you are familiar with the Dutch AI ethics scene, there is only a slim chance you haven’t heard of Lilian, the co-founder of the Dutch AI Ethics Community. With an MSc in Artificial Intelligence, she works as a Foresight Researcher at The Netherlands Study Centre for Technology Trends (STT). In keeping with her commitment to ethics, as a nominee for the WAI Benelux Responsible AI Leader award she gave a shoutout to her fellow nominees, a welcome change in a world of cutthroat competition. The other speaker, Sanne, is a Master’s student in AI at Radboud University, writing her thesis on ‘The future effects of generative AI on the relationships in society between companies, governments, and citizens’ at STT. In fact, the matrix that formed the basis of the workshop is part of Sanne’s thesis.

About Implications and Regulations

Having mentioned diverse future scenarios multiple times, it is now time to describe what those scenarios are. Recall that we are discussing regulation and acceptance. Each can be split into two levels: light versus heavy regulation, and low versus high acceptance. Their intersections, i.e. light regulation-low acceptance, light regulation-high acceptance, heavy regulation-low acceptance and heavy regulation-high acceptance, are the four possible futures that we imagined and discussed (see the figure provided with the article). Naturally, the audience was split into four groups, one per scenario. But before you get into the highlights of the discussion, pause for a moment, look at the matrix and ask yourself: which scenario do you think we are in today? Sanne’s view will be revealed at the end.

To guide the discussion on the different scenarios, three questions were given that helped the participants break the ice and interact with one another: What are 2-3 positive long-term implications of GenAI in this scenario? What are 3 negative long-term implications of GenAI in this scenario? And how do these implications impact the dynamics within society, considering the relationships and interactions between citizens, the government, and companies? Members of each team wrote their thoughts on sticky notes, which were pasted under the respective questions. One representative per team then read out these answers.

The Need for Education and Awareness

Staying true to my background in mathematics, let me start with the first quadrant: heavy regulation and high acceptance. If you are unfamiliar with quadrants, we start from the top-right quarter and go anti-clockwise. According to the participants, this scenario would lead to increased trust in the government, along with enhanced usage and convenience. The biases that AI and GenAI seem to amplify can be kept in check, thanks to the regulations. However, if you have ever heard the comparison of how the electricity used to train GenAI models could power households or even villages in less privileged parts of the world, expect such concerns to be exacerbated by the high acceptance and the usage that follows. It could also happen that someone who doesn’t want to use GenAI finds it forced on them. At a societal level, the participants felt that education is required, along with awareness of the regulations. In such a highly regulated environment, people can also end up being overly dependent on the government.

Levels of Acceptance in AI

Coming to the second quadrant, heavy regulation and low acceptance, the participants felt that the negatives of GenAI, such as privacy breaches and plagiarism, would be addressed. Also, any development that happens would have to meet strict standards. On the other hand, innovation, quintessential to the development of any field, would be hard hit, while the government could steer progress in whatever direction it deems fit. The speed of development would also suffer. Due to low acceptance among the masses, companies would not be keen on investing in GenAI. The participants also felt that there would be a lack of trust in the government. On a lighter note, it was observed that there would be less ChatGPT-authored content.

The third quadrant focuses on light regulation coupled with low acceptance. In this case, the participants felt that something else might have replaced GenAI, as people adopt whatever is new pretty easily. Also, due to the light regulation, there might be more innovation coupled with more open-source code. The flip side of this uninhibited progress is that it might focus more on research and less on applications benefiting humanity, due to the low acceptance. Investments in such technology might then fall, leading to economic stagnation and possibly a recession. The participants also felt that light regulation might lead to abuse of open source, which certain tech giants have been accused of indulging in. From a societal standpoint, the participants felt that such a scenario could lead to a lack of transparency and trust.

The Risks and Consequences Involved

Finally, we arrive at the last quadrant, which focuses on high acceptance and light regulation. Here the participants felt that such a situation could lead to new jobs, with human labour becoming more valuable. The best use cases could also be discovered more quickly, and there would be more accessibility and equality. On the negative side, this world would be rife with misinformation and deepfakes, with biases and discrimination further entrenched. There could be a loss of valuable skills and growing inequality, with the potential of everyone becoming further prejudiced, possibly culminating in societal collapse. At a societal level, the participants felt that an increased acceptance of GenAI could reduce social interactions while making us more isolated. Also, with information available at their fingertips, young people may miss out on a character-building stage of life while failing to recognise safety risks. The participants opined that such a situation could spiral into doomsday.

Sanne feels that our current situation shows the most similarities with the fourth quadrant. While this scenario offers plenty of opportunities for innovation, it also poses significant risks, such as potential misuse in the form of deepfakes, misinformation and unchecked biases. It is easy to feel a sense of defeat when analysing these risks, but we should not resign ourselves to the potential long-term negative consequences of the unchecked penetration of GenAI. In fact, we need to recognise that we are active participants in the very system that we enjoy and occasionally fear. The future is not set in stone, and we can shape it. And the very first step to creating impact is engaging in active dialogue, which is what this workshop aimed to spark.

If you want to meet AI enthusiasts and discuss such topics, listen to experts from across the AI world, or simply network with a diverse group of professionals, then do check out the Circle Events page of Women in AI and reserve your spot for the next one. And when you do attend and meet me, ask yourself whether I used ChatGPT (GenAI) to write this article. What do you think?
