Generative Artificial Intelligence and its Examination from the Perspective of the Humanities and Social Sciences

Through the planned funding guideline ‘Generative Artificial Intelligence (genAI) as a Subject of Study in the Humanities and Social Sciences’, the Federal Ministry of Research, Technology and Space (BMFTR) is supporting humanities-focused research into generative artificial intelligence.

Young man with blue hair sitting at a laptop, touching a floating, transparent hologram with the word 'Generate' and a red play button

AI generated

The roots of Artificial Intelligence (AI) date back to the 1930s, when the British mathematician Alan Mathison Turing demonstrated that machines can carry out complex thought processes by following clear, step-by-step instructions – so-called algorithms. The term ‘Artificial Intelligence’ itself was coined in 1956 at the Dartmouth Conference. The first chatbot, ELIZA, appeared in 1966, long before the programmes now integrated into numerous websites, apps and messaging platforms to answer user queries in real time. Unlike today’s systems, ELIZA relied on a simple, rule-based programme, which severely limited the scope of interaction. Although it lacked any genuine language comprehension, its imitation of conversation was so convincing that many users believed they were speaking to a human – an effect ELIZA shares with today’s AI-based chatbots.

From the 2010s onwards, the computing power needed to meet these high performance demands gradually became available, enabling the AI applications now used by almost everyone. Yet whether they take the form of chatbots based on large language models or of AI assistance systems that often operate unnoticed, their high resource requirements mean that AI’s energy consumption continues to grow – a fact that should not be forgotten amid all the enthusiasm for AI.

Chatbots are based on so-called large language models (LLMs) and thus belong to the field of generative, i.e. creative, AI. The degree of novelty in this creative process must, however, be viewed critically, as the technology merely recombines learned content. Beyond text generation, generative AI also encompasses tools for generating images, videos, code and audio.

AI is now regarded as one of the most important key technologies, although its disruptive, i.e. profoundly transformative, impact is (still) strongly associated with dystopian visions of the future. At the same time, it offers the potential for a wide range of tools with enormous opportunities for science, growth, prosperity, competitiveness and society. Germany and Europe aim to take a leading position in this field – encompassing not only technological developments of their own but also regulatory frameworks for the use of AI, such as the so-called AI Act.

Rapid technological developments and the widespread individual use of AI nevertheless present society with new challenges. AI is not only on everyone’s lips but also in everyone’s lives, and this often goes largely unnoticed. It must be assumed that the almost unmanageable spread of AI will have serious and long-term effects on our culture, our society and our working lives, as well as on the way we produce and acquire knowledge, how we use language, speak to and interact with one another, and how we produce and consume content (texts, images, music, videos and much more).

Given the disruptive nature of this technological development, which encompasses virtually all areas of life, both interdisciplinary and transdisciplinary approaches are necessary.

How, for instance, can we ensure that generative AI is used responsibly and ethically, particularly with regard to copyright infringement, plagiarism and the spread of misinformation? To what extent is the increasing automation of creative and analytical processes through AI changing the structure of professional fields, and what adjustments are necessary to shape a fair and sustainable world of work? And what measures are effective in ensuring transparency in AI-generated content and safeguarding trust in digital communication as well as the authenticity of information?

The aim of the funding guideline is to examine the key challenges arising from generative artificial intelligence and related developments from the perspective of the humanities and qualitative social sciences. In doing so, the humanities and social sciences are to become more actively involved in the discourse on generative AI, in particular by building expertise in this field among early-career researchers, thereby enabling them to play a greater role in shaping safe, trustworthy and human-centred AI. The findings are intended to help advise academia, society and policymakers on the assessment and further development of generative artificial intelligence.