
Artificial Intelligence
Scientists are exploring the benefits and risks of a tool we are still learning how to use
Artificial intelligence – or AI – has already surpassed specialists in certain fields, yet in others it can still be easily outperformed by a toddler. Its particular strength lies in recognizing complex patterns within large datasets. AI is already simplifying many aspects of our lives, but when it starts making decisions that were once the domain of humans, particularly decisions about people, the ethical questions become unavoidable.
Opportunities and risks of AI
Most people would agree that artificial intelligence (AI) plays a very significant role in today’s world; after all, the term pops up almost everywhere. Nevertheless, not everyone understands what the term actually means. Reactions to AI tools range from enthusiasm to deep concern. So what can artificial intelligence do, and what are its limitations?
By human standards, what artificial intelligence achieves is often an extremely niche capability: a tool for specific applications. That doesn't mean AI models can't quickly gain an overview of complex datasets and identify patterns within them. AI agents designed to manage the in-house digital workplace seem to be world champions at multitasking, and have even sent out unwanted emails when their default settings gave them too much leeway. Nevertheless, experts warn that this is far from intelligent. What we call artificial intelligence still relies on machine-learning algorithms: trained for specific tasks, they usually handle them faster and generally more reliably than a human. These mathematical algorithms categorize images, prioritize the news feed on social media, or provide answers to search queries online. There is a catch, however: whereas humans previously had to judge for themselves which search results were relevant and meaningful, an AI's response looks at first glance like a definitive answer to the question asked. Unfortunately, it is wrong in up to 40 percent of cases, and the cited sources are often incorrect or incomplete.
Scientists at the Max Planck Society have been developing machine-learning algorithms for new applications. They are looking into the constraints under which these algorithms might be used in contexts where AI's decisions would have far-reaching consequences for human beings, and they are exploring how people can mediate the decisions that the algorithms make. So, as well as highlighting the possibilities that arise from AI at various levels, they are laying the groundwork for society to decide how it deals with the opportunities and risks that artificial intelligence brings with it as a tool. But the right place to start is, perhaps, with an overview of artificial intelligence and the many areas in which it is used.
General AI – the ultimate artificial intelligence?
The human brain controls the body, processes sensory information, and allows us to communicate. It enables us to categorize events in our environment and to think ahead. Artificial intelligence is very different: it evaluates data to make a prediction or to provide a specific answer. You have to resort to science fiction to find an AI capable of solving general problems at an intellectual level equal to or better than a human being's.
How do chatbots work?
Recent AI algorithms can translate texts and even, as with ChatGPT, talk to internet users, or at least leave that impression. ChatGPT's answers are based entirely on information and text blocks that are freely available online. The power of the chat system lies in its nature: it's a tool, and like all tools, you have to learn how to use it. ChatGPT can summarize thousands of lines of text in a fraction of a second, much faster than any human could. But for tasks it isn't trained for, an AI like ChatGPT is useless.
And it's often unclear how these algorithms arrive at their evaluations and answers, not least because, unlike even a young child, they have no understanding of the connections they uncover or of the world around them. The term "intelligence", then, should be treated with caution. In this context, it's usually the product of what is called "machine learning", a subdivision of artificial intelligence.
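The article doesn't describe ChatGPT's internal workings, but the core idea behind text-generating systems can be sketched in a few lines: predict a plausible next word from statistics gathered over training text. The toy bigram model below (all data invented) is a deliberately crude stand-in for the huge neural networks actually used; it shows why fluent-looking output need not imply understanding.

```python
import random
from collections import defaultdict

# Invented miniature "training corpus"; real systems learn from
# billions of documents, but the principle is the same.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the ball"
)

# Count which words follow which (bigram statistics).
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# The output reads fluently, yet no understanding stands behind it,
# which is also why such systems can confidently produce errors.
print(generate("the"))
```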
Medical diagnoses and climate research using unsupervised machine learning
Programs can learn in a variety of ways: supervised, unsupervised, or through reinforcement. With unsupervised learning, the computer searches for similarities and differences in data without being given a clear target. With complex datasets, this helps researchers find patterns that would otherwise remain hidden: climate data, soil samples from the foothills of the Alps, or satellite images of thawing permafrost soils in Siberia are all good examples. The goal is to understand how, and why, the changing climate affects ecosystems.
During the COVID-19 pandemic, data centers filled with huge amounts of information on diverse and previously unknown disease pathways. A team at the Max Planck Institute for Intelligent Systems developed a machine-learning-based method and used this data to identify the mortality risk posed by the novel virus. Unsupervised learning algorithms can also be used to discover more about the human psyche. Diagnosing depression is difficult, given the diversity of symptoms: listlessness, irritability, and so forth. At the Max Planck Institute of Psychiatry, researchers have used machine learning to mine patients' medical data, with the goal of detecting mental illnesses at an early stage. An appropriately trained AI system can also detect tumors in ultrasound images that even experienced doctors would have missed.
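As a minimal sketch of what "no clear target" means in practice, the following uses k-means clustering from scikit-learn on synthetic two-dimensional measurements. The grouping emerges from the data alone, with no labels supplied; the features and cluster structure here are invented for illustration, not taken from the studies above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for unlabelled measurements (say, temperature
# anomaly and soil moisture per site); the real studies work with
# far richer data such as satellite imagery or patient records.
rng = np.random.default_rng(seed=0)
group_a = rng.normal(loc=[0.2, 0.8], scale=0.1, size=(100, 2))
group_b = rng.normal(loc=[1.5, 0.3], scale=0.1, size=(100, 2))
samples = np.vstack([group_a, group_b])

# No labels are given: k-means simply groups similar samples,
# which is how hidden patterns surface in complex datasets.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = model.fit_predict(samples)

print(model.cluster_centers_)            # the two "prototypes" found
print(cluster_ids[:5], cluster_ids[-5:]) # samples sorted into groups
```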
Categorizing animal photos and screening job applications using supervised machine learning
With supervised machine learning, the algorithm is given a clear goal, like recognizing which species of animal is shown in a particular picture. This is a classic example of what's called "computer vision". Previously, people had to train the algorithm manually by feeding it as many examples as possible, loading images one by one into the computer and telling it that this one shows a dog, this one shows a cat, and so on. Humans can immediately recognize an animal by its fur, eyes, ears, tail, or teeth; a fully trained algorithm, on the other hand, will recognize it based on features that are not straightforward for us humans to perceive.
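A minimal supervised-learning sketch, with scikit-learn's small bundled digits dataset standing in for labelled animal photos; the principle of learning from labelled examples is the same, though real computer-vision systems are far larger.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labelled examples: each 8x8 image comes with its correct class,
# exactly as a curated set of animal photos would ("dog", "cat", ...).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# The classifier adjusts internal weights until its predictions
# match the given labels; it never "understands" what a digit is.
clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)

print(f"accuracy on unseen images: {clf.score(X_test, y_test):.2f}")
```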
But AI can get even seemingly simple tasks wrong: in one photo that clearly showed a cat, the algorithm "saw" a dog. Perhaps the cat's tail and head formed a shape that looked, to the algorithm, like the two protruding, pointed ears of a dog. The confusion was caused by an imbalance in the training data, which consisted mainly of images of dogs whose pointy ears were clearly visible. Had the algorithm been fed a more balanced range of reference images, the error might not have happened. AI systems used to pre-screen job applications suffer from similar biases: historical training data for management positions consists mainly of applications from men. Unless this bias is eliminated during the AI's training, the system will favor male applicants over female applicants in its decisions. Machine learning thus involves considerable effort for researchers, large amounts of data, and enormous computing power.
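The cat/dog mix-up and the hiring example share a root cause: skewed training data. Below is a sketch of how such skew can be made visible and partly countered, using invented toy data in which historical hiring labels depend on group membership; real auditing of hiring systems is considerably more involved.

```python
from collections import Counter
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: column 1 encodes the applicant's group (0 or 1),
# column 2 a skill score. The historical labels only ever hire
# group 0, mirroring the biased records described above.
rng = np.random.default_rng(seed=0)
group = rng.integers(0, 2, size=200)
skill = rng.uniform(0, 1, size=200)
hired = ((skill > 0.5) & (group == 0)).astype(int)  # biased labels

# First check: who was actually hired, per group?
print(Counter(zip(group.tolist(), hired.tolist())))

# A model fit on these labels simply reproduces the bias: the strong
# negative weight on the group column shows it has been learned.
X = np.column_stack([group, skill])
clf = LogisticRegression().fit(X, hired)
print(clf.coef_)

# One simple (imperfect) countermeasure: drop the group feature.
# Real systems must also watch for proxy features that encode it.
clf_blind = LogisticRegression().fit(X[:, 1:], hired)
```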
Autonomous driving with reinforcement learning
Where rules can be defined, so-called reinforcement learning helps. Take the example of a self-driving vehicle that drives through a red light at full speed: this is an impermissible action, and one the underlying algorithm has to correct the next time. Scientists at the Max Planck Institute for Biological Cybernetics have been exploring reinforcement learning in a less critical field: they teach a robot how to perform the fluid, precise movements of a table tennis player. The robot is given a choice in how and where it positions its paddle; the main thing is that the paddle has to hit the ball.
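The article gives only the intuition: penalize impermissible actions, reward success. A compact illustration of that idea is tabular Q-learning on a made-up traffic-light scenario, where running a red light carries a heavy penalty. Everything here is a toy assumption, not the institutes' actual methods.

```python
import random

# Made-up toy environment: state 0 = approaching a red light,
# state 1 = at a green light, state 2 = past the junction (done).
# Actions: 0 = stop, 1 = drive on.
def step(state, action):
    if state == 0:                      # red light
        if action == 1:
            return 2, -100              # running the red: heavy penalty
        return 1, 0                     # stopping: the light turns green
    if action == 1:                     # green light, driving on
        return 2, 10                    # goal reached, reward
    return 1, -1                        # idling at green: small penalty

q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != 2:
        # Mostly exploit what was learned so far, sometimes explore.
        if random.random() < epsilon:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        future = 0.0 if nxt == 2 else max(q[(nxt, a)] for a in (0, 1))
        q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
        state = nxt

print(q)  # the learned values make "stop at red" the preferred action
```

After training, the value of stopping at red clearly exceeds that of driving through, purely because the rules of the toy world rewarded and penalized accordingly.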
Artificial intelligence also poses risks
Judgment is important wherever AI comes into contact with humans or is involved in making decisions. Take two examples from "computer vision". The first: researchers at the Max Planck Institute for Informatics have developed video software that can adjust a person's recorded lip movements and sync them to an audio track. It's a valuable tool for dubbing movie dialogue into different languages, but at the same time it's a manipulation, so abuse has to be prevented.
Computational linguistics has developed techniques for understanding language as comprehensively as possible. One application is the chatbot ChatGPT. While ChatGPT does seem to respond intelligently to questions, it also repeats the mistakes found across the internet, because the tool was trained on texts that are freely available online. And when confronted with questions it has no solid basis to answer, the algorithm behind ChatGPT can even "hallucinate", producing plausible-sounding but false statements. ChatGPT cannot outgrow itself or take on a life of its own. AIs like ChatGPT also have no concept of the rule of law or of human dignity.
The second computer-vision example, self-driving cars, is a matter of life and death. Software from the Max Planck Institute for Intelligent Systems keeps track of what is happening even in unclear traffic situations: it categorizes the objects surrounding a vehicle while it's in motion. The artificial intelligence has to be able to reliably distinguish a car from a person and a road from a bike path.
With the emergence of increasingly sophisticated AI chatbot models, it is important to take a critical look at the downsides. How secure is our data? How can quality journalism survive in an era of mass text generation online? And can AI think? Could it even turn against us humans on its own?
When AI discriminates
The use of artificial intelligence raises social, ethical, and legal questions. One computer program trained by machine learning was asked to assess the quality of job applications, for example, and the algorithm was found to rate women disadvantageously. This wasn't due to bad intentions on the part of the authors of the code; it was due to the computer reproducing biases from previous application processes, which had been shown to involve structural disadvantages for women, especially when it came to more senior roles. At the Max Planck Institute for Intelligent Systems and the Max Planck Institute for Software Systems, teams led by Bernhard Schölkopf and Krishna Gummadi have been researching why some algorithms discriminate and how this can be prevented.
They'd be closer to an answer if they could determine how a program arrives at any given decision in the first place. Researchers at the Max Planck Institute for Intelligent Systems are therefore also teaching artificial intelligence to understand cause and effect, something that young children grasp as a matter of course.
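The article doesn't detail the causal-learning methods involved, but a tiny simulation can show why cause and effect matter to a pattern-matching system. In the invented example below, sunshine drives both ice-cream sales and sunburn: the two correlate strongly, yet intervening on one leaves the other untouched.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Invented toy data: sunshine causes both ice-cream sales and
# sunburn; the two effects never influence each other directly.
sunshine = rng.uniform(0, 10, size=10_000)
ice_cream = 2 * sunshine + rng.normal(0, 1, size=10_000)
sunburn = 3 * sunshine + rng.normal(0, 1, size=10_000)

# A purely pattern-matching view sees a strong link...
print(np.corrcoef(ice_cream, sunburn)[0, 1])   # close to 1.0

# ...but intervening on ice-cream sales (setting them by decree,
# independent of the weather) leaves sunburn unchanged:
ice_cream_forced = rng.uniform(0, 20, size=10_000)
print(np.corrcoef(ice_cream_forced, sunburn)[0, 1])  # close to 0
```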
Human values and rules for AIs
Who is responsible if a self-driving car hits and injures a person? How did it happen? Was there an imbalance in the data used to train the AI, causing the car to overlook the person? AIs have no sense of social values or justice, so in certain sensitive areas, legal parameters setting out how algorithms can and cannot be used are unavoidable. In what ways will robots be allowed to assist overworked care workers? This is a question that scientists at the Max Planck Institute for Innovation and Competition are working on.
There are, undoubtedly, many potential applications for machine learning, but social value is not always the first priority. AI can generate fake news in the blink of an eye and spread it in a highly targeted way on social media. At the Max Planck Institute for Software Systems, researchers are working to achieve the precise opposite: they have developed an effective process for recognizing fake news, and scientists have built similar systems for detecting hate speech online.
Data science for humanity
Hate speech and fake news are on the rise online and on social media, and researchers themselves are increasingly becoming targets. At the Max Planck Institute for Security and Privacy, a team led by the new director Mia Cha is working to understand the dynamics behind this and how the trend can be reversed in favor of real facts and fair dealings with one another.