AI - how does it work?

The concept of AI (artificial intelligence) refers to a range of technologies. One type of AI is generative, which means that it can generate text, sound, video, and images. Below you can read about how AI works, as well as the ethical and environmental issues that the technology entails.

What is AI?

Artificial intelligence (AI) is a concept that refers to a variety of technologies used for many different purposes. One example is music recommendations generated from what you have listened to previously. Different types of AI are also used in healthcare, for example in diagnostics. Read more about KI's research and use of AI here.

A broad and informal definition of AI would be: AI is an umbrella term for technologies developed to solve complex problems that cannot be handled by regular step-by-step programming. Instead, AI uses neural networks, large language models, or other similar methods, together with large amounts of training data.

What is Generative AI?

Generative AI is a type of artificial intelligence that can create, or generate, new content, such as text, sound, or images. Chatbots like ChatGPT, Copilot, and other genAI search tools employ an AI technology called large language models (LLMs).

A large language model is a type of AI that learns patterns in human language and is trained on large amounts of text. It is a statistical model of how human language is structured and what human-written texts usually look like. When you ask a chatbot a question, it does not know the answer the way a person would. Instead, it calculates the most likely response based on your input and the patterns it has learned. In short, chatbots and other AI search tools base their responses on probability calculations.
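The idea of predicting text from learned word patterns can be illustrated with a toy sketch. This is not how real large language models are implemented (they use neural networks over enormous datasets, not word counts), but the principle — predict the next word from statistics of the training text — is the same. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" (real models train on billions of words).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # "on" — the only word seen after "sat"
```

Notice that the model "knows" nothing about cats or mats; it only counts which words co-occur, which is why such a model can produce fluent but false statements.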

When a large language model gives an answer that does not match reality, the result is often called a "hallucination". This term highlights that the model can produce convincing but incorrect information. However, the model behaves the same whether its response is true or false; it simply generates text based on patterns in language and what words commonly appear together.

How AI works

Quick facts

  • The AI technology behind generative chatbots and AI search tools such as ChatGPT, Copilot, Elicit, and SciSpace is called large language models (LLMs).
  • Large language models are based on statistics and probability. They produce text by predicting the words likely to come next in a sentence, based on patterns in the data they were trained on. As a result, they may reproduce factual errors, stereotypes, and biases.
  • A large language model has no concept of what is true or false; it has only been trained to generate language. It can therefore generate text that corresponds to reality, but it can also generate content that is completely incorrect.
  • Since a large language model cannot evaluate or assess what is important or apply common source criticism when selecting sources, it cannot summarise a text, only shorten it. A summary depends on context, and selecting the most important points for a specific context is not always easy or objective.
  • Different AI tools have been trained on different texts and have access to different data. For example, the answers produced by chatbots such as ChatGPT and Copilot will be generated based on many different sources of text, not only scientific ones. If you are looking for scientific information, preferably use AI search tools that generate answers based on scientific material.  
  • AI tools are not reproducible; the same prompt in the same tool may give a different answer each time.

What AI tools can you use?

Different types of generative AI tools have been designed for different tasks, so some are better suited for certain tasks than others. Here is some general advice when choosing tools:

  • Make sure to choose an AI tool made for the specific purpose you want to use it for. For translation, for example, choose a dedicated tool such as DeepL rather than a general-purpose tool like ChatGPT.
  • Use AI to explore topics and as a complement to traditional ways of finding information.  
  • Always double-check the information generated by AI.
  • Be aware of factual errors, stereotypes, and biases in content produced by AI tools; for example, content from the Western world and about men is often overrepresented.
  • Be curious and try out different tools.

Ethical aspects

There are several ethical aspects associated with AI use. These include copyright, concerns about the companies that produce these tools, social justice, and environmental impact.

AI and copyright

Because of the way generative AI tools are built and trained, it is often unclear who owns the content they generate. Questions remain about who holds the rights to AI-created material:

  • Is it the creator of the training data? Creators of the texts and other materials used as training data have so far received neither recognition nor compensation, which has led to many legal disputes globally.
  • Is it the creator of the algorithm? Most companies behind AI tools, such as OpenAI, do not claim copyright on AI output.
  • Is it the creator of the prompt? Many countries have stated that they do not grant copyright to material generated by AI.

Who is behind these tools?

Most major AI tools were developed by large companies with commercial interests and specific values or ideologies. These influences can be reflected in the responses the tools generate.

  • OpenAI is the American company behind ChatGPT and the GPT-n language models (GPT-3, GPT-4, etc.) on which it is based.
  • Another chatbot, DeepSeek, was developed by a Chinese company of the same name.

ChatGPT and DeepSeek have been trained on different datasets and developed using distinct design choices and refinement processes. They will therefore sometimes give different results, which may reflect different political views, or cultural or ideological perspectives. The same applies to all AI tools: some tools treat certain topics in medicine and nursing as “taboo” and blacklist them, so that related terms are not searchable. Furthermore, training data may unintentionally include biased or distorted information, which may cause AI tools to produce discriminatory texts, for example against people of color, minority groups, and women.

Although many AI tools are available free of charge, we pay with our data, both with the text we enter when we prompt the tool and with the personal information required to register. For this reason, always think carefully about the data you choose to share. For example, sensitive information like patient data must never be shared.

AI and society

AI may influence society and social justice in many ways.

  • In many instances, training data has been taken without permission, using the work of individuals and small businesses who receive neither credit nor compensation.
  • To shield users from exposure to disturbing content found in massive training datasets, such as text or imagery depicting war, torture, and abuse, the data has often been 'cleaned' by low-paid workers enduring poor working conditions.

AI and the environment

AI tools have multiple environmental impacts.

  • Significant quantities of rare minerals are needed to produce the hardware that supports AI technologies.
  • Large amounts of energy are needed to power them.
  • They require large amounts of water to cool the servers when in use. For instance, generating just 5 to 50 prompts can consume about half a liter of water for cooling.
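The water figure above can be turned into a rough per-prompt estimate. This is back-of-the-envelope arithmetic on the stated range (about half a litre per 5 to 50 prompts), not a measured value:

```python
# Rough estimate from the figure above: ~0.5 litres of cooling water
# per 5-50 prompts (actual consumption varies by model and data centre).
water_per_batch_ml = 500           # about half a litre, in millilitres
prompts_low, prompts_high = 5, 50  # the stated range of prompts

per_prompt_high = water_per_batch_ml / prompts_low   # worst case per prompt
per_prompt_low = water_per_batch_ml / prompts_high   # best case per prompt

print(f"Estimated cooling water: {per_prompt_low:.0f}-{per_prompt_high:.0f} ml per prompt")
```

In other words, each prompt may account for somewhere between roughly 10 and 100 millilitres of cooling water under this estimate.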

Because generative AI consumes significant amounts of energy, it is advisable to use these tools thoughtfully. Try to avoid using them just out of habit; use them when you have a clear goal and expect useful results. Also, keep in mind that generating AI images uses much more energy than generating text. You can learn more about how KI contributes to sustainable development here.

Quick facts

  • The source of the data used to train AI tools is often not transparent. These tools are typically developed by large corporations driven by profit and influenced by their own ideologies, which can shape their output.
  • Copyrighted material has often been used as training data without permission, giving rise to legal proceedings.
  • To prevent users from encountering disturbing content that may exist in the training data, the data is often 'cleaned' by low-paid workers operating under poor working conditions.
  • AI tools consume substantial amounts of energy, both during their development and when we use them.
Keep in mind!

You are always responsible for your own learning and what you produce in your studies.

Make sure you do so with academic integrity, that is, be transparent about how you use AI tools and do not use them more than is permitted for your course.

Do not share personal information, sensitive data or copyrighted material with the tools.

Last updated: 2025-08-29