The power of algorithms: Risks of AI-driven recommendations

Algorithmic recommendations, such as those provided by streaming services like Spotify and Netflix, and to some extent also by generative AI like ChatGPT, are the result of algorithmic analysis of your previous choices, combined with an attempt to understand the choices made by people like you.
According to Mikko Vesa, AI platforms can be used to organise operations in three ways. There are platforms that organise an entire company, such as Wolt or Bolt, where the algorithm assigns tasks to employees. There are also platforms that shape the market, the classic example being Airbnb, which created a whole new market and controls it.
"We have looked at a third dimension where you only have the individual and the AI tool. The AI responses are reflected back only on the individual, and after a while this can have consequences. For example, if we only communicate with an AI system it can weaken our ability to act morally. Normally, we get feedback from other people on the choices and actions we make," says Vesa.
Vesa also sees a risk in how our knowledge and abilities develop. The question is whether we retain any knowledge and abilities of our own if we constantly consult AI systems in our work.
"AI also creates a certain degree of isolation. Already now, coding often involves arguing with ChatGPT to create a code that works. Many initiatives to resocialise employees after the COVID-19 pandemic are virtually void if employees are reduced to work with AI systems."
One consequence of isolation is an increased feeling of loneliness, which in turn can lead to further reliance on AI-driven systems.
"If you can no longer find your place in society, you can find an algorithmically created group of people with a social identity that you may feel connected to, but you can never really be sure that this group even exists in real life!"
Societal implications of algorithmic recommendations
The societal consequences of an increasing degree of algorithmic recommendations can be far-reaching. A democratic society is built on finding a system where people with different views and values can coexist through some form of compromise.
"If you never have to face people who think differently, you get a very strong bubble effect. We see this, for example, in the US, the UK, and Germany, where there is a heavy polarisation of ideologies. These groups are almost not able to have a dialogue anymore."
Vesa notes that the Nordic countries have so far avoided this phenomenon because there is a strong culture of shared values and norms.
"Small countries generally have strong shared values and norms, and for the Nordic countries, this is certainly the case. We also have a trust-based society where we feel we can trust each other and the governance. So far, this has protected us, but who knows for how long?"
How can we mitigate the negative consequences of AI's strong presence?
According to Vesa, a counter-reaction requires a collective experience that things have gone too far. This doesn't happen easily, however; the situation must first become quite extreme.
"The flight from Twitter/X that we've seen over the past year might be an example of when a collective resistance is created against what we are being fed. But when it comes to algorithmic recommendations, the degree of pessimism among researchers varies as to whether a counter-reaction will come at all."
Read more about Mikko Vesa's and Frank den Hond's research in their article in the scientific journal Organization: Mirror, mirror on the screen, “Wherein can I find me?” – On the sublime qualities of AI recommendation systems, algorithm conformity, and the else
Mikko Vesa was one of the speakers at the event How is AI impacting work-life and organisations today? at Hanken School of Economics on 15 May 2025. Read more about the event: Petri Kokko at Hanken Insights: “Learning is the true superpower in the AI era”
Text: Marlene Günsberg
Photo: Private, Matilda Saarinen