Science fiction loves to play with the trope of artificial intelligence going rogue, or simply failing when it matters most. One of the finest examples is Psycho-Pass, a story that follows the flaws of the Sibyl System, an AI crime-prevention network operating in a futuristic Japan. The story is set in 2112, but modern technological advances suggest that future may be closer than it seems.
In Psycho-Pass, Sibyl has access to people and their communications almost everywhere. Using complex algorithms that take into account everything a person does, it calculates a score indicating how likely that person is to commit a crime. The system also decides which job would suit each individual best based on their abilities, government officials included.
AI is not yet that widespread in our lives, and there are certainly good aspects to its use. No tool is universally bad; what matters is how it is applied.
But one of the most prominent uses of AI as a controlling force is the recommendation algorithm. On our social networks and streaming services, we are fed not the content we find interesting, but the content the system has decided should interest us. Sure, there is some learning behind it, and the creators of these algorithms say they cater to your tastes based on what you have previously liked, viewed, or listened to. But the logic of the recommendation system is hidden behind a multitude of NDAs, and it is impossible to tell whether it has been tampered with.
Sibyl took away humanity's free will, and this is exactly what we are seeing now, just on a smaller scale. We can't properly curate our timelines on Twitter and Instagram, bombarded as we are by posts from people we don't know and ads for things we're not remotely interested in; we don't know whether the shuffle function in Spotify is actually random or is quietly following orders to promote a specific artist or track that at least vaguely aligns with what we've been listening to.
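The worry about shuffle is easy to make concrete: a genuinely fair shuffle (the classic Fisher-Yates algorithm) takes only a few lines, and so does a biased variant that quietly favors one track — and from the outside, a listener could not tell them apart on any single playthrough. This is a hypothetical sketch, not Spotify's actual code; `promoted_shuffle` is an invented illustration.

```python
import random

def fair_shuffle(tracks):
    """Fisher-Yates shuffle: every ordering is equally likely."""
    tracks = list(tracks)
    for i in range(len(tracks) - 1, 0, -1):
        j = random.randrange(i + 1)  # pick uniformly from the unshuffled prefix
        tracks[i], tracks[j] = tracks[j], tracks[i]
    return tracks

def promoted_shuffle(tracks, promoted):
    """Hypothetical biased variant: shuffle fairly, then quietly
    move one promoted track into the first few slots."""
    tracks = fair_shuffle(tracks)
    if promoted in tracks:
        tracks.remove(promoted)
        tracks.insert(random.randrange(min(3, len(tracks) + 1)), promoted)
    return tracks

playlist = ["track_" + str(n) for n in range(10)]
print(promoted_shuffle(playlist, "track_7"))  # "track_7" always lands in the top three
```

The output of either function is a plausible-looking shuffled list; only statistics over many shuffles would reveal the bias — which is exactly why opacity matters.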
The rise of social networks brought people together and let us meet and connect with those far away who might be our soulmates. Early recommendation systems were based on simple notions like "people who listen to this artist also listen to that one" and were more precise in their suggestions. The modern approach, however, is driven more by money than by people and their impressions.
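That early "people who listen to this also listen to that" idea is simple co-occurrence counting, and it fits in a dozen lines. A minimal sketch, with invented listening histories for illustration:

```python
from collections import defaultdict

# Hypothetical listening histories; the users and artists are invented.
histories = {
    "alice": {"Radiohead", "Portishead", "Massive Attack"},
    "bob":   {"Radiohead", "Portishead"},
    "carol": {"Radiohead", "Daft Punk"},
}

def co_listened(artist, histories):
    """Rank other artists by how often they appear in the
    same listening history as `artist`."""
    counts = defaultdict(int)
    for liked in histories.values():
        if artist in liked:
            for other in liked - {artist}:
                counts[other] += 1
    return sorted(counts, key=counts.get, reverse=True)

print(co_listened("Radiohead", histories))  # "Portishead" ranks first (co-occurs twice)
```

The appeal of this approach is its transparency: the suggestion is explainable from the data alone, with no hidden objective — unlike a modern engagement-optimized ranker.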
But AI is flawed and cannot grasp all the intricacies of human preferences, even with all the computational power available to it. Or maybe it can, and it is already running somewhere we can't detect it? And, unlike Sibyl, is being used in secret?
Should we be scared of AI?