Risks of artificial intelligence: When Google responds like a conversation partner

Google registers 5.6 billion searches every day. Users google everything: sex, diseases, murder weapons. The company has accumulated a gigantic store of knowledge about humanity. But however much users reveal about themselves with their queries, the algorithm often knows little about what they are actually looking for. A search engine is built on the assumption that you are looking for something specific, such as a doctor's phone number or the opening hours of a restaurant.
Search engines are very good at extracting the essential information from the mass of data on the web. Search robots, so-called web crawlers, index billions of pages, and queries against that index are answered in fractions of a second. Anyone who googles the question "Why is the price of oil rising?" will find a meaningful article in the list of results. When it comes to semantic search, however, computer programs struggle. Google cannot tell us which sights to visit in Lisbon or why the sky is blue.
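To see why, it helps to recall how keyword-based retrieval works in its simplest form. The following Python sketch (with made-up documents and a made-up query, not Google's actual pipeline) finds pages that share words with the query – but it has no notion of what the question means:

```python
# Minimal sketch of keyword-based retrieval; documents and query are invented examples.
from collections import defaultdict

documents = {
    1: "oil price rising because opec cuts supply",
    2: "restaurant lisbon opening hours and menu",
    3: "sky appears blue due to rayleigh scattering of sunlight",
}

# Build an inverted index: word -> ids of the documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query: str) -> list[int]:
    """Rank documents by how many query words they contain."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, set()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("why is the price of oil rising"))  # document 1 ranks first: it shares the most keywords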

The customer has to make do with the machine

Former Google boss Eric Schmidt once called this the "hot dog" problem: the algorithm doesn't know whether the user is looking for a snack or a dog that is hot. And it doesn't learn that even after the millionth search query. Analysts therefore already see the end of the search engine approaching. In their view, the future lies in chatbots – automated scripts that answer user questions with ready-made text modules.
Anyone who shops online knows this. It is becoming increasingly rare to get a human employee on the line, and when you do, they answer like a bot. So the customer has to make do with the machine. A sales assistant answers in a chat window: "Hello, I'm your sales assistant. How can I help you?" Then you type a question or a keyword into the input field, and answers promptly pop up in the chat history. Thousands of sales consultations are run this way today. Chatbots are available around the clock, never call in sick and have no moods.
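Under the hood, many of these bots are little more than keyword matching against ready-made text modules. A minimal, hypothetical sketch in Python – the keywords and replies are invented for illustration, not taken from any real shop system:

```python
# Hypothetical scripted sales chatbot: match keywords in the customer's
# message to ready-made text modules; everything here is invented for illustration.
CANNED_REPLIES = {
    "delivery": "Standard delivery takes 2-3 working days.",
    "return": "You can return items within 30 days of purchase.",
    "price": "All prices are shown including VAT on the product page.",
}
FALLBACK = "Sorry, I did not understand that. Could you rephrase your question?"

def reply(message: str) -> str:
    """Return the first canned module whose keyword appears in the message."""
    text = message.lower()
    for keyword, module in CANNED_REPLIES.items():
        if keyword in text:
            return module
    return FALLBACK

print(reply("How long does delivery take?"))  # -> the canned delivery answer
print(reply("Tell me about your warranty."))  # -> fallback: no keyword matches
```

A script like this never gets tired, but it also never understands anything beyond its keyword list.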

Answers from the perspective of a paper airplane

Google also wants to make more use of this tool. At its I/O developer conference last year, the company presented its own chatbot. LaMDA, as the language model is called, is based on the Transformer neural network architecture and was trained on masses of dialogue. In a demonstration, the system answered questions from the perspective of the dwarf planet Pluto and of a paper airplane, as if it were embodying the objects themselves. Asked what it is like to be tossed through the air, the paper airplane replied with a sentence that sounds almost poetic: "The wind is blowing against you and the trees flying by are quite a sight."
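LaMDA itself is not publicly available, but the idea of conditioning a dialogue model on a persona can be sketched with openly available tools. The snippet below uses the Hugging Face transformers library with the small GPT-2 model as a stand-in – an illustration of the prompting pattern, not Google's system:

```python
# Sketch of persona-conditioned text generation; GPT-2 stands in for LaMDA,
# which is not publicly available, so expect far less coherent answers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "You are a paper airplane. Answer in the first person.\n"
    "Question: What is it like to be tossed through the air?\n"
    "Answer:"
)

result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```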
Years ago there was already a search engine that tried to answer questions directly: Ask Jeeves, in which a virtual English butler answered queries using algorithms and human intelligence. However, the internet company slid into the red after the dot-com bubble burst, and users eventually stopped asking it questions. Even a relaunch was largely unsuccessful. But the dream of an intelligent answering machine is not over.

Language AI is not an understanding therapist

Google's first employee, software developer Craig Silverstein, envisioned a search engine that would answer questions like the computer in Star Trek. Amazon founder Jeff Bezos, also a science-fiction fan, drew inspiration from the TV series when developing the Echo smart speaker. More than half of all online searches are now made by voice command. To decode voice commands better, tech companies are investing billions in natural language processing. The US journalist and author John Battelle writes in his book "The Search" that search is the key to the development of artificial intelligence (AI). A search engine that not only finds but understands what we are asking for would come very close to our idea of intelligence.
But computational linguists warn: the better the answers, the greater the illusion that a language AI is an understanding therapist. That can be dangerous, especially for psychologically unstable users, such as people at risk of suicide. Many users already take search results at face value. A dialogue system that responds to individual questions could carry even more authority – even though it only simulates understanding. This raises the question of how objective language models can be.

When the AI becomes racist

When asked "What is the ugliest language in India?" last year, Google replied in its "Fact Box" – an information panel that appears above the search results – "The answer is Kannada, a language spoken by around 40 million people in southern India." The search result caused outrage among speakers of the language – and was embarrassing insofar as Google boss Sundar Pichai himself comes from India. Google apologized for the incident and fixed the error. But the example makes it clear that machines can also discriminate.
Scientists have repeatedly demonstrated racial bias in language AIs. The high-performance text generator GPT-3, for example, has linked Muslims to terrorists. Language models thus also reproduce prejudices and stereotypes found in society. Virtual assistants such as Siri and Alexa are now programmed to block lewd questions or deflect them to a web search.
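In its crudest form, such a guardrail is just a blocklist that deflects flagged questions to a web search instead of answering them. The sketch below is purely illustrative – real assistants rely on far more sophisticated classifiers, and the terms and replies here are invented:

```python
# Purely illustrative guardrail: refuse flagged questions and deflect to a web search.
BLOCKED_TERMS = {"insult", "slur", "lewd"}

def answer_normally(question: str) -> str:
    # Placeholder for the assistant's real answering pipeline.
    return f"(normal answer to: {question})"

def assistant_reply(question: str) -> str:
    """Refuse and deflect if the question contains a blocked term."""
    if any(term in question.lower() for term in BLOCKED_TERMS):
        return "I won't answer that, but here are some web results for your query."
    return answer_normally(question)

print(assistant_reply("Tell me a lewd joke."))  # -> deflected to a web search
```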
To prevent such embarrassments, Google's director of engineering Ray Kurzweil proposed a few years ago a search engine that answers questions without the user having to ask them explicitly. This "cybernetic friend" would know the user so well from search queries, emails or voice commands that it would proactively provide answers on health issues or business strategies. Google's Smart Compose feature already completes sentences in emails automatically. Perhaps Google will soon predict not only the next sentences and questions, but also diseases. But if you no longer have to ask questions because Google already knows everything, at some point you stop asking critical questions at all.
