Artificial Intelligence | Conversational robots increasingly convincing


(San Francisco) The young California company OpenAI has released a conversational robot (chatbot) capable of answering a wide variety of questions, but whose impressive performance is reviving the debate over the risks associated with artificial intelligence (AI) technologies.

Conversations with ChatGPT, shared notably on Twitter by fascinated users, show a seemingly omniscient machine, capable of explaining scientific concepts, writing a theater scene, drafting a university essay… or even producing perfectly functional lines of computer code.

“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” Claude de Loupy, head of Syllabs, a French company specializing in automatic text generation, told AFP.

“When you start asking very specific questions, ChatGPT can miss the mark,” but its performance remains “really impressive” overall, with a “fairly high linguistic level,” he said.

OpenAI, co-founded in San Francisco in 2015 by Elon Musk (the Tesla boss left the company in 2018), received $1 billion from Microsoft in 2019.

It is known in particular for two automated-creation programs: GPT-3 for text generation and DALL-E for image generation.

ChatGPT is able to ask its interlocutor for clarification, and “has fewer hallucinations” than GPT-3, which despite its prowess can produce completely aberrant results, notes Claude de Loupy.

Cicero

“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today, they are much better at reacting consistently based on the history of requests and responses. They are no longer goldfish,” notes Sean McGregor, a researcher who compiles AI-related incidents in a database.

Like other programs based on deep learning, ChatGPT retains a major weakness: “it has no access to meaning,” points out Claude de Loupy. The software cannot justify its choices, that is, explain why it assembled the words forming its answers in one way rather than another.

Still, AI-based technologies capable of communicating are increasingly able to give the impression that they are really thinking.

Meta (Facebook) researchers recently developed a computer program named Cicero, after the Roman statesman.

The software has proven itself at Diplomacy, a board game that requires negotiation skills.

“If it doesn’t speak like a real person — showing empathy, building relationships, and talking about the game properly — it won’t be able to build alliances with other players,” a statement from the social media giant said.

Character.ai, a start-up founded by ex-Google engineers, put an experimental chatbot online in October that can take on any personality. Users create characters from a brief description and can then “converse” with a fake Sherlock Holmes, Socrates or Donald Trump.

“Simple machine”

This degree of sophistication fascinates, but also worries many observers, who fear these technologies could be misused to deceive humans — by spreading false information, for example, or by creating increasingly credible scams.

What does ChatGPT “think” about this? “There are potential dangers in building ultra-sophisticated chatbots. […] People might think they’re interacting with a real person,” the chatbot acknowledged when questioned on the subject by AFP.

Companies are therefore putting safeguards in place to prevent abuse.

On its home page, OpenAI warns that the conversational agent can generate “incorrect information” or “produce harmful instructions or biased content.”

And ChatGPT refuses to take sides. “OpenAI has made it incredibly difficult to get it to express opinions,” says Sean McGregor.

The researcher asked the chatbot to write a poem about an ethical issue. “I’m just a machine, a tool at your disposal / I have no power to judge or make decisions […],” the computer replied.

“Interesting to see people wondering whether AI systems should behave the way users want them to or the way their creators intended,” OpenAI co-founder and boss Sam Altman tweeted on Saturday.

“The debate over what values to give these systems is going to be one of the most important a society can have,” he added.


