The genius and quirkiness of ChatGPT

Like most sci-fi fans, I’ve spent a lot of time wondering how society will welcome true artificial intelligence, if and when it arrives. Will we panic? Will we begin to flatter our new robot masters? Are we going to ignore it and carry on with our lives as if nothing had happened?

So it’s been fascinating to watch the Twittersphere try to make sense of ChatGPT, a new state-of-the-art AI chatbot that was opened up for public testing last week.

ChatGPT is, quite simply, the best AI-powered chatbot ever released to the general public. It was designed by OpenAI, the San Francisco AI company that is also responsible for tools like GPT-3 and DALL-E 2, the revolutionary image generator released this year.

Like these tools, ChatGPT – GPT standing for “generative pretrained transformer” – was a hit. In five days, more than a million people signed up to test it, according to Greg Brockman, president of OpenAI. Hundreds of screenshots of ChatGPT’s conversations have gone viral on Twitter, and many of its early admirers are talking about it in astonished and grandiose terms, as if it were a mixture of software and sorcery.

For most of the past decade, AI-powered chatbots have been terrible – they were only impressive if you cherry-picked the bot’s best responses and tossed the rest. In recent years, a few AI tools have become good at performing narrow, well-defined tasks, such as writing marketing copy, but they still tend to break down when taken outside their comfort zone. (That’s what happened when my colleagues Priya Krishna and Cade Metz used GPT-3 and DALL-E 2 to come up with a menu for a Thanksgiving meal.)

But ChatGPT is different. Smarter. Weirder. More flexible. It can write jokes (some of which are genuinely funny), computer code, and college-level essays.

It can also guess at medical diagnoses, create text-based Harry Potter games, and explain scientific concepts at multiple levels of difficulty.

The technology that powers ChatGPT is not, strictly speaking, new. It’s based on what the company calls “GPT-3.5,” an improved version of GPT-3, an AI-powered text generator that attracted a great deal of interest when it was released in 2020. But while the existence of a high-performing linguistic super-brain may be old news to AI researchers, this is the first time such a powerful tool has been made available to the general public through a free, easy-to-use web interface.

Most of the ChatGPT exchanges that have gone viral so far have been zany, lighthearted stunts. One Twitter user asked it to “write a Bible verse in the style of the King James Bible on how to get a peanut butter sandwich out of a VCR.”

Another asked it to “explain AI alignment, but write every sentence in the style of a guy who keeps going off topic to brag about the size of the pumpkins he’s grown.”

But users have also found more serious applications for it. For example, ChatGPT seems to help programmers spot and fix errors in their code.

It also seems to be very good at answering the kinds of open-ended analytical questions that frequently appear in school assignments. (Many educators have predicted that ChatGPT, and other tools like it, will spell the end of homework assignments and take-home exams.)

Most AI chatbots are “stateless,” meaning they treat each new request like a blank slate and aren’t programmed to remember or learn from their previous conversations. But ChatGPT can remember what a user has told it before, which could make it possible to create personalized therapy bots, for example.
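
To make the distinction concrete, here is a minimal sketch of the difference, in Python, between a stateless bot and one that keeps a running conversation history. It is my own illustration, not OpenAI’s code; the `generate_reply` function is a hypothetical stand-in for whatever model call sits underneath.

```python
# Hypothetical sketch (not OpenAI's code): generate_reply() stands in for
# whatever text-generation model sits underneath the chat interface.
def generate_reply(messages):
    """Pretend model call: returns a reply given a list of (role, text) turns."""
    return f"(model reply based on {len(messages)} turns of context)"

# A "stateless" bot: every request starts from a blank slate.
def stateless_bot(user_message):
    return generate_reply([("user", user_message)])

# A conversational bot: earlier turns are replayed with every new request,
# so the model can refer back to what the user said before.
class ConversationalBot:
    def __init__(self):
        self.history = []  # accumulated (role, text) turns

    def send(self, user_message):
        self.history.append(("user", user_message))
        reply = generate_reply(self.history)
        self.history.append(("assistant", reply))
        return reply

bot = ConversationalBot()
bot.send("I get anxious before exams.")
print(bot.send("What did I just tell you?"))  # the first turn is still in the history
```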

The limits of the bot

ChatGPT is far from perfect. The way it generates answers – in extremely simplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet – makes it prone to giving wrong answers, even on seemingly simple math problems. (On Monday, moderators at Stack Overflow, a website for programmers, temporarily banned users from submitting answers generated with ChatGPT, saying the site had been flooded with submissions that were incorrect or incomplete.)
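
As a toy illustration of that idea (a drastic simplification of my own, not how ChatGPT actually works internally), here is a word-level bigram model in Python: it counts which words tend to follow which in a tiny corpus, then “writes” by repeatedly sampling the next word from those counts, with no notion of whether the result is correct.

```python
import random
from collections import defaultdict

# Toy illustration only: real systems like ChatGPT use a large neural network
# over subword tokens, but the core idea is similar: predict the next token
# from a probability distribution learned from example text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which words follow which (duplicates make frequent pairs more likely).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # probabilistic guess at the next word
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```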

Unlike Google, ChatGPT doesn’t crawl the web for news, and its knowledge is limited to what it learned before 2021, which makes some of its answers feel stale. (When I asked it to write the opening monologue for a late-night show, for example, it offered several topical jokes about former President Donald Trump’s withdrawal from the Paris climate agreement.) Since its training data includes billions of examples of human opinion, representing every conceivable viewpoint, it is also, in some sense, a moderate by design. Without being explicitly prompted to do so, for example, it is hard to get ChatGPT to take a firm stance on charged political debates; in general, you’ll get an even-handed summary of each side’s opinions.

There are also plenty of things that ChatGPT won’t do, as a matter of principle. OpenAI has programmed the bot to refuse “inappropriate requests,” a nebulous category that appears to include things like generating instructions for illegal activity.

But users have found ways around many of these safeguards, including recasting a request for illicit instructions as a hypothetical thought experiment, asking the bot to write a scene from a play, or instructing it to disable its own safety features.

The potential societal implications of ChatGPT are too large to fit into a single column. Perhaps this is, as some commentators have claimed, the beginning of the end of all white-collar knowledge work and a precursor to mass unemployment. Maybe it’s just a nifty tool that will mostly be used by students, Twitter jokesters, and customer service departments until it’s supplanted by something bigger and better.

Personally, I’m still trying to wrap my head around the fact that ChatGPT – a chatbot that some say could make Google obsolete, and that is already being compared to the iPhone in terms of its potential impact on society – isn’t even OpenAI’s best AI model. That distinction belongs to GPT-4, the next incarnation of the company’s large language model, which is slated for release sometime next year.

We are not ready.

This article was originally published in the New York Times.


