(San Francisco) The Californian start-up OpenAI, which at the end of 2022 launched the hugely successful ChatGPT interface, capable of generating all kinds of text on command, on Tuesday unveiled GPT-4, a new version of the generative artificial intelligence technology that powers the famous chatbot.
“GPT-4 is a large multimodal model, less capable than humans in many real-world scenarios, but as good as humans in many professional and academic settings,” the company said in a statement.
“For example, it passes the bar exam with a score in the top 10%. The previous version, GPT-3.5, scored in the bottom 10%,” the company said.
ChatGPT has aroused much enthusiasm, but also controversy, since it is freely accessible and used by millions of people around the world to write essays, lines of code, and advertisements, or simply to test its capabilities.
OpenAI, which has received billions of dollars from Microsoft, has thus established itself as the leader in generative AI with its text-generation models, as well as in image generation with its DALL-E program.
Its CEO, Sam Altman, recently explained that he is now working towards so-called “general” artificial intelligence, that is, programs with human-level cognitive abilities.
“Our mission is to ensure that general AI – AI systems generally smarter than humans – benefits all of humanity,” he said on the company’s blog on February 24.
Multimodal capabilities are a step in this direction.
Unlike previous versions, GPT-4 is endowed with vision: it can process images as well as text. However, it only generates text.
It will be available through ChatGPT, though without the ability to provide it with images for the moment.
OpenAI also points out that despite its capabilities, “GPT-4 has similar limitations to previous models”: “It is not yet fully reliable [it invents facts and makes reasoning errors].”
The company says it has hired more than 50 experts to assess new dangers that could emerge, in cybersecurity for example, in addition to the already-known risks (generation of dangerous advice, faulty computer code, false information, etc.).