
The ‘multi-talented’ chatbot responds like a real person


OpenAI’s new chatbot is attracting attention, with many Internet users believing it can write almost anything, from essays to movie scripts. On December 1, OpenAI released the chatbot, called ChatGPT, and invited Internet users to try it out. The product quickly made an impression with its writing ability, but it also revealed the limitations typical of AI systems.

ChatGPT was developed from the GPT-3 model, but trained to answer in a conversational style. The original GPT-3 predicted which words would follow a given sequence, while ChatGPT tries to respond to the user’s question in a more human way. As a result, its answers read far more naturally, and its conversational ability marks a major improvement over the chatbots of only a few years ago.

OpenAI was founded in 2015. Photo: Reuters

Some people have posed questions to the program and described ChatGPT’s responses as “perfect.” It could even write a soap-opera script combining characters from different kinds of films. ChatGPT also produced basic academic essays, posing a challenge for schools and universities in the future.

However, ChatGPT has the same problem as other chatbots: it presents false information as if it were true.

Researchers say chatbots like ChatGPT are a form of “stochastic parrot”: their knowledge comes from patterns repeated in their training data, rather than from genuine understanding and abstraction.

OpenAI explains that ChatGPT was developed with the help of human trainers, who ranked and graded the chatbot’s responses to questions. This feedback was then fed back into the system, allowing the AI to adjust its answers according to the trainers’ evaluations.
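The ranking-and-feedback loop described here can be sketched in a few lines of Python. This is only a toy illustration of the general idea (rankings become reward scores, and higher-scoring answers are preferred later), not OpenAI’s actual training pipeline; all function and variable names are assumptions for illustration.

```python
# Toy sketch of trainer-ranking feedback (illustrative only, not
# OpenAI's real reinforcement-learning code).

def rewards_from_ranking(ranked_responses):
    """Turn a best-to-worst ranking into simple reward scores.

    The top-ranked response gets the highest score; each lower-ranked
    response gets one point less.
    """
    n = len(ranked_responses)
    return {resp: n - i for i, resp in enumerate(ranked_responses)}

def pick_response(candidates, reward_table):
    """Choose the candidate with the highest reward score,
    defaulting to 0 for responses the trainers never ranked."""
    return max(candidates, key=lambda r: reward_table.get(r, 0))

# A trainer ranks three candidate answers from best to worst.
ranking = ["helpful answer", "vague answer", "wrong answer"]
table = rewards_from_ranking(ranking)

# Given some of the same candidates later, the system now prefers
# the response the trainer rated highest.
best = pick_response(["wrong answer", "helpful answer"], table)
print(best)  # helpful answer
```

In the real system the reward scores train a separate reward model that scores new, unseen responses, but the basic loop is the same: human preference in, adjusted behavior out.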

The developers said the goal of announcing ChatGPT was to collect outside feedback in order to improve the system and make it safer. They emphasized that the AI includes many precautions, but it can still sometimes produce inaccurate or misleading information, such as giving false facts. “The chatbot’s knowledge of the world after 2021 is limited, and it will try to avoid answering questions about specific people,” OpenAI noted.

Another weakness appears when users try to get the chatbot to ignore its protective measures. If asked about a dangerous subject such as making a bomb, ChatGPT will explain why it cannot answer. However, users can deceive it with tricks, such as pretending to be a character in a movie, or asking ChatGPT to write out a list of things that should not be done.

Overall, ChatGPT shows massive improvements over earlier AI systems, but there are still issues to be dealt with. The experience shows the difficulty researchers face in trying to perfect AI systems, as well as the range of problems that can appear as more powerful capabilities are granted to advanced AI.

(According to The Verge)

Written by hoangphat

