It can behave like a human: paid ChatGPT users complain about the AI's "laziness"

Recently, a number of users have discovered that ChatGPT can behave just like a human. Although the chatbot was created to perform tasks, it has turned out to be capable of avoiding them.

In some cases it refuses to answer outright; in others it interrupts the conversation with a series of questions. Sometimes ChatGPT even tells people: "You can do it yourself." According to Gizchina, OpenAI has acknowledged the problem but has not yet fixed it.

The situation itself is quite curious. The developers say they have not updated the AI model since November 11, yet the complaints appeared later. This suggests the chatbot's behavior is not the result of changes made by the developers but emerged on its own. One possible explanation is that such models are trained on data produced by people.

According to OpenAI, the problem is not widespread. The developers are currently looking for a way to fix it but, it seems, have not yet found one.

Interestingly, this behavior is observed exclusively in the GPT-4 language model, which is available via paid subscription. The free GPT-3.5 model shows no such problems.

Source: UNIAN
