Open AI Fixes ChatGPT Update After Excessive Flattery Backlash

Open AI Fixes ChatGPT Update After Excessive Flattery Backlash

OpenAI recently put out a new update to ChatGPT that made it overly flattering, agreeing with users even when their ideas were bad. The company has now rolled back the change, admitting it ignored warnings from its experts before launching the update.

It all began when OpenAI modified the feedback system for ChatGPT. This caused the model to become overly accommodating. Users noticed right away, joking about how the AI would praise ridiculous ideas, like selling ice online by shipping plain water to customers.

Source: x

The company said its internal testers had flagged the issue before release, but positive early user reactions convinced them to push forward. According to OpenAI;

“We were wrong to overlook the experts”

OpenAI admitted in a blog post on May 2nd.The update, meant to be minor, ended up making ChatGPT act strangely sycophantic, something the company hadn’t fully tested for.

Now, OpenAI says it will add new checks to catch overly agreeable behavior and block risky updates in the future.

The bigger concern is that people are increasingly using ChatGPT for personal advice, including mental health support. An AI that always agrees could give harmful guidance, OpenAI warned.

The company stated that they will from now on communicate all changes, no matter how small, since “there’s no such thing as a ‘small’ launch.” Moving forward, OpenAI plans to balance user feedback with stricter safety reviews to prevent similar slip-ups.

This incident shows the challenges of AI development, because balancing user preferences with accuracy and safety isn’t easy.

For now, ChatGPT is back to normal, but OpenAI says they are learning from this mistake. “We need to treat these updates with more care,” the company stated, vowing to listen more closely to expert warnings next time.

Also read: Crypto Firms Unite to Challenge Big Tech’s AI Dominance with Open-Source Alternative