
“When you flatter it, it performs better”: why does ChatGPT love compliments as much as we do?




Would ChatGPT be more likely to give detailed answers if you are polite to it, or if you offer it money? That is what users of the social network Reddit claim. “When you flatter it, ChatGPT performs better!” some of them say, surprised. One user even reports having offered a $100,000 reward to the famous chatbot developed by the Californian company OpenAI, which allegedly encouraged it to “put in much more effort” and “work much better,” as reported by the American outlet TechCrunch.

The point has also been raised by researchers from Microsoft, Beijing Normal University and the Chinese Academy of Sciences, who detailed in a report published in November 2023 that generative AI models perform better when prompts are phrased politely or convey that something is at stake.

For example, with sentences like “It is crucial that I succeed in my thesis defense” or “This is very important for my career,” the chatbot produces more complete answers to the user's question. “Hypotheses like these have been circulating for several months. Before the summer, users were already insisting that ChatGPT gave more detailed answers when it was offered tips,” notes Giada Pistilli, a philosophy researcher and head of ethics at the start-up Hugging Face, which champions open-source artificial intelligence resources.
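To make the experiment concrete, here is a minimal sketch of how one might compare a plain prompt against the same prompt with an “emotional” suffix, in the spirit of the report. It assumes the official openai Python SDK and an OPENAI_API_KEY in the environment; the model name, question and stimulus phrasing are illustrative, not taken from the researchers' protocol.

```python
# Minimal sketch: comparing a plain prompt with an "emotionally charged" variant,
# in the spirit of the November 2023 report described above.
# Assumes the official `openai` Python SDK (>= 1.0) and OPENAI_API_KEY set in
# the environment; model name and phrasing are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Summarize the main arguments for and against nuclear power."
EMOTIONAL_STIMULUS = "This is very important for my career."

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain = ask(QUESTION)
charged = ask(f"{QUESTION} {EMOTIONAL_STIMULUS}")

# A crude proxy for "more complete": answer length. The researchers scored
# task benchmarks; counting words is just a quick way to eyeball the effect.
print(len(plain.split()), "words (plain)")
print(len(charged.split()), "words (with emotional stimulus)")
```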

This behavior on the machine's part originates in the data it is fed. “Think of that data as a huge block of text in which ChatGPT reads and discovers that two people have a much richer conversation when they are polite to each other,” explains Giada Pistilli. “If the user asks a question courteously, ChatGPT picks up on that scenario and responds in the same tone.”

“Models like ChatGPT work on probability. Given a sentence to complete, the model searches, based on its training data, for the most likely continuation,” adds Adrian-Gabriel Chifu, doctor of computer science and lecturer-researcher at Aix-Marseille University. “It is just mathematics; ChatGPT depends entirely on the data it was trained on.” The chatbot is thus also capable of being almost aggressive in its replies, if the user phrases a request brusquely and the way the model was built allows it to respond in kind.
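To illustrate Chifu's point, here is a toy sketch of probability-based completion: a hypothetical bigram “model” that simply counts, in a tiny corpus, which word tends to follow which. Real models like ChatGPT learn a far richer version of this distribution with neural networks trained on vast corpora, but the principle of “look for the most likely continuation” is the same.

```python
# Toy illustration of next-word probability: count how often each word
# follows each other word in a tiny corpus, then turn counts into
# probabilities. Real LLMs learn this kind of distribution with neural
# networks, not lookup tables; this is only the principle made visible.
from collections import Counter, defaultdict

corpus = (
    "please explain the result . "
    "please explain the method . "
    "explain the result now ."
).split()

# For each word, count the words that immediately follow it.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_probs(word: str) -> dict[str, float]:
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("explain"))  # {'the': 1.0}
print(next_word_probs("the"))      # {'result': 0.67, 'method': 0.33}
```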

In their report, the researchers from Microsoft, Beijing Normal University and the Chinese Academy of Sciences even speak of “emotional incentives.” Under this term they group the ways of addressing the machine that push it to “simulate, as far as possible, human behavior,” as Giada Pistilli analyzes.

For ChatGPT to embody a role suited to the answer being sought, all it takes is certain keywords, which trigger different personifications of the machine. “If you offer it a tip, it perceives the action as an exchange of services and takes on that service role,” she offers as an example.

Depending on the request, the chatbot can quickly adopt, in the minds of some users, the conversational tone of a professor, a writer or a filmmaker. “Which makes us fall into a trap: treating it like a human being and thanking it for its time and help as we would a real person,” the philosophy researcher notes.

The danger is to take this simulated behavior, produced by the machine's purely literal computation, for genuine “humanity” on the robot's part. “Anthropomorphism has been pursued in chatbot development since the 1970s,” recalls Giada Pistilli, citing the chatbot ELIZA as an example.

That artificial intelligence program, designed by computer scientist Joseph Weizenbaum in the 1960s, simulated a psychotherapist. “In the 70s it did not go very far in its answers: a person could tell ELIZA they had problems with their mother, only for it to reply simply, ‘I think you have problems with your mother?’” notes Giada Pistilli. “But that was already enough for some users to humanize it and grant it power.”
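The trick Pistilli describes can be reproduced in a few lines: ELIZA matched simple sentence patterns and mirrored them back as questions. The sketch below is a drastically simplified, hypothetical reconstruction of that mechanism, not Weizenbaum's actual script.

```python
# ELIZA-style reflection, drastically simplified: match a statement pattern
# and mirror it back as a question, swapping pronouns. Weizenbaum's actual
# script had many patterns and ranked keywords; this shows only the core trick.
import re

REFLECTIONS = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my mother" -> "your mother").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    match = re.search(r"i have (.*)", utterance, re.IGNORECASE)
    if match:
        return f"I think you have {reflect(match.group(1))}?"
    return "Please, tell me more."

print(eliza("I have problems with my mother"))
# -> I think you have problems with your mother?
```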

“Whatever happens, AI largely follows our lead, depending on what we ask of it,” notes researcher and lecturer Adrian-Gabriel Chifu. “Hence the need to always be careful in how we use these tools.”

Pop culture regularly invokes the idea that intelligent robots could one day be mistaken for humans. The manga “Pluto,” by the Japanese author Naoki Urasawa, published in the 2000s and adapted as a Netflix series in 2023, is no exception: in it, robots live alongside humans, go to school and hold jobs just as humans do, and telling the two apart becomes increasingly difficult.

In episode 2, one robot is even surprised that another, more advanced one manages to enjoy ice cream with the same pleasure as a human. “The more I pretend, the more I feel like I understand,” it explains. On this point, researcher Adrian-Gabriel Chifu thinks back to the famous test devised by mathematician Alan Turing, which set out to measure an artificial intelligence's ability to imitate humans. “Turing was asking to what extent a machine could fool humans with the data at its disposal... and that was in 1950,” he concludes, pensively.
