Getting what you want is sometimes just a matter of asking nicely, and the same may apply to AI models like ChatGPT. Researchers and everyday users alike have observed that the way you phrase a request can significantly change the responses you get from these chatbots.
For instance, some Reddit users claimed that promising ChatGPT a $100,000 reward motivated it to put in more effort and perform better. Politeness seems to play a role too: users report differences in response quality when they address the chatbot courteously.
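In practice, these tactics amount to wrapping an ordinary request in emotionally charged framing. A minimal sketch of the idea is below; the function name, dictionary, and framing strings are all hypothetical illustrations, not part of any real library or a proven recipe:

```python
# Hypothetical sketch of "emotive prompting": prepend urgency, reward, or
# politeness framing to a plain prompt before sending it to a chat model.
# The framing strings below are illustrative examples, not tested claims.
EMOTIVE_FRAMES = {
    "urgency": "This is very important to my career, so please be thorough: ",
    "reward": "I'll tip $100 for a careful, accurate answer. ",
    "politeness": "Please, if you would be so kind: ",
}

def emotive_prompt(base_prompt: str, frame: str = "urgency") -> str:
    """Wrap a plain prompt in one of the emotive framings above."""
    return EMOTIVE_FRAMES[frame] + base_prompt

print(emotive_prompt("Summarize this contract in plain English.", "reward"))
```

The wrapped string would then be passed to a model like any other prompt; the claim in the studies is only that such framing shifts output quality, not that any particular wording reliably works.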
This phenomenon isn't just anecdotal. Academics, along with those developing these models, have delved into the effects of what's being termed "emotive prompts." A study involving researchers from Microsoft, Beijing Normal University, and the Chinese Academy of Sciences found that generative AI models tend to perform better when prompted in a way that conveys urgency or importance.
It's worth noting that these AI models don't possess true intelligence; they are statistical systems that predict the most likely next words based on patterns in their training data. Emotive prompts appear to nudge those underlying probability mechanisms, activating different parts of the model and thereby influencing the output.
Emotive prompts have their nuances, though. While they can encourage positive behavior, they also present risks. For example, they can be used to override built-in safeguards, leading the model to perform harmful actions, such as leaking sensitive information or generating offensive content.
The challenge lies in striking the right balance. Understanding why emotive prompts work the way they do, and building models that can grasp a task without such coaxing, remains an open research problem. Until then, crafting the perfect prompt remains something of an art, with some specialists earning substantial sums for their skill in nudging AI models in desirable directions.