The researchers are using a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against one another: one chatbot plays the adversary and attacks another chatbot by generating text to force it to bu