Malicious Life Podcast: Hacking Language Models
Language models are everywhere today, and most interestingly, they are available through several experimental projects that try to emulate natural conversation, such as OpenAI’s GPT-3 and Google’s LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data? Check it out...