
The lies and dreams of machines can pose real-world risks. As Flascher writes: ‘ChatGPT recently invented a sexual harassment scandal, naming a real law professor as the accused (citing a fake Washington Post article as evidence in support of the allegation). Not only did no such article exist, but the real professor had never been accused of harassing a student, nor had he been present on the trip to Alaska described by the chatbot during which the purported sexual harassment took place.’
Flascher explores potential defamation scenarios arising from generative AI and analyses how they might be dealt with under current law—read the full article here.