9 to 5 Nightmares
We talk about misconduct so you can avoid it! Join hosts Amy Warren and Micole Garatti as they walk through some recent and alarming workplace misconduct scandals!
Grok AI Generating Fake Nudes, Racist Posts, & Viral Food Safety Violations: #9to5Nightmares Ep 21
It’s now 2026, and the new year brings breaking news of fresh misconduct scandals. This month, #9to5Nightmares hosts Amy Warren and Micole Garatti focus on several high-profile social media scandals from around the world.
In today’s tech-savvy digital world, new types of misconduct can go viral and scale quickly. January’s episode starts by unpacking a viral trend in which people are using X’s AI chatbot, Grok, to generate non-consensual sexually explicit images of women and children, raising widespread concerns over online privacy, human rights, and workplace misconduct.
The episode continues by exploring additional scandals, including a celebrity doctor under review by medical boards over social media misconduct, a barista who went viral for serving a customer a drink she made with her bare hands, and findings from a recent Fama report on Financial Services.
These cases emphasize the importance of robust policies and protocols to prevent workplace harassment and misconduct, and highlight the need for organizations to review and update their codes of conduct and harassment policies.
Conclusion
In today’s hyper-connected, digital world, the lines between personal online behavior and professional life continue to blur, making organizations vulnerable to a new wave of misconduct that can go viral and cause swift, severe damage to reputation and public trust.
From the weaponization of AI to generate non-consensual images, to blatant food safety violations, and the re-emergence of hateful rhetoric online, these cases underscore a critical need for organizations to proactively safeguard their workplaces and brand. It is no longer enough to react; companies must update their policies and implement robust due diligence strategies, like Fama’s social media screening solution, to consistently identify and mitigate employee and candidate risks before they become a nightmare scenario.
For a deep dive into these scandals and practical steps your organization can take, listen to #9to5Nightmares Ep 21: Grok AI Generating Fake Nudes, Racist Posts, & Viral Food Safety Violations below or on Spotify.
If you want to understand how organizations are identifying online behavioral risk earlier and protecting employees, customers, and their reputation, explore Fama’s solutions while you listen.