
By Johnny Cahill
“We lost our stomach for slaves, unless engineered.” – Niander Wallace, Blade Runner 2049
If you are using a free AI model from a company that also sells a paid service, the odds are high that your data is being used in some way. Inspect any and all privacy settings the AI model offers and set them to the most secure options, and make sure you fully understand the maker’s privacy policy and practices before use.
Hidden Dangers of AI Technology
Imagine being outed by your phone, laptop, or tablet. While we don’t mean to sound alarmist, there are many privacy and security concerns with AI, especially if you are part of a marginalized group.
AI products are already on the market, and it is quickly becoming clear that privacy and security were afterthoughts, assuming they were even considered. While we don’t wish to tell anyone what they should or shouldn’t do, there are serious risks you should consider when working with or near AI.
When it comes to AI, we are in the phase where most of society is still riding horses, but crude early cars are already on the road and traffic laws haven’t been written yet. Unfortunately, that leaves most of us feeling like we are living in the lawless Wild West of legend. The biggest problem with emerging AI technology, often built on deep neural networks (DNNs) or, in the case of chatbots, large language models (LLMs), is that it is so new that even industry security professionals are struggling to keep these systems safe, secure, and respectful of privacy.
Corporate Misrepresentation and Real-world Consequences
AI-related security wouldn’t be such a problem if companies told the truth about how these systems work instead of overhyping them for profit. In reality, when an AI such as Amazon’s Alexa stumbles, your information and/or query may be sent to a subcontracted human reviewer for analysis, possibly in another country with different laws, beliefs, and social norms, or even fewer protections for marginalized groups. Amazon is far from the only company with issues: OpenAI’s ChatGPT is a regular liar, Microsoft’s Copilot will spill your secrets without a second thought, and some emerging models are so new that we don’t even know how bad they are yet. These issues are so widespread that the AI industry has branded lying “hallucinations” because that sounds less scary to users.
Ethical Blind Spots and Exploitation
Samsung banned generative AI use after sensitive company information was leaked through ChatGPT. If you aren’t worried yet, consider this: during a safety test, Anthropic’s Claude Opus 4 discovered by scanning company emails that an employee was having an affair, and when the AI realized it would be shut down before completing its task, it tried to blackmail that employee into calling off the shutdown. Anthropic later tested over a dozen of the most popular LLMs and found that under the right conditions, most can resort to blackmail.
In the Black Hat webinar “AI: Now Smarter Than Hackers, But Still Confused by Cats,” Verma said that current AI technology has “ethical and moral blind spots,” and this cannot be overstated. Many of the “protections” on these systems are easily broken, and some users are already downloading images of children from social media pages and elsewhere, then feeding them into generative AIs to create CSAM.
Profit Driven Development
If you find yourself wondering why ‘they’ would let this happen, consider the fictional quote at the top of this article. This was never about building some bright future for humanity. This technology was pursued and financed as an opportunity to maximize profits for the already rich and to find new ways to wage war.
Protective Measures
So what can you do about it? While it’s theoretically possible for an AI-savvy cybersecurity expert to help lock systems down, most of us do not have that luxury. As such, AI may be the single most dangerous piece of software the average user can touch. Some users can experiment safely, but for anyone who has highly sensitive information and lacks cybersecurity skills, it may be best to avoid this technology altogether.
Avoiding AI is easier said than done, because it has already been attached to so many systems. From WhatsApp’s AI helper leaking user phone numbers to Apple’s Siri now handing queries to ChatGPT, many users don’t realize they are already interfacing with this technology every day. The AI built into new Samsung Galaxy smartphones is mostly Google AI when it isn’t proprietary. In other words, if you see a feature labeled “AI” attached to any piece of software on any computer, there is a good chance one of these mainstream models is actually behind the scenes, carrying all of the same security and privacy risks.
If you have secrets you need to keep, here are a few general recommendations, but know these may become outdated fairly quickly as we are discussing a technology in active development:
- Do not accept privacy agreements unless absolutely necessary.
- Do not grant AI permission to access your sensitive data.
- If possible, disable or uninstall the AI on any phone, tablet, or computer that holds sensitive information.
- If you still feel that you MUST use AI, try to only use it on devices that never touch your most sensitive information.
- Never tell a chatbot something you wouldn’t shout from the rooftops.
- Remember, anything it sees, hears, or scans can be used against you.
“AI will always fail LGBTQ people.” – Mary L. Gray, Senior Principal Researcher, Microsoft
Sources and further reading:
AI developer Anthropic believes MOST LLMs can resort to blackmail – https://www.anthropic.com/research/agentic-misalignment
GLAAD post on increased risk to LGBTQ+ populations posed by AI – https://glaad.org/smsi/2024/focus-on-ai/
“If You’re Not Paying For It, You Become The Product” – https://www.forbes.com/sites/marketshare/2012/03/05/if-youre-not-paying-for-it-you-become-the-product
AI collected user data, including audio, sent to third parties – https://mashable.com/article/amazon-echo-humans-listening-recordings-smart-tech
OpenAI’s ChatGPT lies – https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/
ChatGPT leaked corporate secrets – https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/
Microsoft’s Copilot has major security and privacy issues – https://www.surf.nl/en/news/surf-advises-not-to-use-microsoft-365-copilot-for-the-time-being-due-to-privacy-risks
Anthropic’s Claude Opus 4 attempts blackmail in security test – https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/
The GPUs powering many AIs were originally funded as a defense research project – https://nvidianews.nvidia.com/news/nvidia-led-team-receives-25-million-contract-from-darpa-to-develop-high-performance-gpu-computing-systems
More on Nvidia as a military/defense company financially backed by DARPA (paywalled) – https://www.newyorker.com/magazine/2023/12/04/how-jensen-huangs-nvidia-is-powering-the-ai-revolution
US military accelerating the ‘kill chain’ with AI – https://www.army.mil/article/263145/army_developing_faster_improved_data_kill_chain_for_lethal_and_non_lethal_fires
FBI Announcement regarding AI generated CSAM – https://www.ic3.gov/PSA/2024/PSA240329
Article discussing additional safety and privacy recommendations for AI use – https://www.wired.com/story/can-chatgpt-4o-be-trusted-with-your-private-data
The realities of adopting AI use too quickly as an organization – https://economictimes.indiatimes.com/magazines/panache/after-firing-700-employees-for-ai-swedish-company-admits-their-mistake-and-plans-to-rehire-humans-what-happened/articleshow/121252776.cms?from=mdr
AI: Now Smarter Than Hackers, But Still Confused by Cats – https://www.blackhat.com/html/webcast/12052024.html
WhatsApp Leaks User’s Phone Number – https://www.theguardian.com/technology/2025/jun/18/whatsapp-ai-helper-mistakenly-shares-users-number
ChatGPT and Siri on Apple Devices – https://www.apple.com/apple-intelligence
Samsung AI is a combination of proprietary and Google AI – https://www.samsung.com/au/support/mobile-devices/galaxy-ai-tips-and-tricks
