亚洲国产日韩欧美一区二区三区,精品亚洲国产成人av在线,国产99视频精品免视看7,99国产精品久久久久久久成人热,欧美日韩亚洲国产综合乱

Table of Contents
LLMs Aren't Safe by Default
Data Leakage
Prompt Injection
Opaque Supply Chains
Slopsquatting
Effective Mitigation Strategies Are Essential
Initial Defense Mitigation Strategies
Self-Hosting Your LLMs for Greater Control
Treat GenAI Systems as Part of Your Attack Surface
References
Home Technology peripherals AI The Hidden Security Risks of?LLMs

The Hidden Security Risks of?LLMs

May 30, 2025 am 10:48 AM

The Hidden Security Risks of?LLMs

As businesses rush to incorporate large language models (LLMs) into their customer service agents, internal copilots, and code generation helpers, a significant oversight is becoming apparent: **security**. While we continue to focus on the ongoing technological progress and excitement surrounding AI, the underlying risks and vulnerabilities frequently go unnoticed. Many companies seem to adopt a double standard regarding security. On-premises IT setups face rigorous examination, yet cloud AI services such as Azure OpenAI Studio or Google Gemini are readily embraced with just a few clicks.

I understand how straightforward it is to create a wrapper solution around hosted LLM APIs, but is this truly the best approach for enterprise applications? If your AI agent is inadvertently disclosing company secrets to OpenAI or falling victim to a skillfully crafted prompt, that’s not progress—it’s a potential breach waiting to occur. Just because we aren’t directly confronted with security decisions concerning the actual models when using these external APIs, it doesn’t mean we can overlook the fact that the companies behind those models have made those choices for us.

In this article, I aim to uncover the hidden risks and advocate for a more security-conscious route: self-hosted LLMs and suitable risk management strategies.

LLMs Aren't Safe by Default

Simply because an LLM produces impressive outputs doesn’t imply it’s automatically safe to integrate into your systems. A recent study by Yao et al. explored the dual nature of LLMs in security [1]. Although LLMs offer numerous opportunities and can sometimes assist with security practices, they also introduce new vulnerabilities and avenues for attack. Traditional cybersecurity measures must still adapt to keep pace with the new threats created by AI-powered solutions.

Let’s examine a few critical security risks associated with working with LLMs.

Data Leakage

Data Leakage occurs when sensitive information (such as client data or intellectual property) is unintentionally exposed, accessed, or misused during model training or inference. With the average cost of a data breach reaching $5 million in 2025 [2], and 33% of employees regularly sharing sensitive data with AI tools [3], data leakage represents a genuine risk that demands attention.

Even if those third-party LLM providers promise not to train on your data, verifying what’s logged, cached, or stored afterward is challenging. This leaves organizations with minimal control over GDPR and HIPAA compliance.

Prompt Injection

An attacker doesn’t need administrative access to harm your AI systems. A basic chat interface already offers ample opportunity. Prompt Injection is a technique where a hacker manipulates an LLM into producing unintended outputs or even executing unauthorized commands. OWASP lists prompt injection as the top security risk for LLMs [4].

For instance:

A user employs an LLM to summarize a webpage containing hidden instructions that cause the LLM to disclose chat details to an attacker.

The greater autonomy your LLM has, the higher the vulnerability to prompt injection attacks [5].

Opaque Supply Chains

LLMs like GPT-4, Claude, and Gemini are proprietary. Thus, you won’t know:

  • What data they were trained on
  • When they were last updated
  • How susceptible they are to zero-day exploits

Using them in production creates blind spots in your security.

Slopsquatting

As more LLMs serve as coding assistants, a new security threat has arisen: slopsquatting. You may be familiar with typesquatting, where hackers exploit common typos in code or URLs to launch attacks. In slopsquatting, hackers don’t rely on human errors; instead, they exploit LLM hallucinations.

LLMs often generate non-existent packages when creating code snippets, and if these snippets are utilized without proper verification, it gives hackers an ideal chance to infect your systems with malware and similar threats [6]. These fabricated packages frequently resemble legitimate ones, making them harder for humans to detect.

Effective Mitigation Strategies Are Essential

I realize most LLMs appear quite intelligent, but they lack the ability to distinguish between normal user interactions and cleverly disguised attacks. Depending solely on them to detect threats is akin to asking autocomplete to configure your firewall. Hence, having robust processes and tools in place to mitigate risks related to LLM-based systems is crucial.

Initial Defense Mitigation Strategies

There are methods to minimize risk when dealing with LLMs:

  • Input/output sanitization (using regex filters). Similar to its importance in front-end development, it shouldn’t be neglected in AI systems.
  • System prompts with strict boundaries. While system prompts aren’t foolproof, they can establish a solid foundation for setting limits.
  • Adopting AI guardrails frameworks to prevent malicious usage and enforce your usage policies. Frameworks like Guardrails AI simplify implementing this kind of protection [7].

Ultimately, these mitigation strategies form only the initial layer of defense. If you’re relying on third-party hosted LLMs, your data still exits your secure environment, and you remain dependent on those LLM companies to address security vulnerabilities appropriately.

Self-Hosting Your LLMs for Greater Control

Numerous powerful open-source alternatives exist that you can operate locally within your own environments, tailored to your needs. Recent developments have led to performant language models capable of running on modest infrastructure [8]! Considering open-source models isn’t merely about cost or customization (though these are beneficial advantages too). It’s about control.

Self-hosting grants you:

  • Complete data ownership, ensuring nothing leaves your selected environment!
  • Custom fine-tuning possibilities with private data, enhancing performance for your specific use cases.
  • Strict network isolation and runtime sandboxing.
  • Auditability. You know exactly which model version you’re using and when changes occurred.

Yes, it necessitates additional effort: orchestration (e.g., BentoML, Ray Serve), monitoring, scaling. I’m also not suggesting that self-hosting is the solution for everything. Nevertheless, when discussing use cases involving sensitive data, the trade-offs are worthwhile.

Treat GenAI Systems as Part of Your Attack Surface

If your chatbot can make decisions, access documents, or call APIs, it functions essentially as an unvetted external consultant with access to your systems. Consequently, treat it similarly from a security perspective: regulate access, monitor diligently, and avoid outsourcing sensitive tasks to them. Retain vital AI systems in-house, under your control.

References

[1] Y. Yao et al., A Survey on Large Language Model (LLM) Security and Privacy: The Good, The Bad, and The Ugly (2024), ScienceDirect

[2] Y. Mulayam, Data Breach Forecast 2025: Costs & Key Cyber Risks (2025), Certbar

[3] S. Dobrontei and J. Nurse, Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024–2025 — CybSafe (2025), Cybsafe and the National Cybersecurity Alliance

[4] 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps (2025), OWASP

[5] K. Greshake et al., Not What You’ve Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection (2023), Association for Computing Machinery

[6] J. Spracklen et al., We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs (2025), USENIX 2025

[7] Guardrails AI, GitHub — guardrails-ai/guardrails: Adding Guardrails to Large Language Models.

[8] E. Shittu, Google’s Gemma 3 Can Run on a Single TPU or GPU (2025), TechTarget

The above is the detailed content of The Hidden Security Risks of?LLMs. For more information, please follow other related articles on the PHP Chinese website!

Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn

Hot AI Tools

Undress AI Tool

Undress AI Tool

Undress images for free

Undresser.AI Undress

Undresser.AI Undress

AI-powered app for creating realistic nude photos

AI Clothes Remover

AI Clothes Remover

Online AI tool for removing clothes from photos.

Clothoff.io

Clothoff.io

AI clothes remover

Video Face Swap

Video Face Swap

Swap faces in any video effortlessly with our completely free AI face swap tool!

Hot Tools

Notepad++7.3.1

Notepad++7.3.1

Easy-to-use and free code editor

SublimeText3 Chinese version

SublimeText3 Chinese version

Chinese version, very easy to use

Zend Studio 13.0.1

Zend Studio 13.0.1

Powerful PHP integrated development environment

Dreamweaver CS6

Dreamweaver CS6

Visual web development tools

SublimeText3 Mac version

SublimeText3 Mac version

God-level code editing software (SublimeText3)

AI Investor Stuck At A Standstill? 3 Strategic Paths To Buy, Build, Or Partner With AI Vendors AI Investor Stuck At A Standstill? 3 Strategic Paths To Buy, Build, Or Partner With AI Vendors Jul 02, 2025 am 11:13 AM

Investing is booming, but capital alone isn’t enough. With valuations rising and distinctiveness fading, investors in AI-focused venture funds must make a key decision: Buy, build, or partner to gain an edge? Here’s how to evaluate each option—and pr

AGI And AI Superintelligence Are Going To Sharply Hit The Human Ceiling Assumption Barrier AGI And AI Superintelligence Are Going To Sharply Hit The Human Ceiling Assumption Barrier Jul 04, 2025 am 11:10 AM

Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Heading Toward AGI And

Kimi K2: The Most Powerful Open-Source Agentic Model Kimi K2: The Most Powerful Open-Source Agentic Model Jul 12, 2025 am 09:16 AM

Remember the flood of open-source Chinese models that disrupted the GenAI industry earlier this year? While DeepSeek took most of the headlines, Kimi K1.5 was one of the prominent names in the list. And the model was quite cool.

Future Forecasting A Massive Intelligence Explosion On The Path From AI To AGI Future Forecasting A Massive Intelligence Explosion On The Path From AI To AGI Jul 02, 2025 am 11:19 AM

Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who h

Grok 4 vs Claude 4: Which is Better? Grok 4 vs Claude 4: Which is Better? Jul 12, 2025 am 09:37 AM

By mid-2025, the AI “arms race” is heating up, and xAI and Anthropic have both released their flagship models, Grok 4 and Claude 4. These two models are at opposite ends of the design philosophy and deployment platform, yet they

Chain Of Thought For Reasoning Models Might Not Work Out Long-Term Chain Of Thought For Reasoning Models Might Not Work Out Long-Term Jul 02, 2025 am 11:18 AM

For example, if you ask a model a question like: “what does (X) person do at (X) company?” you may see a reasoning chain that looks something like this, assuming the system knows how to retrieve the necessary information:Locating details about the co

Senate Kills 10-Year State-Level AI Ban Tucked In Trump's Budget Bill Senate Kills 10-Year State-Level AI Ban Tucked In Trump's Budget Bill Jul 02, 2025 am 11:16 AM

The Senate voted 99-1 Tuesday morning to kill the moratorium after a last-minute uproar from advocacy groups, lawmakers and tens of thousands of Americans who saw it as a dangerous overreach. They didn’t stay quiet. The Senate listened.States Keep Th

This Startup Built A Hospital In India To Test Its AI Software This Startup Built A Hospital In India To Test Its AI Software Jul 02, 2025 am 11:14 AM

Clinical trials are an enormous bottleneck in drug development, and Kim and Reddy thought the AI-enabled software they’d been building at Pi Health could help do them faster and cheaper by expanding the pool of potentially eligible patients. But the

See all articles