Securing AI
Secure AI with CrowdStrike: Real-World Stories of Protecting AI Workloads and Data
AI is reshaping business at machine speed. From automating claims to improving customer engagement, organizations are embedding AI into core workflows faster than most security teams can track. As AI […]
How Agentic Tool Chain Attacks Threaten AI Agent Security
AI agents are rapidly transforming enterprise operations. Unlike traditional software that follows fixed code paths, AI agents interpret prompts, form plans, select tools, and react to results in a co[…]
Data Protection Day 2026: From Compliance to Resilience
January 28 marks Data Protection Day, a date rooted in one of the earliest milestones of the digital age: the anniversary of the 1981 signing of Convention 108, the first legally binding international[…]
AI Tool Poisoning: How Hidden Instructions Threaten AI Agents
As AI agents become increasingly prevalent across business environments, their security is a pressing concern. Among the insidious threats facing AI agents is tool poisoning, a type of attack that exp[…]
CrowdStrike Secures Growing AI Attack Surface with Falcon AI Detection and Response
Artificial intelligence is transforming how organizations operate, innovate, and compete. From employees using GenAI tools to boost productivity to engineering teams building sophisticated AI agents a[…]
Data Leakage: AI’s Plumbing Problem
Sensitive information disclosure ranks #2 on the OWASP Top 10 for LLM Applications, and for good reason. When AI-powered applications inadvertently expose private data like personally identifiable inf[…]
Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems
The rapid adoption of AI has introduced a new, semantic attack vector that many organizations are ill-prepared to defend against: prompt injection. While many security teams understand the threat of d[…]
CrowdStrike Research: Security Flaws in DeepSeek-Generated Code Linked to Political Triggers
In January 2025, China-based AI startup DeepSeek (深度求索) released DeepSeek-R1, a high-quality large language model (LLM) that allegedly cost much less to develop and operate than Western competitors’ a[…]
How Falcon ASPM Secures GenAI Applications and Lessons from Dogfooding
The widespread availability of large language models (LLMs) has driven the rapid development of generative and agentic AI applications for business use cases. These systems can reason, plan, and act a[…]
CrowdStrike Falcon Platform Evolves to Lead the Agentic Security Era
The enterprise is undergoing the most profound technological shift since the dawn of the internet. Artificial intelligence is no longer a side project or a productivity boost — it has become the new o[…]
Secure AI at Machine Speed: Defending the Growing Attack Surface
As AI becomes embedded across the enterprise — from customer-facing tools to backend automation — it dramatically expands the enterprise attack surface. Models, agents, apps, and data pipelines now sp[…]
CrowdStrike Launches New AI Security Services to Strengthen AI Security and SOC Readiness
AI is transforming business processes and the threat landscape. CrowdStrike is expanding our AI Security Services portfolio to help organizations meet the dual challenges of securing their AI systems […]
How CrowdStrike Secures AI Agents Across SaaS Environments
AI agents are being rapidly embedded into the SaaS ecosystem to streamline operations, trigger complex workflows, and interact with sensitive data and systems. From automating calendar updates to exec[…]
CrowdStrike 2025 Threat Hunting Report: AI Becomes a Weapon and a Target
Today’s enterprising adversaries are weaponizing AI to scale operations, accelerate attacks, and target the autonomous AI agents quickly transforming modern businesses. The CrowdStrike 2025 Threat Hun[…]
AI vs. AI: The Race Between Adversarial and Defensive Intelligence
The AI battleground is here. Adversaries are weaponizing AI to launch attacks with unprecedented scale, speed, and effectiveness. In response, defenders are turning to AI as an analyst force-multiplie[…]
Data Protection Day 2025: The Evolving Role of AI in Data Protection
Each year, Data Protection Day marks an opportunity to assess the state of privacy and security in the midst of technological innovation. This year’s inflection point follows a robust dialogue on AI f[…]
80% of Cybersecurity Leaders Prefer Platform-Delivered GenAI for Stronger Defense
Adversaries are advancing faster than ever, exploiting the growing complexity of business IT environments. In this high-stakes threat landscape, generative AI (GenAI) is a necessity. With organization[…]
CrowdStrike Partners with MITRE Center for Threat-Informed Defense to Launch Secure AI Project
The goal of the Secure AI project is to fortify the security of AI-enabled systems and address the unique vulnerabilities and novel adversary attacks they face. Its results were used to expand MITRE AT[…]
CrowdStrike Launches AI Red Team Services to Secure AI Innovation
As organizations race to adopt generative AI (GenAI) to drive efficiency and innovation, they face a new and urgent security challenge. While AI-driven tools and large language models (LLMs) open vast[…]
AI Innovation in the Spotlight at Fal.Con 2024
Every year, the role of AI in cybersecurity grows more prominent. This is especially true in the security operations center (SOC), where AI-native detection and GenAI-fueled workflows are advancing cy[…]
CrowdStrike Collaborates with NVIDIA to Redefine Cybersecurity for the Generative AI Era
Your business is in a race against modern adversaries — and legacy approaches to security simply do not work in blocking their evolving attacks. Fragmented point products are too slow and complex to d[…]
Five Questions Security Teams Need to Ask to Use Generative AI Responsibly
Since announcing Charlotte AI, we’ve engaged with many customers to show how this transformational technology will unlock greater speed and value for security teams and expand their arsenal in the fig[…]
CrowdStrike’s View on the New U.S. Policy for Artificial Intelligence
The major news in technology policy circles is this month’s release of the long-anticipated Executive Order (E.O.) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. […]