Saturday, March 21, 2026

FBI Warns Russian Hackers Target Signal, WhatsApp in Mass Phishing Attacks

Threat actors affiliated with Russian Intelligence Services are conducting phishing campaigns targeting commercial messaging applications (CMAs) like WhatsApp and Signal in a bid to seize control of accounts belonging to individuals with high intelligence value, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and Federal Bureau of Investigation (FBI) said Friday.

"The campaign targets individuals of high intelligence value, including current and former U.S. government officials, military personnel, political figures, and journalists," FBI Director Kash Patel said in a post on X. "Globally, this effort has resulted in unauthorized access to thousands of individual accounts. After gaining access, the actors can view messages and contact lists, send messages as the victim, and conduct additional phishing from a trusted identity."

CISA and the FBI said the activity has resulted in the compromise of thousands of individual CMA accounts. It's worth noting that the attacks rely on social engineering to break into the targeted accounts; they do not exploit any security vulnerability or weakness to crack the platforms' encryption protections.

While the agencies did not attribute the activity to a specific threat actor, prior reports from Microsoft and Google Threat Intelligence Group have linked such campaigns to multiple Russia-aligned threat clusters tracked as Star Blizzard, UNC5792 (aka UAC-0195), and UNC4221 (aka UAC-0185).

In a similar alert, the Cyber Crisis Coordination Center (C4), part of the National Cybersecurity Agency of France (ANSSI), warned of a surge in attack campaigns targeting instant messaging accounts associated with government officials, journalists, and business leaders.

"These attacks – when successful – can allow malicious actors to access conversation histories, or even take control of their victims' messaging accounts and send messages while impersonating them," C4 said.

The end goal of the campaign is to enable the threat actors to gain unauthorized access to victims' accounts, enabling them to view messages and contact lists, send messages on their behalf, and even conduct secondary phishing against other targets by abusing trusted relationships.

As recently alerted by cybersecurity agencies from Germany and the Netherlands, the attack involves the adversary posing as "Signal Support" to approach targets and urge them to click on a link (or alternatively scan a QR code) or provide the PIN or verification code. In both cases, the social engineering scheme allows the threat actors to gain access to the victim's CMA account.

The campaign has two different outcomes for the victim, depending on the method used -

  • If the victim opts to provide the PIN or verification code to the threat actor, they lose access to their account, as the attacker uses the code to recover the account on their end. While the threat actor cannot access past messages, they can monitor new messages and send messages to others while impersonating the victim.
  • If the victim ends up clicking the link or scanning the QR code, a device under the control of the threat actor gets linked to the victim's account, allowing them to access all messages, including those sent in the past. In this scenario, the victim continues to have access to the CMA account unless they are explicitly removed from the app settings.

To better protect against the threat, users are advised to never share their SMS code or verification PIN with anyone, exercise caution when receiving unexpected messages from unknown contacts, check links before clicking them, and periodically review linked devices and remove those that appear suspicious.

"These attacks, like all phishing, rely on social engineering. Attackers impersonate trusted contacts or services (such as the non-existent 'Signal Support Bot') to trick victims into handing over their login credentials or other information," Signal said in a post on X earlier this month.

"To help prevent this, remember that your Signal SMS verification code is only ever needed when you are first signing up for the Signal app. We also want to emphasize that Signal Support will *never* initiate contact via in-app messages, SMS, or social media to ask for your verification code or PIN. If anyone asks for any Signal-related code, it is a scam."



from The Hacker News https://ift.tt/cdzB9jp
via IFTTT

Oracle Patches Critical CVE-2026-21992 Enabling Unauthenticated RCE in Identity Manager

Oracle has released security updates to address a critical security flaw impacting Identity Manager and Web Services Manager that could be exploited to achieve remote code execution.

The vulnerability, tracked as CVE-2026-21992, carries a CVSS score of 9.8 out of a maximum of 10.0.

"This vulnerability is remotely exploitable without authentication," Oracle said in an advisory. "If successfully exploited, this vulnerability may result in remote code execution."

CVE-2026-21992 affects the following versions -

  • Oracle Identity Manager versions 12.2.1.4.0 and 14.1.2.1.0
  • Oracle Web Services Manager versions 12.2.1.4.0 and 14.1.2.1.0

According to a description of the flaw in the NIST National Vulnerability Database (NVD), it's "easily exploitable" and could allow an unauthenticated attacker with network access via HTTP to compromise Oracle Identity Manager and Oracle Web Services Manager. This, in turn, can result in the successful takeover of susceptible instances.

Oracle makes no mention of the vulnerability being exploited in the wild. However, the tech giant has urged customers to apply the update without delay for optimal protection.

In November 2025, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-61757 (CVSS score: 9.8), a pre-authenticated remote code execution flaw impacting Oracle Identity Manager, to the Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation.



from The Hacker News https://ift.tt/wcD8CIR
via IFTTT

CISA Flags Apple, Craft CMS, Laravel Bugs in KEV, Orders Patching by April 3, 2026

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Friday added five security flaws impacting Apple, Craft CMS, and Laravel Livewire to its Known Exploited Vulnerabilities (KEV) catalog, urging federal agencies to patch them by April 3, 2026.

The vulnerabilities that have come under exploitation are listed below -

  • CVE-2025-31277 (CVSS score: 8.8) - A vulnerability in Apple WebKit that could result in memory corruption when processing maliciously crafted web content. (Fixed in July 2025)
  • CVE-2025-43510 (CVSS score: 7.8) - A memory corruption vulnerability in Apple's kernel component that could allow a malicious application to cause unexpected changes in memory shared between processes. (Fixed in December 2025)
  • CVE-2025-43520 (CVSS score: 8.8) - A memory corruption vulnerability in Apple's kernel component that could allow a malicious application to cause unexpected system termination or write kernel memory. (Fixed in December 2025)
  • CVE-2025-32432 (CVSS score: 10.0) - A code injection vulnerability in Craft CMS that could allow a remote attacker to execute arbitrary code. (Fixed in April 2025)
  • CVE-2025-54068 (CVSS score: 9.8) - A code injection vulnerability in Laravel Livewire that could allow unauthenticated attackers to achieve remote command execution in specific scenarios. (Fixed in July 2025)

The addition of the three Apple vulnerabilities to the KEV catalog comes in the wake of reports from Google Threat Intelligence Group (GTIG), iVerify, and Lookout about an iOS exploit kit codenamed DarkSword that leverages these shortcomings, along with three other bugs, to deploy various malware families like GHOSTBLADE, GHOSTKNIFE, and GHOSTSABER for data theft.

CVE-2025-32432 is assessed to have been exploited as a zero-day by unknown threat actors since February 2025, per Orange Cyberdefense SensePost. Since then, an intrusion set tracked as Mimo (aka Hezb) has also been observed exploiting the vulnerability to deploy a cryptocurrency miner and residential proxyware.

Rounding out the list is CVE-2025-54068, whose exploitation was recently flagged by the Ctrl-Alt-Intel Threat Research team as part of attacks mounted by the Iranian state-sponsored hacking group, MuddyWater (aka Boggy Serpens).

In a report published earlier this week, Palo Alto Networks Unit 42 called out the adversary's consistent targeting of diplomatic entities and critical infrastructure, including the energy, maritime, and finance sectors, across the Middle East, as well as other strategic targets worldwide.

"While social engineering remains its defining trait, the group is also increasing its technological capabilities," Unit 42 said. "Its diverse toolset includes AI-enhanced malware implants that incorporate anti-analysis techniques for long-term persistence. This combination of social engineering and rapidly developed tools creates a potent threat profile."

"To support its large-scale social engineering campaigns, Boggy Serpens uses a custom-built, web-based orchestration platform," Unit 42 said. "This tool enables operators to automate mass email delivery while maintaining granular control over sender identities and target lists."

Attributed to the Iranian Ministry of Intelligence and Security (MOIS), the group is primarily focused on cyber espionage, although it has also been linked to disruptive operations targeting the Technion Israel Institute of Technology by adopting the DarkBit ransomware persona.

One of the defining hallmarks of MuddyWater's tradecraft has been the use of hijacked accounts belonging to official government and corporate entities in its spear-phishing attacks, and abuse of trusted relationships to evade reputation-based blocking systems and deliver malware. 

In a sustained campaign targeting an unnamed national marine and energy company in the U.A.E. between August 16, 2025, and February 11, 2026, the threat actor is said to have conducted four distinct waves of attack, leading to the deployment of various malware families, including GhostBackDoor and Nuso (aka HTTP_VIP). Some of the other notable tools in the threat actor's arsenal include UDPGangster and LampoRAT (aka CHAR).

"Boggy Serpens' recent activity exemplifies a maturing threat profile, as the group integrates its established methodologies with refined mechanisms for operational persistence," Unit 42 said. "By diversifying its development pipeline to include modern coding languages like Rust and AI-assisted workflows, the group creates parallel tracks that ensure the redundancy needed to sustain a high operational tempo."



from The Hacker News https://ift.tt/QnkRuDx
via IFTTT

Trivy Supply Chain Attack Triggers Self-Spreading CanisterWorm Across 47 npm Packages

The threat actors behind the supply chain attack targeting the popular Trivy scanner are suspected to be conducting follow-on attacks that have led to the compromise of a large number of npm packages with a previously undocumented self-propagating worm dubbed CanisterWorm.

The name is a reference to the fact that the malware uses an ICP canister, a tamperproof smart contract on the Internet Computer blockchain, as a dead drop resolver. The development marks the first publicly documented abuse of an ICP canister for the explicit purpose of fetching the command-and-control (C2) server, Aikido Security researcher Charlie Eriksen said.

The list of affected packages is below -

  • 28 packages in the @EmilGroup scope
  • 16 packages in the @opengov scope
  • @teale.io/eslint-config
  • @airtm/uuid-base32
  • @pypestream/floating-ui-dom

The development comes within a day of threat actors leveraging a compromised credential to publish malicious trivy, trivy-action, and setup-trivy releases containing a credential stealer. A cloud-focused cybercriminal operation known as TeamPCP is suspected to be behind the attacks.

The infection chain for the npm packages leverages a postinstall hook to execute a loader, which then drops a Python backdoor responsible for contacting the ICP canister dead drop to retrieve a URL pointing to the next-stage payload. Because the dead drop infrastructure is decentralized, it is resilient and resistant to takedown efforts.

"The canister controller can swap the URL at any time, pushing new binaries to all infected hosts without touching the implant," Eriksen said.

Persistence is established by means of a systemd user service that uses the "Restart=always" directive to automatically restart the Python backdoor after a 5-second delay if it gets terminated for any reason. The systemd service masquerades as PostgreSQL tooling ("pgmon") in an attempt to fly under the radar.
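Based on the behavior described above, the persistence unit would look roughly like the following sketch; the "pgmon" masquerade and the restart settings come from the report, while the file paths and script name are illustrative assumptions:

```ini
# Hypothetical reconstruction of the persistence mechanism described above.
# File paths are assumptions for illustration only.
# ~/.config/systemd/user/pgmon.service

[Unit]
# Masquerades as PostgreSQL monitoring tooling
Description=PostgreSQL monitor

[Service]
ExecStart=/usr/bin/python3 %h/.local/share/pgmon/main.py
# Restart the backdoor whenever it exits, after a 5-second delay
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

Because it is a *user* unit, no root privileges are required, and `systemctl --user enable pgmon` would make it survive reboots of the user session.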

The backdoor, as mentioned before, polls the ICP canister with a spoofed browser User-Agent every 50 minutes to fetch the URL in plaintext. The returned URL is then used to fetch and run the executable.

"If the URL contains youtube[.]com, the script skips it," Eriksen explained. "This is the canister's dormant state. The attacker arms the implant by pointing the canister at a real binary, and disarms it by switching back to a YouTube link. If the attacker updates the canister to point to a new URL, every infected machine picks up the new binary on its next poll. The old binary keeps running in the background since the script never kills previous processes."
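The dormant/armed logic Eriksen describes can be sketched in Python; the endpoint, User-Agent string, and function names here are illustrative assumptions, not the actual implant code:

```python
# Sketch of the dead-drop polling logic described above (assumed names/URLs).
import urllib.request

CANISTER_URL = "https://example-canister.icp0.io/get_latest_link"  # hypothetical endpoint
SPOOFED_UA = "Mozilla/5.0 (X11; Linux x86_64)"  # spoofed browser User-Agent

def fetch_payload_url() -> str:
    """Ask the dead drop for the current payload URL (plaintext response)."""
    req = urllib.request.Request(CANISTER_URL, headers={"User-Agent": SPOOFED_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode().strip()

def is_armed(url: str) -> bool:
    """The canister's dormant state: a youtube.com link means 'do nothing'."""
    return "youtube.com" not in url

def poll_once(url: str) -> bool:
    """One polling cycle (the report says this runs every 50 minutes)."""
    if not is_armed(url):
        return False  # disarmed: skip this cycle
    # Armed: fetch and execute the binary at `url` (execution omitted here).
    # Note: previously launched payloads are never killed, so binaries accumulate.
    return True
```

The key property is that the attacker flips the implant between armed and dormant purely by editing the URL the canister returns, without ever touching the infected hosts.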

It's worth noting that a similar youtube[.]com-based kill switch has also been flagged by Wiz in connection with the trojanized Trivy binary (version 0.69.4), which also reaches out to the same ICP canister via a Python dropper ("sysmon.py"). As of writing, the URL returned by the C2 is a rickroll YouTube video.

The Hacker News found that the ICP canister supports three methods – get_latest_link, http_request, update_link – allowing the threat actor to modify the behavior at any time to serve an actual payload.

In tandem, the packages come with a "deploy.js" file that the attacker runs manually to programmatically spread the malicious payload to every package a stolen npm token provides access to. The worm, assessed to be vibe-coded using an artificial intelligence (AI) tool, makes no attempt to conceal its functionality.

"This isn't triggered by npm install," Aikido said. "It's a standalone tool the attacker runs with stolen tokens to maximize blast radius."

To make matters worse, a subsequent iteration of CanisterWorm detected in "@teale.io/eslint-config" versions 1.8.11 and 1.8.12 has been found to self-propagate on its own without the need for manual intervention.

Unlike "deploy.js," which was a self-contained script the attacker had to execute with the pilfered npm tokens to push a malicious version of the npm packages to the registry, the new variant incorporates this functionality in "index.js" within a findNpmTokens() function that's run during the postinstall phase to collect npm authentication tokens from the victim's machine.

The main difference here is that the postinstall script, after installing the persistent backdoor, attempts to locate every npm token from the developer's environment and spawns the worm right away with those tokens by launching "deploy.js" as a fully detached background process.
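From a defender's point of view, the token-harvesting step can be sketched with a short script that checks the standard locations where npm stores authentication tokens; the `.npmrc` format and `_authToken` key are npm conventions, but the function name and logic below are an illustration, not the worm's actual code:

```python
# Illustrative sketch of npm token discovery (not the worm's code).
import re
from pathlib import Path

# npm stores registry auth tokens in .npmrc files as lines like:
#   //registry.npmjs.org/:_authToken=npm_xxxxxxxx
TOKEN_RE = re.compile(r"^//.*:_authToken=(\S+)", re.MULTILINE)

def find_npm_tokens(npmrc_paths=None):
    """Return auth tokens found in the given .npmrc files.

    Defaults to the per-user and current-project .npmrc, the two
    places npm itself reads tokens from.
    """
    if npmrc_paths is None:
        npmrc_paths = [Path.home() / ".npmrc", Path(".npmrc")]
    tokens = []
    for p in npmrc_paths:
        p = Path(p)
        if p.is_file():
            tokens.extend(TOKEN_RE.findall(p.read_text()))
    return tokens
```

Running a check like this against your own machines and CI runners is a quick way to see which tokens a postinstall script could reach, and therefore which accounts would become propagation vectors.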

Interestingly, the threat actor is said to have swapped out the ICP backdoor payload for a dummy test string ("hello123"), likely to ensure that the entire attack chain is working as intended before adding the malware.

"This is the point where the attack goes from 'compromised account publishes malware' to 'malware compromises more accounts and publishes itself,'" Eriksen said. "Every developer or CI pipeline that installs this package and has an npm token accessible becomes an unwitting propagation vector. Their packages get infected, their downstream users install those, and if any of them have tokens, the cycle repeats."

(This is a developing story. Please check back for more details.)



from The Hacker News https://ift.tt/O01VlgS
via IFTTT

Friday, March 20, 2026

CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents

Excerpt: CTI-REALM is Microsoft’s open-source benchmark for evaluating AI agents on real-world detection engineering—turning cyber threat intelligence (CTI) into validated detections. Instead of measuring “CTI trivia,” CTI-REALM tests end-to-end workflows: reading threat reports, exploring telemetry, iterating on KQL queries, and producing Sigma rules and KQL-based detection logic that can be scored against ground truth across Linux, AKS, and Azure cloud environments.

Security is Microsoft’s top priority. Every day, we process more than 100 trillion security signals across endpoints, cloud infrastructure, identity, and global threat intelligence. That’s the scale modern cyber defense demands, and AI is a core part of how we protect Microsoft and our customers worldwide. At the same time, security is, and always will be, a team sport. That’s why Microsoft is committed to AI model diversity and to helping defenders apply the latest AI responsibly. We created CTI‑REALM and open‑sourced it so the broader industry can test models, write better code, and build more secure systems together.

CTI-REALM (Cyber Threat Real World Evaluation and LLM Benchmarking) is Microsoft’s open-source benchmark that evaluates AI agents on end-to-end detection engineering. Building on work like ExCyTIn-Bench, which evaluates agents on threat investigation, CTI-REALM extends the scope to the next stage of the security workflow: detection rule generation. Rather than testing whether a model can answer CTI trivia or classify techniques in isolation, CTI-REALM places agents in a realistic, tool-rich environment and asks them to do what security analysts do every day: read a threat intelligence report, explore telemetry, write and refine KQL queries, and produce validated detection rules.

We curated 37 CTI reports from public sources (Microsoft Security, Datadog Security Labs, Palo Alto Networks, and Splunk), selecting those that could be faithfully simulated in a sandboxed environment and that produced telemetry suitable for detection rule development. The benchmark spans three platforms: Linux endpoints, Azure Kubernetes Service (AKS), and Azure cloud infrastructure with ground-truth scoring at every stage of the analytical workflow.

Why CTI-REALM exists

Existing cybersecurity benchmarks primarily test parametric knowledge: can a model name the MITRE technique behind a log entry, or classify a TTP from a report? These are useful signals. However, they miss the harder question: can an agent operationalize that knowledge into detection logic that finds attacks in production telemetry?

No current benchmark evaluates this complete workflow. CTI-REALM fills that gap by measuring:

  • Operationalization, not recall: Agents must translate narrative threat intelligence into working Sigma rules and KQL queries, validated against real attack telemetry.
  • The full workflow: Scoring captures intermediate decision quality—CTI report selection, MITRE technique mapping, data source identification, iterative query refinement. Scoring is not just limited to the final output.
  • Realistic tooling: Agents use the same types of tools security analysts rely on: CTI repositories, schema explorers, a Kusto query engine, MITRE ATT&CK and Sigma rule databases.
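To make the task concrete, a minimal Sigma rule of the kind agents are asked to produce might look like the following; the detection content is a hypothetical illustration, not drawn from the benchmark itself:

```yaml
title: Suspicious Curl Download to /tmp
status: experimental
description: Illustrative example of the kind of rule CTI-REALM asks agents to produce
logsource:
    product: linux
    category: process_creation
detection:
    selection:
        Image|endswith: '/curl'
        CommandLine|contains: '/tmp/'
    condition: selection
level: medium
```

CTI-REALM scores not just whether such a rule fires on the attack telemetry, but also the intermediate choices (technique mapping, data source selection) that led the agent to it.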

Business Impact

CTI-REALM gives security engineering leaders a repeatable, objective way to prove whether an AI model improves detection coverage and analyst output.

Traditional benchmarks tend to provide a single aggregate score: a model either passes or fails, but the score doesn’t tell the team why. CTI-REALM’s checkpoint-based scoring answers this directly. It reveals whether a model struggles with CTI comprehension, query construction, or detection specificity, helping teams make informed decisions about where human review and guardrails are needed.

Why CTI-REALM matters for business

  • Measures operationalization, not trivia: Focuses on translating narrative threat intel into detection logic that can be validated against ground truth.
  • Captures the workflow: Evaluates intermediate steps (e.g., technique extraction, telemetry identification, iterative refinement) in addition to the final rule quality.
  • Supports safer adoption: Helps teams benchmark models before considering any downstream use and reinforces the need for human review before operational deployment.

Latest results

We evaluated 16 frontier model configurations on CTI-REALM-50 (50 tasks spanning all three platforms).

Model performance on CTI-REALM-50, sorted by normalized reward.

What the numbers tell us

  • Anthropic models lead across the board. Claude occupies the top three positions (0.587–0.637), driven by significantly stronger tool-use and iterative query behavior compared to OpenAI models.
  • More reasoning isn’t always better. Within the GPT-5 family, medium reasoning consistently beats high across all three generations, suggesting overthinking hurts in agentic settings.
  • Cloud detection is the hardest problem. Performance drops sharply from Linux (0.585) to AKS (0.517) to Cloud (0.282), reflecting the difficulty of correlating across multiple data sources in APT-style scenarios.
  • CTI tools matter. Removing CTI-specific tools degraded every model’s output by up to 0.150 points, with the biggest impact on final detection rule quality rather than intermediate steps.
  • Structured guidance closes the gap. Providing a smaller model with human-authored workflow tips closed about a third of the performance gap to a much larger model, primarily by improving threat technique identification.

For complete details around techniques and results, please refer to the paper here: [2603.13517] CTI-REALM: Benchmark to Evaluate Agent Performance on Security Detection Rule Generation Capabilities.

Get involved

CTI-REALM is open-source and free to access, and will soon be available in the Inspect AI evals repository: UKGovernmentBEIS/inspect_evals: Collection of evals for Inspect AI.

Model developers and security teams are invited to contribute, benchmark, and share results via the official GitHub repository. For questions or partnership opportunities, reach out to the team at msecaimrbenchmarking@microsoft[.]com.

CTI-REALM helps teams evaluate whether an agent can reliably turn threat intelligence into detections before relying on it in security operations.

References

  1. Microsoft raises the bar: A smarter way to measure AI for cybersecurity | Microsoft Security Blog
  2. [2603.13517] CTI-REALM: Benchmark to Evaluate Agent Performance on Security Detection Rule Generation Capabilities
  3. CTI-REALM: Cyber Threat Intelligence Detection Rule Development Benchmark by arjun180-new · Pull Request #1270 · UKGovernmentBEIS/inspect_evals

The post CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/afzy0V9
via IFTTT

Secure agentic AI end-to-end

Next week, RSAC™ Conference celebrates its 35th anniversary as a forum that brings the security community together to address new challenges and embrace opportunities in our quest to make the world a safer place for all. As we look towards that milestone, agentic AI is reshaping industries rapidly as customers transform to become Frontier Firms—those anchored in intelligence and trust and using agents to elevate human ambition, holistically reimagining their business to achieve their highest aspirations. Our recent research shows that 80% of Fortune 500 companies are already using agents.1

At the same time, this innovation is happening against a sea change in AI-powered attacks where agents can become “double agents.” And chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are grappling with the resulting security implications: How do they observe, govern, and secure agents? How do they secure their foundations in this new era? How can they use agentic AI to protect their organization and detect and respond to traditional and emerging threats?

The answer starts with trust, and security has always been the root of trust. In this agentic era, security must be woven into, and around, every layer of the AI estate. It must be ambient and autonomous, just like the AI it protects. This is our vision for security as the core primitive of the AI stack.

At RSAC 2026, we are delivering on that vision with new purpose-built capabilities designed to help organizations secure agents, secure their foundations, and defend using agents and experts. Fueled by more than 100 trillion daily signals, Microsoft Security helps protect 1.6 million customers, one billion identities, and 24 billion Copilot interactions.2 Read on to learn how we can help you secure agentic AI.

Secure agents

Earlier this month, we announced that Agent 365 will be generally available on May 1. Agent 365—the control plane for agents—gives IT, security, and business teams the visibility and tools they need to observe, secure, and govern agents at scale using the infrastructure you already have and trust. It includes new Microsoft Defender, Entra, and Purview capabilities to help you secure agent access, prevent data oversharing, and defend against emerging threats.

Agent 365 is included in Microsoft 365 E7: The Frontier Suite along with Microsoft 365 Copilot, Microsoft Entra Suite, and Microsoft 365 E5, which includes many of the advanced Microsoft Security capabilities below to deliver comprehensive protection for your organization.

Secure your foundations

Along with securing agents, we also need to think of securing AI comprehensively. To truly secure agentic AI, we must secure foundations—the systems that agentic AI is built and runs on and the people who are developing and using AI. At RSAC 2026, we are introducing new capabilities to help you gain visibility into risks across your enterprise, secure identities with continuous adaptive access, safeguard sensitive data across AI workflows, and defend against threats at the speed and scale of AI.

Gain visibility into risks across your enterprise

As AI adoption accelerates, so does the need for comprehensive and continuous visibility into AI risks across your environment—from agents to AI apps and services. We are addressing this challenge with new capabilities that give you insight into risks across your enterprise so you know where AI is showing up, how it is being used, and where your exposure to risk may be growing. New capabilities include:

  • Security Dashboard for AI provides CISOs and security teams with unified visibility into AI-related risk across the organization. Now generally available.
  • Entra Internet Access Shadow AI Detection uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage that might otherwise go undetected. Generally available March 31.
  • Enhanced Intune app inventory provides rich visibility into the apps installed across your device estate, including AI-enabled apps, to support targeted remediation of high-risk software. Generally available in May.

Secure identities with continuous, adaptive access

Identity is the foundation of modern security, the most targeted layer in any environment, and the first line of defense. With Microsoft Entra, you can secure access and deliver comprehensive identity security using new capabilities that help you harden your identity infrastructure, improve tenant governance, modernize authentication, and make intelligent access decisions.

  • Entra Backup and Recovery strengthens resilience with an automated backup of Entra directory objects to enable rapid recovery in case of accidental data deletion or unauthorized changes. Now available in preview.
  • Entra Tenant Governance helps organizations discover unmanaged (shadow) Entra tenants and establish consistent tenant policies and governance in multi-tenant environments. Now available in preview.
  • Entra passkey capabilities now include synced passkeys and passkey profiles to enable maximum flexibility for end-users, making it easy to move between devices, while organizations looking for maximum control still have the option of device-bound passkeys. Plus, Entra passkeys are now natively integrated into the Windows Hello experience, making phishing-resistant passkey authentication more seamless on Windows devices. Synced passkeys and passkey profiles are generally available, passkey integration into Windows Hello is in preview. 
  • Entra external Multi-Factor Authentication (MFA) allows organizations to connect external MFA providers directly with Microsoft Entra so they can leverage pre-existing MFA investments or use highly specialized MFA methods. Now generally available.
  • Entra adaptive risk remediation helps users securely regain access without help-desk friction through automatic self-remediation across authentication methods, adapting to where they are in their modern authentication journey. Generally available in April.
  • Unified identity security provides end-to-end coverage across identity infrastructure, the identity control plane, and identity threat detection and response (ITDR)—built for rapid response and real-time decisions. The new identity security dashboard in Microsoft Defender highlights the most impactful insights across human and non-human identities to help accelerate response, and the new identity risk score unifies account-level risk signals to deliver a comprehensive view of user risk to inform real-time access decisions and SecOps investigations. Now available in preview.

Safeguard sensitive data across AI workflows

With AI embedded in everyday work, sensitive data increasingly moves through prompts, responses, and grounding flows—often faster than policies can keep up. Security teams need visibility into how AI interacts with data as well as the ability to stop data oversharing and data leakage. Microsoft brings data security directly into the AI control plane, giving organizations clear insight into risk, real-time enforcement at the point of use, and the confidence to enable AI responsibly across the enterprise. New Microsoft Purview capabilities include:

  • Expanded Purview data loss prevention for Microsoft 365 Copilot helps block sensitive information such as PII, credit card numbers, and custom data types in prompts from being processed or used for web grounding. Generally available March 31.
  • Purview embedded in Copilot Control System provides a unified view of AI‑related data risk directly in the Microsoft 365 Admin Center. Generally available in April.
  • Purview customizable data security reports enable tailored reporting and drilldowns to prioritized data security risks. Available in preview March 31.

Defend against threats across endpoints, cloud, and AI services

Security teams need proactive 24/7 threat protection that disrupts threats early and contains them automatically. Microsoft is extending predictive shielding to proactively limit impact and reduce exposure, expanding our container security capabilities, and introducing network-layer protection against malicious AI prompts.

  • Entra Internet Access prompt injection protection helps block malicious AI prompts across apps and agents by enforcing universal network-level policies. Generally available March 31.
  • Enhanced Defender for Cloud container security includes binary drift and antimalware prevention to close gaps attackers exploit in containerized environments. Now available in preview.
  • Defender for Cloud posture management adds broader coverage and supports Amazon Web Services and Google Cloud Platform, delivering security recommendations and compliance insights for newly discovered resources. Available in preview in April.
  • Defender predictive shielding dynamically adjusts identity and access policies during active attacks, reducing exposure and limiting impact. Now available in preview.

Defend with agents and experts

To defend in the agentic age, we need agentic defense. This means having an agentic defense platform and security agents embedded directly into the flow of work, augmented by deep human expertise and comprehensive security services when you need them.

Agents built into the flow of security work

Security teams move fastest with targeted help where and when work is happening. As alerts surface and investigations unfold across identities, data, endpoints, and cloud workloads, AI-powered assistance needs to operate alongside defenders. With Security Copilot now included in Microsoft 365 E5 and E7, we are empowering defenders with agents embedded directly into daily security and IT operations that help accelerate response and reduce manual effort so they can focus on what matters most.

New agents available now include:

  • Security Analyst Agent in Microsoft Defender helps accelerate threat investigations by providing contextual analysis and guided workflows. Available in preview March 26.
  • Security Alert Triage Agent in Microsoft Defender builds on the phishing triage agent's capabilities and extends them to cloud and identity, autonomously analyzing, classifying, prioritizing, and resolving repetitive low-value alerts at scale. Available in preview in April.
  • Conditional Access Optimization Agent in Microsoft Entra enhancements add context-aware recommendations, deeper analysis, and phased rollout to strengthen identity security. Agent generally available, enhancements now available in preview.
  • Data Security Posture Agent in Microsoft Purview enhancements include a credential scanning capability that can be used to proactively detect credential exposure in your data. Now available in preview.
  • Data Security Triage Agent in Microsoft Purview enhancements include an advanced AI reasoning layer and improved interpretation of custom Sensitive Information Types (SITs), to improve agent outputs during alert triage. Agent generally available, enhancements available in preview March 31.
  • Over 15 new partner-built agents extend Security Copilot with additional capabilities, all available in the Security Store.

Scale with an agentic defense platform

To help defenders and agents work together in a more coordinated, intelligence-driven way, Microsoft is expanding Sentinel, the agentic defense platform, to unify context, automate end-to-end workflows, and standardize access, governance, and deployment across security solutions.

  • Sentinel data federation powered by Microsoft Fabric lets teams investigate external security data in place in Databricks, Microsoft Fabric, and Azure Data Lake Storage while preserving governance. Now available in preview.
  • Sentinel playbook generator with natural language orchestration helps accelerate investigations and automate complex workflows. Now available in preview.
  • Sentinel granular delegated administrator privileges and unified role-based access control enable secure, scalable management for partners and enterprise customers with cross-tenant collaboration. Now available in preview.
  • Security Store embedded in Purview and Entra makes it easier to discover and deploy agents directly within existing security experiences. Generally available March 31.
  • Sentinel custom graphs powered by Microsoft Fabric enable organization-specific views of the relationships across your environment. Now available in preview.
  • Sentinel model context protocol (MCP) entity analyzer helps automate faster with natural language and harnesses the flexibility of code to accelerate responses. Generally available in April.

Strengthen with experts

Even the most mature security organizations face moments that call for deeper partnership—a sophisticated attack, a complex investigation, a situation where seasoned expertise alongside your team makes all the difference. The Microsoft Defender Experts Suite brings together expert-led services—technical advisory, managed extended detection and response (MXDR), and end-to-end proactive and reactive incident response—to help you defend against advanced cyber threats, build long-term resilience, and modernize security operations with confidence.

Apply Zero Trust for AI

Zero Trust has always been built on three principles: verify explicitly, use least privilege, and assume breach. As AI becomes embedded across your entire environment—from the models you build on, to the data they consume, to the agents that act on your behalf—applying those principles has never been more critical. At RSAC 2026, we’re extending our Zero Trust architecture across the full AI lifecycle—from data ingestion and model training to deployment and agent behavior. And we’re making it actionable with an updated Zero Trust for AI reference architecture, workshop, assessment tool, and new patterns and practices articles to help you improve your security posture.

See you at RSAC

If you’re joining the global security community in San Francisco for RSAC 2026 Conference, we invite you to connect with us. Join us at our Microsoft Pre-Day event and stop by our booth at the RSAC Conference North Expo (N-5744) to explore our latest innovations across Microsoft Agent 365, Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft Sentinel, and Microsoft Security Copilot and see firsthand how we can help your organization secure agents, secure your foundation, and help you defend with agents and experts. The future of security is ambient, autonomous, and built for the era of AI. Let’s build it together.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Based on Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

2Microsoft Fiscal Year 2026 First Quarter Earnings Conference Call and Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call

The post Secure agentic AI end-to-end appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/CNcMZ0Y
via IFTTT

Critical Langflow Flaw CVE-2026-33017 Triggers Attacks within 20 Hours of Disclosure

A critical security flaw impacting Langflow has come under active exploitation within 20 hours of public disclosure, highlighting the speed at which threat actors weaponize newly published vulnerabilities.

The security defect, tracked as CVE-2026-33017 (CVSS score: 9.3), is a case of missing authentication combined with code injection that could result in remote code execution.

"The POST /api/v1/build_public_tmp/{flow_id}/flow endpoint allows building public flows without requiring authentication," according to Langflow's advisory for the flaw.

"When the optional data parameter is supplied, the endpoint uses attacker-controlled flow data (containing arbitrary Python code in node definitions) instead of the stored flow data from the database. This code is passed to exec() with zero sandboxing, resulting in unauthenticated remote code execution."

The vulnerability affects all versions of the open-source artificial intelligence (AI) platform up to and including 1.8.1. It has been addressed in the development version 1.9.0.dev8.

Security researcher Aviral Srivastava, who discovered and reported the flaw on February 26, 2026, said it's distinct from CVE-2025-3248 (CVSS score: 9.8), another critical bug in Langflow that abused the /api/v1/validate/code endpoint to execute arbitrary Python code without requiring any authentication. That earlier flaw has since come under active exploitation, per the U.S. Cybersecurity and Infrastructure Security Agency (CISA).

"CVE-2026-33017 is in /api/v1/build_public_tmp/{flow_id}/flow," Srivastava explained, adding that the root cause stems from the use of the same exec() call as CVE-2025-3248 at the end of the chain.

"This endpoint is designed to be unauthenticated because it serves public flows. You can't just add an auth requirement without breaking the entire public flows feature. The real fix is removing the data parameter from the public endpoint entirely, so public flows can only execute their stored (server-side) flow data and never accept attacker-supplied definitions."
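The difference between the broken behavior and the fix described above can be sketched in a few lines of Python. This is an illustrative toy, not Langflow's actual code; all names here (STORED_FLOWS, build_flow_vulnerable, build_flow_fixed) are hypothetical:

```python
# Hypothetical sketch of the vulnerability class from the advisory: a public
# endpoint that prefers caller-supplied "data" over the stored flow definition,
# then passes node code to exec() with no sandboxing.

STORED_FLOWS = {"flow-1": {"nodes": [{"code": "result = 1 + 1"}]}}

def build_flow_vulnerable(flow_id, data=None):
    # Trusts the caller: attacker-controlled flow data wins over the database.
    flow = data if data is not None else STORED_FLOWS[flow_id]
    ns = {}
    for node in flow["nodes"]:
        exec(node["code"], ns)   # arbitrary Python runs here
    return ns.get("result")

def build_flow_fixed(flow_id, data=None):
    # The fix pattern: public flows execute only their stored, server-side
    # definition; any client-supplied `data` is ignored entirely.
    flow = STORED_FLOWS[flow_id]
    ns = {}
    for node in flow["nodes"]:
        exec(node["code"], ns)
    return ns.get("result")

# An "attacker" payload changes behavior only in the vulnerable version:
payload = {"nodes": [{"code": "result = 'pwned'"}]}
print(build_flow_vulnerable("flow-1", payload))  # 'pwned'
print(build_flow_fixed("flow-1", payload))       # 2
```

The point of the fixed version is exactly what the researcher describes: the endpoint stays unauthenticated, but it can no longer be steered by attacker-supplied definitions.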

Successful exploitation could allow an attacker to send a single HTTP request and obtain arbitrary code execution with the full privileges of the server process. With this privilege in place, the threat actor can read environment variables, access or modify files to inject backdoors or erase sensitive data, and even obtain a reverse shell.

Cloud security firm Sysdig said it observed the first exploitation attempts targeting CVE-2026-33017 in the wild within 20 hours of the advisory's publication on March 17, 2026.

"No public proof-of-concept (PoC) code existed at the time," Sysdig said. "Attackers built working exploits directly from the advisory description and began scanning the internet for vulnerable instances. Exfiltrated information included keys and credentials, which provided access to connected databases and potential software supply chain compromise."

Threat actors have also been observed moving from automated scanning to custom Python scripts that extract data from "/etc/passwd" and deliver a next-stage payload hosted on "173.212.205[.]251:8443" to facilitate credential harvesting. Staging the malware for delivery once a vulnerable target is identified suggests planning on the part of the threat actor.

"This is an attacker with a prepared exploitation toolkit moving from vulnerability validation to payload deployment in a single session," Sysdig noted. It's currently not known who is behind the attacks.

The 20-hour window between advisory publication and first exploitation aligns with an accelerating trend that has seen the median time-to-exploit (TTE) shrinking from 771 days in 2018 to just hours in 2024.

According to Rapid7's 2026 Global Threat Landscape Report, the median time from publication of a vulnerability to its inclusion in CISA's Known Exploited Vulnerabilities (KEV) catalog dropped from 8.5 days to five days over the past year.

"This timeline compression poses serious challenges for defenders. The median time for organizations to deploy patches is approximately 20 days, meaning defenders are exposed and vulnerable for far too long," it added. "Threat actors are monitoring the same advisory feeds that defenders use, and they are building exploits faster than most organizations can assess, test, and deploy patches. Organizations must completely reconsider their vulnerability programs to meet reality."

Users are advised to update to the latest patched version as soon as possible, audit environment variables and secrets on any publicly exposed Langflow instance, rotate keys and database passwords as a precautionary measure, monitor for outbound connections to unusual callback services, and restrict network access to Langflow instances using firewall rules or a reverse proxy with authentication.

The exploitation activity targeting CVE-2025-3248 and CVE-2026-33017 underscores how AI workloads are landing in attackers' crosshairs owing to their access to valuable data, integration within the software supply chain, and insufficient security safeguards.

"CVE-2026-33017 [...] demonstrates a pattern that is becoming the norm rather than the exception: critical vulnerabilities in popular open-source tools are weaponized within hours of disclosure, often before public PoC code is even available," Sysdig concluded.



from The Hacker News https://ift.tt/lSXC7r9
via IFTTT

The Good, the Bad and the Ugly in Cybersecurity – Week 12

The Good | Operation Synergia III Disrupts Malicious Networks & the EU Sanctions State-Sponsored Attackers

Operation Synergia III, an Interpol-led crackdown spanning July 2025 to January 2026, has disrupted cybercrime infrastructure across the globe. Authorities across 72 countries sinkholed 45,000 malicious IP addresses and seized 212 devices and servers, resulting in 94 arrests and 110 ongoing investigations.

The operation focused on taking down servers used in connection to extensive phishing, ransomware, malware, and fraud networks. Regional actions highlighted the breadth of the cyber activity: Bangladesh police arrested 40 suspects tied to scams and identity theft, while law enforcement in Togo dismantled a fraud ring engaged in social engineering, including romance scams and sextortion.


In Macau, investigators uncovered over 33,000 phishing sites impersonating casinos, banks, and government services, all poised to steal financial data. Building on earlier phases of the operation and complementary operations like Red Card 2.0, Serengeti, and Africa Cyber Surge, these joint efforts point to the growing sophistication of cybercrime and the critical role that coordinated international action plays in stemming its reach.

To further hinder threat actors, the Council of the European Union has sanctioned three companies and two individuals tied to major cyberattacks on critical infrastructure.

China-linked Integrity Technology Group supported operations that compromised over 65,000 devices across six EU countries, while Anxun Information Technology (aka i-SOON) provided hacker-for-hire services targeting governments. Two of its co-founders have also been sanctioned for their part in executing the cyberattacks.

Iran-based company Emennet Pasargad has also been sanctioned for multiple influence campaigns and breaches, including phishing and disinformation efforts.

The Bad | Researchers Uncover ‘DarkSword’ iOS Exploit Stealing Sensitive Personal Data

A new iOS exploit chain and payload dubbed ‘DarkSword’ is stealing sensitive personal information from iPhones running iOS 18.4 to 18.7. The toolkit is linked to multiple threat actors, including the Russia-aligned UNC6353, which previously leveraged a similar exploit chain called Coruna. DarkSword was uncovered while researchers analyzed Coruna’s infrastructure.

In early November 2025, UNC6748 used DarkSword against Saudi Arabian users via a Snapchat-themed website. Subsequently, other attackers linked to PARS Defense, a Turkish commercial surveillance firm, started running the exploit kit on Apple devices. Early this year, cases involving DarkSword were spotted across Malaysia and, most recently, it has been leveraged to target Ukrainian users.

The snapshare[.]chat decoy page (Source: GTIG)

DarkSword exploits six documented vulnerabilities (CVE-2025-31277, CVE-2025-43529, CVE-2026-20700, CVE-2025-14174, CVE-2025-43510, CVE-2025-43520), which Apple has since patched. Threat actors have used them to deliver at least three malware families: GHOSTBLADE (a data miner collecting crypto, messages, photos, and locations), GHOSTKNIFE (a backdoor exfiltrating accounts and communications), and GHOSTSABER (a JavaScript backdoor enumerating devices and executing code).

The delivery chain begins via Safari exploits, gaining kernel access and executing a main orchestrator (pe_main.js) that injects modules into privileged iOS services, including App Access, Wi-Fi, Keychain, and iCloud. Collected data spans passwords, messages, contacts, call history, location, browser history, Apple Health, and cryptocurrency wallets. The malware removes traces after exfiltration, indicating a focus on rapid theft rather than persistent surveillance.

Experts note that both DarkSword and Coruna exhibit signs of large language model (LLM)-assisted code expansion, showing professional design with maintainability and modularity in mind. Users are advised to update to iOS 26.3.1 and enable Lockdown Mode if at high risk.

The Ugly | Interlock Ransomware Exploits Cisco FMC Zero-Day to Breach Enterprise Firewalls

The Interlock ransomware group has been actively exploiting a critical remote code execution (RCE) zero-day in Cisco’s Secure Firewall Management Center (FMC) software since late January 2026. The vulnerability, tracked as CVE-2026-20131 (CVSS: 10.0), allows unauthenticated attackers to execute arbitrary code with root privileges on unpatched devices due to insecure deserialization of a user-supplied Java byte stream. Cisco has since issued a patch, urging customers to update immediately.

Interlock ransomware group is now exploiting a Cisco firewall bug patched on March 4

The bug is a CVSSv3 10/10 RCE in the Cisco Secure Firewall Management Center (FMC) Software: sec.cloudapps.cisco.com/security/cen…


— Catalin Cimpanu (@campuscodi.risky.biz) 19 March 2026 at 10:42

Interlock, first seen in September 2024, has a history of high-profile attacks, including deploying the NodeSnake remote access trojan (RAT) against U.K. universities. The group has claimed responsibility for incidents affecting organizations such as DaVita, Kettering Health, the Texas Tech University System, and the city of Saint Paul, Minnesota. IBM X-Force researchers recently noted Interlock’s deployment of a new AI-assisted malware strain called Slopoly, highlighting the group’s evolving capabilities.

Latest reports explain that Interlock exploited the FMC flaw 36 days before its public disclosure, beginning on January 26, giving operators a head start to compromise firewalls before defenders were aware. This early access allowed attackers to operate undetected, underlining the danger of zero-day vulnerabilities.

Cisco has faced a series of zero-day exploits in 2026 so far. Earlier this year, maximum-severity flaws in Cisco AsyncOS email appliances, Unified Communications, and Catalyst SD-WAN were patched after being actively exploited, allowing attackers to bypass authentication, compromise controllers, and insert malicious peers.

The most recent incidents affecting FMC demonstrate both Interlock’s aggressive targeting of enterprise networks and the importance of rapid patch management and coordinated vulnerability disclosure. Organizations using Cisco FMC are strongly urged to apply the latest updates to mitigate ongoing risk.



from SentinelOne https://ift.tt/T1Yuymv
via IFTTT

Google Adds 24-Hour Wait for Unverified App Sideloading to Reduce Malware and Scams

Google on Thursday announced a new "advanced flow" for Android sideloading that requires a mandatory 24-hour wait period to install apps from unverified developers in an attempt to balance openness with safety.

The new changes come against the backdrop of a developer verification mandate the tech giant announced last year that requires all Android apps to be registered by verified developers to be installed on certified Android devices. The move, it added, was done to flag bad actors faster and prevent them from distributing malware.

This also includes potential scenarios where cybercriminals trick unsuspecting users who sideload such apps into granting them elevated privileges that make it possible to turn off Play Protect, the anti-malware feature built into all Google-certified Android devices.

However, the mandatory registration requirements have drawn criticism from over 50 app developers and marketplaces, including F-Droid, Brave, the Electronic Frontier Foundation, Proton, the Tor Project, and Vivaldi. They argue the requirements risk creating friction and barriers to entry, and raise privacy and surveillance concerns given the lack of clarity about what personal information developers must provide; how that data will be stored, secured, and used; and whether it could be subject to government requests or legal processes.

To address some of these concerns, Google has emphasized that the newly developed advanced flow lets power users retain the ability to sideload apps from unverified developers via a one-time process that requires them to follow the steps below -

  • Enable developer mode in system settings.
  • Confirm that they are taking this step of their own volition and are not being coached.
  • Restart the phone and re-authenticate so as to prevent a scammer from monitoring what actions a user is taking.
  • Wait for a 24-hour period and confirm that they are really making this change with biometric authentication or device PIN.
  • Install apps from unverified developers once users understand the risks, either indefinitely or for a period of seven days.

"In that 24-hour period, we think it becomes much harder for attackers to persist their attack," Android Ecosystem President, Sameer Samat, was quoted as saying to Ars Technica. "In that time, you can probably find out that your loved one isn’t really being held in jail or that your bank account isn’t really under attack."

Google also said it plans to offer free "limited distribution accounts" that let hobbyist developers and students share apps with up to 20 devices without having to "provide a government-issued ID or pay a registration fee."

It's worth noting that the aforementioned process does not apply to installs via the Android Debug Bridge (ADB). Limited distribution accounts for students and hobbyists, as well as advanced flow for users, will be available in August 2026, before the new developer verification requirements take effect the month after.

"We know a 'one size fits all' approach doesn't work for our diverse ecosystem," Google said. "We want to ensure that identity verification isn't a barrier to entry, so we’re providing different paths to fit your specific needs."

The development coincides with the emergence of a new Android malware called Perseus that's actively targeting users in Turkey and Italy with an aim to conduct device takeover (DTO) and financial fraud.

Over the past four months, at least 17 Android malware families have been detected in the wild. They include FvncBot, SeedSnatcher, ClayRat, Wonderland, Cellik, Frogblight, NexusRoute, ZeroDayRAT, Arsink (and its improved variant SURXRAT), deVixor, Phantom, Massiv, PixRevolution, TaxiSpy RAT, BeatBanker, Mirax, and Oblivion RAT.



from The Hacker News https://ift.tt/6OfUrNX
via IFTTT

The Importance of Behavioral Analytics in AI-Enabled Cyber Attacks

Artificial Intelligence (AI) is changing how individuals and organizations conduct many activities, including how cybercriminals carry out phishing attacks and iterate on malware. Now, cybercriminals are using AI to generate personalized phishing emails, deepfakes and malware that evade traditional detection by impersonating normal user activity and bypassing legacy security models. As a result, rule-based models alone are often insufficient for identity security against AI-enabled threats. Behavioral analytics must evolve beyond monitoring suspicious activity patterns over time into dynamic, identity-based risk modeling capable of identifying inconsistencies in real time.

Common risks introduced by AI-enabled attacks

AI-enabled cyber attacks introduce very different security risks compared to traditional cyber threats. By relying on automation and mimicking legitimate behavior, AI allows cybercriminals to scale their attacks while reducing obvious signals to remain undetected.

AI-powered phishing and social engineering

Unlike traditional phishing attacks that use generic messaging, AI enables personalized phishing messages at scale using public data, impersonating the writing styles of executives or creating context-aware messages referencing real events. These AI-powered attacks can reduce obvious red flags, slip past some filtering approaches and rely on psychological manipulation instead of malware delivery, significantly increasing the risk of credential theft and financial fraud.

Automated credential abuse and account takeovers

AI-enhanced credential abuse can optimize login attempts while avoiding triggering lockout thresholds, mimicking human-like timing between authentication attempts and targeting privileged accounts based on context. Since these attacks use compromised credentials, they often appear valid and blend into normal login activity, making identity security a crucial component of modern security strategies.

AI-assisted malware

Before cybercriminals could use AI to accelerate malware development and deployment, they had to manually modify code signatures and spend considerable time creating new variants. AI can further speed up variation, scripting and adaptation. With modern adaptive malware, cybercriminals can automatically modify code to avoid detection, change behavior based on the environment and generate new exploit variants with little to no manual effort. Since traditional signature-based detection models struggle against continuously evolving code, organizations must rely on behavioral patterns rather than static indicators.

How traditional behavioral monitoring can fail against AI-based attacks

Traditional monitoring was designed to detect cyber threats driven by malware, known security vulnerabilities and visible behavioral anomalies. Here are some of the ways traditional behavioral monitoring falls short against AI-enabled attacks:

  • Signature-based detection can’t identify modern threats: Signature-based tools rely on known signs of compromise. AI-assisted malware constantly rewrites its own code and automatically generates new variants, making static code signatures obsolete.
  • Rule-based systems rely on predefined thresholds: Many behavioral monitoring systems depend on rules, such as login frequency or geographic location. AI-assisted cybercriminals adjust their behavior to remain within set limits, conducting malicious activity over a longer period of time and mimicking human behavior to avoid detection.
  • Perimeter-based models fail when compromised credentials are involved: Traditional perimeter-based security models assume trust once a user or device is authenticated. When cybercriminals authenticate with legitimate credentials, these outdated models treat them as valid users, allowing them to carry out malicious actions.
  • AI-based attacks are designed to appear normal: AI-based cyber threats intentionally blend in by operating within assigned permissions, following anticipated workflows and executing their activities gradually. While each isolated action may seem legitimate, the risk becomes visible only when activity is evaluated against behavioral context over time.
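The threshold-evasion point above can be made concrete with a toy comparison (hypothetical caps and event data, not any product's rule set): a per-hour login cap never fires on "low and slow" activity, while a cumulative count over a sliding window does.

```python
# Toy illustration of why fixed thresholds miss "low and slow" attacks.
from collections import deque

def exceeds_hourly_cap(events, cap=10):
    # Classic rule: alert if any single hour has more than `cap` attempts.
    per_hour = {}
    for t in events:
        per_hour[t // 3600] = per_hour.get(t // 3600, 0) + 1
    return any(n > cap for n in per_hour.values())

def exceeds_window_total(events, window=24 * 3600, total=50):
    # Behavioral view: alert on the cumulative count in a sliding window.
    q = deque()
    for t in sorted(events):
        q.append(t)
        while q and t - q[0] > window:
            q.popleft()
        if len(q) > total:
            return True
    return False

# 5 attempts per hour for 24 hours: 120 attempts, never a single busy hour.
slow_attack = [hour * 3600 + i * 600 for hour in range(24) for i in range(5)]
print(exceeds_hourly_cap(slow_attack))    # False: each hour stays under cap
print(exceeds_window_total(slow_attack))  # True: 120 attempts in 24 hours
```

An attacker who knows (or probes) the hourly cap can stay under it indefinitely; the cumulative view catches the pattern because it evaluates behavior over time rather than per bucket.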

Why behavioral analytics must shift for AI-based attacks

The shift to modern behavioral analytics requires an evolution from simple threat detection into dynamic, context-aware risk modeling capable of identifying subtle privilege misuse.

Identity-based attacks require context

To appear normal, AI-driven cybercriminals often use credentials compromised through phishing or credential abuse, work from known devices or networks and conduct malicious activity over time to avoid detection. Modern behavioral analytics must evaluate whether even the slightest change in behavior is consistent with a user’s typical behavioral patterns. Advanced behavioral models establish baselines, assess real-time activity and combine identity, device and session context.
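As a rough sketch of what baseline-driven scoring can look like (illustrative only; production systems combine far more signals), the following builds a per-user baseline of login hours and source networks, then flags sessions that deviate even when the credentials themselves are valid:

```python
# Minimal behavioral-baseline sketch: learn typical login hours and source
# networks per user, then score new sessions against that baseline.
# All names and weights are hypothetical.
from collections import Counter
from statistics import mean, pstdev

class UserBaseline:
    def __init__(self):
        self.hours = []            # hours-of-day of past logins
        self.networks = Counter()  # source networks seen before

    def observe(self, hour, network):
        self.hours.append(hour)
        self.networks[network] += 1

    def risk(self, hour, network):
        score = 0.0
        mu, sigma = mean(self.hours), pstdev(self.hours) or 1.0
        if abs(hour - mu) / sigma > 2:   # unusual time of day for this user
            score += 0.5
        if self.networks[network] == 0:  # never-before-seen source network
            score += 0.5
        return score

baseline = UserBaseline()
for h in [9, 10, 9, 11, 10, 9]:          # typical weekday logins
    baseline.observe(h, "corp-vpn")

print(baseline.risk(10, "corp-vpn"))          # 0.0: matches the baseline
print(baseline.risk(3, "residential-proxy"))  # 1.0: off-hours + new network
```

Note that the high-risk session uses perfectly valid credentials; it is only the combination of identity context (who), time (when), and network (where from) that surfaces the anomaly.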

Monitoring must extend across the entire stack

Once cybercriminals gain access to systems through compromised, weak or reused credentials, they focus on gradually expanding their access. Behavioral visibility needs to cover the full security stack, including privileged access, cloud infrastructure, endpoints, applications and administrative accounts. For behavioral analytics to be more effective against AI-based cyber attacks, organizations must enforce zero-trust security and assume that no user or device should have implicit trust or automatic authentication based on network location.

AI tools not only empower external cybercriminals but also make it easier for malicious insiders to act within an organization’s network. Malicious insiders can use AI to automate credential harvesting, identify sensitive information or generate believable phishing content. Since insiders often operate with legitimate permissions, detecting privilege misuse requires identifying behavioral anomalies like access beyond defined responsibilities, activity outside normal business hours and repeated activity within critical systems. Eliminating standing access by enforcing Just-in-Time (JIT) access, session monitoring and session recording helps organizations limit exposure and reduce the impact of compromised accounts and insider misuse.
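A minimal sketch of the Just-in-Time idea mentioned above, assuming a grant object with an explicit expiry (the class and names are illustrative, not any vendor's API):

```python
# JIT access sketch: privileges carry an explicit expiry instead of standing
# forever, so a stolen credential is only useful inside a short window.
import time

class JITGrant:
    def __init__(self, user, resource, ttl_seconds):
        self.user = user
        self.resource = resource
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self):
        # Access checks compare against the expiry on every use.
        return time.monotonic() < self.expires_at

# Short TTL chosen only to make expiry observable in this demo.
grant = JITGrant("admin-ashley", "prod-db", ttl_seconds=0.05)
print(grant.is_valid())   # True: inside the approved window
time.sleep(0.1)
print(grant.is_valid())   # False: access expired automatically
```

Eliminating standing privileges this way shrinks the window in which a compromised account, or a malicious insider, can act, which is exactly the exposure-limiting effect described above.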

Secure identities against autonomous AI-based cyber attacks

At a time when AI agents can create convincing social engineering campaigns, test credentials at scale and reduce the hands-on effort required to run attacks, AI-enabled cyber attacks are becoming increasingly automated. Protecting both human and Non-Human Identities (NHIs) now requires more than authentication; organizations must implement continuous, context-aware behavioral analysis and granular access controls. Modern Privileged Access Management (PAM) solutions like Keeper consolidate behavioral analytics, real-time session monitoring and JIT access to secure identities across hybrid and multi-cloud environments.

Note: This article was thoughtfully written and contributed for our audience by Ashley D’Andrea, Content Writer at Keeper Security.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/wt8UIVh
via IFTTT

DoJ Disrupts 3 Million-Device IoT Botnets Behind Record 31.4 Tbps Global DDoS Attacks

The U.S. Department of Justice (DoJ) on Thursday announced the disruption of command-and-control (C2) infrastructure used by several Internet of Things (IoT) botnets like AISURU, Kimwolf, JackSkid, and Mossad as part of a court-authorized law enforcement operation.

The effort also saw authorities from Canada and Germany targeting the operators behind these botnets, with a number of private sector firms, including Akamai, Amazon Web Services, Cloudflare, DigitalOcean, Google, Lumen, Nokia, Okta, Oracle, PayPal, SpyCloud, Synthient, Team Cymru, Unit 221B, and QiAnXin XLab assisting in the investigation efforts.

"The four botnets launched distributed denial-of-service (DDoS) attacks targeting victims around the world," the DoJ said. "Some of these attacks measured approximately 30 Terabits per second, which were record-breaking attacks."

In a report last month, Cloudflare attributed AISURU/Kimwolf to a massive 31.4 Tbps DDoS attack that occurred in November 2025 and lasted only 35 seconds. Towards the end of last year, the botnet is also assessed to have engaged in hyper-volumetric DDoS attacks that had an average size of 3 billion packets per second (Bpps), 4 Tbps, and 54 million requests per second (Mrps).

Independent security journalist Brian Krebs also traced the administrator of Kimwolf to Jacob Butler (aka Dort), a 23-year-old from Ottawa, Canada. Butler told Krebs he has not used the Dort persona since 2021 and claimed someone is impersonating him after compromising his old account.

Butler also said he "mostly stays home and helps his mom around the house because he struggles with autism and social interaction." According to Krebs, the other prime suspect is a 15-year-old residing in Germany. No arrests have been announced.

The botnet has conscripted more than 2 million Android devices into its network, most of which are compromised, off-brand Android TVs. In all, the four botnets are estimated to have infected no less than 3 million devices worldwide, such as digital video recorders, web cameras, or Wi-Fi routers, of which hundreds of thousands are located in the U.S.

"The Kimwolf and JackSkid botnets are accused of targeting and infecting devices which are traditionally 'firewalled' from the rest of the internet. The infected devices were enslaved by the botnet operators," the DoJ said. "The operators then used a 'cybercrime as a service' model to sell access to the infected devices to other cyber criminals."

These infected devices were then used to conduct DDoS attacks against targets of interest across the world. Court documents allege that the four Mirai botnet variants have issued hundreds of thousands of DDoS attack commands:

  • AISURU - >200,000 DDoS attack commands
  • Kimwolf - >25,000 DDoS attack commands
  • JackSkid - >90,000 DDoS attack commands
  • Mossad - >1,000 DDoS attack commands

"Kimwolf represented a fundamental shift in how botnets operate and scale. Unlike traditional botnets that scan the open internet for vulnerable devices, Kimwolf exploited a novel attack vector: residential proxy networks," Tom Scholl, VP/Distinguished Engineer at AWS, said in a post shared on LinkedIn.

"By infiltrating home networks through compromised devices—including streaming TV boxes and other IoT devices — the botnet gained access to local networks that are typically protected from external threats by home routers."

Akamai said the hyper-volumetric botnets generated attacks exceeding 30 Tbps, 14 billion packets per second, and 300 Mrps, adding that cybercriminals leveraged these botnets to launch hundreds of thousands of attacks and demand extortion payments from victims in some cases.

"These attacks can cripple core internet infrastructure, cause significant service degradation for ISPs and their downstream customers, and even overwhelm high-capacity cloud-based mitigation services," the web infrastructure company said.



from The Hacker News https://ift.tt/UQ9K1vW
via IFTTT

Thursday, March 19, 2026

When session recording stops scaling

When a serious breach or security incident hits, enterprises rarely suffer from a lack of data. They suffer from a lack of usable evidence. And the longer it takes to find that evidence, the higher the business risk: audit exposure grows, downtime extends, and customer (not to mention government regulator) trust erodes.

Where investigations break

In complex incidents, system and security logs are useful but incomplete. They may show who authenticated or when a system failed, but they rarely show what actually happened on screen.

Public cases have shown how quickly forensic certainty can break down when trusted access is misused, and it happens far more often than organizations care to admit, or are currently able to identify. In one widely reported incident at a major networking company, an employee with legitimate administrative access stole sensitive data and altered logs, leaving the company unable to fully determine what had happened. This risk is not unique to one company: any organization with privileged users, sensitive data, and incomplete audit visibility can face the same problem. The damage comes not just from the misuse itself, but from the time and uncertainty that follow.

This is where session recording matters. When logs are incomplete or cannot be trusted, recorded visual evidence gives teams a clearer way to reconstruct what happened. It helps security, compliance, and operations move from assumption to evidence.

At enterprise scale, reviewing a mountain of session recordings to find when and where the issue occurred creates a new bottleneck. Investigations drag, operational reviews slow down, and critical signals stay buried in hours of video. The evidence exists, but manual playback does not scale.

Turning recordings into usable evidence

Thankfully, AI now makes it possible to convert recorded sessions into structured findings, summaries, and prioritized evidence. Instead of asking expensive analysts and auditors to watch hours of recordings end to end, Citrix helps them rapidly zero in on what matters: what happened, where the risk is, and which sessions need review.

That shifts session recording from a passive archive to an active investigation tool.

Today, Citrix is pleased to announce the general availability of AI Session Recording in Citrix Virtual Apps and Desktops and Citrix DaaS.

Two outcomes from one evidence source

The value is straightforward: faster security decisions and less operational drag from session review.

Incident response

Incident response teams do not need more footage. They need faster answers. Risk often sits on a spectrum, from accidental exposure to deliberate insider misuse. The challenge is not just detecting activity. It is understanding the context fast enough to decide what matters.

Traditional logs often show the action but miss the context. AI-powered session insights help surface sensitive data exposure, suspicious workflows, and behavior that warrants review. That lets analysts focus on the sessions that matter most instead of watching hours of recordings.

Operational management

Manual session review is expensive, inconsistent, and slow. Most organizations already have the recordings. The problem is the human effort required to extract useful evidence from them.

Citrix helps teams see where time is being lost and where operational friction is building:

  • Work patterns: app-level breakdown plus session highlights
  • Idle vs active: consistent engagement summary
  • Distractions: time outside core apps
  • Tool switching: patterns that indicate workflow friction

That gives operations leaders a clearer view of where work is slowing down and gives security teams a faster way to separate routine activity from sessions that warrant review.
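As a rough illustration of the idle-versus-active summary described above (this is a generic sketch, not Citrix's actual algorithm), one common approach is to take timestamped user-input events from a session and treat any gap longer than a threshold as idle time:

```python
# Illustrative sketch (not Citrix's implementation): derive an
# idle-vs-active summary from timestamped user-input events, the kind
# of signal a recorded session can expose. Gaps longer than a threshold
# count as idle; the threshold value itself is an assumption.

IDLE_THRESHOLD_S = 300  # assume 5 minutes without input counts as idle

def summarize_activity(event_times, session_start, session_end,
                       idle_threshold=IDLE_THRESHOLD_S):
    """Return (active_seconds, idle_seconds) for one session."""
    points = [session_start] + sorted(event_times) + [session_end]
    active = idle = 0.0
    for prev, cur in zip(points, points[1:]):
        gap = cur - prev
        if gap > idle_threshold:
            # Credit the first `idle_threshold` seconds after an event
            # as engagement, and the remainder of the gap as idle.
            active += idle_threshold
            idle += gap - idle_threshold
        else:
            active += gap
    return active, idle

# Example: a one-hour session with a long quiet stretch in the middle.
active, idle = summarize_activity(
    event_times=[60, 120, 180, 3000],  # seconds from session start
    session_start=0, session_end=3600,
)
```

The same event stream could feed the other bullets as well, for example by grouping events per application to produce the app-level breakdown.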

Security and governance on your terms

Enterprise AI only works when governance is explicit. Citrix AI-powered session insights use a customer-configured AI model endpoint, so customers can choose the model provider and decide where that endpoint runs, in line with their privacy and internal control requirements. Customers control which AI endpoint is used for analysis and how prompts are configured for the insights generated.
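Conceptually, a customer-supplied endpoint configuration of the kind described above might look like the following. All field names here are hypothetical illustrations, not Citrix's actual schema; the point is that the endpoint, provider, and prompt wording stay under customer control:

```python
# Hypothetical sketch of the governance model described above: the
# customer, not the vendor, supplies the model endpoint and prompt
# configuration. Field names are invented for illustration only.

session_insights_config = {
    "model_endpoint": "https://ai.internal.example.com/v1/chat",  # customer-hosted
    "model_provider": "self-hosted",  # customer picks the provider
    "prompt_template": (
        "Summarize notable events in this session transcript, "
        "flagging sensitive-data exposure and unusual workflows."
    ),
    "data_residency": "eu-west",  # endpoint location under customer control
}

def validate_config(cfg):
    """Minimal check that the governance-critical fields are present."""
    required = {"model_endpoint", "model_provider", "prompt_template"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True
```

Keeping these fields in customer hands is what lets the analysis run against an internally approved model rather than an opaque vendor-chosen one.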

From recorded sessions to faster decisions

Citrix is not changing the purpose of session recording. It is making the control operationally usable at scale. By turning recorded sessions into structured evidence, organizations can reduce manual review, accelerate investigations, and respond with greater confidence across security, compliance, and operational workflows.

When the incident matters, the issue is not whether you captured the session. It is whether you can get to the truth fast enough to act.

Availability

AI-powered insights for Citrix Session Recording are available now for Cloud DaaS customers. For CVAD customers, on-premises availability is planned for Q2 2026. Additional details are in the product documentation.



from Citrix Blogs https://ift.tt/QLCytb0
via IFTTT