Monday, February 9, 2026

China-Linked UNC3886 Targets Singapore Telecom Sector in Cyber Espionage Campaign

The Cyber Security Agency (CSA) of Singapore on Monday revealed that the China-nexus cyber espionage group known as UNC3886 targeted its telecommunications sector.

"UNC3886 had launched a deliberate, targeted, and well-planned campaign against Singapore's telecommunications sector," CSA said. "All four of Singapore's major telecommunications operators ('telcos') – M1, SIMBA Telecom, Singtel, and StarHub – have been the target of attacks."

The development comes more than six months after Singapore's Coordinating Minister for National Security, K. Shanmugam, accused UNC3886 of striking high-value strategic threat targets. UNC3886 is assessed to be active since at least 2022, targeting edge devices and virtualization technologies to obtain initial access.

In July 2025, Sygnia disclosed details of a long-term cyber espionage campaign attributed to a threat cluster it tracks as Fire Ant and which shares tooling and targeting overlaps with UNC3886, stating the adversary infiltrates organizations' VMware ESXi and vCenter environments as well as network appliances.

Describing UNC3886 as an advanced persistent threat (APT) with "deep capabilities," the CSA said the threat actors deployed sophisticated tools to gain access into telco systems, in one instance even weaponizing a zero-day exploit to bypass a perimeter firewall and siphon a small amount of technical data to further its operational objectives. The exact specifics of the flaw were not disclosed.

In a second case, UNC3886 is said to have deployed rootkits to establish persistent access and conceal their tracks to fly under the radar. Other activities undertaken by the threat actor include gaining unauthorized access to "some parts" of telco networks and systems, including those deemed critical, although it's assessed that the incident was not severe enough to disrupt services.

CSA said it mounted a cyber operation dubbed CYBER GUARDIAN to counter the threat and limit the attackers' movement into telecom networks. It also emphasized that there is no evidence that the threat actor exfiltrated personal data such as customer records or cut off internet availability.

"Cyber defenders have since implemented remediation measures, closed off UNC3886’s access points, and expanded monitoring capabilities in the targeted telcos," the agency said.



from The Hacker News https://ift.tt/FsU8bpE
via IFTTT

ClawSec: Hardening OpenClaw Agents from the Inside Out

Autonomous agents are moving fast – from experimental side projects to real operational components inside development workflows and cloud environments. While agent capabilities are accelerating though, agent security itself has lagged behind. Most agent frameworks still assume implicit trust: There is trust in downloaded skills, prompts that evolve over time, and trust in agents not to quietly exfiltrate data or drift into unsafe behavior.

That assumption has already been proven wrong. Over the last week, researchers have already uncovered more than 200 malicious OpenClaw skills published in a few short days, all masquerading as legitimate utilities while delivering credential-sharing malware. Distributed through GitHub and OpenClaw’s official registry, these skills harvested API keys, cloud secrets, wallet data, and SSH credentials, highlighting how easily agent supply chains can be weaponized at scale.

This activity did not emerge in isolation. OpenClaw’s rapid growth, decentralized skill ecosystem, and deep system-level access have created a large, largely unmanaged attack surface. Skills are often installed directly from public repositories, documentation is trusted at face value, and agents are granted persistent memory and tool access that rivals traditional applications without equivalent security controls.

Since the assumption of implicit trust no longer holds in today’s landscape, ClawSec was designed to close that gap. ClawSec is an open-source security skill suite created to harden OpenClaw agents against prompt injection, supply chain compromise, configuration drift, and unsafe runtime behavior. Purpose-built as a “skill-of-skills”, ClawSec wraps agents in a continuously verified security layer, validating what it runs, how it changes, and where the data is allowed to go. Now live on GitHub, ClawSec is a zero-cost, privacy-first solution protecting both humans and autonomous agents via a single install.

Why We Are Rethinking Agent Security

Traditional application security models don’t map cleanly onto agentic systems. As agents are dynamic by nature, they pull skills from external sources, modify their own prompts, call tools autonomously, and adapt their behavior over time. That flexibility is powerful, but it also creates new attack surfaces. Some of the most common failure modes include:

  • Blind trust in skills downloaded from public repositories
  • Prompt injection attacks that manipulate agent behavior at runtime
  • Silent configuration drift that weakens guardrails over time
  • Unauthorized egress where agents send data externally without user awareness

In many cases, these issues go undetected because there is no continuous verification layer watching the agent’s internals. ClawSec is designed to be that layer.

Introducing ClawSec

ClawSec is the first open-source security suite, purpose-built for OpenClaw deployments. Rather than acting as a single defense mechanism, it functions as a composable security platform made up of modular skills that work in tandem with each other.

Operating as a “skill-of-skills”, ClawSec is a hardened shell around an agent. It doesn’t replace existing skills – it validates and protects them. Every security-relevant aspect of the agent is continuously checked, from supply chain integrity to runtime behavior and outbound communications.

ClawSec is a project by Prompt Security, a SentinelOne company, with its purpose rooted in security research, experimentation, and agentic workflow hardening. The goal here is not control, but resilience. By making agent security open, auditable, and driven by the community, ClawSec aims to set the highest bar possible for what “safe by default” means in today’s class of autonomous systems.

How It Works

ClawSec is a closed feedback loop where individual detections strengthen the entire ecosystem over time.

  1. Install – Load the ClawSec suite as a single security skill.
  2. Activate – Integrity checks, posture hardening, and audits begin immediately.
  3. Detect – Suspicious behavior, drift, or known threats are flagged.
  4. Decide – The agent requests permission before reporting or communicating.
  5. Protect – Verified reports become community advisories that protect other agents.

Secure Skill Integrity & Supply Chain Defense

One of the most critical risks in agent-based ecosystems is skill supply chain compromise. Agents routinely download and execute skills written by third parties, often without any cryptographic verification or checks. With ClawSec, teams eliminate blind trust.

In ClawSec, every security skill is distributed with check-sums and verified sources only. The suite supposes standard SKILL.md definitions as well as packaged .skill formats, guaranteeing compatibility with all existing OpenClaw workflows.

Once installed, it continuously monitors critical files such as TOOLS.md, prompt baselines, and configuration manifests for signs of drift. In the case of unexpected changes, the agent is alerted immediately. This approach treats skills the way modern security teams need to treat dependencies: verify first, trust second. ClawSec ensures that silent modifications are no longer invisible.

Proactive Posture Hardening & Automated Results

Security should not be reactive. ClawSec activates posture hardening as soon as it is installed, meaning agents’ configuration and runtime context for known prompt-injection vectors, unsafe defaults, and misconfigurations are scanned and identified instantly.

For teams that want to run automated audits on a recurring basis, optional watchdog skills can be turned on for daily, on-startup, or post-major changes frequency. These audits generate human-readable responses that explain exactly what is being checked, what has changed, and what needs attention.

Community-Driven Threat Intelligence Without Centralization

Agent security evolves quickly, and no single team can track every emerging threat. ClawSec empowers teams by integrating a live, community-driven security advisory feed powered by the National Vulnerability Database (NVD) and reports submitted via GitHub Issues. When a threat is reviewed and verified by maintainers, it becomes an advisory that any subscribed ClawSec agent can consume.

With no centralized server, updates flow through GitHub workflows, making the system transparent, auditable, and resilient. As soon as a verified advisory is published, agents can then react automatically, flagging risky skills, alerting users, and putting a block on execution paths tied to known issues.

Zero-Trust by Default

As a deliberate design choice, ClawSec has a zero-trust stance on communication, enforcing silence as the baseline. Unauthorized egress and telemetry blocked outright, so the agent does not phone home when an anomaly, threat, or compromise is detected. Instead, it pauses and asks for explicit user consent before any reporting or external communications.

This model, with no hidden chatter, no background data sharing, and no surprise outbound requests, ensures that agents remain accountable to their operators – not to unseen infrastructure. Security events are handled transparently, with human experts at the core of the decision loop.

Conclusion | Build Secure. Share Secure.

While ClawSec is shipped with a strong set of core security capabilities, its real power lies in its extensibility. Developers are encouraged to contribute new security skills, including prompt defences, modules to help enforce policies, auditing tools, and more. All submitted skills are reviewed, check-summed, and published to a shared catalog meaning everyone can benefit.

What this creates is a shared security baseline for autonomous agents, defined and maintained by a community of experts rather than staying locked behind a vendor wall. We are excited to launch ClawSec to help organizations both build and share securely as we improve the security standard together.

Secure your OpenClaw Agents with ClawSec
Drift detection, security recommendations, automated audits, and skill integrity verification.

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.



from SentinelOne https://ift.tt/TxbZapO
via IFTTT

⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

Cyber threats are no longer coming from just malware or exploits. They’re showing up inside the tools, platforms, and ecosystems organizations use every day. As companies connect AI, cloud apps, developer tools, and communication systems, attackers are following those same paths.

A clear pattern this week: attackers are abusing trust. Trusted updates, trusted marketplaces, trusted apps, even trusted AI workflows. Instead of breaking security controls head-on, they’re slipping into places that already have access.

This recap brings together those signals — showing how modern attacks are blending technology abuse, ecosystem manipulation, and large-scale targeting into a single, expanding threat surface.

⚡ Threat of the Week

OpenClaw announces VirusTotal Partnership — OpenClaw has announced a partnership with Google's VirusTotal malware scanning platform to scan skills that are being uploaded to ClawHub as part of a defense-in-depth approach to improve the security of the agentic ecosystem. The development comes as the cybersecurity community has raised concerns that autonomous artificial intelligence (AI) tools' persistent memory, broad permissions, and user‑controlled configuration could amplify existing risks, leading to prompt injections, data exfiltration, and exposure to unvetted components. This has also been complemented by the discovery of malicious skills on ClawHub, a public skills registry to augment the capabilities of AI agents, once again demonstrating that marketplaces are a gold mine for criminals who populate the store with malware to prey on developers. To make matters worse, Trend Micro disclosed that it observed malicious actors on the Exploit.in forum actively discussing the deployment of OpenClaw skills to support activities such as botnet operations. Another report from Veracode revealed that the number of packages on npm and PyPI with the name "claw" has increased exponentially from nearly zero at the start of the year to over 1,000 as of early February 2026, providing new avenues for threat actors to smuggle malicious typosquats. "Unsupervised deployment, broad permissions, and high autonomy can turn theoretical risks into tangible threats, not just for individual users but also across entire organizations," Trend Micro said. "Open-source agentic tools like OpenClaw require a higher baseline of user security competence than managed platforms." 

Powerful AI Agents

Bad Actors Are Using New AI Capabilities and Powerful AI Agents

Traditional firewalls and VPNs aren’t helping—instead, they’re expanding your attack surface and enabling lateral threat movement. They’re also more easily exploited with AI-powered attacks. It’s time for Zero Trust + AI.

Learn More ➝

🔔 Top News

  • German Agencies Warn of Signal Phishing — Germany's Federal Office for the Protection of the Constitution (aka Bundesamt für Verfassungsschutz or BfV) and Federal Office for Information Security (BSI) have issued a joint advisory warning of a malicious cyber campaign undertaken by a likely state-sponsored threat actor that involves carrying out phishing attacks over the Signal messaging app. The attacks have been mainly directed at high-ranking targets in politics, the military, and diplomacy, as well as investigative journalists in Germany and Europe. The attack chains exploit legitimate PIN and device linking features in Signal to take control of victims' accounts.
  • AISURU Botnet Behind 31.4 Tbps DDoS Attack — The botnet known as AISURU/Kimwolf has been attributed to a record-setting distributed denial-of-service (DDoS) attack that peaked at 31.4 Terabits per second (Tbps) and lasted only 35 seconds. The attack took place in November 2025, according to Cloudflare, which automatically detected and mitigated the activity. AISURU/Kimwolf has also been linked to another DDoS campaign codenamed The Night Before Christmas that commenced on December 19, 2025. In all, DDoS attacks surged by 121% in 2025, reaching an average of 5,376 attacks automatically mitigated every hour.
  • Notepad++ Hosting Infrastructure Breached to Distribute Chrysalis Backdoor — Between June and October 2025, threat actors quietly and very selectively redirected traffic from Notepad++'s updater program, WinGUp, to an attacker-controlled server that downloaded malicious executables. While the attacker lost their foothold on the third-party hosting provider's server on September 2, 2025, following scheduled maintenance where the server firmware and kernel were updated. However, the attackers still had valid credentials in their possession, which they used to continue routing Notepad++ update traffic to their malicious servers until at least December 2, 2025. The adversary specifically targeted the Notepad++ domain by taking advantage of its insufficient update verification controls that existed in older versions of Notepad++. The findings show that updates cannot be treated as trusted just because they come from a legitimate domain, as the blind spot can be abused as a vector for malware distribution. The sophisticated supply chain attack has been attributed to a threat actor known as Lotus Blossom. "Attackers prize distribution points that touch a large population," a Forrester analysis said. "Update servers, download portals, package managers, and hosting platforms become efficient delivery systems, because one compromise creates thousands of downstream victims."
  • DockerDash Flaw in Docker AI Assistant Leads to RCE — A critical-severity bug in Docker's Ask Gordon AI assistant can be exploited to compromise Docker environments. Called DockerDash, the vulnerability exists in the Model Context Protocol (MCP) Gateway's contextual trust, where malicious instructions embedded into a Docker image's metadata labels are forwarded to the MCP and executed without validation. This is made possible because the MCP Gateway does not distinguish between informational metadata and runnable internal instructions. Furthermore, the AI assistant trusts all image metadata as safe contextual information and interprets commands in metadata as legitimate tasks. Noma Security named the technique meta-context injection. It was addressed by Docker with the release of version 4.50.0 in November 2025.
  • Microsoft Develops Scanner to Detect Hidden Backdoors in LLMs — Microsoft has developed a scanner designed to detect backdoors in open-weight AI models in hopes of addressing a critical blind spot for enterprises that are dependent on third-party large language models (LLMs). The company said it identified three observable indicators that suggest the presence of backdoors in language models: a shift in how a model pays attention to a prompt when a hidden trigger is present, almost independently from the rest of the prompt; models tend to leak their own poisoned data, and partial versions of the backdoor can still trigger the intended response. "The scanner we developed first extracts memorized content from the model and then analyzes it to isolate salient substrings," Microsoft noted. "Finally, it formalizes the three signatures above as loss functions, scoring suspicious substrings and returning a ranked list of trigger candidates."

‎️‍🔥 Trending CVEs

New vulnerabilities surface daily, and attackers move fast. Reviewing and patching early keeps your systems resilient.

Here are this week’s most critical flaws to check first — CVE-2026-25049 (n8n), CVE-2026-0709 (Hikvision Wireless Access Point), CVE-2026-23795 (Apache Syncope), CVE-2026-1591, CVE-2026-1592 (Foxit PDF Editor Cloud), CVE-2025-67987 (Quiz and Survey Master plugin), CVE-2026-24512 (ingress-nginx), CVE-2026-1207, CVE-2026-1287, CVE-2026-1312 (Django), CVE-2026-1861, CVE-2026-1862 (Google Chrome), CVE-2026-20098 (Cisco Meeting Management), CVE-2026-20119 (Cisco TelePresence CE Software and RoomOS), CVE-2026-0630, CVE-2026-0631, CVE-2026-22221, CVE-2026-22222, CVE-2026-22223, CVE-2026-22224, CVE-2026-22225, CVE-2026-22226, 22227, CVE-2026-22229 (TP-Link Archer BE230), CVE-2026-22548 (F5 BIG-IP), CVE-2026-1642 (F5 NGINX OSS and NGINX Plus), and CVE-2025-6978 (Arista NG Firewall).

📰 Around the Cyber World

  • OpenClaw is Riddled With Security Concerns — The skyrocketing popularity of OpenClaw (née Clawdbot and Moltbot) has attracted cybersecurity worries. With artificial intelligence (AI) agents having entrenched access to sensitive data, giving "bring-your-own-AI" systems privileged access to applications and the user conversations carries significant security risks. The architectural concentration of power means AI agents are designed to store secrets and execute actions – features that are all essential to meet their objectives. But when they are misconfigured, the very design that serves as their backbone can collapse multiple security boundaries at once. Pillar Security has warned that attackers are actively scanning exposed OpenClaw gateways on port 18789. "The traffic included prompt injection attempts targeting the AI layer -- but the more sophisticated attackers skipped the AI entirely," researchers Ariel Fogel and Eilon Cohen said. "They connected directly to the gateway's WebSocket API and attempted authentication bypasses, protocol downgrades to pre-patch versions, and raw command execution." Attack surface management firm Censys said it identified 21,639 exposed OpenClaw instances as of January 31, 2026. "Clawdbot represents the future of personal AI, but its security posture relies on an outdated model of endpoint trust," said Hudson Rock. "Without encryption-at-rest or containerization, the 'Local-First' AI revolution risks becoming a goldmine for the global cybercrime economy."
  • Prompt Injection Risks in MoltBook — A new analysis of MoltBook posts has revealed several critical risks, including "506 prompt injection attacks targeting AI readers, sophisticated social engineering tactics exploiting agent psychology," anti-human manifestos receiving hundreds of thousands of upvotes, and unregulated cryptocurrency activity comprising 19.3% of all content," according to Simula Research Laboratory. British programmer Simon Willison, who coined the term prompt injection in 2022, has described Moltbook as the "most interesting place on the internet right now." Vibe, coded by its creator, Matt Schlicht, Moltbook marks the first time AI agents built atop the OpenClaw platform can communicate with each other, post, comment, upvote, and create sub-communities without human intervention. While Moltbook is pitched as a way to offload tedious tasks, equally apparent are the security pitfalls, given the deep access the AI agents have to personal information. Prompt injection attacks hidden in natural language text can instruct an AI agent to reveal private data.
  • Malicious npm Packages Use EtherHiding Technique — Cybersecurity researchers have discovered a set of 54 malicious npm packages targeting Windows systems that use an Ethereum smart contract as a dead drop resolver to fetch a command-and-control (C2) server to receive next-stage payloads. This technique, codename EtherHiding, is notable because it makes takedown efforts more difficult, allowing the operators to modify the infrastructure without making any changes to the malware itself."The malware includes environment checks designed to evade sandbox detection, specifically targeting Windows systems with 5 or more CPUs," Veracode said. Other capabilities of the malware include system profiling, registry persistence via a COM hijacking technique, and a loader to execute the second-stage payload delivered by the C2. The C2 server is currently inactive, making it unclear what the exact motives are.
  • Ukraine Rolls Out Verification for Starlink — Ukraine has rolled out a verification system for Starlink satellite internet terminals used by civilians and the military after confirming that Russian forces have begun installing the technology on attack drones. The Ukrainian government has introduced a mandatory allowlist for Starlink terminals, as part of which only verified and registered devices will be allowed to operate in the country. All other terminals will be automatically disconnected.
  • Cellebrite Tech Used Against Jordanian Civil Society — The Jordanian government used Cellebrite digital forensic software to extract data from phones belonging to at least seven Jordanian activists and human rights defenders between late 2023 and mid-2025, according to a new report published by the Citizen Lab. The extractions occurred while the activists were being interrogated or detained by authorities. Some of the recent victims were activists who organized protests in support of Palestinians in Gaza. Citizen Lab said it uncovered iOS and Android indicators of compromise tied to Cellebrite in all four phones it forensically analyzed. It's suspected that authorities have been using Cellebrite since at least 2020.
  • ShadowHS, a Fileless Linux Post‑Exploitation Framework — Threat hunters have discovered a stealthy Linux framework that runs entirely in memory for covert, post-exploitation control. The activity has been codenamed ShadowHS by Cyble. "Unlike conventional Linux malware that emphasizes automated propagation or immediate monetization, this activity prioritizes stealth, operator safety, and long‑term interactive control over compromised systems," the company said. "The loader decrypts and executes its payload exclusively in memory, leaving no persistent binary artifacts on disk. Once active, the payload exposes an interactive post‑exploitation environment that aggressively fingerprints host security controls, enumerates defensive tooling, and evaluates prior compromise before enabling higher‑risk actions." The framework supports various dormant modules that support credential access, lateral movement, privilege escalation, cryptomining, memory inspection, and data exfiltration.
  • Incognito Operator Gets 30 Years in Prison — Rui-Siang Lin, 24, was sentenced to 30 years in U.S. prison for his role as an administrator of Incognito Market, which facilitated millions of dollars' worth of drug sales. Lin ran Incognito Market from January 2022 to March 2024 under the moniker "Pharaoh," enabling the sale of more than $105 million of narcotics. Incognito Market allowed about 1,800 vendors to sell to a customer base exceeding 400,000 accounts. In all, the operation facilitated about 640,000 narcotics transactions. Lin was arrested in May 2024, and he pleaded guilty to the charges later that December. "While Lin made millions, his offenses had devastating consequences," said U.S. Attorney Jay Clayton. "He is responsible for at least one tragic death, and he exacerbated the opioid crisis and caused misery for more than 470,000 narcotics users and their families."
  • INC Ransomware Group's Slip-Up Proves Costly — Cybersecurity firm Cyber Centaurs said it has helped a dozen victims recover their data after breaking into the backup server of the INC Ransomware group, where the stolen data was dumped. The INC group started operations in 2023 and has listed more than 100 victims on its dark web leak site. "While INC Ransomware demonstrated careful planning, hands-on execution, and effective use of legitimate tools (LOTL), they also left behind infrastructure and artifacts that reflected reuse, assumption, and oversight," the company said. "In this instance, those remnants, particularly related to Restic, created an opening that would not normally exist in a typical ransomware response."
  • Xinbi Marketplace Accounts for $17.9B in Total Volume — A new analysis from TRM Labs has revealed that the illicit Telegram-based guarantee marketplace known as Xinbi has continued to remain active, while those of its competitors, Haowang (aka HuiOne) Guarantee and Tudou Guarantee, dropped by 100% and 74%, respectively. Wallets associated with Xinbi have received approximately $8.9 billion and processed roughly $17.9 billion in total transaction volume. "Guarantee services attract illicit actors by offering informal escrow, wallet services, and marketplaces with minimal due diligence, making them a critical laundering facilitator layer," the blockchain intelligence firm said.
  • XBOW Uncovers 2 IDOR Flaws in Spree — AI-powered offensive security platform discovered two previously unknown Insecure Direct Object Reference (IDOR) vulnerabilities (CVE-2026-22588 and CVE-2026-22589) in Spree, an open-source e-commerce platform, that allows an attacker to access guest address information without supplying valid credentials or session cookies and retrieve other users' address information by editing an existing, legitimate order. The issues were fixed in Spree version 5.2.5.

🎥 Cybersecurity Webinars

  • Cloud Forensics Is Broken — Learn From Experts What Actually Works: Cloud attacks move fast and often leave little usable evidence behind. This webinar explains how modern cloud forensics works—using host-level data and AI to reconstruct attacks faster, understand what really happened, and improve incident response across SOC teams.
  • Post-Quantum Cryptography: How Leaders Secure Data Before Quantum Breaks It: Quantum computing is advancing fast, and it could eventually break today’s encryption. Attackers are already collecting encrypted data now to decrypt later when quantum power becomes available. This webinar explains what that risk means, how post-quantum cryptography works, and what security leaders can do today—using practical strategies and real deployment models—to protect sensitive data before quantum threats become reality.
  • YARA Rule Skill (Community Edition): It is a tool that helps an AI agent write, review, and improve YARA detection rules. It analyzes rules for logic errors, weak strings, and performance problems using established best practices. Security teams use it to strengthen malware detection, improve rule accuracy, and ensure rules run efficiently with fewer false positives.
  • Anamnesis: It is a research framework that tests how LLM agents turn a vulnerability report and a small trigger PoC into working exploits under real defenses (ASLR, NX, RELRO, CFI, shadow stack, sandboxing). It runs controlled experiments to see what bypasses work, how consistent the results are across runs, and what that implies for practical risk.

Disclaimer: These tools are provided for research and educational use only. They are not security-audited and may cause harm if misused. Review the code, test in controlled environments, and comply with all applicable laws and policies.

Conclusion

The takeaway this week is simple: exposure is growing faster than visibility. Many risks aren’t coming from unknown threats, but from known systems being used in unexpected ways. Security teams are being forced to watch not just networks and endpoints, but ecosystems, integrations, and automated workflows.

What matters now is readiness across layers — software, supply chains, AI tooling, infrastructure, and user platforms. Attackers are operating across all of them at once, blending old techniques with new access paths.

Staying secure is no longer about fixing one flaw at a time. It’s about understanding how every connected system can influence the next — and closing those gaps before they’re chained together.

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/l9n2fXb
via IFTTT

How Top CISOs Solve Burnout and Speed up MTTR without Extra Hiring

Why do SOC teams keep burning out and missing SLAs even after spending big on security tools? Routine triage piles up, senior specialists get dragged into basic validation, and MTTR climbs, while stealthy threats still find room to slip through. Top CISOs have realized the solution isn’t hiring more people or stacking yet another tool onto the workflow, but giving their teams faster, clearer behavior evidence from the start.

Here’s how they’re breaking the cycle and speeding up response without extra hiring.

Starting with Sandbox-First Investigation to Cut MTTR at the Source

The fastest way to reduce MTTR is to remove the delays baked into investigations. Static verdicts and fragmented workflows force analysts to guess, escalate, and re-check the same alerts, which drives burnout and slows containment.

That’s why top CISOs are making sandbox execution the first step.

With an interactive sandbox like ANY.RUN, teams can detonate suspicious files and links in an isolated environment and see real behavior immediately, so decisions happen early, not after hours of back-and-forth.

Check the real case of a phishing attack exposed in 33 seconds

Full phishing attack chain analyzed inside an interactive sandbox in real time, revealing a fake Microsoft login page

Why CISOs prioritize sandbox-first workflows:

  • MTTR drops because clarity comes in minutes: Runtime evidence replaces assumptions, so qualification and containment start faster.
  • Fewer escalations, less senior time wasted: Tier-1 validates alerts with behavior proof, driving up to a 30% reduction in Tier-1 → Tier-2 escalations and keeping specialists focused on real incidents.
  • Lower burnout through fewer manual steps: Less “chasing context,” fewer repeats, more predictable workloads.

Save up to 21 minutes per case by making alert qualification evidence-driven, freeing senior time, reducing escalations, and lowering incident cost.

Reduce MTTR in your SOC

Automating Triage to Increase SOC Output and Protect SLAs

After early clarity comes scale. Even with strong visibility, SOCs slow down if every alert still demands manual effort. By automating triage, CISOs unlock measurable gains across response speed, workload balance, and SOC efficiency:

  • Faster investigations, faster containment: Automated execution shortens the gap between alert and decision, directly reducing MTTR.
  • Fewer errors under pressure: Consistent handling of routine steps lowers risk during high-volume periods.
  • More impact from the same team: Junior staff resolve more alerts independently, reducing escalation load on senior specialists.
  • Better use of senior expertise: Experts spend time on real incidents, not revalidating basic alerts.
  • Higher SOC efficiency overall: Less fatigue, fewer handoffs, and steadier SLA performance.

In real phishing and malware campaigns, attackers often hide malicious behavior behind QR codes, redirect chains, or CAPTCHA gates. Manually replaying these steps costs time and attention, exactly what SOC teams don’t have.

Phishing attack with QR code exposed with the help of automation and interactivity, saving time and resources

With automated sandbox execution, those steps are handled instantly. Hidden URLs are opened, gating is passed, and malicious behavior is exposed within seconds, without waiting, retries, or workarounds.

Malicious URL revealed inside ANY.RUN sandbox

Analysts can still step in live at any moment, inspect processes, or trigger additional actions, but they’re no longer burdened by repetitive setup work.

Giving the team this dual approach, automation plus interactivity, means the following for CISOs: faster response, lower workload, and more SOC capacity, without adding headcount. Automation not only speeds up investigations but also stabilizes the team behind them.

Reducing Burnout by Removing Decision Fatigue

Burnout in the SOC isn’t caused by a lack of commitment. It’s caused by constant high-stakes decisions made with incomplete information. When teams spend their shifts deciding whether alerts are “probably fine” or “worth escalating,” stress compounds quickly.

Sandbox-first and automated triage workflows change that dynamic.

Instead of guessing, teams work from observable behavior. They get structured outputs they can act on immediately: behavior timelines, extracted IOCs, mapped TTPs, and clear, shareable reports that make handoffs fast and decisions defensible. When time is tight, built-in AI assistance helps summarize what matters, so analysts spend less energy interpreting noise and more time closing cases.

ANY.RUN’s auto-generated reports for fast and efficient sharing

For CISOs, the impact shows up in several ways:

  • More predictable workloads: Investigations follow consistent paths instead of expanding unpredictably.
  • Lower fatigue across shifts: Less manual replay, fewer tool switches, and fewer stalled cases.
  • Stronger team retention: Teams stay engaged when work leads to confident outcomes, not constant uncertainty.

When decision fatigue drops, MTTR follows. The SOC becomes calmer, more focused, and easier to run, not because threats are simpler, but because the workflow is.

What CISOs Are Reporting After Moving to Evidence-Based Response

After shifting to sandbox-first investigation, automated triage, and built-in collaboration, CISOs are using ANY.RUN report consistent improvements in how sustainably their SOCs operate.

Across teams, leaders are seeing:

  • Up to 3× increase in SOC output: More alerts handled with the same team, driven by faster qualification and fewer repeat steps.
  • MTTR reduced by up to 50%: Early execution evidence shortens investigations and accelerates containment.
  • Up to 30% fewer Tier-1 → Tier-2 escalations: Clear behavior proof enables junior staff to resolve cases confidently.
  • Higher detection rates for evasive threats: 90% of organizations report higher detection rates, particularly for stealthy and evasive threats.
  • Lower burnout and steadier SLA performance: Predictable workflows replace constant firefighting, easing pressure across shifts.

These numbers reflect real operational gains: faster response without extra hiring, better use of senior expertise, and a SOC that scales without exhausting the people running it.

The best SOCs don’t wait. They respond fast, protect their teams from burnout, and stay steady even when alert volume spikes. But that only happens when the investigation workflow is built for speed and sustainability.

By making sandbox execution the first step, automating repetitive triage, and keeping investigation context shared and controlled, top CISOs are cutting MTTR without adding headcount.

ANY.RUN brings that foundation together in one place. It gives your team the visibility, automation, and enterprise-grade control needed to reduce delays, lower escalation pressure, and keep operations stable.

Trusted by CISOs to deliver:

  • Faster MTTR through early behavior evidence
  • Lower risk of business disruption and costly incidents
  • Fewer unnecessary escalations and cleaner handoffs
  • Less burnout and better team retention
  • Stronger ROI from existing security investments

Ready to see what this looks like in your environment?

Request ANY.RUN access to build a faster, more sustainable SOC on evidence, control, and repeatable workflows, without adding headcount.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/gJ1lXmt
via IFTTT

Bloody Wolf Targets Uzbekistan, Russia Using NetSupport RAT in Spear-Phishing Campaign

The threat actor known as Bloody Wolf has been linked to a campaign targeting Uzbekistan and Russia to infect systems with a remote access trojan known as NetSupport RAT.

Cybersecurity vendor Kaspersky is tracking the activity under the moniker Stan Ghouls. The threat actor is known to be active since at least 2023, orchestrating spear-phishing attacks against manufacturing, finance, and IT sectors in Russia, Kyrgyzstan, Kazakhstan, and Uzbekistan.

The campaign is estimated to have claimed about 50 victims in Uzbekistan, with 10 devices in Russia also impacted. Other infections have been identified to a lesser degree in Kazakhstan, Turkey, Serbia, and Belarus. Infection attempts have also been recorded on devices within government organizations, logistics companies, medical facilities, and educational institutions.

"Given Stan Ghouls' targeting of financial institutions, we believe their primary motive is financial gain," Kaspersky noted. "That said, their heavy use of RATs may also hint at cyber espionage."

The misuse of NetSupport, a legitimate remote administration tool, is a departure for the threat actor, which previously leveraged STRRAT (aka Strigoi Master) in its attacks. In November 2025, Group-IB documented phishing attacks aimed at entities in Kyrgyzstan to distribute the tool.

The attack chains are fairly straightforward in that phishing emails loaded with malicious PDF attachments are used as a launchpad to trigger the infection. The PDF documents embed links that, when clicked, lead to the download of a malicious loader that handles multiple tasks -

  • Display a fake error message to give the impression to the victim that the application can't run on their machine.
  • Check if the number of previous RAT installation attempts is less than three. If the number has reached or exceeded the limit, the loader throws an error message: "Attempt limit reached. Try another computer."
  • Download the NetSupport RAT from one of the several external domains and launch it.
  • Ensure NetSupport RAT's persistence by configuring an autorun script in the Startup folder, adding a NetSupport launch script ("run.bat") to the Registry's autorun key, and creating a scheduled task to trigger the execution of the same batch script.

Kaspersky said it also identified Mirai botnet payloads staged on infrastructure associated with Bloody Wolf, raising the possibility that the threat actor may have expanded its malware arsenal to target IoT devices.

"With over 60 targets hit, this is a remarkably high volume for a sophisticated targeted campaign," the company concluded. "It points to the significant resources these actors are willing to pour into their operations."

The disclosure coincides with a number of cyber campaigns targeting Russian organizations, including those conducted by ExCobalt, which has leveraged known security flaws and credentials stolen from contractors to obtain initial access to target networks. Positive Technologies described the adversary as one of the "most dangerous groups" attacking Russian entities.

The attacks are characterized by the use of various tools, along with attempts to siphon Telegram credentials and message history from the compromised hosts and Outlook Web Access credentials by injecting malicious code into the login page -

  • CobInt, a known backdoor used by the group.
  • Lockers such as Babuk and LockBit.
  • PUMAKIT, a kernel rootkit to escalate privileges, hide files and directories, and conceal itself from system tools, along with prior iterations known as Facefish (February 2021), Kitsune (February 2022), and Megatsune (November 2023). The use of Kitsune was also linked to a threat cluster known as Sneaky Wolf (aka Sneaking Leprechaun) by BI.ZONE.
  • Octopus, a Rust-based toolkit that's used to elevate privileges in a compromised Linux system.

"The group changed the tactics of initial access, shifting the focus of attention from the exploitation of 1-day vulnerabilities in corporate services available from the internet (e.g., Microsoft Exchange) to the penetration of the infrastructure of the main target through contractors," Positive Technologies said.

State institutions, scientific enterprises, and IT organizations in Russia have also been targeted by a previously unknown threat actor known as Punishing Owl that has resorted to stealing and leaking data on the dark web. The group, suspected to be a politically motivated hacktivist entity, has been active since December 2025, with one of its social media accounts administered from Kazakhstan.

The attacks utilize phishing emails with a password-protected ZIP archive, which, when opened, contains a Windows shortcut (LNK) masquerading as a PDF document. Opening the LNK file results in the execution of a PowerShell command to download a stealer named ZipWhisper from a remote server to harvest sensitive data and upload it to the same server.

Another threat cluster that has trained its sights on Russia and Belarus is Vortex Werewolf. The end goal of the attacks is to deploy Tor and OpenSSH so as to facilitate persistent remote access. The campaign was previously exposed in November 2025 by Cyble and Seqrite Labs, with the latter calling the campaign Operation SkyCloak.



from The Hacker News https://ift.tt/ruw6h5B
via IFTTT

TeamPCP Worm Exploits Cloud Infrastructure to Build Criminal Infrastructure

Cybersecurity researchers have called attention to a "massive campaign" that has systematically targeted cloud native environments to set up malicious infrastructure for follow-on exploitation.

The activity, observed around December 25, 2025, and described as "worm-driven," leveraged exposed Docker APIs, Kubernetes clusters, Ray dashboards, and Redis servers, along with the recently disclosed React2Shell (CVE-2025-55182, CVSS score: 10.0) vulnerability. The campaign has been attributed to a threat cluster known as TeamPCP (aka DeadCatx3, PCPcat, PersyPCP, and ShellForce).

TeamPCP is known to be active since at least November 2025, with the first instance of Telegram activity dating back to July 30, 2025. The TeamPCP Telegram channel currently has over 700 members, where the group publishes stolen data from diverse victims across Canada, Serbia, South Korea, the U.A.E., and the U.S. Details of the threat actor were first documented by Beelzebub in December 2025 under the name Operation PCPcat.

"The operation's goals were to build a distributed proxy and scanning infrastructure at scale, then compromise servers to exfiltrate data, deploy ransomware, conduct extortion, and mine cryptocurrency," Flare security researcher Assaf Morag said in a report published last week.

TeamPCP is said to function as a cloud-native cybercrime platform, leveraging misconfigured Docker APIs, Kubernetes APIs, Ray dashboards, Redis servers, and vulnerable React/Next.js applications as main infection pathways to breach modern cloud infrastructure to facilitate data theft and extortion.

In addition, the compromised infrastructure is misused for a wide range of other purposes, ranging from cryptocurrency mining and data hosting to proxy and command-and-control (C2) relays.

Rather than employing any novel tradecraft, TeamPCP leans on tried-and-tested attack techniques, such as existing tools, known vulnerabilities, and prevalent misconfigurations, to build an exploitation platform that automates and industrializes the whole process. This, in turn, transforms the exposed infrastructure into a "self-propagating criminal ecosystem," Flare noted.

Successful exploitation paves the way for the deployment of next-stage payloads from external servers, including shell- and Python-based scripts that seek out new targets for further expansion. One of the core components is "proxy.sh," which installs proxy, peer-to-peer (P2P), and tunneling utilities, and delivers various scanners to continuously search the internet for vulnerable and misconfigured servers.

"Notably, proxy.sh performs environment fingerprinting at execution time," Morag said. "Early in its runtime, it checks whether it is running inside a Kubernetes cluster."

"If a Kubernetes environment is detected, the script branches into a separate execution path and drops a cluster-specific secondary payload, indicating that TeamPCP maintains distinct tooling and tradecraft for cloud-native targets rather than relying on generic Linux malware alone."

A brief description of the other payloads is as follows -

  • scanner.py, which is designed to find misconfigured Docker APIs and Ray dashboards by downloading Classless Inter-Domain Routing (CIDR) lists from a GitHub account named "DeadCatx3," while also featuring options to run a cryptocurrency miner ("mine.sh").
  • kube.py, which includes Kubernetes-specific functionality to conduct cluster credential harvesting and API-based discovery of resources such as pods and namespaces, followed by dropping "proxy.sh" into accessible pods for broader propagation and setting up a persistent backdoor by deploying a privileged pod on every node that mounts the host.
  • react.py, which is designed to exploit the React flaw (CVE-2025-29927) to achieve remote command execution at scale.
  • pcpcat.py, which is designed to discover exposed Docker APIs and Ray dashboards across large IP address ranges and automatically deploy a malicious container or job that executes a Base64-encoded payload.

Flare said the C2 server node located at 67.217.57[.]240 has also been linked to the operation of Sliver, an open-source C2 framework that's known to be abused by threat actors for post-exploitation purposes.

Data from the cybersecurity company shows that the threat actors mainly single out Amazon Web Services (AWS) and Microsoft Azure environments. The attacks are assessed to be opportunistic in nature, primarily targeting infrastructure that supports its goals rather than going after specific industries. The result is that organizations that run such infrastructure become "collateral victims" in the process. 

"The PCPcat campaign demonstrates a full lifecycle of scanning, exploitation, persistence, tunneling, data theft, and monetization built specifically for modern cloud infrastructure," Morag said. "What makes TeamPCP dangerous is not technical novelty, but their operational integration and scale. Deeper analysis shows that most of their exploits and malware are based on well-known vulnerabilities and lightly modified open-source tools."

"At the same time, TeamPCP blends infrastructure exploitation with data theft and extortion. Leaked CV databases, identity records, and corporate data are published through ShellForce to fuel ransomware, fraud, and cybercrime reputation building. This hybrid model allows the group to monetize both compute and information, giving it multiple revenue streams and resilience against takedowns."



from The Hacker News https://ift.tt/RVPcqh8
via IFTTT

BeyondTrust Fixes Critical Pre-Auth RCE Vulnerability in Remote Support and PRA

BeyondTrust has released updates to address a critical security flaw impacting Remote Support (RS) and Privileged Remote Access (PRA) products that, if successfully exploited, could result in remote code execution.

"BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) contain a critical pre-authentication remote code execution vulnerability," the company said in an advisory released February 6, 2026.

"By sending specially crafted requests, an unauthenticated remote attacker may be able to execute operating system commands in the context of the site user."

The vulnerability, categorized as an operating system command injection, has been assigned the CVE identifier CVE-2026-1731. It's rated 9.9 on the CVSS scoring system.

BeyondTrust said successful exploitation of the shortcoming could allow an unauthenticated remote attacker to execute operating system commands in the context of the site user, resulting in unauthorized access, data exfiltration, and service disruption.

The issue affects the following versions -

  • Remote Support versions 25.3.1 and prior
  • Privileged Remote Access versions 24.3.4 and prior

It has been patched in the following versions -

  • Remote Support - Patch BT26-02-RS, 25.3.2 and later
  • Privileged Remote Access - Patch BT26-02-PRA, 25.1.1 and later

The company is also urging self-hosted customers of Remote Support and Privileged Remote Access to manually apply the patch if their instance is not subscribed to automatic updates. Those running a Remote Support version older than 21.3 or on Privileged Remote Access older than 22.1 are also required to upgrade to a newer version to apply this patch.

"Self-hosted customers of PRA may also upgrade to 25.1.1 or a newer version to remediate this vulnerability," it added.

According to security researcher and Hacktron AI co-founder Harsh Jaiswal, the vulnerability was discovered on January 31, 2026, through an artificial intelligence (AI)-enabled variant analysis, adding that it found about 11,000 instances exposed to the internet. Additional details of the flaw have been withheld to give users time to apply the patches.

"About ~8,500 of those are on-prem deployments, which remain potentially vulnerable if patches aren't applied," Jaiswal said.

With security flaws in BeyondTrust Privileged Remote Access and Remote Support having come under active exploitation in the past, it's essential that users update to the latest version as soon as possible for optimal protection.



from The Hacker News https://ift.tt/nwJEI0c
via IFTTT

Sunday, February 8, 2026

The Economics of Software Developers

If someone walked into your office today and asked you to build a framework for how to value software development, what would you think about it? 

SHOW: 1000

SHOW TRANSCRIPT: The Cloudcast #1000 Transcript

SHOW VIDEO: https://youtube.com/@TheCloudcastNET 

NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS" 

SHOW NOTES:


HOW SHOULD SOMEONE THINK ABOUT THE ECONOMICS OF SW DEV IN 2026?

  • If someone walked into your office today and asked you to build a framework for how to value software development, how would you think about it? 

FEEDBACK?



from The Cloudcast (.NET) https://ift.tt/NJgxAHc
via IFTTT

OpenClaw Integrates VirusTotal Scanning to Detect Malicious ClawHub Skills

OpenClaw (formerly Moltbot and Clawdbot) has announced that it's partnering with Google-owned VirusTotal to scan skills that are being uploaded to ClawHub, its skill marketplace, as part of broader efforts to bolster the security of the agentic ecosystem.

"All skills published to ClawHub are now scanned using VirusTotal's threat intelligence, including their new Code Insight capability," OpenClaw's founder Peter Steinberger, along with Jamieson O'Reilly and Bernardo Quintero said. "This provides an additional layer of security for the OpenClaw community."

The process essentially entails creating a unique SHA-256 hash for every skill and cross checking it against VirusTotal's database for a match. If it's not found, the skill bundle is uploaded to the malware scanning tool for further analysis using VirusTotal Code Insight.

Skills that have a "benign" Code Insight verdict are automatically approved by ClawHub, while those marked suspicious are flagged with a warning. Any skill that's deemed malicious is blocked from download. OpenClaw also said all active skills are re-scanned on a daily basis to detect scenarios where a previously clean skill becomes malicious.

That said, OpenClaw maintainers also cautioned that VirusTotal scanning is "not a silver bullet" and that there is a possibility that some malicious skills that use a cleverly concealed prompt injection payload may slip through the cracks.

In addition to the VirusTotal partnership, the platform is expected to publish a comprehensive threat model, public security roadmap, formal security reporting process, as well as details about the security audit of its entire codebase.

The development comes in the aftermath of reports that found hundreds of malicious skills on ClawHub, prompting OpenClaw to add a reporting option that allows signed-in users to flag a suspicious skill. Multiple analyses have uncovered that these skills masquerade as legitimate tools, but, under the hood, they harbor malicious functionality to exfiltrate data, inject backdoors for remote access, or install stealer malware.

"AI agents with system access can become covert data-leak channels that bypass traditional data loss prevention, proxies, and endpoint monitoring," Cisco noted last week. "Second, models can also become an execution orchestrator, wherein the prompt itself becomes the instruction and is difficult to catch using traditional security tooling."

The recent viral popularity of OpenClaw, the open-source agentic artificial intelligence (AI) assistant, and Moltbook, an adjacent social network where autonomous AI agents built atop OpenClaw interact with each other in a Reddit-style platform, has raised security concerns.

While OpenClaw functions as an automation engine to trigger workflows, interact with online services, and operate across devices, the entrenched access given to skills, coupled with the fact that they can process data from untrusted sources, can open the door to risks like malware and prompt injection.

In other words, the integrations, while convenient, significantly broaden the attack surface and expand the set of untrusted inputs the agent consumes, turning it into an "agentic trojan horse" for data exfiltration and other malicious actions. Backslash Security has described OpenClaw as an "AI With Hands."

"Unlike traditional software that does exactly what code tells it to do, AI agents interpret natural language and make decisions about actions," OpenClaw noted. "They blur the boundary between user intent and machine execution. They can be manipulated through language itself."

OpenClaw also acknowledged that the power wielded by skills – which are used to extend the capabilities of an AI agent, such as controlling smart home devices to managing finances – can be abused by bad actors, who can leverage the agent's access to tools and data to exfiltrate sensitive information, execute unauthorized commands, send messages on the victim's behalf, and even download and run additional payloads without their knowledge or consent.

What's more, with OpenClaw being increasingly deployed on employee endpoints without formal IT or security approval, the elevated privileges of these agents can further enable shell access, data movement, and network connectivity outside standard security controls, creating a new class of Shadow AI risk for enterprises.

"OpenClaw and tools like it will show up in your organization whether you approve them or not," Astrix Security researcher Tomer Yahalom said. "Employees will install them because they're genuinely useful. The only question is whether you'll know about it."

Some of the glaring security issues that have come to the fore in recent days are below -

  • A now-fixed issue identified in earlier versions that could cause proxied traffic to be misclassified as local, bypassing authentication for some internet-exposed instances.
  • "OpenClaw stores credentials in cleartext, uses insecure coding patterns including direct eval with user input, and has no privacy policy or clear accountability," OX Security's Moshe Siman Tov Bustan and Nir Zadok said. "Common uninstall methods leave sensitive data behind – and fully revoking access is far harder than most users realize."
  • A zero-click attack that abuses OpenClaw's integrations to plant a backdoor on a victim's endpoint for persistent control when a seemingly harmless document is processed by the AI agent, resulting in the execution of an indirect prompt injection payload that allows it to respond to messages from an attacker-controlled Telegram bot.
  • An indirect prompt injection embedded in a web page, which, when parsed as part of an innocuous prompt asking the large language model (LLM) to summarize the page's contents, causes OpenClaw to append an attacker-controlled set of instructions to the ~/.openclaw/workspace/HEARTBEAT.md file and silently await further commands from an external server.
  • A security analysis of 3,984 skills on the ClawHub marketplace has found that 283 skills, about 7.1% of the entire registry, contain critical security flaws that expose sensitive credentials in plaintext through the LLM's context window and output logs.
  • A report from Bitdefender has revealed that malicious skills are often cloned and re-published at scale using small name variations, and that payloads are staged through paste services such as glot.io and public GitHub repositories.
  • A now-patched one-click remote code execution vulnerability affecting OpenClaw that could have allowed an attacker to trick a user into visiting a malicious web page that could cause the Gateway Control UI to leak the OpenClaw authentication token over a WebSocket channel and subsequently use it to execute arbitrary commands on the host.
  • OpenClaw's gateway binds to 0.0.0.0:18789 by default, exposing the full API to any network interface. Per data from Censys, there are over 30,000 exposed instances accessible over the internet as of February 8, 2026, although most require a token value in order to view and interact with them.
  • In a hypothetical attack scenario, a prompt injection payload embedded within a specifically crafted WhatsApp message can be used to exfiltrate ".env" and "creds.json" files, which store credentials, API keys, and session tokens for connected messaging platforms from an exposed OpenClaw instance.
  • An misconfigured Supabase database belonging to Moltbook that was left exposed in client-side JavaScript, making secret API keys of every agent registered on the site freely accessible, and allowing full read and write access to platform data. According to Wiz, the exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.
  • Threat actors have been found exploiting Moltbook's platform mechanics to amplify reach and funnel other agents toward malicious threads that contain prompt injections to manipulate their behavior and extract sensitive data or steal cryptocurrency.
  • "Moltbook may have inadvertently also created a laboratory in which agents, which can be high-value targets, are constantly processing and engaging with untrusted data, and in which guardrails aren’t set into the platform – all by design," Zenity Labs said.

"The first, and perhaps most egregious, issue is that OpenClaw relies on the configured language model for many security-critical decisions," HiddenLayer researchers Conor McCauley, Kasimir Schulz, Ryan Tracey, and Jason Martin noted. "Unless the user proactively enables OpenClaw's Docker-based tool sandboxing feature, full system-wide access remains the default."

Among other architectural and design problems identified by the AI security company are OpenClaw's failure to filter out untrusted content containing control sequences, ineffective guardrails against indirect prompt injections, modifiable memories and system prompts that persist into future chat sessions, plaintext storage of API keys and session tokens, and no explicit user approval before executing tool calls.

In a report published last week, Persmiso Security argued that the security of the OpenClaw ecosystem is much more crucial than app stores and browser extension marketplaces owing to the agents' extensive access to user data.

"AI agents get credentials to your entire digital life," security researcher Ian Ahl pointed out. "And unlike browser extensions that run in a sandbox with some level of isolation, these agents operate with the full privileges you grant them."

"The skills marketplace compounds this. When you install a malicious browser extension, you're compromising one system. When you install a malicious agent skill, you're potentially compromising every system that agent has credentials for."

The long list of security issues associated with OpenClaw has prompted China's Ministry of Industry and Information Technology to issue an alert about misconfigured instances, urging users to implement protections to secure against cyber attacks and data breaches, Reuters reported.

"When agent platforms go viral faster than security practices mature, misconfiguration becomes the primary attack surface," Ensar Seker, CISO at SOCRadar, told The Hacker News via email. "The risk isn't the agent itself; it’s exposing autonomous tooling to public networks without hardened identity, access control, and execution boundaries."

"What's notable here is that the Chinese regulator is explicitly calling out configuration risk rather than banning the technology. That aligns with what defenders already know: agent frameworks amplify both productivity and blast radius. A single exposed endpoint or overly permissive plugin can turn an AI agent into an unintentional automation layer for attackers."



from The Hacker News https://ift.tt/3blEWQ9
via IFTTT

Saturday, February 7, 2026

German Agencies Warn of Signal Phishing Targeting Politicians, Military, Journalists

Germany's Federal Office for the Protection of the Constitution (aka Bundesamt für Verfassungsschutz or BfV) and Federal Office for Information Security (BSI) have issued a joint advisory warning of a malicious cyber campaign undertaken by a likely state-sponsored threat actor that involves carrying out phishing attacks over the Signal messaging app.

"The focus is on high-ranking targets in politics, the military, and diplomacy, as well as investigative journalists in Germany and Europe," the agencies said. "Unauthorized access to messenger accounts not only allows access to confidential private communications but also potentially compromises entire networks."

A noteworthy aspect of the campaign is that it does not involve the distribution of malware or the exploitation of any security vulnerability in the privacy-focused messaging platform. Rather, the end goal is to weaponize its legitimate features to obtain covert access to a victim's chats, along with their contact lists.

The attack chain is as follows: the threat actors masquerade as "Signal Support" or a support chatbot named "Signal Security ChatBot" to initiate direct contact with prospective targets, urging them to provide a PIN or verification code received via SMS, or risk facing data loss.

Should the victim comply, the attackers can register the account and gain access to the victim's profile, settings, contacts, and block list through a device and mobile phone number under their control. While the stolen PIN does not enable access to the victim's past conversations, a threat actor can use it to capture incoming messages and send messages posing as the victim.

That target user, who has by now lost access to their account, is then instructed by the threat actor disguised as the support chatbot to register for a new account.

There also exists an alternative infection sequence that takes advantage of the device linking option to trick victims into scanning a QR code, thereby granting the attackers access to the victim's account, including their messages for the last 45 days, on a device managed by them.

In this case, however, the targeted individuals continue to have access to their account, little realizing that their chats and contact lists are now also exposed to the threat actors. 

The security authorities warned that while the current focus of the campaign appears to be Signal, the attack can also be extended to WhatsApp since it also incorporates similar device linking and PIN features as part of two-step verification.

"Successful access to messenger accounts not only allows confidential individual communications to be viewed, but also potentially compromises entire networks via group chats," BfV and BSI said.

While it's not known who is behind the activity, similar attacks have been orchestrated by multiple Russia-aligned threat clusters tracked as Star Blizzard, UNC5792 (aka UAC-0195), and UNC4221 (aka UAC-0185), per reports from Microsoft and Google Threat Intelligence Group early last year.

In December 2025, Gen Digital also detailed another campaign codenamed GhostPairing, where cybercriminals have resorted to the device linking feature on WhatsApp to seize control of accounts to likely impersonate users or commit fraud.

To stay protected against the threat, users are advised to refrain from engaging with support accounts and entering their Signal PIN as a text message. A crucial line of defense is to enable Registration Lock, which prevents unauthorized users from registering a phone number on another device. It's also advised to periodically review the list of linked devices and remove any unknown devices.

The development comes as the Norwegian government accused the Chinese-backed hacking groups, including Salt Typhoon, of breaking into several organizations in the country by exploiting vulnerable network devices, while also calling out Russia for closely monitoring military targets and allied activities, and Iran for keeping tabs on dissidents.

Stating that Chinese intelligence services attempt to recruit Norwegian nationals to gain access to classified data, the Norwegian Police Security Service (PST) noted that these sources are then encouraged to establish their own "human source" networks by advertising part-time positions on job boards or approaching them via LinkedIn.

The agency further warned that China is "systematically" exploiting collaborative research and development efforts to strengthen its own security and intelligence capabilities. It's worth noting that Chinese law requires software vulnerabilities identified by Chinese researchers to be reported to the authorities no later than two days after discovery.

"Iranian cyber threat actors compromise email accounts, social media profiles, and private computers belonging to dissidents to collect information about them and their networks," PST said. "These actors have advanced capabilities and will continue to develop their methods to conduct increasingly targeted and intrusive operations against individuals in Norway."

The disclosure follows an advisory from CERT Polska, which assessed that a Russian nation-state hacking group called Static Tundra is likely behind coordinated cyber attacks targeted at more than 30 wind and photovoltaic farms, a private company from the manufacturing sector, and a large combined heat and power plant (CHP) supplying heat to almost half a million customers in the country.

"In each affected facility, a FortiGate device was present, serving as both a VPN concentrator and a firewall," it said. "In every case, the VPN interface was exposed to the internet and allowed authentication to accounts defined in the configuration without multi‑factor authentication."



from The Hacker News https://ift.tt/vDOQWVE
via IFTTT

Friday, February 6, 2026

AI Security, From Data to Runtime: A Holistic Defense Approach

As organizations rush to adopt AI, they are discovering that traditional, siloed security tools cannot keep pace. The data is too vast, the infrastructure is too interconnected, and runtime environments are too dynamic. Security leaders are confronting a hard reality: AI cannot be secured with point solutions — it is just too broad.

To scale AI with confidence, enterprises must move beyond check-the-box controls and adopt a holistic, machine-speed defense that secures the entire AI lifecycle. This means protecting the data that fuels and is accessed by models, the cloud infrastructure that runs them, and the workloads and AI systems operating at runtime as a single, unified, and immutable system.

As AI capabilities accelerate, a critical question is emerging in the market: Does AI reduce the need for cybersecurity, or fundamentally increase it?

The answer is clear. With current infrastructure architectures, AI is not a replacement for security. It is a multiplier for risk. Models ingest massive volumes of data and agents can sprawl uncontrollably. It depends on complex cloud infrastructure and operates continuously at machine speed. Each stage of the AI lifecycle introduces new attack paths and new failure modes.

Today, SentinelOne is announcing the expansion of its AI Security platform with new Data Security Posture Management (DSPM) capabilities, model red teaming, validation and guardrails (by Prompt Security), MCP Security (by Prompt Security), AI-SPM, AI Workload Protection, and AI end user protection. This milestone advances our broader vision, delivering a unified platform that secures AI end to end, from data accessed, all through runtime execution and model input and output. This is complete security, visibility, and governance over Al usage throughout its entire lifecycle.

The Foundation: Securing AI at the Data Layer

AI security starts with data, not because data is abundant, but because mistakes made at this stage are irreversible. AI models also don’t just process the data they ingest, they memorize it. If sensitive PII, credentials, or proprietary information enter a training pipeline, that data can become baked into a model’s weights, creating a permanent security liability that is nearly impossible to remediate later.

This risk is amplified by scale. Industry projections estimate that the global datasphere, the unstructured data stored in cloud object stores and increasingly fed into AI pipelines, will reach 10.5 zettabytes by 2028. This data is not just storage. It is the fuel that trains, fine-tunes, and powers AI systems. This is why data security is the first mile of AI security.

With the introduction of these new DSPM capabilities, SentinelOne enables organizations to establish a “safe-to-train” gate before data ever reaches an AI pipeline. These capabilities provide deep visibility into cloud-native databases and object stores, allowing teams to discover unmanaged or forgotten data sources, classify sensitive information with policy-driven precision, and prevent high-risk data from being used in training or inference workflows.

Singularity Cloud Security’s integrated DSPM discovers cloud object stores and databases and classifies sensitive data that could find its way into AI training pipelines. 

However, visibility alone is not enough. AI pipelines ingest data at massive scale, making them an attractive vehicle for malware delivery and pipeline poisoning. In addition to identifying and redacting sensitive data, SentinelOne actively scans cloud storage at machine speed to prevent malicious content from ever reaching AI models or applications. By securing data at ingestion all before training begins, organizations eliminate entire classes of AI risk that cannot be fixed downstream. This is the foundation for trusted AI adoption.

The Infrastructure Layer: Securing the Systems That Run AI

Securing AI data is necessary, but it is not sufficient. Data does not exist in isolation. Rather, it lives on cloud infrastructure and in AI environments where infrastructure becomes a critical failure point.

AI workloads introduce a uniquely high-risk combination of high-value data, high-privilege access, and high-performance compute. AI factories, training clusters, managed AI services, and inference endpoints often require broad permissions and continuous access to cloud object stores. Without strong infrastructure controls, attackers can pivot from exposed data into model logic, model weights, or downstream applications. This is where cloud infrastructure security becomes inseparable from AI security.

Traditional Cloud Security Posture Management (CSPM) provides essential hygiene across the cloud estate by identifying misconfigurations, excessive permissions, and policy drift. In AI environments, however, security teams also need visibility and control that is specific to how models are built, deployed, and accessed.

AI-Security Posture Management (AI-SPM) extends infrastructure security directly into the AI layer. By treating AI systems as first-class assets, AI-SPM provides a unified inventory of training jobs, development notebooks, managed AI services, and inference endpoints across the environment. Together, CSPM and AI-SPM allow security teams to understand how data, infrastructure, and AI systems are connected. They can trace attack paths from misconfigured storage to over-privileged training containers, detect unmanaged AI assets, and prevent adversaries from moving laterally from the cloud foundation into model logic.

This infrastructure layer is what connects secure data to secure runtime and it is essential for protecting AI at scale.

Singularity Cloud Security measures compliance posture over time against multiple global AI regulations including the EU AI Act.

The Runtime Layer: Protecting AI In Production

AI security cannot stop when a model finishes training. The moment AI systems move into production, they begin interacting with real users, real data, and real business processes, making runtime protection a critical part of the AI security lifecycle.

At runtime, AI workloads operate continuously and at machine speed. Models and agents execute inside cloud workloads that must be protected against exploitation, unauthorized access, and lateral movement. Any compromise at this stage can immediately impact business operations, data integrity, and customer trust.

This is where runtime workload protection becomes essential. Cloud Workload Protection Platforms (CWPP) provide real-time visibility and enforcement across the compute environments running AI models, ensuring that workloads are monitored, hardened, and protected without degrading the performance required for high-velocity inference.

By extending protection into runtime, security teams ensure that AI systems remain secure not only during development and deployment, but throughout their operational life. This completes the AI security lifecycle from data ingestion, through infrastructure, to production execution.

Prompt Security and AI Red-Teaming: Continuously Validating Trust

Securing AI at runtime goes beyond protecting the workloads that execute models. It also requires validating how models behave when they are used (and misused) in the real world.

Prompts are the primary interface to AI systems and they represent a powerful new attack surface. Malicious or malformed prompts can be used to bypass controls, extract sensitive information, manipulate model behavior, or trigger unintended actions in downstream systems. These risks cannot be addressed solely through static controls or one-time reviews. This is where prompt security and AI red-teaming become essential.

By continuously testing AI systems with adversarial prompts and simulated attacks, organizations can identify behavioral weaknesses before they are exploited in production. AI red-teaming helps validate that models behave as intended under real-world conditions, exposing prompt-level vulnerabilities, unsafe outputs, and policy bypasses that would otherwise go undetected.

When combined with runtime protection, this approach ensures that AI systems are not only secure in how they are built and deployed, but also resilient in how they respond — even as models evolve, prompts change, and new attack techniques emerge.

This continuous validation loop is critical for maintaining trust in production AI systems and closing the final gap in the AI security lifecycle.

A Unified Fabric for AI Security

The transition to AI is ultimately a trust shift. Organizations will only move AI from experimentation to production if they can trust the data that trains models, the infrastructure that runs them, and the systems that govern how AI operates at runtime. Securing AI therefore cannot be fragmented. It requires a unified platform that treats data, infrastructure, and runtime as a single, connected system with shared context and continuous visibility across the entire AI lifecycle.

By integrating data security, cloud infrastructure posture management, AI-specific posture management, and runtime workload protection, SentinelOne delivers end-to-end AI security from data ingestion through runtime execution. This approach does more than reduce risk. It enables velocity. When security is built into the foundation, organizations can deploy AI faster, meet evolving regulatory requirements more easily, and innovate with confidence.

Secure the data.

Secure the infrastructure.

Secure the runtime.

This is how AI moves from risk to real-world impact. Contact us or book a demo to see how SentinelOne secures AI end to end — from data ingestion to runtime execution. Not ready for a demo? Join our webinar to learn how AI security comes together across the full lifecycle.

Join the Webinar
Guarding the Crown Jewels: A Data-Centric Approach to Cloud Security


from SentinelOne https://ift.tt/x4hGfH6
via IFTTT