Monday, April 13, 2026

How to Analyze Hugging Face for Arm64 Readiness

This post is a collaboration between Docker and Arm, demonstrating how Docker MCP Toolkit and the Arm MCP Server work together to scan Hugging Face Spaces for Arm64 Readiness.

In our previous post, we walked through migrating a legacy C++ application with AVX2 intrinsics to Arm64 using Docker MCP Toolkit and the Arm MCP Server – code conversion, SIMD intrinsic rewrites, compiler flag changes, the full stack. This post is about a different and far more common failure mode.

When we tried to run ACE-Step v1.5, a 3.5B parameter music generation model from Hugging Face, on an Arm64 MacBook, the installation failed not with a cryptic kernel error but with a pip error. The flash-attn wheel in requirements.txt was hardcoded to a linux_x86_64 URL, no Arm64 wheel existed at that address, and the container would not build. It’s a deceptively simple problem that turns out to affect roughly 80% of Hugging Face Docker Spaces: not the code, not the Dockerfile, but a single hardcoded dependency URL that nobody noticed because nobody had tested on Arm.

To diagnose this systematically, we built a 7-tool MCP chain that can analyze any Hugging Face Space for Arm64 readiness in about 15 minutes. By the end of this guide you’ll understand exactly why ACE-Step v1.5 fails on Arm64, what the two specific blockers are, and how the chain surfaces them automatically.

Why Hugging Face Spaces Matter for Arm

Hugging Face hosts over one million Spaces, a significant portion of which use the Docker SDK, meaning developers write a Dockerfile and Hugging Face builds and serves the container directly. The problem is that nearly all of those containers were built and tested exclusively on linux/amd64, which creates a deployment wall for three fast-growing Arm64 targets that are increasingly relevant for AI workloads.

| Target | Hardware | Why it matters |
| --- | --- | --- |
| Cloud | AWS Graviton, Azure Cobalt, Google Axion | 20-40% cost reduction vs. x86 |
| Edge/Robotics | NVIDIA Jetson Thor, DGX Spark | GR00T, LeRobot, Isaac all target Arm64 |
| Local dev | Apple Silicon M1-M4 | Most popular developer machine, zero cloud cost |

The failure mode isn’t always obvious, and it tends to show up in one of two distinct patterns. The first is a missing container manifest – the image has no arm64 layer and Docker refuses to pull it, which is at least straightforward to diagnose. The second is harder to catch: the Dockerfile and base image are perfectly fine, but a dependency in requirements.txt points to a platform-specific wheel URL. The build starts, reaches pip install, and fails with a platform mismatch error that gives no clear indication of where to look. ACE-Step v1.5 is a textbook example of the second pattern, and the MCP chain catches both in minutes.
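The second pattern is easy to check for before any build starts. Here is a minimal sketch of such a check (our illustration, not part of the MCP chain, and the example URL is hypothetical): it flags requirement lines that pin a direct wheel URL with an x86-only platform tag.

```python
import re

# Wheel filenames end in "-<platform tag>.whl" (PEP 427); anything pinned
# to an x86-only platform tag cannot install on an Arm64 host.
X86_TAG = re.compile(
    r"-(linux_x86_64|manylinux[^-]*_x86_64|win_amd64|macosx[^-]*_x86_64)\.whl"
)

def find_x86_pinned_wheels(requirements_text: str) -> list[str]:
    """Return requirement lines that hardcode an x86-only wheel URL."""
    hits = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue  # skip comments
        # Direct-reference requirements use "name @ URL"
        if "@" in stripped and X86_TAG.search(stripped):
            hits.append(stripped)
    return hits

# Hypothetical requirements.txt content for illustration
reqs = """\
torch==2.9.1
triton>=3.0.0; sys_platform != 'win32'
flash-attn @ https://example.com/flash_attn-2.8.3-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux'
"""

for hit in find_x86_pinned_wheels(reqs):
    print("x86-pinned:", hit)
```

A hit from this scan is exactly the second failure pattern: the Dockerfile is fine, but the build will die at pip install on Arm hardware.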

The 7-Tool MCP Chain

Docker MCP Toolkit orchestrates the analysis through a secure MCP Gateway. Each tool runs in an isolated Docker container. The seven tools in the chain are:

The 7-tool MCP chain architecture diagram

Caption: The 7-tool MCP chain architecture diagram

The tools:

  1. Hugging Face MCP – Discovers the Space, identifies SDK type (Docker vs. Gradio)
  2. Skopeo (via Arm MCP Server) – Inspects the container registry, reports supported architectures
  3. migrate-ease (via Arm MCP Server) – Scans source code for x86-specific intrinsics, hardcoded paths, arch-locked libraries
  4. GitHub MCP – Reads Dockerfile, pyproject.toml, requirements.txt from the repository
  5. Arm Knowledge Base (via Arm MCP Server) – Searches learn.arm.com for build strategies and optimization guides
  6. Sequential Thinking – Combines findings into a structured migration verdict
  7. Docker MCP Gateway – Routes requests, manages container lifecycle

The natural question at this point is whether you could simply rebuild your Docker image for Arm64 and be done with it. For many applications, you could. But knowing in advance whether the rebuild will actually succeed is a different problem. Your Dockerfile might depend on a base image that doesn’t publish Arm64 builds. Your Python dependencies might not have aarch64 wheels. Your code might use x86-specific system calls. The MCP chain checks all of this automatically before you invest time in a build that may not work.
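The base-image question, at least, is easy to answer yourself. The sketch below (an illustration, not part of the MCP chain) parses a Docker/OCI image index, the JSON that tools like `skopeo inspect --raw` or `docker buildx imagetools inspect --raw` return, and reports which architectures are actually published; the sample index is a trimmed, hypothetical example of that shape.

```python
import json

def published_architectures(index_json: str) -> set[str]:
    """Extract the architectures listed in a Docker/OCI image index."""
    index = json.loads(index_json)
    archs = set()
    for m in index.get("manifests", []):
        arch = m.get("platform", {}).get("architecture")
        # "unknown" entries are attestation manifests, not runnable images
        if arch and arch != "unknown":
            archs.add(arch)
    return archs

# Trimmed, hypothetical index in the shape a multi-arch image returns
sample = json.dumps({
    "schemaVersion": 2,
    "manifests": [
        {"digest": "sha256:aaa...", "platform": {"architecture": "amd64", "os": "linux"}},
        {"digest": "sha256:bbb...", "platform": {"architecture": "arm64", "os": "linux"}},
    ],
})

print(sorted(published_architectures(sample)))  # ['amd64', 'arm64']
```

If "arm64" is missing from that set, you have the first failure pattern before spending a minute on a rebuild.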

Setting Up Visual Studio Code with Docker MCP Toolkit

Prerequisites

Before you begin, make sure you have:

  • A machine with 8 GB RAM minimum (16 GB recommended)
  • The latest Docker Desktop release
  • VS Code with GitHub Copilot extension
  • GitHub account with personal access token

Step 1. Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from Settings.

To enable:

  1. Open Docker Desktop
  2. Go to Settings > Beta Features
Enabling Docker MCP Toolkit under Docker Desktop

Caption: Enabling Docker MCP Toolkit under Docker Desktop

  3. Toggle Docker MCP Toolkit ON
  4. Click Apply

Step 2. Add Required MCP Servers from Catalog

Add the following four MCP Servers from the Catalog: Arm MCP Server, GitHub Official, Sequential Thinking, and Hugging Face. You can find them by selecting “Catalog” in the Docker Desktop MCP Toolkit and searching by name:

Searching for Arm MCP Server in the Docker MCP Catalog

Caption: Searching for Arm MCP Server in the Docker MCP Catalog

Step 3. Configure the Servers

  1. Configure the Arm MCP Server

To access your local code for the migrate-ease scan and MCA tools, the Arm MCP Server needs a directory configured to point to your local code.

Arm MCP Server configuration

Caption: Arm MCP Server configuration

Once you click ‘Save’, the Arm MCP Server will know where to look for your code. If you want to give a different directory access in the future, you’ll need to change this path.

Available Arm Migration Tools

Click Tools to view all six MCP tools available under Arm MCP Server:

List of MCP tools provided by the Arm MCP Server

Caption: List of MCP tools provided by the Arm MCP Server

  • knowledge_base_search – Semantic search of Arm learning resources, intrinsics documentation, and software compatibility
  • migrate_ease_scan – Code scanner supporting C++, Python, Go, JavaScript, and Java for Arm compatibility analysis
  • check_image – Docker image architecture verification (checks if images support Arm64)
  • skopeo – Remote container image inspection without downloading
  • mca – Machine Code Analyzer for assembly performance analysis and IPC predictions
  • sysreport_instructions – System architecture information gathering
  2. Configure the GitHub MCP Server

The GitHub MCP Server lets GitHub Copilot read repositories, create pull requests, manage issues, and commit changes.

Steps to configure GitHub Official MCP Server

Caption: Steps to configure GitHub Official MCP Server

Configure Authentication:

  1. Select GitHub official
  2. Choose your preferred authentication method
  3. For Personal Access Token, get the token from GitHub > Settings > Developer Settings
Setting up Personal Access Token in GitHub MCP Server

Caption: Setting up Personal Access Token in GitHub MCP Server

  3. Configure the Sequential Thinking MCP Server
  • Click “Sequential Thinking”
  • No configuration needed
Sequential MCP Server requires zero configuration

Caption: Sequential MCP Server requires zero configuration

This server helps GitHub Copilot break down complex migration decisions into logical steps.

  4. Configure the Hugging Face MCP Server

The Hugging Face MCP Server provides access to Space metadata, model information, and repository contents directly from the Hugging Face Hub.

  • Click “Hugging Face”
  • No additional configuration needed for public Spaces
  • For private Spaces, add your HuggingFace API token

Step 4. Add the Servers to VS Code

The Docker MCP Toolkit makes it incredibly easy to configure MCP servers for clients like VS Code.

To configure, click “Clients” and scroll down to Visual Studio Code. Click the “Connect” button:

Setting up Visual Studio Code as MCP Client

Caption: Setting up Visual Studio Code as MCP Client

Now open VS Code and click on the ‘Extensions’ icon in the left toolbar:

Configuring MCP_DOCKER under VS Code Extensions

Caption: Configuring MCP_DOCKER under VS Code Extensions

Click the MCP_DOCKER gear, and click ‘Start Server’:

Starting MCP Server under VS Code

Caption: Starting MCP Server under VS Code

Step 5. Verify Connection

Open GitHub Copilot Chat in VS Code and ask:

What Arm migration and Hugging Face tools do you have access to?

You should see tools from all four servers listed. If you see them, your connection works. Let’s scan a Hugging Face Space.

Playing around with GitHub Copilot

Caption: Playing around with GitHub Copilot


Real-World Demo: Scanning ACE-Step v1.5

Now that you’ve connected GitHub Copilot to Docker MCP Toolkit, let’s scan a real Hugging Face Space for Arm64 readiness and uncover the exact Arm64 blocker we hit when trying to run it locally.

  • Target: ACE-Step v1.5 – a 3.5B parameter music generation model 
  • Time to scan: 15 minutes 
  • Infrastructure cost: $0 (all tools run locally in Docker containers) 

The Workflow

Docker MCP Toolkit orchestrates the scan through a secure MCP Gateway that routes requests to specialized tools: the Arm MCP Server inspects images and scans code, Hugging Face MCP discovers the Space, GitHub MCP reads the repository, and Sequential Thinking synthesizes the verdict. 

Step 1. Give GitHub Copilot Scan Instructions

Open your project in VS Code. In GitHub Copilot Chat, paste this prompt:

Your goal is to analyze the Hugging Face Space "ACE-Step/ACE-Step-v1.5" for Arm64 migration readiness. Use the MCP tools to help with this analysis.

Steps to follow:
1. Use Hugging Face MCP to discover the Space and identify its SDK type (Docker or Gradio)
2. Use skopeo to inspect the container image - check what architectures are currently supported
3. Use GitHub MCP to read the repository - examine pyproject.toml, Dockerfile, and requirements
4. Run migrate_ease_scan on the source code to find any x86-specific dependencies or intrinsics
5. Use knowledge_base_search to find Arm64 build strategies for any issues discovered
6. Use sequential thinking to synthesize all findings into a migration verdict

At the end, provide a clear GO / NO-GO verdict with a summary of required changes.

Step 2. Watch Docker MCP Toolkit Execute

GitHub Copilot orchestrates the scan using Docker MCP Toolkit. Here’s what happens:

Phase 1: Space Discovery

GitHub Copilot starts by querying the Hugging Face MCP server to retrieve Space metadata.

GitHub Copilot uses HuggingFace MCP to discover the Space and identify its SDK type.

Caption: GitHub Copilot uses Hugging Face MCP to discover the Space and identify its SDK type.

The tool returns that ACE-Step v1.5 uses the Docker SDK – meaning Hugging Face serves it as a pre-built container image, not a Gradio app. This is critical: Docker SDK Spaces have Dockerfiles we can analyze and rebuild, while Gradio SDK Spaces are built by Hugging Face’s infrastructure we can’t control.

Phase 2: Container Image Inspection

Next, Copilot uses the Arm MCP Server’s skopeo tool to inspect the container image without downloading it.

The skopeo tool reports that the container image has no arm64 build available. The container won't start on Arm hardware.

Caption: The skopeo tool reports that the container image has no Arm64 build available. The container won’t start on Arm hardware.

Result: the manifest includes only linux/amd64. No Arm64 build exists. This is the first concrete data point: the container will fail to start on any Arm hardware. But this is not the full story.

Phase 3: Source Code Analysis

Copilot uses GitHub MCP to read the repository’s key files. Here is the actual Dockerfile from the Space:

FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    DEBIAN_FRONTEND=noninteractive \
    TORCHAUDIO_USE_TORCHCODEC=0

RUN apt-get update && \
    apt-get install -y --no-install-recommends git libsndfile1 build-essential && \
    apt-get install -y ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswresample-dev && \
    rm -rf /var/lib/apt/lists/*

RUN useradd -m -u 1000 user
RUN mkdir -p /data && chown user:user /data && chmod 755 /data

ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH \
    GRADIO_SERVER_NAME=0.0.0.0 \
    GRADIO_SERVER_PORT=7860

WORKDIR $HOME/app
COPY --chown=user:user requirements.txt .
COPY --chown=user:user acestep/third_parts/nano-vllm ./acestep/third_parts/nano-vllm
USER user

RUN pip install --no-cache-dir --user -r requirements.txt
RUN pip install --no-deps ./acestep/third_parts/nano-vllm

COPY --chown=user:user . .
EXPOSE 7860
CMD ["python", "app.py"]

The Dockerfile itself looks clean:

  • python:3.11-slim already publishes multi-arch builds including arm64
  • No -mavx2, no -march=x86-64 compiler flags
  • build-essential, ffmpeg, libsndfile1 are all available in Debian’s arm64 repositories

But the real problem is in requirements.txt. This is what we hit when we tried to install ACE-Step locally:

# nano-vllm dependencies
triton>=3.0.0; sys_platform != 'win32'

flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/
  download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl
  ; sys_platform == 'linux' and python_version == '3.11'

Two immediate blockers:

  • flash-attn is pinned to a hardcoded linux_x86_64 wheel URL. The environment marker still matches on Linux aarch64, so pip downloads this wheel and immediately rejects it: “not a supported wheel on this platform.” This is the exact error we hit.
  • triton>=3.0.0 looked like a second blocker: older triton releases published no aarch64 wheel on PyPI for Linux. (As the knowledge base lookup later in the chain confirms, triton 3.5.0 and newer do publish aarch64 wheels, so this requirement resolves automatically.)

Neither of these is a code problem. The Python source code is architecture-neutral. The fix is in the dependency declarations.
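The rejection itself is mechanical: pip parses the wheel filename into tags and compares them against the tags the running interpreter supports. You can reproduce the first half of that check with the `packaging` library, which is the same library pip builds on; the snippet below is an illustration and assumes `packaging` is installed.

```python
from packaging.utils import parse_wheel_filename

# The exact wheel filename pinned in ACE-Step's requirements.txt
wheel = "flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl"

name, version, build, tags = parse_wheel_filename(wheel)
platforms = {t.platform for t in tags}

print(name, version)   # flash-attn 2.8.3+cu128torch2.10
print(platforms)       # {'linux_x86_64'}

# On an aarch64 host, no supported interpreter tag carries a
# linux_x86_64 platform, so pip reports:
#   "not a supported wheel on this platform"
print("linux_aarch64" in platforms)  # False
```

The platform tag is baked into the filename, so no amount of rebuilding the container changes the outcome; only the requirement line can.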

Phase 4: Architecture Compatibility Scan

Copilot runs the migrate_ease_scan tool with the Python scanner on the codebase.

The migrate_ease_scan tool analyzes the Python source code and finds zero x86-specific dependencies. No intrinsics, no hardcoded paths, no architecture-locked libraries.

Caption: The migrate_ease_scan tool analyzes the Python source code and finds zero x86-specific dependencies. No intrinsics, no hardcoded paths, no architecture-locked libraries.

The application source code itself returns 0 architecture issues — no x86 intrinsics, no platform-specific system calls. But the scan also flags the dependency manifest. Two blockers in requirements.txt:

| Dependency | Issue | Arm64 Fix |
| --- | --- | --- |
| flash-attn (linux wheel) | Hardcoded linux_x86_64 URL | Add a linux_aarch64 wheel URL alongside; flash-attn 2.7+ also publishes aarch64 wheels on PyPI |
| triton>=3.0.0 | No aarch64 PyPI wheel for Linux before 3.5.0 | None needed: 3.5.0+ publishes aarch64 wheels, so the requirement resolves automatically |

Phase 5: Arm Knowledge Base Lookup

Copilot queries the Arm MCP Server’s knowledge base for solutions to the discovered issues.

GitHub Copilot uses the knowledge_base_search tool to find Docker buildx multi-arch strategies from learn.arm.com.

Caption: GitHub Copilot uses the knowledge_base_search tool to find Docker buildx multi-arch strategies from learn.arm.com.

The knowledge base returns documentation on:

  • flash-attn aarch64 wheel availability from version 2.7+
  • PyTorch Arm64 optimization guides for Graviton and Apple Silicon
  • Best practices for CUDA 13.0 on aarch64 (Jetson Thor / DGX Spark)
  • triton alternatives for CPU inference paths on Arm

Phase 6: Synthesis and Verdict

Sequential Thinking combines all findings into a structured verdict:

| Check | Result | Blocks? |
| --- | --- | --- |
| Container manifest | amd64 only | Yes, needs rebuild |
| Base image python:3.11-slim | Multi-arch (arm64 available) | No |
| System packages (ffmpeg, libsndfile1) | Available in Debian arm64 | No |
| torch==2.9.1 | aarch64 wheels published | No |
| flash-attn linux wheel | Hardcoded linux_x86_64 URL | YES, add arm64 URL alongside |
| triton>=3.0.0 | aarch64 wheels available from 3.5.0+ | No, resolves automatically |
| Source code (migrate-ease) | 0 architecture issues | No |
| Compiler flags in Dockerfile | None x86-specific | No |

Verdict: CONDITIONAL GO. Zero code changes. Zero Dockerfile changes. One dependency fix is required.


Here are the exact changes needed in requirements.txt:

# BEFORE — only x86_64
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11'


# AFTER — add arm64 line alongside x86_64
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_aarch64.whl ; sys_platform == 'linux' and python_version == '3.11' and platform_machine == 'aarch64'
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11' and platform_machine != 'aarch64'

# triton — no change needed, 3.5.0+ has aarch64 wheels, resolves automatically
triton>=3.0.0; sys_platform != 'win32'
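Those environment markers are what make the fix safe on both architectures. One way to sanity-check them is PEP 508 marker evaluation with the `packaging` library (an illustrative check, assuming `packaging` is installed), confirming that exactly one flash-attn line matches on each platform:

```python
from packaging.markers import Marker

# The two markers from the fixed requirements.txt
x86_marker = Marker(
    "sys_platform == 'linux' and python_version == '3.11' "
    "and platform_machine != 'aarch64'"
)
arm_marker = Marker(
    "sys_platform == 'linux' and python_version == '3.11' "
    "and platform_machine == 'aarch64'"
)

# Simulate pip's marker environment on two hosts
graviton = {"sys_platform": "linux", "python_version": "3.11",
            "platform_machine": "aarch64"}
x86_box  = {"sys_platform": "linux", "python_version": "3.11",
            "platform_machine": "x86_64"}

print(arm_marker.evaluate(graviton), x86_marker.evaluate(graviton))  # True False
print(arm_marker.evaluate(x86_box), x86_marker.evaluate(x86_box))    # False True
```

Because the two markers are mutually exclusive, pip installs exactly one flash-attn wheel per platform and never attempts the wrong one.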

After those two fixes, the build command is:

docker buildx build --platform linux/arm64 -t ace-step:arm64 .

That single command unlocks three deployment paths:

  • NVIDIA Arm64 — Jetson Thor, DGX Spark (aarch64 + CUDA 13.0)
  • Cloud Arm64 — AWS Graviton, Azure Cobalt, Google Axion (20-40% cost savings)
  • Apple Silicon — M1-M4 Macs with MPS acceleration (local inference, $0 cloud cost)

Phase 7: Create the Pull Request

After completing the scan, Copilot uses GitHub MCP to propose the fix. Since the only blocker is the hardcoded linux_x86_64 wheel URL on line 32 of requirements.txt, the change is surgical: one line added, nothing removed.

The fix adds the equivalent linux_aarch64 wheel from the same release alongside the existing x86_64 entry, conditioned on platform_machine == 'aarch64':

# BEFORE — only x86_64, fails on Arm with a platform mismatch
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/
  download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl
  ; sys_platform == 'linux' and python_version == '3.11'

# AFTER — add arm64 line alongside, conditioned by platform_machine
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/
  download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl
  ; sys_platform == 'linux' and python_version == '3.11'
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/
  download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_aarch64.whl
  ; sys_platform == 'linux' and python_version == '3.11' and platform_machine == 'aarch64'
PR #14 on Hugging Face - Ready to merge

Caption: PR #14 on Hugging Face – Ready to merge

The key insight: the upstream maintainer already published the arm64 wheel in the same release. The fix wasn’t a rebuild or a code change – it was adding one line that references an artifact that already existed. The MCP chain found it in 15 minutes. Without it, a developer hitting this pip error would spend hours tracking it down.

PR: https://huggingface.co/spaces/ACE-Step/Ace-Step-v1.5/discussions/14

Without Arm MCP vs. With Arm MCP

Let’s be clear about what changes when you add the Arm MCP Server to Docker MCP Toolkit.

  • Without Arm MCP: You ask GitHub Copilot to check your Hugging Face Space for Arm64 compatibility. Copilot responds with general advice: “Check if your base image supports arm64”, “Look for x86-specific code”, “Try rebuilding with buildx”. You manually inspect Docker Hub, grep through the codebase, check each dependency on PyPI, and hit a pip install failure you cannot easily diagnose. The flash-attn URL issue alone can take an hour to track down.
  • With Arm MCP + Docker MCP Toolkit: You ask the same question. Within minutes, it uses skopeo to verify the base image, runs migrate_ease_scan on your actual codebase, flags the hardcoded linux_x86_64 wheel URLs in requirements.txt, queries knowledge_base_search for the correct fix, and synthesizes a structured CONDITIONAL GO verdict with every check documented.

Real images get inspected. Real code gets scanned. Real dependency files get analyzed. The difference is Docker MCP Toolkit gives GitHub Copilot access to actual Arm migration tooling, not just general knowledge.

Manual Process vs. MCP Chain

Manual process:

  1. Clone the Hugging Face Space repository (10 minutes)
  2. Inspect the container manifest for architecture support (5 minutes)
  3. Read through pyproject.toml and requirements.txt (20 minutes)
  4. Check PyPI for Arm64 wheel availability across all dependencies (30 minutes)
  5. Analyze the Dockerfile for hardcoded architecture assumptions (10 minutes)
  6. Research CUDA/cuDNN Arm64 support for the required versions (20 minutes)
  7. Write up findings and recommended changes (15 minutes)

Total: 2-3 hours per Space

With Docker MCP Toolkit:

  1. Give GitHub Copilot the scan instructions (5 minutes)
  2. Review the migration report (5 minutes)
  3. Submit a PR with changes (5 minutes)

Total: 15 minutes per Space

What This Suggests at Scale

ACE-Step is a standard Python AI application: PyTorch, Gradio, pip dependencies, a slim Dockerfile. This pattern covers the majority of Docker SDK Spaces on Hugging Face.

The Arm64 wall for these apps is not always visible. The Dockerfile looks clean. The base image supports arm64. The Python code has no intrinsics. But buried in requirements.txt is a hardcoded wheel URL pointing at a linux_x86_64 binary, and nobody finds it until they actually try to run the container on Arm hardware.

That is the 80% problem: 80% of Hugging Face Docker Spaces have never been tested on Arm. Not because the code will not work, but because nobody checked. The MCP chain is a systematic check that takes 15 minutes instead of an afternoon of debugging pip errors.

That has real cost implications:

  • Graviton inference runs 20-40% cheaper for the same workloads. Every amd64-only Space leaves that savings untouched.
  • NVIDIA Physical AI (GR00T, LeRobot, Isaac) deploys on Jetson Thor. Developers find models on Hugging Face, but the containers fail to build on target hardware.
  • Apple Silicon is the most common developer laptop. Local inference means faster iteration and no cloud bill.

How Docker MCP Toolkit Changes Development

Docker MCP Toolkit changes how developers interact with specialized knowledge and capabilities. Rather than learning new tools, installing dependencies, or managing credentials, developers connect their AI assistant once and immediately access containerized expertise.

The benefits extend beyond Hugging Face scanning:

  • Consistency — Same 7-tool chain produces the same structured analysis for any container
  • Security — Each tool runs in an isolated Docker container, preventing tool interference
  • Reproducibility — Scans behave identically across environments
  • Composability — Add or swap tools as the ecosystem evolves
  • Discoverability — Docker MCP Catalog makes finding the right server straightforward

Most importantly, developers remain in their existing workflow. VS Code. GitHub Copilot. Git. No context switching to external tools or dashboards.

Wrapping Up

You have just scanned a real Hugging Face Space for Arm64 readiness using Docker MCP Toolkit, the Arm MCP Server, and GitHub Copilot. What we found with ACE-Step v1.5 is representative of what you will find across Hugging Face: code that is architecture-neutral, a Dockerfile that is already clean, but a requirements.txt with hardcoded x86_64 wheel URLs that silently break Arm64 builds.

The MCP chain surfaces this in 15 minutes. Without it, you are staring at a pip error with no clear path to the cause.

Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with the Arm MCP Server, then add GitHub, Sequential Thinking, and Hugging Face MCP. Point the chain at any Hugging Face Space you’re working with and see what comes back.

Learn More



from Docker https://ift.tt/y2o8TfF
via IFTTT

FBI and Indonesian Police Dismantle W3LL Phishing Network Behind $20M Fraud Attempts

The U.S. Federal Bureau of Investigation (FBI), in partnership with the Indonesian National Police, has dismantled the infrastructure associated with a global phishing operation that leveraged an off-the-shelf toolkit called W3LL to steal thousands of victims' account credentials and attempt more than $20 million in fraud.

In tandem, authorities detained the alleged developer, who has been identified as G.L, and seized key domains linked to the phishing scheme. "The takedown cuts off a major resource used by cybercriminals to gain unauthorized access to victims' accounts," the FBI said in a statement. 

The W3LL phishing kit enabled its customers to deploy bogus websites that mimicked legitimate login portals, deceiving victims into handing over their credentials and allowing the attackers to seize control of their accounts. The kit was advertised for a fee of about $500.

"This wasn't just phishing – it was a full-service cybercrime platform," FBI Atlanta Special Agent in Charge Marlo Graham said. "We will continue to work with our domestic and foreign law enforcement partners, using all available tools to protect the public."

W3LL was first documented by Singapore-headquartered Group-IB in September 2023, highlighting the operators' use of an underground marketplace called the W3LL Store that served approximately 500 threat actors and allowed them to purchase access to the W3LL Panel phishing kit alongside other cybercrime tools for business email compromise (BEC) attacks.

The cybersecurity company described W3LL as an all-in-one phishing platform that offers a wide range of services, right from custom phishing tools and mailing lists to access to compromised servers. The threat actor behind the illicit service is believed to have been active since 2017, previously developing bulk email spam tools like PunnySender and W3LL Sender.

Per the FBI, the W3LL Store also facilitated the sale of stolen credentials and unauthorized system access, including remote desktop connections. More than 25,000 compromised accounts are estimated to have been peddled in the storefront between 2019 and 2023.

"Primarily focused on Microsoft 365 credentials, W3LL utilizes adversary-in-the-middle (AitM) to hijack session cookies and bypass multi-factor authentication," Hunt.io said in a report published in March 2024.

Then last year, French security company Sekoia, in its analysis of another phishing kit known as Sneaky 2FA, revealed the tool "reused a few bits of code" from the W3LL Store phishing syndicate, adding that cracked versions of W3LL have been circulated in the past few years.

"Even after W3LLSTORE shut down in 2023, the operation continued through encrypted messaging platforms, where the tool was rebranded and actively marketed," the FBI said. "From 2023 to 2024 alone, the phishing kit was used to target more than 17,000 victims worldwide."

"The developer behind the tool collected and resold access to compromised accounts, amplifying the reach and impact of the scheme."



from The Hacker News https://ift.tt/BLEb2Uk
via IFTTT

⚡ Weekly Recap: Fiber Optic Spying, Windows Rootkit, AI Vulnerability Hunting and More

Monday is back, and the weekend’s backlog of chaos is officially hitting the fan. We are tracking a critical zero-day that has been quietly living in your PDFs for months, plus some aggressive state-sponsored meddling in infrastructure that is finally coming to light. It is one of those mornings where the gap between a quiet shift and a full-blown incident response is basically non-existent.

The variety this week is particularly nasty. We have AI models being turned into autonomous exploit engines, North Korean groups playing the long game with social engineering, and fileless malware hitting enterprise workflows. There is also a major botnet takedown and new research proving that even fiber optic cables can be used to eavesdrop on your private conversations.

Skim this before your next meeting. Let’s get into it.

⚡ Threat of the Week

Adobe Acrobat Reader 0-Day Under Attack — Adobe released emergency updates to fix a critical security flaw in Acrobat Reader that has come under active exploitation in the wild. The vulnerability, assigned the CVE identifier CVE-2026-34621, carries a CVSS score of 8.6 out of 10.0. Successful exploitation of the flaw could allow an attacker to run malicious code on affected installations. It has been described as a case of prototype pollution that could result in arbitrary code execution. The development comes days after security researcher and EXPMON founder Haifei Li disclosed details of zero-day exploitation of the flaw to run malicious JavaScript code when opening specially crafted PDF documents through Adobe Reader. There is evidence suggesting that the vulnerability may have been under exploitation since December 2025.

🔔 Top News

  • U.S. Warns of Hacking Campaign by Iran-Affiliated Cyber Actors — U.S. agencies warned of a hacking campaign undertaken by Iranian threat actors hitting industrial control systems across the U.S. that has had disruptive and costly effects. The attacks, ongoing since last month, targeted programmable logic controllers (PLCs) in the energy sector, water and wastewater utilities, and government facilities that are left exposed to the public internet with the apparent intention of sabotaging their systems. "In a few cases, this activity has resulted in operational disruption and financial loss," the agencies said. The activity has not been attributed to any particular group. The attacks are part of a wider pattern of escalating Iran-linked operations as the war led by the U.S. and Israel against Iran entered its sixth week. The U.S. and Iran have since agreed to a two-week ceasefire.
  • Anthropic's Mythos Model is a 0-Day and Exploit Generation Engine — A closed consortium including tech giants and top security vendors is getting early access to a general-purpose frontier model that Anthropic says can autonomously discover software vulnerabilities at scale. Because there are concerns that frontier AI capabilities could be abused to launch sophisticated attacks, the idea is to use Mythos to improve the security of some of the most widely used software before bad actors get their hands on it. To that end, Project Glasswing aims to apply these capabilities in a controlled, defensive setting, enabling participating companies to test and improve the security of their own products. In early testing, Anthropic claims the model identified thousands of high-severity vulnerabilities across operating systems, web browsers, and other widely used software, not to mention devising exploits for N-day flaws, in some cases, under a day, significantly compressing the timeline typically required to build working exploits. "New AI models, especially those from Anthropic, have triggered a new set of actions for how we build and secure our products," Cisco, which is one of the launch partners, said. "While the capabilities now available to defenders are remarkable, they soon will also become available to adversaries, defining the critical inflection point we face today. Defensively, AI allows us to scan and secure vast codebases at a scale previously unimaginable. However, it also lowers the threshold for attackers, empowering less-skilled actors to launch complex, high-impact campaigns. Ultimately, AI is accelerating the pace of innovation for both defenders and adversaries alike. The question is simply who gets ahead of it and how fast."
  • Law Enforcement Operation Fells APT28 Router Botnet — APT28 has been silently exploiting known vulnerabilities in small and home office (SOHO) routers since at least May 2025, and changing their DNS server settings to redirect victims to websites it controls for credential theft. The attack chain begins with Forest Blizzard gaining unauthorized access to poorly secured SOHO routers and silently modifying their default network settings so that DNS lookups for select websites are altered to direct users to their bogus counterparts. Specifically, the actor replaces the router's legitimate DNS resolver configuration with actor-controlled DNS servers. Since endpoint devices, such as laptops, phones, and workstations, automatically inherit network configuration from routers via the Dynamic Host Configuration Protocol (DHCP), every device connecting through a compromised router unknowingly begins forwarding its DNS requests to Russian intelligence-controlled infrastructure. For a select subset of high-priority targets, Forest Blizzard escalated beyond passive DNS collection to active Adversary-in-the-Middle (AiTM) attacks against Transport Layer Security (TLS) connections. The compromised router redirects the victim's DNS query to the actor-controlled resolver. The malicious resolver returns a spoofed IP address, directing the victim's device to actor-controlled infrastructure instead of the legitimate service. Forest Blizzard then intercepts the underlying plaintext traffic – potentially including emails, credentials, and sensitive cloud-hosted content. The activity has gradually declined over the past few weeks. The operations are "likely opportunistic in nature, with the actor casting a wide net to reach many potential victims, before narrowing in on targets of intelligence interest as the attack develops," per the U.K. government. 
"The GRU provides fraudulent DNS answers for specific domains and services – including Microsoft Outlook Web Access — enabling adversary-in-the-middle (AitM) attacks against encrypted traffic if users navigate through a certificate error warning. These AitM attacks would allow the actors to see the traffic unencrypted." The operation fits into a series of disruptions aimed at Russian government hackers dating back to 2018, including VPNFilter, Cyclops Blink, and MooBot.
  • Drift Protocol Links Hack to North Korea — Drift Protocol has revealed that a North Korean state-linked group spent six months posing as a trading firm to steal $285 million in digital assets. The attack has been described as a meticulously planned intelligence operation that began in fall 2025, when a group of individuals approached Drift staff at a major cryptocurrency conference, presenting themselves as a quantitative trading firm seeking to integrate with the protocol. Over the next couple of months, the group built trust through in-person meetings, Telegram coordination, and the onboarding of an Ecosystem Vault on Drift, and made a $1 million deposit of their own capital. But once the exploit hit, the trading group vanished, with the chats and malware "completely scrubbed" to cover up the tracks. The Drift Protocol hack follows an increasingly frequent pattern, as this incident marks the 18th North Korea-linked attack Elliptic has tracked in 2026.
  • Bitter-Linked Hack-for-Hire Campaign Targets Journalists Across MENA — An apparent hack-for-hire campaign likely orchestrated by a threat actor with suspected ties to the Indian government targeted journalists, activists, and government officials across the Middle East and North Africa (MENA). The targets included prominent Egyptian journalists and government critics, Mostafa Al-A'sar and Ahmed Eltantawy, along with an anonymous Lebanese journalist. The spear-phishing attacks aimed to compromise their Apple and Google accounts by sending specially crafted links designed to capture their credentials. The attack has been found to share infrastructure overlaps with an Android spyware campaign that leveraged deceptive websites impersonating Signal, ToTok, and Botim to deploy ProSpy and ToSpy to unspecified targets in the U.A.E. While Bitter has not been attributed to espionage campaigns targeting civil society members in the past, the campaign once again demonstrates a growing trend of government agencies outsourcing their hacking operations to private hack-for-hire firms, which develop spyware and exploits for use by law enforcement and intelligence agencies to covertly access data on people's phones.
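The DNS-redirection technique in the APT28 item above works because endpoints silently trust whatever resolvers the router hands out over DHCP. As a minimal, illustrative check for that class of tampering, a host's configured resolvers can be compared against a known-good allowlist (the allowlist addresses and sample resolv.conf content below are placeholders, not real infrastructure):

```python
# Sketch: flag configured DNS resolvers that are not on a known-good allowlist.
# The allowlist below is a placeholder; populate it with your real resolvers.
EXPECTED_RESOLVERS = {"192.0.2.53", "198.51.100.53"}

def unexpected_resolvers(resolv_conf_text: str) -> list[str]:
    """Parse resolv.conf-style text and return any resolvers off the allowlist."""
    rogue = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver" and parts[1] not in EXPECTED_RESOLVERS:
            rogue.append(parts[1])
    return rogue

# On Linux, this text would typically come from /etc/resolv.conf.
sample = "search corp.example\nnameserver 192.0.2.53\nnameserver 203.0.113.7\n"
print(unexpected_resolvers(sample))  # ['203.0.113.7']
```

A resolver outside the allowlist does not prove compromise on its own, but on a network where DHCP-supplied settings should be static, it is exactly the kind of silent change this campaign relied on going unnoticed.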

🔥 Trending CVEs

Bugs drop weekly, and the gap between a patch and an exploit is shrinking fast. These are the heavy hitters for the week: high-severity, widely used, or already being poked at in the wild.

Check the list, patch what you have, and hit the ones marked urgent first — CVE-2026-34621 (Adobe Acrobat Reader), CVE-2026-39987 (Marimo), CVE-2026-34040 (Docker Engine), CVE-2025-59528 (Flowise), CVE-2026-34976 (dgraph), CVE-2026-0049, CVE-2025-48651 (Android), CVE-2026-0740 (Ninja Forms – File Upload plugin), CVE-2025-58136 (Apache Traffic Server), CVE-2026-4350 (Perfmatters plugin), CVE-2026-32922, CVE-2026-33579, GHSA-9p3r-hh9g-5cmg, GHSA-g5cg-8x5w-7jpm, GHSA-8rh7-6779-cjqq, GHSA-hc5h-pmr3-3497, GHSA-j7p2-qcwm-94v4, GHSA-fqw4-mph7-2vr8, GHSA-9hjh-fr4f-gxc4, GHSA-hf68-49fm-59cq (OpenClaw), CVE-2026-29059, CVE-2026-23696, CVE-2026-22683 (Windmill), CVE-2026-34197 (Apache ActiveMQ), CVE-2026-4342 (Kubernetes), CVE-2026-34078 (Flatpak), CVE-2026-31790 (OpenSSL), CVE-2026-0775 (npm cli), CVE-2026-0776 (Discord Client), CVE-2026-0234 (Palo Alto Networks), CVE-2026-4112 (SonicWall), CVE-2026-5437 through CVE-2026-5445 (Orthanc DICOM Server), CVE-2026-30815, CVE-2026-30818 (TP-Link), CVE-2026-33784 (Juniper Networks Support Insights Virtual Lightweight Collector), CVE-2026-23869 (React Server Components), CVE-2026-5707, CVE-2026-5708, CVE-2026-5709 (AWS Research and Engineering Studio), CVE-2026-5173, CVE-2026-1092, CVE-2025-12664 (GitLab), CVE-2026-5858, CVE-2026-5859, CVE-2026-5860 through CVE-2026-5873 (Google Chrome), CVE-2023-46233, CVE-2026-1188, CVE-2026-1342, CVE-2026-1346 (IBM Verify Identity Access and IBM Security Verify Access), CVE-2026-5194 (WolfSSL), and CVE-2026-20929 (Windows HTTP.sys).

🎥 Cybersecurity Webinars

  • The Blueprint for AI Agent Governance: Identity, Visibility, and Control → As autonomous AI agents move from experimental "slideware" to production middleware, they’ve created a massive new attack surface: non-human identities. Join this webinar to cut through the vendor noise and get a practical blueprint for the three pillars of agent security—identity, visibility, and control. Learn how to establish hardware-backed agent identities and implement forensic AI proxies to govern your machine workforce before the "ghosts" in your system become liabilities.
  • State of AI Security 2026: From Experimental Apps to Autonomous Agents → AI is evolving from static tools to autonomous agents, outstripping traditional security faster than ever. With 87% of leaders citing AI as their top emerging risk, the "wait and see" approach is officially over. Join us to dissect the 2026 State of AI Security and gain a battle-tested roadmap for securing model runtimes, preventing agentic data leaks, and governing your machine workforce in production.
  • Validate 56% Faster: How AI Agents are Automating the Pentest Loop → Vulnerability backlogs are endless, but true exploitability is rare. Agentic Exposure Validation uses autonomous AI to safely test your defenses in real-time, proving which risks are real and which are just noise. Join us to learn how to automate your validation loop, prioritize the 1% of flaws that actually matter, and shrink your attack surface at machine speed.

📰 Around the Cyber World

  • Fake Claude Website Drops PlugX — A fake website impersonating Anthropic's Claude is pushing a trojanized installer that deploys known malware referred to as PlugX using a technique called DLL side-loading. "The domain mimics Claude's official site, and visitors who download the ZIP archive receive a copy of Claude that installs and runs as expected," Malwarebytes said. "But in the background, it deploys a PlugX malware chain that gives attackers remote access to the system." While PlugX is known to be widely shared among Chinese hacking groups and delivered via DLL side-loading, its source code has circulated in underground forums, indicating that other threat actors could also be weaponizing the malware in their own attacks.
  • Seized VerifTools Servers Expose 915,655 Fake IDs — In August 2025, a joint law enforcement operation between the Netherlands and the U.S. led to the takedown of a fake ID marketplace called VerifTools. Last week, Dutch police arrested eight suspects in a nationwide operation targeting users of the illicit platform as part of an identity fraud investigation. The male suspects, aged between 20 and 34, have been accused of identity fraud, forgery, and cybercrime-related offenses. In addition, nine suspects have been ordered to report to the police station. This includes seven men aged 18 to 35, and two girls aged 15 and 16. Further investigation into VerifTools has revealed that there were 636,847 registered users from February 2021 to August 2025, with 915,655 fake documents generated between May 2023 and August 2025. Investigators also found 236,002 document images linked to the U.S. that were purchased for about $1.47 million between July 2024 and August 2025.
  • U.K. Government Threatens Tech Execs with Jail Time — The U.K. government said it submitted amendments to the Crime and Policing Bill that, besides criminalizing pornography depicting illegal sexual conduct between family members and adults roleplaying as children and prohibiting people from possessing or publishing such content, also aims to fine or imprison senior executives of companies who fail to remove people's intimate images that have been shared without consent.
  • Optical Fibers for Acoustic Eavesdropping — New research from the Hong Kong Polytechnic University and Chinese University of Hong Kong has uncovered a critical side channel within telecommunication optical fiber that enables acoustic eavesdropping. "By exploiting the sensitivity of optical fibers to acoustic vibrations, attackers can remotely monitor sound-induced deformations in the fiber structure and further recover information from the original sound waves," a group of academics said in an accompanying paper. "This issue becomes particularly concerning with the proliferation of Fiber-to-the-Home (FTTH) installations in modern buildings. Attackers with access to one end of an optical fiber can use commercially available Distributed Acoustic Sensing (DAS) systems to tap into the private environment surrounding the other end."
  • Storm-2755 Conducts Payroll Pirate Attacks — Microsoft said it observed an emerging, financially motivated threat actor dubbed Storm-2755 carrying out payroll pirate attacks targeting Canadian users by abusing legitimate enterprise workflows. "In this campaign, Storm-2755 compromised user accounts to gain unauthorized access to employee profiles and divert salary payments to attacker-controlled accounts, resulting in direct financial loss for affected individuals and organizations," the company said. The tech giant also pointed out that the campaign is distinct from prior activity owing to differences in delivery and targeting. Particularly, this involves the exclusive targeting of Canadian users and the use of malvertising and search engine optimization (SEO) poisoning of industry-agnostic search terms like "Office 365" to lure victims to Microsoft 365 credential harvesting pages. Also notable is the use of adversary‑in‑the‑middle (AiTM) techniques to hijack authenticated sessions, allowing the threat actor to bypass multi-factor authentication (MFA) and blend into legitimate user activity.
  • MITRE Releases F3 Framework to Fight Cyber Fraud — MITRE has released the Fight Fraud Framework (F3), which it described as a "first-of-its-kind effort to define and standardize the tactics and techniques used in cyber-enabled financial fraud." The tactics cover the entire attack lifecycle: Reconnaissance, Resource Development, Initial Access, Defense Evasion, Positioning, Execution, and Monetization. By codifying the tradecraft used to conduct fraud, the idea is to help financial institutions better understand, detect, and prevent fraud through a shared framework of adversary behaviors, it added. "Fraud actors often blend traditional cyber techniques with domain-specific fraud tactics, making a unified cyber-fraud framework essential," MITRE said. "F3 helps defenders connect technical signals to real-world fraud events, enabling a shift from reactive response to proactive defense."
  • RegPhantom, a Stealthy Windows Kernel Rootkit — A new Windows kernel rootkit dubbed RegPhantom can give attackers code execution in kernel mode from an unprivileged user mode context without leaving any major visual evidence behind. "The malware abuses the Windows registry as a covert trigger mechanism: a usermode process can send an encrypted command through a registry write, which the driver intercepts and turns into arbitrary kernel-mode code execution," Nextron Systems said. "What makes this threat notable is the combination of stealth, privilege, and trust abuse. The driver runs as a signed kernel component, allowing it to operate at the highest privilege level on Windows systems. It does not rely on normal driver loading behavior for its payloads and instead reflectively maps code into kernel memory, making the loaded module invisible to standard tools that enumerate drivers. It also blocks the triggering registry write, wipes executed payload memory, and stores hook pointers in encoded form, which significantly reduces forensic visibility." The first sample of RegPhantom in the wild was detected on June 18, 2025.
  • APT28's NTLMv2 Hash Relay Attacks Detailed — In more APT28 (aka Pawn Storm) news, the threat actor has been attributed to NTLMv2 hash relay attacks through different methods against a wide range of global targets across Europe, North America, South America, Asia, Africa, and the Middle East between April 2022 and November 2023. The threat actor is known to break into mail servers and the corporate virtual private network (VPN) services of organizations around the world through brute-force credential attacks since 2019. "Pawn Storm has also been using EdgeOS routers to send spear-phishing emails, perform callbacks of CVE-2023-23397 exploits in Outlook, and proxy credential theft on credential phishing websites," Trend Micro said. Successful exploitation of CVE-2023-23397 allows an attacker to obtain a victim's Net-NTLMv2 hash and use it for authentication against other systems that support NTLM authentication. The vulnerability, per Microsoft, has been exploited as a zero-day since April 2022. Select campaigns observed in October 2022 involved the use of phishing emails to drop a stealer that scanned the system periodically for files matching certain extensions and exfiltrated them to the free file-sharing service, free.keep.sh.
  • New RATs Galore — Trojanized FileZilla installers are being used to initiate an attack chain that leads to the deployment of STX RAT, a remote access trojan (RAT) with infostealer capabilities. Researchers have also discovered an active threat called DesckVB RAT, a JavaScript-based trojan that deploys a PowerShell payload, which subsequently loads a .NET-based loader directly into memory. "Once executed, the RAT establishes communication with a command-and-control (C2) server, enabling attackers to remotely control the compromised system, exfiltrate sensitive data, and carry out various malicious activities while maintaining a low detection footprint," Point Wild said. Some of the other newly discovered RATs include CrystalX or WebCrystal RAT (a new malware-as-a-service (MaaS) and a rebrand of WebRAT promoted on Telegram and YouTube with remote access, data theft, keylogging, spyware, and clipper capabilities), RetroRAT (a malware distributed via PowerShell and .NET loaders as part of a campaign named Operation DualScript for system monitoring, financial activity tracking, clipboard hijacking to route cryptocurrency transactions, and remote command execution), ResokerRAT (a malware that uses Telegram for C2 and receives commands on the victim machine), and CrySome (a C# RAT that offers full-spectrum remote operations on compromised systems, along with deeply integrated persistence, AV killer, and anti-removal architecture that leverages recovery partition abuse and offline registry modification).
  • Phishing Campaign Delivers Remcos RAT in Fileless Manner — Phishing emails are being used to deliver Remcos RAT in what has been described as a fileless attack. "The attack chain is initiated through a phishing email containing a ZIP attachment disguised as a legitimate business document," Point Wild said. "Upon execution, an obfuscated JavaScript dropper establishes the initial foothold and retrieves a remote PowerShell script, which acts as a reflective loader. This loader employs multiple layers of obfuscation, including Base64 encoding, raw binary manipulation, and rotational XOR encryption, to reconstruct and execute a .NET payload entirely in memory." An important aspect of the campaign is the use of trusted system binaries to proxy malicious execution under the guise of legitimate processes. The final RAT payload is retrieved dynamically from a remote C2 server, allowing the threat actor to switch payloads at any time.
  • Tycoon 2FA Operators Switch Infrastructure and Use ProxyLine — The operators of the Tycoon 2FA phishing kit have been observed increasingly relying on ProxyLine, a commercial datacenter proxy service, to evade IP and geo‑based detection controls following its return after the coordinated global takedown of its infrastructure last month. Following the takedown, threat actors have pivoted to new infrastructure providers like HOST TELECOM LTD, Clouvider, GREEN FLOID LLC, and Shock Hosting LLC. One provider that has witnessed continued use pre- and post-takedown is M247 Europe SRL. In addition, Gmail-targeted Tycoon 2FA campaigns have implemented WebSocket-based communication for real-time credential harvesting and reduced detection footprint compared to traditional HTTP POST requests.
  • TeleGuard's Security Failings Exposed — TeleGuard, an app that's advertised as an "encrypted messenger [that] offers uncompromising data protection" and has been downloaded more than a million times, has been found to suffer from poor encryption that allows an attacker to trivially access a user’s private key and decrypt their messages. "TeleGuard also uploads users' private keys to a company server, meaning TeleGuard itself could decrypt its users' messages, and the key can also at least partially be derived from simply intercepting a user's traffic," security researchers told 404 Media.
  • Google Brings E2EE to Gmail for Android and iOS — Google officially expanded support for end-to-end encryption (E2EE) to Android and iOS devices for Gmail client-side encryption (CSE) users. "Users with a Gmail E2EE license can send an encrypted message to any recipient, regardless of what email address the recipient has," Google said. The feature is currently limited to only Enterprise Plus customers with the Assured Controls or Assured Controls Plus add-on.
  • Bad Actors Abuse GitHub and GitLab — Threat actors are turning to trusted services like GitHub and GitLab for spreading malware and stealing login credentials from unsuspecting users. About 53% of all campaigns abusing the GitHub domains have been found to deliver malware (e.g., XWorm, Venom RAT), whereas 64% of campaigns abusing GitLab domains deliver malware (e.g., DCRat). Select campaigns have also adopted a dual threat attack chain, leveraging GitHub or GitLab to trick users into downloading Muck Stealer, after which a credential phishing page automatically opens. "These Git repository websites are necessary and can't be blocked because of their use by enterprise software and normal business operations," Cofense said. "By uploading malware or credential phishing pages to repositories hosted on these domains, threat actors can generate phishing links that won't be blocked by many email-based security defenses like secure email gateways (SEG). GitHub and GitLab mark the latest trend in abuse of legitimate cloud collaboration platforms."
  • FBI Extracts Signal Messages from iOS Notification History Database — The U.S. Federal Bureau of Investigation (FBI) managed to forensically extract copies of incoming Signal messages from a defendant's iPhone, even after the app was deleted, by taking advantage of the fact that copies of the content were saved in the device's push notification database, 404 Media reported. The development reveals how physical access to a device can enable specialized software to run on it to yield sensitive data derived even from secure messaging apps in unexpected places. The problem is not limited to the Signal app, but one that stems from a more fundamental design decision regarding how Apple stores notifications. Signal already has a setting that blocks message content from displaying in push notifications. Users who are concerned about their privacy are advised to consider turning the option on.
  • Multiple Flaws in IBM WebSphere Liberty — Multiple security flaws have been disclosed in IBM WebSphere Liberty, a modular, cloud-friendly Java application server, that could be exploited to seize control of affected systems. The vulnerabilities offer multiple pathways for attackers to move from network-level exposure or limited access to full server compromise, according to Oligo Security. The most severe is CVE-2026-1561 (CVSS score: 5.4), which enables pre-authenticated remote code execution in SSO-enabled deployments due to unsafe deserialization in SAML Web SSO. "IBM WebSphere Application Server Liberty is vulnerable to server-side request forgery (SSRF)," IBM said. "This may allow [a] remote attacker to send unauthorized requests from the system, potentially leading to network enumeration or facilitating other attacks."
  • Betterleaks → It is the next-generation successor to Gitleaks, built to find exposed credentials with greater speed and accuracy. It eliminates the noise of false positives by moving beyond basic pattern matching to high-fidelity detection. Designed for modern CI/CD pipelines, it helps developers identify and fix leaked API keys and sensitive data before they become security liabilities.
  • Supply Chain Monitor → This tool provides end-to-end visibility into your software supply chain by monitoring CI/CD pipelines for suspicious activity. It tracks build integrity, detects unauthorized changes, and surfaces vulnerabilities in real-time. By integrating directly with your existing workflows, it helps ensure that the code you ship hasn't been tampered with between the commit and production.
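The layered decoding described in the Remcos item above (Base64 plus rotational XOR to rebuild a payload in memory) is a recurring loader pattern. The campaign's exact scheme isn't published, but as a rough triage sketch with a made-up key, a rotating XOR is its own inverse, so the same routine both obfuscates and recovers a payload:

```python
import base64

def rotating_xor(data: bytes, key: bytes) -> bytes:
    """XOR each byte against a key that rotates through its bytes; self-inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical values for illustration only; not the campaign's actual key.
key = b"\x5a\xa5\x3c"
payload = b"example payload"

blob = base64.b64encode(rotating_xor(payload, key))    # what a dropper might embed
recovered = rotating_xor(base64.b64decode(blob), key)  # the analyst's reversal
print(recovered)  # b'example payload'
```

In practice, the key is usually recovered from the dropper script itself, since the loader must carry everything needed to decode its own payload.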

Disclaimer: This is strictly for research and learning. It hasn't been through a formal security audit, so don't just blindly drop it into production. Read the code, break it in a sandbox first, and make sure whatever you’re doing stays on the right side of the law.

Conclusion

That’s the wrap for this Monday. While the headlines usually focus on the high-level nation-state drama, remember that most of these attacks still rely on someone, somewhere, clicking a "trusted" link or ignoring a basic patch. Whether it’s an AI-driven exploit engine or a fake trading firm, the goal is always to find the path of least resistance into your environment.

Stay sharp, keep your edge devices updated, and don’t let the noise of the news cycle distract you from the basics of your own defense.



from The Hacker News https://ift.tt/FWmt4yi
via IFTTT

Your MTTD Looks Great. Your Post-Alert Gap Doesn't

Anthropic restricted its Mythos Preview model last week after it autonomously found and exploited zero-day vulnerabilities in every major operating system and browser. Palo Alto Networks' Wendi Whitmore warned that similar capabilities are weeks or months from proliferation. CrowdStrike's 2026 Global Threat Report puts average eCrime breakout time at 29 minutes. Mandiant's M-Trends 2026 shows adversary hand-off times have collapsed to 22 seconds.

Offense is getting faster. The question is where exactly defenders are slow — because it's not where most SOC dashboards suggest.

Detection tooling has gotten materially better. EDR, cloud security, email security, identity, and SIEM platforms ship with built-in detection logic that pushes MTTD close to zero for known techniques. That's real progress, and it's the result of years of investment in detection engineering across the industry. 

But when adversaries are operating on timelines measured in seconds and minutes, the question isn't whether your detections fire fast enough. It's what happens between the alert firing and someone actually picking it up.

The Post-Alert Gap

After the alert fires, the clock keeps running. An analyst has to see it, pick it up, assemble context from across the stack, investigate, make a determination, and initiate a response. In most SOC environments, that sequence is where the majority of the attacker's operating window actually lives.

The analyst is mid-investigation on something else. The alert enters a queue. Context is spread across four or five tools. The investigation itself requires querying the SIEM, checking identity logs, pulling endpoint telemetry, and correlating timelines. For a thorough investigation — one that results in a defensible determination, not a gut-feel close — that's 20 to 40 minutes of hands-on work, assuming the analyst starts immediately, which they rarely do.

Against a 29-minute breakout window, the investigation hasn't started by the time the attacker has moved laterally. Against a 22-second hand-off, the alert might still be in the queue.

MTTD doesn't capture any of this. It measures how quickly the detection fires, and on that front, the industry has made genuine progress. But that metric stops at the alert. It says nothing about how long the post-alert window actually was, how many alerts received a real investigation versus a quick skim, or how many were bulk-closed without meaningful analysis. MTTD reports on the part of the problem that the industry has already made real headway on. The downstream exposure — the post-alert investigation gap — isn't reflected anywhere.

What Changes When AI Handles Investigation

An AI-driven investigation doesn't improve detection speed. MTTD is a detection engineering metric, and it stays the same. What AI compresses is the post-alert timeline, which is exactly where the real exposure lives.

The queue disappears. Every alert is investigated as it arrives, regardless of severity or time of day. Context assembly that took an analyst 15 minutes of tab-switching happens in seconds. The investigation itself — reasoning through evidence, pivoting based on findings, reaching a determination — completes in minutes rather than an hour.

This is what we built Prophet AI to do. It investigates every alert with the depth and reasoning of a senior analyst, at machine speed: planning the investigation dynamically, querying the relevant data sources, and producing a transparent, evidence-backed conclusion. The post-alert gap doesn't exist in this model because there is no queue and no wait time. For teams working toward this benchmark, we've published practical steps to compress investigation time below two minutes.

The same structural constraint applies to MDR. MDR analysts face the same post-alert bottleneck because they're still bound by human investigation capacity. The shift from outsourced human investigation to AI investigation removes that ceiling entirely, changing what becomes measurable about your SOC's actual performance.

The Metrics That Matter Now

Once the post-alert window collapses, the traditional speed metrics stop being the most informative indicators. A mean time to investigate (MTTI) of two minutes is meaningful in the first quarter you report it. After that, it's table stakes. The question shifts from "how fast are we?" to "how much stronger is our security posture getting over time?"

Four metrics capture this:

  1. Investigation coverage rate. What percentage of total alerts receive a full investigation, meaning a complete line of questioning backed by evidence? In a traditional SOC, this number is typically 5 to 15 percent. The rest get skimmed, bulk-closed, or ignored. In an AI-driven SOC, it should be 100 percent. This is the single most important metric for understanding whether your SOC is actually seeing what's happening in your environment.
  2. Detection surface coverage. MITRE ATT&CK technique coverage mapped against your detection library, with gaps identified and tracked over time. This means continuously mapping the detection surface, identifying techniques with weak or no coverage, and flagging single points of failure or scenarios where a single detection rule is the only thing between the organization and complete blindness to a technique. Detection engineering in an AI-driven SOC requires rethinking how this surface is maintained.
  3. False positive feedback velocity. How quickly do investigation outcomes feed back into detection tuning? In most SOCs, this loop runs on human memory and quarterly review cycles. The target state is continuous: investigation outcomes should flow directly into detection optimization, suppressing noise and improving signal without waiting for a scheduled review.
  4. Hunt-driven detection creation rate. How many permanent detections were created from proactive hunting findings versus from incident response? This measures whether your hunting program is expanding your detection surface or just generating reports. The strongest implementations tie hunting directly to detection gaps where you run hypothesis-driven hunts against the techniques with the weakest coverage, then convert confirmed findings into permanent detection rules.
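The first of these metrics reduces to simple arithmetic over alert disposition records. A toy sketch (the field names and disposition labels here are hypothetical, not any particular platform's schema):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    disposition: str  # e.g. "investigated", "skimmed", "bulk_closed", "ignored"

def investigation_coverage_rate(alerts: list[Alert]) -> float:
    """Percentage of alerts whose disposition reflects a full investigation."""
    if not alerts:
        return 0.0
    full = sum(1 for a in alerts if a.disposition == "investigated")
    return 100.0 * full / len(alerts)

queue = [
    Alert("a1", "investigated"),
    Alert("a2", "bulk_closed"),
    Alert("a3", "skimmed"),
    Alert("a4", "investigated"),
]
print(f"{investigation_coverage_rate(queue):.0f}%")  # 50%
```

A traditional SOC landing in the 5 to 15 percent range cited above would show up immediately in a report like this; the hard part is honest disposition labeling, not the math.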

These measurements only matter once AI is doing real investigation work, but they represent a fundamentally different view of SOC performance that’s oriented around security outcomes rather than operational throughput.

The Mythos disclosure crystallized something the security industry already knew but hadn't fully internalized: AI is accelerating offense at a pace that makes human-speed investigation untenable. The response isn't to panic about AI-generated exploits. It's to close the gap where defenders are actually slow — the post-alert investigation window — and to start measuring whether that gap is shrinking.

The teams that shift from reporting detection speed to reporting investigation coverage and detection improvement will have a clearer picture of their actual risk posture. When attackers have AI working for them, that clarity matters.

Prophet Security's Agentic AI SOC Platform investigates every alert with senior analyst depth, continuously optimizes detections, and runs directed threat hunts against coverage gaps. Visit Prophet Security to see how it works.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


