Thursday, January 22, 2026

Secrets management disaster recovery without the operational burden

Running an enterprise-grade secrets management platform yourself is not easy. These systems need to be highly available, secure, and resilient across regions — this requires deep expertise and significant ongoing investment. For many organizations, building and maintaining disaster-recovery-ready secrets management infrastructure on their own introduces unnecessary complexity, cost, and operational risk without delivering much differentiation.

The challenge for cybersecurity teams is finding a SaaS secrets management solution that has proven itself as a secure, reliable option. One solution worth testing is HCP Vault Dedicated, which delivers Vault Enterprise (the self-managed version of HashiCorp Vault) as a fully managed, single-tenant service. HCP Vault Dedicated includes:

  • High availability
  • Automated cross-region disaster recovery
  • Data-plane isolation

To dive deeper into how teams can offload operations, scaling, and recovery for their Vault security and secrets management platform, we created a guide to help you assess your security operations and security platform disaster recovery architecture.

HCP Vault Dedicated SecOps features

HCP Vault Dedicated removes the need for your operations team to run Vault and to deliver the high levels of uptime required to maintain secure access for your applications and infrastructure. At large enterprises, this can involve hundreds of thousands of secret requests every hour. With the SaaS version of Vault Enterprise, you get:

Built-in high availability

Every production HCP Vault Dedicated cluster runs as a three-node, highly available (HA) deployment, monitored and maintained by HashiCorp SREs. If a node becomes unhealthy, HCP handles replacement automatically, helping ensure continuity without internal teams managing cluster lifecycle.

Cross-region disaster recovery made simple

Instead of engineering multi-region replication and failover logic yourself, HCP Vault Dedicated bakes it into the platform:

  • Configure a backup HashiCorp Virtual Network (HVN) in any supported region
  • If the primary region experiences an outage, HCP automatically fails over the cluster to the backup region
  • Vault’s DNS address remains the same, minimizing client disruption
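
Because the cluster's DNS address survives failover, a client-side health probe can keep pointing at the same address before and after a DR event. Below is a minimal sketch against Vault's unauthenticated /v1/sys/health endpoint (the status-code meanings are standard Vault behavior; the cluster address shown is a placeholder):

```python
from urllib import error, request

# Vault reports cluster state through the HTTP status code of /v1/sys/health.
HEALTH_STATUS = {
    200: "active",
    429: "standby",
    472: "disaster recovery secondary",
    501: "not initialized",
    503: "sealed",
}

def vault_health(addr: str, timeout: float = 5.0) -> str:
    """Return a human-readable state for the Vault cluster at `addr`."""
    try:
        with request.urlopen(f"{addr}/v1/sys/health", timeout=timeout) as resp:
            code = resp.status
    except error.HTTPError as exc:  # non-2xx codes still carry state information
        code = exc.code
    return HEALTH_STATUS.get(code, f"unknown ({code})")

# Example with a placeholder address:
# vault_health("https://my-cluster.hashicorp.cloud:8200")
```

Running the same probe against the same address before and after a failover drill is a simple way to confirm that clients need no reconfiguration.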

This offers enterprise-grade disaster recovery with dramatically reduced operational overhead.

Resilience even during control plane outages

HCP uses a separate control plane and data plane. Even if the HashiCorp Cloud Platform (HCP) portal or API is disrupted, your dedicated Vault cluster continues to operate, and DR failover can still occur if preconfigured.

Managed snapshots and restore

Backups of your Vault cluster state (called snapshots) are automatically retained (depending on your tier), enabling cluster restoration even after accidental deletion or corruption — no custom backup pipeline required.

Advantages

HCP Vault: Do more with fewer resources

1. Bridges skills gaps for secret management operations

Running Vault reliably as an enterprise secrets management platform requires expertise in distributed systems, security hardening, replication, and DR planning. With HCP Vault Dedicated, HashiCorp's SRE and security teams supply that expertise as part of the service, so you don't have to hire or train for it in-house.

2. Reduces infrastructure and operational costs

Avoid the overhead of:

  • Architecting multi-region failover
  • Operating HA clusters
  • Managing upgrades, patches, and security hardening
  • Running 24/7 monitoring
  • Troubleshooting outages

HCP manages these tasks so your teams don’t have to.

3. Frees teams to focus on differentiation

Operators and developers should be focused on building and managing new features or applications that bring value to customers. Managing Vault SRE tasks in-house doesn’t add to your business value, and it doesn’t differentiate you from your competitors in a meaningful way.

4. Simplifies hybrid and multi-cloud adoption

HCP Vault Dedicated integrates across hybrid and multi-cloud environments via HVN peering, transit gateway, or PrivateLink, functioning as a secure managed data-plane deployment connected directly into your environment.

5. Ensures consistent enterprise-grade security

HCP applies hardened defaults, operational best practices, and automated monitoring, delivering security standards that traditionally require large internal teams.


Disaster recovery architecture checklist for HCP Vault Dedicated

This checklist is designed to help CIOs, CISOs, and other cybersecurity decision-makers assess their disaster recovery architecture for HCP Vault secrets management.

Deployment and tiering

  • Select Essentials or Standard tier to enable cross-region DR
  • Plan for multi-region footprint early in the deployment lifecycle

Network architecture

  • Create a backup HVN in a different region with a non-overlapping CIDR
  • Ensure network connectivity (peering, transit gateway, or VPN) to both primary and backup HVNs
  • Confirm firewall rules allow traffic to load-balancer IPs in both regions
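
The non-overlapping-CIDR requirement can be validated before creating the backup HVN with a quick check like this sketch (the example ranges are hypothetical):

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Hypothetical primary HVN vs. candidate backup HVN ranges
print(cidrs_overlap("172.25.16.0/20", "172.25.32.0/20"))  # False: safe to use
print(cidrs_overlap("172.25.16.0/20", "172.25.20.0/24"))  # True: pick another range
```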

Connectivity and client behavior

  • Verify clients can resolve and reach Vault post-failover (DNS stays consistent)
  • If using the HCP proxy address, be aware it will not route traffic when the cluster is active in the backup region
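
The resolution check above can be scripted into a failover runbook; here is a minimal sketch (substitute your own cluster's hostname for the placeholder):

```python
import socket

def resolves(hostname: str, port: int = 8200) -> bool:
    """True if DNS resolution for the Vault address succeeds."""
    try:
        socket.getaddrinfo(hostname, port)
        return True
    except socket.gaierror:
        return False

# Example with a placeholder hostname:
# resolves("my-cluster.hashicorp.cloud")
```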

Operational considerations

  • Expect less than 10 minutes of unavailability when enabling backup networks on an existing cluster
  • Subscribe to failover and recovery notifications
  • Validate that logs and metrics properly reflect DR-prefixed cluster IDs after failover
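
The last check can be automated against exported logs. This sketch assumes a "dr-" style prefix and simple dict records purely for illustration; the actual prefix format and schema come from your HCP logs:

```python
def failover_records(records, prefix="dr-"):
    """Return log/metric records whose cluster ID carries the DR prefix."""
    return [r for r in records if r.get("cluster_id", "").startswith(prefix)]

# Hypothetical records
records = [
    {"cluster_id": "vault-cluster-1", "msg": "token lookup"},
    {"cluster_id": "dr-vault-cluster-1", "msg": "token lookup"},
]
print(failover_records(records))  # only the DR-prefixed record survives the filter
```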

Backup and restore

  • Understand snapshot retention policies (e.g. 30 days for certain tiers)
  • Create a restore runbook including restoring to alternate regions if necessary

Testing and governance

  • Perform periodic DR simulations to validate application continuity
  • Document responsibilities during failover and failback
  • Update compliance frameworks with DR logging and audit behaviors

The outcomes of SaaS for secrets management

HCP Vault Dedicated combines Vault Enterprise’s proven DR capabilities with the convenience and efficiency of a managed service for your organization's secrets management. With built-in HA, automated cross-region failover, snapshot management, and data-plane isolation, organizations gain world-class resilience — without needing to build and maintain world-class infrastructure.

The result:

  • Lower operational overhead
  • Stronger security posture
  • Faster innovation
  • A more efficient team that can do more with fewer resources

Let your teams test drive HCP Vault Dedicated for free, and get in touch if you’d like to talk about your secrets management practices.

FAQs

What happens to Vault clients during a regional outage?

HCP Vault Dedicated is designed so applications using Vault as their secrets manager experience minimal disruption during a regional outage. When cross-region disaster recovery is enabled, HCP automatically fails over the Vault cluster to a preconfigured backup region while preserving the same DNS address. This allows applications to continue retrieving secrets and encryption keys without client-side reconfiguration.

How long does disaster recovery failover take?

Failover to a backup region typically completes within minutes once an outage is detected. Because disaster recovery is built directly into the managed HCP Vault Dedicated service, teams do not need to manually promote clusters or coordinate complex recovery workflows during an incident.

Can Vault continue operating if the HCP control plane is unavailable?

Yes. HCP Vault Dedicated uses a separate control plane and data plane. Even if the HCP portal or API is unavailable, the Vault data plane continues operating, and secrets management workflows — including authentication and secret retrieval — remain functional. Preconfigured disaster recovery can still occur without control plane access.



from HashiCorp Blog https://ift.tt/MkYCtJX
via IFTTT

Using MCP Servers: From Quick Tools to Multi-Agent Systems

The Model Context Protocol (MCP) is a spec for exposing tools, models, or services to language models through a common interface, and MCP servers implement it. Think of them as smart adapters: they sit between a tool and the LLM, speaking a predictable protocol that lets the model interact with things like APIs, databases, and agents without needing to know implementation details.

But like most good ideas, the devil’s in the details.

The Promise—and the Problems of Running MCP Servers

Running an MCP server sounds simple: spin up a Python or Node server that exposes your tool. Done, right? Not quite.

You run into problems fast:

  • Runtime friction: If an MCP is written in Python, your environment needs Python (plus dependencies, plus maybe a virtualenv strategy, plus maybe GPU drivers). Same goes for Node. This multiplies fast when you’re managing many MCPs or deploying them across teams.
  • Secrets management: MCPs often need credentials (API keys, tokens, etc.). You need a secure way to store and inject those secrets into your MCP runtime. That gets tricky when different teams, tools, or clouds are involved.
  • N×N integration pain: Let’s say you’ve got three clients that want to consume MCPs, and five MCPs to serve up. Now you’re looking at 15 individual integrations. No thanks.

To make MCPs practical, you need to solve these three core problems: runtime complexity, secret injection, and client-to-server wiring. 
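
The N×N blow-up in that last problem is easy to quantify: direct wiring grows multiplicatively, while routing everything through a single intermediary grows additively. A toy sketch:

```python
def integrations_direct(clients: int, servers: int) -> int:
    # every client wires to every server by hand
    return clients * servers

def integrations_via_gateway(clients: int, servers: int) -> int:
    # each client and each server connects to the gateway exactly once
    return clients + servers

print(integrations_direct(3, 5))       # 15 hand-maintained integrations
print(integrations_via_gateway(3, 5))  # 8 connections through one endpoint
```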

If you’re wondering where I’m going with all this, take a look at those problems. We already have a technology that has been used by developers for over a decade that helps solve them: Docker containers.

In the rest of this blog I’ll walk through three different approaches, going from least complex to most complex, for integrating MCP servers into your developer experience. 

Option 1 — Docker MCP Toolkit & Catalog

For the developer who already uses containers and wants a low-friction way to start with MCP.

If you’re already comfortable with Docker but just getting your feet wet with MCP, this is the sweet spot. In the raw MCP world, you’d clone Python/Node servers, manage runtimes, inject secrets yourself, and hand-wire connections to every client. That’s exactly the pain Docker’s MCP ecosystem set out to solve.

Docker’s MCP Catalog is a curated, containerized registry of MCP servers. Each entry is a prebuilt container with everything you need to run the MCP server. 

The MCP Toolkit (available via Docker Desktop) is your control panel: search the catalog, launch servers with secure defaults, and connect them to clients.

How it helps:

  • No language runtimes to install
  • Built-in secrets management
  • One-click enablement via Docker Desktop
  • Easily wire the MCPs to your existing agents (Claude Desktop, Copilot in VS Code, etc)
  • Centralized access via the MCP Gateway

Figure 1: Docker MCP Catalog: Browse hundreds of MCP servers with filters for local or remote and clear distinctions between official and community servers

A Note on the MCP Gateway

One important piece working behind the scenes in both the MCP Toolkit and cagent (a framework for easily building multi-agent applications that we cover below) is the MCP Gateway, an open-source project from Docker that acts as a centralized frontend for all your MCP servers. Whether you’re using a GUI to start containers or defining agents in YAML, the Gateway handles all the routing, authentication, and translation between clients and tools. It also exposes a single endpoint that custom apps or agent frameworks can call directly, making it a clean bridge between GUI-based workflows and programmatic agent development.

Moving on: Using MCP servers alongside existing AI agents is often the first step for many developers. You wire up a couple of tools, maybe connect to a calendar or a search API, and use them in something like Claude, ChatGPT, or a small custom agent. For step-by-step tutorials on how to automate dev workflows with Docker’s MCP Catalog and Toolkit with popular clients, check out these guides on ChatGPT, Claude Desktop, Codex, Gemini CLI, and Claude Code.

Once that pattern clicks, the next logical step is to use those same MCP servers as tools inside a multi-agent system.

Option 2 — cagent: Declarative Multi-Agent Apps

For the developer who wants to build custom multi-agent applications but isn’t steeped in traditional agentic frameworks.

If you’re past simple MCP servers and want agents that can delegate, coordinate, and reason together, cagent is your next step. It’s Docker’s open-source, YAML-first framework for defining and running multi-agent systems—without needing to dive into complex agent SDKs or LLM loop logic.

Cagent lets you describe:

  • The agents themselves (model, role, instructions)
  • Who delegates to whom
  • What tools each agent can access (via MCP or local capabilities)

Below is an example of a pirate-flavored chat bot:

agents:
  root:
    description: An agent that talks like a pirate
    instruction: Always answer by talking like a pirate.
    welcome_message: |
      Ahoy! I be yer pirate guide, ready to set sail on the seas o' knowledge! What be yer quest? 
    model: auto


Run it with:

cagent run agents.yaml

You don’t write orchestration code. You describe what you want, and cagent runs the system.

Why it works:

  • Tools are scoped per agent
  • Delegation is explicit
  • Uses MCP Gateway behind the scenes
  • Ideal for building agent systems without writing Python

If you’d like to give cagent a try, we have a ton of examples in the project’s GitHub repository. Check out this guide on building multi-agent systems in 5 minutes. 

Option 3 — Traditional Agent Frameworks (LangGraph, CrewAI, ADK)

For developers building complex, custom, fully programmatic agent systems.

Traditional agent frameworks like LangGraph, CrewAI, or Google’s Agent Development Kit (ADK) let you define, control, and orchestrate agent behavior directly in code. You get full control over logic, state, memory, tools, and workflows.

They shine when you need:

  • Complex branching logic
  • Error recovery, retries, and persistence
  • Custom memory or storage layers
  • Tight integration with existing backend code

Example: LangGraph + MCP via Gateway


import requests
from langgraph.graph import StateGraph, MessagesState, END
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI

# Discover MCP endpoint from Gateway
resp = requests.get("http://localhost:6600/v1/servers")
servers = resp.json()["servers"]
duck_url = next(s["url"] for s in servers if s["name"] == "duckduckgo")

# Define a callable tool
def mcp_search(query: str) -> str:
    return requests.post(duck_url, json={"input": query}).json()["output"]

search_tool = Tool(name="web_search", func=mcp_search, description="Search via MCP")

# Wire it into a LangGraph loop
llm = ChatOpenAI(model="gpt-4").bind_tools([search_tool])

def agent(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)  # StateGraph requires a state schema
graph.add_node("agent", agent)
graph.set_entry_point("agent")
graph.add_edge("agent", END)       # terminate instead of looping unboundedly

app = graph.compile()
app.invoke({"messages": [("user", "What’s the latest in EU AI regulation?")]})

In this setup, you decide which tools are available. The agent chooses when to use them based on context, but you’ve defined the menu.
And yes, this is still true in the Docker MCP Toolkit: you decide what to enable. The LLM can’t call what you haven’t made visible.


Choosing the Right Approach

Approach                     | Best For                                        | You Manage          | You Get
Docker MCP Toolkit + Catalog | Devs new to MCP, already using containers       | Tool selection      | One-click setup, built-in secrets, Gateway integration
cagent                       | YAML-based multi-agent apps without custom code | Roles & tool access | Declarative orchestration, multi-agent workflows
LangGraph / CrewAI / ADK     | Complex, production-grade agent systems         | Full orchestration  | Max control over logic, memory, tools, and flow

Wrapping Up

Whether you’re just connecting a tool to Claude, designing a custom multi-agent system, or building production workflows by hand, Docker’s MCP tooling helps you get started easily and securely.

Check out the Docker MCP Toolkit, cagent, and MCP Gateway for example code, docs, and more ways to get started.



from Docker https://ift.tt/1YhcqyL
via IFTTT

I scan, you scan, we all scan for... knowledge?

Welcome to this week’s edition of the Threat Source newsletter. 

“Upon us all a little rain must fall” — Led Zeppelin, via Henry Wadsworth Longfellow  

I recently bumped into a colleague with whom I spent several years working in an MSSP environment. We had very different roles within the organization, so our viewpoints, both then and now, were very different. He asked me the question I hear almost every time I speak somewhere: “What do you think are the most essential things to protect your own network?” This always leads to my top answer — the one that no one ever wants to hear. 

“Know your environment.” 

It led me down a path of thinking about how cyclical things are in the world of cybersecurity and how we, the global “we”, have slipped back to a place where reconnaissance is largely ignored in our day-to-day workflow. 

Look, I know that we all have alert fatigue. We’re managing too many devices, dealing with too many data points, generating too many logs, and facing too few resources to handle it all. So my “Let’s not ignore reconnaissance” mantra might not be regarded well at first.  

Here’s the thing: It’s always tempting to trim your alerts and reduce your ticketing workload. After all, attack signals seem more “impactful” by nature, right? But I've always believed it’s a mistake to dismiss reconnaissance events to clear the way for analysts to look for the “real” problems. I always go back to my first rule: “Know your environment.” The bad actors are only getting better at the recon portion, both on the wire and in social engineering. 

AI tooling has made a lot of the most challenging aspects of reconnaissance automagical. If you search the dark web for postings from initial access brokers (IABs), you’ll find that they excel in reconnaissance and understanding your own environment. They’re quick to find every Windows 7 machine still on your network, not to mention your unpatched printers, smart fridges, and vulnerable thermostats. 

I get that we can’t get spun up about every half-open SYN, but spotting when these events form a pattern is exactly what we’re here for, and it’s as important as tracking down directory traversal attempts. 
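
As a toy illustration of "events forming a pattern": a single connection attempt is noise, but one source touching many distinct ports in a short window is worth an analyst's attention. This sketch uses made-up event tuples, not any particular SIEM's schema:

```python
from collections import defaultdict

# Hypothetical (source_ip, destination_port) connection-attempt events
events = [("203.0.113.9", p) for p in (22, 23, 80, 443, 3389, 8080, 8443, 9200)]
events += [("198.51.100.4", 443), ("198.51.100.4", 443)]

def likely_scanners(events, min_distinct_ports=5):
    """Flag sources that probed at least `min_distinct_ports` distinct ports."""
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return {src for src, ports in ports_by_src.items() if len(ports) >= min_distinct_ports}

print(likely_scanners(events))  # the repeat visitor to one port is not flagged
```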

“Behind the clouds is the sun still shining;  
Thy fate is the common fate of all...” — Henry Wadsworth Longfellow

The one big thing 

Cisco Talos researchers recently discovered and disclosed vulnerabilities in Foxit PDF Editor, Epic Games Store, and MedDream PACS, all of which have since been patched by the vendors. These vulnerabilities include privilege escalation, use-after-free, and cross-site scripting issues that could allow attackers to execute malicious code or gain unauthorized access.

Why do I care? 

 These vulnerabilities could have enabled attackers to escalate privileges, execute arbitrary code, or compromise sensitive systems, potentially leading to data breaches or system outages. Even though patches are available, unpatched systems remain at risk.

So now what? 

Organizations should make sure all affected software is updated with the latest patches and review security monitoring for signs of exploitation attempts. Additionally, defenders should implement layered defenses and educate users on the risks of opening suspicious files or clicking unknown links to reduce the likelihood of successful attacks. 

Top security headlines of the week 

How a hacking campaign targeted high-profile Gmail and WhatsApp users across the Middle East 
TechCrunch analyzed the source code of the phishing page and believes the campaign aimed to steal Gmail and other online credentials, compromise WhatsApp accounts, and conduct surveillance by stealing location data, photos, and audio recordings. (TechCrunch)

LastPass warns of fake maintenance messages targeting users' master passwords 
The campaign, which began on or around Jan. 19, 2026, involves phishing emails that claim upcoming maintenance and urge users to create a local backup of their password vaults within the next 24 hours. (The Hacker News)

Everest Ransomware claims McDonalds India breach involving customer data  
The claim, published on the group’s official dark web leak site on January 20, 2026, states that the group exfiltrated a massive 861GB of customer data and internal company documents. (HackRead)

North Korea-linked hackers pose as human rights activists, report says  
North Korea-linked hackers are using emails that impersonate human rights organizations and financial institutions to lure targets into opening malicious files. (UPI)

Hackers use LinkedIn messages to spread RAT malware through DLL sideloading 
The attack involves approaching high-value individuals through messages sent on LinkedIn, establishing trust, and deceiving them into downloading a malicious WinRAR self-extracting archive (SFX). (The Hacker News)

Can’t get enough Talos? 

Engaging Cisco Talos Incident Response is just the beginning 
Sophisticated adversaries leave multiple persistence mechanisms. Miss one backdoor, one scheduled task, or one modified firewall rule, and they return weeks later, often selling access to other criminal groups. 

Talos Takes: Cyber certifications and you 
In the first episode of the year, Amy Ciminnisi, Talos’ Content Manager and new podcast host, steps up to the mic with Joe Marshall to explore certifications, one of cybersecurity’s overwhelming (and sometimes most controversial) topics. 

Microsoft Patch Tuesday for January 2026 
Microsoft has released its monthly security update for January 2026, which includes 112 vulnerabilities affecting a range of products, including 8 that Microsoft marked as “critical.”   

Upcoming events where you can find Talos 

  • JSAC (Jan. 21 – 23) Tokyo, Japan 
  • DistrictCon (Jan. 24 – 25) Washington, DC 
  • S4x26 (Feb. 23 – 26) Miami, FL  

Most prevalent malware files from Talos telemetry over the past week 

SHA256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507  
MD5: 2915b3f8b703eb744fc54c81f4a9c67f  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
Example Filename: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507.exe  
Detection Name: Win.Worm.Coinminer::1201 

SHA256: 90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59 
MD5: c2efb2dcacba6d3ccc175b6ce1b7ed0a  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=90b1456cdbe6bc2779ea0b4736ed9a998a71ae37390331b6ba87e389a49d3d59  
Example Filename: APQCE0B.dll  
Detection Name: Auto.90B145.282358.in02 

SHA256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91 
MD5: 7bdbd180c081fa63ca94f9c22c457376  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91  
Example Filename: e74d9994a37b2b4c693a76a580c3e8fe_3_Exe.exe  
Detection Name: Win.Dropper.Miner::95.sbx.tg 

SHA256: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974  
MD5: aac3165ece2959f39ff98334618d10d9  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974  
Example Filename: 96fa6a7714670823c83099ea01d24d6d3ae8fef027f01a4ddac14f123b1c9974.exe  
Detection Name: W32.Injector:Gen.21ie.1201 

SHA256: 47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca  
MD5: 71fea034b422e4a17ebb06022532fdde  
Talos Rep: https://talosintelligence.com/talos_file_reputation?s=47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca  
Example Filename: VID001.exe  
Detection Name: Coinminer:MBT.26mw.in14.Talos



from Cisco Talos Blog https://ift.tt/7CjRy1u
via IFTTT

Microsoft Security success stories: Why integrated security is the foundation of AI transformation

AI is transforming how organizations operate and how they approach security. In this new era of agentic AI, every interaction, digital or human, must be built on trust. As businesses modernize, they’re not just adopting AI tools, they’re rearchitecting their digital foundations. And that means security can’t be an afterthought. It must be woven in from the beginning into every layer of the stack—ubiquitous, ambient, and autonomous—just like the AI it protects. 

In this blog, we spotlight three global organizations that are leading the way. Each is taking a proactive, platform-first approach to security—moving beyond fragmented defenses and embedding protection across identity, data, devices, and cloud infrastructure. Their stories show that when security is deeply integrated from the start, it becomes a strategic enabler of resilience, agility, and innovation. And by choosing Microsoft Security, these customers are securing the foundation of their AI transformation from end to end.

Why security transformation matters to decision makers

Security is a board-level priority. The following customer stories show how strategic investments in security platforms can drive cost savings, operational efficiency, and business agility, not just risk reduction. Read on to learn how Ford, Icertis, and TriNet transformed their operations with support from Microsoft.

Ford builds trust across global operations

In the automotive industry, a single cyberattack can ripple across numerous aspects of the business. Ford recognized that rising ransomware and targeted cyberattacks demanded a different approach. The company made a deliberate shift away from fragmented, custom-built security tools toward a unified Microsoft security platform, adopting a Zero Trust approach and prioritizing security embedded into every layer of its hybrid environment—from endpoints to data centers and cloud infrastructure.

Unified protection and measurable impact

Partnering with Microsoft, Ford deployed Microsoft Defender, Microsoft Sentinel, Microsoft Purview, and Microsoft Entra to strengthen defenses, centralize threat detection, and enforce data governance. AI-powered telemetry and automation improved visibility and accelerated incident response, while compliance certifications supported global scaling. By building a security-first culture and leveraging Microsoft’s integrated stack, Ford reduced vulnerabilities, simplified operations, and positioned itself for secure growth across markets.

Read the full customer story to discover more about Ford’s security modernization collaboration with Microsoft.

Icertis cuts security operations center (SOC) incidents by 50%

As a global leader in contract intelligence, Icertis introduced generative AI to transform enterprise contracting, launching applications built on Microsoft Azure OpenAI and its Vera platform. These innovations brought new security challenges, including prompt injection risks and compliance demands across more than 300 Azure subscriptions. To address these, Icertis adopted Microsoft Defender for Cloud for AI posture management, threat detection, and regulatory alignment, ensuring sensitive contract data remains protected.

Driving security efficiency and resilience

By integrating Microsoft Security solutions—Defender for Cloud, Microsoft Sentinel, Purview, Entra, and Microsoft Security Copilot—Icertis strengthened governance and accelerated incident response. AI-powered automation reduced alert triage time by up to 80%, cut mean time to resolution to 25 minutes, and lowered incident volume by 50%. With Zero Trust principles and embedded security practices, Icertis scales innovation securely while maintaining compliance, setting a new standard for trust in AI-powered contracting.

Read the full customer story to learn how Icertis secures sensitive contract data, accelerates AI innovation, and achieves measurable risk reduction with Microsoft’s unified security platform.

TriNet moves to Microsoft 365 E5, achieves annual savings in security spend

Facing growing complexity from multiple point solutions, TriNet sought to reduce operational overhead and strengthen its security posture. The company’s leadership recognized that consolidating tools could improve visibility, reduce risk, and align security with its broader digital strategy. After evaluating providers, TriNet chose Microsoft 365 E5 for its integrated security platform, delivering advanced threat protection, identity management, and compliance capabilities.

Streamlined operations and improved efficiencies

By adopting Microsoft Defender XDR, Purview, Entra, Microsoft Sentinel, and Microsoft 365 Copilot, TriNet unified security across endpoints, cloud apps, and data governance. Automation and centralized monitoring reduced alert fatigue, accelerated incident response, and improved Secure Score. The platform blocked a spear phishing attempt targeting executives, demonstrating the value of Zero Trust and advanced safeguards. With cost savings from tool consolidation and improved efficiency, TriNet is building a secure foundation for future innovation.

Read the full customer story to see how TriNet consolidated its security stack with Microsoft 365 E5, reduced complexity, and strengthened defenses against advanced threats.

How to plan, adopt, and operationalize a Microsoft Security strategy 

Ford, Icertis, and TriNet each began their transformation by assessing legacy systems and identifying gaps that created complexity and risk. Ford faced fragmented tools across a global manufacturing footprint, Icertis needed to secure sensitive contract data while adopting generative AI, and TriNet aimed to reduce operational complexity caused by managing multiple point solutions, seeking a more streamlined and integrated approach. These assessments revealed the need for a unified, risk-based strategy to simplify operations and strengthen protection.

Building on Zero Trust and deploying integrated solutions

All three organizations aligned on Zero Trust principles as the foundation for modernization. They consolidated security into Microsoft’s integrated platform, deploying Defender for endpoint and cloud protection, Microsoft Sentinel for centralized monitoring, Purview for data governance, Entra for identity management, and Security Copilot for AI-powered insights. This phased rollout allowed each company to embed security into daily operations while reducing manual processes and improving visibility.

Measuring impact and sharing best practices

The results were tangible: Ford accelerated threat detection and governance across its hybrid environment, Icertis cut incident volume by 50% and reduced triage time by 80%, and TriNet improved Secure Score while achieving cost savings through tool consolidation. Automation and AI-powered workflows delivered faster response times and reduced complexity. Each organization now shares learnings internally and with industry peers—whether through executive briefings, training programs, or participation in cybersecurity forums—helping set new standards for resilience and innovation.

Working towards a more secure future

The future of enterprise security is being redefined by AI, by innovation, and by the bold choices organizations make today. Modernization, automation, and collaboration are no longer optional—they’re foundational. As AI reshapes how we work, build, and protect, security must evolve in lockstep: not as an add-on, but as a fabric woven through every layer of the enterprise. 

These customer stories show us that building a security-first approach isn’t just possible; it’s imperative. From cloud-native disruptors to global institutions modernizing complex environments, leading organizations are showing what’s possible when security and AI move together. By unifying their tools, automating what once was manual, and using AI to stay ahead of emerging cyberthreats, they’re not just protecting today, they’re securing the future and shaping what comes next. 

Share your thoughts

Are you a regular user of Microsoft Security products? Share your insights and experiences on Gartner Peer Insights™.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft Security success stories: Why integrated security is the foundation of AI transformation appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/Jsd0q2o
via IFTTT

Your Dependencies Don’t Care About Your FIPS Configuration

FIPS compliance is a great idea that makes the entire software supply chain safer. But teams adopting FIPS-enabled container images are running into strange errors that can be challenging to debug. What they are learning is that correctness at the base image layer does not guarantee compatibility across the ecosystem. Change is complicated, and changing complicated systems with intricate dependency webs often yields surprises. We are in the early adoption phase of FIPS, and that actually provides interesting opportunities to optimize how things work. Teams that recognize this will rethink how they build for FIPS and get ahead of the game.

FIPS in practice

FIPS is a U.S. government standard for cryptography. In simple terms, if you say a system is “FIPS compliant,” that means the cryptographic operations like TLS, hashing, signatures, and random number generation are performed using a specific, validated crypto module in an approved mode. That sounds straightforward until you remember that modern software is built not as one compiled program, but as a web of dependencies that carry their own baggage and quirks.
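To make "approved mode" concrete, here is a small Python sketch (our illustration, not from the original article; exact behavior depends on how your interpreter's OpenSSL was built). On a FIPS-enforcing OpenSSL, constructing a non-approved digest such as MD5 for security use can fail at runtime, while approved algorithms keep working:

```python
import hashlib

# Approved algorithms behave the same regardless of mode.
digest = hashlib.sha256(b"payload").hexdigest()

# On a FIPS-enforcing OpenSSL, hashlib.md5(b"payload") can raise ValueError.
# Since Python 3.9 you can mark non-security uses (cache keys, checksums)
# explicitly so they keep working in FIPS mode:
checksum = hashlib.md5(b"payload", usedforsecurity=False).hexdigest()

print(len(digest), len(checksum))
```

The same pattern shows up in other languages: the crypto API is unchanged, but the approved-mode policy rejects certain operations at runtime, which is why problems surface only on specific code paths.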

The FIPS crypto error that caught us off guard

We got a ticket recently for a Rails application in a FIPS-enabled container image. On the surface, everything looked right. Ruby was built to use OpenSSL 3.x with the FIPS provider. The OpenSSL configuration was correct. FIPS mode was active.

However, the application started throwing cryptography errors from the Postgres Ruby gem (pg). Even more confusing, a minimal reproducer (a basic Ruby app against a stock Postgres) did not trigger the error, and a connection was successfully established. The issue only manifested when using ActiveRecord.

The difference came down to code paths. A basic Ruby script using the pg gem directly exercises a simpler set of operations. ActiveRecord triggers additional functionality that exercises different parts of libpq. The non-FIPS crypto was there all along, but only certain operations exposed it.

Your container image can be carefully configured for FIPS, and your application can still end up using non-FIPS crypto because a dependency brought its own crypto along for the ride. In this case, the culprit was a precompiled native artifact associated with the database stack. When you install pg, Bundler may choose to download a prebuilt binary dependency such as libpq.

Unfortunately those prebuilt binaries are usually built with assumptions that cause problems. They may be linked against a different OpenSSL than the one in your image. They may contain statically embedded crypto code. They may load crypto at runtime in a way that is not obvious.

This is the core challenge with FIPS adoption. Your base image can do everything right, but prebuilt dependencies can silently bypass your carefully configured crypto boundary.

Why we cannot just fix it in the base image yet

The practical fix for the Ruby case was adding this line to your Gemfile:

gem "pg", "~> 1.1", force_ruby_platform: true

You also need to install libpq-dev to allow compiling from source. This forces Bundler to build the gem from source on your system instead of using a prebuilt binary. When you compile from source inside your controlled build environment, the resulting native extension is linked against the OpenSSL that is actually in your FIPS image.

Bundler also supports an environment/config knob for the same idea called BUNDLE_FORCE_RUBY_PLATFORM. The exact mechanism matters less than the underlying strategy of avoiding prebuilt native artifacts when you are trying to enforce a crypto boundary.

You might reasonably ask why we do not just add BUNDLE_FORCE_RUBY_PLATFORM to the Ruby FIPS image by default. We discussed this internally, and the answer illustrates why FIPS complexity cascades.

Setting that flag globally is not enough on its own. You also need a C compiler and the relevant libraries and headers in the build stage. And not every gem needs this treatment. If you flip the switch globally, you end up compiling every native gem from source, which drags in additional headers and system libraries that you now need to provide. The “simple fix” creates a new dependency management problem.

Teams adopt FIPS images to satisfy compliance. Then they have to add back build complexity to make the crypto boundary real and verify that every dependency respects it. This is not a flaw in FIPS or in the tooling. It is an inherent consequence of retrofitting a strict cryptographic boundary onto an ecosystem built around convenience and precompiled artifacts.

The patterns we are documenting today will become the defaults tomorrow. The tooling will catch up. Prebuilt packages will get better. Build systems will learn to handle the edge cases. But right now, teams need to understand where the pitfalls are.

What to do if you are starting a FIPS journey

You do not need to become a crypto expert to avoid the obvious traps. You only need a checklist mindset. The teams working through these problems now are building real expertise that will be valuable as FIPS requirements expand across industries.

  • Treat prebuilt native dependencies as suspect. If a dependency includes compiled code, assume it might carry its own crypto linkage until you verify otherwise. You can use ldd on Linux to inspect dynamic linking and confirm that binaries link against your system OpenSSL rather than a bundled alternative.
  • Use a multi-stage build and compile where it matters. Keep your runtime image slim, but allow a builder stage with the compiler and headers needed to compile the few native pieces that must align with your FIPS OpenSSL.
  • Test the real execution path, not just “it starts.” For Rails, that means running a query, not only booting the app or opening a connection. The failures we saw appeared when using the ORM, not on first connection.
  • Budget for supply-chain debugging. The hard part is not turning on FIPS mode. The hard part is making sure all the moving parts actually respect it. Expect to spend time tracing crypto usage through your dependency graph.
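The first bullet's `ldd` check can be scripted. Below is a minimal sketch (Python, Linux-only; the function name, the `vendor/bundle` path, and the "flag anything linking libssl/libcrypto" heuristic are our illustrative assumptions, not a complete audit) that walks a dependency directory and reports where each native extension resolves its crypto libraries:

```python
import subprocess
from pathlib import Path

def crypto_linkage(root):
    """Map each native extension under `root` to the libssl/libcrypto
    lines that ldd reports, so you can eyeball where they resolve."""
    report = {}
    for ext in Path(root).rglob("*.so"):
        try:
            out = subprocess.run(
                ["ldd", str(ext)], capture_output=True, text=True, timeout=10
            ).stdout
        except (OSError, subprocess.TimeoutExpired):
            continue  # ldd unavailable or binary unreadable; skip, don't guess
        libs = [line.strip() for line in out.splitlines()
                if "libssl" in line or "libcrypto" in line]
        if libs:
            report[str(ext)] = libs
    return report

# Example: crypto_linkage("vendor/bundle"). Any extension whose libssl
# resolves to a path inside the gem itself, rather than to your image's
# FIPS OpenSSL, deserves scrutiny.
```

Running this in CI against your built image turns "assume prebuilt binaries are suspect" into a repeatable check rather than a one-off investigation.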

Why this matters beyond government contracts

FIPS compliance has traditionally been seen as a checkbox for federal sales. That is changing. As supply chain security becomes a board-level concern across industries, validated cryptography is moving from “nice to have” to “expected.” The skills teams build solving FIPS problems today translate directly to broader supply chain security challenges.

Think about what you learn when you debug a FIPS failure. You learn to trace crypto usage through your dependency graph, to question prebuilt artifacts, to verify that your security boundaries are actually enforced at runtime. Those skills matter whether you are chasing a FedRAMP certification or just trying to answer your CISO’s questions about software provenance.

The opportunity in the complexity

FIPS is not “just a switch” you flip in a base image. View FIPS instead as a new layer of complexity that you might have to debug across your dependency graph. That can sound like bad news, but switch the framing and it becomes an opportunity to get ahead of where the industry is going.

The ecosystem will adapt and the tooling will improve. The teams investing in understanding these problems now will be the ones who can move fastest when FIPS or something like it becomes table stakes.

If you are planning a FIPS rollout, start by controlling the prebuilt native artifacts that quietly bypass the crypto module you thought you were using. Recognize that every problem you solve is building institutional knowledge that compounds over time. This is not just compliance work. It is an investment in your team’s security engineering capability.



from Docker https://ift.tt/IRVFYgS
via IFTTT

Automated FortiGate Attacks Exploit FortiCloud SSO to Alter Firewall Configurations

Cybersecurity company Arctic Wolf has warned of a "new cluster of automated malicious activity" that involves unauthorized firewall configuration changes on Fortinet FortiGate devices.

The activity, it said, commenced on January 15, 2026, adding that it shares similarities with a December 2025 campaign in which malicious SSO logins on FortiGate appliances were recorded against the admin account from different hosting providers by exploiting CVE-2025-59718 and CVE-2025-59719.

Both vulnerabilities allow for unauthenticated bypass of SSO login authentication via crafted SAML messages when the FortiCloud single sign-on (SSO) feature is enabled on affected devices. The shortcomings impact FortiOS, FortiWeb, FortiProxy, and FortiSwitchManager.

"This activity involved the creation of generic accounts intended for persistence, configuration changes granting VPN access to those accounts, as well as exfiltration of firewall configurations," Arctic Wolf said of the developing threat cluster.

Specifically, this entails carrying out malicious SSO logins using the account "cloud-init@mail.io" from four different IP addresses, after which the firewall configuration files are exported to those same IP addresses via the GUI. The list of source IP addresses is below -

  • 104.28.244[.]115
  • 104.28.212[.]114
  • 217.119.139[.]50
  • 37.1.209[.]19

In addition, the threat actors have been observed creating secondary accounts, such as "secadmin," "itadmin," "support," "backup," "remoteadmin," and "audit," for persistence.

"All of the above events took place within seconds of each other, indicating the possibility of automated activity," Arctic Wolf added.

The disclosure coincides with a post on Reddit in which multiple users reported seeing malicious SSO logins on fully patched FortiOS devices, with one user stating the "Fortinet developer team has confirmed the vulnerability persists or is not fixed in version 7.4.10."

The Hacker News has reached out to Fortinet for comment, and we will update the story if we hear back. In the interim, it's advised to disable the "admin-forticloud-sso-login" setting.
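For reference, that mitigation maps to a one-line change in the FortiOS CLI. A sketch (placing the setting under `config system global` is our assumption; confirm against the CLI reference for your FortiOS release):

```
config system global
    set admin-forticloud-sso-login disable
end
```

Disabling the setting removes the FortiCloud SSO login path from the management interface until fixed firmware is available.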



from The Hacker News https://ift.tt/cujPkRv
via IFTTT

Resurgence of a multi‑stage AiTM phishing and BEC campaign abusing SharePoint 

Microsoft Defender Researchers uncovered a multi‑stage adversary‑in‑the‑middle (AiTM) phishing and business email compromise (BEC) campaign targeting multiple organizations in the energy sector, resulting in the compromise of various user accounts. The campaign abused SharePoint file‑sharing services to deliver phishing payloads and relied on inbox rule creation to maintain persistence and evade user awareness. The attack transitioned into a series of AiTM attacks and follow-on BEC activity spanning multiple organizations.

Following the initial compromise, the attackers leveraged trusted internal identities from the target to conduct large‑scale intra‑organizational and external phishing, significantly expanding the scope of the campaign. Defender detections surfaced the activity to all affected organizations.

This attack demonstrates the operational complexity of AiTM campaigns and the need for remediation beyond standard identity compromise responses. Password resets alone are insufficient. Impacted organizations in the energy sector must additionally revoke active session cookies and remove attacker-created inbox rules used to evade detection.

Attack chain: AiTM phishing attack

Stage 1: Initial access via trusted vendor compromise

Analysis of the initial access vector indicates that the campaign leveraged a phishing email sent from an email address belonging to a trusted organization, likely compromised before the operation began. The lure employed a SharePoint URL requiring user authentication and used subject‑line mimicry consistent with legitimate SharePoint document‑sharing workflows to increase credibility.

Threat actors continue to leverage trusted cloud collaboration platforms particularly Microsoft SharePoint and OneDrive due to their ubiquity in enterprise environments. These services offer built‑in legitimacy, flexible file‑hosting capabilities, and authentication flows that adversaries can repurpose to obscure malicious intent. This widespread familiarity enables attackers to deliver phishing links and hosted payloads that frequently evade traditional email‑centric detection mechanisms.

Stage 2: Malicious URL clicks

Threat actors often abuse legitimate services and brands to avoid detection. In this scenario, we observed that the attacker leveraged the SharePoint service for the phishing campaign. While threat actors may attempt to abuse widely trusted platforms, Microsoft continuously invests in safeguards, detections, and abuse prevention to limit misuse of our services and to rapidly detect and disrupt malicious activity.

Stage 3: AiTM attack

Access to the URL redirected users to a credential prompt, but visibility into the attack flow did not extend beyond the landing page.

Stage 4: Inbox rule creation

The attacker later signed in from another IP address and created an inbox rule that deletes all incoming emails in the user's mailbox and marks them as read.

Stage 5: Phishing campaign

Following the inbox rule creation, the attacker initiated a large-scale phishing campaign involving more than 600 emails with another phishing URL. The emails were sent to the compromised user's contacts, both within and outside the organization, as well as to distribution lists. The recipients were identified based on recent email threads in the compromised user's inbox.

Stage 6: BEC tactics

The attacker then monitored the victim user's mailbox for undelivered and out-of-office emails and deleted them from the Archive folder. The attacker read the emails from recipients who questioned the authenticity of the phishing email and responded, possibly to falsely confirm that the email was legitimate. The emails and responses were then deleted from the mailbox. These techniques are common in BEC attacks and are intended to keep the victim unaware of the attacker's operations, thus helping the attacker maintain persistence.

Stage 7: Accounts compromise

The recipients of the phishing emails from within the organization who clicked on the malicious URL were also targeted by another AiTM attack. Microsoft Defender Experts identified all compromised users based on the landing IP and the sign-in IP patterns. 

Mitigation and protection guidance

Microsoft Defender XDR detects suspicious activities related to AiTM phishing attacks and their follow-on activities, such as sign-in attempts on multiple accounts and creation of malicious rules on compromised accounts. To further protect themselves from similar attacks, organizations should also consider complementing MFA with conditional access policies, where sign-in requests are evaluated using additional identity-driven signals like user or group membership, IP location information, and device status, among others.

Defender Experts also initiated rapid response with Microsoft Defender XDR to contain the attack including:

  • Automatically disrupting the AiTM attack on behalf of the impacted users based on the signals observed in the campaign.
  • Initiating zero-hour auto purge (ZAP) in Microsoft Defender XDR to find and take automated actions on the emails that are a part of the phishing campaign.

Defender Experts further worked with customers to remediate compromised identities through the following recommendations:

  • Revoking session cookies in addition to resetting passwords.
  • Revoking the MFA setting changes made by the attacker on the compromised user’s accounts.
  • Deleting suspicious rules created on the compromised accounts.

Mitigating AiTM phishing attacks

The general remediation measure for any identity compromise is to reset the password for the compromised user. However, in AiTM attacks, since the sign-in session is compromised, password reset is not an effective solution. Additionally, even if the compromised user’s password is reset and sessions are revoked, the attacker can set up persistence methods to sign-in in a controlled manner by tampering with MFA. For instance, the attacker can add a new MFA policy to sign in with a one-time password (OTP) sent to attacker’s registered mobile number. With these persistence mechanisms in place, the attacker can have control over the victim’s account despite conventional remediation measures.
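For Microsoft Entra ID tenants, session revocation is exposed through the Microsoft Graph `revokeSignInSessions` action. A minimal sketch of the request (the helper name is ours; authentication, HTTP transport, and error handling are omitted):

```python
def revoke_sign_in_sessions_request(user_id):
    """Build the Microsoft Graph call that invalidates all refresh tokens
    and session cookies issued to a user before the current time."""
    method = "POST"
    url = f"https://graph.microsoft.com/v1.0/users/{user_id}/revokeSignInSessions"
    return method, url

# Send with any HTTP client using a token that carries the
# User.RevokeSessions.All (or User.ReadWrite.All) permission. Once the
# call succeeds, stolen AiTM session cookies stop working, which a
# password reset alone does not achieve.
```

Pairing this call with a password reset and removal of attacker-created inbox rules and MFA methods covers the persistence avenues described above.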

While AiTM phishing attempts to circumvent MFA, implementation of MFA still remains an essential pillar in identity security and highly effective at stopping a wide variety of threats. MFA is the reason that threat actors developed the AiTM session cookie theft technique in the first place. Organizations are advised to work with their identity provider to ensure security controls like MFA are in place. Microsoft customers can implement MFA through various methods, such as using the Microsoft Authenticator, FIDO2 security keys, and certificate-based authentication.

Defenders can also complement MFA with the following solutions and best practices to further protect their organizations from such attacks:

  • Use security defaults as a baseline set of policies to improve identity security posture. For more granular control, enable conditional access policies, especially risk-based access policies. Conditional access policies evaluate sign-in requests using additional identity-driven signals like user or group membership, IP location information, and device status, among others, and are enforced for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant devices, trusted IP address requirements, or risk-based policies with proper access control.
  • Implement continuous access evaluation.
  • Invest in advanced anti-phishing solutions that monitor and scan incoming emails and visited websites. For example, organizations can leverage web browsers that automatically identify and block malicious websites, including those used in this phishing campaign, and solutions that detect and block malicious emails, links, and files.
  • Continuously monitor suspicious or anomalous activities. Hunt for sign-in attempts with suspicious characteristics (for example, location, ISP, user agent, and use of anonymizer services).

Detections

Because AiTM phishing attacks are complex threats, they require solutions that leverage signals from multiple sources. Microsoft Defender XDR uses its cross-domain visibility to detect malicious activities related to AiTM, such as session cookie theft and attempts to use stolen cookies for signing in.

Using Microsoft Defender for Cloud Apps connectors, Microsoft Defender XDR raises AiTM-related alerts in multiple scenarios. For Microsoft Entra ID customers using Microsoft Edge, attempts by attackers to replay session cookies to access cloud applications are detected by Defender for Cloud Apps connectors for Microsoft 365 and Azure. In such scenarios, Microsoft Defender XDR raises the following alert:

  • Stolen session cookie was used

In addition, signals from these Defender for Cloud Apps connectors, combined with data from the Defender for Endpoint network protection capabilities, also trigger the following Microsoft Defender XDR alert in Microsoft Entra ID environments:

  • Possible AiTM phishing attempt

A specific Defender for Cloud Apps connector for Okta, together with Defender for Endpoint, also helps detect AiTM attacks on Okta accounts using the following alert:

  • Possible AiTM phishing attempt in Okta

Other detections that show potentially related activity are the following:

Microsoft Defender for Office 365

  • Email messages containing malicious file removed after delivery
  • Email messages from a campaign removed after delivery
  • A potentially malicious URL click was detected
  • A user clicked through to a potentially malicious URL
  • Suspicious email sending patterns detected

Microsoft Defender for Cloud Apps

  • Suspicious inbox manipulation rule
  • Impossible travel activity
  • Activity from infrequent country
  • Suspicious email deletion activity

Microsoft Entra ID Protection

  • Anomalous Token
  • Unfamiliar sign-in properties
  • Unfamiliar sign-in properties for session cookies

Microsoft Defender XDR

  • BEC-related credential harvesting attack
  • Suspicious phishing emails sent by BEC-related user

Indicators of Compromise

  • Network Indicators
    • 178.130.46.8 – Attacker infrastructure
    • 193.36.221.10 – Attacker infrastructure

Recommended actions

Microsoft recommends the following mitigations to reduce the impact of this threat:

Hunting queries

Microsoft Defender XDR

AHQ#1 – Phishing Campaign:

EmailEvents

| where Subject has "NEW PROPOSAL – NDA"

AHQ#2 – Sign-in activity from the suspicious IP Addresses

AADSignInEventsBeta

| where Timestamp >= ago(7d)

| where IPAddress startswith "178.130.46." or IPAddress startswith "193.36.221."

Microsoft Sentinel

Microsoft Sentinel customers can use the following analytic templates to find BEC related activities similar to those described in this post:

In addition to the analytic templates listed above, Microsoft Sentinel customers can use the following hunting content to perform hunts for BEC-related activities:


The post Resurgence of a multi‑stage AiTM phishing and BEC campaign abusing SharePoint  appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/BKvEFAz
via IFTTT

Wednesday, January 21, 2026

Cisco Fixes Actively Exploited Zero-Day CVE-2026-20045 in Unified CM and Webex

Cisco has released fresh patches to address what it described as a "critical" security vulnerability impacting multiple Unified Communications (CM) products and Webex Calling Dedicated Instance that has been actively exploited as a zero-day in the wild.

The vulnerability, CVE-2026-20045 (CVSS score: 8.2), could permit an unauthenticated remote attacker to execute arbitrary commands on the underlying operating system of a susceptible device.

"This vulnerability is due to improper validation of user-supplied input in HTTP requests," Cisco said in an advisory. "An attacker could exploit this vulnerability by sending a sequence of crafted HTTP requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to obtain user-level access to the underlying operating system and then elevate privileges to root."

The critical rating for the flaw is due to the fact that its exploitation could allow for privilege escalation to root, it added. The vulnerability impacts the following products -

  • Unified CM
  • Unified CM Session Management Edition (SME)
  • Unified CM IM & Presence Service (IM&P)
  • Unity Connection
  • Webex Calling Dedicated Instance

It has been addressed in the following versions -

Cisco Unified CM, CM SME, CM IM&P, and Webex Calling Dedicated Instance -

  • Release 12.5 - Migrate to a fixed release
  • Release 14 - 14SU5 or apply patch file: ciscocm.V14SU4a_CSCwr21851_remote_code_v1.cop.sha512
  • Release 15 - 15SU4 (Mar 2026) or apply patch file: ciscocm.V15SU2_CSCwr21851_remote_code_v1.cop.sha512 or ciscocm.V15SU3_CSCwr21851_remote_code_v1.cop.sha512

Cisco Unity Connection

  • Release 12.5 - Migrate to a fixed release
  • Release 14 - 14SU5 or apply patch file: ciscocm.cuc.CSCwr29208_C0266-1.cop.sha512
  • Release 15 - 15SU4 (Mar 2026) or apply patch file: ciscocm.cuc.CSCwr29208_C0266-1.cop.sha512

The networking equipment major also said it's "aware of attempted exploitation of this vulnerability in the wild," urging customers to upgrade to a fixed software release to address the issue. There are currently no workarounds. An anonymous external researcher has been credited with discovering and reporting the bug.

The development has prompted the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to add CVE-2026-20045 to its Known Exploited Vulnerabilities (KEV) catalog, requiring Federal Civilian Executive Branch (FCEB) agencies to apply the fixes by February 11, 2026.

The discovery of CVE-2026-20045 comes less than a week after Cisco released updates for another actively exploited critical security vulnerability affecting AsyncOS Software for Cisco Secure Email Gateway and Cisco Secure Email and Web Manager (CVE-2025-20393, CVSS score: 10.0) that could permit an attacker to execute arbitrary commands with root privileges.



from The Hacker News https://ift.tt/0hfEDvb
via IFTTT

The Cloudcast Playbook - Lessons from 1000 Shows

After 1000+ shows of The Cloudcast, what critical lessons have we learned about the industry, our journey, and tips for anyone in tech?

SHOW: 995

SHOW TRANSCRIPT: The Cloudcast #995 Transcript

SHOW VIDEO: https://youtube.com/@TheCloudcastNET 

NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS" 

SHOW NOTES:

Lesson 1 - Why did we create The Cloudcast? [Career expansion]

Lesson 2 - How did we learn all the topics we covered? [Skills development]

Lesson 3 - What have been the biggest Cloud trends we’ve covered [Trends]

Lesson 4 - How did you think about technology transitions over the last 10-15 years? [Change]

Lesson 5 - How important was it for you to learn the business side of cloud? [Follow the money]

Lesson 6 - What structural changes do you think Cloud will have on IT in 10 years? [Legacy]

Lesson 7 - What lesson from an episode do you think about the most frequently?

Lesson 8 - What surprised you the most about the Cloud 1.0 era?

Lesson 9 - What did you expect to happen, which never really happened?

Lesson 10 - What were the most important lessons you personally learned? [Lessons Learned]



FEEDBACK?




from The Cloudcast (.NET) https://ift.tt/PMCtRGn
via IFTTT

Everyone’s worried about the wrong AI security risk

Everyone's talking about Claude Cowork, a research preview from Anthropic that allows AI to operate your entire desktop, access your files, control your browser, and connect to hundreds of enterprise apps. You most likely have workers using it right now. And while there are many risks associated with random workers using Cowork in the enterprise, I feel that most IT professionals and corporate leaders are worrying about the wrong thing.

The wrong risk 

When I talk to customers about AI security risks, I often hear fears like, “What if a worker pastes our secret recipe into ChatGPT and it ends up in the model?” Their fear is that corporate secrets will somehow get absorbed into the AI’s training data and then leak out to competitors through future responses. 

Luckily this isn’t really how large language models work. Your Tuesday afternoon prompt about Q3 projections doesn’t get folded into GPT-5’s knowledge base. That said, the “secrets absorbed into the model” scenario is still (by far!) the #1 risk I hear! 

But there’s a bigger risk to AI use in the workplace that people aren’t thinking about. 

The real risk is what AI does, not what it learns 

Employee-focused AI is evolving from “tool” to “assistant” to “coworker.” Claude Cowork illustrates this perfectly, as it moved from “AI that answers questions” to “AI that takes actions.” 

Claude Cowork does so much more than just chatting with workers. It operates their computer and has access to their files, browser, email, calendar, and any other enterprise apps they’ve connected. (Microsoft’s Copilot agents work the same way, as do Google’s Workspace agents and the dozens of other agentic tools hitting the market.) 

I’ve been tracking this progression for months now—from simple prompt-and-paste, through AI gaining ambient awareness of your screen, to AI actually operating your computer on your behalf. We’re now firmly in the stages where AI agents execute multi-step workflows across real enterprise systems while workers only half pay attention. This means: 

  • Whatever permissions the worker has, the AI has. 
  • Whatever systems the worker can touch, the AI can touch. 
  • Whatever mistakes the worker could make, the AI can make. (Just faster and at scale!) 

This latest batch of AI tools changes the threat model completely.

The breach won’t be exfiltration. It’ll be execution. 

What's the first actual breach going to look like?

  • An AI agent, operating on behalf of a worker, pastes internal data to a public site. 
  • An AI agent, operating on behalf of a worker, deletes something important. 
  • An AI agent, operating on behalf of a worker, forwards a confidential document to an external email address because the worker’s voice command while walking down the street was slightly ambiguous. 
  • An AI agent, operating on behalf of a worker, is super gullible and falls for a prompt injection at some remote site which says it should bcc: all emails to itself from now on. 
  • Etc. 

This isn't an AI model problem. This is a workflow execution problem. In all these cases, the AI didn't absorb corporate secrets into its training data; it just did something with those secrets that the worker didn't quite intend, often because they weren't fully supervising it.

Workers don’t see the risk and therefore don’t care 

Many workers feel pressured to do more with less. They've got a pile of tasks, many of which feel tedious and not particularly important, and suddenly this AI Cowork thing shows up that can knock them out faster. Why would they be worried about risk? This isn't some weird AI in the cloud: Claude Cowork lives on their computer, works with their files, and operates in their browser. It feels personal, safe, and contained. This is local. It's theirs!

Whenever IT releases a policy memo saying workers shouldn’t use unapproved AI tools, it feels abstract and overly cautious. Workers just don’t see the actual risk (an automated AI agent with a worker’s permissions operating semi-autonomously across enterprise systems). 

Even the warnings from Anthropic in the Cowork product itself (there are lots!) don’t really land with workers. After all, AI apps have been warning users for years that AI can make mistakes, and so far, it’s been fine. These warnings are now so pervasive that they’re not even noticed anymore, so why should a worker think Claude Cowork is any different or riskier? 

Can’t IT just monitor things better? 

Security tools that “watch everything the worker does” exist. When AI is that worker, everyone welcomes that level of monitoring. Go ahead and record the screen, log every action, and review every output. That feels appropriate for AI workers. 

But when humans are the worker, those same monitoring tools cause revolts. (Remember the backlash from Microsoft Recall before it even went live?) Workers don’t want their employer watching every keystroke and mouse click. 

So what happens with something like Claude Cowork, where a human worker and AI worker operate in the same workspace? How do you know what’s human activity or AI activity? When do you enable the invasive monitoring and when do you back off? How do you tell the difference between a human making a mistake and an AI making a mistake on a human’s behalf? 

IT departments aren’t set up for this. They’ve been getting away with thinking “AI worker security is somewhere way down the road.” Claude Cowork shows that these are conversations we need to start having today. 

This has to be solved at the workspace, not the AI model 

If the risk were “LLM data leakage into training sets,” the solution would be better model training policies and data handling. That would be easy, as we could tell Anthropic, OpenAI, Microsoft, and/or Google to “be better” and we’re done.

But since the actual risk is “AI agents executing actions in the work context”, that’s a workspace governance problem. That solution involves visibility into what agents are doing, what systems they’re touching, and what actions they’re taking on behalf of workers. 

This is why I’ve been writing about the workspace as the control plane. When AI operates everywhere (inside apps, inside browsers, inside the OS, inside standalone tools, just generally on the inside), the governance boundary has to be the workspace itself. 

This isn’t about blocking AI tools or forcing workers onto inferior “approved” alternatives. It’s about securing the space where AI and humans work together, regardless of which AI they’re using or how they’re using it. This is absolutely a solvable problem, but with a different solution than how we’ve solved IT security in the past. 





from Citrix Blogs https://ift.tt/na96UcN
via IFTTT

North Korean PurpleBravo Campaign Targeted 3,136 IP Addresses via Fake Job Interviews

As many as 3,136 individual IP addresses linked to likely targets of the Contagious Interview activity have been identified, with the campaign claiming 20 potential victim organizations spanning artificial intelligence (AI), cryptocurrency, financial services, IT services, marketing, and software development sectors in Europe, South Asia, the Middle East, and Central America.

The new findings come from Recorded Future's Insikt Group, which is tracking the North Korean threat activity cluster under the moniker PurpleBravo. First documented in late 2023, the campaign is also known as CL-STA-0240, DeceptiveDevelopment, DEV#POPPER, Famous Chollima, Gwisin Gang, Tenacious Pungsan, UNC5342, Void Dokkaebi, and WaterPlum.

The 3,136 individual IP addresses, primarily concentrated around South Asia and North America, are assessed to have been targeted by the adversary from August 2024 to September 2025. The 20 victim companies are said to be based in Belgium, Bulgaria, Costa Rica, India, Italy, the Netherlands, Pakistan, Romania, the United Arab Emirates (U.A.E.), and Vietnam.

"In several cases, it is likely that job-seeking candidates executed malicious code on corporate devices, creating organizational exposure beyond the individual target," the threat intelligence firm said in a new report shared with The Hacker News.

The disclosure comes a day after Jamf Threat Labs detailed a significant iteration of the Contagious Interview campaign wherein the attackers abuse malicious Microsoft Visual Studio Code (VS Code) projects as an attack vector to distribute a backdoor, underscoring continued exploitation of trusted developer workflows to achieve their twin goals of cyber espionage and financial theft.

The Mastercard-owned company said it detected four LinkedIn personas potentially associated with PurpleBravo that masqueraded as developers and recruiters and claimed to be from the Ukrainian city of Odesa, along with several malicious GitHub repositories that are designed to deliver known malware families like BeaverTail.

PurpleBravo has also been observed managing two distinct sets of command-and-control (C2) servers for BeaverTail, a JavaScript infostealer and loader, and a Go-based backdoor known as GolangGhost (aka FlexibleFerret or WeaselStore) that is based on the HackBrowserData open-source tool.

The C2 servers, hosted across 17 different providers, are administered via Astrill VPN and from IP ranges in China. North Korean threat actors' use of Astrill VPN in cyber attacks has been well-documented over the years.

It's worth pointing out that Contagious Interview complements a second, separate campaign referred to as Wagemole (aka PurpleDelta), in which IT workers from the Hermit Kingdom seek unauthorized employment under fraudulent or stolen identities with organizations based in the U.S. and other parts of the world for both financial gain and espionage.

While the two clusters are treated as disparate sets of activities, and the IT worker threat has been ongoing since 2017, there are significant tactical and infrastructure overlaps between them.

"This includes a likely PurpleBravo operator displaying activity consistent with North Korean IT worker behavior, IP addresses in Russia linked to North Korean IT workers communicating with PurpleBravo C2 servers, and administration traffic from the same Astrill VPN IP address associated with PurpleDelta activity," Recorded Future said.

To make matters worse, candidates who are approached by PurpleBravo with fictitious job offers have been found to take the coding assessment on company-issued devices, effectively compromising their employers in the process. This highlights that the IT software supply chain is "just as vulnerable" to infiltration by North Korean adversaries through vectors beyond the IT workers themselves.

"Many of these [potential victim] organizations advertise large customer bases, presenting an acute supply-chain risk to companies outsourcing work in these regions," the company noted. "While the North Korean IT worker employment threat has been widely publicized, the PurpleBravo supply-chain risk deserves equal attention so organizations can prepare, defend, and prevent sensitive data leakage to North Korean threat actors."



from The Hacker News https://ift.tt/NM7DLuq
via IFTTT