Wednesday, March 11, 2026

Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes

Agentic web browsers that leverage artificial intelligence (AI) capabilities to autonomously execute actions across multiple websites on behalf of a user could be trained and tricked into falling prey to phishing and scam traps.

The attack, at its core, takes advantage of AI browsers' tendency to reason their actions and use it against the model itself to lower their security guardrails, Guardio said in a report shared with The Hacker News ahead of publication.

"The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!," security researcher Shaked Chen said.

"This is what we call Agentic Blabbering: the AI Browser exposing what it sees, what it believes is happening, what it plans to do next, and what signals it considers suspicious or safe."

By intercepting this traffic between the browser and the AI services running on the vendor's servers and feeding it as input to a Generative Adversarial Network (GAN), Guardio said it was able to make Perplexity's Comet AI browser fall victim to a phishing scam in under four minutes.

The research builds on prior techniques like VibeScamming and Scamlexity, which found that vibe-coding platforms and AI browsers could be coaxed into generating scam pages or carrying out malicious actions via hidden prompt injections. In other words, with the AI agent handling the tasks without constant human supervision, there arises a shift in the attack surface wherein a scam no longer has to deceive a user. Rather, it aims to trick the AI model itself.

"If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal," Chen explained. "The scam evolves until the AI Browser reliably walks into the trap another AI set for it."

The idea, in a nutshell, is to build a "scamming machine" that iteratively optimizes and regenerates a phishing page until the agentic browser stops complaining and proceeds to carry out the threat actor's bidding, such as entering a victim's credentials on a bogus web page designed for carrying out a refund scam.

What makes this attack interesting and dangerous is that once the fraudster iterates on a web page until it works against a specific AI browser, it works on all users who rely on the same agent. Put differently, the target has shifted from the human user to the AI browser.

"This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact," Guardio said. "Because when your AI Browser explains why it stopped, it teaches attackers how to bypass it."

The disclosure comes as Trail of Bits demonstrated four prompt injection techniques against the Comet browser to extract users' private information from services like Gmail by exploiting the browser's AI assistant and exfiltrating the data to an attacker’s server when the user asks to summarize a web page under their control.

Last week, Zenity Labs also detailed two zero-click attacks affecting Perplexity's Comet that use indirect prompt injection seeded within meeting invites to exfiltrate local files to an external server (aka PerplexedComet) or hijack a user's 1Password account if the password manager extension is installed and unlocked. The issues, collectively codenamed PerplexedBrowser, have since been addressed by the AI company.

This is achieved by means of a prompt injection technique referred to as intent collision, which occurs "when the agent merges a benign user request with attacker-controlled instructions from untrusted web data into a single execution plan, without a reliable way to distinguish between the two," security researcher Stav Cohen said.

Prompt injection attacks remain a fundamental security challenge for large language models (LLMs) and for integrating them into organizational workflows, largely because completely eliminating these vulnerabilities may not be feasible. In December 2025, OpenAI noted that such weaknesses are "unlikely to ever" be fully resolved in agentic browsers, although the associated risks could be reduced through automated attack discovery, adversarial training, and new system-level safeguards.



from The Hacker News https://ift.tt/934xROQ
via IFTTT

Critical n8n Flaws Allow Remote Code Execution and Exposure of Stored Credentials

Cybersecurity researchers have disclosed details of two now-patched security flaws in the n8n workflow automation platform, including two critical bugs that could result in arbitrary command execution.

The vulnerabilities are listed below -

  • CVE-2026-27577 (CVSS score: 9.4) - Expression sandbox escape leading to remote code execution (RCE)
  • CVE-2026-27493 (CVSS score: 9.5) - Unauthenticated expression evaluation via n8n's Form nodes

"CVE-2026-27577 is a sandbox escape in the expression compiler: a missing case in the AST rewriter lets process slip through untransformed, giving any authenticated expression full RCE," Pillar Security researcher Eilon Cohen, who discovered and reported the issues, said in a report shared with The Hacker News.

The cybersecurity company described CVE-2026-27493 as a "double-evaluation bug" in n8n's Form nodes that could be abused for expression injection by taking advantage of the fact that the form endpoints are public by design and require neither authentication nor an n8n account.

All it takes for successful exploitation is to leverage a public "Contact Us" form to execute arbitrary shell commands by simply providing a payload as input into the Name field.

In an advisory released late last month, n8n said CVE-2026-27577 could be weaponized by an authenticated user with permission to create or modify workflows to trigger unintended system command execution on the host running n8n via crafted expressions in workflow parameters.

N8n also noted that CVE-2026-27493, when chained with an expression sandbox escape like CVE-2026-27577, could "escalate to remote code execution on the n8n host." Both vulnerabilities affect the self-hosted and cloud deployments of n8n -

  • < 1.123.22, >= 2.0.0 < 2.9.3, and >= 2.10.0 < 2.10.1 - Fixed in versions 2.10.1, 2.9.3, and 1.123.22

If immediate patching of CVE-2026-27577 is not an option, users are advised to limit workflow creation and editing permissions to fully trusted users and deploy n8n in a hardened environment with restricted operating system privileges and network access.

As for CVE-2026-27493, n8n recommends the following mitigations -

  • Review the usage of form nodes manually for the above-mentioned preconditions.
  • Disable the Form node by adding n8n-nodes-base.form to the NODES_EXCLUDE environment variable.
  • Disable the Form Trigger node by adding n8n-nodes-base.formTrigger to the NODES_EXCLUDE environment variable.

"These workarounds do not fully remediate the risk and should only be used as short-term mitigation measures," the maintainers cautioned.

Pillar Security said an attacker could exploit these flaws to read the N8N_ENCRYPTION_KEY environment variable and use it to decrypt every credential stored in n8n's database, including AWS keys, database passwords, OAuth tokens, and API keys.

N8n versions 2.10.1, 2.9.3, and 1.123.22 also resolve two more critical vulnerabilities that could also be abused to achieve arbitrary code execution -

  • CVE-2026-27495 (CVSS score: 9.4) - An authenticated user with permission to create or modify workflows could exploit a code injection vulnerability in the JavaScript Task Runner sandbox to execute arbitrary code outside the sandbox boundary.
  • CVE-2026-27497 (CVSS score: 9.4) - An authenticated user with permission to create or modify workflows could leverage the Merge node's SQL query mode to execute arbitrary code and write arbitrary files on the n8n server.

Besides limiting workflow creation and editing permissions to trusted users, n8n has outlined the workarounds below for each flaw -

  • CVE-2026-27495 - Use external runner mode (N8N_RUNNERS_MODE=external) to limit the blast radius.
  • CVE-2026-27497 - Disable the Merge node by adding n8n-nodes-base.merge to the NODES_EXCLUDE environment variable.

While n8n makes no mention of any of these vulnerabilities being exploited in the wild, users are advised to keep their installations up-to-date for optimal protection.



from The Hacker News https://ift.tt/IlybNeQ
via IFTTT

Building AI Teams: How Docker Sandboxes and Docker Agent Transform Development

It’s 11 PM. You’ve got a JIRA ticket open, an IDE with three unsaved files, a browser tab on Stack Overflow, and another on documentation. You’re context-switching between designing UI, writing backend APIs, fixing bugs, and running tests. You’re wearing all the hats, product manager, designer, engineer, QA specialist, and it’s exhausting.

What if instead of doing it all yourself, you could describe the goal and have a team of specialized AI agents handle it for you?

One agent breaks down requirements, another designs the interface, a third builds the backend, a fourth tests it, and a fifth fixes any issues. Each agent focuses on what it does best, working together autonomously while you sip your coffee.That’s not sci-fi, it’s what Agent + Docker Sandboxes delivers today.

image5

What is Docker Agent?

Docker Agent is an open source tool for building teams of specialized AI agents. Instead of prompting one general-purpose model to do everything, you define agents with specific roles that collaborate to solve complex problems.

Here’s a typical dev-team configuration:

agents:
 root:
   model: openai/gpt-5
   description: Product Manager - Leads the development team and coordinates iterations
   instruction: |
     Break user requirements into small iterations. Coordinate designer → frontend → QA.
     - Define feature and acceptance criteria
     - Ensure iterations deliver complete, testable features
     - Prioritize based on value and dependencies
   sub_agents: [designer, awesome_engineer, qa, fixer_engineer]
   toolsets:
     - type: filesystem
     - type: think
     - type: todo
     - type: memory
       path: dev_memory.db
​
 designer:
   model: openai/gpt-5
   description: UI/UX Designer - Creates user interface designs and wireframes
   instruction: |
     Create wireframes and mockups for features. Ensure responsive, accessible designs.
     - Use consistent patterns and modern principles
     - Specify colors, fonts, interactions, and mobile layout
   toolsets:
     - type: filesystem
     - type: think
     - type: memory
       path: dev_memory.db
       
 qa:
   model: openai
   description: QA Specialist - Analyzes errors, stack traces, and code to identify bugs
   instruction: |
     Analyze error logs, stack traces, and code to find bugs. Explain what's wrong and why it's happening.
     - Review test results, error messages, and stack traces
   .......
​
 awesome_engineer:
   model: openai
   description: Awesome Engineer - Implements user interfaces based on designs
   instruction: |
     Implement responsive, accessible UI from designs. Build backend APIs and integrate.
   ..........

 fixer_engineer:
   model: openai
   description: Test Integration Engineer - Fixes test failures and integration issues
   instruction: |
     Fix test failures and integration issues reported by QA.
     - Review bug reports from QA

The root agent acts as product manager, coordinating the team. When a user requests a feature, root delegates to designer for wireframes, then awesome_engineer for implementation, qa for testing, and fixer_engineer for bug fixes. Each agent uses its own model, has its own context, and accesses tools like filesystem, shell, memory, and MCP servers.

Agent Configuration

Each agent is defined with five key attributes:

  • model: The AI model to use (e.g., openai/gpt-5, anthropic/claude-sonnet-4-5). Different agents can use different models optimized for their tasks.
  • description: A concise summary of the agent’s role. This helps Docker Agent understand when to delegate tasks to this agent.
  • instruction: Detailed guidance on what the agent should do. Includes workflows, constraints, and domain-specific knowledge.
  • sub_agents: A list of agents this agent can delegate work to. This creates the team hierarchy.
  • toolsets: The tools available to the agent. Built-in options include filesystem (read/write files), shell (run commands), think (reasoning), todo (task tracking), memory (persistent storage), and mcp (external tool connections).

This configuration system gives you fine-grained control over each agent’s capabilities and how they coordinate with each other.

Why Agent Teams Matter

One agent handling complex work means constant context-switching. Split the work across focused agents instead, each handles what it’s best at. Docker Agent manages the coordination.

The benefits are clear:

  • Specialization: Each agent is optimized for its role (design vs. coding vs. debugging)
  • Parallel execution: Multiple agents can work on different aspects simultaneously
  • Better outcomes: Focused agents produce higher quality work in their domain
  • Maintainability: Clear separation of concerns makes teams easier to debug and iterate
image3

The Problem: Running AI Agents Safely

Agent teams are powerful, but they come with a serious security concern. These agents need to:

  • Read and write files on your system
  • Execute shell commands (npm install, git commit, etc.)
  • Access external APIs and tools
  • Run potentially untrusted code

Giving AI agents full access to your development machine is risky. A misconfigured agent could delete files, leak secrets, or run malicious commands. You need isolation, agents should be powerful but contained.

Traditional virtual machines are too heavy. Chroot jails are fragile. You need something that provides:

  • Strong isolation from your host machine
  • Workspace access so agents can read your project files
  • Familiar experience with the same paths and tools
  • Easy setup without complex networking or configuration

Docker Sandboxes: The Secure Foundation

Docker Sandboxes solves this by providing isolated environments for running AI agents. As of Docker Desktop 4.60+, sandboxes run inside dedicated microVMs, providing a hard security boundary beyond traditional container isolation. When you run docker sandbox run <agent>, Docker creates an isolated microVM workspace that:

  • Mounts your project directory at the same absolute path (on Linux and macOS)
  • Preserves your Git configuration for proper commit attribution
  • Does not inherit environment variables from your current shell session
  • Gives agents full autonomy without compromising your host
  • Provides network isolation with configurable allow/deny lists

Docker Sandboxes now natively supports six agent types: Claude Code, Gemini, Codex, Copilot, Agent, and Kiro (all experimental). Agent can be launched directly as a sandbox agent:

# Run Agent natively in a sandbox
docker sandbox create agent ~/path/to/workspace
docker sandbox run agent ~/path/to/workspace

Or, for more control, use a detached sandbox:

# Create a sandbox
docker sandbox run -d --name my-agent-sandbox claude
​
# Copy agent into the sandbox
docker cp /usr/bin/agent &lt;container-id&gt;:/usr/bin/agent
​
# Run your agent team
docker exec -it &lt;container-id&gt; bash -c "cd /path/to/workspace &amp;&amp; agent run dev-team.yaml"

Your workspace /Users/alice/projects/myapp on the host is also /Users/alice/projects/myapp inside the microVM. Error messages, scripts with hard-coded paths, and relative imports all work as expected. But the agent is contained in its own microVM, it can’t access files outside the mounted workspace, and any damage it causes is limited to the sandbox.

image1

Why Docker Sandboxes Matter

The combination of agents and Docker Sandboxes gives you something powerful:

  • Full agent autonomy: Agents can install packages, run tests, make commits, and use tools without constant human oversight
  • Complete safety: Even if an agent makes a mistake, it’s contained within the microVM sandbox
  • Hard security boundary: MicroVM isolation goes beyond containers, each sandbox runs in its own virtual machine
  • Network control: Allow/deny lists let you restrict which external services agents can access
  • Familiar experience: Same paths, same tools, same workflow as working directly on your machine
  • Workspace persistence: Changes sync between host and microVM, so your work is always available

Here’s how the workflow looks in practice:

  1. User requests a feature to the root agent: “Create a bank app with Gradio”
  2. Root creates a todo list and delegates to the designer
  3. Designer generates wireframes and UI specifications
  4. Awesome_engineer implements the code, running pip install gradio and python app/main.py
  5. QA runs tests, finds bugs, and reports them
  6. Fixer_engineer resolves the issues
  7. Root confirms all tests pass and marks the feature complete

All of this happens autonomously inside a sandboxed environment. The agents can install dependencies, modify files, and execute commands, but they’re isolated from your host machine.

image2 1

Try It Yourself

Let’s walk through setting up a simple agent team in a Docker Sandbox.

Prerequisites

  • Docker Desktop 4.60+ with sandbox support (microVM-based isolation)
  • agent (included in Docker Desktop 4.49+)
  • API key for your model provider (Anthropic, OpenAI, or Google)

Step 1: Create Your Agent Team

Save this configuration as dev-team.yaml:

models:
 openai:
   provider: openai
   model: gpt-5
​
agents:
 root:
   model: openai
   description: Product Manager - Leads the development team
   instruction: |
     Break user requirements into small iterations. Coordinate designer → frontend → QA.
   sub_agents: [designer, awesome_engineer, qa]
   toolsets:
     - type: filesystem
     - type: think
     - type: todo
​
 designer:
   model: openai
   description: UI/UX Designer - Creates designs and wireframes
   instruction: |
     Create wireframes and mockups for features. Ensure responsive designs.
   toolsets:
     - type: filesystem
     - type: think
​
 awesome_engineer:
   model: openai
   description: Developer - Implements features
   instruction: |
     Build features based on designs. Write clean, tested code.
   toolsets:
     - type: filesystem
     - type: shell
     - type: think
​
 qa:
   model: openai
   description: QA Specialist - Tests and identifies bugs
   instruction: |
     Test features and identify bugs. Report issues to fixer.
   toolsets:
     - type: filesystem
     - type: think

Step 2: Create a Docker Sandbox

The simplest approach is to use agent as a native sandbox agent:

# Run agent directly in a sandbox (experimental)
docker sandbox run agent ~/path/to/your/workspace

Alternatively, use a detached Claude sandbox for more control:

# Start a detached sandbox
docker sandbox run -d --name my-dev-sandbox claude
​
# Copy agent into the sandbox
which agent  # Find the path on your host
docker cp $(which agent) $(docker sandbox ls --filter name=my-dev-sandbox -q):/usr/bin/agent

Step 3: Set Environment Variables

# Run agent with your API key (passed inline since export doesn't persist across exec calls)
docker exec -it -e OPENAI_API_KEY=your_key_here my-dev-sandbox bash

Step 4: Run Your Agent Team

# Mount your workspace and run agent
docker exec -it my-dev-sandbox bash -c "cd /path/to/your/workspace &amp;&amp; agent run dev-team.yaml"

Now you can describe what you want to build, and your agent team will handle the rest:

User: Create a bank application using Python. The bank app should have basic functionality like account savings, show balance, withdraw, add money, etc. Build the UI using Gradio. Create a directory called app, and inside of it, create all of the files needed by the project
​
Agent (root): I'll break this down into iterations and coordinate with the team...

Watch as the designer creates wireframes, the engineer builds the Gradio app, and QA tests it, all autonomously in a secure sandbox.

Final result from a one shot prompt

image2

Step 5: Clean Up

When you’re done:

# Remove the sandbox
docker sandbox rm my-dev-sandbox

Docker enforces one sandbox per workspace. Running docker sandbox run in the same directory reuses the existing container. To change configuration, remove and recreate the sandbox.

Current Limitations

Docker Sandboxes and Docker Agent are evolving rapidly. Here are a few things to know:

  • Docker Sandboxes now supports six agent types natively: Claude Code, Gemini, Codex, Copilot, agent, and Kiro.  All are experimental and breaking changes may occur between Docker Desktop versions.
  • Custom Shell that doesn’t include a pre-installed agent binary. Instead, it provides a clean environment where you can install and configure any agent or tool
  • MicroVM sandboxes require macOS or Windows. Linux users can use legacy container-based sandboxes with Docker Desktop 4.57+
  • API keys may still need manual configuration depending on the agent type
  • Sandbox templates are optimized for certain workflows; custom setups may require additional configuration

Why This Matters Now

AI agents are becoming more capable, but they need infrastructure to run safely and effectively. The combination of agent and Docker Sandboxes addresses this by:

Feature

Traditional Approach

With agent + Docker Sandboxes

Autonomy

Limited – requires constant oversight

High – agents work independently

Security

Risky – agents have host access

Isolated – agents run in microVMs

Specialization

One model does everything

Multiple agents with focused roles

Reproducibility

Inconsistent across machines

MicroVM-isolated, version-controlled

Scalability

Manual coordination

Automated team orchestration

This isn’t just about convenience, it’s about enabling AI agents to do real work in production environments, with the safety guarantees that developers expect.

What’s Next

Conclusion

We’re moving from “prompting AI to write code” to “orchestrating AI teams to build software.” agent gives you the team structure; Docker Sandboxes provides the secure foundation.

The days of wearing every hat as a solo developer are numbered. With specialized AI agents working in isolated containers, you can focus on what matters, designing great software, while your AI team handles the implementation, testing, and iteration.

Try it out. Build your own agent team. Run it in a Docker Sandbox. See what happens when you have a development team at your fingertips, ready to ship features while you grab lunch.



from Docker https://ift.tt/S0EBVQh
via IFTTT

Meta Disables 150K Accounts Linked to Southeast Asia Scam Centers in Global Crackdown

Meta on Wednesday said it disabled over 150,000 accounts associated with scam centers in Southeast Asia as part of a coordinated effort in partnership with authorities from Thailand, the U.S., the U.K., Canada, Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia.

The effort also led to 21 arrests made by the Royal Thai Police, the company said. The action builds upon a pilot initiative in December 2025 that resulted in Meta removing 59,000 accounts, Pages, and Groups from its platforms and six arrest warrants.

"Online scams have become significantly more sophisticated and industrialized in recent years, with criminal networks often based in Southeast Asia in countries like Cambodia, Myanmar, and Laos running what amount to full-scale business operations," Meta said in a statement. "These operations cause real harm – they upend lives, destroy trust, and are deliberately designed to avoid detection and disruption."

In tandem, Meta said it's announcing a number of new tools to protect people when scam-related red flags are detected -

  • New warnings on Facebook when users receive suspicious accounts.
  • Alerting users when they receive suspicious WhatsApp device linking requests by tricking them into scanning a QR code that would link the scammer's device to their account.
  • Expanded advanced scam detection on Messenger that prompts users to share recent chat messages for an AI scam review when a conversation with a new contact exhibits common scam patterns like suspicious job offers.

The social media giant said it removed over 159 million scam ads for violating its policies in 2025, and that it took down 10.9 million accounts on Facebook and Instagram associated with criminal scam centers. In addition, the company has announced plans to expand advertiser verification in an attempt to bolster transparency and curtail efforts by bad actors to misrepresent advertiser identity.

The development comes as the U.K. government launched a new Online Crime Centre to combat cybercrime, including those fueled by the rise of scam compounds operating across Southeast Asia, West Africa, Eastern Europe, India, and China, by bringing together specialists from the government, police, intelligence agencies, banks, mobile networks, and major technology firms.

The disruption unit is expected to launch operations next month. It also outlines plans to deploy artificial intelligence (AI) to flag emerging fraud patterns, stop suspicious bank transfers faster, and use "scam-baiting chatbots" to deceive fraudsters and gather intelligence.

"Backed by over £30 million in funding, the centre will identify the accounts, websites and phone numbers that organised crime groups rely on, and shut them down at scale – blocking scam texts, freezing criminal accounts, removing scam social media accounts and disrupting operations at source," the U.K. government said.



from The Hacker News https://ift.tt/t9o5JMR
via IFTTT

What Boards Must Demand in the Age of AI-Automated Exploitation

“You knew, and you could have acted. Why didn’t you?” 

This is the question you do not want to be asked. And increasingly, it’s the question leaders are forced to answer after an incident.

For years, many executive teams and boards have treated a large vulnerability backlog as an uncomfortable but tolerable fact of life: “we’ve accepted the risk.” If you’ve ever seen a report showing thousands (or tens of thousands) of open Highs and Critical CVEs, you’ve probably also heard the usual rationalizations from folks that would rather look the other way: we have other priorities, this will take years of engineering time to fix, how do you know these are really Critical, we’re still prioritizing, we’ll get to it.

In the old world, that story, while not good, was often survivable. Exploitation was slower, more manual, and required more operator skill. Even the most sophisticated attackers had constraints. Organizations leaned on those constraints as an unspoken part of the risk model: “If it was really as bad as you say, we’d be compromised right now.”

That world is gone.

AI has collapsed the cost of exploitation

We’re now watching threat actors use agentic AI systems to accelerate the entire offensive workflow: reconnaissance, vulnerability discovery, exploit development, and operational tempo. Anthropic publicly detailed disrupting a cyber-espionage campaign in which attackers used Claude in ways that materially increased their speed and scale, and they explicitly warned that this kind of capability can allow less experienced groups to do work that previously required far more skill and staffing. 

As security leaders, we know that AI enables attackers to move faster. But now, automation turns a backlog into a weapon. In the old model, having 13,000 Highs in production could be rationalized as a triage problem. In the new model, attackers can move from chain discovery to validation and exploitation in dramatically less time. “We’re working the backlog” stops sounding like a strategy and starts sounding like an excuse.

The most dangerous sentence in the boardroom

“Don’t worry, the CISO has it handled.”

I’ve lived the reality behind that sentence. CISOs can build programs, establish priorities, report metrics, and drive cross-functional remediation, but in many enterprises, the vulnerability problem is structurally bigger than any one executive’s responsibility. It’s a system problem: legacy dependencies, release velocity constraints, fragile production environments, and limited engineering resources. Boards can’t delegate governance.

Delaware’s Caremark line of cases is frequently cited in director oversight discussions: boards must have reporting systems designed to surface consequential risk and must actually engage with what those systems report. The point isn’t to scare directors with legal theory – it’s to make the practical governance point that if your reporting says “we have thousands of serious vulnerabilities open,” the board’s job is to exercise oversight.

What boards should demand (and how CISOs should answer)

If you’re a board member, you should seek operational truth. Focus on the resiliency of your company’s tech, not just compliance. And if you’re a security leader, you should be creating the operating systems that provide it. These are the questions teams can use that cut through performative cybersecurity:

  1. What does our vulnerability management program look like end-to-end?
  2. How many vulnerabilities (especially Criticals and Highs) exist in our products right now?
  3. How long did it take to fully remediate new Criticals and Highs in the past quarter? The past year?
  4. If a new 0-day was discovered in our top-selling product today, how long would it take before we could tell customers it was safe?
  5. What is the dollar cost of our current vulnerability backlog? (Multiply people-hours to fix by fully loaded engineering cost, and you get a number the board can govern.)

This is how you make the backlog tangible enough that leadership stops hiding behind abstractions.

“Patch faster” is not a complete answer

Many organizations respond to board pressure by promising to patch faster. That helps, until it breaks production.

If emergency patching reliably causes customer impact (and in some environments it does), you’re forced into a terrible tradeoff: accept exposure or accept downtime. The modern enterprise needs a model that reduces the frequency and blast radius of emergency remediation, not one that merely accelerates the same fragile process.

The supply chain reality: liabilities are shifting

We’re seeing liabilities shift as regulators and courts focus on software supply chain hygiene and operational resilience. 

In the EU, the Cyber Resilience Act (CRA) is now in force, with its main obligations taking effect in December 2027. Many organizations will face stronger expectations for vulnerability handling, secure-by-design practices, and accountability throughout the software lifecycle.

In financial services, DORA (Digital Operational Resilience Act) has entered into application, bringing harmonized ICT risk management and operational resilience requirements across the EU. 

We’re also seeing this dynamic play out in the US, where negligence claims are brought in class action lawsuits against firms, with plaintiffs alleging a lack of due care that led to data breaches.

You can reduce the backlog by design

In the age of AI-accelerated exploitation, “managed risk” too often means assuming attackers will keep moving at yesterday’s pace.

Boards should stop accepting that assumption. CISOs should stop pretending “patch faster” or getting a risk acceptance is sufficient. And organizations should invest in reducing vulnerability exposure at the source so the next audit report isn’t a spreadsheet of accepted risks, but evidence of a shrinking attack surface.

Shameless plug, this is where Chainguard’s approach is designed to change the math: start with secure-by-default software components that minimize vulnerabilities from the outset and reduce vulnerability accrual over time. That means fewer critical findings landing in your environment, fewer emergency patch cycles, and less operational disruption when the next high-profile CVE hits.

By structurally reducing vulnerability backlog and remediation toil, teams can redirect engineering time from zero-ROI firefighting into high-ROI innovation that actually drives competitive advantage and revenue.

Because when the finger-pointing starts after the breach, and someone asks why the company chose to live with 13,000 Highs in production, the only defensible answer is: we didn’t. We changed the system.

For more hot takes and practical advice from – and for – engineering and security leaders, subscribe to Unchained or reach out to learn more about Chainguard. 

Note: This article was expertly written and contributed byQuincy Castro, CISO, Chainguard.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/7rmHznk
via IFTTT

Spinning complex ideas into clear docs with Kri Dontje

Spinning complex ideas into clear docs with Kri Dontje

Welcome back! This week, we're shining a spotlight on Kri Dontje, a technical writer who’s become an essential voice in making Cisco Talos' work understandable for a wide audience. With a background in technical communications and a career that began at a small startup, Kri discusses the importance of consistency, accuracy, and accessibility in documentation, as well as how to get the most out of a subject matter expert-technical writer relationship.

Now transitioning into a new role, Kri continues to bridge the gap between deep technical expertise and clear communication. When she’s not decoding cyber jargon, she’s hand-spinning yarn for stunning knit pieces, showing that creativity and tech go hand in hand. Keep an eye out for more content featuring Kri in the future.

Amy Ciminnisi: Can you tell us a little bit about what you do here in Talos?

Kri Dontje: Absolutely. I have a technical writing degree — technical communications — which means I translate very technical topics into something that other people can understand if they're not necessarily experts in that field. I've had a very nontraditional career. My first position was at a very small company, 14 people at its largest. I did documentation, design and demonstration videos, and rebuilt their health system from the ground up. It was interesting and terrifying because I was learning it completely alone.

I'm also a huge nerd and a learning junkie, which helps with this kind of job. I enjoy being around people who are into really complex things and talking to them about it. I spent a lot of time around a local miniatures wargaming shop and became friends with a bunch of nerds, some of whom have migrated into Talos.

I transitioned over to the strategic communications team as a research engineer. I’m going to focus more on communicating about Talos at a slightly more technical level than our communications have been to the public for a while, while still creating content that makes Talos accessible for people as much as possible.

AC: What do you think are the most important qualities or skills that make someone a really good technical writer, especially in a fast-changing landscape like cybersecurity?

KD: That’s a big contradiction. One of the most important things for tech writing is consistency and accessibility. It’s not a career that encourages adjectives. You want to use the same word to mean the same thing every time because if you use a fun synonym, the reader might think it’s an entirely different concept.

Versioning is a big problem. People won’t trust documentation if they find bad information in it. They’ll never think it’s a reasonable place to go again. So keeping things accurate is really important.

Being snoopy and not being afraid to feel real stupid in front of extremely smart people is also key. Usually, you can find common ground. It’s important to recognize you’re not talking down to the audience or making the information for stupid people. Even within Talos and the cyber community, everyone has broad-ranging specialties. Most people don’t know what others do or can’t figure it out without spending a lot of time and energy they don’t need to. So the important thing is to bring the information to a level where other very intelligent people can cross-reference it and make it applicable to what they’re doing.


Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos.



from Cisco Talos Blog https://ift.tt/t3namT8
via IFTTT

Microsoft Patches 84 Flaws in March Patch Tuesday, Including Two Public Zero-Days

Microsoft on Tuesday released patches for a set of 84 new security vulnerabilities affecting various software components, including two that have been listed as publicly known.

Of these, eight are rated Critical, and 76 are rated Important in severity. Forty-six of the patched vulnerabilities relate to privilege escalation, followed by 18 remote code execution, 10 information disclosure, four spoofing, four denial-of-service, and two security feature bypass flaws.

The fixes are in addition to 10 vulnerabilities that have been addressed in its Chromium-based Edge browser since the release of the February 2026 Patch Tuesday update.

The two publicly disclosed zero-days are CVE-2026-26127 (CVSS score: 7.5), a denial-of-service vulnerability in .NET, and CVE-2026-21262 (CVSS score: 8.8), an elevation of privilege vulnerability in SQL Server.

The vulnerability with the highest CVSS score in this month's update is a critical remote code execution flaw in the Microsoft Devices Pricing Program. CVE-2026-21536 (CVSS score: 9.8), per Microsoft, has been fully mitigated, and no action is required from users. Artificial intelligence (AI)-powered autonomous vulnerability discovery platform XBOW has been credited with discovering and reporting the issue.

"This month, over half (55%) of all Patch Tuesday CVEs were privilege escalation bugs, and of those, six were rated exploitation more likely across Windows Graphics Component, Windows Accessibility Infrastructure, Windows Kernel, Windows SMB Server, and Winlogon," Satnam Narang, senior staff research engineer at Tenable, said.

"We know these bugs are typically used by threat actors as part of post-compromise activity, once they get onto systems through other means (social engineering, exploitation of another vulnerability)."

The Winlogon privilege escalation flaw (CVE-2026-25187, CVSS score: 7.8), in particular, leverages improper link resolution to obtain SYSTEM privileges. Google Project Zero researcher James Forshaw has been acknowledged for reporting the vulnerability.

"The flaw allows a locally authenticated attacker with low privileges to exploit a link-following condition in the Winlogon process and escalate to SYSTEM privileges," Jacob Ashdown, cybersecurity engineer at Immersive, said. "The vulnerability requires no user interaction and has low attack complexity, making it a straightforward target once an attacker gains a foothold."

Another vulnerability of note is CVE-2026-26118 (CVSS score: 8.8), a server-side request forgery bug in the Azure Model Context Protocol (MCP) server that could allow an authorized attacker to elevate privileges over a network.

"An attacker could exploit this issue by sending specially crafted input to an Azure Model Context Protocol (MCP) Server tool that accepts user‑provided parameters," Microsoft said.

"If the attacker can interact with the MCP‑backed agent, they can submit a malicious URL in place of a normal Azure resource identifier. The MCP Server then sends an outbound request to that URL and, in doing so, may include its managed identity token. This allows the attacker to capture that token without requiring administrative access."

Successful exploitation of the vulnerability could permit an attacker to obtain the permissions associated with the MCP Server's managed identity. The attacker could then leverage this behavior to access or perform actions on any resources that the managed identity is authorized to reach.

Among the Critical-severity bugs resolved by Microsoft is an information disclosure flaw in Excel. Tracked as CVE-2026-26144 (CVSS score of 7.5), it has been described as a case of cross-site scripting that occurs as a result of improper neutralization of input during web page generation.

The Windows maker said an attacker who exploited the shortcoming could potentially cause Copilot Agent mode to exfiltrate data as part of a zero-click attack.

"Information disclosure vulnerabilities are especially dangerous in corporate environments where Excel files often contain financial data, intellectual property, or operational records," Alex Vovk, CEO and co-founder of Action1, said in a statement.

"If exploited, attackers could silently extract confidential information from internal systems without triggering obvious alerts. Organizations using AI-assisted productivity features may face increased exposure, as automated agents could unintentionally transmit sensitive data outside corporate boundaries."

The patches come as Microsoft said it's changing the default behavior of Windows Autopatch by enabling hotpatch security updates to help secure devices at a faster pace.

"This change in default behavior comes to all eligible devices in Microsoft Intune and those accessing the service via Microsoft Graph API starting with the May 2026 Windows security update," Redmond said. "Applying security fixes without waiting for a restart can get organizations to 90% compliance in half the time, while you remain in control."



from The Hacker News https://ift.tt/L2Xk8M6
via IFTTT

UNC6426 Exploits nx npm Supply-Chain Attack to Gain AWS Admin Access in 72 Hours

A threat actor known as UNC6426 leveraged keys stolen following the supply chain compromise of the nx npm package last year to completely breach a victim's cloud environment within a span of 72 hours.

The attack started with the theft of a developer's GitHub token, which the threat actor then used to gain unauthorized access to the cloud and steal data.

"The threat actor, UNC6426, then used this access to abuse the GitHub-to-AWS OpenID Connect (OIDC) trust and create a new administrator role in the cloud environment," Google said in its Cloud Threat Horizons Report for H1 2026. "They abused this role to exfiltrate files from the client's Amazon Web Services (AWS) Simple Storage Service (S3) buckets and performed data destruction in their production cloud environments."

The supply chain attack targeting the nx npm package took place in August 2025, when unknown threat actors exploited a vulnerable pull_request_target workflow – an attack type referred to as Pwn Request – to obtain elevated privileges and access sensitive data, including a GITHUB_TOKEN, and ultimately push trojanized versions of the package to the npm registry.

The packages were found to embed a postinstall script that, in turn, launched a JavaScript credential stealer named QUIETVAULT to siphon environment variables, system information, and valuable tokens, including GitHub Personal Access Tokens (PATs), by weaponizing a Large Language Model (LLM) tool already installed on the endpoint to perform the search. The data was uploaded to a public GitHub repository named "/s1ngularity-repository-1."

Google said an employee at the victim organization ran a code editor application that used the Nx Console plugin, triggering an update in the process and resulting in the execution of QUIETVAULT.

UNC6426 is said to have initiated reconnaissance activities within the client's GitHub environment using the stolen PAT two days after the initial compromise using a legitimate open-source tool called Nord Stream to extract secrets from CI/CD environments, leaking the credentials for a GitHub service account.

Subsequently, the attackers leveraged this service account and used the utility's "--aws-role" parameter to generate temporary AWS Security Token Service (STS) tokens for the "Actions-CloudFormation" role and ultimately allow them to obtain a foothold in the victim's AWS environment.

"The compromised Github-Actions-CloudFormation role was overly permissive," Google said. "UNC6426 used this permission to deploy a new AWS Stack with capabilities ["CAPABILITY_NAMED_IAM","CAPABILITY_IAM"]. This stack's sole purpose was to create a new IAM role and attach the arn:aws:iam::aws:policy/AdministratorAccess policy to it. UNC6426 successfully escalated from a stolen token to full AWS administrator permissions in less than 72 hours."

Armed with the new administrator roles, the threat actor carried out a series of actions, including enumerating and accessing objects within S3 buckets, terminating production Elastic Compute Cloud (EC2) and Relational Database Service (RDS) instances, and decrypting application keys. In the final stage, all of the victim's internal GitHub repositories were renamed to "/s1ngularity-repository-[randomcharacters]" and made public.

To counter such threats, it's advised to use package managers that prevent postinstall scripts or sandboxing tools, apply the principle of least privilege (PoLP) to CI/CD service accounts and OIDC-linked roles, enforce fine-grained PATs with short expiration windows and specific repository permissions, remove standing privileges for high-risk actions like creating administrator roles, monitor for anomalous IAM activity, and implement strong controls to detect Shadow AI risks.

The incident highlights a case of what has been described by Socket as an AI-assisted supply chain abuse, where the execution is offloaded to AI agents that already have privileged access to the developer's file system, credentials, and authenticated tooling. 

"The malicious intent is expressed in natural-language prompts rather than explicit network callbacks or hardcoded endpoints, complicating conventional detection approaches," the software supply chain security firm said. "As AI assistants become more integrated into developer workflows, they also expand the attack surface. Any tool capable of invoking them inherits their reach."



from The Hacker News https://ift.tt/qcGwp2I
via IFTTT

Tuesday, March 10, 2026

HCP Vault Dedicated now available in additional AWS and Azure regions

Modern infrastructure is increasingly distributed across clouds, regions, and regulatory environments. As organizations scale their platforms globally, the systems responsible for securing secrets, encryption keys, and identities must be just as resilient and globally accessible.

Today, we’re announcing the expansion of the regional availability of HCP Vault Dedicated with new deployment locations across AWS and Microsoft Azure. The new regions now available include:

AWS

  • Stockholm (eu-north-1)

  • Paris (eu-west-3 / Paris region availability)

AWS


Microsoft Azure

  • Australia East
  • Australia Central
Azure

These additions expand the global footprint of HCP Vault Dedicated and give organizations greater flexibility when deploying Vault to support disaster recovery strategies, performance replication, and regional data residency requirements.

By bringing Vault closer to applications and infrastructure, organizations can improve performance, reduce operational risk, and better align their security architecture with regulatory and compliance requirements.

Improving resilience, performance, and proximity for Vault deployments

HCP Vault Dedicated is a fully managed deployment of Vault Enterprise on the HashiCorp Cloud Platform. It allows organizations to securely store, manage, and control access to sensitive data such as tokens, passwords, encryption keys, and certificates without managing the operational overhead of running Vault themselves.

Expanding the number of available regions allows teams to deploy Vault clusters closer to their workloads while also strengthening multi-region resilience architectures.

For performance-sensitive operations such as secrets retrieval, encryption, and identity validation, proximity matters. Placing Vault clusters closer to applications reduces network latency and improves responsiveness for systems that rely on frequent access to credentials or cryptographic services.

These new regions also strengthen multi-region disaster recovery strategies. HCP Vault Dedicated supports cross-region disaster recovery replication, allowing organizations to maintain a secondary Vault cluster in another region. If a primary region experiences an outage, teams can fail over to the disaster recovery replica to maintain access to secrets and security services.

With additional regional deployment options, organizations can:

  • Deploy Vault clusters closer to application workloads

  • Design disaster recovery architectures across geographically distinct regions

  • Reduce dependency on a small number of regions for failover planning

  • Improve performance for distributed applications accessing secrets and encryption services

For example, organizations operating in Europe can now pair regions such as:

  • Paris and Frankfurt

  • Paris and Stockholm

Similarly, organizations operating in Australia can deploy Vault clusters across Australia East and Australia Central, maintaining regional resilience while keeping infrastructure within national boundaries.

In addition to disaster recovery, HCP Vault Dedicated supports performance replication, allowing organizations to deploy secondary clusters closer to distributed workloads while maintaining centralized governance.

In this architecture:

  • A primary cluster acts as the system of record

  • Secondary clusters replicate configuration, policies, and secrets

  • Applications interact with the nearest Vault cluster to reduce latency

Expanding HCP Vault Dedicated into additional AWS and Azure regions makes it easier for organizations to design Vault architectures that balance performance, resilience, and regional infrastructure placement.

Getting started with the new regions

The new regions are available today when provisioning HCP Vault Dedicated clusters.

When creating a cluster, simply select the desired cloud provider and region in the HCP portal or via the HCP API.

With these additional deployment locations, organizations can more easily align Vault architectures with their global infrastructure footprint across AWS and Azure. For example, teams can now:

  • Deploy a primary Vault cluster in AWS Paris with a DR replica in AWS Stockholm

  • Run a primary cluster in Azure Australia East with a regional DR cluster in Australia Central

  • Place performance replicas closer to application workloads across Europe or APAC

These expanded regional options make it easier to design Vault architectures that meet resilience, performance, and compliance requirements without managing the operational complexity of running Vault clusters yourself.

To get started, sign-up or create a new HCP Vault Dedicated cluster in the HCP portal and select one of the newly available regions.

You can also review the full list of supported regions in the HCP Vault documentation.



from HashiCorp Blog https://ift.tt/OYxKEzL
via IFTTT

KadNap Malware Infects 14,000+ Edge Devices to Power Stealth Proxy Botnet

Cybersecurity researchers have discovered a new malware called KadNap that's primarily targeting Asus routers to enlist them into a botnet for proxying malicious traffic.

The malware, first detected in the wild in August 2025, has expanded to over 14,000 infected devices, with more than 60% of victims located in the U.S., according to the Black Lotus Labs team at Lumen. A lesser number of infections have been detected in Taiwan, Hong Kong, Russia, the U.K., Australia, Brazil, France, Italy, and Spain.

"KadNap employs a custom version of the Kademlia Distributed Hash Table (DHT) protocol, which is used to conceal the IP address of their infrastructure within a peer-to-peer system to evade traditional network monitoring," the cybersecurity company said in a report shared with The Hacker News.

Compromised nodes in the network leverage the DHT protocol to locate and connect with a command-and-control (C2) server, thereby making it resilient to detection and disruption efforts.

Once devices are successfully compromised, they are marketed by a proxy service named Doppelgänger ("doppelganger[.]shop"), which is assessed to be a rebrand of Faceless, another proxy service associated with TheMoon malware. Doppelgänger, according to its website, claims to offer resident proxies in over 50 countries that provide "100% anonymity." The service is said to have launched in May/June 2025.

Despite the focus on Asus routers, the operators of KadNap have been found to deploy the malware against an assorted set of edge networking devices.

Central to the attack is a shell script ("aic.sh") that's downloaded from the C2 server ("212.104.141[.]140"), which is responsible for initiating the process of conscripting the victim to the P2P network. The file creates a cron job to retrieve the shell script from the server at the 55-minute mark of every hour, rename it to ".asusrouter," and run it.

Once persistence is established, the script pulls a malicious ELF file, renames it to "kad," and executes it. This, in turn, leads to the deployment of KadNap. The malware is capable of targeting devices running both ARM and MIPS processors.

KadNap is also designed to connect to a Network Time Protocol (NTP) server to fetch the current time and store it along with the host uptime. This information serves as a basis to create a hash that's used to locate other peers in the decentralized network to receive commands or download additional files.

The files – fwr.sh and /tmp/.sose – contains functionality to close port 22, the standard TCP port for Secure Shell (SSH), on the infected device and extract a list of C2 IP address:port combinations to connect to.

"In short, the innovative use of the DHT protocol allows the malware to establish robust communication channels that are difficult to disrupt, by hiding in the noise of legitimate peer-to-peer traffic," Lumen said.

Further analysis has determined that not all compromised devices communicate with every C2 server, indicating the infrastructure is being categorized based on device type and models.

The Black Lotus Labs team told The Hacker News that Doppelgänger's bots are being abused by threat actors in the wild. "One issue there has been since these Asus (and other devices) are also sometimes co-infected with other malware, it is tricky to say who exactly is responsible for a specific malicious activity," the company said.

Users running SOHO routers are advised to keep their devices up to date, reboot them regularly, change default passwords, secure management interfaces, and replace models that are end-of-life and are no longer supported.

"The KadNap botnet stands out among others that support anonymous proxies in its use of a peer-to-peer network for decentralized control," Lumen concluded. "Their intention is clear, avoid detection and make it difficult for defenders to protect against."

New Linux Threat ClipXDaemon Emerges

The disclosure comes as Cyble detailed a new Linux threat dubbed ClipXDaemon that's designed to target cryptocurrency users by intercepting and altering copied wallet addresses. The clipper malware, delivered via Linux post-exploitation framework called ShadowHS, has been described as an autonomous cryptocurrency clipboard hijacker targeting Linux X11 environments.

Staged entirely in memory, the malware employs stealth techniques, such as process masquerading and Wayland session avoidance, while simultaneously monitoring the clipboard every 200 milliseconds and substituting cryptocurrency addresses with attacker-controlled wallets. It's capable of targeting Bitcoin, Ethereum, Litecoin, Monero, Tron, Dogecoin, Ripple, and TON wallets.

The decision to avoid execution in Wayland sessions is deliberate, as the display server protocol's security architecture places additional controls, like requiring explicit user interaction, before applications can access the clipboard content. In disabling itself under such scenarios, the malware aims to eliminate noise and avoid runtime failure.

"ClipXDaemon differs fundamentally from traditional Linux malware. It contains no command-and-control (C2) logic, performs no beaconing, and requires no remote tasking," the company said. "Instead, it monetizes victims directly by hijacking cryptocurrency wallet addresses copied in X11 sessions and replacing them in real time with attacker-controlled addresses."



from The Hacker News https://ift.tt/9v7zTeZ
via IFTTT

Which Tools Will Replace Microsoft MDT in 2026?

End of an era: Microsoft Deployment Toolkit (MDT) is officially retired. What should IT admins use instead in 2026? Our new article explains the best replacements for different environments.

Many IT admins who have been in IT for years, including me, relied on tools like Microsoft Deployment Toolkit (MDT) for efficient OS deployments in various environments. MDT has been a good tool for creating and managing Windows images, automating installations, and handling driver injections. However, with Microsoft’s recent announcement, it’s time to look ahead. On January 6, 2026, Microsoft declared the immediate retirement of MDT, meaning no more updates, security fixes, or even official support.

This retirement isn’t entirely surprising. MDT’s lifecycle was tied to underpinning technologies like Windows PE and WDS, which are evolving or being deprioritized. Support effectively ends after the first Configuration Manager release post-October 2025, and downloads have already been pulled from official channels

How the IT will be affected?

  • Existing deployments will continue to function, but they are no longer supported.
  • Download packages have been removed from official distribution channels, including the Microsoft Learn and Intune pages.

While MDT was a foundational tool for decades, its discontinuation reflects Microsoft’s shift toward cloud-first deployment strategies. Organizations using MDT should now prioritize migration to avoid long-term risks.

Microsoft’s Recommended Alternatives

  • Windows Autopilot for cloud-based, zero-touch deployment.
  • Configuration Manager (SCCM) Operating System Deployment (OSD) for on-premises environments.

Autopilot leverages Azure AD (now Entra ID) and Intune for device provisioning. It’s ideal for modern management without heavy imaging.

Technical Setup: Devices are pre-registered via hardware hashes uploaded to Intune. On first boot, they connect to the internet, authenticate, and pull configurations.

Capabilities: Supports OOBE customization, app deployment via Win32/MSI, driver updates from Windows Update, and policy enforcement (e.g., BitLocker, Defender). Use ESP (Enrollment Status Page) for progress tracking.

Pros for MDT Users: Zero-touch reduces manual intervention; integrates with Autopilot Reset for re-provisioning. Handles hybrid joins for on-prem AD.

Cons: Requires internet connectivity; less flexible for custom WIM images or offline scenarios. Not suited for bare-metal without OEM preloads.

Migration Tip: Export MDT drivers to Intune repositories; script custom tasks via Proactive Remediations.

Worth to note that for larger environments with 500+ devices, Autopilot scales better than MDT’s share-based model.

Configuration Manager OSD

If you have an existing ConfigMgr site, OSD is the direct evolution of MDT integration.

Technical Setup: Uses task sequences similar to MDT but with deeper SCCM features like software distribution points and boundary groups.

Capabilities: PXE booting via WDS/SCCM, dynamic driver packages, application models for conditional installs, and USMT for user state migration. Supports multicast for large-scale deployments.

Pros: Full on-prem control; integrates with SQL for reporting; handles complex branching logic in sequences.

Cons: Steeper learning curve and resource-intensive (requires dedicated servers). Licensing via Microsoft Endpoint Manager.

Migration Tip: Import MDT task sequences into SCCM, then refactor for native steps. Remove MDT add-ons to comply with retirement.

Third-Party Alternatives: SmartDeploy and WAPT

It’s often cost effective to seek elsewhere than Microsoft. (Not liking having my eggs in the same basket, right?)

Beyond Microsoft’s ecosystem, tools like SmartDeploy and WAPT offer flexible, cost-effective options for OS deployment.

SmartDeploy is a commercial tool positioned as a direct MDT replacement, focusing on hardware-independent imaging.

Technical Setup: Central console for building golden images on VMs, then deploying via USB, network, or cloud.

Capabilities: Platform Packs for drivers (over 1,000 models supported); WDS/PXE integration; answer files for unattended installs. Supports multilayer imaging to separate OS, apps, and drivers.

Pros: Reduces image count (one WIM per OS version); offline deployment; built-in migration from MDT without rebuilding everything.

Cons: Paid licensing; less cloud-native than Autopilot.

Migration Tip: Import MDT shares directly; use wizard to map drivers and scripts.

It’s great for SMBs needing MDT-like simplicity without ConfigMgr overhead.

WAPT: A Quick Introduction and Capabilities

WAPT (Windows APT) is an open-source package management and deployment tool inspired by Debian’s APT system, adapted for Windows environments. It’s designed for centralized software installation, updates, and configuration management across networks. While not a pure OS imaging tool like MDT, it excels in post-OS deployment tasks and can integrate with imaging workflows, making it a complementary alternative for app-heavy environments.

Quick Intro: Developed by Tranquil IT, WAPT uses a client-server architecture. WAPT enables centralized deployment of software, configurations, patches, and operating systems across Windows, Linux, and macOS environments. Tranquil IT also contributes to open-source projects such as OpenRSAT and AzureADConnect_Samba4.

wp-image-33662

The repo hosts packages for Windows, Linux or MacOS architectures

 

The server hosts repositories of packages (MSI, EXE, scripts), while agents on endpoints pull and execute them. The documentation is really great, with many screenshots. I’d highly recommend to do a POC before adopting, but with their documentation help it is a snap!

wp-image-33663

Example of importing of a package into the console from a repo

 

It’s free for basic use, with enterprise editions for advanced features. Installation involves running waptserversetup.exe on a Windows server (or Linux for better scalability), configuring Nginx as the web server, and setting up PostgreSQL for the database.

Key Capabilities:

  • Package Creation and Deployment: Build custom packages using WAPT’s console or PyScripter. Supports dependencies, pre/post-install scripts, and silent installs. Deploy via policies targeting OUs, groups, or hardware profiles.
  • Repository Management: Mirror external repos (e.g., Chocolatey) or create internal ones. Handles versioning, rollbacks, and audits.
  • Agent Management: Agents report inventory (hardware, software, configs) back to the server. Supports wake-on-LAN, remote execution, and self-service portals for users.
  • Security and Compliance: Enforces signatures on packages; integrates with AD for authentication (Kerberos on Linux servers). Monitors vulnerabilities and automates patches.
  • Scalability Limits: On Windows servers, handles up to 500 agents efficiently; switch to Linux for larger deployments or features like large file uploads.
  • OS Deployment Integration: While WAPT focuses on software, it can script OS prep tasks (e.g., partitioning, driver installs) and deploy apps during imaging via hooks in tools like WinPE or combined with WDS.

WAPT is great in environments needing granular app control without full imaging. For pure OS deploys, you can pair it with Autopilot or OSDCloud. Limitations include no native Kerberos on Windows servers and potential performance hits for very large packages.

wp-image-33664

Example of Windows Installer for WAPT server

Choosing the Right Tool for Your Environment

Selecting an MDT replacement depends on your infrastructure:

  • Cloud-First: Go with Autopilot for simplicity.
  • On-Prem Heavy: ConfigMgr OSD or SmartDeploy for robust task sequences.
  • Budget-Conscious/Open-Source: WAPT for app deployment, possibly extended to OS via scripts.
  • Hybrid: Combine Autopilot with WAPT for end-to-end management.

Test in a lab: Start with exporting MDT assets (images, drivers) and importing into the new tool. Monitor for hardware compatibility using tools like HWInfo or PowerShell’s Get-WmiObject.

wp-image-33666

Example of the console with Software inventory, Reporting, OS Deploy or Secondary repos

 

Link: WAPT and TranquilIT

Final Words

MDT’s end marks a shift to more automated, secure deployments. While it’s bittersweet, these alternatives offer better integration with modern Windows features. WAPT enables companies and public authorities to simply and securely deploy software, configurations, software and OS patches, and operating systems on Windows, Linux and macOS environments.

Perhaps one of our future blog posts, I’ll go a bit into details about how to setup a WAPT environment so you can deploy packages with this nice Open-Source tool. For enterprise environment and using the advanced functions, you should definitely get a license that allows you to manage enterprise environments with reporting, OS deployments, Software inventories etc.



from StarWind Blog https://ift.tt/T71iqv2
via IFTTT

New "LeakyLooker" Flaws in Google Looker Studio Could Enable Cross-Tenant SQL Queries

Cybersecurity researchers have disclosed nine cross-tenant vulnerabilities in Google Looker Studio that could have permitted attackers to run arbitrary SQL queries on victims' databases and exfiltrate sensitive data within organizations' Google Cloud environments.

The shortcomings have been collectively named LeakyLooker by Tenable. There is no evidence that the vulnerabilities were exploited in the wild. Following responsible disclosure in June 2025, the issues have been addressed by Google.

The list of security flaws is as follows -

"The vulnerabilities broke fundamental design assumptions, revealed a new attack class, and could have allowed attackers to exfiltrate, insert, and delete data in victims' services and Google Cloud environment," security researcher Liv Matan said in a report shared with The Hacker News.

"These vulnerabilities exposed sensitive data across Google Cloud Platform (GCP) environments, potentially affecting any organization using Google Sheets, BigQuery, Spanner, PostgreSQL, MySQL, Cloud Storage, and almost any other Looker Studio data connector."

Successful exploitation of the cross-tenant flaws could enable threat actors to gain access to entire datasets and projects across different cloud tenants.

Attackers could scan for public Looker Studio reports or obtain access to private ones that use these connectors (e.g., BigQuery) and seize control of the databases, allowing them to run arbitrary SQL queries across the owner's entire GCP project.

Alternatively, a victim creates a report as public or shares it with a specific recipient, and uses a JDBC-connected data source such as PostgreSQL. In this scenario, the attacker can take advantage of a logic flaw in the copy report feature that makes it possible to clone reports while retaining the original owner's credentials, enabling them to delete or modify tables.

Another high-impact path detailed by the cybersecurity company involved one-click data exfiltration, where sharing a specially crafted report forces a victim's browser to execute malicious code that contacts an attacker-controlled project to reconstruct entire databases from logs.

"The vulnerabilities broke the fundamental promise that a 'Viewer' should never be able to control the data they are viewing," Matan said, adding they "could have let attackers exfiltrate or modify data across Google services like BigQuery and Google Sheets."



from The Hacker News https://ift.tt/4hm9rzN
via IFTTT

What’s Holding Back AI Agents? It’s Still Security

It’s hard to find a team today that isn’t talking about agents. For most organizations, this isn’t a “someday” project anymore. Building agents is a strategic priority for 95% of respondents that we surveyed across the globe with 800+ developers and decision makers in our latest State of Agentic AI research. The shift is happening fast: agent adoption has moved beyond experiments and demos into something closer to early operational maturity. 60% of organizations already report having AI agents in production, though a third of those remain in early stages. 

Agent adoption today is driven by a pragmatic focus on productivity, efficiency, and operational transformation, not revenue growth or cost reduction. Early adoption is concentrated in internal, productivity-focused use cases, especially across software, infrastructure, and operations. The feedback loops are fast, and the risks are easier to control. 

whats holding agents back blog fig 1

So what’s holding back agent scaling? Friction shows up and nearly all roads lead to the same place: AI agent security. 

AI agent security isn’t one issue it’s the constraint

When teams talk about what’s holding them back, AI agent security rises to the top. In the same survey, 40% of respondents cite security as their top blocker when building agents. The reason it hits so hard is that it’s not confined to a single layer of the stack. It shows up everywhere, and it compounds as deployments grow.

For starters, when it comes to infrastructure, as organizations expand agent deployments, teams emphasize the need for secure sandboxing and runtime isolation, even for internal agents.

At the operations layer, complexity becomes a security problem. Once you have more tools, more integrations, and more orchestration logic, it gets harder to see what’s happening end-to-end and harder to control it. Our latest research data reflects that sprawl: over a third of respondents report challenges coordinating multiple tools, and a comparable share say integrations introduce security or compliance risk. That’s a classic pattern: operational complexity creates blind spots, and blind spots become exposure.

45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready.

And at the governance layer, enterprises want something simple: consistency. They want guardrails, policy enforcement, and auditability that work across teams and workflows. But current tooling isn’t meeting that bar yet. In fact, 45% of organizations say the biggest challenge is ensuring tools are secure, trusted, and enterprise-ready. That’s not a minor complaint: it’s the difference between “we can try this” and “we can scale this.”

MCP is popular but not ready for enterprise

Many teams are adopting Model Context Protocol (MCP) because it gives agents a standardized way to connect to tools, data, and external systems, making agents more useful and customized.  Among respondents further along in their agent journey,  85% say they’re familiar with MCP and two-thirds say they actively use it across personal and professional projects. 

Research data suggests that most teams are operating in what could be described as “leap-of-faith mode” when it comes to MCP, adopting the protocol without security guarantees and operational controls they would demand from mature enterprise infrastructure.

But the security story hasn’t caught up yet. Teams adopt MCP because it works, but they do so without the security guarantees and operational controls they would expect from mature enterprise infrastructure. For teams earlier in their agentic journey: 46% of them identify  security and compliance as the top challenge with MCP.

Organizations are increasingly watching for threats like prompt injection and tool poisoning, along with the more foundational issues of access control, credentials, and authentication. The immaturity and security challenges of current MCP tooling make for a fragile foundation at this stage of agentic adoption.

Conclusion and recommendations

Ai agent security is what sets the speed limit for agentic AI in the enterprise. Organizations aren’t lacking interest, they’re lacking confidence that today’s tooling is enterprise-ready, that access controls can be enforced reliably, and that agents can be kept safely isolated from sensitive systems.  

The path forward is clear. Unlocking agents’ full potential will require new platforms built for enterprise scale, with secure-by-default foundations, strong governance, and policy enforcement that’s integrated, not bolted on.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise. 

Join us on March 25, 2026, for a webinar where we’ll walk through the key findings and the strategies that can help you prioritize what comes next.

Learn more:



from Docker https://ift.tt/L5jqz39
via IFTTT