Wednesday, February 18, 2026

Data Mesh vs. Data Fabric: Modern Data Management Challenges

Previously, we explored the fundamentals of centralized storage management including data warehouse architecture.

While the warehouse remains a critical component of the data stack, data management frameworks have evolved significantly to address the limitations of monolithic centralization.

As organizations face exploding data volumes and increasingly distributed teams, the conversation has shifted from “how do we store it?” to “how do we manage access and ownership at scale?”

Two concepts have emerged as possible answers to modern data management challenges: Data Mesh and Data Fabric. Both aim to solve the friction of enterprise data, yet they approach the challenge from completely different angles.

What is a data mesh?

A data mesh is an architectural and organizational approach. It changes how data ownership works inside a company. Instead of one central data team controlling everything, responsibility shifts to business domains.

A domain can be a department or a product group. Each domain treats its data as a product. That means the team that knows the data best also manages its quality, access rules, and documentation.

Data mesh stands on four main ideas:

  • Domain ownership of data
  • Data treated as a product
  • Self-service data platform
  • Federated governance

The key point is decentralization. Data mesh accepts that large organizations are already distributed and aligns the data architecture with the business structure.

Examples of Data Mesh Implementation

Data mesh is especially useful in industries where data comes from many independent streams and teams need to move fast.

Retail: Data originates from online stores, physical locations, logistics systems, and customer analytics platforms. In a monolithic system, these are dumped into one lake. With data mesh, the e-commerce team owns clickstream and cart data, while the supply chain team manages inventory and shipment data. These datasets are exposed as well-defined products which other teams can consume through standard interfaces.
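
To make the idea of data as a product concrete, here is a minimal sketch in Python of the kind of contract a domain team might publish alongside its dataset. The field names and the clickstream_orders example are illustrative assumptions, not part of any specific mesh platform.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataProductContract:
    """Illustrative descriptor a domain team could publish with its data product."""
    name: str                   # e.g. "clickstream_orders"
    owner_domain: str           # the domain accountable for quality and access
    output_port: str            # where consumers read it (table, topic, or API endpoint)
    schema_version: str         # consumers pin a version; owners evolve it deliberately
    freshness_sla_minutes: int  # how stale the data is allowed to get
    pii_fields: List[str] = field(default_factory=list)  # feeds federated governance policies

# The e-commerce domain exposing cart/clickstream data for other teams to consume.
clickstream_orders = DataProductContract(
    name="clickstream_orders",
    owner_domain="e-commerce",
    output_port="warehouse.ecommerce.clickstream_orders_v2",
    schema_version="2.1.0",
    freshness_sla_minutes=30,
    pii_fields=["customer_id", "ip_address"],
)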

Finance: In banks and fintech, data is generated by trading systems, risk platforms, customer onboarding tools, and fraud detection engines. These systems are built by different teams under strict compliance rules. A data mesh allows each domain to manage its own data products while following shared governance policies. This allows risk analysts to consume trading data products without needing direct access to raw operational systems, balancing autonomy with regulatory control.

Manufacturing: Data is produced by sensors, maintenance systems, production lines, and quality control tools, often spread across multiple plants. A data mesh model allows each plant or production domain to own its pipelines and expose standardized data products. Central analytics teams can then combine them without building custom integrations for every single facility.

Energy: Grid operations, renewable sites, trading desks, and customer billing systems operate as largely independent domains. Operational technology (OT) and IT systems rarely speak the same language. Data mesh helps by letting each operational domain manage its own data products. This makes grid data and market data visible and usable, rather than keeping them locked in technical silos.

What is Data Fabric?

A data fabric is a technology-focused approach to data management. Its goal is to connect data across different environments—on-prem, cloud, and hybrid—to make it easier to access and work with.

Unlike data mesh, data fabric does not mainly change the organization’s structure. Instead, it focuses on building a unified layer across existing systems. Ownership stays the same, but day-to-day access becomes much easier.

Data fabric relies heavily on automation. It uses metadata, data catalogs, and integration tools to discover datasets and manage data flows. Modern solutions utilize AI and machine learning to:

  • Classify data automatically.
  • Detect patterns and suggest links between datasets.
  • Flag data quality issues.

This reduces manual work for data engineers and speeds up analytics projects.
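
As a rough illustration of the metadata-driven classification a fabric layer automates, here is a small Python sketch that tags columns as potentially sensitive based on name patterns and sample values. The heuristics are assumptions for the example; real fabric products combine trained models, reference data, and lineage rather than simple patterns.

import re

NAME_HINTS = {
    "email": re.compile(r"e[-_]?mail", re.I),
    "phone": re.compile(r"phone|msisdn", re.I),
    "national_id": re.compile(r"ssn|passport|national[-_]?id", re.I),
}
VALUE_HINTS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?\d[\d\s\-]{7,14}$"),
}

def classify_column(column_name, sample_values):
    """Return the set of sensitivity tags suggested by name and value heuristics."""
    tags = {tag for tag, pattern in NAME_HINTS.items() if pattern.search(column_name)}
    for value in sample_values:
        for tag, pattern in VALUE_HINTS.items():
            if pattern.match(value):
                tags.add(tag)
    return tags

print(classify_column("contact_email", ["alice@example.com"]))  # {'email'}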

Examples of Data Fabric Implementation

Data fabric is valuable where data is scattered across many platforms and needs to be connected without heavy reengineering.

Customer 360: In customer-facing businesses, information is often fragmented across CRM systems, support tools, marketing platforms, and billing databases. A data fabric links these sources through metadata and integration pipelines. Support teams get a complete customer profile without the need to physically move all data into one massive system.

Regulatory Compliance: Industries with strict regulations need visibility into sensitive data. A data fabric can automatically tag personal or financial information and enforce policies across systems. This gives security teams control without requiring them to manually check every database.

AI and Data Science: For AI workloads, data preparation is often the most time-consuming phase. With a data fabric, datasets are easier to find and understand. Automated metadata and lineage tracking shorten the path from raw data to model training, allowing data scientists to spend time building models rather than hunting for data.

Data Mesh vs. Data Fabric: The Core Differences

The primary divergence between these two approaches lies in their philosophy toward complexity. Data Mesh views silos as a necessary byproduct of business complexity and seeks to manage them through federated cooperation. It is fundamentally a “people-first” approach, relying on domain expertise to define what good data looks like and how it should be used.

Data Fabric, conversely, treats silos as a technical inefficiency to be bridged by a unified virtualization layer. It is a “technology-first” approach, leveraging AI and metadata automation to create a cohesive map of the enterprise data without necessarily requiring teams to change how they work.

Aspect | Data Mesh | Data Fabric
Core philosophy | Decentralized (people and process) | Unified (technology and automation)
Primary goal | Treat data as a product owned by domains | Connect data through a unified metadata layer
Governance | Federated (global standards, local execution) | Centralized (automated policy enforcement)
Scalability | Scales by adding more domains and products | Scales via platform capabilities and automation
Source of agility | Team-level speed (autonomy) | Integration speed (fast connection of sources)

The Hybrid Model: Using Them Together

Data mesh and data fabric are not competing in a strict sense; they often solve different parts of the same problem. One answers the question of who owns the data, while the other focuses on how data is connected.

Many organizations use both. You can implement a Data Mesh to define ownership and culture, while using a Data Fabric to provide the underlying integration, metadata, and governance layer.

  • Mesh for people: Assigns responsibility to business domains to ensure data relevance and quality.
  • Fabric for tech: Provides the automated “plumbing” that allows those domains to share data without building custom integrations every time.

Insights from the trenches

While the concepts sound ideal on paper, engineers often highlight the physical limitations of these architectures. Here are the common hurdles from real-world implementations:

  1. The “physics” of data mesh
    A common misconception is that Data Mesh eliminates the need to move data. In reality, network latency kills all the fun. Querying a 1TB table across a WAN link will time out before it completes. You cannot just “virtualize” everything. For heavy analytics, you still need robust storage and caching closer to compute.
  2. Scale is the gatekeeper
    Neither approach makes sense for small teams. Practitioners from large enterprises note that Mesh implementations can take a company several years. If you don’t have multiple domains effectively “fighting” over data access, a monolith is often faster and cheaper.
  3. The troubleshooting nightmare
    When a centralized pipeline breaks, you know where to look. When a federated query across four different domain products fails, troubleshooting becomes a forensic investigation. Decentralization requires more maturity in observability, not less.

Conclusion

Data mesh and data fabric are not competing in a strict sense. They solve different parts of the same problem. One answers the question of who owns the data, while the other focuses on how data is connected.

Many organizations use both. A data mesh model can define ownership, while a data fabric provides integration, metadata, and governance underneath. Used together, they form a practical and flexible approach to modern data management.



from StarWind Blog https://ift.tt/8WiQtl0
via IFTTT

Cybersecurity Tech Predictions for 2026: Operating in a World of Permanent Instability

In 2025, navigating the digital seas still felt like a matter of direction. Organizations charted routes, watched the horizon, and adjusted course to reach safe harbors of resilience, trust, and compliance.

In 2026, the seas are no longer calm between storms. Cybersecurity now unfolds in a state of continuous atmospheric instability: AI-driven threats that adapt in real time, expanding digital ecosystems, fragile trust relationships, persistent regulatory pressure, and accelerating technological change. This is not turbulence on the way to stability; it is the climate.

In this environment, cybersecurity technologies are no longer merely navigational aids. They are structural reinforcements. They determine whether an organization endures volatility or learns to function normally within it. That is why security investments in 2026 are increasingly made not for coverage, but for operational continuity: sustained operations, decision-grade visibility and controlled adaptation as conditions shift.

This article is less about what’s “next-gen” and more about what becomes non-negotiable when conditions keep changing: the shifts that will steer cybersecurity priorities and determine which investments hold when conditions turn.

Regulation and geopolitics become architectural constraints

Regulation is no longer something security reacts to. It is something systems are built to withstand continuously.

Cybersecurity is now firmly anchored at the intersection of technology, regulation and geopolitics. Privacy laws, digital sovereignty requirements, AI governance frameworks and sector-specific regulations no longer sit on the side as periodic compliance work; they operate as permanent design parameters, shaping where data can live, how it can be processed and what security controls are acceptable by default.

At the same time, geopolitical tensions increasingly translate into cyber pressure: supply-chain exposure, jurisdictional risk, sanctions regimes and state-aligned cyber activity all shape the threat landscape as much as vulnerabilities do.

As a result, cybersecurity strategies must integrate regulatory and geopolitical considerations directly into architecture and technology decisions, rather than treating them as parallel governance concerns.

Changing the conditions: Making the attack surface unreliable

Traditional cybersecurity often tried to forecast specific events: the next exploit, the next malware campaign, the next breach. But in an environment where signals multiply, timelines compress and AI blurs intent and scale, those forecasts decay quickly. The problem isn’t that prediction is useless. It’s that it expires faster than defenders can operationalize it.

So the advantage shifts. Instead of trying to guess the next move, the stronger strategy is to shape the conditions attackers need to succeed.

Attackers depend on stability: time to map systems, test assumptions, gather intelligence and establish persistence. The modern counter-move is to make that intelligence unreliable and short-lived. By using tools like Automated Moving Target Defense (AMTD) to dynamically alter system and network parameters, Advanced Cyber Deception that diverts adversaries away from critical systems, or Continuous Threat Exposure Management (CTEM) to map exposure and reduce exploitability, defenders shrink the window in which an intrusion chain can be assembled.

This is where security becomes less about “detect and respond” and more about deny, deceive and disrupt before an attacker’s plan becomes momentum.

The goal is simple: shorten the shelf-life of attacker knowledge until planning becomes fragile, persistence becomes expensive and “low-and-slow” stops paying off.

AI becomes the acceleration layer of the cyber control plane

AI is no longer a feature layered on top of security tools. It is increasingly infused inside them across prevention, detection, response, posture management and governance.

The practical shift is not “more alerts,” but less friction: faster correlation, better prioritization and shorter paths from raw telemetry to usable decisions.

The SOC becomes less of an alert factory and more of a decision engine, with AI accelerating triage, enrichment, correlation and the translation of scattered signals into a coherent narrative. Investigation time compresses because context arrives faster, and response becomes more orchestrated because routine steps can be drafted, sequenced and executed with far less manual stitching.

But the bigger story is what happens outside the SOC. AI is increasingly used to improve the efficiency and quality of cybersecurity controls: asset and data discovery become faster and more accurate; posture management becomes more continuous and less audit-driven; policy and governance work becomes easier to standardize and maintain. Identity operations, in particular, benefit from AI-assisted workflows that improve provisioning hygiene, strengthen recertification by focusing reviews on meaningful risk and reduce audit burden by accelerating evidence collection and anomaly detection.

This is the shift that matters. Security programs stop spending energy assembling complexity and start spending it steering outcomes.

Security becomes a lifecycle discipline across digital ecosystems

Most breaches do not start with a vulnerability. They start with an architectural decision made months earlier.

Cloud platforms, SaaS ecosystems, APIs, identity federation and AI services continue to expand digital environments at a faster rate than traditional security models can absorb. The key shift is not merely that the attack surface grows, but that interconnectedness changes what “risk” means.

Security is therefore becoming a lifecycle discipline: integrated throughout the entire system lifecycle, not just development. It starts at architecture and procurement, continues through integration and configuration, extends into operations and change management and is proven during incidents and recovery.

In practice, that means the lifecycle now includes what modern ecosystems are actually made of: secure-by-design delivery through the SDLC and digital supply chain security to manage the risks inherited from third-party software, cloud services and dependencies.

Leading organizations move away from security models focused on isolated components or single phases. Instead, security is increasingly designed as an end-to-end capability that evolves with the system, rather than trying to bolt on controls after the fact.

Zero Trust as continuous decisioning and adaptive control

In a world where the perimeter dissolved long ago, Zero Trust stops being a strategy and becomes the default infrastructure. Especially as trust itself becomes dynamic.

The key shift is that access is no longer treated as a one-time gate. Zero Trust increasingly means continuous decisioning: permission is evaluated repeatedly, not granted once. Identity, device posture, session risk, behavior and context become live inputs into decisions that can tighten, step up, or revoke access as conditions change.
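
A minimal sketch, assuming hypothetical signal names, of what continuous decisioning can look like in code; a real policy engine would pull these inputs from identity, device management, and session telemetry, and re-evaluate them on every sensitive action rather than once at login.

from dataclasses import dataclass

@dataclass
class AccessContext:
    # Live inputs re-evaluated per request; the names and thresholds are illustrative.
    identity_risk: float      # 0.0 (clean) .. 1.0 (likely compromised)
    device_compliant: bool    # posture signal from device management
    session_age_minutes: int
    action_sensitivity: str   # "routine" or "sensitive"

def authorize(ctx: AccessContext) -> str:
    """Return 'allow', 'step_up' (require stronger proof), or 'revoke'."""
    if ctx.identity_risk >= 0.8 or not ctx.device_compliant:
        return "revoke"       # tighten or kill access as conditions worsen
    if ctx.action_sensitivity == "sensitive" or ctx.session_age_minutes > 60:
        return "step_up"      # sensitive actions demand stronger proof
    return "allow"

# The same session can get different answers minutes apart as signals change.
print(authorize(AccessContext(0.1, True, 5, "routine")))   # allow
print(authorize(AccessContext(0.1, True, 90, "routine")))  # step_up
print(authorize(AccessContext(0.9, True, 5, "routine")))   # revoke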

With identity designed as a dynamic control plane, Zero Trust expands beyond users to include non-human identities such as service accounts, workload identities, API tokens and OAuth grants. This is why identity threat detection and response becomes essential: detecting token abuse, suspicious session behavior and privilege path anomalies early, then containing them fast. Continuous authorization makes stolen credentials less durable, limits how far compromise can travel and reduces the Time-To-Detection dependency by increasing the Time-To-Usefulness friction for attackers. Segmentation then does the other half of the job, containing the blast radius by design so that a local compromise does not turn into systemic spread.

The most mature Zero Trust programs stop measuring success by deployment milestones and start measuring it by operational outcomes: how quickly access can be constrained when risk rises, how fast sessions can be invalidated, how small the blast radius remains when an identity is compromised and how reliably sensitive actions require stronger proof than routine access.

Data security and privacy engineering unlock scalable AI

Data is the foundation of digital value and simultaneously the fastest path to regulatory, ethical and reputational damage. That tension is why data security and privacy engineering are becoming non-negotiable foundations, not governance add-ons. When organizations can’t answer basic questions such as what data exists, where it lives, who can access it, what it is used for and how it moves, every initiative built on data becomes fragile. This is what ultimately determines whether AI projects can scale without turning into a liability.

Data security programs must evolve from “protect what we can see” to govern how the business actually uses data. That means building durable foundations around visibility (discovery, classification, lineage), ownership, enforceable access and retention rules and protections that follow data across cloud, SaaS, platforms and partners. A practical way to build this capability is through a Data Security Maturity Model to identify gaps across the core building blocks, prioritize what to strengthen first and initiate a maturity journey toward consistent, measurable and continuous data protection throughout its lifecycle.

Privacy engineering also becomes the discipline that makes those foundations usable and scalable. It shifts privacy from documentation to design through purpose-based access, minimization by default and privacy-by-design patterns embedded in delivery teams. The result is data that can move quickly with guardrails, without turning growth into hidden liability.

Post-Quantum Risk makes crypto agility a design requirement

Quantum computing is still emerging, but its security impact is already tangible because adversaries plan around time. “Harvest now, decrypt later” turns encrypted traffic collected now into future leverage. “Trust now, forge later” carries the same logic into trust systems: certificates, signed code and long-lived signatures that anchor security decisions today could become vulnerable later.

Governments have understood this timing problem and started to put dates on it, with first milestones as early as 2026 for EU governments and critical infrastructure operators to develop national post-quantum roadmaps and cryptographic inventories. Even if the rules start in the public sector, they travel fast through the supply chain and into the private sector.

This is why crypto agility becomes a design requirement rather than a future upgrade project. Cryptography is not a single control in one place. It is embedded across protocols, applications, identity systems, certificates, hardware, third-party products and cloud services. If an organization cannot rapidly locate where cryptography lives, understand what it protects and change it without breaking operations, it is not “waiting for PQC.” It is accumulating cryptographic debt under a regulatory clock.

Post-quantum preparedness therefore becomes less about picking replacement algorithms and more about building the ability to evolve: cryptographic asset visibility, disciplined key and certificate lifecycle management, upgradable trust anchors where possible and architectures that can rotate algorithms and parameters without disruption.
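
One way to picture crypto agility in code: route every cryptographic operation through a named, configurable provider instead of hard-coding an algorithm at each call site. The Python sketch below covers hashing only, using the standard hashlib module; the registry and configuration names are assumptions, and a real program would apply the same pattern to signatures and key exchange.

import hashlib

# Central registry: call sites ask for the configured algorithm, not a hard-coded one.
HASH_REGISTRY = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}
ACTIVE_HASH = "sha256"  # a configuration value, not a constant buried in application code

def digest(data: bytes, algorithm: str = "") -> str:
    """Hash data with the currently configured algorithm."""
    name = algorithm or ACTIVE_HASH
    return HASH_REGISTRY[name](data).hexdigest()

# Rotating algorithms becomes an inventory and configuration exercise:
# every caller of digest() moves at once when ACTIVE_HASH changes.
print(digest(b"artifact-to-fingerprint"))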

Cryptographic risk is no longer a future problem. It is a present design decision with long-term consequences.

Taken together, these shifts change what “good” looks like.

Security stops being judged by how much it covers and starts being judged by what it enables: resilience, clarity and controlled adaptation when conditions refuse to cooperate.

The strongest security programs are not the most rigid ones. They are the ones that adapt without losing control.

The digital environment does not promise stability, but it does reward preparation. Organizations that integrate security across the system lifecycle, treat data as a strategic asset, engineer for cryptographic evolution and reduce human friction are better positioned to operate with confidence in a world that keeps shifting.

Turbulence is no longer exceptional. It’s the baseline. The organizations that succeed are the ones designed to operate anyway.

Read Digital Security Magazine – 18th Edition.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/ZQBsePt
via IFTTT

3 Ways to Start Your Intelligent Workflow Program

Security, IT, and engineering teams today are under relentless pressure to accelerate outcomes, cut operational drag, and unlock the full potential of AI and automation. But simply investing in tools isn’t enough. 88% of AI proofs-of-concept never make it to production, even though 70% of workers cite freeing time for high-value work as the primary AI automation motivation. Real impact comes from intelligent workflows that combine automation, AI-driven decisioning, and human ingenuity into seamless processes that work across teams and systems. 

In this article, we’ll highlight three use cases across Security and IT that can serve as powerful starting points for your intelligent workflow program. For each use case, we’ll share a pre-built workflow to help you tackle real bottlenecks in your organization with automation while connecting directly into your existing tech stack. These use cases are great starting points to help you turn theory into practice and achieve measurable gains from day one.

Workflow #1 Automated Phishing Response 

For security teams, responding to phishing emails can be a slow, burdensome process given the number of alerts and the growing sophistication of phishing attacks. By streamlining phishing analysis with automated workflows, security teams of all sizes get time back to focus on more critical issues and alerts. 

Our first workflow, Analyze phishing email senders, URLs, and attachments, uses VirusTotal, URLScan.io, and Sublime Security to analyze key aspects of phishing emails such as file attachments, website behavior, email sender reputation, and detection rule matching. It then consolidates all of the results and displays them in a Tines page, which can be sent via email for archiving or further analysis.
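
To get a feel for the enrichment this workflow automates, here is a minimal Python sketch that looks up an attachment's SHA-256 in VirusTotal's v3 file-report API; the API key and file name are placeholders, and the URLScan.io and Sublime Security steps, error handling, and rate limiting are omitted.

import hashlib
import requests

VT_API_KEY = "YOUR_API_KEY"  # placeholder

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def virustotal_verdict(file_hash):
    """Fetch an existing VirusTotal file report (no upload) and return the verdict stats."""
    resp = requests.get(
        "https://www.virustotal.com/api/v3/files/" + file_hash,
        headers={"x-apikey": VT_API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # last_analysis_stats summarizes engine verdicts (malicious, suspicious, harmless, ...).
    return resp.json()["data"]["attributes"]["last_analysis_stats"]

stats = virustotal_verdict(sha256_of("suspicious_attachment.pdf"))
print("malicious engine verdicts:", stats.get("malicious", 0))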

Workflow #2 Agents for IT Service Request Automation

IT service desks are often overwhelmed with repetitive, time-consuming requests like password resets, software access provisioning, hardware troubleshooting, and account management. These tasks pull valuable technical resources away from strategic initiatives. When AI agents are deployed to handle these routine service requests, organizations can dramatically reduce response times from hours to seconds, get closer to true 24/7 availability, and free IT teams to focus on complex problems that require human expertise.

The Automate IT service requests using Slack and agents workflow creates AI agents to categorize and process IT service requests. From a Slack message, the workflow sorts requests into three categories: password resets, application access, or other actions. Each request is then handled by a specialized agent.

The password reset agent verifies user identity and management relationships before processing. The application request agent identifies the correct application owner and facilitates access. Responses are handled over Slack, creating a self-serve flow that reduces manual IT involvement while letting teams decide when AI acts and when humans stay in the loop.
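
Stripped of the platform specifics, the routing behind this kind of agent setup reduces to classifying a request and handing it to a specialized handler. The Python sketch below is a toy keyword-based stand-in for the AI categorization step, with hypothetical handler behavior.

def handle_password_reset(request):
    # The real agent would first verify identity and the management relationship.
    return "Identity verification started; reset instructions will follow."

def handle_application_access(request):
    return "Routing to the application owner for approval."

def handle_other(request):
    return "Escalating to the IT service desk queue."

def route_request(text):
    """Toy classifier standing in for the AI categorization step."""
    lowered = text.lower()
    if "password" in lowered or "locked out" in lowered:
        return handle_password_reset(text)
    if "access to" in lowered or "license" in lowered:
        return handle_application_access(text)
    return handle_other(text)

print(route_request("I'm locked out and need a password reset"))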

Workflow #3 Monitor and Manage Vulnerabilities

Security teams face an unrelenting stream of newly disclosed vulnerabilities. CISA's Known Exploited Vulnerabilities catalog is updated continuously as threat actors actively weaponize critical flaws. Automating the connection between vulnerability intelligence feeds and your asset inventory turns this reactive scramble into a proactive defense. By automating the vulnerability detection process, security teams can cut response windows from days to minutes and ensure they prioritize patching efforts based on real exposure rather than theoretical risk.

Without automation, organizations rely on manual monitoring of security bulletins, time-consuming spreadsheet comparisons between vulnerability databases and asset inventories, and delayed communications that leave critical gaps unaddressed while attackers move at machine speed. The result is increased breach risk, compliance failures, and security teams buried in manual triage work instead of strategic threat hunting and remediation.

The Check for new CISA vulnerabilities workflow monitors the CISA Vulnerability RSS feed and then uses the Tenable Vulnerability Management platform to check for any vulnerable systems. If vulnerabilities are detected, a message is sent via Microsoft Teams.
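
For reference, polling the KEV catalog is straightforward to script. The Python sketch below reads CISA's published KEV JSON feed and prints entries added since a given date; the feed URL and field names reflect CISA's documented format at the time of writing (verify before relying on them), and the Tenable lookup and Teams notification steps are left out.

import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def kev_entries_added_since(date_iso):
    """Return KEV entries whose dateAdded is on or after date_iso (YYYY-MM-DD)."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    return [v for v in catalog.get("vulnerabilities", []) if v.get("dateAdded", "") >= date_iso]

for vuln in kev_entries_added_since("2026-02-17"):
    # The full workflow would check these CVE IDs against the asset inventory and alert.
    print(vuln["cveID"], "-", vuln.get("vulnerabilityName", ""))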

Intelligent Workflows that Keep Humans in the Loop 

Intelligent workflows aren’t about replacing people, they’re about amplifying them. The three workflows above demonstrate how you can quickly move from isolated automation to connected, intelligent systems that blend AI, integrations, and human oversight to solve real operational problems.

Whether you’re responding to security threats, streamlining IT requests, or improving visibility into risk, these pre-built workflows provide practical, production-ready foundations you can adapt and extend as your needs evolve.

Tines’ intelligent workflow platform unites automation, AI agents, and human-in-the-loop controls to reduce repetitive “muckwork,” speed execution, and free teams to focus on higher-value work — while ensuring governance, integration, and scale so pilots don’t stall before they realize true value.

Get started today with one of these pre-built workflows or another from our broader story library. Prove the value first-hand and use it as a blueprint to scale an intelligent workflow program that drives meaningful impact across your organization.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/TWp4c5u
via IFTTT

Notepad++ Fixes Hijacked Update Mechanism Used to Deliver Targeted Malware

Notepad++ has released a security fix to plug gaps that were exploited by an advanced threat actor from China to hijack the software update mechanism to selectively deliver malware to targets of interest.

The version 8.9.2 update incorporates what maintainer Don Ho calls a "double lock" design that aims to make the update process "robust and effectively unexploitable." This includes verification of the signed installer downloaded from GitHub (implemented in version 8.8.9 and later), as well as the newly added verification of the signed XML returned by the update server at notepad-plus-plus[.]org.

In addition to these enhancements, security-focused changes have been introduced to WinGUp, the auto-updater component -

  • Removal of libcurl.dll to eliminate DLL side-loading risk
  • Removal of two unsecured cURL SSL options: CURLSSLOPT_ALLOW_BEAST and CURLSSLOPT_NO_REVOKE
  • Restriction of plugin management execution to programs signed with the same certificate as WinGUp

The update also addresses a high-severity vulnerability (CVE-2026-25926, CVSS score: 7.3) that could result in arbitrary code execution in the context of the running application.

"An Unsafe Search Path vulnerability (CWE-426) exists when launching Windows Explorer without an absolute executable path," Ho said. "This may allow execution of a malicious explorer.exe if an attacker can control the process working directory. Under certain conditions, this could lead to arbitrary code execution in the context of the running application."

The development comes weeks after Notepad++ disclosed that a breach at the hosting provider level enabled threat actors to hijack update traffic starting in June 2025 and redirect requests from certain users to malicious servers to serve a poisoned update. The issue was detected in early December 2025.

According to Rapid7 and Kaspersky, the tampered updates enabled the attackers to deliver a previously undocumented backdoor dubbed Chrysalis. The supply chain incident, tracked under the CVE identifier CVE-2025-15556 (CVSS score: 7.7), has been attributed to a China-nexus hacking group called Lotus Panda.

Notepad++ users are advised to update to version 8.9.2 and to make sure installers are downloaded only from the official domain.



from The Hacker News https://ift.tt/hYLarc3
via IFTTT

CISA Flags Four Security Flaws Under Active Exploitation in Latest KEV Update

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added four security flaws to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation in the wild.

The list of vulnerabilities is as follows -

  • CVE-2026-2441 (CVSS score: 8.8) - A use-after-free vulnerability in Google Chrome that could allow a remote attacker to potentially exploit heap corruption via a crafted HTML page.
  • CVE-2024-7694 (CVSS score: 7.2) - An arbitrary file upload vulnerability in TeamT5 ThreatSonar Anti-Ransomware versions 3.4.5 and earlier that could allow an attacker to upload malicious files and achieve arbitrary system command execution on the server.
  • CVE-2020-7796 (CVSS score: 9.8) - A server-side request forgery (SSRF) vulnerability in Synacor Zimbra Collaboration Suite (ZCS) that could allow an attacker to send a crafted HTTP request to a remote host and obtain unauthorized access to sensitive information.
  • CVE-2008-0015 (CVSS score: 8.8) - A stack-based buffer overflow vulnerability in Microsoft Windows Video ActiveX Control that could allow an attacker to achieve remote code execution by setting up a specially crafted web page.

The addition of CVE-2026-2441 to the KEV catalog comes days after Google acknowledged that "an exploit for CVE-2026-2441 exists in the wild." It's currently not known how the vulnerability is being weaponized, but such information is typically withheld until a majority of users have installed the fix, so as to prevent other threat actors from joining the exploitation bandwagon.

As for CVE-2020-7796, a report published by threat intelligence firm GreyNoise in March 2025 revealed that a cluster of about 400 IP addresses was actively exploiting multiple SSRF vulnerabilities, including CVE-2020-7796, to target susceptible instances in the U.S., Germany, Singapore, India, Lithuania, and Japan.

"When a user visits a web page containing an exploit detected as Exploit:JS/CVE-2008-0015, it may connect to a remote server and download other malware," Microsoft notes in its threat encyclopedia. It also said it's aware of cases where the exploit is used to download and execute Dogkild, a worm that propagates via removable drives.

The worm comes with capabilities to retrieve and run additional binaries, overwrite certain system files, terminate a long list of security-related processes, and even replace the Windows Hosts file in an attempt to prevent users from accessing websites associated with security programs.

It's presently unclear how the TeamT5 ThreatSonar Anti-Ransomware vulnerability is being exploited. Federal Civilian Executive Branch (FCEB) agencies are recommended to apply the necessary fixes by March 10, 2026, for optimal protection.



from The Hacker News https://ift.tt/6rGMQlk
via IFTTT

Evaluating AI Models in 2026

Aaron and Brian review some of the latest AI model releases and discuss how they would evaluate them through the lens of an Enterprise AI Architect. 

SHOW: 1003

SHOW TRANSCRIPT: The Cloudcast #1003 Transcript

SHOW VIDEO: https://youtube.com/@TheCloudcastNET 

NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS" 

SHOW NOTES:

TAKEAWAYS

  • The frequency of AI model releases can lead to numbness among users.
  • Evaluating AI models requires understanding their specific use cases and benchmarks.
  • Enterprises must consider the compatibility and integration of new models with existing systems.
  • Benchmarks are becoming more accessible but still require careful interpretation.
  • The rapid pace of AI development creates challenges for enterprise adoption and integration.
  • Companies need to be proactive in managing the versioning of AI models.
  • The industry may need to establish clearer standards for evaluating AI performance.
  • Efficiency and cost-effectiveness are becoming critical metrics for AI adoption.
  • The timing of model releases can impact their market reception and user adoption.
  • Businesses must adapt to the fast-paced changes in AI technology to remain competitive.

FEEDBACK?



from The Cloudcast (.NET) https://ift.tt/9aDM5EL
via IFTTT

Tuesday, February 17, 2026

From BRICKSTORM to GRIMBOLT: UNC6201 Exploiting a Dell RecoverPoint for Virtual Machines Zero-Day

Written by: Peter Ukhanov, Daniel Sislo, Nick Harbour, John Scarbrough, Fernando Tomlinson, Jr., Rich Reece


Introduction 

Mandiant and Google Threat Intelligence Group (GTIG) have identified the zero-day exploitation of a high-risk vulnerability in Dell RecoverPoint for Virtual Machines, tracked as CVE-2026-22769, with a CVSSv3.0 score of 10.0. Analysis of incident response engagements revealed that UNC6201, a suspected PRC-nexus threat cluster, has exploited this flaw since at least mid-2024 to move laterally, maintain persistent access, and deploy malware including SLAYSTYLE, BRICKSTORM, and a novel backdoor tracked as GRIMBOLT. The initial access vector for these incidents was not confirmed, but UNC6201 is known to target edge appliances (such as VPN concentrators) for initial access. There are notable overlaps between UNC6201 and UNC5221, which has been used synonymously with the actor publicly reported as Silk Typhoon, although GTIG does not currently consider the two clusters to be the same.

This report builds on previous GTIG research into BRICKSTORM espionage activity, providing a technical deep dive into the exploitation of CVE-2026-22769 and the functionality of the GRIMBOLT malware. Mandiant identified a campaign featuring the replacement of older BRICKSTORM binaries with GRIMBOLT in September 2025. GRIMBOLT represents a shift in tradecraft; this newly identified malware, written in C# and compiled using native ahead-of-time (AOT) compilation, is designed to complicate static analysis and enhance performance on resource-constrained appliances.

Beyond the Dell appliance exploitation, Mandiant observed the actor employing novel tactics to pivot into VMware virtual infrastructure, including the creation of "Ghost NICs" for stealthy network pivoting and the use of iptables for Single Packet Authorization (SPA).

Dell has released remediations for CVE-2026-22769, and customers are urged to follow the guidance in the official Security Advisory. This post provides actionable hardening guidance, detection opportunities, and a technical analysis of the UNC6201 tactics, techniques, and procedures (TTPs).

GRIMBOLT

During analysis of compromised Dell RecoverPoint for Virtual Machines appliances, Mandiant discovered the presence of BRICKSTORM binaries and the subsequent replacement of these binaries with GRIMBOLT in September 2025. GRIMBOLT is a C#-written foothold backdoor compiled using native ahead-of-time (AOT) compilation and packed with UPX. It provides a remote shell capability and uses the same command and control as the previously deployed BRICKSTORM payload. It's unclear whether the replacement of BRICKSTORM with GRIMBOLT was part of a pre-planned life cycle iteration or a reaction to incident response efforts led by Mandiant and other industry partners.

Unlike traditional .NET software that uses just-in-time (JIT) compilation at runtime, Native AOT-compiled binaries, introduced to .NET in 2022, are converted directly to machine-native code during compilation. This approach enhances the software's performance on resource-constrained appliances, ensures required libraries are already present in the file, and complicates static analysis by removing the common intermediate language (CIL) metadata typically associated with C# samples.

UNC6201 established BRICKSTORM and GRIMBOLT persistence on the Dell RecoverPoint for Virtual Machines by modifying a legitimate shell script named convert_hosts.sh to include the path to the backdoor. This shell script is executed by the appliance at boot time via rc.local.

CVE-2026-22769

Mandiant discovered CVE-2026-22769 while investigating multiple Dell RecoverPoint for Virtual Machines appliances within a victim's environment that had active C2 associated with the BRICKSTORM and GRIMBOLT backdoors. During analysis of the appliances, analysts identified multiple web requests made to an appliance prior to compromise using the username admin. These requests were directed to the installed Apache Tomcat Manager, which is used to deploy various components of the Dell RecoverPoint software, and resulted in the deployment of a malicious WAR file containing a SLAYSTYLE web shell.

After analyzing various configuration files belonging to Tomcat Manager, we identified a set of hard-coded default credentials for the admin user in /home/kos/tomcat9/tomcat-users.xml. Using these credentials, a threat actor could authenticate to the Dell RecoverPoint Tomcat Manager, upload a malicious WAR file using the /manager/text/deploy endpoint, and then execute commands as root on the appliance.

The earliest identified exploitation activity of this vulnerability occurred in mid-2024.

Newly Observed VMware Activity

During the course of the recent investigations, Mandiant observed continued compromise of VMware virtual infrastructure by the threat actor as previously reported by Mandiant, CrowdStrike, and CISA. Additionally, several new TTPs were discovered that haven’t been previously reported on.

Ghost NICs

Mandiant discovered the threat actor creating new temporary network ports on existing virtual machines running on an ESXi server. Using these network ports, the threat actor then pivoted to various internal and software-as-a-service (SaaS) infrastructures used by the affected organizations.

iptables proxying

While analyzing compromised vCenter appliances, Mandiant recovered several commands from the Systemd Journal executed by the threat actor using a deployed SLAYSTYLE web shell. These iptables commands were used for Single Packet Authorization and consisted of:

  • Monitoring incoming traffic on port 443 for a specific HEX string

  • Adding the source IP of that traffic to a list and if the IP is on the list and connects to port 10443, the connection is ACCEPTED

  • Once the initial approved traffic comes in to port 10443, any subsequent traffic is automatically redirected

  • For the next 300 seconds (five minutes), any traffic to port 443 is silently redirected to port 10443 if the IP is on the approved list

iptables -I INPUT -i eth0 -p tcp --dport 443 -m string --hex-string <HEX_STRING>
iptables -A port_filter -i eth0 -p tcp --dport 10443 --syn -m recent --rcheck --name ipt -j ACCEPT
iptables -t nat -N IPT
iptables -t nat -A IPT -p tcp -j REDIRECT --to-ports 10443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 --syn -m recent --rcheck --name ipt --seconds 300 -j IPT

Remediation

The following investigative guide can assist defenders in analyzing Dell RecoverPoint for Virtual Machines appliances.

Forensic Analysis of Dell RecoverPoint Disk Image

The following artifacts are high-value sources of evidence for incident responders conducting full disk image analysis of Dell RecoverPoint for Virtual Machines.

  • Web logs for Tomcat Manager are stored in /home/kos/auditlog/fapi_cl_audit_log.log. Check the log file for any instances of requests to /manager; any such requests should be considered suspicious (a log-hunting sketch follows this list)

    • Any requests for PUT /manager/text/deploy?path=/<MAL_PATH>&update=true are potentially malicious. MAL_PATH will be the path where a potentially malicious WAR file was uploaded

  • Uploaded WAR files are typically stored in /var/lib/tomcat9

  • Compiled artifacts for uploaded WAR files are located in /var/cache/tomcat9/Catalina

  • Tomcat application logs located in /var/log/tomcat9/

    • Catalina - investigate any org.apache.catalina.startup.HostConfig.deployWAR events

    • Localhost - Contains additional events associated with WAR deployment and any exceptions generated by malicious WAR and embedded files 

  • Persistence for BRICKSTORM and GRIMBOLT backdoors on Dell RecoverPoint for Virtual Machines was established by modifying /home/kos/kbox/src/installation/distribution/convert_hosts.sh to include the path to the backdoor
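
As a starting point for the log review described in the first item of this list, the Python sketch below flags /manager requests, and WAR deployments in particular, in an exported copy of the Tomcat Manager audit log. The log format varies, so treat the matching patterns as assumptions and adapt them to what you actually see in the file.

import re
import sys

MANAGER = re.compile(r"/manager\b")
DEPLOY = re.compile(r"PUT\s+/manager/text/deploy\?path=([^&\s]+)", re.I)

def hunt(log_path):
    """Print suspicious Tomcat Manager access found in an exported audit log."""
    with open(log_path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            deploy = DEPLOY.search(line)
            if deploy:
                print(f"{lineno}: WAR deployment to {deploy.group(1)} -> {line.strip()}")
            elif MANAGER.search(line):
                print(f"{lineno}: /manager access -> {line.strip()}")

if __name__ == "__main__":
    # e.g. python hunt_tomcat_manager.py fapi_cl_audit_log.log
    hunt(sys.argv[1])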

Indicators of Compromise (IOCs)

To assist the wider community in hunting and identifying activity outlined in this blog post, we have included IOCs in a free GTI Collection for registered users.

File Indicators

Family | File Name | SHA256
GRIMBOLT | support | 24a11a26a2586f4fba7bfe89df2e21a0809ad85069e442da98c37c4add369a0c
GRIMBOLT | out_elf_2 | dfb37247d12351ef9708cb6631ce2d7017897503657c6b882a711c0da8a9a591
SLAYSTYLE | default_jsp.java | 92fb4ad6dee9362d0596fda7bbcfe1ba353f812ea801d1870e37bfc6376e624a
BRICKSTORM | N/A | aa688682d44f0c6b0ed7f30b981a609100107f2d414a3a6e5808671b112d1878
BRICKSTORM | splisten | 2388ed7aee0b6b392778e8f9e98871c06499f476c9e7eae6ca0916f827fe65df
BRICKSTORM | N/A | 320a0b5d4900697e125cebb5ff03dee7368f8f087db1c1570b0b62f5a986d759
BRICKSTORM | N/A | 90b760ed1d0dcb3ef0f2b6d6195c9d852bcb65eca293578982a8c4b64f51b035
BRICKSTORM | N/A | 45313a6745803a7f57ff35f5397fdf117eaec008a76417e6e2ac8a6280f7d830

Network Indicators

Family | Indicator | Type
GRIMBOLT | wss://149.248.11.71/rest/apisession | C2 Endpoint
GRIMBOLT | 149.248.11.71 | C2 IP

YARA Rules

G_APT_BackdoorToehold_GRIMBOLT_1
rule G_APT_BackdoorToehold_GRIMBOLT_1
{
  meta:
    author = "Google Threat Intelligence Group (GTIG)"
  strings:
    $s1 = { 40 00 00 00 41 18 00 00 00 4B 21 20 C2 2C 08 23 02 }
    $s2 = { B3 C3 BB 41 0D ?? ?? ?? 00 81 02 0C ?? ?? ?? 00 }
    $s3 = { 39 08 01 49 30 A0 52 30 00 00 00 DB 40 09 00 02 00 80 65 BC 98 }
    $s4 = { 2F 00 72 00 6F 00 75 00 74 00 65 79 23 E8 03 0E 00 00 00 2F 00 70 00 72 00 6F 00 63 00 2F 00 73 00 65 00 6C 00 66 00 2F 00 65 00 78 00 65 }
  condition:
    (uint32(0) == 0x464c457f) //linux
    and all of ($s*)
}
G_Hunting_BackdoorToehold_GRIMBOLT_1
rule G_Hunting_BackdoorToehold_GRIMBOLT_1
{
    meta:
        author = "Google Threat Intelligence Group (GTIG)"

    strings:
        $s1 = "[!] Error : Plexor is nul" ascii wide
        $s2 = "port must within 0~6553" ascii wide
        $s3 = "[*] Disposing.." ascii wide
        $s4 = "[!] Connection error. Kill Pty" ascii wide
        $s5 = "[!] Unkown message type" ascii wide
        $s6 = "[!] Bad dat" ascii wide
    condition:
        (  
            (uint16(0) == 0x5a4d and uint32(uint32(0x3C)) == 0x00004550) or
            uint32(0) == 0x464c457f or
            uint32(0) == 0xfeedface or
            uint32(0) == 0xcefaedfe or
            uint32(0) == 0xfeedfacf or
            uint32(0) == 0xcffaedfe or
            uint32(0) == 0xcafebabe or
            uint32(0) == 0xbebafeca or
            uint32(0) == 0xcafebabf or
            uint32(0) == 0xbfbafeca
        ) and any of them
}
G_APT_BackdoorWebshell_SLAYSTYLE_4
rule G_APT_BackdoorWebshell_SLAYSTYLE_4
{
        meta:
                author = "Google Threat Intelligence Group (GTIG)"
        strings:
                $str1 = "<%@page import=\"java.io" ascii wide
                $str2 = "Base64.getDecoder().decode(c.substring(1)" ascii wide
                $str3 = "{\"/bin/sh\",\"-c\"" ascii wide
                $str4 = "Runtime.getRuntime().exec(" ascii wide
                $str5 = "ByteArrayOutputStream();" ascii wide
                $str6 = ".printStackTrace(" ascii wide
        condition:
                $str1 at 0 and all of them
}

Google Security Operations (SecOps)

Google Security Operations (SecOps) customers have access to these broad category rules and more under the “Mandiant Frontline Threats” and “Mandiant Hunting Rules” rule packs. The activity discussed in the blog post is detected in Google SecOps under the rule names:

  • Web Archive File Write To Tomcat Directory

  • Remote Application Deployment via Tomcat Manager

  • Suspicious File Write To Tomcat Cache Directory

  • Kbox Distribution Script Modification

  • Multiple DNS-over-HTTPS Services Queried

  • Unknown Endpoint Generating DNS-over-HTTPS and Web Application Development Services Communication

  • Unknown Endpoint Generating Google DNS-over-HTTPS and Cloudflare Hosted IP Communication

  • Unknown Endpoint Generating Google DNS-over-HTTPS and Amazon Hosted IP Communication

Acknowledgements

We appreciate Dell for their collaboration against this threat. This analysis would not have been possible without the assistance from across Google Threat Intelligence Group, Mandiant Consulting and FLARE. We would like to specifically thank Jakub Jozwiak and Allan Sepillo from GTIG Research and Discovery (RAD).



from Threat Intelligence https://ift.tt/r1EL0Zc
via IFTTT

Linux Lite 7.8: The Lightweight, User-Friendly Linux Distro That Puts New Life into Old Hardware

I’ve been knee-deep in virtualization tech for over a decade now, from VMware’s latest releases to backup solutions like Veeam and Nakivo. But every once in a while, I like to step back and explore something a bit different – like lightweight operating systems that can run efficiently in virtual machines or on an old laptop. Nothing makes me angrier than seeing old laptops decommissioned simply because they aren’t compatible with Windows 11.

That’s where Linux Lite comes in. I’ve covered tools like Xormon for monitoring IT infrastructure, and Proxmox as a VMware alternative, but Linux Lite caught my eye as a simple, fast OS that’s perfect for testing environments or even as a daily driver for non-demanding tasks. It is a rock-solid and very fast distro, optimized to get the most out of older hardware. Compared to Deepin, which I covered in my last article, it is much faster; Deepin needs somewhat beefier hardware to feel comfortable. Linux Lite is really for older, unused laptops and desktops or a virtual machine, and the speed of execution is fabulous.

In this blog post, I’ll dive deep into Linux Lite 7.8, the latest version as of early 2026. We’ll cover its history, key features, system requirements, installation process, pros and cons, and how it stacks up against other popular distros. If you’re a Windows refugee looking for an alternative, or a virtualization admin wanting a lean guest OS, this might just be the hidden gem you’ve been searching for. Let’s break it down.

What is Linux Lite?

Linux Lite is a free, open-source Linux distribution designed with simplicity and performance in mind. It’s built on top of Ubuntu’s Long-Term Support (LTS) releases, which means it inherits the stability and vast software repository of one of the most popular Linux distros out there. But where Ubuntu can sometimes feel bloated, especially on older hardware, Linux Lite strips things down to essentials while keeping a user-friendly interface.

The project is led by Jerry Bezencon and a small team of developers, aiming to make Linux accessible to everyone – from beginners switching from Windows to experienced users who want a no-fuss setup.

It’s particularly liked by users for its ability to run smoothly on low-spec machines, making it ideal for reviving old PCs or running in resource-constrained virtual environments.

I first stumbled upon Linux Lite while testing lightweight distros for VMware VMs. Too many times, I’ve seen full-fat Ubuntu chug along in a virtual machine with limited RAM, but Linux Lite felt snappy right out of the box. It’s not trying to reinvent the wheel; it’s just making the wheel roll faster and smoother.

A Brief History of Linux Lite

Linux Lite has been around since 2012, starting as a fork of Ubuntu with the goal of creating a “lite” version that’s easy to use and light on resources. The first release was version 1.0, codenamed “Amethyst,” and it quickly gained a following among users frustrated with Windows’ bloat or the complexity of other Linux distros.

Over the years, it has stuck to Ubuntu’s LTS cycle, ensuring long-term support and security updates. By 2026, we’re at version 7.8, which is based on Ubuntu 24.04 LTS.

This latest release includes rewrites to many of Linux Lite’s custom utilities, improving stability and user experience. It’s a minor update from 7.6, focusing on bug fixes, LTS updates, code optimizations, and GUI tweaks, along with newer versions of apps like LibreOffice.

The distro has built a solid community, with active forums, a built-in help manual, and even a Discord server for support. Stats from the official site show over 18 million downloads, which speaks to its popularity among everyday users and educators.

Key Features of Linux Lite 7.8

What sets Linux Lite apart? It’s all about balance – powerful enough for daily tasks but lightweight enough not to overwhelm your hardware. Here’s a rundown of the standout features:

  • XFCE Desktop Environment: Linux Lite uses XFCE, a lightweight desktop that’s customizable and intuitive. It looks and feels a bit like Windows, with a start menu, taskbar, and familiar layout, making the transition easy for newcomers. It’s a bit less polished, but hey, you have to choose – acceptable speed on old hardware, or a more polished UI that runs slower.
  • Optimized for Speed: Everything is tuned for performance. Boot times are quick, and it runs efficiently on older CPUs and limited RAM. I’ve tested it on a 10-year-old laptop, and it was remarkably responsive compared to Windows 10 on the same machine.
  • Built-in Tools and Utilities: Linux Lite comes with custom apps like Lite Welcome (a setup wizard), Lite Updates (for easy patching), and Lite Tweaks (for system optimization). These make maintenance a breeze without diving into the terminal.
  • Software Selection: Pre-installed apps include Firefox (or Chrome if you prefer), LibreOffice for productivity, VLC for media, and Thunderbird for email. The Ubuntu repositories give you access to thousands more via the Synaptic Package Manager.

 

[Screenshot: Managing installation/uninstallation of software in Linux Lite]

 

Click Settings > Lite Software > Install Software and pick the software you want to install.

From there you can add or remove the software you want. Out of the box, LibreOffice is pre-installed, but it’s up to you to add OpenOffice or whatever other software you need.

[Screenshot: Picking other software to install is easy]

 

  • Security and Stability: Regular security updates from Ubuntu, plus built-in firewall (needs to be activated, as by default it is OFF!) and easy encryption options during install. It’s stable for long sessions, which is great for virtualization labs where you need reliable guest OSes.
  • Modern Look with Low Overhead: The default theme is clean and modern, but you can tweak it easily. It supports multiple monitors and high-res displays without hogging resources.

In my experience, the developer-friendly aspects shine through – essential tools for coding and system management are there, but without the clutter. For virtualization pros, it’s a great choice for nesting VMs or testing scripts in a controlled environment.

System Requirements: Keeping It Lite

One of Linux Lite’s biggest selling points is its modest hardware needs. According to the official site, the minimum specs are:

  • Processor: 1.5 GHz Dual-Core
  • RAM: 4 GB
  • Storage: 40 GB HDD/SSD/NVMe
  • Display: VGA, DVI, DP, or HDMI capable of 1366×768 resolution

Compare that to Ubuntu’s 4 GB minimum (but realistically more for smooth operation) or Windows 11’s 4 GB plus TPM requirements, and you see why it’s a favorite for older hardware. I’ve run it on machines with just 2 GB RAM in a pinch, though 4 GB is ideal for multitasking.

For virtualization, this means you can allocate minimal resources to a Linux Lite VM – say, 2 vCPUs and 2 GB RAM – and still get great performance. Perfect for homelabs or cloud instances where every core counts.

Installation Process: Step-by-Step Guide

Installing Linux Lite is straightforward, even for beginners. I recently set it up in a VMware Workstation VM to test, and it took under 20 minutes. Here’s a quick guide:

1. Download the ISO: Head to the official site and grab the latest 64-bit ISO (around 2 GB). Verify the checksum for security (a short checksum-verification sketch follows these steps).

2. Create Bootable Media: Use tools like Rufus or Etcher to make a USB drive, or mount the ISO in your hypervisor.

3. Boot and Start Install: Boot from the media. You’ll see a live session – try it out before committing.

[Screenshot: The live session starts first; you must install it to your hard drive]

4. Language and Setup: Choose your language, then hit “Install Linux Lite.” The installer is graphical and user-friendly.

5. Installation Type: Options include erasing the disk, installing alongside another OS, or manual partitioning. For dual-boot with Windows, select “alongside.”

[Screenshot: Pick the option that fits your needs]

6. User Details: Set up your username, password, and time zone. Enable auto-login if desired.

7. Complete and Reboot: The install copies files, then reboots into your new system. Post-install, run Lite Updates to grab the latest patches.

The whole process is intuitive, with clear warnings about data loss. If you’re in a virtual environment, enable EFI boot for modern features. Common pitfalls? Ensure your BIOS is set to boot from USB, and back up data first.
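
If you prefer to script the checksum verification from step 1, here is a short Python sketch; the expected value comes from the official download page, and the ISO file name is a placeholder.

import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "PASTE_THE_PUBLISHED_SHA256_HERE"    # from the official download page
actual = sha256_of("linux-lite-7.8-64bit.iso")  # placeholder file name
print("OK" if actual == expected.lower() else "MISMATCH: " + actual)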

Pros and Cons: Is It Right for You?

Like any OS, Linux Lite has its strengths and weaknesses.

Pros:

  • Extremely lightweight and fast, even on old hardware.
  • Beginner-friendly with Windows-like interface and helpful tools.
  • Free forever, with strong community support (forums have over 91 million views).
  • Great for education, development, or as a VM guest.
  • Regular updates without forcing major changes.

Cons:

  • Some users find it “too dumbed down” – limited advanced options out of the box.
  • Relies on Ubuntu’s ecosystem, so if you hate apt, this isn’t for you.
  • No official ARM support yet, limiting it to x86/64.
  • Custom utilities are great, but power users might prefer more configurable distros like Arch.

In my tests, the pros far outweigh the cons for its target audience. If you’re coming from Windows, it’s a soft landing.

[Screenshot: Linux Lite package management and updates]

Comparisons to Other Distros: How Does Linux Lite Stack Up?

  • Vs. Ubuntu: Ubuntu is more feature-rich but heavier. Linux Lite is Ubuntu “lite” – same base, less bloat. Ideal if Ubuntu feels sluggish.
  • Vs. Linux Mint: Mint is also user-friendly, with the Cinnamon desktop. Linux Lite is lighter on resources and better for very old PCs, but Mint has more polish.
  • Vs. Zorin OS Lite: Similar lightweight focus, but Zorin mimics Windows more closely. Linux Lite edges out in speed, Zorin in aesthetics.
  • Vs. Xubuntu: Both use XFCE, but Linux Lite adds custom tools and optimizations. Xubuntu is purer Ubuntu, Linux Lite more streamlined.

For virtualization, I’d pick Linux Lite over these for minimal footprint in VMs.

Community and Support

Linux Lite’s community is active and welcoming. The forums are a goldmine for troubleshooting, with sections for hardware, software, and general chat. There’s a built-in Help Manual that’s comprehensive, covering everything from installs to tweaks.

If you need real-time help, join the Discord server. With 10,500 social media followers, you’re never alone.

Final Thoughts

Linux Lite 7.8 isn’t flashy, but that’s its strength. In a world of bloated OSes, it’s a breath of fresh air for older hardware, beginners, and virtualization setups. Whether you’re ditching Windows, testing in a VM, or just want something simple, give it a spin. I’ve installed it on an old ThinkPad, and it’s transformed it into a productive machine. If you’ve tried Linux Lite, share your experiences in the comments.



from StarWind Blog https://ift.tt/N9z0avm
via IFTTT

Divestitures and carve-outs: Untangling spaghetti

In the world of corporate strategy, mergers and acquisitions (M&A) grab the big headlines. They are the high-stakes, high-visibility deals that capture the imagination. But there’s another side to that coin that can be more complex, technically difficult, and demanding for the IT organization: divestitures and carve-outs.

If you’re a CIO who has been involved in one of these, you know this pain. This isn’t just the simple reverse of an acquisition. It’s more like performing open-heart surgery on a living enterprise. The business is still operating at full speed, yet you are tasked with surgically separating systems, data, and processes, all while making sure neither the parent nor the child company flatlines.

Untangling that commingled technology, or “spaghetti architecture”, is easily one of the most demanding challenges IT leaders will face. The stakes are massive: a slip-up can halt operations, trigger a security breach, or leave the parent company drowning in stranded costs that wipe out the deal’s intended value.

Success here isn’t about brute force – it requires a shift in mindset. It’s about orchestrating a clean, secure, and highly efficient separation to ensure both companies emerge from the process leaner, stronger, and ready for their respective futures.

Divestiture dilemma: four traps to avoid

A divestiture is a minefield of operational and technical risks. The difficulty lies in navigating four simultaneous, interconnected challenges, where failure in one area can quickly cause a massive domino effect across the whole project.

1. The “spaghetti architecture” nightmare

Business units are rarely self-contained. Over years of growth, everything becomes deeply intertwined and co-dependent. We’re talking about shared ERPs, centralized data warehouses, and common security protocols that make simply “cutting and pasting” a business unit impossible. The foundational challenge is untangling this intricate web without breaking critical processes for either the parent or the new entity.

2. The data security tightrope walk

This is the monumental job of securely dividing vast and complex datasets. On Day 1, the new entity needs all its customer, financial, and operational data to function. At the same time, you must absolutely guarantee that none of the parent company’s sensitive intellectual property (IP) or proprietary information accidentally walks out the door with it. It’s a balancing act with zero margin for error.

3. The Transition Service Agreement tangle

To keep the lights on and ensure business continuity, most deals rely on Transition Service Agreements (TSAs), where the parent company continues to provide IT services for a set period. While frequently necessary, TSAs are a double-edged sword. They prolong dependencies, extend security risks, and severely limit the agility and flexibility of both organizations. For every CIO, a key goal must be to aggressively minimize the scope and duration of these agreements.

4. The stranded costs hangover

After the divested business unit is gone, the parent company finds itself holding onto over-provisioned “stuff” it no longer needs. What’s left behind? It’s frequently oversized data centers, excess software licenses, and staff scaled for a larger organization. These stranded costs become a direct and brutal drain on profitability, directly undermining the financial rationale of the whole divestiture.

A strategic scalpel for a clean separation

Fighting the chaos and complexity of a divestiture with tactical, piecemeal efforts is a recipe for delay and frustration. To really navigate this maze of risks, you need a strategic framework designed for speed, security, and efficiency. The only way to win is by implementing a unified platform strategy that acts as a strategic scalpel, enabling you to execute a clean, value-preserving separation.

A strategic platform allows the CIO to focus on three critical actions that turn a high-risk operation into a managed, strategic maneuver.

Accelerating Day 1 independence

One of the biggest threats to value in a divestiture is extended Transition Service Agreements. They create long-term dependencies and security risks. The fastest way to escape TSA lock-in is to empower the new entity to stand on its own two feet – and quickly.

Look for a desktop virtualization platform that acts as a powerful accelerator. Picture launching the spun-off company with a secure, scalable digital workspace available right from Day 1. By preparing to be “divestiture ready” and putting a strategic platform like Citrix in place ahead of any separation, the parent organization can supply all essential applications so the new entity hits the ground running. This dramatically shortens its dependence on the parent company and minimizes the need for restrictive TSAs. It’s about being born agile, unburdened by legacy technology, and able to compete in the market right away.

Enforcing a secure, surgical boundary

A clean break requires a clear, enforceable security boundary. What you need is a way to create a secure, isolated environment for the divested unit during the transition.

Consider capabilities that enforce granular access policies with Zero Trust Network Access (ZTNA). This means guaranteeing that the new entity’s users can only see and access the specific applications and data they are entitled to. This approach does two crucial things: it protects the parent company’s sensitive IP throughout the entire transition period, and it provides a clear, auditable trail of data access. It turns what could be a messy separation into a controlled, surgical procedure.

Aggressively eliminating stranded costs

For the parent company, the clock starts ticking on eliminating stranded costs the moment the deal closes. Look at functionality that provides the data and control to aggressively right-size your remaining operations.

This means getting hard data on real-world application usage. With those insights, you can terminate or renegotiate expensive enterprise software licenses and eliminate millions in licensing waste.

It also means smart infrastructure management. You should be able to dynamically shrink your cloud infrastructure to align with the new, smaller workforce, preventing waste on overprovisioned cloud instances. Additionally, look for solutions that help you repurpose existing hardware for new roles, avoiding unnecessary capital costs.

Finally, the IT teams trying to manage complex, separate environments can automate the creation, deployment, and management of distinct systems. This eliminates the need for admins to manually untangle or replicate configurations across environments, saving time and reducing errors.

A win-win outcome

A well-executed divestiture is truly a strategic win for both parties.

The parent company emerges as a leaner, more focused organization with a lower cost base, free to invest its energy and resources into its core business. Meanwhile, the new entity is born agile, able to compete from Day 1 without being hindered by legacy technology.

By approaching the challenge with a comprehensive, strategic platform, CIOs can transform a divestiture from a high-risk technical cleanup into a high-value strategic maneuver. It’s more than just cutting the cord – it’s about ensuring the cut is clean, precise, and sets both organizations up for sustained success.

If you’re ready to start building that strategic playbook for clean separation, download the full whitepaper, The CIO’s M&A Playbook: Accelerating value and de-risking integration and the companion e-book, How Citrix cuts months off M&A time to value.



from Citrix Blogs https://ift.tt/HUEJy8I
via IFTTT

The Multi-Model Database for AI Agents: Deploy SurrealDB with Docker Extension

When it comes to building dynamic, real-world solutions, developers often need to stitch multiple databases (relational, document, graph, vector, time-series, search) together and build complex API layers to integrate them. This generates significant complexity, cost, and operational risk, and it slows innovation. More often than not, developers end up writing glue code and managing infrastructure rather than building application logic. For AI use cases, spreading data across multiple databases leaves AI agents with fragmented data, context, and memory, producing poor outputs at high latency.

Enter SurrealDB.

SurrealDB is a multi-model database built in Rust that unifies document, graph, relational, time-series, geospatial, key-value, and vector data into a single engine. Its SQL-like query language, SurrealQL, lets you traverse graphs, perform vector search, and query structured data – all in one statement.

Designed for data-intensive workloads like AI agent memory, knowledge graphs, real-time applications, and edge deployments, SurrealDB runs as a single binary anywhere: embedded in your app, in the browser via WebAssembly, at the edge, or as a distributed cluster.
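
To give a sense of how little ceremony that involves, the following one-liner starts a throwaway, in-memory instance with Docker. It mirrors the persistent command used later in this article; the root/root credentials are just a local-testing placeholder.

# Start a disposable in-memory SurrealDB instance on port 8000
docker run --rm -p 8000:8000 surrealdb/surrealdb:latest \
  start --user root --pass root memory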

What problem does SurrealDB solve?

Modern AI systems place very different demands on data infrastructure than traditional applications. SurrealDB addresses these pressures directly:

  • Single runtime for multiple data models – AI systems frequently combine vector search, graph traversal, document storage, real-time state, and relational data in the same request path. SurrealDB supports these models natively in one engine, avoiding brittle cross-database APIs, ETL pipelines, and consistency gaps.
  • Low-latency access to changing context – Voice agents, interactive assistants, and stateful agents are sensitive to both latency and data freshness. SurrealDB’s query model and real-time features serve up-to-date context without polling or background sync jobs.
  • Reduced system complexity – Replacing multiple specialized databases with a single multi-model store reduces services, APIs, and failure modes. This simplifies deployment, debugging, and long-term maintenance.
  • Faster iteration on data-heavy features – Opt-in schema definitions and expressive queries let teams evolve data models alongside AI features without large migrations. This is particularly useful when experimenting with embeddings, relationships, or agent memory structures.
  • Built-in primitives for common AI patterns – Native support for vectors, graphs, and transactional consistency enables RAG, graph-augmented retrieval, recommendation pipelines, and agent state management – without external systems or custom glue code.

In this article, you’ll see how to build a WhatsApp RAG chatbot using the SurrealDB Docker Extension. You’ll learn how the extension powers an intelligent WhatsApp chatbot that turns your chat history into searchable, AI-enhanced conversations with vector embeddings and precise source citations.

Understanding SurrealDB Architecture

SurrealDB’s architecture unifies multiple data models within a single database engine, eliminating the need for separate systems and synchronization logic (figure below).

Caption: SurrealDB architecture diagram, showing a unified multi-model database with real-time capabilities (more information at https://surrealdb.com/docs/surrealdb/introduction/architecture)

With SurrealDB, you can (a short sketch follows this list):

  • Model complex relationships using graph traversal syntax (e.g., ->bought_together->product)
  • Store flexible documents alongside structured relational tables
  • Subscribe to real-time changes with LIVE SELECT queries that push updates instantly
  • Ensure data consistency with ACID-compliant transactions across all models
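
As a small, hedged illustration of that graph syntax, the sketch below posts a few SurrealQL statements to a locally running instance over the HTTP /sql endpoint. The product records are made up, and the NS/DB header names can vary between SurrealDB versions, so treat it as a sketch rather than a copy-paste recipe.

# Create two products, relate them, then traverse the relationship
curl -s -X POST http://localhost:8000/sql \
  -u root:root \
  -H "Accept: application/json" \
  -H "NS: demo" -H "DB: demo" \
  --data-binary "
    CREATE product:keyboard SET name = 'Keyboard';
    CREATE product:mouse SET name = 'Mouse';
    RELATE product:keyboard->bought_together->product:mouse;
    SELECT ->bought_together->product.name AS also_bought FROM product:keyboard;
  "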

Learn more about SurrealDB’s architecture and key features on the official documentation.

How does SurrealDB work?

Caption: SurrealDB diagram showing the query (compute) and storage layers

SurrealDB separates storage from compute, enabling you to scale these independently without the need to manually shard your data.

The query layer (otherwise known as the compute layer) handles queries from the client, analyzing which records need to be selected, created, updated, or deleted.

The storage layer handles the storage of the data for the query layer. By scaling storage nodes, you are able to increase the amount of supported data for each deployment.

SurrealDB supports everything from single-node setups to highly scalable, fault-tolerant deployments handling large amounts of data.

For more information, see https://surrealdb.com/docs/surrealdb/introduction/architecture

Why should you run SurrealDB as a Docker Extension?

For developers already using Docker Desktop, running SurrealDB as an extension eliminates friction. There’s no separate installation, no dependency management, no configuration files – just a single click from the Extensions Marketplace.

Docker provides the ideal environment to bundle and run SurrealDB in a lightweight, isolated container. This encapsulation ensures consistent behavior across macOS, Windows, and Linux, so what works on your laptop works identically in staging.

The Docker Desktop Extension includes:

  • Visual query editor with SurrealQL syntax highlighting
  • Real-time data explorer showing live updates as records change
  • Schema visualization for tables and relationships
  • Connection management to switch between local and remote instances
  • Built-in backup/restore for easy data export and import

With Docker Desktop as the only prerequisite, you can go from zero to a running SurrealDB instance in under a minute.

Getting Started

To begin, download and install Docker Desktop on your machine. Then follow these steps:

  1. Open Docker Desktop and select Extensions in the left sidebar
  2. Switch to the Browse tab
  3. In the Filters dropdown, select the Database category
  4. Find SurrealDB and click Install

Caption: Installing the SurrealDB Extension from Docker Desktop’s Extensions Marketplace.

Caption: The SurrealDB Docker Extension after installation, with its manager and help views.

Real-World Example

Smart Team Communication Assistant

Imagine searching through months of team WhatsApp conversations to answer the question: “What did we decide about the marketing campaign budget?”

Traditional keyword search fails, but RAG with SurrealDB and LangChain solves this by combining semantic vector search with relationship graphs.

This architecture analyzes group chats (WhatsApp, Instagram, Slack) by storing conversations as vector embeddings while simultaneously building a knowledge graph linking conversations through extracted keywords like “budget,” “marketing,” and “decision.” When queried, the system retrieves relevant context using both similarity matching and graph traversal, delivering accurate answers about past discussions, decisions, and action items even when phrased differently than the original conversation.
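
To make that concrete, here is roughly what the retrieval side of such a query can look like once the demo container from the steps below is running. The table and field names (message, content, embedding) are illustrative stand-ins for whatever schema the loader script actually creates, the three-element vector stands in for a real embedding, and the NS/DB headers may differ by SurrealDB version.

# Rank stored messages by cosine similarity to a query embedding (sketch only)
curl -s -X POST http://localhost:8002/sql \
  -u root:root \
  -H "Accept: application/json" \
  -H "NS: whatsapp" -H "DB: chats" \
  --data-binary '
    LET $q = [0.12, 0.08, 0.33];
    SELECT content, vector::similarity::cosine(embedding, $q) AS score
      FROM message
      ORDER BY score DESC
      LIMIT 5;
  '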

This project is inspired by the Multi-model RAG with LangChain example on GitHub.

1. Clone the repository:

git clone https://github.com/Raveendiran-RR/surrealdb-rag-demo 

2. Enable Docker Model Runner by visiting Docker Desktop > Settings > AI

Caption: Enable Docker Model Runner in Docker Desktop > Settings > AI

3. Pull the llama3.2 model from Docker Hub

Search for llama 3.2 under Models > Docker Hub and pull the right model.

Caption: Pull the llama3.2 model from Docker Hub

4. Download the embeddinggemma model from Docker Hub (a CLI alternative for both models is sketched after this step)

Caption: Click Models > search for embeddinggemma > download the model
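
If you prefer the command line, recent Docker Desktop releases also expose Model Runner through a docker model subcommand. The model tags below are assumptions based on Docker Hub’s ai namespace, so confirm the exact names there before pulling.

# Pull the chat and embedding models, then list what's available locally
docker model pull ai/llama3.2
docker model pull ai/embeddinggemma
docker model list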

5. Prepare a data directory for the persistent SurrealDB container:

  • Browse to the directory where you cloned the repository
  • Create a directory named “mydata”:

mkdir -p mydata

6. Run this command to start the persistent SurrealDB container (it maps host port 8002 to the container’s port 8000 and stores data in ./mydata using the RocksDB backend):

docker run -d --name demo_data \
  -p 8002:8000 \
  -v "$(pwd)/mydata:/mydata" \
  surrealdb/surrealdb:latest \
  start --log debug --user root --pass root \
  rocksdb://mydata

Note: adjust the RocksDB path to your operating system.

  • For Windows, use rocksdb://mydata
  • For Linux and macOS, use rocksdb:/mydata

7. Open the SurrealDB Docker Extension and connect to the running instance (a quick reachability check is sketched after the connection details).

Caption: Connecting to SurrealDB through the Docker Desktop Extension

  • Connection name: RAGBot
  • Remote address: http://localhost:8002
  • Username: root | password: root
  • Click on Create Connection
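
Before creating the connection, you can optionally confirm the container is reachable from the host. The /health endpoint shown here exists in current SurrealDB releases, but treat the exact path as an assumption to verify for your version.

# Should return an HTTP 200 once the container is up
curl -i http://localhost:8002/health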

8. Run the setup instructions from the repository.

9. Upload the WhatsApp chat export:

python3 load_whatsapp.py

Caption: Create the connection to the SurrealDB Docker container

10. Start chatting with the RAG bot and have fun:

python3 rag_chat_ui.py

11. Verify the ingested data in SurrealDB:

  • Ensure that you connect to the right namespace (whatsapp) and database (chats)

Caption: Connect to the “whatsapp” namespace and “chats” database

Caption: Data stored as vectors in SurrealDB

Caption: Interact with the RAG bot UI, which gives you the answer along with the exact source reference

Using this chatbot, you can now ask questions about the chat.txt file that was ingested. You can also verify the information in the query editor shown below by running custom queries to validate the chatbot’s answers. To ingest new messages, run load_whatsapp.py again, making sure the message format matches the sample whatsChatExport.txt file.
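
If you’d rather script that check than use the query editor, the same verification can be done against the HTTP /sql endpoint. INFO FOR DB lists the tables and definitions the loader created; as before, the NS/DB header names are version-dependent, so this is a sketch.

# List the tables defined in the whatsapp/chats database
curl -s -X POST http://localhost:8002/sql \
  -u root:root \
  -H "Accept: application/json" \
  -H "NS: whatsapp" -H "DB: chats" \
  --data-binary "INFO FOR DB;"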

Learn more about SurrealQL in the official documentation.

Caption: SurrealDB query editor in the Docker Desktop Extension

Conclusion

The SurrealDB Docker Extension offers an accessible and powerful solution for developers building data-intensive applications – especially those working with AI agents, knowledge graphs, and real-time systems. Its multi-model architecture eliminates the need to stitch together separate databases, letting you store documents, traverse graphs, query vectors, and subscribe to live updates from a single engine.

With Docker Desktop integration, getting started takes seconds rather than hours. No configuration files, no dependency management – just install the extension and start building. The visual query editor and real-time data explorer make it easy to prototype schemas, test queries, and inspect data as it changes.

Whether you’re building agent memory systems, real-time recommendation engines, or simply looking to consolidate a sprawling database stack, SurrealDB’s Docker Extension provides an intuitive path forward. Install it today and see how a unified data layer can simplify your architecture.

If you have questions or want to connect with other SurrealDB users, join the SurrealDB community on Discord.




from Docker https://ift.tt/b4Hl2hV
via IFTTT