Thursday, April 30, 2026

New Linux 'Copy Fail' Vulnerability Enables Root Access on Major Distributions

Cybersecurity researchers have disclosed details of a Linux local privilege escalation (LPE) flaw that could allow an unprivileged local user to obtain root.

The high-severity vulnerability tracked as CVE-2026-31431 (CVSS score: 7.8) has been codenamed Copy Fail by Xint.io and Theori.

"An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root," the vulnerability research team at Xint.io and Theori said.

At its core, the vulnerability stems from a logic flaw in the Linux kernel's cryptographic subsystem, specifically within the algif_aead module. The issue was introduced in a source code commit made in August 2017.

Successful exploitation of the shortcoming could allow a simple 732-byte Python script to edit a setuid binary and obtain root on essentially all Linux distributions shipped since 2017, including Amazon Linux, RHEL, SUSE, and Ubuntu. The Python exploit involves four steps -

  • Open an AF_ALG socket and bind to authencesn(hmac(sha256),cbc(aes))
  • Construct the shellcode payload
  • Trigger the write operation to the kernel's cached copy of "/usr/bin/su"
  • Call execve("/usr/bin/su") to load the injected shellcode and run it as root
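
For context, the first of those steps uses the kernel's user-space crypto interface (AF_ALG), which Python (3.6+) exposes directly on Linux. The snippet below is a minimal sketch of that benign first step only: it binds a socket to the algorithm named in the advisory and does nothing else; it does not reproduce the payload, the page-cache write, or any other part of the exploit.

import socket

# Step 1 only: open an AF_ALG socket and bind it to the AEAD algorithm named in
# the advisory. This is ordinary, documented kernel crypto API usage and does not
# by itself trigger the vulnerability.
try:
    alg_sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0)
    alg_sock.bind(("aead", "authencesn(hmac(sha256),cbc(aes))"))
    print("AF_ALG socket bound to authencesn(hmac(sha256),cbc(aes))")
    alg_sock.close()
except (AttributeError, OSError) as exc:
    # AF_ALG is Linux-only and the algorithm must be available in the running kernel.
    print(f"AF_ALG bind not available here: {exc}")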

While the vulnerability is not remotely exploitable in isolation, a local unprivileged user can get root simply by corrupting the page cache of a setuid binary. The same primitive also has cross-container impacts as the page cache is shared across all processes on a system.

In response to the disclosure, Linux distributions have released their own advisories.

Copy Fail has its echoes in Dirty Pipe (CVE-2022-0847), another Linux kernel LPE vulnerability that could permit unprivileged users to splice data into the page cache of read-only files and ultimately overwrite sensitive files on the system to achieve code execution.

"Copy Fail is the same class of primitive, in a different subsystem," Bugcrowd's David Brumley said. "The 2017 in-place optimization in algif_aead allows a page-cache page to end up in the kernel’s writable destination scatterlist for an AEAD operation submitted over an AF_ALG socket. An unprivileged process can then drive splice() into that socket and complete a small, targeted write into the page cache of a file it doesn't own."

What makes the vulnerability dangerous is that it can be reliably triggered and does not require any race condition or kernel offset. On top of that, the same exploit works across distributions.

"This vulnerability is unique because it has four properties that almost never appear together: it's portable, tiny, stealthy, and cross-container," a Xint.io spokesperson told The Hacker News in a statement. "It allows any user account, no matter how low-level, to increase their privilege to full admin access. It also allows them to bypass sandboxing and works across all Linux versions and distributions."



from The Hacker News https://ift.tt/JbYioNf
via IFTTT

Wednesday, April 29, 2026

New Wave of DPRK Attacks Uses AI-Inserted npm Malware, Fake Firms, and RATs

Cybersecurity researchers have discovered malicious code in an npm package that was added as a dependency to a project in a commit co-authored by Anthropic's Claude Opus large language model (LLM).

The package in question is "@validate-sdk/v2," which is listed on npm as a utility software development kit (SDK) for hashing, validation, encoding/decoding, and secure random generation. However, its real functionality is to plunder sensitive secrets from the compromised environment. The package, which shows signs of being vibe-coded using generative artificial intelligence (AI), was first uploaded to the repository in October 2025.

The malware campaign has been codenamed PromptMink by ReversingLabs, which linked the activity to a broader campaign mounted by the North Korean threat actor known as Famous Chollima (aka Shifty Corsair), the group behind the long-running Contagious Interview campaign and the fraudulent IT Worker scam.

"The new malware campaign [...] involves a tainted package that was introduced in a Feb. 28 commit to an autonomous trading agent," ReversingLabs researcher Vladimir Pezo said in a report shared with The Hacker News. "The commit was co-authored by Anthropic's Claude Opus large language model (LLM). It allows attackers to access users' crypto wallets and funds."

The package is listed as a dependency for another npm package named "@solana-launchpad/sdk," which, in turn, is used by a third package called "openpaw-graveyard," which is described as an "autonomous AI agent" that creates a social on-chain identity on the Solana blockchain using the Tapestry Protocol, trades cryptocurrency via Bankr, and interacts with other agents on Moltbook.

ReversingLabs said the AI agent-generated packages were added as a dependency in a commit made in February 2026, causing the agent package to execute malicious code and give attackers access via leaked credentials to the victim's cryptocurrency wallets and funds.

The attack adopts a phased approach, where the first-layer packages do not contain any malicious code, but import second-layer packages that actually embed the nefarious functionality. Should the second cluster be detected or removed from npm, they are swiftly replaced.

Some of the first-layer packages identified are listed below -

  • @solana-launchpad/sdk
  • @meme-sdk/trade
  • @validate-ethereum-address/core
  • @solmasterv3/solana-metadata-sdk
  • @pumpfun-ipfs/sdk
  • @solana-ipfs/sdk

"They implement some functionality related to cryptocurrencies," ReversingLabs explained. "And each package lists many dependencies, most of which are popular npm packages with download counts in the millions and billions, like axios, bn.js etc. However, a small number of the dependencies are malicious packages from the second layer."

The threat actors employ various techniques to help the rogue packages escape detection. These include creating malicious versions of functions already present in the listed popular packages. Another technique uses typosquatting, where the names and descriptions mimic legitimate libraries.

The first package version published to npm as part of this campaign dates back to September 2025, when "@hash-validator/v2" was uploaded to the registry. The decision to split the cryptocurrency stealer into two parts – a benign bait that downloads the actual malware – may have helped it evade detection and conceal the true scale of the attack.

It's worth noting that some aspects of the activity were documented by JFrog two months later, highlighting the threat actor's use of transitive dependencies to execute malicious code on developer systems and siphon valuable data.

In the intervening months, the campaign has undergone various transformations, even targeting the Python Package Index (PyPI) by pushing a malicious package ("scraper-npm") with the same functionality in February 2026. As recently as last month, threat actors have been observed establishing persistent remote access via SSH and using Rust-compiled payloads to exfiltrate entire projects containing source code and other intellectual property from compromised systems.

Early versions of the malware were obfuscated JavaScript-based stealers that scan the current working directory recursively for .env or .json files and stage them for exfiltration to a Vercel URL ("ipfs-url-validator.vercel.app"), a platform repeatedly abused by Famous Chollima in its campaigns.

Subsequent iterations came embedded with PromptMink in the form of a Node.js single executable application (SEA), but this approach had a notable disadvantage: it caused the payload size to grow from a mere 5.1KB to around 85MB. This is said to have pushed the threat actors to shift to NAPI-RS to create pre-compiled Node.js add-ons in Rust.

The evolution of the malware from a simple infostealer into a specialized multi-platform harvester for Windows, Linux, and macOS, capable of dropping SSH backdoors and gathering entire projects, demonstrates North Korean threat actors' continued abuse of the open-source ecosystem to target developers in the Web3 space.

Famous Chollima is "leveraging AI-generated code and a layered package strategy to evade detection and more effectively deceive automated coding assistants than human developers," ReversingLabs added.

Contagious Trader Emerges

The findings coincide with the discovery of a malicious npm package named "express-session-js" that's believed to be linked to the Contagious Interview campaign, with the library acting as a conduit for a dropper that fetches a second-stage obfuscated payload from JSON Keeper, a paste service.

"Static deobfuscation of the stage-2 payload reveals a full Remote Access Trojan (RAT) and information stealer that connects to 216[.]126[.]237[.]71 via Socket.IO, with capabilities including browser credential theft, crypto wallet extraction, screenshot capture, clipboard monitoring, keylogging, and remote mouse/keyboard control," SafeDep noted this month.

Interestingly, the use of legitimate packages like "socket.io-client" for command-and-control (C2) communication, "screenshot-desktop" for screen capture, "sharp" for image compression, and "clipboardy" for clipboard access overlaps with that of OtterCookie, a known stealer malware attributed to the campaign.

What's novel this time around is the addition of the "@nut-tree-fork/nut-js" package for mouse and keyboard control, suggesting broader attempts to upgrade the RAT capabilities to facilitate interactive control of infected hosts.

OtterCookie deployment chain

OtterCookie, for its part, has witnessed a maturation of its own, getting distributed via a trojanized open-source 3D chess project hosted on Bitbucket and malicious npm packages like "gemini-ai-checker," "express-flowlimit," and "chai-extensions-extras."

A third method has employed a Matryoshka Doll approach as part of a campaign dubbed Contagious Trader. The attack begins with the download of a benign wrapper package (e.g., "bjs-biginteger"), which then proceeds to download a malicious dependency (e.g., "bjs-lint-builder") and ultimately install the stealer.

Overlaps between Contagious Interview, Contagious Trader, and graphalgo

"The recent campaigns orchestrated by Shifty Corsair demonstrate the escalating threat of DPRK state-aligned cyber operations," BlueVoyant researcher Curt Buchanan said. "Their rapid evolution, from static Obfuscator.io encoding to dynamically rotating custom obfuscation, and their abuse of Vercel-hosted C2 infrastructure, demonstrates a maturation in their operational capabilities."

Graphalgo Uses Fake Companies to Drop RAT

The development is significant as the threat actor has been simultaneously linked to another ongoing campaign dubbed graphalgo that lures developers using fake companies and leverages fake job interviews and coding tests to deliver malicious npm packages to their systems.

The campaign plays out like this: the hackers employ social engineering ploys on job-seeking platforms and social networks to trick prospective targets into downloading GitHub-hosted projects as part of an assessment. These projects, in turn, contain a dependency to a malicious package published on npm or PyPI, whose main goal is to deploy a remote access trojan (RAT) on the machine.

To pull off the attack, the operators set up a network of fake companies, complete with convincing profiles on platforms like GitHub, LinkedIn, and X, to give them a veneer of legitimacy and make the deception more convincing. In the case of Blockmerce, the attackers even went to the extent of registering a limited liability company (LLC) in the U.S. state of Florida under the same name in August 2025. The names of some of the companies used as phishing fronts are as follows -

  • Veltrix Capital
  • Blockmerce
  • Bridgers Finance

"These organizations link to several GitHub organizations related to blockchain companies that have been active on GitHub since June 2025," ReversingLabs security researcher Karlo Zanki said. "Their purpose is to provide trustworthiness to fake job offerings and to host fake job interview tasks."

Recent versions of the campaign have also been spotted using a different technique for hosting the malicious dependencies. Instead of publishing them to npm or PyPI, they are hosted as a release artifact in GitHub repositories, likely in an effort to minimize the risk of detection.
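
As a hypothetical illustration of how such a reference can look, the package-lock.json fragment below points a single transitive dependency at a GitHub release artifact while everything else resolves from the official registry; the package name, version, organization, and URL are placeholders, not indicators from the campaign.

"node_modules/example-transitive-dep": {
  "version": "1.0.3",
  "resolved": "https://github.com/example-org/example-repo/releases/download/v1.0.3/example-transitive-dep-1.0.3.tgz",
  "integrity": "sha512-..."
},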

"The reference to the malicious dependency is buried deep inside the list of the transitive dependencies. The resolved field in the package-lock.json file instructs the package manager where to fetch specific package dependencies from," ReversingLabs noted. "While all other dependencies are fetched from the official npm registry, the malicious one is fetched directly from a release artifact located in a crafted GitHub repository."

The list of npm packages is below -

  • graph-dynamic
  • graphbase-js
  • graphlib-js

The attack culminates with the deployment of a RAT that can gather system information, enumerate files and directories, list running processes, create folders, rename files, delete files, and upload/download files.

In recent weeks, a North Korean state-sponsored threat cluster tracked as UNC1069 has also been linked to the compromise of "axios," one of the most popular npm packages, highlighting the continued threat faced by open-source repositories from Pyongyang.

Since then, the attackers behind the breach have published a new npm package called "csec-crypto-utils" containing an "updated payload" that swaps the RAT dropper for a data stealer that exfiltrates AWS keys, GitHub tokens, and .npmrc configuration files to an external server ("csec-c2-server.onrender[.]com").

In its report detailing the supply chain compromise, Hunt.io tied the attack to a Lazarus Group sub-cluster known as BlueNoroff, citing infrastructure overlaps and the RAT's similarities with NukeSped.

"The threat actors' use of advanced techniques and tactics, as well as an astonishing level of campaign preparation (setting up a Florida LLC) and their ability to adapt, makes North Korean threat actors a top threat to organizations or individual developers focused on cryptocurrency," ReversingLabs said.



from The Hacker News https://ift.tt/pc6MeEk
via IFTTT

Webinar: How to Automate Exposure Validation to Match the Speed of AI Attacks

In February 2026, researchers uncovered a shift that completely changed the game: threat actors are now using custom AI setups to automate attacks directly into the kill chain.

We aren't just talking about AI writing better phishing emails anymore. We’re talking about autonomous agents mapping Active Directory and seizing Domain Admin credentials in minutes.

The problem? Most defensive workflows still look like this: your CTI team finds a threat, they pass it to the Red Team to test, and eventually, the results reach the Blue Team for patching. This process is full of friction, silos, and delays.

The reality is simple: You cannot fight an AI adversary moving at machine speed when your defense moves at the speed of a calendar invite.

To bridge this gap, we’re hosting a technical deep dive with the team at Picus Security to unveil a new defensive paradigm: Autonomous Exposure Validation.

Register for the Webinar Here ➜

Leading this session are Kevin Cole (VP of Product Marketing) and Gursel Arici (Sr. Director of Solution Architecture) from Picus Security. Together, they bring a unique blend of strategic threat intelligence and deep technical engineering to show you how to flip the script.

Here is exactly what you will walk away with:

  • The Speed Asymmetry: A behind-the-scenes look at the real-world mechanics of how autonomous, AI-driven attacks actually operate.
  • The Agent Architecture: How to safely automate threat intel ingestion, simulate attacks, and coordinate fixes—without breaking your network.
  • Breaking the Silos: How to eliminate the slow hand-offs between your CTI, Red, and Blue teams so they work as a single unit.
  • The "Team Multiplier" Effect: How lean security teams can achieve enterprise-level protection without doubling their headcount.

The attackers have already upgraded their toolkits. It’s time for us to do the same. If you work in cybersecurity, you cannot afford to miss this shift.

📅 Save Your Spot Today: Register for the Webinar Here

(P.S. Even if you can't make it live, register anyway! We'll send you the full recording so you don't miss out on these insights.)




from The Hacker News https://ift.tt/iEjTcy6
via IFTTT

What to Look for in an Exposure Management Platform (And What Most of Them Get Wrong)

Every security team has a version of the same story. The quarter ends with hundreds of vulnerabilities closed. The dashboards are bursting with green. Then someone in a leadership meeting asks: "So, are we actually safer now?"

Crickets.

The room goes quiet because an honest answer requires context – which is something that patch counts and CVSS scores were never designed to provide. Exposure management was created to provide this context - to bridge the gap between remediation efforts and actual risk reduction. The market has responded with a flood of platforms claiming to deliver it. Yet the question security leaders are asking is: which exposure management platforms actually provide it?

In this article, I’ll break down the four dominant approaches to exposure management, explain what each one can and can't deliver, and lay out five evaluation criteria that help you separate platforms built to reduce risk to your unique business and environment from platforms built to report on risk in the wild. 

Four Approaches, Four Architectures

Most exposure management platforms fall into one of four categories, each shaped by how the vendor built (or pieced together) the platform and how it processes data.

  1. Stitched portfolio platforms are the product of acquisition(s). A vendor buys point solutions - cloud security, vulnerability scanning, identity analytics, etc. - and bundles them under its own brand. In these platforms, each product retains its own data model and discovers its own subset of exposures. The vendor may then unify the exposures in a shared console, and that can look like integration. But in practice, each module still operates on its own data and produces its own findings, with little correlation or interconnection between them.
  2. Data aggregation platforms ingest findings from your existing scanners and third-party tools. Then they normalize the data and present it in a unified interface. These platforms can only work with what they receive. That means if ingested findings are disconnected, there’s no way to correlate how one exposure could enable the next.
  3. Single-domain specialist platforms go deep in one area: cloud misconfigurations, network vulnerabilities, identity exposures, or external attack surface. They deliver strong results, but only in their specific domain of expertise. They run into challenges when exposures in one domain chain into exposures in another domain, and the platform has no way to model that relationship.
  4. Integrated platforms are built from scratch to discover and correlate multiple exposure types - credentials, misconfigurations, CVEs, identity issues, cloud configurations - in the same engine. The platform builds a digital twin of the environment and maps how attackers can move laterally from one exposure to the next  - across on-prem, cloud, and hybrid boundaries.

Five Questions That Reveal What a Platform Can Actually Do

The architecture behind each of the four approaches has real consequences for what your team can see, validate, and act on. How do you tell the difference when you’re evaluating? Start by asking these five questions:

1. How many exposure types can it discover - and how deeply does it analyze each one?

CVEs account for roughly 25% of the exposures that attackers exploit. Misconfigurations, cached credentials, excessive permissions, and identity weaknesses make up the rest. Stitched portfolios are limited to what each acquired product was built to find. Aggregators can only normalize what their feeds provide. Single-domain platforms cover just one slice of the pie. An integrated platform should cover both existing and (especially) emerging exposure types - like AI workloads and machine identities - natively.

And coverage alone doesn't tell you enough. What the platform actually knows about each exposure matters just as much. A platform that ingests findings from third-party tools is limited to the metadata those tools collect - their exploitability conditions, their remediation guidance, their research. A platform that discovers exposures natively controls every layer of information for each finding, from exploitability to fix. If your platform can't see certain exposure types, you have blind spots. If it sees them but lacks depth, you're working with noise.

2. Can it map attack paths across environments?

Some stitched products show attack paths. Those paths are derived from network topology and based on connectivity alone. The platform never models how an attacker would actually move laterally from one exposure to the next. Aggregators produce no paths at all, just normalized lists of disconnected findings.

The real test is whether the platform can trace paths across environment boundaries. An attacker who captures cloud credentials on-prem can bypass every cloud-native defense - because the path started outside the cloud platform's visibility. An external-facing vulnerability may look low-priority in isolation, but if it maps to an internal entity with a path to a critical asset, it's an emergency. Most platforms can't draw those connections. They scan each environment on its own and leave the gaps between them uncharted. 

3. Does it validate exploitability?

Most platforms check one or two conditions per exposure, limited by the metadata they store for each finding and the information they collect from each entity in your environment. But true validation means testing multiple conditions: Is the vulnerable library loaded by a running process? Is the port open and reachable? The platform should deliver binary answers - exploitable or not, reachable or not, path to critical assets or not - all grounded in your actual environment, not general assumptions.

4. Does it factor in security controls?

A CVSS 9.8 vulnerability blocked by a firewall cannot be used for lateral movement...because it’s blocked. A 5.5 identity exposure with a direct path to a domain controller is an emergency. Platforms that ignore firewalls, MFA, EDR, and segmentation can leave your team chasing findings that carry no real risk - and missing the ones that actually threaten your critical assets. If security controls aren't part of the attack path analysis, your prioritization is pointing you in the wrong direction, and you're still exposed.

5. How does it prioritize?

Prioritization should answer one question: Does this exposure put a critical asset at risk? Score-based ranking ignores your unique environment. Asset-tag-based ranking ignores the assets in the blast radius of an exposure. Assumed-path ranking never validates exploitability. All three of these can overwhelm IT teams because none of them connect findings to what the business actually needs to protect.

Effective prioritization starts with your critical assets and works backward. The platform needs to prove that the exposure is exploitable, that an attacker can reach it, and the path leads to something the business can't afford to lose. When a platform maps all of that in one graph, choke points emerge - places where one fix eliminates multiple attack paths. In large enterprise environments, that narrows the priority list to about 2% of all exposures.

What This Means for Your Team

The choice of platform architecture determines how secure your environment will be - and how your team spends its time getting there. Stitched and aggregated platforms can leave teams scrambling to reconcile their findings across tools, fighting with IT over remediations that may not reduce risk, and chasing exposures that lead to dead ends. Single-domain platforms deliver depth in one area but leave blind spots across the rest of the attack surface.

An integrated approach eliminates that overhead. It correlates exposures into validated attack paths, factors in the controls you’ve got in place, and identifies the fixes that eliminate the most risk with the fewest actions. When a remediation closes a choke point, continuous exposure management platforms update the graph in real time. That way, you know that exposures that once looked urgent now lead nowhere, and your priority queue always reflects current risk.

When your exposure management platform can validate exploitability, model security controls, and map every viable path to your critical assets, you can answer the question from the opening of this article (Are we actually safer?) with an honest yes.

Note: This article was thoughtfully written and contributed for our audience by Maya Malevich, Head of Product Marketing at XM Cyber.




from The Hacker News https://ift.tt/SYxi1Iv
via IFTTT

Critical cPanel Authentication Vulnerability Identified — Update Your Server Immediately

cPanel has released security updates to address an issue impacting various authentication paths that could allow an attacker to obtain access to the control panel software.

The problem affects all currently supported versions, according to an alert released by cPanel on Tuesday. The issue has been addressed in the following versions -

  • 11.110.0.97
  • 11.118.0.63
  • 11.126.0.54
  • 11.132.0.29
  • 11.134.0.20
  • 11.136.0.5

"If your server is not running a supported version of cPanel that is eligible for this update, it is highly recommended that you work toward updating your server as soon as possible, as it may also be affected," cPanel noted.

While cPanel did not share any details about the vulnerability, web hosting and domain registration company Namecheap disclosed that it "relates to an authentication login exploit that could allow unauthorized access to the control panel."

As a precautionary measure, the company has applied a firewall rule to block access to TCP ports 2083 and 2087, a move it said will temporarily restrict customer access to their cPanel and WHM interfaces until a full patch is applied.
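
For reference, a host-level block of that kind can be approximated with a single firewall rule. Namecheap has not published the exact rule it applied, so the iptables line below is only an illustrative sketch of blocking those two ports.

iptables -A INPUT -p tcp -m multiport --dports 2083,2087 -j DROP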

"Our team is actively monitoring the situation and will apply the official patch across all supported servers as soon as it becomes available," Namecheap noted. "Access to your control panels will be restored immediately once the patch has been successfully deployed."

As of April 29, 2026, 02:42 a.m. UTC, the fix has been applied to Reseller, Stellar Business servers, and the rest, according to the Namecheap Support Team.



from The Hacker News https://ift.tt/onYs93S
via IFTTT

AI-powered honeypots: Turning the tables on malicious AI agents

  • Generative AI allows defenders to instantly create diverse honeypots, like Linux shells or Internet of Things (IoT) devices, using simple text prompts. This makes deploying complex, convincing deceptive environments much easier and more scalable than traditional methods. 
  • AI-driven attacks often prioritize speed over stealth, making them highly vulnerable to being tricked by these simulated systems. This is critical because it allows defenders to catch and study automated threats that might otherwise overwhelm human teams. 
  • This method shifts the strategy from merely detecting attacks to actively manipulating and misleading threat actors. Organizations can safely observe attacker methodologies in real-time within a controlled "hall of mirrors." 
  • Ultimately, by exploiting the inherent lack of awareness in AI agents, defenders can level the playing field and turn an attacker's automation into a liability.


Just as AI brings time-saving advantages to our lives, it brings similar advantages to threat actors. The laborious, time-consuming tasks of finding potentially vulnerable systems, identifying their vulnerabilities, and executing exploit code can be automated and orchestrated using AI. 

At first glance, these new capabilities put defenders at a disadvantage, as they expose new vulnerabilities to the threat actor. But attackers seek to minimize exposure. The more a defender knows about a potential attack, the better they can prepare to repel or detect it. Using AI-orchestrated tooling to gain access to systems trades stealth for capability. That trade-off increases attacker visibility, and increased visibility is something defenders can exploit.

AI systems do not possess awareness. They generate plausible responses within a given context and set of inputs. As such, they can be tricked through prompt injection into responding inappropriately, or fooled into interacting with systems that are not what they appear to be.

Honeypot systems have long been deployed as a method for gathering information about malicious activities. There are many software projects providing honeypots which can be installed and configured. However, the advent of generative AI systems makes it possible to use AI to masquerade as vulnerable systems, allowing honeypots to be deployed widely and with minimal effort.

In this post, I show how generative AI can be used to rapidly deploy adaptive honeypot systems. 

Getting started

The implementation consists of three components: a listener that will accept network connections, a simulated vulnerability that will grant access to the attacker once triggered, and an AI framework that will respond to the attacker’s instructions. 

The listener opens a TCP port, accepts incoming connections, and forwards traffic to handle_client. I set HOST to be “0.0.0.0” to accept any incoming connections to any local IPv4 addresses that my device is assigned.

import socket 
import threading 
 
HOST = "0.0.0.0"    # accept connections on any local IPv4 address, as noted above 
PORT = 2222         # example port for the listener (pick whichever port you want to expose) 
BUFFER_SIZE = 1024  # example receive buffer size, used by handle_client below 
 
def start_server(): 
    """Starts the TCP server.""" 
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  
    server.bind((HOST, PORT))  
    server.listen(3) # max number of concurrent connections 
    print(f"[*] Listening on {HOST}:{PORT}") 
 
    while True: 
        try: 
            conn, addr = server.accept()  
            client_handler = threading.Thread(target=handle_client, args=(conn, addr,)) 
            client_handler.start() 
        except KeyboardInterrupt: 
            print("\n[*] Shutting down server...") 
            break 
        except Exception as e: 
            print(f"[-] Server error: {e}") 
             
    server.close() 
 
if __name__ == "__main__": 
    start_server()

Within handle_client I have created a very basic vulnerability that must be exploited before further access is granted. In this case, the attacker must supply the username “admin” with the password “password123” before they are authenticated.

The nature of the vulnerability need not be this simple. We could respond only to attempts to exploit Shellshock (CVE-2014-6271) or masquerade as a web shell that is only activated in response to port knocking.

def handle_client(conn, addr): 
    print(f"[*] Accepted connection from {addr[0]}:{addr[1]}") 
    # Store conversation history for this client to maintain context  
    conversation_history = [SYSTEM_PROMPT] 
    try: 
        authenticated = False 
        while not authenticated: 
            conn.sendall(b"Username: ") 
            username = conn.recv(BUFFER_SIZE).decode('utf-8').strip() 
            conn.sendall(b"Password: ") 
            password = conn.recv(BUFFER_SIZE).decode('utf-8').strip() 
 
            if username == "admin" and password == "password123": 
                authenticated = True 
                conn.sendall(b"Authentication successful.\n") 
                print(f"[*] Client {addr[0]}:{addr[1]} authenticated successfully.") 
            else: 
                conn.sendall(b"Invalid credentials. Try again.\n") 

The remainder of the handle_client code accepts the attacker’s input, forwards it to the ChatGPT instance, and outputs the message and response to the console.

        while True: 
            conn.sendall(b'>') 
            data = conn.recv(BUFFER_SIZE) 
            if not data: 
                print(f"[*] Client {addr[0]}:{addr[1]} disconnected.") 
                break 
 
            command = data.decode('utf-8').strip() 
            print(f"[*] Received command from {addr[0]}:{addr[1]}: '{command}'") 
 
            if command.lower() == 'exit': 
                print(f"[*] Client {addr[0]}:{addr[1]} requested exit.") 
                break 
            conversation_history.append({"role": "user", "content": command}) 
 
            # Call ChatGPT API 
            try: 
                chat_completion = client.chat.completions.create( 
                    model=MODEL_NAME, 
                    messages=conversation_history, 
                    temperature=0.1, # Keep responses less creative, more factual/direct 
                    max_tokens=500 # Limit response length 
                ) 
                 
                # Extract AI's response 
                ai_response = chat_completion.choices[0].message.content.strip() 
                print(f"[*] ChatGPT response: '{ai_response}'") 
                # Append AI's response to history for continued context 
                conversation_history.append({"role": "assistant", "content": ai_response}) 
                # Send AI's response back to the client 
                conn.sendall(ai_response.encode('utf-8') + b'\n') 
 
            except Exception as e: 
                error_message = f"Error communicating with ChatGPT: {e}" 
                print(error_message) 
                conn.sendall(error_message.encode('utf-8') + b'\n') 
 
    except ConnectionResetError: 
        print(f"[*] Client {addr[0]}:{addr[1]} forcibly closed the connection.") 
    except Exception as e: 
        print(f"[-] Error handling client {addr[0]}:{addr[1]}: {e}") 
    finally: 
        conn.close() 
        print(f"[*] Connection with {addr[0]}:{addr[1]} closed.") 

The key functionality is contained in the instructions to the ChatGPT instance. You must have a valid OpenAI API key in order to initialize the client that will behave as instructed.

import os 
 
from openai import OpenAI 
 
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") # Get API key from environment variable 
MODEL_NAME = "gpt-3.5-turbo" 
 
# --- OpenAI Client Initialization --- 
if not OPENAI_API_KEY: 
    print("Error: OPENAI_API_KEY environment variable not set.") 
    print("Please set the environment variable before running the script.") 
    exit(1) 
 
client = OpenAI(api_key=OPENAI_API_KEY) 
 
# --- System Prompt for Linux Shell Simulation --- 
SYSTEM_PROMPT = { 
    "role": "system", 
    "content": ( 
        "You are a Linux bash shell. You will receive commands from the user and execute them, " 
        "returning only the standard output and standard error. Do not include any conversational text, " 
        "explanations, or additional formatting like markdown code blocks. You must only behave as a bash shell. " 
        "If a command produces no output, return an empty string. " 
        "If a command is invalid or unknown, return an appropriate error message consistent with a bash shell. " 
        "The Linux system that you are impersonating belongs to a junior software engineer learning python, " 
        "the file system structure and the content of any files should reflect that expected of a python learner." 
    ) 
} 

Generative AI doesn’t just simulate human personas; it can convincingly impersonate entire computing environments. In this example, we instruct the system to masquerade as a basic Linux shell owned by a software engineer learning Python.


We can be more inventive and instruct the system to masquerade as a smart fridge by changing our instructions to ChatGPT.

SYSTEM_PROMPT = { 
    "role": "system", 
    "content": ( 
        "You are a smart fridge running the Busybox operating system and providing a Bash shell. " 
        "You will receive commands from the user and execute them in the context of being a smart fridge. " 
        "You will only return the standard output and standard error. Do not include any conversational text, " 
        "explanations, or additional formatting like markdown code blocks. You must only behave as a shell for an " 
        "IoT device. If a command produces no output, return an empty string. " 
        "If a command is invalid or unknown, return an appropriate error message consistent with a bash shell. " 
        "The file system structure should reflect that of a smart fridge manufactured by SmartzFrijj running " 
        "Busybox operating system as an embedded device. The current and historical values for temperature are " 
        "recorded in the file system path '/usr/local', information about stored milk is in the user directory." 
    ) 
}

The limiting factor is no longer tooling, but how convincingly we can model a target environment.  A skilled human attacker is unlikely to be fooled for long — that milk would be rank. But that’s not the point. We’re not deploying AI honeypots to trick human threat actors.  

Let’s ask ChatGPT what it thinks…

[Screenshot: ChatGPT's response]

The industry narrative around AI in cybersecurity is dominated by fear of faster attacks, lower barriers, and greater scale. But speed and scale come with a cost. AI systems require interaction and context. Automation does not simply amplify attackers; it also constrains and exposes them. In that constraint lies an opportunity: not just to detect attacks, but to mislead, study, and ultimately manipulate the attacker.



from Cisco Talos Blog https://ift.tt/0Yhl2eG
via IFTTT

CISA Adds Actively Exploited ConnectWise and Windows Flaws to KEV

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added two security flaws impacting ConnectWise ScreenConnect and Microsoft Windows to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation. The vulnerabilities are listed below - CVE-2024-1708 (CVSS score: 8.4) - A path traversal vulnerability in ConnectWise ScreenConnect

from The Hacker News https://ift.tt/pNTGX8H
via IFTTT

Tuesday, April 28, 2026

Brazilian LofyGang Resurfaces After Three Years With Minecraft LofyStealer Campaign

A cybercrime group of Brazilian origin has resurfaced after more than three years to orchestrate a campaign that targets Minecraft players with a new stealer called LofyStealer (aka GrabBot).

"The malware disguises itself as a Minecraft hack called 'Slinky,'" Brazil-based cybersecurity company ZenoX said in a technical report. "It uses the official game icon to induce voluntary execution, exploiting the trust of young users in the gaming scene."

The activity has been attributed with high confidence to a threat actor known as LofyGang, which was observed leveraging typosquatted packages on the npm registry to push stealer malware in 2022, specifically with an intent to siphon credit card data and user accounts associated with Discord Nitro, gaming, and streaming services.

The group, believed to be active since late 2021, advertises its tools and services on platforms like GitHub and YouTube, while also contributing to an underground hacking community under the alias DyPolarLofy to leak thousands of Disney+ and Minecraft accounts.

"Minecraft has been a LofyGang target since 2022," Acassio Silva, co-founder and head of threat intelligence at ZenoX, told The Hacker News. "They leaked thousands of Minecraft accounts under the DyPolarLofy alias on Cracked.io. The current campaign goes after Minecraft players directly through a fake 'Slinky' hack."

The attack begins with a Minecraft hack that, when launched, triggers the execution of a JavaScript loader that's ultimately responsible for deploying LofyStealer ("chromelevator.exe") on compromised hosts and executing it directly in memory, with the aim of harvesting a wide range of sensitive data spanning multiple web browsers, including Google Chrome, Chrome Beta, Microsoft Edge, Brave, Opera, Opera GX, Mozilla Firefox, and Avast Browser.

The captured data, which includes cookies, passwords, tokens, cards, and International Bank Account Numbers (IBANs), is exfiltrated to a command-and-control (C2) server located at 24.152.36[.]241.

"Historically, the group's primary vector was the JavaScript supply chain: NPM package typosquatting, starjacking (fraudulent references to legitimate GitHub repositories to inflate credibility), and payloads embedded in sub-dependencies to evade detection," ZenoX said.

"The focus was on Discord token theft, Discord client modification for credit card interception, and exfiltration via webhooks abusing legitimate services (Discord, Repl.it, Glitch, GitHub, and Heroku) as C2."

The latest development marks a departure from previously observed tradecraft and a shift towards a malware-as-a-service (MaaS) model with free and premium tiers, along with a bespoke builder called Slinky Cracked that's used as a delivery vehicle for the stealer malware.

The disclosure comes as threat actors are increasingly abusing the trust associated with a platform like GitHub to host bogus repositories that act as lures for malware families like SmartLoader, StealC Stealer, and Vidar Stealer. Unsuspecting users are directed to these repositories through techniques like SEO poisoning.

In some cases, attackers have been found to spread Vidar 2.0 through Reddit posts advertising fake Counter-Strike 2 game cheats, redirecting victims to a malicious website that delivers a ZIP archive containing the malware.

"This infostealer campaign highlights an ongoing security challenge where widely trusted platforms are abused to distribute malicious payloads," Acronis said in an analysis published last month. "By taking advantage of social trust and common download channels, threat actors are often able to bypass traditional security solutions."

The findings add to a growing list of campaigns that have leveraged GitHub in recent months -

  • Targeting developers directly inside GitHub, using fake Microsoft Visual Studio Code (VS Code) security alerts posted through Discussions to trick users into installing malware by clicking on a link. "Because GitHub Discussions trigger email notifications for participants and watchers, these posts are also delivered directly to developers' inboxes," Socket said. "This extends the reach of the campaign beyond GitHub itself and makes the alerts appear more legitimate."
  • Targeting Argentina's judicial systems using spear‑phishing emails to distribute a compressed ZIP archive that uses an intermediate batch script to retrieve a remote access trojan (RAT) hosted on GitHub.
  • Creating GitHub accounts and OAuth applications, followed by opening an issue that mentions a target developer, triggering an email notification that, in turn, tricks them into authorizing the OAuth app, effectively allowing the attacker to obtain their access tokens. The issues aim to induce a false sense of urgency, warning users of unusual access attempts.
  • Using fraudulent GitHub repositories to distribute malicious batch script installers masquerading as legitimate IT and security software, leading to the deployment of the TookPS downloader, which then initiates a multi-stage infection chain to establish persistent remote access using SSH reverse tunnels and RATs like MineBridge RAT (aka TeviRAT). The activity has been attributed to Rift Brigantine (aka FIN11, Graceful Spider, and TA505).
  • Using counterfeit GitHub repositories posing as AI tools, game cheats, Roblox scripts, phone number location trackers, and VPN crackers to distribute LuaJIT payloads that function as a generic trojan as part of a campaign dubbed TroyDen's Lure Factory.

"The breadth of the lure factory – gaming cheats, developer tools, phone trackers, Roblox scripts, VPN crackers – suggests an actor optimizing for volume across audiences rather than precision targeting," Netskope said.

"Defenders should treat any GitHub-hosted download that pairs a renamed interpreter with an opaque data file as a high-priority triage candidate, regardless of how legitimate the surrounding repository looks."



from The Hacker News https://ift.tt/jRg5tVo
via IFTTT

Announcing Citrix SDS support on Nutanix NKP: Solving the developer platform puzzle

Most enterprise organizations face a persistent contradiction: the need to ship code faster than ever while maintaining a security posture that often creates friction for those very same teams.

When developers spend days setting up local environments or navigating restrictive security policies, time-to-market suffers. Furthermore, when source code and credentials reside on scattered, unmanaged endpoints, organizational risk increases.

At the recent Nutanix .NEXT 2026 event in Chicago, the prevailing theme was the urgent need to eliminate “management tax” and operational complexity. The announcement of Citrix Secure Developer Spaces (SDS) support on Nutanix Kubernetes Platform (NKP) directly addresses this. It provides a strategic path to improve the developer experience without compromising security or infrastructure flexibility.

What this announcement means for the enterprise

SDS on NKP moves organizations away from fragile, inconsistent developer setups toward a controlled, repeatable, and easy-to-scale model. This integration focuses on three major business priorities:

  • Accelerated time-to-value: New developers and contractors are productive in minutes. Workspaces come preconfigured with the specific tools and access they need, removing the “onboarding drag” that plagues large-scale projects.
  • Hardened IP security: Source code, access tokens, and credentials stay off local endpoints. The workspace itself acts as the secure boundary, significantly reducing the threat surface associated with lost or compromised hardware.
  • Reduced platform engineering overhead: Instead of wasting engineering cycles building custom workspace environments from raw Kubernetes primitives, teams can leverage a turn-key, enterprise-grade solution.

For organizations already relying on Citrix DaaS for secure workforce productivity and Citrix NetScaler for high-performance application delivery on Nutanix, SDS adds a powerful developer-environment layer to an established and trusted Citrix platform strategy, extending secure access, governance, and operational consistency into modern cloud-native engineering workflows.

Why NKP is the right foundation for SDS

While Kubernetes is the standard for modern applications, managing it across diverse environments often introduces significant overhead. NKP is designed to eliminate that complexity, providing a consistent foundation that scales with the business.

Predictable TCO and operational simplicity

Managing fragmented platforms often leads to duplicated operating models and runaway costs. Because SDS sits at the center of engineering activity, performance and regional control are critical. When the underlying Kubernetes layer is simplified, delivering SDS becomes highly predictable and cost-effective, lowering the total cost of ownership (TCO) for developer platforms.

Strategic hybrid multi-cloud agility

Infrastructure strategies must remain fluid. Organizations need the freedom to place workloads where they make the most financial and operational sense without being locked into a single provider.

With SDS supported on NKP, leadership gains a true hybrid multi-cloud foundation. Secure developer workspaces can run on-premises today, burst to the public cloud tomorrow, or move to the edge next year – all without rebuilding the platform. This ensures the technology stack evolves alongside business requirements rather than against them.

Built for production-grade scale

This integration is more than a conceptual pairing; it is a clear separation of operational duties. NKP handles the container runtime, orchestration, and cluster management, while SDS delivers secure, standardized developer workspaces on top of it.

Nutanix has released a comprehensive deployment guide for SDS on NKP, using a containerized installer and Helm. This guide addresses the real-world production requirements that infrastructure teams prioritize, including ingress, TLS termination, and node placement.

A modern operating model for engineering

Citrix SDS on Nutanix NKP aligns developer productivity, operational control, and infrastructure flexibility. It represents a refined operating model for enterprises that want to ship software faster while keeping a firm grip on governance and deployment choice.

This joint solution is available now to help you tighten your security posture without compromising development speed. Get started by reviewing the Nutanix deployment guide or reach out to your Citrix and Nutanix account teams to discuss your strategic implementation.



from Citrix Blogs https://ift.tt/68tbh1y
via IFTTT

Why Secure Data Movement Is the Zero Trust Bottleneck Nobody Talks About

Every security program is betting on the same assumption: once a system is connected, the problem is solved. Open a ticket, stand up a gateway, push the data through. Done.

That assumption is wrong. It is also a major reason Zero Trust programs stall.

New research my team just published puts numbers on it. The Cyber360: Defending the Digital Battlespace report, based on a survey of 500 security leaders in government, defense, and critical services across the U.S. and UK, found that 84% of government IT security leaders agree that sharing sensitive data across networks heightens their cyber risk. More than half - 53% - still rely on manual processes to move that data between systems. In 2026. With AI accelerating the pace of operations on both sides.

That is the Zero Trust gap nobody talks about. Not identity. Not endpoints. The movement of data itself.

The Threat Volume Is Rising Faster Than the Controls

Cyber360 recorded an average of 137 attempted or successful cyberattacks per week against national security organizations in 2025, up from 127 the previous year. U.S. agencies saw the weekly rate surge 25%. Verizon's 2025 Data Breach Investigations Report tracks a similar trajectory on the enterprise side: third-party involvement in breaches doubled year over year, reaching 30% of all incidents. IBM's 2025 Cost of a Data Breach Report put the average cost of a breach spanning multiple environments at $5.05 million, roughly $1 million more than on-premises-only incidents.

The boundaries between IT and OT, between tenants, between partner and internal environments are where the money and the dwell time sit right now.

Connectivity Is Not the Same as Secure Data Movement

The moment data crosses a boundary, whether between an OT network and the enterprise SOC, between a partner tenant and your cloud, or between classified and unclassified, it stops being a routing problem and becomes a trust problem. It has to be validated, filtered, and policy-controlled before anything downstream can act on it. That is where modern architectures slow down.

The Cyber360 data is blunt about where the pain is concentrated:

  • 78% of respondents cited outdated infrastructure as a primary source of cyber vulnerability, specifically pointing to analog systems and manual processes as weak links.
  • 49% named ensuring data integrity and preventing tampering in transit as their single biggest challenge when transferring information across classified or coalition networks.
  • 45% flagged managing identity and authentication across multiple domains as their biggest access challenge.

Integrity in transit, identity across domains, manual processes still in the loop: that is a working description of the attack surface adversaries have been exploiting for three years.

The enterprise data tells the same story in a different language. Dragos' 2025 OT Cybersecurity Report found that 75% of OT attacks now originate as IT breaches, with roughly 70% of OT systems expected to connect to IT networks within the next year. The traditional IT/OT air gap is effectively gone. The managed file transfer breaches drive the point home. Cl0p's exploitation of MOVEit compromised more than 2,700 organizations and exposed the personal data of roughly 93 million individuals. The same playbook worked against GoAnywhere and Cleo. Every one of those incidents was, at its core, an attack on the pipes that move data between trust boundaries.

The Speed-vs-Security Trade-off Is a Myth

There is a persistent belief that you can either move data fast or move it securely. Pick one.

In practice, most teams pick security and accept the delay. That works when decision cycles are measured in minutes. It does not work when they are measured in seconds, and it collapses completely when they are measured in milliseconds.

AI is accelerating on both sides. Detection and response pipelines are moving toward autonomous action. They do not wait for a gateway to finish inspecting a file. When 53% of national security organizations are still moving data manually, the delta between AI-speed demand and analog-speed supply becomes the attack surface. An AI model, whether it is running fraud detection, threat triage, or targeting analysis, is only as good as the data reaching it. When that data cannot move freely, or cannot be trusted when it arrives, the model runs on stale or partial context. The bottleneck is not the intelligence layer. It is the plumbing underneath.

The Role of Cross Domain Technologies

This is where cross-domain technologies earn their place, and not as a compliance checkbox.

Done properly, they remove the forced choice between speed and security. They enforce trust at the boundary instead of after it. They let systems operate as a coordinated whole, instead of as a set of isolated islands stapled together with point-to-point integrations that attackers have now demonstrated they can dismantle at scale.

The Cyber360 research points toward a specific architectural answer: a layered model combining Zero Trust, Data Centric Security, and Cross Domain Solutions. No single framework closes the gap alone. Zero Trust governs who and what. Data-centric security governs the data itself, wherever it goes. Cross-domain solutions govern the movement between environments. Together, they let secure data sharing happen at near-real-time speed across classified, coalition, and operational boundaries.

The principle applies well beyond defense: enterprise programs where SOC data crosses OT, IT, and cloud boundaries; critical infrastructure where operational data has to reach decision-makers without dropping integrity; multi-party investigations where partner data has to flow in both directions under policy.

The Bottom Line

The assumption that data arrives trusted the moment it crosses a boundary is the assumption that attackers are most reliably exploiting right now. The boundary is the attack surface. Movement is where policy collapses. And when more than half of national security organizations are still moving sensitive data through manual processes, the gap between mission speed and control speed is not just a bottleneck. It is the vulnerability.

That is the space Everfox works in: securing the access, transfer, and movement of data across environments at mission speed.

For the architecture patterns, control placements, and operational pitfalls, see our A Guide to Secure Collaboration & Data Movement.

Note: This article is written and contributed by Petko Stoyanov, Chief Technology Officer, Everfox.




from The Hacker News https://ift.tt/jqprba1
via IFTTT

How to Monitor Server Health on Ubuntu 24.04 Using Netdata or Prometheus

It’s important to monitor your server, especially when you are running mission-critical applications or serving the web. If you are running Ubuntu 24.04, the most useful tools for monitoring server health are Netdata and Prometheus. Whether you choose Netdata for its ease of use and real-time features, Prometheus for enterprise-scale monitoring, or the built-in Ubuntu tools because they are lightweight, what matters most is that your monitoring solution fits your needs and environment.

This tutorial will guide you through the process of monitoring server health on Ubuntu 24.04 using Netdata or Prometheus, covering everything from installation and configuration to best practices.


Why Server Monitoring Matters

Monitoring server health on Ubuntu 24.04 helps you maintain good performance and avoid downtime. Server health monitoring helps you:

  • Avoid Downtime: Catch problems early, before they take services offline
  • Performance Tuning: Spot resource bottlenecks and optimization opportunities
  • Improve Security: Watch for unusual activity patterns
  • Capacity Planning: Make informed decisions about resource sizing and scaling
  • Prevent SLA Violations: Meet SLAs with proactive rather than reactive monitoring

Method 1: Monitor Server Health on Ubuntu 24.04 with Netdata

Netdata is a monitoring tool that runs directly on the server and provides a real-time, interactive web dashboard with very low resource usage. It is ideal for quick, visually rich, at-a-glance views of server health. Let’s monitor server health on Ubuntu 24.04 with Netdata:

Step 1: Install Netdata on Ubuntu 24.04

Netdata is perfect for real-time performance monitoring: it works out of the box with no configuration required, ships with polished web dashboards, and is light on resources. Type the following in your terminal:

sudo apt update

sudo apt install netdata

 

This installs the Netdata version packaged in Ubuntu’s repositories, which may lag behind the latest upstream release.
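
If you want the latest release instead, Netdata also publishes a one-line kickstart installer. The exact command can change over time, so treat the following as a sketch and check Netdata’s documentation before running it:

wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh

sh /tmp/netdata-kickstart.sh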

Step 2: Access Netdata Dashboard

Once installed, open your browser and navigate to:

http://your_server_ip:19999

 

Netdata runs a local web server on port 19999. The dashboard gives you live graphs of CPU, RAM, disk, and network usage; monitoring for systemd services, MySQL, Nginx, Docker, and more; and charts that auto-refresh every second.

You’ll see live charts for:

  • CPU usage and load
  • Memory and swap utilization
  • Disk I/O and space usage
  • Network traffic
  • System processes
  • Application-specific metrics
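
If the dashboard doesn’t come up, first confirm that the Netdata service is running and listening on port 19999. These are standard systemd and iproute2 commands, so they should work on any Ubuntu 24.04 install:

sudo systemctl status netdata

sudo ss -ltnp | grep 19999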

 

Note: If Netdata is exposed on a public server, I would recommend putting it behind a reverse proxy with authentication.
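
As a rough sketch of that setup (assuming Nginx and apache2-utils are installed; the site name, domain, and username below are placeholders), you could create a password file and a minimal proxy configuration like this:

sudo apt install nginx apache2-utils

sudo htpasswd -c /etc/nginx/.htpasswd admin

Then, in an Nginx server block (for example /etc/nginx/sites-available/netdata, enabled via a symlink into sites-enabled):

server {
    listen 80;
    server_name netdata.example.com;

    location / {
        auth_basic "Netdata";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:19999;
        proxy_set_header Host $host;
    }
}

You may also want to bind Netdata to 127.0.0.1 in /etc/netdata/netdata.conf so the dashboard is only reachable through the proxy.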

Step 3: Configure Alerts and Notifications (Optional)

Edit the Netdata alert notification configuration file. To enable alerts, edit:

sudo nano /etc/netdata/health_alarm_notify.conf

Configure SMTP settings:

# Email configuration

DEFAULT_RECIPIENT_EMAIL="admin@yourdomain.com"

EMAIL_SENDER="netdata@yourdomain.com"

# SMTP settings

SEND_EMAIL="YES"

SMTP_SERVER="smtp.gmail.com"

SMTP_PORT="587"

SMTP_USERNAME="your-email@gmail.com"

SMTP_PASSWORD="your-app-password"

You can enable additional notification channels and tweak the thresholds for any health check. Then restart Netdata:

sudo systemctl restart netdata
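
To verify that notifications actually go out, Netdata ships a test mode for its notification script. The plugin path can vary by installation method (for example /usr/libexec/netdata vs. /usr/lib/netdata), so adjust it if needed:

sudo su -s /bin/bash netdata

/usr/libexec/netdata/plugins.d/alarm-notify.sh test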

 

Pros of Using Netdata

  • Quick installation
  • Real-time visualization
  • Low resource usage
  • Minimal setup

Netdata supports alerting through: Email, Slack, Telegram, Discord, and Webhooks.

Method 2: Monitor Server Health on Ubuntu 24.04 with Prometheus

Prometheus is a powerful, open-source monitoring system that collects metrics from your services and stores them in its internal time-series database, where they can be queried for monitoring and alerting. It can also drive dashboards.

Step 1: Install Prometheus

Prometheus provides powerful querying and long-term storage of metrics. In this step you’ll download a Prometheus release from GitHub and create the directories where it keeps its configuration and time-series database. Running it under a dedicated, non-login system user keeps permissions tight.

Create a user and directories:

Let’s create a dedicated user:

sudo useradd --no-create-home --shell /bin/false prometheus

sudo mkdir /etc/prometheus /var/lib/prometheus

 

Download and install Prometheus:

Prometheus excels at collecting time-series metrics at scale. Let’s download the release archive and install it:

cd /tmp

wget https://github.com/prometheus/prometheus/releases/download/v2.52.0/prometheus-2.52.0.linux-amd64.tar.gz

 

After downloading the archive, extract it in the /tmp directory:

tar xvf prometheus-2.52.0.linux-amd64.tar.gz

 

After that, change into the extracted directory and copy the prometheus and promtool binaries into /usr/local/bin:

cd prometheus-2.52.0.linux-amd64

sudo cp prometheus promtool /usr/local/bin/
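
You can quickly confirm the binaries are in place and on your PATH:

prometheus --version

promtool --version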

 

Next, copy the console templates and the default configuration file into /etc/prometheus:

sudo cp -r consoles/ console_libraries/ /etc/prometheus

sudo cp prometheus.yml /etc/prometheus

 

After that, set ownership with the chown utility as below:

sudo chown -R prometheus:prometheus /etc/prometheus /var/lib/prometheus

 

Step 2: Set Up Prometheus as a Systemd Service

Here you tell systemd how to run Prometheus as a background service. The path to the binary, its configuration file, and its storage directory are all defined in ExecStart.

sudo nano /etc/systemd/system/prometheus.service

Paste this config:

[Unit]

Description=Prometheus

Wants=network-online.target

After=network-online.target

[Service]

User=prometheus

ExecStart=/usr/local/bin/prometheus \

--config.file=/etc/prometheus/prometheus.yml \

--storage.tsdb.path=/var/lib/prometheus/

[Install]

WantedBy=default.target
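
Optionally, if you want systemd to bring Prometheus back up automatically after a crash, you can also add a restart policy to the [Service] section (not part of the minimal unit above):

Restart=on-failure

RestartSec=5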

 

Enable and Start

systemd will start Prometheus at boot and keep it running. Reload systemd so it picks up the new unit file, then enable and start Prometheus:

sudo systemctl daemon-reload

sudo systemctl enable prometheus

sudo systemctl start prometheus
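
Before opening the UI, it’s worth checking that the service started cleanly. Prometheus also exposes a simple health endpoint you can query locally:

sudo systemctl status prometheus

curl http://localhost:9090/-/healthy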

 

Step 3: Access Prometheus Dashboard

The Prometheus web UI is where you run queries against your metrics, explore time-series data, and check your active scrape targets. It is the main control portal for Prometheus. Go to:

http://your_server_ip:9090

 

You’ll get a UI where you can execute queries, see metrics, and inspect system targets.

The Prometheus UI lets you:

  • Query real-time and historical metrics
  • Explore data collected from targets
  • Check configuration and system status

 

Look at the top of the Prometheus UI, and you will see a field titled “Expression”. Try some example queries (the node_* metrics require Node Exporter to be installed and scraped, as covered in the troubleshooting section below):

  • up – shows whether each scrape target is currently reachable (1) or down (0).
  • node_cpu_seconds_total – cumulative CPU time spent in each mode since boot.
  • node_memory_MemAvailable_bytes – shows available memory.
  • node_load1 – 1-minute system load average.
  • node_filesystem_free_bytes – displays free disk space per filesystem.

Click Execute.

You’ll see two tabs:

  • Table: Presents the raw metric values as rows.
  • Graph: Displays the data visually over time.

Prometheus is not just a data collector; it also lets you query, filter, and visualize that data. With PromQL, you can monitor anything from real-time CPU usage to historical memory patterns, giving you full visibility into server health.
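
As an example, a common PromQL expression for per-host CPU utilization, assuming Node Exporter metrics are being scraped (see the troubleshooting section below), looks like this:

100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)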

Troubleshooting Missing Metrics

Make sure you have Node Exporter or other exporters running and listening (Node Exporter listens on port 9100 by default). Check prometheus.yml for correct scrape_configs, and restart Prometheus after any change to the config:

sudo systemctl restart prometheus
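
As a minimal sketch of getting those metrics flowing (Ubuntu packages the exporter as prometheus-node-exporter; the job name below is arbitrary), install Node Exporter and add a scrape job to /etc/prometheus/prometheus.yml:

sudo apt install prometheus-node-exporter

Then, under scrape_configs in prometheus.yml:

  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]

Restart Prometheus as shown above, and the node_* metrics should appear in the Expression field within a scrape interval or two.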

With that in place, you now have the monitoring you need to keep your Ubuntu 24.04 servers healthy and performant.

Final Thoughts

Monitoring the health of your Ubuntu 24.04 server is crucial for performance and stability. Netdata is great for live, real-time monitoring of a system, whereas Prometheus is great for gathering, storing, and querying metrics over the long term. Whether you are optimizing for simplicity or for scale, this guide gives you the tools you need to keep your server running efficiently.



from StarWind Blog https://ift.tt/hzdHD0W
via IFTTT