Cybersecurity researchers have disclosed three security flaws in Planet Technology's WGS-804HPT industrial switches that could be chained to achieve pre-authentication remote code execution on susceptible devices.
"These switches are widely used in building and home automation systems for a variety of networking applications," Claroty's Tomer Goldschmidt said in a Thursday report. "An attacker who is able to remotely control one of these devices can use them to further exploit devices in an internal network and do lateral movement."
The operational technology security firm, which carried out an extensive analysis of the firmware used in these switches using the QEMU framework, said the vulnerabilities are rooted in the dispatcher.cgi interface used to provide a web service. The list of flaws is below -
CVE-2024-52558 (CVSS score: 5.3) - An integer underflow flaw that can allow an unauthenticated attacker to send a malformed HTTP request, resulting in a crash
CVE-2024-52320 (CVSS score: 9.8) - An operating system command injection flaw that can allow an unauthenticated attacker to send commands through a malicious HTTP request, resulting in remote code execution
CVE-2024-48871 (CVSS score: 9.8) - A stack-based buffer overflow flaw that can allow an unauthenticated attacker to send a malicious HTTP request, resulting in remote code execution
Successful exploitation of the flaws could permit an attacker to hijack the execution flow by embedding shellcode in the HTTP request and gain the ability to execute operating system commands.
Following responsible disclosure, the Taiwanese company has rolled out patches for the shortcomings with version 1.305b241111 released on November 15, 2024.
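If you operate these switches, a quick triage step is comparing each device's reported firmware version against the fixed build. The sketch below is hypothetical: the page fetched and the version-string pattern are assumptions, so adapt them to what your device's web UI actually exposes.

    import re
    import requests

    PATCHED = "1.305b241111"

    def needs_patch(host):
        # Hypothetical: assumes the web UI embeds a version string somewhere
        resp = requests.get(f"http://{host}/", timeout=5)
        match = re.search(r"\b\d\.\d{3}b\d{6}\b", resp.text)
        if match is None:
            return None  # version could not be determined
        return match.group(0) < PATCHED  # string compare fits this fixed-width format

    print(needs_patch("192.0.2.10"))  # documentation-range address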
The Good | DoJ Indicts Crypto Mixer Operators & Deletes PlugX Malware from Over 4000 Machines
The DoJ has indicted three Russian nationals – Roman Vitalyevich Ostapenko, Alexander Evgenievich Oleynik, and Anton Vyachlavovich Tarasov – for operating cryptocurrency mixing services Blender[.]io and Sinbad[.]io. These mixers, used heavily by ransomware gangs and North Korean hackers, laundered criminal proceeds, including ransomware payouts and stolen cryptocurrency. Blender[.]io operated from 2018 to 2022 and helped Lazarus Group launder $500 million of the $617 million stolen in the Axie Infinity Ronin bridge attack. After the site was shut down, Sinbad[.]io emerged, offering similar services until it was seized in November 2023 in a joint law enforcement operation.
A joint statement from the U.S., South Korea, and Japan came just days after the indictment, revealing that North Korean threat actors have stolen over $659 million in cryptocurrency. Crypto mixers are a key part of how these actors profit from crypto-heists, and they remain a significant threat to the integrity of the global financial system.
A court-ordered FBI operation has successfully removed PlugX malware from over 4,200 infected computers across the U.S.
PlugX has long been a thorn in the side of governments, businesses, dissidents, and NGOs worldwide. First observed in 2008, the remote access trojan (RAT) is primarily used by PRC-sponsored hacking outfits and enables attackers to steal sensitive information, remotely control infected devices, and spread to additional systems via USB drives.
The operation kicked off in July 2024 and involved an international effort between law enforcement and cybersecurity partners who managed to sinkhole a PlugX variant’s server and issue a self-delete command to erase the malware, registry keys, and directories from infected machines without affecting legitimate system functions.
The Bad | New Evidence Connects DPRK’s IT Worker Fraud to 2016 Crowdfunding Scam
Infrastructure links between the DPRK’s fraudulent IT worker schemes and a crowdfunding scam from 2016 have emerged, shining a light on the extent of North Korea’s early experimentation with illicit money-making tactics.
Exposed in 2023, North Korea’s ongoing fraudulent IT worker scheme involves DPRK actors using fake identities to secure jobs globally, generating revenue for the sanctions-laden regime. These workers have often been tied to the 313th General Bureau under North Korea’s Workers’ Party and subsequently deployed to China and Russia to work for front companies like Yanbian Silverstar and Volasys Silver Star, both sanctioned by the U.S. in 2018 for facilitating North Korean labor exports.
In a new report, researchers examined the 17 domains seized by U.S. authorities back in October 2023, all impersonating IT service companies to mask North Korean workers’ identities as they applied for freelance jobs. One domain, silverstarchina[.]com, was traced to Yanbian Silverstar offices located in the Yanbian prefecture. The same street address and email were reused to register other domain names.
This included kratosmemory[.]com, a domain used in a 2016 IndieGoGo crowdfunding campaign, which managed to raise $21,877 but was exposed as a scam when backers received no products or refunds. Researchers found that the WHOIS registrant information for kratosmemory[.]com was last updated in mid-2016 to a persona called ‘Dan Moulding’ – a match to the IndieGoGo user profile used in the Kratos scam.
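This kind of infrastructure pivot (matching a reused registrant email or address across domains) is easy to reproduce in principle. A minimal sketch using the python-whois package; the domains below are placeholders, WHOIS fields vary by TLD and are often redacted today, and historical pivots like the one above require an archived WHOIS dataset:

    import whois  # pip install python-whois

    def registrant_fingerprint(domain):
        # Fields commonly reused across an actor's infrastructure
        record = whois.whois(domain)
        return (record.get("emails"), record.get("org"), record.get("address"))

    seen = {}
    for domain in ["example-itfirm.com", "example-memory.com"]:  # placeholders
        fp = registrant_fingerprint(domain)
        seen.setdefault(fp, []).append(domain)

    for fp, domains in seen.items():
        if len(domains) > 1 and any(fp):
            print("Shared registrant data", fp, "links:", domains)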
Though the 2016 scam is considered low complexity compared to the DPRK’s current IT worker schemes, it shows an early example of North Korea’s efforts to generate revenue while evading sanctions, demonstrating the regime’s willingness to evolve and diversify its financial tactics.
The Ugly | GRU-Linked APT Conducts Cyber Espionage on Kazakhstani Diplomatic Relations
UAC-0063, a threat actor likely associated with GRU-backed APT28, has been targeting Kazakhstan and neighboring countries with multi-stage infections to enable continuous data extraction in ongoing cyberespionage campaigns.
CERT-UA first exposed UAC-0063 in April 2023, revealing attacks on government entities across Ukraine, Israel, India, Kazakhstan, Kyrgyzstan, and Tajikistan. New analysis from researchers explains how UAC-0063 now leverages spearphishing emails that contain malicious Microsoft Office documents from Kazakhstan’s Ministry of Foreign Affairs. These documents, such as intergovernmental letters and administrative notes, trigger an infection chain called “Double-Tap”, which drops the HATVIBE malware.
HATVIBE is a Visual Basic Script (VBS) backdoor that operates as a loader for additional modules, paving the way for the Python-based CHERRYSPY backdoor, which allows the attackers to execute code received from a C2 server. Researchers note that the original source of the documents is unknown, but they were likely exfiltrated in a previous attack.
UAC-0063’s focus on diplomatic targets reflects Russia’s cyberespionage priorities in Kazakhstan. Since the 2022 Russian invasion of Ukraine, Kazakhstan has balanced support for Ukraine’s territorial integrity against its longstanding ties to Moscow. This stance, coupled with efforts to strengthen ties with Western powers, China, and its neighboring Central Asian states, has increased Kazakhstan’s strategic significance in the political sphere. Kazakhstan’s steady departure from Russia’s influence also explains the espionage campaign’s emphasis on weaponized documents related to Kazakhstan’s foreign relations.
These campaigns continue the GRU’s efforts in gathering intelligence on Kazakhstan’s geopolitical alliances, trade routes, and strategic projects to counter competing powers and maintain Russia’s influence in Central Asia.
Cybersecurity researchers have exposed a new campaign that targets web servers running PHP-based applications to promote gambling platforms in Indonesia.
"Over the past two months, a significant volume of attacks from Python-based bots has been observed, suggesting a coordinated effort to exploit thousands of web apps," Imperva researcher Daniel Johnston said in an analysis. "These attacks appear tied to the proliferation of gambling-related sites, potentially as a response to the heightened government scrutiny."
The Thales-owned company said it has detected millions of requests originating from a Python client that includes a command to install GSocket (aka Global Socket), an open-source tool that can be used to establish a communication channel between two machines regardless of the network perimeter.
It's worth noting that GSocket has been put to use in many a cryptojacking operation in recent months; attackers have even exploited the access the utility provides to insert malicious JavaScript code on sites to steal payment information.
The attack chains particularly involve attempts to deploy GSocket by leveraging pre-existing web shells installed on already compromised servers. A majority of the attacks have been found to single out servers running a popular learning management system (LMS) called Moodle.
A noteworthy aspect of the attacks is the addition of entries to bashrc and crontab system files to ensure that GSocket keeps running even after the web shells are removed.
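Defenders can hunt for that persistence directly. A minimal sketch that searches shell startup files and crontabs for GSocket indicators; the indicator strings are my assumption, based on the tool's usual naming, so extend them with what you observe:

    import glob
    import pathlib

    INDICATORS = ("gsocket", "gs-netcat", "gsocket.io")  # assumed indicators
    CANDIDATES = (
        glob.glob("/home/*/.bashrc")
        + glob.glob("/var/spool/cron/*")
        + ["/root/.bashrc", "/etc/crontab"]
    )

    for path in CANDIDATES:
        try:
            text = pathlib.Path(path).read_text(errors="ignore").lower()
        except OSError:
            continue
        hits = [i for i in INDICATORS if i in text]
        if hits:
            print(f"{path}: possible GSocket persistence ({', '.join(hits)})")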
It has been determined that the access afforded by GSocket to these target servers is weaponized to deliver PHP files that contain HTML content referencing online gambling services particularly aimed at Indonesian users.
"At the top of each PHP file was PHP code designed to allow only search bots to access the page, but regular site visitors would be redirected to another domain," Johnston said. "The objective behind this is to target users searching for known gambling services, then redirect them to another domain."
Imperva said the redirections lead to "pktoto[.]cc," a known Indonesian gambling site.
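Because the injected PHP cloaks itself this way, a crude check is to request the same page with a crawler User-Agent and a browser User-Agent and compare the behavior (in the reported campaign, regular visitors were bounced toward pktoto[.]cc). A sketch; note the redirect may also happen in JavaScript, which this won't catch:

    import requests

    BOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    USER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

    def looks_cloaked(url):
        bot = requests.get(url, headers={"User-Agent": BOT_UA},
                           allow_redirects=False, timeout=10)
        user = requests.get(url, headers={"User-Agent": USER_UA},
                            allow_redirects=False, timeout=10)
        # Bots get real content; ordinary visitors get bounced elsewhere
        return bot.status_code == 200 and user.status_code in (301, 302)

    print(looks_cloaked("https://example.com/page/"))  # placeholder URL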
The development comes as c/side revealed a widespread malware campaign that has targeted over 5,000 sites globally to create unauthorized administrator accounts, install a malicious plugin from a remote server, and siphon credential data back to it.
The exact initial access vector used to deploy the JavaScript malware on these sites is presently not known. The malware has been codenamed WP3.XYZ in reference to the domain name that's associated with the server used to fetch the plugin and exfiltrate data ("wp3[.]xyz").
To mitigate the attack, WordPress site owners are advised to keep their plugins up to date, block the rogue domain using a firewall, and scan for suspicious admin accounts or plugins and remove them.
The Russian threat actor known as Star Blizzard has been linked to a new spear-phishing campaign that targets victims' WhatsApp accounts, signaling a departure from its longstanding tradecraft in a likely attempt to evade detection.
"Star Blizzard's targets are most commonly related to government or diplomacy (both incumbent and former position holders), defense policy or international relations researchers whose work touches on Russia, and sources of assistance to Ukraine related to the war with Russia," the Microsoft Threat Intelligence team said in a report shared with The Hacker News.
Star Blizzard (formerly SEABORGIUM) is a Russia-linked threat activity cluster known for its credential harvesting campaigns. Active since at least 2012, it's also tracked under the monikers Blue Callisto, BlueCharlie (or TAG-53), Calisto (alternately spelled Callisto), COLDRIVER, Dancing Salome, Gossamer Bear, Iron Frontier, TA446, and UNC4057.
Previously observed attack chains have involved sending spear-phishing emails to targets of interest, usually from a Proton account, attaching documents embedding malicious links that redirect to an Evilginx-powered page that's capable of harvesting credentials and two-factor authentication (2FA) codes via an adversary-in-the-middle (AiTM) attack.
Star Blizzard has also been linked to the use of email marketing platforms like HubSpot and MailerLite to conceal the true email sender addresses and obviate the need for including actor-controlled domain infrastructure in email messages.
Late last year, Microsoft and the U.S. Department of Justice (DoJ) announced the seizure of more than 180 domains that were used by the threat actor to target journalists, think tanks, and non-governmental organizations (NGOs) between January 2023 and August 2024.
The tech giant assessed that public disclosure of its activities likely prompted the hacking crew to switch up its tactics by compromising WhatsApp accounts. That said, the campaign appears to have been limited, and it wound down at the end of November 2024.
"The targets primarily belong to the government and diplomacy sectors, including both current and former officials," Sherrod DeGrippo, director of threat intelligence strategy at Microsoft, told The Hacker News.
"Additionally, the targets encompass individuals involved in defense policy, researchers in international relations focusing on Russia, and those providing assistance to Ukraine in relation to the war with Russia."
It all starts with a spear-phishing email that purports to be from a U.S. government official to lend it a veneer of legitimacy and increase the likelihood that the victim would engage with them.
The message contains a quick response (QR) code that urges the recipients to join a supposed WhatsApp group on "the latest non-governmental initiatives aimed at supporting Ukraine NGOs." The code, however, is deliberately broken so as to trigger a response from the victim.
Should the email recipient reply, Star Blizzard sends a second message, asking them to click on a t[.]ly shortened link to join the WhatsApp group, while apologizing for the inconvenience caused.
"When this link is followed, the target is redirected to a web page asking them to scan a QR code to join the group," Microsoft explained. "However, this QR code is actually used by WhatsApp to connect an account to a linked device and/or the WhatsApp Web portal."
In the event the target follows the instructions on the site ("aerofluidthermo[.]org"), the approach allows the threat actor to gain unauthorized access to their WhatsApp messages and even exfiltrate the data via browser add-ons.
Individuals belonging to sectors targeted by Star Blizzard are advised to exercise caution when handling emails that contain links to external sources.
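A small habit that helps here: expand a shortened link and inspect where it lands before opening it. A sketch (the short link below is a placeholder; some servers ignore HEAD requests, in which case swap in a GET with stream=True):

    import requests

    def expand(url, max_hops=10):
        # Follow Location headers one hop at a time, without fetching bodies
        hops = [url]
        for _ in range(max_hops):
            resp = requests.head(url, allow_redirects=False, timeout=10)
            nxt = resp.headers.get("Location")
            if not nxt:
                break
            url = requests.compat.urljoin(url, nxt)
            hops.append(url)
        return hops

    for hop in expand("https://t.ly/abc123"):  # placeholder short link
        print(hop)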
The campaign "marks a break in long-standing Star Blizzard TTPs and highlights the threat actor's tenacity in continuing spear-phishing campaigns to gain access to sensitive information even in the face of repeated degradations of its operations."
With the price increases in VMware licensing, you might want to get more out of your existing hardware and your existing virtual infrastructure. By upgrading to the latest vSphere and ESXi 8.0 U3, you’ll unlock an interesting feature that we recently blogged about – Exploring the “Memory Tiering over NVMe” Feature in vSphere 8.0 Update 3.
While some of you are already considering leaving (or have already left) the VMware ecosystem for some alternative, most of you might still be in a phase of reflection, weighing what to do next.
Adding more DRAM is possible, but it’s costly. In fact, DRAM can account for 50 to 80 percent of the cost of a server. By adding NVMe to the equation, you save a lot of money while increasing the server’s RAM capacity and consolidation ratio.
The testing setup VMware has used
VMware has done its testing with industry-standard tools: Login VSI for VDI testing, HammerDB for SQL Server testing, and the DVD Store benchmark. They tested ESXi 8 U3 with the Memory Tiering feature activated and configured a 1:1 DRAM-to-NVMe ratio, meaning that for every gigabyte of DRAM, you get one gigabyte of NVMe.
As you can see here, the total available system memory is 1 terabyte: 512 gigabytes comes from DRAM, which is the faster tier, and 512 gigabytes comes from the NVMe tier. Simple, right?
Behind the scenes, ESXi divides that memory into two tiers. Tier 1 is DRAM, the faster memory; Tier 2 is cheaper, bigger, and typically slower, such as NVMe SSDs and CXL memory devices.
Screenshot from VMware below.
1 TB of total memory composed of 512 GB of DRAM and 512 GB of NVMe SSD
How does the system work?
Inside the system, there is a very intelligent page classification scheme that is completely new. It figures out which memory pages are hot and which of the memory pages are cold in a specific time window.
For example, which pages were accessed most frequently in the past minute or so? The most frequently accessed pages within a given time window are classified as hot pages, and you obviously want hot pages in the faster tier (DRAM), while the colder pages go to the slower tier. Once the VMs are running, you will also see that in typical environments, the workload characteristics change over time.
VMware intelligent page classification mechanism in Memory Tiering Feature
The system keeps monitoring and refining this classification, running in a feedback loop. Your system goes through phase changes: sometimes you’re running database transactions, sometimes you’re not running anything. The system automatically adjusts the tier sizes, so you can really optimize your memory usage.
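To make the loop concrete, here is a toy illustration of windowed hot/cold classification with decaying counters. It is purely illustrative and not how ESXi actually implements the scheme:

    import random
    from collections import defaultdict

    FAST_TIER_PAGES = 4   # toy capacity of the DRAM tier
    DECAY = 0.5           # older accesses count for less each window
    heat = defaultdict(float)

    for window in range(5):
        for page in list(heat):
            heat[page] *= DECAY              # fade old activity
        for _ in range(20):
            heat[random.randint(0, 9)] += 1  # record this window's accesses

        ranked = sorted(heat, key=heat.get, reverse=True)
        hot, cold = ranked[:FAST_TIER_PAGES], ranked[FAST_TIER_PAGES:]
        print(f"window {window}: DRAM tier {hot}, NVMe tier {cold}")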
What VMware does with memory tiering is unlock this additional memory by using NVMe as the second tier. If you look at average VM sizes, average host sizes, and what their consumption and activeness look like, activeness is typically quite low.
If your applications are memory bound, you hit the memory capacity ceiling first and leave some of the cores idle.
Performance Testing by VMware
The VDI – The first result is the baseline, using only DRAM (1 terabyte of memory). You get an end-user score of about 8.5, which is an excellent score. Now, if you enable memory tiering, you can run twice as many VMs.
In the baseline, you can run 120 VMs without performance degradation. With memory tiering enabled, you can run 240 VMs, and the end-user score is not degraded.
You’ll get a 2x increase in VM density with less than 3% performance loss.
The SQL Server – The SQL Server test showed a 2x increase in VM density with less than 10% performance loss.
SQL Server density increase comparison graph
Oracle DB – The Oracle DB test showed a 2x increase in VM density with less than 5% performance loss.
Oracle DB density increase using the DVD Store benchmark
You can watch the video where Todd Muirhead talks with Qasim Ali, both from VMware. They show all the graphs, performance degradation (very small compared to benefits) and the possibilities.
They also show where you can monitor the memory consumption and activeness.
Note: You don’t have to use a 1:1 ratio. In my lab, I’ve tested other scenarios where I allocated more NVMe SSD capacity than DRAM.
However, VMware’s recommendation is not to configure more NVMe capacity than you have DRAM. But hey, this is a lab.
With only 16 GB of RAM, we added a 30 GB NVMe SSD tier that is also used as RAM, giving us 46 GB of overall RAM capacity on an ESXi host that normally runs with only 16 GB of RAM. You can perfectly test this within a VMware Workstation nested lab.
Look at this screenshot.
Tier 0 and Tier 1 from my nested lab
Final Words
Memory Tiering in vSphere 8 U3 (and VCF 9) is a true game changer: you can increase server density by adding some NVMe SSDs to each of the hosts within your cluster, while keeping excellent performance. The configuration in ESXi 8.0 U3 needs some CLI, which I documented in my previous blog post; however, I assume that the next version of vSphere will have a UI for that particular feature.
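For reference, the CLI side looks roughly like this (reproduced from my earlier post from memory; verify the device path and percentage against VMware's documentation before running anything):

    # Enable the memory tiering kernel setting, then reboot the host
    esxcli system settings kernel set -s MemoryTiering -v TRUE

    # Claim an NVMe device as a tier device (use your own disk path)
    esxcli system tierdevice create -d /vmfs/devices/disks/<your-nvme-device>

    # Size the NVMe tier as a percentage of DRAM (100 = 1:1 ratio)
    esxcli system settings advanced set -o /Mem/TierNvmePct -i 100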
You can see that by increasing your server density significantly, you only get a slight increase in CPU overhead. In the video, the worst-case scenario they mentioned was 10%, while the usual performance hit at the CPU level was 2-5%.
Memory tiering helps to reduce TCO costs, optimizes memory usage, and of course, improves VM consolidation (up to 2x with 1:1 ratio across different workloads and applications).
This is extremely encouraging for existing infrastructure running VMware environments. This feature can probably give you some additional time for reflection, to see whether you’ll continue with VMware or start transitioning to another hypervisor platform.
The reason for a transition is almost always cost driven. On the other hand, you must weigh all the factors, including time for V2V migration, reconfiguration, training, etc. A transition might well not start until sometime next year.
As cyber threats grow more sophisticated, security teams need the right tools powered by generative AI (GenAI) to detect and protect at machine speed. At SentinelOne, we’re already making this future a reality with Purple AI, equipping security teams with the AI-powered tools to help stay ahead of attacks.
Purple AI is the industry’s most advanced AI security analyst: it streamlines threat hunting, query writing, and investigations, and it navigates complex data schemas within SentinelOne and across partner log sources. By optimizing workflows, Purple enables your team to focus on solving problems rather than managing processes.
Today, we’re excited to announce two important new features in Purple AI that deliver the next step in AI security innovation to accelerate efficiencies for security teams:
Expanded Third-Party Log Source Support – Enabling SOC teams to detect threats earlier with expanded data visibility and a unified data stream across the enterprise.
Early Access to Multilingual Question Support – Equipping global security teams and organizations to hunt, investigate, and respond faster in their preferred language.
Partner Log Sources | Unlock Deeper Data Visibility for Faster, Smarter Responses
Organizations rely on diverse data sources to build a comprehensive defense. However, having access to more data often comes with the challenge of learning new data schemas and mastering complex query languages.
Purple AI simplifies the data problem for security teams. It’s the only GenAI security analyst in the industry built on normalized data on ingest via the Open Cybersecurity Schema Framework (OCSF) to deliver instant querying of native and third-party data, scalability across expanding data sources, and normalized data views for faster investigations.
We’re helping security teams further harness the power of data and AI by expanding Purple’s supported third-party log sources to include:
Palo Alto Networks Firewall
Zscaler Internet Access
Proofpoint TAP
Microsoft Office 365
Fortinet FortiGate
Okta
With Purple AI, your SOC can leverage this expanded data to uncover threats faster, gain broader visibility, and focus on making critical decisions. Purple AI takes the complexity out of querying, ensuring that more data doesn’t slow you down but, instead, empowers faster and more efficient security processes.
Broaden Your Visibility
Starting today, security teams can leverage the full breadth of Purple AI’s threat hunting and investigation capabilities to query across an expansive list of native and third party sources. Security analysts can ask questions like:
“Show how many users accessed cloud applications from Zscaler Internet Access logs from Dec 21-23 2024,” or “Show user accounts in Okta with the highest number of failed login attempts today.”
Alternatively, use a Quickstart question to begin a conversation with Purple AI. Receive a precise events table tailored to the new data sources along with relevant PowerQuery syntax. Users can also leverage contextual follow-ups to uncover deeper insights across expanded datasets without missing a beat.
By integrating data from these widely used platforms, Purple AI expands its role as a trusted partner for SOC teams, helping you stay ahead of evolving threats while reinforcing the tools and processes you rely on every day. This is more than just accessing data. This is about making your data work smarter and helping your team stay ahead in the game.
Multilingual Questions | Empowering Global SOCs with the Power of Purple
Cybersecurity shouldn’t be limited by borders or languages. While Purple AI has already empowered countless global security teams, we recognize the importance of equipping security teams with access to the best AI security tools in their preferred language.
That’s why we’re thrilled to introduce early access to multilingual question support, available at no additional cost to all Purple AI customers. Purple AI is now more accessible than ever before, expanding its reach to organizations worldwide.
Key Benefits of Multilingual Support
Breaking Language Barriers – Ask Purple AI your questions in any supported language and it will translate them into the necessary PowerQuery syntax to deliver accurate results.
Fostering Worldwide Collaboration – Multilingual support simplifies communication by enabling on-the-fly translations. Investigation steps are saved in the Notebook with translated summaries, making it easier to share findings with international teams or stakeholders.
Global Mission, Local Access – By making Purple AI available in more languages, we’re taking steps toward ensuring that every organization, regardless of geography or language, has access to world-class security tools.
Global Threat Hunting Simplified
Multilingual support in Purple AI empowers security teams to respond to threats with speed, access, and precision, regardless of language preference. We’re helping SOC teams break down borders, fostering stronger collaboration, and ensuring that every organization, no matter where they are, has access to the tools they need to stay secure.
Using this feature is as simple as adding a query in your preferred language. For example:
Ask in Spanish: “¿Muestra cuántos usuarios accedieron a aplicaciones en la nube desde los registros de acceso a Internet de Zscaler del 21 al 23 de diciembre de 2024?”
Ask in Japanese: “2024年12月21日から23日までのZscalerインターネットアクセスログからクラウドアプリケーションにアクセスしたユーザー数を表示します。”
Supported languages include Spanish, French, German, Italian, Dutch, Arabic, Japanese, Korean, Thai, Malay, Indonesian, and more. Just ask a question in the language of your choice, and we’ll take care of the rest by translating your query, interpreting the data, and delivering precise insights. While the resulting summaries and follow-ups are currently presented in English by default, you can simply ask Purple AI to provide translated results by adding a request like “Tell me in Japanese”, or the equivalent in your preferred language.
Bringing It All Together
Whether by broadening visibility with expanded log source support or making security accessible to a global audience with multilingual features, our mission is clear: To safeguard your data by empowering every analyst to detect earlier, respond faster, and stay ahead of attacks.
With these updates, we’re building a future where collaboration and inclusivity drive innovation in cybersecurity. Together, we can outpace threats and create a safer, more connected world. Stay vigilant, stay connected, and stay secure.
Ready to explore the new features?
Existing Singularity Complete and Purple AI customers can start exploring these capabilities today. Open Purple AI, type your first query, and see the results in action. If you have questions or need assistance, reach out to our support team.
New to Purple AI? Learn how Purple AI can transform your SOC’s threat-hunting capabilities. Contact us or request a demo to get started.
Hey there, fellow engineers and tech enthusiasts! I’m excited to share one of my favorite strategies for modern software delivery: combining Docker and Jenkins to power up your CI/CD pipelines.
Throughout my career as a Senior DevOps Engineer and Docker Captain, I’ve found that these two tools can drastically streamline releases, reduce environment-related headaches, and give teams the confidence they need to ship faster.
In this post, I’ll walk you through what Docker and Jenkins are, why they pair perfectly, and how you can build and maintain efficient pipelines. My goal is to help you feel right at home when automating your workflows. Let’s dive in.
Brief overview of continuous integration and continuous delivery
Continuous integration (CI) and continuous delivery (CD) are key pillars of modern development. If you’re new to these concepts, here’s a quick rundown:
Continuous integration (CI): Developers frequently commit their code to a shared repository, triggering automated builds and tests. This practice prevents conflicts and ensures defects are caught early.
Continuous delivery (CD): With CI in place, organizations can then confidently automate releases. That means shorter release cycles, fewer surprises, and the ability to roll back changes quickly if needed.
Leveraging CI/CD can dramatically improve your team’s velocity and quality. Once you experience the benefits of dependable, streamlined pipelines, there’s no going back.
Why combine Docker and Jenkins for CI/CD?
Docker allows you to containerize your applications, creating consistent environments across development, testing, and production. Jenkins, on the other hand, helps you automate tasks such as building, testing, and deploying your code. I like to think of Jenkins as the tireless “assembly line worker,” while Docker provides identical “containers” to ensure consistency throughout your project’s life cycle.
Here’s why blending these tools is so powerful:
Consistent environments: Docker containers guarantee uniformity from a developer’s laptop all the way to production. This consistency reduces errors and eliminates the dreaded “works on my machine” excuse.
Speedy deployments and rollbacks: Docker images are lightweight. You can ship or revert changes at the drop of a hat — perfect for short delivery process cycles where minimal downtime is crucial.
Scalability: Need to run 1,000 tests in parallel or support multiple teams working on microservices? No problem. Spin up multiple Docker containers whenever you need more build agents, and let Jenkins orchestrate everything with Jenkins pipelines.
For a DevOps junkie like me, this synergy between Jenkins and Docker is a dream come true.
Setting up your CI/CD pipeline with Docker and Jenkins
Before you roll up your sleeves, let’s cover the essentials you’ll need:
Docker Desktop (or a Docker server environment) installed and running. You can get Docker for various operating systems.
Jenkins downloaded from Docker Hub or installed on your machine. These days, you’ll want jenkins/jenkins:lts (the long-term support image) rather than the deprecated library/jenkins image.
Proper permissions for Docker commands and the ability to manage Docker images on your system.
A GitHub or similar code repository where you can store your Jenkins pipeline configuration (optional, but recommended).
Pro tip: If you’re planning a production setup, consider a container orchestration platform like Kubernetes. This approach simplifies scaling Jenkins, updating Jenkins, and managing additional Docker servers for heavier workloads.
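If you're starting from scratch, running Jenkins itself as a container is the fastest route. A typical invocation (the volume name is arbitrary; just make sure /var/jenkins_home is persisted somewhere, or you'll lose your configuration):

    docker run -d --name jenkins \
      -p 8080:8080 -p 50000:50000 \
      -v jenkins_home:/var/jenkins_home \
      jenkins/jenkins:lts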
Building a robust CI/CD pipeline with Docker and Jenkins
After prepping your environment, it’s time to create your first Jenkins-Docker pipeline. Below, I’ll walk you through common steps for a typical pipeline — feel free to modify them to fit your stack.
1. Install necessary Jenkins plugins
Jenkins offers countless plugins, so let’s start with a few that make configuring Jenkins with Docker easier:
Docker Pipeline Plugin
Docker
CloudBees Docker Build and Publish
How to install plugins:
Open Manage Jenkins > Manage Plugins in Jenkins.
Click the Available tab, search for the plugins listed above, and install them.
Pro tip (advanced approach): If you’re aiming for a fully infrastructure-as-code setup, consider using Jenkins configuration as code (JCasC). With JCasC, you can declare all your Jenkins settings — including plugins, credentials, and pipeline definitions — in a YAML file. This means your entire Jenkins configuration is version-controlled and reproducible, making it effortless to spin up fresh Jenkins instances or apply consistent settings across multiple environments. It’s especially handy for large teams looking to manage Jenkins at scale.
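For a taste of what that looks like, here is a tiny jenkins.yaml fragment (illustrative only; the key names follow the JCasC plugin's documented schema, and the URL is a placeholder):

    jenkins:
      systemMessage: "Managed by configuration as code"
      numExecutors: 2
    unclassified:
      location:
        url: "https://jenkins.example.com/"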
2. Set up a pipeline job
In this step, you’ll define your pipeline. A Jenkins “pipeline” job uses a Jenkinsfile (stored in your code repository) to specify the steps, stages, and environment requirements.
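A minimal Jenkinsfile for such a job might look like this sketch (the image name and the test command are placeholders for your own stack):

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'docker build -t myapp:${BUILD_NUMBER} .'
                }
            }
            stage('Test') {
                steps {
                    // Run the test suite inside the image we just built
                    sh 'docker run --rm myapp:${BUILD_NUMBER} pytest'
                }
            }
        }
    }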
3. Configure build triggers
Now that your pipeline is set up, you’ll want Jenkins to run it automatically:
Webhook triggers: Configure your source control (e.g., GitHub) to send a webhook whenever code is pushed. Jenkins will kick off a build immediately.
Poll SCM: Jenkins periodically checks your repo for new commits and starts a build if it detects changes.
Which trigger method should you choose?
Webhook triggers are ideal if you want near real-time builds. As soon as you push to your repo, Jenkins is notified, and a new build starts almost instantly. This approach is typically more efficient, as Jenkins doesn’t have to continuously check your repository for updates. However, it requires that your source control system and network environment support webhooks.
Poll SCM is useful if your environment can’t support incoming webhooks — for example, if you’re behind a corporate firewall or your repository isn’t configured for outbound hooks. In that case, Jenkins routinely checks for new commits on a schedule you define (e.g., every five minutes), which can add a small delay and extra overhead but may simplify setup in locked-down environments.
Personal experience: I love webhook triggers because they keep everything as close to real-time as possible. Polling works fine if webhooks aren’t feasible, but you’ll see a slight delay between code pushes and build starts. It can also generate extra network traffic if your polling interval is too frequent.
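For reference, polling is declared directly in the Jenkinsfile, while webhooks live on the repository side (for GitHub, a webhook pointing at your controller's /github-webhook/ endpoint, assuming the GitHub plugin is installed). A sketch of the polling variant:

    pipeline {
        agent any
        triggers {
            // Check for new commits roughly every five minutes
            pollSCM('H/5 * * * *')
        }
        stages {
            stage('Build') {
                steps { echo 'triggered by SCM change' }
            }
        }
    }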
4. Build, test, and deploy with Docker containers
Here comes the fun part — automating the entire cycle from build to deploy:
Build Docker image: After pulling the code, Jenkins calls docker.build to create a new image.
Run tests: Automated unit or acceptance tests run inside a container spun up from that image, ensuring consistency.
Push to registry: Assuming tests pass, Jenkins pushes the tagged image to your Docker registry — this could be Docker Hub or a private registry.
Deploy: Optionally, Jenkins can then deploy the image to a remote server or a container orchestrator (Kubernetes, etc.).
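With the Docker Pipeline plugin, those four steps map to a handful of calls. A sketch (the registry URL and the 'dockerhub-creds' credentials ID are assumptions; substitute your own):

    pipeline {
        agent any
        stages {
            stage('Build, Test, and Push') {
                steps {
                    script {
                        def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
                        // Run tests inside a container from the new image
                        image.inside {
                            sh 'pytest'
                        }
                        // Push only reached if the tests above passed
                        docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
                            image.push()
                            image.push('latest')
                        }
                    }
                }
            }
        }
    }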
This streamlined approach ensures every step — build, test, deploy — lives in one cohesive pipeline, preventing those “where’d that step go?” mysteries.
5. Optimize and maintain your pipeline
Once your pipeline is up and running, here are a few maintenance tips and enhancements to keep everything running smoothly:
Clean up images: Routine cleanup of Docker images can reclaim space and reduce clutter.
Security updates: Stay on top of updates for Docker, Jenkins, and any plugins. Applying patches promptly helps protect your CI/CD environment from vulnerabilities.
Resource monitoring: Ensure Jenkins nodes have enough memory, CPU, and disk space for builds. Overworked nodes can slow down your pipeline and cause intermittent failures.
Pro tip: In large projects, consider separating your build agents from your Jenkins controller by running them in ephemeral Docker containers (also known as Jenkins agents). If an agent goes down or becomes stale, you can quickly spin up a fresh one — ensuring a clean, consistent environment for every build and reducing the load on your main Jenkins server.
Why use Declarative Pipelines for CI/CD?
Although Jenkins supports multiple pipeline syntaxes, Declarative Pipelines stand out for their clarity and resource-friendly design. Here’s why:
Simplified, opinionated syntax: Everything is wrapped in a single pipeline { ... } block, which minimizes “scripting sprawl.” It’s perfect for teams who want a quick path to best practices without diving deeply into Groovy specifics.
Easier resource allocation: By specifying an agent at either the pipeline level or within each stage, you can offload heavyweight tasks (builds, tests) onto separate worker nodes or Docker containers. This approach helps prevent your main Jenkins controller from becoming overloaded.
Parallelization and matrix builds: If you need to run multiple test suites or support various OS/browser combinations, Declarative Pipelines make it straightforward to define parallel stages or set up a matrix build. This tactic is incredibly handy for microservices or large test suites requiring different environments in parallel.
Built-in “escape hatch”: Need advanced Groovy features? Just drop into a script block. This lets you access Scripted Pipeline capabilities for niche cases, while still enjoying Declarative’s streamlined structure most of the time.
Cleaner parameterization: Want to let users pick which tests to run or which Docker image to use? The parameters directive makes your pipeline more flexible. A single Jenkinsfile can handle multiple scenarios — like unit vs. integration testing — without duplicating stages.
Declarative Pipeline examples
Below are sample pipelines to illustrate how declarative syntax can simplify resource allocation and keep your Jenkins controller healthy.
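As a sketch of the ideas above, here is one such pipeline using per-stage Docker agents plus a parallel test stage (image names and commands are placeholders):

    pipeline {
        agent none   // keep the controller free; each stage requests its own agent
        stages {
            stage('Build') {
                agent { docker { image 'node:20-alpine' } }
                steps { sh 'npm ci && npm run build' }
            }
            stage('Tests') {
                parallel {
                    stage('Unit') {
                        agent { docker { image 'node:20-alpine' } }
                        steps { sh 'npm test' }
                    }
                    stage('Lint') {
                        agent { docker { image 'node:20-alpine' } }
                        steps { sh 'npm run lint' }
                    }
                }
            }
        }
    }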
Using Declarative Pipelines helps ensure your CI/CD setup is easier to maintain, scalable, and secure. By properly configuring agents — whether Docker-based or label-based — you can spread workloads across multiple worker nodes, minimize resource contention, and keep your Jenkins controller humming along happily.
Best practices for CI/CD with Docker and Jenkins
Ready to supercharge your setup? Here are a few tried-and-true habits I’ve cultivated:
Leverage Docker’s layer caching: Optimize your Dockerfiles so stable (less frequently changing) layers appear early. This drastically reduces build times (see the Dockerfile sketch after this list).
Run tests in parallel: Jenkins can run multiple containers for different services or microservices, letting you test them side by side. Declarative Pipelines make it easy to define parallel stages, each on its own agent.
Shift left on security: Integrate security checks early in the pipeline. Tools like Docker Scout let you scan images for vulnerabilities, while Jenkins plugins can enforce compliance policies. Don’t wait until production to discover issues.
Optimize resource allocation: Properly configure CPU and memory limits for Jenkins and Docker containers to avoid resource hogging. If you’re scaling Jenkins, distribute builds across multiple worker nodes or ephemeral agents for maximum efficiency.
Configuration management: Store Jenkins jobs, pipeline definitions, and plugin configurations in source control. Tools like Jenkins Configuration as Code simplify versioning and replicating your setup across multiple Docker servers.
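On the layer-caching point above, the key is ordering: install dependencies before copying the full source tree, so day-to-day code changes don't invalidate the dependency layer. A sketch for a Python app:

    FROM python:3.12-slim
    WORKDIR /app

    # Stable layer: rebuilt only when dependencies change
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Volatile layer: rebuilt on every source change
    COPY . .
    CMD ["python", "app.py"]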
With these strategies — plus a healthy dose of Declarative Pipelines — you’ll have a lean, high-octane CI/CD pipeline that’s easier to maintain and evolve.
Troubleshooting Docker and Jenkins Pipelines
Even the best systems hit a snag now and then. Here are a few hurdles I’ve seen (and conquered):
Handling environment variability: Keep Docker and Jenkins versions synced across different nodes. If multiple Jenkins nodes are in play, standardize Docker versions to avoid random build failures.
Troubleshooting build failures: Use docker logs -f <container-id> to see exactly what happened inside a container. Often, the logs reveal missing dependencies or misconfigured environment variables.
Networking challenges: If your containers need to talk to each other — especially across multiple hosts — make sure you configure Docker networks or an orchestration platform properly. Read Docker’s networking documentation for details, and check out the Jenkins diagnosing issues guide for more troubleshooting tips.
Conclusion
Pairing Docker and Jenkins offers a nimble, robust approach to CI/CD. Docker locks down consistent environments and lightning-fast rollouts, while Jenkins automates key tasks like building, testing, and pushing your changes to production. When these two are in harmony, you can expect shorter release cycles, fewer integration headaches, and more time to focus on developing awesome features.
A healthy pipeline also means your team can respond quickly to user feedback and confidently roll out updates — two crucial ingredients for any successful software project. And if you’re concerned about security, there are plenty of tools and best practices to keep your applications safe.
I hope this guide helps you build (and maintain) a high-octane CI/CD pipeline that your team will love. If you have questions or need a hand, feel free to reach out on the community forums, join the conversation on Slack, or open a ticket on GitHub issues. You’ll find plenty of fellow Docker and Jenkins enthusiasts who are happy to help.
Without continuous improvement in software security, you’re not standing still — you’re walking backward into oncoming traffic. Attack vectors multiply, evolve, and look for the weakest link in your software supply chain daily.
Cybersecurity Ventures forecasts that the global cost of software supply chain attacks will reach nearly $138 billion by 2031, up from $60 billion in 2025 and $46 billion in 2023. A single overlooked vulnerability isn’t just a flaw; it’s an open invitation for compromise, potentially threatening your entire system. The cost of a breach doesn’t stop with your software — it extends to your reputation and customer trust, which are far harder to rebuild.
In this post, we’ll explore how Docker’s tools (Docker Scout, Docker Hub, Hardened Docker Desktop, and Image Access Management) provide built-in security, governance, and visibility, helping your team innovate faster while staying protected.
Securing the supply chain
Your software supply chain isn’t just an automated sequence of tools and processes. It’s a promise — to your customers, team, and future. Promises are fragile. The cracks can start to show with every dependency, third-party integration, and production push. Tools like Image Access Management help protect your supply chain by providing granular control over who can pull, share, or modify images, ensuring only trusted team members access sensitive assets. Meanwhile, Hardened Docker Desktop ensures developers work in a secure, tamper-proof environment, giving your team confidence that development is aligned with enterprise security standards. The solution isn’t to slow down or second-guess; it’s to continuously improve on securing your software supply chain, such as automated vulnerability scans and trusted content from Docker Hub.
A breach is more than a line item in the budget. Customers ask themselves, “If they couldn’t protect this, what else can’t they protect?” Downtime halts innovation as fines for compliance failures and engineering efforts re-route to forensic security analysis. The brand you spent years perfecting could be reduced to a cautionary tale. Regardless of how innovative your product is, it’s not trusted if it’s not secure.
Organizations must stay prepared by regularly updating their security measures and embracing new technologies to outpace evolving threats. As highlighted in the article Rising Tide of Software Supply Chain Attacks: An Urgent Problem, software supply chain attacks are increasingly targeting critical points in development workflows, such as third-party dependencies and build environments. High-profile incidents like the SolarWinds attack have demonstrated how adversaries exploit trust relationships and weaknesses in widely used components to cause widespread damage.
Preventing security problems from the start
Preventing attacks like the SolarWinds breach requires prioritizing code integrity and adopting secure software development practices. Tools like Docker Scout seamlessly integrate security into developers’ workflows, enabling proactive identification of vulnerabilities in dependencies and ensuring that trusted components form the backbone of your applications.
Docker Hub’s trusted content and Docker Scout’s policy evaluation features help ensure that your organization uses compliant and secure images. Docker Official Images (DOI) provide a robust foundation for deployments, mitigating risks from untrusted components. To extend this security foundation, Image Access Management allows teams to enforce image-sharing policies and restrict access to sensitive components, preventing accidental exposure or misuse. For local development, Hardened Docker Desktop ensures that developers operate in a secure, enterprise-grade environment, minimizing risks from the outset. This combination of tools enables your engineering team to put out fires and, more importantly, prevent them from starting in the first place.
Building guardrails
Governance isn’t a roadblock; it’s the blueprint for progress. The problem is that some companies treat security like a fire extinguisher — something you grab when things go wrong. That is not a viable strategy in the long run. Real innovation happens when security guardrails are so well-designed that they feel like open highways, empowering teams to move fast without compromising safety.
A structured policy lifecycle loop — mapping connections, planning changes, deploying cleanly, and retiring the dead weight — turns governance into your competitive edge. Automate it, and you’re not just checking boxes; you’re giving your teams the freedom to move fast and trust the road ahead.
Continuous improvement on security policy management doesn’t have to feel like a bureaucratic chokehold. Docker provides a streamlined workflow to secure your software supply chain effectively. Docker Scout integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and detailed reports and recommendations to help teams address issues before code reaches production.
With the introduction of Docker Health Scores — a security grading system for container images — teams gain a clear and actionable snapshot of their image security posture. These scores empower developers to prioritize remediation efforts and continuously improve their software’s security from code to production.
Keeping up with continuous improvement
Security threats aren’t slowing down. New attack vectors and vulnerabilities grow every day. With cybercrime costs expected to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028, organizations face a critical choice: adapt to this evolving threat landscape or risk falling behind, exposing themselves to escalating costs and reputational damage. Continuous improvement in software security isn’t a luxury. Building and maintaining trust with your customers is essential so they know that every fresh deployment is better than the one that came before. Otherwise, expect high costs due to imminent software supply chain attacks.
Best practices for securing the software supply chain involve integrating vulnerability scans early in the development lifecycle, leveraging verified content from trusted sources, and implementing governance policies to ensure consistent compliance standards without manual intervention. Continuous monitoring of vulnerabilities and enforcing runtime policies help maintain security at scale, adapting to the dynamic nature of modern software ecosystems.
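Concretely, "integrating vulnerability scans early" can start with a couple of Docker Scout commands in CI (flags shown are from current Docker Scout releases; check docker scout --help on your version):

    # Quick security posture summary for an image
    docker scout quickview myorg/myapp:latest

    # List CVEs, limited to the ones worth failing a build over
    docker scout cves --only-severity critical,high myorg/myapp:latest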
Start today
Securing your software supply chain is a journey of continuous improvement. With Docker’s tools, you can empower your teams to build and deploy software securely, ensuring vulnerabilities are addressed before they become liabilities.
Don’t wait until vulnerabilities turn into liabilities. Explore Docker Hub, Docker Scout, Hardened Docker Desktop, and Image Access Management to embed security into every stage of development. From granular control over image access to tamper-proof local environments, Docker’s suite of tools helps safeguard your innovation, protect your reputation, and empower your organization to thrive in a dynamic ecosystem.
Learn more
Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.
Docker Health Scores: A security grading system for container images, offering teams clear insights into their image security posture.
Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.
Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for your containerized applications.
Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.
Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.
Cybersecurity researchers have detailed an attack that involved a threat actor utilizing a Python-based backdoor to maintain persistent access to compromised endpoints and then leveraged this access to deploy the RansomHub ransomware throughout the target network.
According to GuidePoint Security, initial access is said to have been facilitated by means of a JavaScript malware downloader named SocGholish (aka FakeUpdates), which is known to be distributed via drive-by campaigns that trick unsuspecting users into downloading bogus web browser updates.
Such attacks commonly involve the use of legitimate-but-infected websites that victims are redirected to from search engine results using black hat Search Engine Optimization (SEO) techniques. Upon execution, SocGholish establishes contact with an attacker-controlled server to retrieve secondary payloads.
As recently as last year, SocGholish campaigns have targeted WordPress sites relying on outdated versions of popular SEO plugins such as Yoast (CVE-2024-4984, CVSS score: 6.4) and Rank Math PRO (CVE-2024-3665, CVSS score: 6.4) for initial access.
In the incident investigated by GuidePoint Security, the Python backdoor was found to be dropped about 20 minutes after the initial infection via SocGholish. The threat actor then proceeded to deliver the backdoor to other machines located in the same network during lateral movement via RDP sessions.
"Functionally, the script is a reverse proxy that connects to a hard-coded IP address. Once the script has passed the initial command-and-control (C2) handshake, it establishes a tunnel that is heavily based on the SOCKS5 protocol," security researcher Andrew Nelson said.
"This tunnel allows the threat actor to move laterally in the compromised network using the victim system as a proxy."
The Python script, an earlier version of which was documented by ReliaQuest in February 2024, has been detected in the wild since early December 2023, while undergoing "surface-level changes" aimed at improving the obfuscation methods used to avoid detection.
GuidePoint also noted that the decoded script is both polished and well-written, indicating that the malware author is either meticulous about maintaining highly readable and testable Python code or is relying on artificial intelligence (AI) tools to assist with the coding task.
"With the exception of local variable obfuscation, the code is broken down into distinct classes with highly descriptive method names and variables," Nelson added. "Each method also has a high degree of error handling and verbose debug messages."
The Python-based backdoor is far from the only precursor detected in ransomware attacks. As highlighted by Halcyon earlier this month, some of the other tools deployed prior to ransomware deployment include those responsible for -
Disabling Endpoint Detection and Response (EDR) solutions using EDRSilencer and Backstab
Stealing credentials using LaZagne
Compromising email accounts by brute-forcing credentials using MailBruter
Maintaining stealthy access and delivering additional payloads using Sirefef and Mediyes
Ransomware campaigns have also been observed targeting Amazon S3 buckets by leveraging Amazon Web Services' Server-Side Encryption with Customer Provided Keys (SSE-C) to encrypt victim data. The activity has been attributed to a threat actor dubbed Codefinger.
Besides preventing recovery without their generated key, the attacks employ urgent ransom tactics wherein the files are marked for deletion within seven days via the S3 Object Lifecycle Management API to pressure victims into paying up.
"Threat actor Codefinger abuses publicly disclosed AWS keys with permissions to write and read S3 objects," Halcyon said. "By utilizing AWS native services, they achieve encryption in a way that is both secure and unrecoverable without their cooperation."
The development comes as SlashNext said it has witnessed a surge in "rapid-fire" phishing campaigns mimicking the Black Basta ransomware crew's email bombing technique to flood victims' inboxes with over 1,100 legitimate messages related to newsletters or payment notices.
"Then, when people feel overwhelmed, the attackers swoop in via phone calls or Microsoft Teams messages, posing as company tech support with a simple fix," the company said.
"They speak with confidence to gain trust, directing users to install remote-access software like TeamViewer or AnyDesk. Once that software is on a device, attackers slip in quietly. From there, they can spread harmful programs or sneak into other areas of the network, clearing a path straight to sensitive data."
Ivanti has rolled out security updates to address several security flaws impacting Avalanche, Application Control Engine, and Endpoint Manager (EPM), including four critical bugs that could lead to information disclosure.
All four of the critical security flaws, rated 9.8 out of 10.0 on the CVSS scale, are rooted in EPM and concern absolute path traversal flaws that allow a remote unauthenticated attacker to leak sensitive information. The flaws are listed below -
CVE-2024-10811
CVE-2024-13161
CVE-2024-13160, and
CVE-2024-13159
The shortcomings affect EPM versions 2024 November security update and prior, and 2022 SU6 November security update and prior. They have been addressed in EPM 2024 January-2025 Security Update and EPM 2022 SU6 January-2025 Security Update.
Horizon3.ai security researcher Zach Hanley has been credited with discovering and reporting all vulnerabilities in question.
Also patched by Ivanti are multiple high-severity bugs in Avalanche versions prior to 6.4.7 and Application Control Engine before version 10.14.4.0 that could permit an attacker to bypass authentication, leak sensitive information, and get around the application blocking functionality.
The company said it has no evidence that any of the flaws are being exploited in the wild, and that it has intensified its internal scanning and testing procedures to promptly flag and address security issues.
The development comes as SAP released fixes to resolve two critical vulnerabilities in its NetWeaver ABAP Server and ABAP Platform (CVE-2025-0070 and CVE-2025-0066, CVSS scores: 9.9). The former allows an authenticated attacker to exploit improper authentication checks in order to escalate privileges, while the latter allows access to restricted information due to weak access controls.
"SAP strongly recommends that the customer visits the Support Portal and applies patches on priority to protect their SAP landscape," the company said in its January 2025 bulletin.