Saturday, March 22, 2025

U.S. Treasury Lifts Tornado Cash Sanctions Amid North Korea Money Laundering Probe

Mar 22, 2025 | Ravie Lakshmanan | Financial Security / Cryptocurrency

The U.S. Treasury Department has announced that it's removing sanctions against Tornado Cash, a cryptocurrency mixer service accused of helping the North Korea-linked Lazarus Group launder its ill-gotten proceeds.

"Based on the Administration's review of the novel legal and policy issues raised by use of financial sanctions against financial and commercial activity occurring within evolving technology and legal environments, we have exercised our discretion to remove the economic sanctions against Tornado Cash," the Treasury said in a statement.

In conjunction with the move, over 100 Ethereum (ETH) wallet addresses are also being removed from the Specially Designated Nationals (SDN) list.

The department's Office of Foreign Assets Control (OFAC) added Tornado Cash to its sanctions list in August 2022. It was estimated to have been used to launder more than $7.6 billion worth of virtual assets since its creation in 2019, the Treasury said at the time.

However, the U.S. Court of Appeals for the Fifth Circuit issued a decision in November 2024 reversing a lower court ruling that had upheld the sanctions, holding that OFAC "overstepped its congressionally defined authority" when it sanctioned the cryptocurrency mixer.

This stemmed from the court's view that OFAC's ability to sanction entities does not extend to Tornado Cash because its immutable smart contracts cannot be deemed as "property" under the International Emergency Economic Powers Act (IEEPA).

"With respect to immutable smart contracts, the court reasoned, there is no person in control and therefore 'no party with which to contract,'" according to documents filed by the Treasury Department as part of the case.

The Treasury further said it remains committed to using its powers to combat and disrupt malicious cyber actors who exploit the digital assets ecosystem, and that it will do everything in its capacity to restrict the ability of North Korea to fund its weapons of mass destruction and ballistic missile programs.

"Digital assets present enormous opportunities for innovation and value creation for the American people," said Secretary of the Treasury Scott Bessent.

"Securing the digital asset industry from abuse by North Korea and other illicit actors is essential to establishing U.S. leadership and ensuring that the American people can benefit from financial innovation and inclusion."

In May 2024, a Dutch court sentenced Alexey Pertsev, one of the co-founders of Tornado Cash, to five years and four months in prison. Two of its other co-founders, Roman Storm and Roman Semenov, were indicted by the U.S. Department of Justice in August 2023.




from The Hacker News https://ift.tt/CMVa3YR
via IFTTT

Friday, March 21, 2025

HashiConf 2025 scholarship program now open for applications

We are pleased to once again offer a scholarship program to support members of our community from all backgrounds to attend HashiConf 2025. This community cloud conference will be held September 24-26 in San Francisco, California.

At HashiCorp, we value diversity and strive to foster an inclusive community. Applicants from all backgrounds in technology, cloud computing, and open source communities are welcome. The scholarship program exists to support members of the technical community who may lack the financial sponsorship or means to attend.

Applications will be evaluated according to three criteria:

  • Need: Will a scholarship allow the applicant to attend when they otherwise could not?
  • Value: Would the applicant's experience and interests enable them to receive value from attending HashiConf?
  • Impact: Is the applicant likely to make a positive impact on the HashiCorp community by attending HashiConf?

There are three categories of scholarship available, based roughly on where the applicant will be traveling from and therefore how much financial support they will need.

Scholarship winners will be selected by a diverse group of HashiCorp employees. The personal information in applications will remain confidential to the committee during the review process and after scholarships are awarded.

Apply now for the HashiCorp scholarship to attend HashiConf 2025. The deadline to apply for the HashiConf 2025 scholarship program is 11:59 p.m. PT on Tuesday, April 29, 2025.

We look forward to your applications! Please reach out to scholarship@hashicorp.com if you have any questions.



from HashiCorp Blog https://ift.tt/EBnosmC
via IFTTT

pfSense Software Takes Home 36 Awards in the G2 Winter 2025 Report

pfSense® software from Netgate® received 36 awards in the G2 Winter 2025 report. G2 is a technology review platform where businesses can find and compare software solutions based on user reviews and ratings. pfSense software has been recognized across various business segments and performance areas, with Enterprise, Mid-Market, and Small Business awards in categories such as Best Results, Best Relationship, Best Usability, and Most Implementable for both the Firewall Software and Business VPN groups.

G2 awards are based on reviews by real users. Our numerous awards indicate that we continue to provide high-performance and affordable firewall, VPN, and routing solutions. Placing first in many of these categories further validates that our work is important and appreciated. We are honored to receive these awards and grateful to our customers for your support. Thank you–we couldn't have done it without you!

Top pfSense Software Awards

  • #1 Europe Regional Grid® Report for Business VPN 
  • #1 Implementation Index for Firewall Software 
  • #1 Small-Business Relationship Index for Firewall Software 
  • #1 Small-Business Results Index for Firewall Software 
  • #1 Small-Business Implementation Index for Firewall Software 
  • #1 Small-Business Grid® Report for Firewall Software 
  • #1 Grid® Report for Business VPN 
  • #1 Small-Business Grid® Report for Business VPN 
  • #1 Small-Business Results Index for Business VPN 
  • #1 Small-Business Usability Index for Firewall Software 
  • #1 Small-Business Relationship Index for Business VPN 
  • #1 Relationship Index for Business VPN 
  • #1 Results Index for Firewall Software 
  • #1 Small-Business Europe Regional Grid® Report for Business VPN 
  • #1 EMEA Regional Grid® Report for Business VPN 
  • #1 Relationship Index for Firewall Software
  • #1 Momentum Grid® Report for Firewall Software
  • #1 Small-Business EMEA Regional Grid® Report for Business VPN 

Other Notable pfSense Software Awards

  • #2 Momentum Grid® Report for Business VPN 
  • #2 Enterprise Grid® Report for Business VPN 
  • #2 Small-Business Implementation Index for Business VPN 
  • #2 Implementation Index for Business VPN 
  • #2 Results Index for Business VPN 
  • #2 Usability Index for Business VPN 
  • #2 Small-Business Usability Index for Business VPN 
  • #2 Enterprise Relationship Index for Firewall Software 
  • #2 Grid® Report for Firewall Software 
  • #2 Mid-Market Usability Index for Firewall Software 
  • #2 Mid-Market Relationship Index for Firewall Software 
  • #2 Europe Regional Grid® Report for Firewall Software 
  • #2 Usability Index for Firewall Software 
  • #2 EMEA Regional Grid® Report for Firewall Software 

About pfSense Software

The leading firewall, router, and VPN solution for network edge and cloud secure networking, pfSense software is the world’s most trusted firewall and has earned the respect of users worldwide. pfSense software is made possible by open-source technology and made into a robust, reliable product by Netgate. 

Get pfSense Plus Today

About Netgate

Netgate is dedicated to developing and providing secure networking solutions to businesses, government, and educational institutions worldwide. Netgate is the only provider of pfSense products, which include pfSense Plus and pfSense Community Edition software. TNSR® software - a super-scale vRouter - extends the company’s open-source leadership and expertise into high-performance secure networking, capable of delivering compelling value at a fraction of the cost of proprietary solutions.



from Blog https://ift.tt/4zgrIOs
via IFTTT

UAT-5918 Targets Taiwan's Critical Infrastructure Using Web Shells and Open-Source Tools

Mar 21, 2025 | Ravie Lakshmanan | Threat Hunting / Vulnerability

Threat hunters have uncovered a new threat actor named UAT-5918 that has been attacking critical infrastructure entities in Taiwan since at least 2023.

"UAT-5918, a threat actor believed to be motivated by establishing long-term access for information theft, uses a combination of web shells and open-sourced tooling to conduct post-compromise activities to establish persistence in victim environments for information theft and credential harvesting," Cisco Talos researchers Jungsoo An, Asheer Malhotra, Brandon White, and Vitor Ventura said.

Besides critical infrastructure, some of the other targeted verticals include information technology, telecommunications, academia, and healthcare.

Assessed to be an advanced persistent threat (APT) group looking to establish long-term persistent access in victim environments, UAT-5918 is said to share tactical overlaps with several Chinese hacking crews tracked as Volt Typhoon, Flax Typhoon, Tropic Trooper, Earth Estries, and Dalbit.

Attack chains orchestrated by the group involve obtaining initial access by exploiting N-day security flaws in unpatched web and application servers exposed to the internet. The foothold is then used to drop several open-source tools to conduct network reconnaissance, system information gathering, and lateral movement.

UAT-5918's post-exploitation tradecraft involves the use of Fast Reverse Proxy (FRP) and Neo-reGeorge to set up reverse proxy tunnels for accessing compromised endpoints via attacker-controlled remote hosts.

The threat actor has also been leveraging tools like Mimikatz, LaZagne, and a browser-based extractor dubbed BrowserDataLite to harvest credentials to further burrow deep into the target environment via RDP, WMIC, or Impacket. Also used are Chopper web shell, Crowdoor, and SparrowDoor, the latter two of which have been previously put to use by another threat group called Earth Estries.

BrowserDataLite, in particular, is designed to pilfer login information, cookies, and browsing history from web browsers. The threat actor also engages in systematic data theft by enumerating local and shared drives to find data of interest.

"The activity that we monitored suggests that the post-compromise activity is done manually with the main goal being information theft," the researchers said. "Evidently, it also includes deployment of web shells across any discovered sub-domains and internet-accessible servers to open multiple points of entry to the victim organizations."




from The Hacker News https://ift.tt/6eN8ESM
via IFTTT

Kaspersky Links Head Mare to Twelve, Targeting Russian Entities via Shared C2 Servers

Mar 21, 2025 | Ravie Lakshmanan | Malware / Cyber Attack

Two known threat activity clusters codenamed Head Mare and Twelve have likely joined forces to target Russian entities, new findings from Kaspersky reveal.

"Head Mare relied heavily on tools previously associated with Twelve. Additionally, Head Mare attacks utilized command-and-control (C2) servers exclusively linked to Twelve prior to these incidents," the company said. "This suggests potential collaboration and joint campaigns between the two groups."

Both Head Mare and Twelve were previously documented by Kaspersky in September 2024, with the former leveraging a now-patched vulnerability in WinRAR (CVE-2023-38831) to obtain initial access and deliver malware and, in some cases, even deploying ransomware families like LockBit for Windows and Babuk for Linux (ESXi) in exchange for a ransom.

Twelve, on the other hand, has been observed staging destructive attacks, taking advantage of various publicly available tools to encrypt victims' data and irrevocably destroy their infrastructure with a wiper to prevent recovery efforts.

Kaspersky's latest analysis shows Head Mare's use of two new tools, including CobInt, a backdoor used by ExCobalt and Crypt Ghouls in attacks aimed at Russian firms in the past, as well as a bespoke implant named PhantomJitter that's installed on servers for remote command execution.

The deployment of CobInt has also been observed in attacks mounted by Twelve, with overlaps uncovered between the hacking crew and Crypt Ghouls, indicating some kind of tactical connection between different groups currently targeting Russia.

Other initial access pathways exploited by Head Mare include the abuse of known security flaws in Microsoft Exchange Server (e.g., CVE-2021-26855, aka ProxyLogon), phishing emails bearing rogue attachments, and the compromise of contractors' networks to infiltrate victim infrastructure, a technique known as a trusted relationship attack.

"The attackers used ProxyLogon to execute a command to download and launch CobInt on the server," Kaspersky said, highlighting the use of an updated persistence mechanism that eschews scheduled tasks in favor of creating new privileged local users on a business automation platform server. These accounts are then used to connect to the server via RDP to transfer and execute tools interactively.

Besides assigning the malicious payloads names that mimic benign operating system files (e.g., calc.exe or winuac.exe), the threat actors have been found to remove traces of their activity by clearing event logs and use proxy and tunneling tools like Gost and Cloudflared to conceal network traffic.

Some of the other utilities used are

  • quser.exe, tasklist.exe, and netstat.exe for system reconnaissance
  • fscan and SoftPerfect Network Scanner for local network reconnaissance
  • ADRecon for gathering information from Active Directory
  • Mimikatz, secretsdump, and ProcDump for credential harvesting
  • RDP for lateral movement
  • mRemoteNG, smbexec, wmiexec, PAExec, and PsExec for remote host communication
  • Rclone for data transfer

The attacks culminate with the deployment of LockBit 3.0 and Babuk ransomware on compromised hosts, followed by dropping a note that urges victims to contact them on Telegram for decrypting their files.

"Head Mare is actively expanding its set of techniques and tools," Kaspersky said. "In recent attacks, they gained initial access to the target infrastructure by not only using phishing emails with exploits but also by compromising contractors. Head Mare is working with Twelve to launch attacks on state- and privately-controlled companies in Russia."

The development comes as BI.ZONE linked the North Korea-linked threat actor known as ScarCruft (aka APT37, Reaper, Ricochet Chollima, and Squid Werewolf) to a phishing campaign in December 2024 that delivered a malware loader responsible for deploying an unknown payload from a remote server.

The activity, the Russian company said, closely resembles another campaign dubbed SHROUDED#SLEEP that Securonix documented in October 2024 as leading to the deployment of a backdoor referred to as VeilShell in intrusions targeting Cambodia and likely other Southeast Asian countries.

Last month, BI.ZONE also detailed continued cyber attacks staged by Bloody Wolf to deliver NetSupport RAT as part of a campaign that has compromised more than 400 systems in Kazakhstan and Russia, marking a shift from STRRAT.




from The Hacker News https://ift.tt/Oq0WtiC
via IFTTT

Ongoing Cyber Attacks Exploit Critical Vulnerabilities in Cisco Smart Licensing Utility

Mar 21, 2025 | Ravie Lakshmanan | Cyber Attack / Vulnerability

Two now-patched security flaws impacting Cisco Smart Licensing Utility are seeing active exploitation attempts, according to the SANS Internet Storm Center.

The two critical-rated vulnerabilities in question are listed below -

  • CVE-2024-20439 (CVSS score: 9.8) - The presence of an undocumented static user credential for an administrative account that an attacker could exploit to log in to an affected system
  • CVE-2024-20440 (CVSS score: 9.8) - A vulnerability arising due to an excessively verbose debug log file that an attacker could exploit to access such files by means of a crafted HTTP request and obtain credentials that can be used to access the API

Successful exploitation of the flaws could enable an attacker to log in to the affected system with administrative privileges, and obtain log files that contain sensitive data, including credentials that can be used to access the API.

That said, the vulnerabilities are only exploitable in scenarios where the utility is actively running.

The shortcomings, which impact versions 2.0.0, 2.1.0, and 2.2.0, were patched by Cisco in September 2024. Version 2.3.0 of Cisco Smart Licensing Utility is not susceptible to the two bugs.

As of March 2025, threat actors have been observed attempting to actively exploit the two vulnerabilities, SANS Technology Institute's Dean of Research Johannes B. Ullrich said, adding that the unidentified threat actors are also weaponizing other flaws, including what appears to be an information disclosure flaw (CVE-2024-0305, CVSS score: 5.3) in Guangzhou Yingke Electronic Technology Ncast.

It's currently not known what the end goal of the campaign is, or who is behind it. In light of active abuse, it's imperative that users apply the necessary patches for optimal protection.




from The Hacker News https://ift.tt/lMpXWNo
via IFTTT

Thursday, March 20, 2025

Tomorrow, and tomorrow, and tomorrow: Information security and the Baseball Hall of Fame

Welcome to this week’s edition of the Threat Source newsletter. 

“Tomorrow, and tomorrow, and tomorrow / Creeps in this petty pace from day to day / To the last syllable of recorded time.” - Shakespeare’s Macbeth 

"But I am very poorly today and very stupid and I hate everybody and everything. One lives only to make blunders." - Charles Darwin’s letter to Charles Lyell 
 
“Another day, another box of stolen pens.” - Homer Simpson 

Some people are blessed with the ability to deal with monotony, and some are maddened beyond all recourse by it. In the worlds of both information security and baseball, the ability to overcome tedium is paramount. To be great — not just very good — requires the kind of devotion that many people cannot fathom. 

Ichiro Suzuki is one of the greatest players in baseball history and a phenomenal hitter. His dedication led him to practice his swing every day, taking hundreds of swings from both sides of the plate even though he solely batted from the left. He practiced from the right side simply to stay in balance. Ichiro understood that changing your perspective enhances your strengths. 

In cybersecurity, the ability to track and defend against living-off-the-land binaries (LoL bins) is the kind of tedium that garners Hall of Fame results. Cybercriminals and state-sponsored actors exploit built-in tools across all platforms, hiding in the noise of trusted and normal traffic. Once logged in, often with valid credentials, detecting and countering their activity becomes a much more challenging and tedious game, especially for newly minted junior analysts.  

Take some time each day to look at the correlated data from a different source, a different perspective. If you normally look at reconnaissance activity from specific devices, take a few moments to trace the path attackers took across non-security devices for a fuller understanding.  

Ultimately, it comes down to knowing your environment, just as Ichiro worked through the tedium to know his swing. Take the time to learn it from several angles instead of simply banging away from the same view. When all else fails, take a break, walk away, and breathe before getting back in the batter’s box and taking another 500 swings at the tee to become a .300 hitter.

Pssst! The devil on William's shoulder here. Want to procrastinate and avoid today’s tedium? Curious about what Talos does and how we defend organizations from the latest cyber attacks? Check out this new animated video. From threat hunting and detection building to vulnerability discoveries and incident response, we show up every day to try and make the internet a safer place.

The one big thing 

Cisco Talos released a blog highlighting research into UAT-5918, which has been targeting critical infrastructure entities in Taiwan. UAT-5918's post-compromise activity, tactics, techniques, and procedures (TTPs), and victimology overlap most with the Volt Typhoon, Flax Typhoon, Earth Estries, and Dalbit intrusions we’ve observed in the past.

Why do I care? 

Understanding the actions of motivated and capable threat actors is at the core of defending against them. Threat actors continue to leverage a plethora of open-source tools for network reconnaissance to move through the compromised enterprise, and we see this with UAT-5918. UAT-5918's intrusions involve harvesting local and domain-level user credentials and creating new administrative user accounts to facilitate additional channels of access, such as RDP to endpoints of significance to the threat actor.  

Typical tooling used by UAT-5918 includes networking tools such as FRPC, FScan, In-Swor, Earthworm, and Neo-reGeorg. They harvest credentials by dumping registry hives, NTDS, and using tools such as Mimikatz and browser credential extractors. These credentials are then used to perform lateral movement via either RDP, WMIC (PowerShell remoting), or Impacket.

So now what? 

Use the IOCs associated with the campaign in the blog post to search for evidence of incursion within your own environment. Use this exercise as a means of verifying that you have visibility of the systems on your network and that you are able to search for known malicious IOCs across platforms and datasets.
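
As a concrete starting point, here is a minimal Python sketch for sweeping a file share against known file-hash IOCs. The scan root is a hypothetical path, and the hash set is something you would populate from the indicators published in the Talos blog post.

```python
import hashlib
from pathlib import Path

# Populate with the SHA-256 hashes published in the Talos blog post.
KNOWN_BAD_SHA256 = {
    "7b3ec2365a64d9a9b2452c22e82e6d6ce2bb6dbc06c6720951c9570a5cd46fe5",
}

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: str) -> None:
    """Walk a directory tree and report any file whose hash matches an IOC."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if sha256_of(path) in KNOWN_BAD_SHA256:
                print(f"IOC match: {path}")
        except OSError:
            pass  # unreadable file; skip and keep scanning

if __name__ == "__main__":
    scan("/srv/suspect-share")  # hypothetical path -- point at the dataset you want to sweep
```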

Top security headlines of the week

New ChatGPT attacks: Attackers are actively exploiting a flaw in ChatGPT that allows them to redirect users to malicious URLs from within the artificial intelligence (AI) chatbot application. There were more than 10,000 exploit attempts in a single week originating from a single malicious IP address. (DarkReading)  

Not your usual spam: Generative AI spammers are brute forcing social media and search algorithms with nightmare-fuel videos, and it's working. (404 Media)

Zero-day Windows vulnerability: An unpatched security flaw impacting Microsoft Windows has been exploited by 11 state-sponsored groups from China, Iran, North Korea, and Russia as part of data theft, espionage, and financially motivated campaigns that date back to 2017. (The Hacker News)

Can’t get enough Talos? 

Upcoming events where you can find Talos 

Amsterdam 2025 FIRST Technical Colloquium Amsterdam (March 25-27, 2025) Amsterdam, NL 
RSA (April 28-May 1, 2025)  San Francisco, CA   
CTA TIPS 2025 (May 14-15, 2025) Arlington, VA  
Cisco Live U.S. (June 8 – 12, 2025) San Diego, CA

Most prevalent malware files from Talos telemetry over the past week  

SHA 256: 7b3ec2365a64d9a9b2452c22e82e6d6ce2bb6dbc06c6720951c9570a5cd46fe5
MD5: ff1b6bb151cf9f671c929a4cbdb64d86 
VirusTotal: https://www.virustotal.com/gui/file/7b3ec2365a64d9a9b2452c22e82e6d6ce2bb6dbc06c6720951c9570a5cd46fe5 
Typical Filename: endpoint.query
Claimed Product: Endpoint-Collector
Detection Name: W32.File.MalParent   

SHA 256: 47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca  
MD5: 71fea034b422e4a17ebb06022532fdde  
VirusTotal: https://www.virustotal.com/gui/file/47ecaab5cd6b26fe18d9759a9392bce81ba379817c53a3a468fe9060a076f8ca 
Typical Filename: VID001.exe 
Claimed Product: N/A   
Detection Name: Coinminer:MBT.26mw.in14.Talos 

SHA 256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91
MD5: 7bdbd180c081fa63ca94f9c22c457376  
VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91/details 
Typical Filename: c0dwjdi6a.dll  
Claimed Product: N/A   
Detection Name: Trojan.GenericKD.33515991 

SHA 256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
MD5: 2915b3f8b703eb744fc54c81f4a9c67f 
VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
Typical Filename: VID001.exe 
Detection Name: Simple_Custom_Detection 



from Cisco Talos Blog https://ift.tt/CpXmW6j
via IFTTT

NVMe over TCP: Practical Deployment and Limitations. Rethinking Its Role in Modern Data Centers 

Introduction

NVMe over TCP (NVMe/TCP) extends Non-Volatile Memory Express (NVMe) over Ethernet networks using standard TCP/IP. Although initially seen as a budget-friendly alternative to NVMe over RDMA (NVMe/RDMA), its role in enterprise environments deserves a broader perspective. As adoption of RDMA-capable hardware grows, NVMe/TCP remains valuable, particularly for small to medium-sized businesses (SMBs) or scenarios involving strategic repurposing of existing hardware.

What is NVMe over TCP (NVMe/TCP)?

NVMe/TCP is an NVMe over Fabrics (NVMe-oF) transport that transmits NVMe commands over TCP/IP networks. Unlike NVMe/RDMA, which leverages RDMA-capable hardware for ultra-low latency, NVMe/TCP runs over standard Ethernet without requiring specialized network hardware. Both the Linux Kernel (version 5.0 onward) and SPDK (v19.01 onward) support NVMe/TCP, ensuring open-source ecosystem compatibility.
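
As a rough illustration of how little is involved on the initiator side, the sketch below loads the nvme-tcp kernel module and connects to a remote namespace. It assumes a Linux host with kernel 5.0 or later, the nvme-cli package installed, and root privileges; the target address, port, and NQN are placeholders.

```python
import subprocess

# Placeholder values -- substitute your target's address and NVMe Qualified Name (NQN).
TARGET_ADDR = "192.0.2.10"
TARGET_PORT = "4420"          # conventional NVMe/TCP port
TARGET_NQN = "nqn.2014-08.org.example:storage-target1"

def run(*args: str) -> None:
    """Run a command and raise if it fails (requires root privileges)."""
    subprocess.run(list(args), check=True)

# Load the kernel transport module, then connect to the target over TCP.
run("modprobe", "nvme-tcp")
run("nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN)

# List the NVMe namespaces now visible to the host.
run("nvme", "list")
```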

Assessing the Need for NVMe/TCP in Modern Enterprises

1. RDMA Hardware Adoption is Increasing, but Not Universal

While many enterprises have incorporated RDMA-supportive hardware from vendors such as NVIDIA (Mellanox), Intel, and Broadcom, the rate of adoption varies significantly, especially in smaller organizations. SMBs often maintain longer hardware lifecycles, making NVMe/TCP an appealing solution for modernizing storage without immediate infrastructure overhaul.

2. Legacy Hardware and Tiered Storage Utilization

Rather than dismissing legacy hardware outright, organizations frequently repurpose older but still reliable hardware from primary (Tier 1) storage to secondary (Tier 2) or backup tiers. NVMe/TCP effectively extends the life of existing Ethernet hardware, enabling cost-effective upgrades without compromising on improved performance and efficiency over legacy protocols such as iSCSI.

3. Replacing iSCSI for Improved Performance

Another use case for NVMe/TCP is as an upgrade path for organizations still relying on iSCSI-based storage. iSCSI introduces performance bottlenecks due to its reliance on SCSI encapsulation and CPU-intensive processing. NVMe/TCP provides a direct, high-performance alternative that eliminates these inefficiencies while leveraging existing Ethernet infrastructure, making it an attractive option for enterprises seeking an incremental modernization of storage connectivity without adopting RDMA or overhauling network hardware.

NVMe/TCP vs. Other Transport Protocols

1. NVMe/TCP vs. NVMe/RDMA

NVMe over RDMA, most commonly deployed as RoCE (RDMA over Converged Ethernet), provides lower latency and higher performance than NVMe/TCP. RoCEv2 delivers superior performance for latency-sensitive workloads but requires specialized hardware and more complex network configuration. NVMe/TCP, on the other hand, runs over standard Ethernet infrastructure, making it more cost-effective and easier to implement in diverse environments.

2. NVMe/TCP vs. NVMe/FC

NVMe over Fibre Channel (NVMe/FC) provides similar benefits to NVMe/TCP in terms of compatibility but benefits from the deterministic performance of Fibre Channel networks. If enterprises are already using FC, NVMe/FC is the logical transition rather than switching to NVMe/TCP.

3. NVMe/TCP vs. iSCSI

NVMe/TCP provides significant performance advantages over iSCSI, thanks to native NVMe command support, reduced processing overhead, and more streamlined data handling. Enterprises committed to improving performance without significant infrastructure disruption may find NVMe/TCP an optimal upgrade path compared to continuing incremental enhancements within iSCSI.

Performance Considerations & Limitations

NVMe over TCP (NVMe/TCP) offers a compelling balance of performance and ease of deployment, but its practical implementation comes with both advantages and challenges. Understanding these factors is crucial for organizations considering NVMe/TCP in their storage infrastructure.

  • Network Infrastructure: NVMe/TCP is an attractive option for organizations looking to modernize their storage without overhauling their network infrastructure. However, to fully benefit from NVMe/TCP’s performance advantages, high-speed networks (25GbE or higher) are necessary.
  • CPU Overhead: NVMe/TCP utilizes more CPU resources compared to NVMe/RDMA due to lack of hardware acceleration. This overhead, while notable, remains acceptable for many SMB and mid-tier workloads.
  • Latency: NVMe/TCP inherently experiences slightly higher latency compared to RDMA alternatives. However, for workloads not demanding the ultra-lowest latency, NVMe/TCP remains practical and effective.
  • Performance Variability: The performance of NVMe/TCP is more susceptible to network congestion and varying workloads compared to specialized protocols like NVMe over Fibre Channel.
  • TCP Stack Optimizations: To mitigate latency and performance issues, careful tuning of TCP parameters is often necessary. This includes optimizing TCP window sizes, buffer allocations, and congestion control settings (a minimal tuning sketch follows this list).
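
For reference, the sketch below shows the kind of tuning involved. It is only an illustration: it assumes a Linux host, the buffer sizes and congestion-control choice are placeholders rather than recommendations, and persistent settings belong in /etc/sysctl.d rather than a script.

```python
from pathlib import Path

# Illustrative values only -- size buffers for your link speed and latency, and test before deploying.
TUNABLES = {
    "net/core/rmem_max": "67108864",
    "net/core/wmem_max": "67108864",
    "net/ipv4/tcp_rmem": "4096 87380 67108864",   # min / default / max receive buffer
    "net/ipv4/tcp_wmem": "4096 65536 67108864",   # min / default / max send buffer
    "net/ipv4/tcp_congestion_control": "bbr",     # assumes the bbr module is available
}

def apply_tunables() -> None:
    """Write each value into /proc/sys (requires root)."""
    for key, value in TUNABLES.items():
        Path("/proc/sys", key).write_text(value + "\n")
        print(f"set {key.replace('/', '.')} = {value}")

if __name__ == "__main__":
    apply_tunables()
```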

When Would NVMe/TCP Make Sense?

Although the growing availability of RDMA-capable hardware narrows the case for NVMe/TCP in some modern enterprises, it remains relevant and practical in several scenarios, particularly:

  • iSCSI Replacement: Enterprises looking to phase out iSCSI in favor of a more efficient NVMe-based storage protocol while still leveraging standard Ethernet. While not matching RDMA-based solutions in absolute performance, NVMe/TCP offers significant improvements over traditional iSCSI, with, on average, 35% higher IOPS and 25% lower latency.
  • Brownfield Deployments: Organizations can leverage their current network investments without the need for specialized hardware like Fibre Channel or RDMA adapters.
  • Edge Computing & ROBO (Remote Office/Branch Office): Sites with minimal infrastructure where RDMA investment is not justified.
  • Test & Development Environments: NVMe/TCP can provide a low-cost alternative for staging and testing without requiring RDMA hardware.

Conclusion

NVMe/TCP retains significant practical value despite growing RDMA adoption, especially for SMBs and enterprises strategically managing hardware lifecycles. It offers a logical and economical upgrade from legacy protocols like iSCSI, ensures effective reuse of existing infrastructure, and aligns well with less latency-sensitive use cases. While NVMe/RDMA may dominate high-performance computing workloads, NVMe/TCP continues to provide a flexible, accessible solution for many enterprise storage environments.



from StarWind Blog https://ift.tt/IizyY6e
via IFTTT

How to Build and Maintain an Effective AWS Security Posture

Foreword & Guest Author Bio

As part of this ongoing series, SentinelOne is excited to present a series of guest blogs from cloud security experts covering their views on cloud security best practices. First in this series is the following blog from Aidan Steele, where he outlines his views on an introductory approach to AWS security posture.

Aidan is a security engineer at Cash App and an AWS Serverless Hero. He is an avid AWS user, having gotten started on the platform with EC2 in 2008. Sixteen years later, EC2 still has a special place in his heart, but his interests now are in containers and serverless functions – blurring the distinction between them wherever possible.

He enjoys finding novel uses for AWS services, especially when they have a security or network focus. This is best demonstrated through his open source contributions on GitHub, where he demonstrates interesting use cases via hands-on projects. Aidan regularly shares his thoughts and new projects (sometimes useful, sometimes just for fun) on his blog and X account.

Introduction

With great cloud power comes great cloud security responsibility. Over the years, AWS has added hundreds of new services, thousands of new APIs, and many new approaches to maintaining a proper security posture. In this article, I share a few different ways to approach it. Some are well-established best practices, some are less well-known, and others I rarely see taken advantage of in all but the most mature companies.

Use Multiple AWS Accounts

A common mistake I see in companies of all sizes is using too few AWS accounts. At a minimum, I would highly recommend that you create separate AWS accounts for production workloads and pre-production workloads. There are multiple benefits to this, and they’re not all just about security.

The first benefit is that you can minimize the chance of production downtime or data loss (e.g. due to untested automation), while maximizing your developers’ productivity by allowing them to experiment confidently and safely in a “development-only” account. Developers will be more confident to evaluate new ideas, knowing that they can’t accidentally bring down production.

A secondary benefit is that it gives you an easier way to view and manage costs. You can easily identify how much you are spending on development versus production and implement different kinds of cost saving measures in each environment.

Use AWS Organizations

AWS Organizations provides an easy way to manage multiple AWS accounts. Rather than having to go through the “new AWS account” sign-up flow and juggling multiple bills, AWS Organizations allows you to create new accounts in a couple of clicks or a single API call. There are a few things to keep in mind when using AWS Organizations.

  • Keep in mind that one AWS account is designated as the “management account” for an organization. You really want to have zero workloads running in this account and provide as little access to it as possible, because it contains the keys to your whole kingdom.
  • This means that if you are setting up an organization for the first time in an account you’ve been using for years – stop! Instead, create a brand new AWS account the “old-fashioned way” and create an organization in that account. This becomes your management account.
  • Finally, use your management account to send an invitation to your existing account to join the new organization. Your security team will thank you for doing this in the years to come.
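
To make the "single API call" mentioned above concrete, here is a minimal boto3 sketch run with credentials in the management account; the email address, account name, and account ID are placeholders, not values from this article.

```python
import boto3

org = boto3.client("organizations")

# Create a brand-new member account. Creation is asynchronous; poll
# describe_create_account_status with the returned Id to see the result.
status = org.create_account(
    Email="aws-dev@example.com",   # placeholder
    AccountName="development",     # placeholder
)["CreateAccountStatus"]
print("create request id:", status["Id"])

# Or invite an existing, standalone account to join the organization.
handshake = org.invite_account_to_organization(
    Target={"Id": "111111111111", "Type": "ACCOUNT"}  # placeholder account ID
)["Handshake"]
print("invitation handshake id:", handshake["Id"])
```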

Create a “Security” AWS Account and Delegate Administrative Tasks to It

Calling back to using the management account as little as possible, note that AWS makes this achievable by allowing you to designate an account in your organization as the “delegated administrator” for particular organization-level services. By default the administrator account is your management account. Instead, I recommend you create an AWS account dedicated to all “security” tasks and delegate administration of services to it wherever possible.

Setting Up an Organization-Level CloudTrail

CloudTrail is the definitive audit log for almost everything that happens in AWS. You should enable it as early as possible. After delegating CloudTrail administration to the security account, you can create an organization-wide CloudTrail in that account. This means that a record of all AWS APIs invoked in all AWS accounts in your organization will be stored in an S3 bucket for easy querying.
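
A minimal boto3 sketch of that setup follows. It assumes it runs in the delegated administrator account, that the named S3 bucket already exists with a bucket policy permitting CloudTrail writes, and that the trail and bucket names are placeholders.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a single organization-wide, multi-region trail.
trail = cloudtrail.create_trail(
    Name="org-trail",                          # placeholder
    S3BucketName="example-org-cloudtrail",     # placeholder; bucket policy must allow CloudTrail
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
)

# Trails do not record events until logging is explicitly started.
cloudtrail.start_logging(Name=trail["Name"])
print("logging started for", trail["TrailARN"])
```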

If you want to start querying immediately, you can enable CloudTrail Lake. This provides an easy way to query your CloudTrail logs using SQL. For more cost savings, you can instead use AWS Athena to query the CloudTrail logs stored in S3. Both of these options are much better than the “event history” available in the regular CloudTrail web console, because that view only provides very limited filtering – and only for the last 90 days.

Using Organizational Units (OUs) Effectively

AWS Organizations allows you to group AWS accounts together into organizational units (OUs). You can then attach service-control policies to particular OUs. For large companies, it can be tempting to reflect your corporate org chart in the AWS web console. Don’t do this! Instead, group accounts by their environment, e.g. create a “development” OU and place all your pre-production AWS accounts there. Create a “production” OU and place all your production AWS accounts there. This makes it easier to enforce SCPs like “no deleting databases in production”.
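
As an illustration of that kind of guardrail, here is a hedged sketch of creating and attaching such an SCP with boto3. The OU ID and the exact set of denied actions are placeholders you would adapt to your own environment.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny database deletion anywhere this policy is attached.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDatabaseDeletion",
        "Effect": "Deny",
        "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster", "dynamodb:DeleteTable"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-database-deletion",
    Description="No deleting databases in production",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]["PolicySummary"]

# Attach the SCP to the production OU (placeholder OU ID).
org.attach_policy(PolicyId=policy["Id"], TargetId="ou-xxxx-exampleid")
```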

In addition to development and production OUs, another couple of common OUs are “security” and “acquisitions”. The security OU is where you put the aforementioned security AWS account, and other delegated administrator accounts if you choose to further subdivide them. The acquisitions OU is especially useful for larger companies that acquire other companies. During an acquisition you often will invite your acquisition’s AWS accounts to join your organization. You don’t know quite how they use AWS yet and you don’t want to break anything by enforcing SCPs too early. The acquisition OU is effectively a staging area for these accounts until you are confident you can safely move them into your regular OU hierarchy.

Use IAM Identity Center

This service was previously known as “AWS Single Sign-On”. Regardless of its name, using it will make life easier for you and your developers – and more secure, too. IAM Identity Center can either plug into your existing SSO solution, or stand alone. You can assign your developers access to whichever AWS accounts they need access to in your organization. You can give them power-user access in development accounts and read-only access in production. Best of all, they don’t need to store any AWS credentials on their development machine – they can log into any account using the AWS CLI aws sso login command.

Leverage IAM Roles Over IAM Users

If you follow the guidance in the previous section, you are already well on your way to making IAM users obsolete at your company. IAM users are all too often implicated in security breaches. This is because they have access keys that are easy to identify and are long-lived: they don’t expire automatically like IAM role credentials. However, there are still some other places you might be tempted to use them – let’s discuss them and the alternatives.

Use federation where possible, especially for third party integration. There are broadly three ways you can federate access into AWS. All of these options are preferable to deploying IAM users, and which one to use depends on the use case:

  • OIDC – This is often used for CI/CD tools, e.g. GitHub Actions, GitLab, BuildKite, etc. You can create a “CI” role in your AWS account and have your CI system assume that role via an OIDC trust relationship (a minimal sketch follows this list). Bonus: CloudTrail will show auditors which build pipeline made particular API calls when you do this.
  • SAML – This is often used for humans assuming a role in AWS from your SSO provider. Technically this is how IAM Identity Center works. SAML federation can also be useful for federating from on-premises Active Directory workloads to AWS.
  • Roles Anywhere – The newest option available, this is the marketing name for X.509 certificate-based federation. A good rule of thumb is that you shouldn’t consider this unless you already have a mature, established X.509 public key infrastructure deployed across your company.
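
For the OIDC option above, the sketch below shows what the trust relationship can look like for GitHub Actions. It is a hedged example: it assumes the token.actions.githubusercontent.com OIDC provider has already been registered in the account, and the account ID, repository, and role name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

ACCOUNT_ID = "111111111111"               # placeholder
GITHUB_REPO = "example-org/example-repo"  # placeholder

# Trust policy letting workflows in one repository assume the role via OIDC.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/token.actions.githubusercontent.com"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {"token.actions.githubusercontent.com:aud": "sts.amazonaws.com"},
            "StringLike": {"token.actions.githubusercontent.com:sub": f"repo:{GITHUB_REPO}:*"},
        },
    }],
}

role = iam.create_role(
    RoleName="ci-deploy",                                  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Assumed by GitHub Actions via OIDC",
)["Role"]
print("created", role["Arn"])
```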

Using these three approaches means that you will have no need for IAM users. Still have on-premises machines and none of the above options work for you? Consider Systems Manager hybrid activations.

De-Duplicate Your Event-Driven Automation

A common pattern is to use EventBridge and Lambda functions for security-related automation. For example, you might have a Lambda function that is triggered when an EC2 instance is launched, and it automatically terminates the instance if something is misconfigured. People often deploy these Lambda functions into every AWS account and region that they run workloads in. This technically works, but can be burdensome to deploy and maintain.

Instead, I would recommend deploying this Lambda function once to a single AWS account and region. You configure it to be triggered by the relevant event rule, and configure the event bus to allow events to be forwarded from other AWS accounts in your organization. You can then use a CloudFormation service-managed stack set to deploy an EventBridge rule and IAM role in every account in your organization. Those rules will forward events to your single, centralized bus. The role can be assumed by your Lambda function when it needs to take action in response to an event. Now you can iterate on your security automation quicker and easier than ever!
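
A rough boto3 sketch of the two halves of that pattern follows. The organization ID, region, account IDs, role name, and example event pattern are all placeholders; in practice the member-account half is what you would template into the stack set.

```python
import json
import boto3

events = boto3.client("events")

# --- In the central security account: let any account in the organization
# --- put events onto the default event bus.
events.put_permission(
    EventBusName="default",
    Action="events:PutEvents",
    Principal="*",  # required to be "*" when a Condition is supplied
    StatementId="AllowOrgMembers",
    Condition={"Type": "StringEquals", "Key": "aws:PrincipalOrgID", "Value": "o-exampleorgid"},
)

# --- In each member account (normally deployed via a CloudFormation stack set):
# --- match the events you care about and forward them to the central bus.
CENTRAL_BUS_ARN = "arn:aws:events:us-east-1:111111111111:event-bus/default"  # placeholder

events.put_rule(
    Name="forward-ec2-launches",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["running"]},
    }),
)
events.put_targets(
    Rule="forward-ec2-launches",
    Targets=[{
        "Id": "central-bus",
        "Arn": CENTRAL_BUS_ARN,
        "RoleArn": "arn:aws:iam::222222222222:role/event-forwarder",  # placeholder role allowed to PutEvents
    }],
)
```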

Use IAM Role Paths

Paths are a convenient way to organize IAM roles. Often you will deploy a collection of roles to every account in your organization. You want to ensure that developers don’t accidentally change the configuration of these roles and want to write an SCP to protect them. Rather than having to enumerate the name of each protected role in your SCPs, you can instead put all roles in an /org-admin/ path and write a short SCP to prevent modification of roles with the ARN arn:aws:iam::*:role/org-admin/*.

Use the Power of Security Group References

In the pre-cloud world, we used to assign IP addresses from workload-specific IP ranges. Some people like to carry this over to AWS, but there’s a better option. Instead of a security group rule that says “anything with an IP address in 10.1.2.0/24 can access this RDS database”, you can instead rely on the fact that workloads can have multiple security groups attached.

You can create a “database client” security group and write a security group rule on the RDS instance that says “anything with the source security group sg-1234 can access this RDS database”. This grants you more flexibility and makes the purpose of security group rules more apparent.
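
A short boto3 sketch of that rule, assuming a PostgreSQL database on port 5432; the VPC ID and group names are placeholders, and the same idea applies to any client/server pair.

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0example"  # placeholder

# One group for anything that should be allowed to reach the database...
client_sg = ec2.create_security_group(
    GroupName="database-client", Description="Attach to workloads that need DB access", VpcId=VPC_ID
)["GroupId"]

# ...and one group attached to the RDS instance itself.
db_sg = ec2.create_security_group(
    GroupName="database", Description="Attached to the RDS instance", VpcId=VPC_ID
)["GroupId"]

# Allow PostgreSQL traffic from members of the client group -- no IP ranges involved.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": client_sg, "Description": "database clients"}],
    }],
)
```

With this approach, granting a new workload database access is just a matter of attaching the database-client group to it.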

Conclusion | The SentinelOne Perspective

As always, Cloud Security is a shared responsibility. While cloud providers secure the underlying infrastructure, it’s crucial for developers and operations teams to ensure they secure their functions, data, and access policies. SentinelOne’s recommendation is to leverage Well Architected Frameworks where possible, to ensure environments are built with secure-by-design principles.

Additionally, Cloud Native Application Protection Platforms, like SentinelOne’s Cloud Native Security, can assist cloud and security teams by detecting and prioritizing cloud risk. Our CNAPP is able to detect misconfigurations across cloud services, infrastructure and cloud identity, as well as vulnerabilities, and provides evidence-based insight into cloud risks that can be externally exploited.

Next in our guest blog series, cloud security engineer Don Magee will go into the next level of detail on account, role and credential management in AWS and more, with his post covering 6 AWS Security Blind Spots.

Singularity™ Cloud Security
Discover all of your assets and deploy AI-powered protection to shield your cloud from build time to runtime.


from SentinelOne https://ift.tt/q279EBK
via IFTTT

YouTube Game Cheats Spread Arcane Stealer Malware to Russian-Speaking Users

Mar 20, 2025 | Ravie Lakshmanan | Malware / Threat Analysis

YouTube videos promoting game cheats are being used to deliver a previously undocumented stealer malware called Arcane likely targeting Russian-speaking users.

"What's intriguing about this malware is how much it collects," Kaspersky said in an analysis. "It grabs account information from VPN and gaming clients, and all kinds of network utilities like ngrok, Playit, Cyberduck, FileZilla, and DynDNS."

The attack chains involve sharing links to a password-protected archive on YouTube videos, which, when opened, unpacks a start.bat batch file that's responsible for retrieving another archive file via PowerShell.

The batch file then utilizes PowerShell to launch two executables embedded within the newly downloaded archive, while also disabling Windows SmartScreen protections and adding every drive's root folder to the SmartScreen filter exceptions.

Of the two binaries, one is a cryptocurrency miner and the other is a stealer dubbed VGS that's a variant of the Phemedrone Stealer malware. As of November 2024, the attacks have been found to replace VGS with Arcane.

"Although much of it was borrowed from other stealers, we could not attribute it to any of the known families," the Russian cybersecurity company noted.

Besides stealing login credentials, passwords, credit card data, and cookies from various Chromium- and Gecko-based browsers, Arcane is equipped to harvest comprehensive system data as well as configuration files, settings, and account information from several apps, as follows -

  • VPN clients: OpenVPN, Mullvad, NordVPN, IPVanish, Surfshark, Proton, hidemy.name, PIA, CyberGhost, and ExpressVPN
  • Network clients and utilities: ngrok, Playit, Cyberduck, FileZilla, and DynDNS
  • Messaging apps: ICQ, Tox, Skype, Pidgin, Signal, Element, Discord, Telegram, Jabber, and Viber
  • Email clients: Microsoft Outlook
  • Gaming clients and services: Riot Client, Epic, Steam, Ubisoft Connect (ex-Uplay), Roblox, Battle.net, and various Minecraft clients
  • Crypto wallets: Zcash, Armory, Bytecoin, Jaxx, Exodus, Ethereum, Electrum, Atomic, Guarda, and Coinomi

Furthermore, Arcane is designed to take screenshots of the infected device, enumerate running processes, and list saved Wi-Fi networks and their passwords.

"Most browsers generate unique keys for encrypting sensitive data they store, such as logins, passwords, cookies, etc.," Kaspersky said. "Arcane uses the Data Protection API (DPAPI) to obtain these keys, which is typical of stealers."

"But Arcane also contains an executable file of the Xaitax utility, which it uses to crack browser keys. To do this, the utility is dropped to disk and launched covertly, and the stealer obtains all the keys it needs from its console output."

Adding to its capabilities, the stealer malware implements a separate method for extracting cookies from Chromium-based browsers by launching a copy of the browser through a debug port.

The unidentified threat actors behind the operation have since expanded their offerings to include a loader named ArcanaLoader that's ostensibly meant to download game cheats, but delivers the stealer malware instead. Russia, Belarus, and Kazakhstan have emerged as the primary targets of the campaign.

"What's interesting about this particular campaign is that it illustrates how flexible cybercriminals are, always updating their tools and the methods of distributing them," Kasperksy said. "Besides, the Arcane stealer itself is fascinating because of all the different data it collects and the tricks it uses to extract the information the attackers want."




from The Hacker News https://ift.tt/Svctu9e
via IFTTT

Wednesday, March 19, 2025

Citrix and NVIDIA partner to deliver AI Virtual Workstations

As we head into the second half of our week at Citrix UNITE, our annual sales kickoff event happening in New Orleans, LA, the good times keep rolling and so do the exciting announcements. Following yesterday’s news, today we have another exciting announcement around our collaboration with NVIDIA, the world leader in artificial intelligence computing. Together we are delivering artificial intelligence (AI) virtual workstations built on NVIDIA Virtual GPU (vGPU), empowering forward-thinking enterprises to develop and prototype AI applications more securely and cost-effectively than ever before.

Strengthening your AI strategy with Citrix

Evolving from the early days of AI, more use cases are being unleashed into the enterprise than ever before. In fact, over 50% of companies are already deploying generative and predictive AI. Now, Citrix customers can deploy NVIDIA RTX Virtual Workstation along with Citrix DaaS™, supporting efforts to accelerate the development, prototyping, and delivery of AI apps, all while ensuring security, cost savings, scalability, and performance.

How it works

While many enterprises are currently in the process of prototyping dozens of AI applications, concerns about data residency, security, and cost often emerge at some point in the proof of concept (POC) phase, slowing both production deployments and the ability to realize business value. And it makes sense, as it can be expensive developing AI apps, not to mention a very time-consuming process with significant security and compliance requirements. Customers in highly regulated industries such as healthcare, financial services, and government have looked to Citrix for over 35 years to address challenges like this. With this partnership, Citrix is once again the key platform to accelerate and drive business agility, now for AI. 

Bringing together Citrix DaaS and the NVIDIA RTX Virtual Workstation solves these challenges, allowing you to securely develop and deliver AI POCs and applications within your existing, secure infrastructure. Doing this minimizes the risk of proprietary information leaking out into public large language models (LLMs), and uses existing GPUs to significantly reduce costs. 

This integration allows you to take advantage of Citrix’s AI-ready solutions by:

  • Securing your AI development work: Leverage advanced security policies, such as zero-trust network access (ZTNA), already offered in the Citrix® platform, to safeguard AI models and enhance platform observability, protecting AI systems from potential threats.
  • Leveraging AI workload isolation: Gain control of outbound network communications, reducing attack surfaces, malicious software, and malware spread by running in isolated virtualized environments.
  • Experiencing cost and time savings, with scalability: Consolidate costs and get started faster, avoiding GPU supply chain lines, by virtualizing existing GPU resources and providing AI developers with secure, high-performance computing and GPU workloads to scale.

The immediate benefits to Citrix customers

If you are one of our many customers already running Citrix DaaS on servers with NVIDIA GPUs, this partnership allows you to leverage your existing Citrix environments to more securely and cost-effectively prototype, POC, and roll out AI applications with NVIDIA GPUs. 

With this partnership, you are now able to approach your AI application strategy with:

  • More confidence than ever: Rest assured that data and proprietary information will be protected and not leak out into public LLMs.
  • Additional value in your GPU investment: Develop, prototype, and run POCs for AI apps in a more cost-effective way by leveraging the GPUs already running in your existing Citrix environment. This can also eliminate the time that IT needs to validate a new platform for AI.
  • Enhanced security: Allow users to interact with production AI apps within the Citrix platform to ensure secure access.

Investing to meet your needs

As AI continues driving more initiatives and strategies across the enterprise, Citrix is here to support the workforce of the future. This partnership with NVIDIA helps make this possible, and we could not be more excited to introduce this technology into your Citrix environment. 

Learn more about this announcement via our press release, and stay tuned for more exciting updates as we continue making our way through 2025, and beyond.



from Citrix Blogs https://ift.tt/daIrNSH
via IFTTT

S3 Backup: Everything You Need to Know

Amazon Simple Storage Service (S3) is one of the most widely used cloud storage services, but just storing data is not enough. Without a solid backup strategy, you are vulnerable to accidental deletions, cyber threats, and unexpected data corruption.

So, what is the best way to back up your S3 data? This guide covers everything you need to know – from S3 backup methods and storage classes to best practices, pricing, and limitations. Whether you are exploring AWS Backup for S3 or looking for better alternatives, you will find everything you need to protect your data efficiently.

What Is AWS S3 Backup and How Does It Work?

AWS S3 Backup refers to the process of protecting Amazon S3 data through automated or manual backup methods. AWS Backup, a fully managed service, allows users to back up Amazon S3 data with policies that define retention periods and backup frequency.

When enabled, AWS Backup for S3 captures continuous backups, allowing you to restore to any point within the last 35 days. Alternatively, you can create periodic backups (snapshots) that can be stored for years. These backups help organizations meet compliance requirements, recover from data loss, and ensure business continuity.

Common Use Cases for Amazon S3 Backup

Amazon S3 backup supports a variety of business needs, including:

  • Hybrid cloud backup: store on-prem backups in Amazon S3 using AWS Storage Gateway, reducing hardware reliance and enabling seamless cloud integration.
  • Tape replacement: migrate from physical tape libraries to cloud-based virtual tape storage for improved durability and cost efficiency.
  • Data lifecycle management: use S3 lifecycle policies to automatically transition data to lower-cost storage tiers like Glacier.
  • Data resiliency: leverage AWS Backup’s cross-region and cross-account backup capabilities to ensure high availability and minimize downtime risks.
  • Regulatory compliance: store immutable archives with S3 Object Lock to meet legal and industry retention requirements.

Key Features of AWS S3 Backup

AWS S3 offers several built-in features to ensure reliable backups:

  • Object versioning: keep multiple versions of an object to prevent accidental deletions or overwrites. This allows organizations to recover previous versions in case of data corruption or unintended modifications.
  • Multi-zone storage durability: data is distributed across multiple Availability Zones (AZs) within a region, enhancing redundancy against outages and hardware failures. Even if one AZ experiences issues, the data remains accessible from another, ensuring uninterrupted service.
  • Optimized data transfer: AWS S3 leverages global network optimizations, including S3 Transfer Acceleration, to speed up data uploads and ensure efficient data movement between locations. This is particularly useful for large-scale backups requiring minimal downtime.
  • Immutable storage options: with S3 Object Lock, businesses can enforce write-once-read-many (WORM) policies to prevent modifications or deletions of backup data. This ensures compliance with industry regulations and protects against ransomware attacks (a short boto3 sketch follows this list).
  • Robust data encryption: AWS S3 secures backup data using server-side encryption (SSE) with AES-256 and integration with AWS Key Management Service (KMS). This protects backups both at rest and in transit, meeting strict security requirements.
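
To illustrate the versioning and immutability features above, here is a hedged boto3 sketch. The bucket name and retention period are placeholders, and note that Object Lock can only be enabled when the bucket is created.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # placeholder

# Object Lock must be requested at bucket creation time.
# (Outside us-east-1, also pass CreateBucketConfiguration={"LocationConstraint": "<region>"}.)
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Object Lock buckets are created with versioning on; the call below is shown
# for buckets where you enable versioning separately to recover from overwrites
# and accidental deletions.
s3.put_bucket_versioning(
    Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
)

# Apply a default WORM retention so new backup objects cannot be altered or
# deleted for one year (illustrative period -- align it with your compliance needs).
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```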

Backup Methods for Amazon S3

There are two primary ways to back up Amazon S3 data:

  • Continuous backups: track all changes made to S3 objects and enable restoration to any point within the last 35 days. This method ensures data is always up to date and allows businesses to perform granular restores based on their specific recovery requirements.
  • Periodic snapshots: capture scheduled backups that can be retained for up to 99 years. These snapshots provide a point-in-time recovery solution, making them useful for compliance and archival purposes while minimizing storage costs.

Pro Tip: combine both methods. Use continuous backups for recent data recovery and periodic snapshots for long-term archival. This way, you get fast restores and lower storage costs.
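
A hedged boto3 sketch of that combined setup follows. The vault name, IAM role, bucket, schedules, and retention periods are placeholders; continuous backup retention is capped at 35 days, while the snapshot rule can retain data far longer.

```python
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "s3-continuous-plus-snapshots",    # placeholder
    "Rules": [
        {   # Point-in-time restore for the last 35 days.
            "RuleName": "continuous",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",   # daily at 05:00 UTC
            "EnableContinuousBackup": True,
            "Lifecycle": {"DeleteAfterDays": 35},
        },
        {   # Monthly snapshots kept for a year (illustrative retention).
            "RuleName": "monthly-snapshot",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 1 * ? *)",   # 05:00 UTC on the 1st of each month
            "Lifecycle": {"DeleteAfterDays": 365},
        },
    ],
})

# Assign the S3 bucket (placeholder) to the plan via an IAM role AWS Backup can assume.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "s3-buckets",
        "IamRoleArn": "arn:aws:iam::111111111111:role/aws-backup-service-role",  # placeholder
        "Resources": ["arn:aws:s3:::example-backup-bucket"],                      # placeholder
    },
)
```

Restores from the continuous rule can target any point within the 35-day window, while the monthly snapshots cover longer-term retention.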

AWS S3 Storage Classes for Backup

AWS Backup supports multiple Amazon S3 storage classes, allowing businesses to tailor their backup strategy based on access frequency and cost considerations. Here is a breakdown of the available options:

  • S3 Standard: designed for frequently accessed backups, offering high durability and fast retrieval. This class is ideal for active workloads where data needs to be quickly restored without delays.
  • S3 Standard-IA: a cost-efficient option for backups that are accessed occasionally but still require quick retrieval. It is well-suited for disaster recovery and long-term storage where data is needed infrequently but must be available on demand. S3 Standard-IA provides the same high durability as S3 Standard but with lower storage costs.
  • S3 One Zone-IA: a budget-friendly option for infrequently accessed backups stored in a single Availability Zone. While it reduces costs, it carries a higher risk of data loss compared to multi-zone storage.
  • S3 Intelligent-Tiering: automatically shifts data between frequent and infrequent access tiers based on usage patterns. This class optimizes costs by ensuring backups are stored in the most economical tier without manual intervention.
  • S3 Glacier Instant Retrieval: designed for long-term archival storage while still providing millisecond retrieval times. It offers the lowest storage costs for backups that are rarely accessed but require instant availability when needed.

It is important to note that AWS Backup does not support archiving snapshots to any S3 archive tiers, such as Glacier or Glacier Deep Archive.

Best Practices for S3 Backup

To ensure a reliable and efficient S3 backup strategy, consider the following best practices:

  • Organize metadata efficiently: implement structured metadata tags to enhance searchability and streamline data classification.
  • Verify data integrity with checksums: use checksum validation during uploads and restores to detect any data corruption or inconsistencies.
  • Follow best practices for object key naming: keep object names consistent and free of special characters to avoid compatibility issues across services.
  • Optimize storage costs with lifecycle policies: automate data transitions to Glacier or other lower-cost storage classes when backups are no longer frequently accessed.
  • Control versioning: regularly audit and remove obsolete object versions to keep storage costs in check.
  • Enable activity monitoring: use AWS logging and event notifications to track modifications and ensure compliance with security policies. A short configuration sketch covering this item and the versioning cleanup above follows the list.
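
As an independent, hypothetical Terraform sketch of those last two practices (bucket references, the logging bucket, the retention period, and the log prefix are placeholders, shown separately from the earlier lifecycle example):

#Hypothetical cleanup of old object versions plus server access logging

resource "aws_s3_bucket_lifecycle_configuration" "version_cleanup" {
  bucket = aws_s3_bucket.backups.id

  rule {
    id     = "expire-noncurrent-versions"
    status = "Enabled"
    filter {}

    # Permanently remove object versions 90 days after they become noncurrent
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}

resource "aws_s3_bucket_logging" "backups" {
  bucket        = aws_s3_bucket.backups.id
  target_bucket = aws_s3_bucket.log_bucket.id # assumes a separate logging bucket exists
  target_prefix = "s3-access-logs/"
}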

AWS S3 Backup Pricing

AWS S3 backup pricing depends on several factors, including storage usage, request operations, and data retrieval. Costs typically include:

  • Storage costs: billed based on the average storage used per month, with prices starting from $0.023 per GB-Month for S3 Standard, and lower rates for archival tiers.
  • Request fees: operations such as GET/LIST and PUT requests during backup and restore processes incur additional charges.
  • Restore costs: data retrieval is charged per GB, starting at $0.01 for Glacier and varying based on the speed of access.
  • Data transfer fees: moving data between AWS regions or to on-premises infrastructure incurs outbound transfer costs.
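
As a rough, hypothetical illustration of the storage component alone: keeping 1 TB (1,024 GB) of backups in S3 Standard at the $0.023 per GB-month rate above works out to roughly $23.55 per month, before any request, retrieval, or data transfer charges are added.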

For up-to-date pricing, refer to the AWS S3 Pricing Page.

Limitations of AWS Backup for S3

While AWS Backup for Amazon S3 provides a centralized and automated backup solution, it comes with several limitations that users should be aware of:

  1. Limited metadata and configuration backup: AWS Backup does not support backing up all S3 object metadata. It excludes certain properties like the original creation date, version ID, storage class, and ETags. Additionally, bucket-level configurations such as bucket policies, settings, names, and access points are not included in backups.
  2. No support for SSE-C encryption: AWS Backup does not support backing up objects encrypted with SSE-C (Server-Side Encryption with Customer-Provided Keys). This means that organizations relying on SSE-C for enhanced security cannot use AWS Backup to protect their data. Furthermore, AWS Backup does not support backing up S3 data stored on AWS Outposts, limiting its usability for hybrid cloud environments.
  3. No cold storage transition: AWS Backup does not allow transitioning S3 backups to cold storage, such as S3 Glacier or S3 Glacier Deep Archive. This limitation can lead to higher storage costs, especially for organizations looking to retain backups for long-term archival purposes.
  4. Object key name restrictions: AWS Backup only supports object key names containing specific Unicode characters. Objects with key names that include unsupported characters might be excluded from backups, potentially leading to data loss or incomplete backups.

What Does StarWind Have to Offer?

While AWS Backup for S3 offers a managed backup solution, it comes with limitations, particularly in cold storage transitions and long-term archival cost optimization. AWS Backup does not allow S3 snapshots to be archived in Glacier or Glacier Deep Archive, forcing businesses to store backups in higher-cost storage tiers.

StarWind Virtual Tape Library (VTL) for AWS and Veeam addresses these challenges by seamlessly integrating cost-efficient cold storage tiers into the backup infrastructure. It enables businesses to implement a Disk-to-Disk-to-Cloud (D2D2C) strategy, allowing backups to be stored on fast local storage first before being automatically tiered to Amazon S3 and Glacier. This ensures that frequently accessed data remains on performance-optimized storage while older backups are offloaded to low-cost archival tiers, significantly reducing storage expenses.

By leveraging StarWind VTL, businesses gain a more cost-effective and secure backup strategy compared to AWS Backup alone, ensuring compliance with the 3-2-1-1 backup rule.

Conclusion

Amazon S3 offers versatile backup capabilities via continuous backups and periodic snapshots, enabling a variety of restore options. However, it’s essential to be aware of limitations such as incomplete metadata backup and the inability to directly archive snapshots to cold storage tiers like Glacier. By carefully considering storage classes, implementing best practices, and understanding the Amazon S3 pricing structure, businesses can effectively leverage S3 for data protection and achieve a robust backup strategy.



from StarWind Blog https://ift.tt/Gq4BMyC
via IFTTT

Hackers Exploit Severe PHP Flaw to Deploy Quasar RAT and XMRig Miners

Mar 19, 2025Ravie LakshmananThreat Intelligence / Cryptojacking

Threat actors are exploiting a severe security flaw in PHP to deliver cryptocurrency miners and remote access trojans (RATs) like Quasar RAT.

The vulnerability, assigned the CVE identifier CVE-2024-4577, refers to an argument injection vulnerability in PHP affecting Windows-based systems running in CGI mode that could allow remote attackers to run arbitrary code.

Cybersecurity company Bitdefender said it has observed a surge in exploitation attempts against CVE-2024-4577 since late last year, with a significant concentration reported in Taiwan (54.65%), Hong Kong (27.06%), Brazil (16.39%), Japan (1.57%), and India (0.33%).

About 15% of the detected exploitation attempts involve basic vulnerability checks using commands like "whoami" and "echo <test_string>." Another 15% revolve around commands used for system reconnaissance, such as process enumeration, network discovery, user and domain information, and system metadata gathering.

Martin Zugec, technical solutions director at Bitdefender, noted that roughly 5% of the detected attacks culminated in the deployment of the XMRig cryptocurrency miner.

"Another smaller campaign involved the deployment of Nicehash miners, a platform that allows users to sell computing power for cryptocurrency," Zugec added. "The miner process was disguised as a legitimate application, such as javawindows.exe, to evade detection."

Other attacks have been found to weaponize the shortcoming to deliver remote access tools like the open-source Quasar RAT, as well as to execute malicious Windows installer (MSI) files hosted on remote servers using cmd.exe.

In perhaps something of a curious twist, the Romanian company said it also observed attempts to modify firewall configurations on vulnerable servers with an aim to block access to known malicious IPs associated with the exploit.

This unusual behavior raises the possibility that rival cryptojacking groups are competing for control over susceptible resources and preventing others from compromising systems already under their control. It's also consistent with historical observations of cryptojacking attacks terminating rival miner processes before deploying their own payloads.

The development comes shortly after Cisco Talos revealed details of a campaign weaponizing the PHP flaw in attacks targeting Japanese organizations since the start of the year.

Users are advised to update their PHP installations to the latest version to safeguard against potential threats.

"Since most campaigns have been using LOTL tools, organizations should consider limiting the use of tools such as PowerShell within the environment to only privileged users such as administrators," Zugec said.




from The Hacker News https://ift.tt/yXczDZI
via IFTTT

Disaster recovery strategies with Terraform

The total cost of unplanned outages has been rising steeply year over year. A 2016 study conducted by the Ponemon Institute put the mean total cost per minute of an unplanned outage at $8,851, a 32% increase since 2013 and an 81% increase since 2010. A 2022 study by EMA Research puts that figure at $12,900. These metrics show how crucial it is for organizations to have a solid, well-thought-out disaster recovery strategy in place in order to reduce downtime and data loss as much as possible once disaster strikes.

Ensuring business continuity and safeguarding mission-critical systems against unexpected failures can be time-consuming, expensive, and difficult to maintain, especially as systems scale. It is also not uncommon for disaster recovery (DR) solutions to cost enterprises anywhere from several hundred thousand to millions of dollars per year, creating significant strain on IT budgets within organizations.

However, setting up and maintaining DR infrastructure doesn't have to be so cumbersome or costly. This is where leveraging infrastructure as code (IaC) within your DR plan comes into play.

This blog post showcases how HashiCorp Terraform can be used to effectively set up, test, and validate your DR environments in a cost-efficient, practical, and consistent manner by codifying the infrastructure provisioning process.

Disaster recovery strategies and terminologies

Before diving into how Terraform can help provision and manage DR related infrastructure, it’s important to understand the concepts of Recovery Time Objective (RTO) and Recovery Point Objective (RPO), including how they differ from each other along with how they should be applied to your organization’s particular DR strategy:

  • Recovery Time Objective (RTO): The maximum amount of time a business can take to restore operations after an unplanned outage before the organization's mission is negatively impacted.
  • Recovery Point Objective (RPO): The maximum amount of data a business can afford to lose, measured in time. This typically varies anywhere from a few minutes to several hours depending on business requirements.
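
For example, an RPO of 15 minutes means backups or replication must capture changes at least every 15 minutes, while an RTO of one hour means the entire restore and failover workflow, from detection to a working production service, must complete within an hour.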

It’s also necessary to understand some of the most popular disaster recovery strategies. The list below is ordered from least to most expensive (Figure 1 below shows each method on a spectrum of complexity and RTO/RPO).

Keep in mind that you will typically see a combination of these methods being used simultaneously within an organization’s DR strategy. For example, a container/VM cluster orchestrator will typically leverage the Pilot Light methodology, while database infrastructure might use the Backup & Data Recovery method:

  • Backup & Data Recovery: The least complex and least costly DR strategy covered here. This method involves backing up your systems/data to a different location and, in case of disaster, the data is restored from backup onto either an existing, or new system. This can be a simple and cost-effective strategy. However, depending on the amount of data and recovery process, can lead to high RTOs and/or RPOs.
  • Pilot Light: The goal of a Pilot Light environment is to have a minimalistic copy of your production environment with only the key components/services running in another location. When disaster occurs, the additional required components are provisioned and scaled up to production capacity. This strategy is typically quicker than the Backup/Data Recovery option, but it brings more complexity and cost as well.
  • Active/Passive: In this strategy, a fully functional replica of the production environment is created in a secondary location. This is the most expensive and complex strategy of the options discussed so far. However, it also provides the quickest recovery time and the least data loss of the methods covered to this point.
  • Multi-Region Active/Active: This is where systems/applications are built to be distributed across various geographic regions. If one region fails, traffic is automatically redirected to other healthy regions. This is the most complex and expensive of all the strategies, but it also provides the highest level of resilience and availability while protecting mission-critical applications against full-region outages.
Figure 1: Disaster recovery strategies plotted on a spectrum of complexity and RTO/RPO.

Why use Terraform with your DR strategy?

If you have gone through the process of selecting and using DR tooling in the past, you most likely encountered one, or more, of the following problems:

  • Cost: As I previously mentioned, disaster recovery tools can be extremely expensive. Licensing fees coupled with ongoing costs of maintaining redundant, idle infrastructure can be a significant strain on IT budgets.
  • Lack of flexibility: DR toolsets are typically tied to a particular platform. This results in additional complexity and reduced flexibility when it comes to setting DR strategies across multiple cloud providers. This also applies to leveraging a managed solution from one of the major public clouds. While leveraging a cloud-specific DR solution may be convenient at first, it will limit your options for multi-cloud and hybrid strategies in the future as you expand.
  • Performance: These tools can also be very slow when it comes to performance and recovery speed. Legacy DR solutions typically rely on complex mechanisms that are slow and error prone, making desired RTO and RPO difficult to achieve.

Terraform not only helps solve all of these issues, but also provides several other key advantages when leveraged within your disaster recovery strategy:

  • Automation: Terraform allows you to automate the entire infrastructure deployment and recovery process, minimizing the need for manual intervention and greatly reducing risk of human error. This also ensures consistency and repeatability within your DR infrastructure setup.
  • Repeatability: With Terraform, you are adopting an infrastructure as code mindset, meaning that you ensure consistent infrastructure configuration across multiple environments by defining your infrastructure once in a codified manner. This mitigates configuration drift and ensures that your DR environment accurately mirrors your production setup.
  • Scalability: Terraform enables you to scale your environments as needed with ease, allowing you to test your DR infrastructure plans at scale, ensuring they can handle real-world scenarios.
  • Cost efficiency: Terraform allows you to dynamically provision and destroy ephemeral resources as needed, resulting in greatly reduced infrastructure costs as you only pay for the resources utilized during your DR exercise instead of incurring ongoing costs from resources that remain idle most of the time.
  • Flexibility: With Terraform being a cloud agnostic solution, you have the ability to not only spin up infrastructure in different availability zones or regions within a single cloud provider, but you can provision and manage resources across multiple cloud providers as well.

How to use Terraform with your DR strategy

Let's revisit the DR strategies mentioned previously and look at examples of how Terraform can be utilized with each one:

  • Backup & Data Recovery: The -refresh-only flag can update the Terraform state file to match the actual infrastructure state without modifying the infrastructure itself. This can be used after a backup or recovery operation in order to sync Terraform state and reduce drift.
  • Pilot Light and Active/Passive: Terraform conditional expressions can be leveraged to deploy only the required infrastructure components needed for a Pilot Light while keeping other resources in a dormant state, or label an Active/Passive configuration as on/off until a DR event occurs. Once a DR event occurs, conditionals can trigger resource scaling to full production capacity, ensuring minimal downtime and operational impact. The next section of this post shows an example of this Active/Passive cutover.
  • Multi-Region Active/Active: Terraform modules can be used to encapsulate and re-use infrastructure components. This plays a crucial role in ensuring consistency is maintained in large-scale, multi-region environments while simplifying infrastructure management by ensuring a single source of truth for your infrastructure code. As an example, you can parameterize your modules by region, ensuring you deploy the same infrastructure across various regions:
#Terraform modules parameterized by region

module "vpc" {
  source = "./modules/vpc"
  region = var.region
}

module "compute" {
  source = "./modules/compute"
  region = var.region
  instance_count = var.instance_count
}

It is also worth noting that the terraform import command can be a valuable tool within your DR strategy, bringing existing infrastructure created outside of Terraform under Terraform management.
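
As a minimal sketch of how both commands might look in practice (the resource address and instance ID below are placeholders), you could refresh state after a restore and then import an externally created instance:

#Refreshing state after a restore and importing an externally created instance

$ terraform apply -refresh-only

$ terraform import aws_instance.prod_webserver i-0123456789abcdef0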

Disaster Recovery Active/Passive cutover example

To demonstrate how you can leverage Terraform for your DR strategy, the example below shows how to conduct a complete region failover within AWS for a web server hosted on an Amazon EC2 instance behind Route 53 (Refer to Figure 2 below).

The complete code repository for this example can be found here.

Note: I will be using my own domain already set up as an AWS Route 53 Hosted Zone (andrecfaria.com). If you are following along, this value should be replaced with whatever domain you set up within your Terraform configuration.

In a real-world scenario, your environment typically will be much more robust, most likely including:

  • Multiple web servers across several availability zones
  • Load balancers sitting in front of the web servers
  • Databases in both regions with cross-region replication in place
  • And more

However, for simplicity, this example uses only a single EC2 instance.

Figure 2: Active/Passive failover architecture with a web server on an Amazon EC2 instance behind Route 53, spanning a production and a DR region.

This scenario employs the Active/Passive DR strategy with all of your infrastructure provisioned and managed through Terraform. However, the infrastructure required for a DR failover will only be provisioned when you trigger the failover itself, preventing ongoing costs related to idle compute instances and other cloud resources. After running terraform apply, you see the following outputs:

Outputs:

current_active_environment = "Production"
dns_record = "test.andrecfaria.com"
production_public_ip = "18.234.86.230"

You can use the dig command to verify that your DNS record points to the production IP address:

$ dig test.andrecfaria.com

; <<>> DiG 9.18.28-0ubuntu0.22.04.1-Ubuntu <<>> test.andrecfaria.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58089
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;test.andrecfaria.com.          IN      A

;; ANSWER SECTION:
test.andrecfaria.com.   60      IN      A       18.234.86.230

;; Query time: 9 msec
;; SERVER: 10.255.255.254#53(10.255.255.254) (UDP)
;; WHEN: Mon Feb 10 16:04:47 EST 2025
;; MSG SIZE  rcvd: 65

You can also run a curl command to visualize the contents of your production webpage:

$ curl "http://test.andrecfaria.com"

Hello World from Production!

Looking at the Terraform code, within the variables.tf file you can find the following dr_switchover variable:

variable "dr_switchover" {
  type        = bool
  description = "Flag to control environment switchover (false = Production | true = Disaster Recovery)"
  default     = false
}

This variable is a key component of the DR configuration: when it keeps its default value of false, the Route 53 DNS record points to the production web server; when it is set to true, the required DR infrastructure resources are created and the record switches over to the DR web server.

This is accomplished by leveraging the conditional expressions functionality of Terraform when setting the records argument within the aws_route53_record resource declaration, as well as leveraging the count argument within the DR resources.

# Route53 Record - Conditional based on dr_switchover

resource "aws_route53_record" "test" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = "${var.subdomain}.${var.domain_name}"
  type    = "A"
  ttl = 60
  records = [var.dr_switchover ? aws_instance.dr_webserver.public_ip : aws_instance.prod_webserver.public_ip]
}
# Disaster Recovery EC2 Instance

resource "aws_instance" "dr_webserver" {
  count                  = var.dr_switchover ? 1 : 0
  provider               = aws.dr
  ami                    = var.dr_ami_id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.dr_sg.id]
  user_data              = <<-EOF
              #!/bin/bash
              sudo yum update -y
              sudo yum install -y nginx
              sudo systemctl start nginx
              sudo systemctl enable nginx
              echo "Hello World from Disaster Recovery!" | sudo tee /usr/share/nginx/html/index.html
              EOF
  tags = {
    Name        = "dr-instance"
    Environment = "Disaster Recovery"
  }
  depends_on = [aws_security_group.dr_sg]
}

The only change required in order to cutover to the DR environment is setting the value of the dr_switchover variable to true:

$ terraform apply -var="dr_switchover=true" -auto-approve

Below are the actions and output that Terraform will display when creating the DR EC2 instance and performing an in-place update to the Route 53 record resource, changing the records argument to point to your DR web server IP address instead of the production IP address:

Terraform will perform the following actions:

  # aws_instance.dr_webserver[0] will be created
  + resource "aws_instance" "dr_webserver" {
        ...
    }

  # aws_route53_record.test will be updated in-place
  ~ resource "aws_route53_record" "test" {
        id = "Z0441403334ANN7OFVRF1_test.andrecfaria.com_A"
        name = "test.andrecfaria.com"
        ~ records = [
        - "18.234.86.230",
        ] -> (known after apply)
        # (7 unchanged attributes hidden)
        }

Plan: 1 to add, 1 to change, 0 to destroy.

Changes to Outputs:
  ~ current_active_environment = "Production" -> "Disaster Recovery"
  + dr_public_ip  = (known after apply)


Outputs:

current_active_environment = "Disaster Recovery"
dns_record = "test.andrecfaria.com"
dr_public_ip = "54.219.217.97"
production_public_ip = "18.234.86.230"

Once the Terraform run is complete, you can validate that the DNS record now points to the DR web server by using the same dig and curl commands as before:

#dig command results showing DR IP address

$ dig test.andrecfaria.com

; <<>> DiG 9.18.28-0ubuntu0.22.04.1-Ubuntu <<>> test.andrecfaria.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19471
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;test.andrecfaria.com.          IN      A

;; ANSWER SECTION:
test.andrecfaria.com.   60      IN      A       54.219.217.97

;; Query time: 19 msec
;; SERVER: 10.255.255.254#53(10.255.255.254) (UDP)
;; WHEN: Mon Feb 10 16:16:25 EST 2025
;; MSG SIZE  rcvd: 65
#curl command showcasing DR webpage contents

$ curl "http://test.andrecfaria.com"

Hello World from Disaster Recovery!

Finally, you can fail back to production by simply running the terraform apply command again, this time setting the dr_switchover variable back to false. This also destroys all the infrastructure created during the failover to DR, helping you avoid unnecessary spend on idle resources.

#Setting the dr_switchover variable value via CLI

$ terraform apply -var="dr_switchover=false" -auto-approve
#Terraform apply run output

Terraform will perform the following actions:

  # aws_instance.dr_webserver[0] will be destroyed
  # (because index [0] is out of range for count)
  - resource "aws_instance" "dr_webserver" {
        ...
    }

  # aws_route53_record.test will be updated in-place
  ~ resource "aws_route53_record" "test" {
        id = "Z0441403334ANN7OFVRF1_test.andrecfaria.com_A"
        name = "test.andrecfaria.com"
        ~ records = [
        - "54.219.217.97",
        + "18.234.86.230",
        ]
        # (7 unchanged attributes hidden)
        }

Plan: 0 to add, 1 to change, 1 to destroy.

Changes to Outputs:
  ~ current_active_environment = "Disaster Recovery" -> "Production"
  - dr_public_ip = "54.219.217.97" -> null

Cleanup

If you have been following along by deploying your own resources, don’t forget to run the terraform destroy command in order to clean up your environment and not incur any unwanted costs.

Other considerations

Some additional considerations to be mindful of when using Terraform for DR infrastructure provisioning include, but are not limited to:

  • Application install time: Applications that are not provisioned through Terraform can take additional time to install and configure when performing a DR failover. Ensure that this is accounted for when determining RTO.
  • DNS propagation time: Keep in mind that DNS changes might take time to propagate. For a planned failover, this can be mitigated by proactively lowering the time-to-live (TTL) values of your DNS records a few days in advance.
  • Backups: Terraform does not back up your data and is not a replacement for your backup systems. Ensure that you have a solid backup strategy in place that meets your requirements in addition to your DR strategy.

Conclusion

This blog post demonstrated how Terraform can be leveraged to automate, simplify, and reduce costs related to provisioning and managing infrastructure within your disaster recovery strategy. To learn more about Terraform, visit the HashiCorp developer portal, where you can find more information regarding best practices, integrations, and reference architectures.



from HashiCorp Blog https://ift.tt/nUPpbDm
via IFTTT