Thursday, October 17, 2024

What I’ve learned in my first 7-ish years in cybersecurity

When I first interviewed with Joel Esler for my position at Cisco Talos, I remember one thing standing out when the time came for me to ask questions: I asked what resources would be available for me to learn about cybersecurity, because I was totally new to the space.

His answer: The people. When I asked that question, Joel told me that the entire office was a library for me. He told me to just ask as many questions as I could. 

I came to Talos from journalism, where I had reported on topics ranging from local government, finance and banking to art, culture and sports, so cybersecurity was totally new to me. Now, almost seven years later, I’ve hosted a podcast that ran for nearly 200 episodes, relaunched a cybersecurity newsletter, researched malicious Facebook groups trading stolen personal information, and even learned how to write a ClamAV signature.

Unfortunately, this week is my last at Talos, but far from my last in cybersecurity. I’m off to a new adventure, but I wanted to take the space here to talk about what I’ve learned in my career at Talos.  

I think that this is a good lesson for anyone reading this: If you want to work in cybersecurity, you can, no matter what your background or education is. I’ve met colleagues across Talos who previously studied counterterrorism operations, German and Russian history, and political science. And I walked into my first day on the job knowing next to nothing about cybersecurity. I knew I could write, and I knew I could help Talos tell their story (and clean up the occasional passive voice in their blog posts). But I had never heard of a remote access trojan before.  

I hope these lessons resonate with you, your team, or the next person you think about hiring into the cybersecurity industry.  

  1. You can’t do any of this without people. This has become extraordinarily relevant this year with the advent of AI. I personally have beef with the term “AI” anyway, because we’ve been using machine learning in cybersecurity for years now, and that’s essentially what the “AI” buzzword means today. But at the end of the day, people are what make cybersecurity detection work in the first place. If you don’t have a team that’s ready to put in the work necessary to write, test and improve the intelligence that goes into security products (AI or not), you’re doomed. These tools are only as good as the people who put the information into them. I’ve been beyond impressed with the experience, work ethic and knowledge of everyone in Talos. They are what make the engine run, and none of this would work without them. 
  2. You can carve out your own niche in cybersecurity. That said, you don’t have to know how to code to work in cybersecurity if you don’t want to. Anyone can carve out their own niche in the space with their own skillset. I still barely know how to write Python, but I’ve been able to use the skills that I do have (research, writing, storytelling, audio editing, etc.) to carve out my space in cybersecurity. I can speak intelligently about security problems and solutions with my colleagues without needing to know how to reverse-engineer a piece of malware. And even on the technical side of things, everyone can carve out their own specialty. Talos has experts on email spam, and even specific types of email spam, that their colleagues may not know anything about. Others specialize in certain geographic areas because they can speak the language there and can peel back an additional layer that non-native speakers can’t.  
  3. Be a sponge. Going back to the opening of this week’s newsletter, I needed to ask hundreds of questions in my first few months at Talos. It took me a good amount of time to get over my fear of looking stupid, and that held me back early on from having more intelligent conversations with my teammates, because I kept questions to myself or just assumed that Google had the right answers. No matter how many years you’ve been in the security space, there is always something new to learn. Never assume you know everything there is to know on a given topic. If you are a sponge for information, you never know what new skills you’ll pick up along the way. When I graduated from college with a journalism degree, I never would have believed you if you’d told me I’d one day need to understand how atomic clocks keep power grids running. But here we are. 

The Threat Source newsletter will be off for a few weeks while it undergoes a revamp, and it’ll be back with a new look.  

I want to thank everyone who has enabled me to shape this newsletter over the years and grow it to thousands of subscribers. And, of course, thanks to the readers who have engaged, read and shared along the way.  

The one big thing 

Cisco Talos has observed a new wave of attacks, active since at least late 2023, from a Russian-speaking group we track as “UAT-5647,” targeting Ukrainian government entities and unknown Polish entities. The latest series of attacks deploys an updated version of the RomCom malware we track as “SingleCamper.” This version is loaded directly from the registry into memory and uses a loopback address to communicate with its loader. 

Why do I care? 

UAT-5647 has long been considered a multi-motivational threat actor that carries out both ransomware and espionage-oriented attacks. In recent months, however, it has accelerated its attacks with a clear focus on establishing long-term access to exfiltrate data of strategic interest. UAT-5647 has also evolved its tooling to include four distinct malware families: two downloaders we track as RustyClaw and MeltingClaw, a Rust-based backdoor we call DustyHammock, and a C++-based backdoor we call ShadyHammock. 

So now what? 

Cisco Talos has released several Snort rules and ClamAV signatures to detect and defend against the malware families UAT-5647 uses.  

Top security headlines of the week 

Government and security officials are still unraveling what to make of recent revelations around multiple Chinese state-sponsored actors infiltrating U.S. networks. Most recently, Salt Typhoon was unveiled as a new actor that may have accessed foreign intelligence surveillance systems and electronic communications that some ISPs, like Verizon and AT&T, collect based on U.S. court orders. The actor reportedly accessed highly sensitive intelligence and law enforcement data. This followed reports earlier this year of other Chinese state-sponsored actors, Volt Typhoon and Flax Typhoon, which targeted U.S. government networks and systems on military bases. One source told the Wall Street Journal that the latest discovery of Salt Typhoon could be “potentially catastrophic.” The actor allegedly gained access to Verizon, AT&T and Lumen Technologies by exploiting systems those companies use to comply with the U.S. Communications Assistance for Law Enforcement Act (CALEA), which essentially legalizes wiretapping when required by law enforcement. (Axios, TechCrunch)

Chip maker Qualcomm says adversaries exploited a zero-day vulnerability in dozens of its chipsets used in popular Android devices. While few details are currently available regarding the vulnerability, CVE-2024-43047, researchers at Google and Amnesty International say they are working with Qualcomm to remediate the issue and responsibly disclose more information. Qualcomm listed 64 different chipsets as affected by the vulnerability, including the company’s Snapdragon 8 mobile platform, which is used in many Android phones, including some made by Motorola, Samsung and ZTE. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) also added the issue to its Known Exploited Vulnerabilities catalog, indicating it can confirm the flaw has been actively exploited in the wild. Qualcomm said it issued a fix in September, and it is now up to device manufacturers to roll out patches to their customers for affected devices. (Android Police, TechCrunch)

As many as 14,000 medical devices across the globe are online and vulnerable to a bevy of security vulnerabilities and exploits, according to a new study. Security research firm Censys recently found the exposed devices, which “greatly raise the risk of unauthorized access and exploitation.” Forty-nine percent of the exposed devices are located in the U.S. America’s decentralized health care system is widely believed to contribute to the number of vulnerable devices, because there is less coordination to isolate or patch devices when vulnerabilities are disclosed than in countries like the U.K., where the health care system is centrally organized and managed by the government. The Censys study found that many of the networks belonging to smaller health care organizations used residential ISPs, making them inherently less secure. Others set up devices and connected them to the internet without changing the preconfigured credentials or encrypting their connections, and others were simply misconfigured. Open DICOM and DICOM-enabled web interfaces intended to share and view medical images were responsible for 36 percent of the exposures, with 5,100 IPs hosting these systems. (CyberScoop, Censys)

Can’t get enough Talos? 

Upcoming events where you can find Talos

MITRE ATT&CKcon 5.0 (Oct. 22 - 23) 

McLean, Virginia and Virtual

Nicole Hoffman and James Nutland will provide a brief history of Akira ransomware and an overview of the Linux ransomware landscape. Then they’ll take a technical deep dive into the latest Linux variant, using the ATT&CK framework to uncover its tactics, techniques and procedures.

it-sa Expo & Congress (Oct. 22 - 24) 

Nuremberg, Germany

White Hat Desert Con (Nov. 14) 

Doha, Qatar

misecCON (Nov. 22) 

Lansing, Michigan

Terryn Valikodath from Cisco Talos Incident Response will explore the core of DFIR, where digital forensics becomes detective work and incident response turns into firefighting.

Most prevalent malware files from Talos telemetry over the past week 

There is no new data to report this week. This section will be overhauled in the next edition of the Threat Source newsletter.  



from Cisco Talos Blog https://ift.tt/mcCyne5
via IFTTT

Russian RomCom Attacks Target Ukrainian Government with New SingleCamper RAT Variant

Oct 17, 2024 | Ravie Lakshmanan | Threat Intelligence / Malware

The Russian threat actor known as RomCom has been linked to a new wave of cyber attacks aimed at Ukrainian government agencies and unknown Polish entities since at least late 2023.

The intrusions are characterized by the use of a variant of the RomCom RAT dubbed SingleCamper (aka SnipBot or RomCom 5.0), said Cisco Talos, which is monitoring the activity cluster under the moniker UAT-5647.

"This version is loaded directly from the registry into memory and uses a loopback address to communicate with its loader," security researchers Dmytro Korzhevin, Asheer Malhotra, Vanja Svajcer, and Vitor Ventura noted.

RomCom, also tracked as Storm-0978, Tropical Scorpius, UAC-0180, UNC2596, and Void Rabisu, has engaged in multi-motivational operations such as ransomware, extortion, and targeted credential gathering since its emergence in 2022.

It's been assessed that the operational tempo of their attacks has increased in recent months with an aim to set up long-term persistence on compromised networks and exfiltrate data, suggesting a clear espionage agenda.

To that end, the threat actor is said to be "aggressively expanding their tooling and infrastructure to support a wide variety of malware components authored in diverse languages and platforms" such as C++ (ShadyHammock), Rust (DustyHammock), Go (GLUEEGG), and Lua (DROPCLUE).

The attack chains start with a spear-phishing message that delivers a downloader -- either coded in C++ (MeltingClaw) or Rust (RustyClaw) -- which serves to deploy the ShadyHammock and DustyHammock backdoors, respectively. In parallel, a decoy document is displayed to the recipient to maintain the ruse.

While DustyHammock is engineered to contact a command-and-control (C2) server, run arbitrary commands, and download files from the server, ShadyHammock acts as a launchpad for SingleCamper as well as listening for incoming commands.

Despite's ShadyHammock additional features, it's believed that it's a predecessor to DustyHammock, given the fact that the latter was observed in attacks as recently as September 2024.

SingleCamper, the latest version of the RomCom RAT, is responsible for a wide range of post-compromise activities, including downloading PuTTY's Plink tool to establish remote tunnels with adversary-controlled infrastructure, network reconnaissance, lateral movement, user and system discovery, and data exfiltration.

"This specific series of attacks, targeting high profile Ukrainian entities, is likely meant to serve UAT-5647's two-pronged strategy in a staged manner – establish long-term access and exfiltrate data for as long as possible to support espionage motives, and then potentially pivot to ransomware deployment to disrupt and likely financially gain from the compromise," the researchers said.

"It is also likely that Polish entities were also targeted, based on the keyboard language checks performed by the malware."



from The Hacker News https://ift.tt/W0qncvg
via IFTTT

New in NotebookLM: Customizing your Audio Overviews and introducing NotebookLM Business

NotebookLM is piloting a way for teams to collaborate, and introducing a new way to customize Audio Overviews.

from AI https://ift.tt/WNGeM1R
via IFTTT

Managed IT Services: A Practical Guide for Businesses

In today’s fast-paced, tech-driven world, keeping up with IT demands can feel like trying to catch a moving train. For businesses, especially those without a dedicated IT team, the challenge is real. How do you stay on top of security, software updates, cloud management, and disaster recovery without derailing your core operations? This is where managed IT services come to the rescue. They provide a comprehensive solution for outsourcing IT responsibilities, ensuring smooth operations, enhanced security, and cost savings. This guide will walk you through what managed IT services entail, the types available, their benefits, and challenges.

What Are Managed IT Services?

Managed IT services are a form of full or partial outsourcing where a third-party provider takes responsibility for an organization’s IT infrastructure and operations. Instead of handling IT tasks in-house, you delegate responsibilities like system monitoring, security, and cloud management to the experts. These services are typically provided through a subscription-based model, allowing businesses to focus on their core activities while the managed services provider (MSP) handles IT needs.

Over time, managed IT services have evolved from simple break-fix models – where IT professionals fix issues as they arise – to proactive service management. Now, it’s about prevention. MSPs monitor your infrastructure 24/7, preventing problems before they occur, ensuring better stability, and making IT costs more predictable. They keep your technology running smoothly, so you can focus on growing your business.

Types of Managed IT Services

Managed IT services aren’t one-size-fits-all. Whether you need help with cybersecurity or cloud management, there’s a service to fit your needs. Here’s a breakdown of the most common types:

  • Cloud Services

With the shift to remote work and digital-first strategies, cloud management has become essential. Managed cloud services help businesses move their workloads to the cloud, optimize performance, and ensure secure data storage. MSPs also handle disaster recovery, ensuring your data is safe and accessible no matter what happens.

  • Cybersecurity Services

Given the rising tide of cyber threats, safeguarding your digital assets is non-negotiable. Managed cybersecurity services offer a comprehensive shield – constant monitoring, malware protection, data encryption, and vulnerability assessments. Some MSPs even provide full-fledged Security Operations Centers (SOC), keeping hackers at bay 24/7.

  • Remote Monitoring and Management (RMM)

This is like having a security camera on your IT systems, with an alarm that rings before anything goes wrong. RMM ensures that MSPs are keeping an eye on your networks, servers, and devices to detect and fix issues before they disrupt your operations.

  • Data Backup and Disaster Recovery (BDR)

Imagine losing all your data to a system crash or cyberattack. That’s where BDR comes in. Managed backup services continuously replicate your data, ensuring it’s securely stored and can be restored in the event of a disaster. It’s your safety net, protecting you from catastrophic data loss.

  • On-Site IT Services

Some problems require hands-on expertise. Managed on-site IT services provide field technicians who can handle tasks such as hardware installations, network setup, and maintenance directly at your location.

  • Helpdesk and IT Support

Having a technical issue at 2 PM on a Monday? Managed helpdesk services provide immediate support for your employees, solving IT problems in real-time. It’s like having an in-house IT team without the overhead costs, keeping your team productive with minimal downtime.

  • Project-Based Services

For businesses tackling large-scale IT projects, managed project services provide the needed expertise. Whether it’s implementing a new CRM system or upgrading existing infrastructure, MSPs can manage the project from start to finish, ensuring it’s completed on time and within budget.

  • Communication Services

Unified communication systems are becoming a core part of business IT strategy. Managed communication services integrate voice, video, messaging, and conferencing into one streamlined solution, allowing for seamless communication across teams and devices. MSPs ensure these systems are always available and secure.

  • Analytics Services

Businesses today run on data. Managed analytics services help collect, process, and analyze business data to provide actionable insights. Whether tracking customer behavior, optimizing workflows, or enhancing product performance, MSPs can turn raw data into strategic business intelligence.

  • Managed Print Services (MPS)

For organizations with high printing demands, managed print services take the burden of managing printers off your hands. MSPs handle everything from device setup to maintenance and supply management, ensuring smooth printing operations while reducing operational costs.

Benefits of Managed IT Services

Why are so many businesses turning to managed IT services? The answer lies in the sheer number of advantages they offer. Here’s how outsourcing IT management can work wonders for your business:

  • Cost Reduction

Running an internal IT department is expensive. Between hiring, training, and maintaining staff, costs add up quickly. Managed IT services cut down those costs by offering predictable, subscription-based pricing that covers everything from network monitoring to disaster recovery. Plus, there are no unexpected repair or upgrade fees.

  • Access to Expertise

With an MSP, you don’t just get one IT expert – you get a whole team of professionals with specialized knowledge in various fields. Whether it’s cloud management, security, or software updates, you’ll have access to experts who ensure you’re using the best tools and practices available.

  • Enhanced Security

With cyber threats evolving by the day, having a strong defense is critical. Managed IT services provide comprehensive security measures, including continuous monitoring, firewall management, and advanced threat detection. This means potential risks are identified and neutralized before they cause harm.

  • Operational Efficiency

Managed services take care of routine tasks like updates, backups, and system monitoring, so your internal team can focus on what really matters – driving innovation and strategy. Everything runs more smoothly when professionals are managing the backend.

  • Scalability

As your business grows, your IT needs will change. Managed IT services are designed to be flexible and scalable. Whether you need more bandwidth, additional security measures, or more cloud storage, MSPs can quickly adapt their services to your growing needs.

Challenges of Managed IT Services

Managed IT services are powerful, but they’re not without challenges. Here are some potential pitfalls to consider:

  • Lack of Control

When you outsource IT management, you might feel like you’re handing over control of your operations to a third party. It’s a valid concern – your MSP will be managing everything from your network to your data. However, a strong partnership with clear communication and defined service level agreements (SLAs) can minimize this feeling.

  • Potential for Higher Costs

While managed IT services often lead to cost savings, it’s easy to oversubscribe. If you’re not careful, you could end up paying for services you don’t need. Make sure you’re clear about what’s included in your contract and periodically review your service usage.

  • Dependency on the MSP

Outsourcing IT services can lead to a dependency on the provider. If your MSP doesn’t meet expectations or goes out of business, it could cause major disruptions. This is why it’s important to choose a reliable provider with a strong reputation and solid SLAs.

What Does StarWind Have to Offer?

Although StarWind doesn’t directly provide outsourced IT services, its products are designed to offer similar benefits by streamlining IT operations and reducing the burden on internal teams. Whether it’s StarWind VSAN, StarWind HCI Appliance, StarWind Virtual HCI Appliance, or StarWind Backup Appliance, customers save significant time for their IT staff thanks to the solutions’ user-friendly design and expert installation assistance from StarWind engineers. Support and ongoing maintenance are also handled by StarWind, further reducing the IT team’s responsibilities. With AI-powered telemetry and a proactive “call home” system, StarWind ensures 24/7 monitoring of the HCI infrastructure, preventing issues before they become problems. Rather than simply offering support, StarWind “babysits” the entire IT environment, making the customer’s IT experience smooth and stress-free.



from StarWind Blog https://ift.tt/04qgwiH
via IFTTT

Researchers Uncover Cicada3301 Ransomware Operations and Its Affiliate Program

Oct 17, 2024 | Ravie Lakshmanan | Ransomware / Network Security

Cybersecurity researchers have gleaned additional insights into a nascent ransomware-as-a-service (RaaS) called Cicada3301 after successfully gaining access to the group's affiliate panel on the dark web.

Singapore-headquartered Group-IB said it contacted the threat actor behind the Cicada3301 persona on the RAMP cybercrime forum via the Tox messaging service after the latter put out an advertisement calling for new partners to join its affiliate program.

"Within the dashboard of the Affiliates' panel of Cicada3301 ransomware group contained sections such as Dashboard, News, Companies, Chat Companies, Chat Support, Account, an FAQ section, and Log Out," researchers Nikolay Kichatov and Sharmine Low said in a new analysis published today.

Cicada3301 first came to light in June 2024, with the cybersecurity community uncovering strong source code similarities with the now-defunct BlackCat ransomware group. The RaaS scheme is estimated to have compromised no less than 30 organizations across critical sectors, most of which are located in the U.S. and the U.K.

The Rust-based ransomware is cross-platform, allowing affiliates to target devices running Windows; Linux distributions including Ubuntu, Debian, CentOS, Rocky Linux, Scientific Linux, SUSE, and Fedora; VMware ESXi; NAS devices; and the PowerPC, PowerPC64, and PowerPC64LE architectures.

Like other ransomware strains, Cicada3301 can either fully or partially encrypt files, but not before shutting down virtual machines, inhibiting system recovery, terminating processes and services, and deleting shadow copies. It's also capable of encrypting network shares for maximum impact.

"Cicada3301 runs an affiliate program recruiting penetration testers (pentesters) and access brokers, offering a 20% commission, and providing a web-based panel with extensive features for affiliates," the researchers noted.

A summary of the different sections is as follows -

  • Dashboard - An overview of the successful or failed logins by the affiliate, and the number of companies attacked
  • News - Information about product updates and news of the Cicada3301 ransomware program
  • Companies - Provides options to add victims (i.e., company name, ransom amount demanded, discount expiration date etc.) and create Cicada3301 ransomware builds
  • Chat Companies - An interface to communicate and negotiate with victims
  • Chat Support - An interface for the affiliates to communicate with representatives of the Cicada3301 ransomware group to resolve issues
  • Account - A section devoted to affiliate account management and resetting their password
  • FAQ - Provides details about rules and guides on creating victims in the "Companies" section, configuring the builder, and steps to execute the ransomware on different operating systems

"The Cicada3301 ransomware group has rapidly established itself as a significant threat in the ransomware landscape, due to its sophisticated operations and advanced tooling," the researchers said.

"By leveraging ChaCha20 + RSA encryption and offering a customizable affiliate panel, Cicada3301 enables its affiliates to execute highly targeted attacks. Their approach of exfiltrating data before encryption adds an additional layer of pressure on victims, while the ability to halt virtual machines increases the impact of their attacks."



from The Hacker News https://ift.tt/Qynb2TJ
via IFTTT

New Docker Terraform Provider: Automate, Secure, and Scale with Ease

We’re excited to announce the launch of the Docker Terraform Provider, designed to help users and organizations automate and securely manage their Docker-hosted resources. This includes repositories, teams, organization settings, and more, all using Terraform’s infrastructure-as-code approach. This provider brings a unified, scalable, and secure solution for managing Docker resources in an automated fashion — whether you’re managing a single repository or a large-scale organization.

A new way of working with Docker Hub

The Docker Terraform Provider introduces a new way of working with Docker Hub, enabling infrastructure-as-code best practices that are already widely adopted across cloud-native environments. By integrating Docker Hub with Terraform, organizations can streamline resource management, improve security, and collaborate more effectively, all while ensuring Docker resources remain in sync with other infrastructure components.

The Problem

Managing Docker Hub resources manually can become cumbersome and error-prone, especially as teams grow and projects scale. Without a streamlined, version-controlled system, maintaining configurations can lead to inconsistencies, reduced security, and a lack of collaboration between teams. The Docker Terraform Provider solves this by allowing you to manage Docker Hub resources the same way you manage your other cloud resources, ensuring consistency, auditability, and automation across the board.

The solution

The Docker Terraform Provider offers:

  • Unified management: With this provider, you can manage Docker repositories, teams, users, and organizations in a consistent workflow, using the same code and structure across environments.
  • Version control: Changes to Docker Hub resources are captured in your Terraform configuration, providing a version-controlled, auditable way to manage your Docker infrastructure.
  • Collaboration and automation: Teams can now collaborate seamlessly, automating the provisioning and management of Docker Hub resources with Terraform, enhancing productivity and ensuring best practices are followed.
  • Scalability: Whether you’re managing a few repositories or an entire organization, this provider scales effortlessly to meet your needs.

Example

At Docker, even we faced challenges managing our Docker Hub resources, especially when adding repositories without owner permissions — it was a frustrating, manual process. With the Terraform provider, anyone in the company can create a new repository without having elevated Docker Hub permissions. All levels of employees are now empowered to write code rather than track down coworkers. This streamlines developer workflows with familiar tooling and reduces employee permissions. Security and developers are happy!

Here’s an example where we are managing a repository, an org team, the permissions for the created repo, and a PAT token:

terraform {
  required_providers {
    docker = {
      source  = "docker/docker"
      version = "~> 0.2"
    }
  }
}

# Initialize provider
provider "docker" {}

# Define local variables for customization
locals {
  namespace        = "my-docker-namespace"
  repo_name        = "my-docker-repo"
  org_name         = "my-docker-org"
  team_name        = "my-team"
  my_team_users    = ["user1", "user2"]
  token_label      = "my-pat-token"
  token_scopes     = ["repo:read", "repo:write"]
  permission       = "admin"
}

# Create repository
resource "docker_hub_repository" "org_hub_repo" {
  namespace        = local.namespace
  name             = local.repo_name
  description      = "This is a generic Docker repository."
  full_description = "Full description for the repository."
}

# Create team
resource "docker_org_team" "team" {
  org_name         = local.org_name
  team_name        = local.team_name
  team_description = "Team description goes here."
}

# Team association
resource "docker_org_team_member" "team_membership" {
  for_each = toset(local.my_team_users)

  org_name  = local.org_name
  team_name = docker_org_team.team.team_name
  user_name = each.value
}

# Create repository team permission
resource "docker_hub_repository_team_permission" "repo_permission" {
  repo_id    = docker_hub_repository.org_hub_repo.id
  team_id    = docker_org_team.team.id
  permission = local.permission
}

# Create access token
resource "docker_access_token" "access_token" {
  token_label = local.token_label
  scopes      = local.token_scopes
}
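
With this configuration saved, applying it follows the standard Terraform workflow. A minimal sketch of the commands (the resources created are the ones defined above):

$ terraform init     # downloads the docker/docker provider
$ terraform plan     # preview the repository, team, permission, and token to be created
$ terraform apply    # create the resources in Docker Hub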

Future work

We’re just getting started with the Docker Terraform Provider, and there’s much more to come. Future work will expand support to other products in Docker’s suite, including Docker Scout, Docker Build Cloud, and Testcontainers Cloud. Stay tuned as we continue to evolve and enhance the provider with new features and integrations.

For feedback and issue tracking, visit the official Docker Terraform Provider repository or submit feedback via our issue tracker.

We’re confident this new provider will enhance how teams work with Docker Hub, making it easier to manage, secure, and scale their infrastructure while focusing on what matters most — building great software.



from Docker https://ift.tt/dmj7wOC
via IFTTT

Wednesday, October 16, 2024

This Cybersecurity Awareness Month, Enhance Security with Citrix Zero Trust Application Delivery

As we recognize Cybersecurity Awareness Month this October, it’s the perfect opportunity to reassess how we secure applications in today’s rapidly evolving technology landscape. With the increase of remote work, cloud-based applications, and SaaS platforms, attack surfaces have expanded dramatically. In response, we’re introducing the concept of Zero Trust Application Delivery and Protection (ZTADP)—an approach that integrates zero trust principles with cutting-edge Citrix technologies to safeguard both custom and modern applications.

In this blog, we explore how Citrix’s unique capabilities enable IT admins to securely deliver applications—whether they’re custom-built or modern SaaS-based—anywhere and on any device. We’ll also discuss how Citrix offers unparalleled flexibility by securing all types of applications in a hybrid IT environment.

What is Zero Trust Application Delivery and Protection?

Zero Trust Application Delivery and Protection (ZTADP) is an approach designed to secure every aspect of application access and delivery based on the principle of “never trust, always verify.” This approach continuously verifies every user, device, and session, enforcing least privilege access to minimize the risk of threats.

ZTADP secures all types of applications—whether custom, cloud-based, or SaaS—by delivering them through secure channels, monitoring user activities, and preventing unauthorized access. With ZTADP, applications are protected at every stage, ensuring that every interaction is secure, from access to delivery.

Securing Custom Applications with Citrix Virtual Apps and Desktops

For many enterprises, custom applications remain essential to day-to-day operations. These include legacy systems or proprietary tools specifically tailored to meet unique business needs. However, these applications were often not built with modern security requirements in mind. This is where Citrix Virtual Apps and Desktops (CVAD) steps in, allowing organizations to virtualize and securely deliver custom applications without requiring rewrites or replacements, while enhancing monitoring through observability technologies and tools like UberAgent.

Common Examples of Custom Applications:

  • ERP Systems (e.g., SAP, Oracle E-Business Suite): Many organizations have custom workflows and configurations within their ERP systems, which are mission-critical for managing supply chains, inventory, or financials.
  • Healthcare Applications (e.g., custom EHR/EMR systems): Hospitals and healthcare providers often rely on custom-built Electronic Health Record (EHR) systems that store sensitive patient data, making security paramount.
  • Manufacturing Control Systems: Manufacturing companies often use custom applications to control machinery, manage production lines, or track operations. These applications typically run on legacy systems that require secure remote access.
  • Banking Software: Banks and financial institutions use custom core banking applications that are central to managing transactions, client records, and regulatory compliance.
  • Government/Legal Applications: Custom software used for case management, court scheduling, or regulatory compliance in government agencies, which must meet strict security and privacy requirements.

How Citrix Secures Custom Applications:

  1. Isolation and Virtualization: Citrix virtualizes these critical custom applications, delivering them through a secure, isolated environment. The application runs on a secure server and is accessed via encrypted channels, reducing the risk of endpoint compromise.
  2. Centralized Management: With Citrix, custom applications are centrally managed, making it easy to apply security patches, updates, and configurations. This ensures security best practices are consistently applied across the enterprise, regardless of where the application is accessed.
  3. Granular Access Control: By integrating Citrix’s access control technologies, administrators can enforce role-based access and apply contextual security policies based on factors such as device compliance and user behavior. For example, only authorized healthcare professionals can access patient records in custom EHR systems based on their role and device security status.
  4. Data Protection: Data from custom applications like ERP systems, banking platforms, or healthcare systems is processed and stored on secure servers, ensuring that sensitive information remains protected, even if the end user’s device is compromised.

Enhancing Security with Observability and UberAgent:

Citrix’s observability technologies provide visibility into how applications are being used, offering deep insights into application performance, security posture, and user behavior. When combined with UberAgent, these tools provide a detailed understanding of how custom applications are accessed and utilized, allowing IT administrators to:

  • Monitor performance metrics: Track resource usage, application responsiveness, and user activity, ensuring that custom applications are performing optimally.
  • Identify potential security threats: Detect anomalies in user behavior or access patterns that may indicate a security breach, and respond before an attack escalates.
  • Optimize resource allocation: By monitoring which resources are being consumed by specific applications, IT teams can optimize performance and troubleshoot issues more effectively.
  • Improve endpoint visibility: UberAgent offers real-time insights into endpoint health, usage patterns, and application performance, ensuring that devices accessing custom applications are compliant with security policies and standards.

By delivering custom applications through Citrix Virtual Apps and Desktops and leveraging observability tools and UberAgent, organizations can not only extend the life of critical systems but also gain real-time insights into the performance, security, and usage of these applications.

Incorporating observability and UberAgent helps IT admins and security teams monitor and secure custom applications in real-time, ensuring they operate securely and efficiently across diverse environments. This combination of technologies allows organizations to stay ahead of potential issues while keeping their custom applications secure against modern threats.

Delivering modern applications with Zero Trust Network Access and an Enterprise Browser

While custom applications are still in use, today’s enterprise environment increasingly revolves around cloud-based and SaaS applications. These applications are highly accessible, flexible, and support remote work environments—but they also introduce significant security risks, such as SaaS SSO-based attacks and unauthorized data access.

Citrix Secure Private Access and Citrix Enterprise Browser are designed to protect modern applications by providing a zero trust framework that enforces strict access controls, session monitoring, and data protection.

Citrix SPA: Enhancing Application Security with Zero Trust and Contextual Access Control

  • Contextual Access Control: Citrix Secure Private Access ensures that every user and device is continuously verified before being granted access to applications. It does this by leveraging contextual information such as user behavior, device health, and location.
  • Zero Trust Network Access (ZTNA): Unlike traditional VPNs that grant broad access to the network, Citrix Secure Private Access provides Zero Trust Network Access (ZTNA). This means that users can only access the specific applications they are authorized to use, minimizing the risk of lateral movement if a user’s credentials are compromised.
  • Protection against SaaS SSO-based attacks: Citrix Secure Private Access protects against common SaaS SSO-based attacks, such as phishing and credential stuffing, by enforcing multi-factor authentication (MFA) and continuously validating session security. If an anomalous login attempt is detected, access can be revoked in real time.

Citrix Enterprise Browser: Isolating SaaS Applications for Enhanced Security

  • Secure Browser Isolation: Citrix Enterprise Browser creates a secure, isolated environment where users can access SaaS applications. This isolation prevents malware from spreading beyond the browser and ensures that SaaS applications are protected from browser-based threats.
  • Granular control: Administrators can control exactly what users can do within the browser, such as restricting downloads, file uploads, or copy/paste functions. This helps protect against data exfiltration and insider threats.
  • Real-time monitoring and reporting: Citrix Enterprise Browser monitors user sessions and provides detailed reports on activity, allowing administrators to detect and respond to suspicious behavior quickly.

Citrix: The only solution that secures both custom and modern applications

In today’s hybrid enterprise, businesses are running a mix of custom and modern applications, both on-premises and in the cloud. Citrix is unique in its ability to offer a complete solution that secures both custom and modern applications under a single, unified platform. Here’s why Citrix stands out:

  1. Custom Application security: With Citrix Virtual Apps and Desktops, custom applications are virtualized and securely delivered to any device, ensuring they remain protected in the modern IT landscape.
  2. SaaS application security: Through Citrix Secure Private Access and Citrix Enterprise Browser, Citrix secures access to modern SaaS applications and cloud services, applying zero trust principles to every interaction.
  3. Flexible Deployment: Whether your applications are hosted on-premises, in the cloud, or delivered as SaaS, Citrix provides the tools to secure them. This flexibility allows you to adapt as your business evolves, ensuring security remains strong across all environments.
  4. Unified Security Framework: Citrix provides a single platform to enforce security policies, access controls, and monitoring across all applications, reducing complexity and improving the overall security posture of the enterprise.

Addressing Modern Security Requirements with Observability and Analytics

Citrix’s security solutions are bolstered by observability technologies that provide real-time visibility into application performance and security. For example:

  • Real-time analytics: Citrix offers monitoring and reporting capabilities that allow administrators to detect anomalies, investigate security events, and respond quickly to threats.
  • Behavioral insights: With advanced user and entity behavior analytics (UEBA), Citrix helps detect abnormal behavior patterns, providing early indicators of potential breaches or insider threats.
  • Endpoint observability: Technologies provide visibility into user devices, including resource consumption, performance, and security compliance, ensuring that endpoints remain secure.

These insights enable businesses to continuously monitor their security landscape, ensuring that modern applications and custom apps are protected from emerging threats.

Take action this October: Implement zero trust application delivery and protection 

As we recognize Cybersecurity Awareness Month, now is the time to take a proactive approach to securing any kind of application. By adopting zero trust application delivery and protection with Citrix, your organization can ensure that applications remain secure, no matter where they are hosted or how they are accessed. Start by:

  • Deploying Citrix Virtual Apps and Desktops to securely deliver custom, web, and SaaS applications.
  • Implementing Citrix Secure Private Access to provide zero trust security for both cloud and on-premises applications.
  • Leveraging Citrix Enterprise Browser to secure access to modern SaaS applications with browser isolation and granular controls.

Learn more about our security use cases, such as protecting sensitive business information, delivering mission-critical applications, and application performance and security. Discover how Citrix can help you enhance your IT security.



from Citrix Blogs https://ift.tt/qgZIhBD
via IFTTT

Hackers Abuse EDRSilencer Tool to Bypass Security and Hide Malicious Activity

Oct 16, 2024 | Ravie Lakshmanan | Endpoint Security / Malware

Threat actors are attempting to abuse the open-source EDRSilencer tool as part of efforts to tamper with endpoint detection and response (EDR) solutions and hide malicious activity.

Trend Micro said it detected "threat actors attempting to integrate EDRSilencer in their attacks, repurposing it as a means of evading detection."

EDRSilencer, inspired by the NightHawk FireBlock tool from MDSec, is designed to block outbound traffic of running EDR processes using the Windows Filtering Platform (WFP).

It supports terminating various processes related to EDR products from Microsoft, Elastic, Trellix, Qualys, SentinelOne, Cybereason, Broadcom Carbon Black, Tanium, Palo Alto Networks, Fortinet, Cisco, ESET, HarfangLab, and Trend Micro.

By incorporating such legitimate red teaming tools into their arsenal, attackers aim to render EDR software ineffective and make it a lot more challenging to identify and remove malware.

"The WFP is a powerful framework built into Windows for creating network filtering and security applications," Trend Micro researchers said. "It provides APIs for developers to define custom rules to monitor, block, or modify network traffic based on various criteria, such as IP addresses, ports, protocols, and applications."

"WFP is used in firewalls, antivirus software, and other security solutions to protect systems and networks."

EDRSilencer takes advantage of WFP by dynamically identifying running EDR processes and creating persistent WFP filters to block their outbound network communications on both IPv4 and IPv6, thereby preventing security software from sending telemetry to their management consoles.

The attack essentially works by scanning the system to gather a list of running processes associated with common EDR products, followed by running EDRSilencer with the argument "blockedr" (e.g., EDRSilencer.exe blockedr) to inhibit outbound traffic from those processes by configuring WFP filters.

"This allows malware or other malicious activities to remain undetected, increasing the potential for successful attacks without detection or intervention," the researchers said. "This highlights the ongoing trend of threat actors seeking more effective tools for their attacks, especially those designed to disable antivirus and EDR solutions."

The development comes as ransomware groups' use of formidable EDR-killing tools like AuKill (aka AvNeutralizer), EDRKillShifter, TrueSightKiller, GhostDriver, and Terminator is on the rise, with these programs weaponizing vulnerable drivers to escalate privileges and terminate security-related processes.

"EDRKillShifter enhances persistence mechanisms by employing techniques that ensure its continuous presence within the system, even after initial compromises are discovered and cleaned," Trend Micro said in a recent analysis.

"It dynamically disrupts security processes in real-time and adapts its methods as detection capabilities evolve, staying a step ahead of traditional EDR tools."



from The Hacker News https://ift.tt/RaqbMdO
via IFTTT

Docker Best Practices: Using ARG and ENV in Your Dockerfiles

If you’ve worked with Docker for any length of time, you’re likely accustomed to writing or at least modifying a Dockerfile. This file can be thought of as a recipe for a Docker image; it contains both the ingredients (base images, packages, files) and the instructions (various RUN, COPY, and other commands that help build the image).

In most cases, Dockerfiles are written once, modified seldom, and used as-is unless something about the project changes. Because these files are created or modified on such an infrequent basis, developers tend to rely on only a handful of frequently used instructions — RUN, COPY, and EXPOSE being the most common. Other instructions can enhance your image, making it more configurable, manageable, and easier to maintain. 

In this post, we will discuss the ARG and ENV instructions and explore why, how, and when to use them.

ARG: Defining build-time variables

The ARG instruction allows you to define variables that are accessible during the build stage but not available after the image is built. For example, we will use this Dockerfile to build an image in which the variable specified by the ARG instruction is available during the build process.

FROM ubuntu:latest
ARG THEARG="foo"
RUN echo $THEARG
CMD ["env"]

If we run the build, we will see the echo foo line in the output:

$ docker build --no-cache -t argtest .
[+] Building 0.4s (6/6) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

However, if we run the image and inspect the output of the env command, we do not see THEARG:

$ docker run --rm argtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d19f59677dcd
HOME=/root

ENV: Defining build and runtime variables

Unlike ARG, the ENV command allows you to define a variable that can be accessed both at build time and run time:

FROM ubuntu:latest
ENV THEENV="bar"
RUN echo $THEENV
CMD ["env"]

If we run the build, we will see the echo bar line in the output:

$ docker build -t envtest .
[+] Building 0.8s (7/7) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo bar                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

If we run the image and inspect the output of the env command, we do see THEENV set, as expected:

$ docker run --rm envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=f53f1d9712a9
THEENV=bar
HOME=/root

Overriding ARG

A more advanced use of the ARG instruction is to serve as a placeholder that is then updated at build time:

FROM ubuntu:latest
ARG THEARG
RUN echo $THEARG
CMD ["env"]

If we build the image, we see that we are missing a value for $THEARG:

$ docker build -t argtest .
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo $THEARG                                                                                           0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

However, we can pass a value for THEARG on the build command line using the --build-arg argument. Notice that we now see THEARG has been replaced with foo in the output:

$ docker build --build-arg THEARG=foo -t argtest .
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

The same can be done in a Docker Compose file by using the args key under the build key. Note that these can be set as a mapping (THEARG: foo) or a list (- THEARG=foo):

services:
  argtest:
    build:
      context: .
      args:
        THEARG: foo
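
The same configuration using the list form looks like this:

services:
  argtest:
    build:
      context: .
      args:
        - THEARG=foo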

If we run docker compose up --build, we can see the THEARG has been replaced with foo in the output:

$ docker compose up --build
<-- SNIP -->
 => [argtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [argtest 2/2] RUN echo foo                                                                                0.0s
 => [argtest] exporting to image                                                                                     0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->
Attaching to argtest-1
argtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
argtest-1  | HOSTNAME=d9a3789ac47a
argtest-1  | HOME=/root
argtest-1 exited with code 0

Overriding ENV

You can also override ENV variables, though this works slightly differently than for ARG. For example, you cannot supply a key without a value with the ENV instruction, as shown in the following example Dockerfile:

FROM ubuntu:latest
ENV THEENV
RUN echo $THEENV
CMD ["env"]

When we try to build the image, we receive an error:

$ docker build -t envtest .
[+] Building 0.0s (1/1) FINISHED                                                                     docker:desktop-linux
 => [internal] load build definition from Dockerfile                                                                 0.0s
 => => transferring dockerfile: 98B                                                                                  0.0s
Dockerfile:3
--------------------
   1 |     FROM ubuntu:latest
   2 |
   3 | >>> ENV THEENV
   4 |     RUN echo $THEENV
   5 |
--------------------
ERROR: failed to solve: ENV must have two arguments

However, we can remove the ENV instruction from the Dockerfile:

FROM ubuntu:latest
RUN echo $THEENV
CMD ["env"]

This allows us to build the image:

$ docker build -t envtest .
<-- SNIP -->
 => [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [2/2] RUN echo $THEENV                                                                                    0.0s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

Then we can pass an environment variable via the docker run command using the -e flag:

$ docker run --rm -e THEENV=bar envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=638cf682d61f
THEENV=bar
HOME=/root

Although the .env file is usually associated with Docker Compose, it can also be used with docker run.

$ cat .env
THEENV=bar

$ docker run --rm --env-file ./.env envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=59efe1003811
THEENV=bar
HOME=/root

This can also be done using Docker Compose by using the environment key. Note that we use the variable format for the value:

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV}

If we do not supply a value for THEENV, a warning is thrown:

$ docker compose up --build
WARN[0000] The "THEENV" variable is not set. Defaulting to a blank string.
<-- SNIP -->
 => [envtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [envtest 2/2] RUN echo ${THEENV}                                                                          0.0s
 => [envtest] exporting to image                                                                                     0.0s
<-- SNIP -->
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=816d164dc067
envtest-1  | THEENV=
envtest-1  | HOME=/root
envtest-1 exited with code 0
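
If you want a fallback other than a blank string, Compose also supports shell-style defaults in its variable substitution. A minimal sketch, assuming your Compose version supports the standard ${VAR:-default} syntax (defaultvalue is a placeholder):

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV:-defaultvalue}

With this in place, running docker compose up without setting THEENV should produce THEENV=defaultvalue in the container instead of an empty string, and the warning should go away.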

The value for our variable can be supplied in several different ways, as follows:

  • On the compose command line:
$ THEENV=bar docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the shell environment on the host system:
$ export THEENV=bar
$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the special .env file:
$ cat .env
THEENV=bar

$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0

Finally, when running services directly using docker compose run, you can use the -e flag to override the .env file.

$ docker compose run -e THEENV=bar envtest

[+] Creating 1/0
 ✔ Synchronized File Shares                                                                                          0.0s
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=219e96494ddd
TERM=xterm
THEENV=bar
HOME=/root

The tl;dr

If you need to access a variable during the build process but not at runtime, use ARG. If you need to access the variable both during the build and at runtime, or only at runtime, use ENV.
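
If you need the same value in both phases, a common pattern is to accept it as a build argument and promote it to an environment variable. A minimal sketch, using a hypothetical APP_VERSION variable rather than the examples above:

FROM ubuntu:latest
# Build-time input with a default, overridable via --build-arg
ARG APP_VERSION=0.0.0
# Promote the build argument to a runtime environment variable
ENV APP_VERSION=${APP_VERSION}
CMD ["env"]

Building with docker build --build-arg APP_VERSION=1.2.3 -t argenv . and then running docker run --rm argenv should show APP_VERSION=1.2.3 in the container's environment, because the build-time value was baked into an ENV instruction.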

To decide between them, consider the following flow (Figure 1):

[Figure 1: Decision flow for choosing between ARG and ENV]

Both ARG and ENV can be overridden from the command line in docker run and docker compose, giving you a powerful way to dynamically update variables and build flexible workflows.




from Docker https://ift.tt/Cb6Qs8o
via IFTTT

FIDO Alliance Drafts New Protocol to Simplify Passkey Transfers Across Different Platforms

Oct 16, 2024Ravie LakshmananData Privacy / Passwordless

The FIDO Alliance said it's working to make passkeys and other credentials easier to export across different providers and to improve credential provider interoperability, as more than 12 billion online accounts become accessible with the passwordless sign-in method.

To that end, the alliance said it has published a draft for a new set of specifications for secure credential exchange, following commitments among members of its Credential Provider Special Interest Group (SIG).

This includes 1Password, Apple, Bitwarden, Dashlane, Enpass, Google, Microsoft, NordPass, Okta, Samsung, and SK Telecom.

"Secure credential exchange is a focus for the FIDO Alliance because it can help further accelerate passkey adoption and enhance user experience," the FIDO Alliance said in a statement.

"Sign-ins with passkeys reduce phishing and eliminate credential reuse while making sign-ins up to 75% faster, and 20% more successful than passwords or passwords plus a second factor like SMS OTP."

While passkeys have the advantage of being secure and phishing-resistant, they are essentially locked into the operating system or the password manager service, making it impossible to transfer them when switching platforms and therefore requiring users to create new passkeys per device.

The new specification proposed by the FIDO Alliance aims to address this gap with the Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF).

They "define a standard format for transferring credentials in a credential manager including passwords, passkeys, and more to another provider in a manner that ensures transfer are not made in the clear and are secure by default," it said.

The development comes as Amazon revealed that more than 175 million customers have enabled passkeys on their accounts, nearly one year after the initial rollout.

"Passkeys fundamentally shift the way we sign in to our online accounts for the better — and seeing Amazon roll out passkeys is evidence of its commitment to its customers' time, experiences, and security across Amazon web and mobile shopping experiences," said Andrew Shikiar, chief executive officer of FIDO Alliance.




from The Hacker News https://ift.tt/oCpXwqJ
via IFTTT

Protecting major events: An incident response blueprint


Ensuring the cybersecurity of major events — whether it’s sports, professional conferences, expos, inter-government meetings or other gatherings — is a complex and time-intensive task.  

It requires a comprehensive approach and collaboration among various stakeholders, including vendors, hospitality teams, and service providers, to establish a consistent cybersecurity strategy across the entire event ecosystem. 

In our latest version of the “Protecting major events: An incident response blueprint” whitepaper, Cisco Talos Incident Response outlines the essential steps organizations should take to secure any major event. This paper highlights 13 critical focus areas that will guide organizing committees and participating businesses, offering key questions and actionable answers to help ensure robust event security. 



from Cisco Talos Blog https://ift.tt/wCTKpXr
via IFTTT

How Docker IT Streamlined Docker Desktop Deployment Across the Global Team

At Docker, innovation and efficiency are integral to how we operate. When our own IT team needed to deploy Docker Desktop to various teams, including non-engineering roles like customer support and technical sales, the existing process was functional but manual and time-consuming. Recognizing the need for a more streamlined and secure approach, we leveraged new Early Access (EA) Docker Business features to refine our deployment strategy.


A seamless deployment process

Faced with the challenge of managing diverse requirements across the organization, we knew it was time to enhance our deployment methods.

The Docker IT team transitioned from using registry.json files to a more efficient method involving registry keys and new MSI installers for Windows, along with configuration profiles and PKG installers for macOS. This transition simplified deployment, provided better control for admins, and allowed for faster rollouts across the organization.
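
As a rough sketch of what the Windows side of that transition can look like (the MSI property and registry value names below are drawn from Docker's documented login-enforcement options, but verify them against the current docs; your-org is a placeholder):

:: Quiet MSI install that enforces sign-in to an approved organization
msiexec /i "DockerDesktop.msi" /quiet /norestart ALLOWEDORG="your-org"

:: Or set the equivalent policy directly as a registry key
reg add "HKLM\SOFTWARE\Policies\Docker\Docker Desktop" /v allowedOrgs /t REG_MULTI_SZ /d "your-org"

On macOS, the analogous settings are delivered as a configuration profile pushed through an MDM tool such as Jamf.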

“From setup to deployment, it took 24 hours. We started on a Monday morning, and by the next day, it was done,” explains Jeffrey Strauss, Head of Docker IT. 

Enhancing security and visibility

Security is always a priority. By integrating login enforcement with single sign-on (SSO) and System for Cross-domain Identity Management (SCIM), Docker IT ensured centralized control and compliance with security policies. The Docker Desktop Insights Dashboard (EA) offered crucial visibility into how Docker Desktop was being used across the organization: Admins could now see which versions were installed and monitor container usage, enabling informed decisions about updates, resource allocation, and compliance.

Steven Novick, Docker’s Principal Product Manager, emphasized, “With the new solution, deployment was simpler and tamper-proof, giving a clear picture of Docker usage within the organization.”

Benefits beyond deployment

The improvements made by Docker IT extended beyond just deployment efficiency:

  • Improved visibility: The Insights Dashboard provided detailed data on Docker usage, helping ensure all users are connected to the organization.
  • Efficient deployment: Docker Desktop was deployed to hundreds of computers within 24 hours, significantly reducing administrative overhead.
  • Enhanced security: Centralized control to enforce authentication via MDM tools like Intune for Windows and Jamf for macOS strengthened security and compliance.
  • Seamless user experience: Early and transparent communication ensured a smooth transition, minimizing disruptions.

Looking ahead

The successful deployment of Docker Desktop within 24 hours demonstrates Docker’s commitment to continuous improvement and innovation. We are excited about the future developments in Docker Desktop management and look forward to supporting our customers as they achieve their goals with Docker. 

Existing Docker Business customers can learn more about access and timelines by contacting their account reps. The Insights Dashboard is only available in Early Access to select Docker Business customers with enforced authentication for organization users.

Curious about how Docker’s new features can benefit your team? Get in touch to discover more or explore our customer stories to see how others are succeeding with Docker.




from Docker https://ift.tt/PTYNe9w
via IFTTT

Tuesday, October 15, 2024

How Low Can You Go? An Analysis of 2023 Time-to-Exploit Trends

Written by: Casey Charrier, Robert Weiner


TTE 2023 executive summary

Mandiant analyzed 138 vulnerabilities that were disclosed in 2023 and that we tracked as exploited in the wild. Consistent with past analyses, the majority (97) of these vulnerabilities were exploited as zero-days (vulnerabilities exploited before patches are made available, excluding end-of-life technologies). Forty-one vulnerabilities were exploited as n-days (vulnerabilities first exploited after patches are available). While we have previously seen and continue to expect a growing use of zero-days over time, in 2023 the gap between zero-day and n-day exploitation widened further, with zero-day exploitation outpacing n-day exploitation more heavily than we had previously observed.

While our data is based on reliable observations, we note that the numbers are conservative estimates as we rely on the first reported exploitation of a vulnerability. Frequently, first exploitation dates are not publicly disclosed or are given vague timeframes (e.g., "mid-July" or "Q2 2023"), in which case we assume the latest plausible date. It is also likely that undiscovered exploitation has occurred. Therefore, actual times to exploit are almost certainly earlier than this data suggests.

Exploitation Timelines

Time-to-Exploit

Time-to-exploit (TTE) is our metric for the average time taken to exploit a vulnerability before or after a patch is released. Historically, our analyses have seen times-to-exploit shrink year over year. From 2018 to 2019, we observed an average TTE of 63 days. From 2020 to the start of 2021, that number decreased to 44 days. Then, across all of 2021 and 2022, the average observed TTE dropped further to 32 days, already half of our first tracked TTE from 2018. In 2023, we observed the largest drop in TTE thus far, with an average of just five days, less than a sixth of the previously observed TTE.

Our average TTE excludes 15 total data points, including two n-days and 13 zero-days, that we identified as outliers from a standard deviation-based statistical analysis. Without the removal of these outlier TTEs, the average grows from five to 47.

Zero-Day vs. N-day Exploitation

Prior to 2023, we had observed steady ratios of n-days to zero-days: 38:62 across 2021–2022 and 39:61 across 2020 into part of 2021. However, in 2023, this ratio shifted to 30:70, a notable departure from what we had observed previously. Given that zero-day exploitation has risen steadily over the years, the shifting ratio appears to be driven more by the recent increase in zero-day usage and detection than by a drop in n-day usage. It is also possible that actors had a larger number of successful attempts to exploit zero-days in 2023. Future data and analyses will show whether this is the start of a noticeable shift, or if 2023 is a one-off in this regard.

[Figure: 2023 zero-day vs. n-day exploitation]

N-Day Exploitation

Consistent with our last analysis, we found that exploitation was most likely to occur within the first month of a patch being made available for an already disclosed vulnerability. Twelve percent (5) of n-days were exploited within one day, 29% (12) were exploited within one week, and over half (56%) were exploited within one month. In our last report, 25% of the n-day vulnerabilities were exploited after the six-month mark. In 2023, all but two (5%) n-days were exploited within six months.

[Figure: N-day exploitation timeline]

Disclosure to Exploit to Exploitation Timelines

Of the analyzed vulnerabilities, 41 (30%) were first exploited after the vulnerability's public disclosure. This section focuses on this subset of vulnerabilities. While we have looked for associations between exploit availability and exploitation timelines, Mandiant has continued not to observe a correlation between the two. It may be tempting to assume a relationship between these data points; however, our longer-term analysis shows a distinct lack of association.

[Figure: First exploit release prior to exploitation vs. after exploitation]

For vulnerabilities with exploits available prior to exploitation, we observed a median of 7 days from the date of disclosure (DoD) to the first public exploit release, and a median of 30 days from the exploit's release date to the date of first known exploitation. The median time from disclosure to exploitation of these vulnerabilities was 43 days.

For vulnerabilities with exploits first made available after exploitation, we observed a median time of 15 days from disclosure to exploitation. The median time from exploitation to a publicly available exploit was four days, and the median timeline from disclosure to exploit release was 23 days.

These statistics are consistent with our past analyses, which likewise found no deterministic relationship between the availability of an exploit and in-the-wild exploitation. We continue to find this true while also noting that other factors affect the exploitation timeline of a given vulnerability, including, but not limited to, exploitation value and exploitation difficulty. To highlight one of these factors, we note that of the vulnerabilities disclosed in 2023 that received media coverage, 58% are not known to be exploited in the wild, and of those with at least one public proof of concept (PoC) or exploit, 72% are not known to be exploited in the wild. The following two examples demonstrate the variance in how much effect an exploit's release can have on the time to in-the-wild exploitation, and illustrate other potential influences on exploitation.

CVE-2023-28121 Use Case

CVE-2023-28121 is an improper authentication vulnerability affecting the WooCommerce Payments plugin for WordPress. This vulnerability was disclosed on March 23, 2023, and did not receive its first proof of concept or even technical details until three and a half months later, on July 3, when a blog post was published outlining how to create an Administrator user without prior authentication. This was quickly followed by a Metasploit module, released on July 4, with the ability to scan for the vulnerability and exploit it to create a new Administrator user. No exploitation activity was seen immediately following the release of this PoC or Metasploit module. Instead, exploitation activity is first known to have begun on July 14, soon after another weaponized exploit was released. That exploit was first released on July 11, with an upgraded version released on July 13. Both versions have the capability to exploit an arbitrary number of vulnerable systems in order to create a new Administrator user. Wordfence later reported that the exploitation campaign began on July 14 and that activity peaked on July 16, with 1.3 million attacks observed on that day alone.

[Figure: CVE-2023-28121 timeline]

This vulnerability's timeline highlights a period of over three months where exploitation did not occur following disclosure; however, large-scale exploitation began 10 days after the first exploit was released and only three days after a second exploit with mass-exploitation capabilities was released. In this case, we can see that there is likely an increased motivation for a threat actor to exploit this vulnerability due to a functional, large-scale, and reliable exploit being made publicly available.

CVE-2023-27997 Use Case

CVE-2023-27997, also known as XORtigate, is a heap-based buffer overflow in the Secure Sockets Layer (SSL) / virtual private network (VPN) component of Fortinet FortiOS. This vulnerability was disclosed on June 11, 2023, and immediately received significant media attention, being named XORtigate before Fortinet had even released its official security advisory on June 12. The disclosure was quickly followed on June 13 by two blog posts containing PoCs and one since-deleted non-weaponized exploit on GitHub. By June 16, proof-of-concept code, scanners, and weaponized exploit code were all publicly available. While exploitation could be expected to swiftly follow the immediate attention and exploits released, it was not until around three months after disclosure, on Sept. 12, that Mandiant first observed exploitation activity. Exploitation of this vulnerability is only known by Mandiant to have been performed in relatively limited and targeted campaigns. In this case, we see that public interest and exploit availability did not appear to impact the timeline of exploitation.

[Figure: CVE-2023-27997 timeline]

Use Case Comparison

One of the most likely reasons for the difference in observed timelines we see here is the difference in the reliability and ease of exploitation between the two vulnerabilities. CVE-2023-28121, which was exploited soon after exploits became available, is quite simple to exploit, requiring just one specific HTTP header to be set on an otherwise normally formatted web request. This makes large-scale and automated exploitation campaigns more plausible. On the other hand, CVE-2023-27997 requires exploiting a heap-based buffer overflow against systems which typically have several standard and non-standard protections, including data execution prevention (DEP) and address space layout randomization (ASLR), as well as navigating the logic of a custom hashing and XORing mechanism. When considering the multiple complexities involved in addition to the fact that targeted systems would likely already have multiple mitigations in place, we can see how much less time-efficient and reliable exploitation of this vulnerability would be.

The other potential factor we identified is the difference in the value provided to an attacker by exploiting the affected products. FortiOS is a security-focused product that is typically deployed, often with significant privileges, within highly sensitive environments. Exploitation of CVE-2023-27997 could therefore provide an attacker with those same privileges, furthering the potential damage an attacker could cause. WooCommerce Payments is one of the most popular WordPress plugins, and exploitation of CVE-2023-28121 can lead to complete access to the underlying web server that the plugin runs on. However, these web servers typically sit within demilitarized zones (DMZs) or other low-privileged network segments and thus present limited value to an attacker looking to exploit the larger organization the plugin runs within. This suggests that intended utilization is a driving consideration for an adversary. Directing more energy toward exploit development of the more difficult yet "more valuable" vulnerability is logical if it better aligns with their objectives, whereas the easier-to-exploit, "less valuable" vulnerability may present more value to more opportunistic adversaries.

Exploited Vulnerabilities by Vendor

Exploited vendors continue to grow in both number and variety. In 2023, we saw a 17% increase over our previous highest exploited-vendor count, set in 2021. In recent years, Microsoft, Apple, and Google have been the most exploited vendors year over year. However, their share of exploited vulnerabilities has continued to decrease, falling just below 40% this past year, roughly a 10-percentage-point drop from the just under 50% we saw across 2021–2022. Additionally, this is one of the first times in years that one of the three only narrowly held a top spot: Google had eight vulnerabilities exploited, while Adobe, the fourth most exploited vendor, had six. Further, 31 of the 53 exploited vendors (58%) had only one vulnerability exploited. Attackers are diversifying their targets and seeing success in doing so. As variance in targeted products continues to grow along with exploitation frequency, defenders must meet the challenge of protecting sprawling attack surfaces.

[Figure: Number of vendors exploited by year]

We note that the total number of exploited vulnerabilities affecting a vendor does not directly reflect that vendor's security posture, nor does it signify that the vendor is "less secure" than its competitors. Ubiquity of product use and the breadth of a vendor's product offerings both affect the numbers we see. Given the challenge of defending such diversified systems and networks, learning from best practices across industries will yield some of the best approaches to preventing successful exploitation.

Implications

As the number of discovered vulnerabilities grows over time, threat actors are provided with more opportunities to take advantage of these weaknesses. Mandiant has found that exploits, for both zero-days and n-days, were the number one initial infection vector in Mandiant Incident Response (IR) engagements in 2020, 2021, 2022, and 2023. This pushes defenders to provide efficient detection and response and to adapt to events in real time. Further, patching prioritization is increasingly difficult as n-days are exploited more quickly and in a greater variety of products. This increase in available technologies expands attack surfaces, reinforcing the importance of considering how a single vulnerable technology could affect systems and networks laterally. Segmented architectures and access control implementations should be prioritized to limit the extent of impacted systems and data when exploitation does occur.

After multiple years of tracking observed TTEs, we can see that the numbers fall drastically with each analysis. Just five to six years ago, we observed an average TTE of 63 days; that number has now fallen to five days. While better and more widespread threat detection capabilities likely contribute to the growing exploitation numbers, our data clearly shows that attackers can move quickly enough to beat patching cycles. As threat actors shorten TTEs and have more success with zero-day exploitation, delaying patching and exposing insufficiently protected attack surfaces heightens the chance of successful attacks.

Our data has continued to show that exploit release and media attention are not predictive of exploitation timelines. While in some cases these data points are correlated, the trends do not currently show that these factors should dictate prioritization or constitute an elevated response to a given vulnerability. Exploit release and the attention received by a vulnerability should be taken into account; however, they should be considered heuristic data points alongside other factors such as the difficulty of exploitation and the value exploitation may present.

Outlook

Based on our analyses, we know that zero-day exploitation remains a coveted approach for threat actors. If zero-day exploitation continues to outnumber n-day exploitation while n-day exploitation continues to occur more quickly following disclosure, we could expect the average TTE to fall further in the future. Additionally, because zero-day discovery is more difficult, there is room for growing numbers of exploited vulnerabilities over time as detection tools continue improving and become more widespread.

We do not expect n-day usage to drop significantly, nor the number of targeted vendors to decrease over the coming years. We expect threat actors to continue using both n-days and zero-days and to expand exploitation across more vendors and products, with quicker exploitation timelines across a larger span of targets.

It is important to note that the increased ratio of zero-day exploitation and the generally shrinking timelines do not imply that threat actors will stop targeting n-days. We have seen, many times over, how threat actors will utilize vulnerabilities months or years after patches have been released.



from Threat Intelligence https://ift.tt/Y7wNVkL
via IFTTT