1. Preparing your environment for installation
Review the following prerequisites before you install orcharhino Proxy Server.
1.1. Operating system requirements
The following operating system is supported for deploying orcharhino:
-
Enterprise Linux 9 (x86_64)
Installing orcharhino on a system with Extra Packages for Enterprise Linux (EPEL) is not supported.
Do not register orcharhino Proxy Server to the Red Hat Content Delivery Network (CDN).
1.2. System requirements
Follow these system requirements when installing orcharhino Proxy Server:
-
Install orcharhino Proxy Server on a freshly provisioned system that serves no other function except to run orcharhino Proxy Server. Do not use an existing system because the orcharhino installer will affect the configuration of several components.
-
Ensure you have administrative user (root) access to the system.
-
Ensure the system meets the following requirements:
-
4 CPU cores
-
12 GB RAM or higher
-
A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-)
-
If you use custom certificates, ensure that the Common Name (CN) of the custom certificate is a fully qualified domain name (FQDN). orcharhino Server and orcharhino Proxy Server do not support shortnames in the hostnames.
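The hostname requirements above can be checked with a small shell helper; this is a sketch, and the function name is illustrative:

```shell
# Sketch: check that a hostname is a fully qualified domain name built from
# lower-case letters, numbers, dots, and hyphens (shortnames are rejected).
valid_proxy_hostname() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$'
}
```

For example, `valid_proxy_hostname "$(hostname -f)" || echo "hostname does not meet the requirements"` flags a shortname or upper-case letters before you start the installation.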
-
Ensure the system clock on the system is synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail.
-
Ensure the system uses the UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale setting. For more information about configuring the system locale in Enterprise Linux, see Configuring the system locale in Red Hat Enterprise Linux 9 Configuring basic system settings.
-
If you use an external identity provider in your deployment, ensure the provider did not create the following user accounts on the system. These user accounts can cause conflicts with the local users that orcharhino Proxy Server creates:
-
apache
-
foreman-proxy
-
postgres
-
pulp
-
puppet
-
redis
Warning: The version of orcharhino Proxy must match the version of orcharhino installed. For example, orcharhino Proxy version 7.4 cannot be registered with orcharhino version 7.3.
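The UTF-8 locale requirement from the system requirements above can also be verified with a small helper; a sketch (the function name is illustrative; on Enterprise Linux 9 the locale is normally changed with localectl set-locale):

```shell
# Sketch: check whether a locale value uses UTF-8 encoding.
is_utf8_locale() {
  case "$1" in
    *[Uu][Tt][Ff]-8|*[Uu][Tt][Ff]8) return 0 ;;
    *) return 1 ;;
  esac
}
```

For example, `is_utf8_locale "$LANG" || echo "set a UTF-8 locale, for example en_US.utf-8"`.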
1.3. Port and firewall requirements
For the components of orcharhino architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls.
The installation of an orcharhino Proxy Server fails if the ports between orcharhino Server and orcharhino Proxy Server are not open before installation starts.
Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol.
orcharhino Server has an integrated orcharhino Proxy and any host that is directly connected to orcharhino Server is a Client of orcharhino in the context of this section. This includes the base operating system on which orcharhino Proxy Server is running.
Hosts which are clients of orcharhino Proxies, other than orcharhino’s integrated orcharhino Proxy, do not need access to orcharhino Server. For more information on orcharhino Topology, see orcharhino Proxy networking in Planning for orcharhino.
Required ports can change based on your configuration.
The following tables indicate the destination port and the direction of network traffic:
| Destination Port | Protocol | Service | Source | Required For | Description |
|---|---|---|---|---|---|
| 53 | TCP and UDP | DNS | DNS Servers and clients | Name resolution | DNS (optional) |
| 67 | UDP | DHCP | Client | Dynamic IP | DHCP (optional) |
| 69 | UDP | TFTP | Client | TFTP Server (optional) | |
| 443, 80 | TCP | HTTPS, HTTP | Client | Content Retrieval | Content |
| 443, 80 | TCP | HTTPS, HTTP | Client | Content Host Registration | orcharhino Proxy CA RPM installation |
| 443 | TCP | HTTPS | orcharhino | Content Mirroring | Management |
| 443 | TCP | HTTPS | orcharhino | orcharhino Proxy API | Smart Proxy functionality |
| 443 | TCP | HTTPS | Client | Content Host registration | Initiation, uploading facts, sending installed packages and traces |
| 1883 | TCP | MQTT | Client | Pull based REX (optional) | Content hosts for REX job notification (optional) |
| 8000 | TCP | HTTP | Client | Provisioning templates | Template retrieval for client installers, iPXE or UEFI HTTP Boot |
| 8000 | TCP | HTTP | Client | PXE Boot | Installation |
| 8140 | TCP | HTTPS | Client | Puppet agent | Client updates (optional) |
| 8443 | TCP | HTTPS | Client | Content Host registration | Deprecated and only needed for Client hosts deployed before upgrades |
| 9090 | TCP | HTTPS | orcharhino | orcharhino Proxy API | orcharhino Proxy functionality |
| 9090 | TCP | HTTPS | Client | Register Endpoint | Client registration with orcharhino Proxy Servers |
| 9090 | TCP | HTTPS | Client | OpenSCAP | Configure Client (if the OpenSCAP plugin is installed) |
| 9090 | TCP | HTTPS | Discovered Node | Discovery | Host discovery and provisioning (if the discovery plugin is installed) |
| 9090 | TCP | HTTPS | Client | Pull based REX (optional) | Content hosts for REX job notification (optional) |
Any host that is directly connected to orcharhino Server is a client in this context because it is a client of the integrated orcharhino Proxy. This includes the base operating system on which an orcharhino Proxy Server is running.
A DHCP orcharhino Proxy performs ICMP ping and TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free.
This behavior can be turned off using orcharhino-installer --foreman-proxy-dhcp-ping-free-ip false.
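Because installation fails when the required ports are closed, it can help to test TCP reachability beforehand. A sketch using the bash /dev/tcp device (the helper name, host name, and port list are illustrative; run it from the machine that needs access, for example from orcharhino Server for port 9090):

```shell
# Sketch: test whether a TCP port on a remote host accepts connections.
check_tcp_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: check the client-facing ports of an orcharhino Proxy.
# for port in 80 443 8000 9090; do
#   check_tcp_port orcharhino-proxy.example.com "$port" || echo "port $port is not reachable"
# done
```

The function exits non-zero for refused or filtered ports, so it can be used directly in scripts or loops.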
| Destination Port | Protocol | Service | Destination | Required For | Description |
|---|---|---|---|---|---|
| | ICMP | ping | Client | DHCP | Free IP checking (optional) |
| 7 | TCP | echo | Client | DHCP | Free IP checking (optional) |
| 22 | TCP | SSH | Target host | Remote execution | Run jobs |
| 53 | TCP and UDP | DNS | DNS Servers on the Internet | DNS Server | Resolve DNS records (optional) |
| 53 | TCP and UDP | DNS | DNS Server | orcharhino Proxy DNS | Validation of DNS conflicts (optional) |
| 68 | UDP | DHCP | Client | Dynamic IP | DHCP (optional) |
| 443 | TCP | HTTPS | orcharhino | orcharhino Proxy | orcharhino Proxy configuration management, template retrieval, OpenSCAP, Remote Execution result upload |
| 443 | TCP | HTTPS | orcharhino | Content | Sync |
| 443 | TCP | HTTPS | orcharhino | Client communication | Forward requests from Client to orcharhino |
| 443 | TCP | HTTPS | Infoblox DHCP Server | DHCP management | When using Infoblox for DHCP, management of the DHCP leases (optional) |
| 623 | | | Client | Power management | BMC On/Off/Cycle/Status |
| 7911 | TCP | DHCP, OMAPI | DHCP Server | DHCP | The DHCP target is configured using ISC and |
| 8443 | TCP | HTTPS | Client | Discovery | orcharhino Proxy sends reboot command to the discovered host (optional) |
Note: ICMP to Port 7 UDP and TCP must not be rejected, but can be dropped. The DHCP orcharhino Proxy sends an ECHO REQUEST to the Client network to verify that an IP address is free. A response prevents IP addresses from being allocated.
1.4. Enabling connections from orcharhino Server and clients to an orcharhino Proxy Server
On the base operating system on which you want to install orcharhino Proxy, you must enable incoming connections from orcharhino Server and clients to orcharhino Proxy Server and make these rules persistent across reboots.
-
Open the ports for clients on orcharhino Proxy Server:
# firewall-cmd \
--add-port="8000/tcp" \
--add-port="9090/tcp"
-
Allow access to services on orcharhino Proxy Server:
# firewall-cmd \
--add-service=dns \
--add-service=dhcp \
--add-service=tftp \
--add-service=http \
--add-service=https \
--add-service=puppetmaster
-
Make the changes persistent:
# firewall-cmd --runtime-to-permanent
-
Verify the firewall settings:
# firewall-cmd --list-all
For more information, see Using and configuring firewalld in Red Hat Enterprise Linux 9 Configuring firewalls and packet filters.
2. Installing orcharhino Proxy Server
Before you install orcharhino Proxy Server, you must ensure that your environment meets the requirements for installation. For more information, see Preparing your Environment for Installation.
2.1. Configuring repositories
Ensure the repositories required to install orcharhino Proxy Server are enabled on your Enterprise Linux host.
2.2. Installing orcharhino Proxy Server packages
Before installing orcharhino Proxy Server packages, you must upgrade all packages that are installed on the base operating system.
-
Upgrade all packages:
# dnf upgrade
-
Install the packages:
# dnf install foreman-installer
2.3. Assigning the correct organization and location to orcharhino Proxy Server in the orcharhino management UI
After installing orcharhino Proxy Server packages, if there is more than one organization or location, you must assign the correct organization and location to orcharhino Proxy to make orcharhino Proxy visible in the orcharhino management UI.
-
Log into the orcharhino management UI.
-
From the Organization list in the upper-left of the screen, select Any Organization.
-
From the Location list in the upper-left of the screen, select Any Location.
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select orcharhino Proxy Server.
-
From the Select Actions list, select Assign Organization.
-
From the Organization list, select the organization where you want to assign this orcharhino Proxy.
-
Click Fix Organization on Mismatch.
-
Click Submit.
-
Select orcharhino Proxy Server. From the Select Actions list, select Assign Location.
-
From the Location list, select the location where you want to assign this orcharhino Proxy.
-
Click Fix Location on Mismatch.
-
Click Submit.
-
In the orcharhino management UI, navigate to Administer > Organizations and click the organization to which you have assigned orcharhino Proxy.
-
Click the orcharhino Proxies tab and ensure that orcharhino Proxy Server is listed under the Selected items list, then click Submit.
-
In the orcharhino management UI, navigate to Administer > Locations and click the location to which you have assigned orcharhino Proxy.
-
Click the orcharhino Proxies tab and ensure that orcharhino Proxy Server is listed under the Selected items list, then click Submit.
Optionally, you can verify that orcharhino Proxy Server is correctly listed in the orcharhino management UI.
-
Select the organization from the Organization list.
-
Select the location from the Location list.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
3. Performing additional configuration on orcharhino Proxy Server
After installation, you can configure additional settings on your orcharhino Proxy Server.
3.1. Configuring orcharhino Proxy for host registration and provisioning
Use this procedure to configure orcharhino Proxy so that you can register and provision hosts using your orcharhino Proxy Server instead of your orcharhino Server.
-
On orcharhino Server, add the orcharhino Proxy to the list of trusted proxies.
This is required for orcharhino to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by orcharhino Proxy. For security reasons, orcharhino recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of orcharhino Proxies, or network ranges.
Warning: Do not use a network range that is too broad, because that might cause a security risk.
Enter the following command. Note that the command overwrites the list that is currently stored in orcharhino. Therefore, if you have set any trusted proxies previously, you must include them in the command as well:
# orcharhino-installer \
--foreman-trusted-proxies "127.0.0.1/8" \
--foreman-trusted-proxies "::1" \
--foreman-trusted-proxies "My_IP_address" \
--foreman-trusted-proxies "My_IP_range"
The localhost entries are required; do not omit them.
-
List the current trusted proxies using the full help of the orcharhino installer:
# orcharhino-installer --full-help | grep -A 2 "trusted-proxies"
-
Verify that the current listing contains all trusted proxies you require.
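Because the installer command overwrites the stored list, it can be convenient to build the repeated flags from one place. A sketch (the helper name and the example address are illustrative); the required localhost entries are always included:

```shell
# Sketch: print the repeated --foreman-trusted-proxies options for a list
# of additional addresses, always starting with the localhost entries.
build_trusted_proxy_args() {
  set -- 127.0.0.1/8 ::1 "$@"
  for proxy in "$@"; do
    printf -- '--foreman-trusted-proxies %s ' "$proxy"
  done
}
```

For example, `orcharhino-installer $(build_trusted_proxy_args 192.0.2.10)`. Note that word splitting applies, so this sketch does not handle values containing spaces.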
3.2. Configuring pull-based transport for remote execution
By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from orcharhino Proxy Server to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to orcharhino Proxy Server. The use of pull-based transport is not limited to those infrastructures.
The pull-based transport comprises pull-mqtt mode on orcharhino Proxies in combination with a pull client running on hosts.
The mode is configured per orcharhino Proxy Server. Some orcharhino Proxy Servers can be configured to use pull-mqtt mode while others use SSH. If this is the case, it is possible that one remote job on a given host will use the pull client and the next job on the same host will use SSH. If you wish to avoid this scenario, configure all orcharhino Proxy Servers to use the same mode.
-
Enable the pull-based transport on your orcharhino Proxy Server:
# orcharhino-installer --foreman-proxy-plugin-remote-execution-script-mode pull-mqtt
-
Configure the firewall to allow the MQTT service on port 1883:
# firewall-cmd --add-service=mqtt
-
Make the changes persistent:
# firewall-cmd --runtime-to-permanent
-
In pull-mqtt mode, hosts subscribe for job notifications to either your orcharhino Server or any orcharhino Proxy Server through which they are registered. Ensure that orcharhino Server sends remote execution jobs to that same orcharhino Server or orcharhino Proxy Server:
-
In the orcharhino management UI, navigate to Administer > Settings.
-
On the Content tab, set the value of Prefer registered through orcharhino Proxy for remote execution to Yes.
-
Configure your hosts for the pull-based transport. For more information, see Transport modes for remote execution in Managing hosts.
3.3. Adding lifecycle environments to orcharhino Proxy Servers
If your orcharhino Proxy Server has the content functionality enabled, you must add an environment so that orcharhino Proxy can synchronize content from orcharhino Server and provide content to host systems.
Do not assign the Library lifecycle environment to your orcharhino Proxy Server because it triggers an automated orcharhino Proxy sync every time the CDN updates a repository. This might consume significant system resources on orcharhino Proxies, network bandwidth between orcharhino and orcharhino Proxies, and available disk space on orcharhino Proxies.
You can use Hammer CLI on orcharhino Server or the orcharhino management UI.
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to add a lifecycle environment to.
-
Click Edit and click the Lifecycle Environments tab.
-
From the left menu, select the lifecycle environments that you want to add to orcharhino Proxy and click Submit.
-
To synchronize the content on the orcharhino Proxy, click the Overview tab and click Synchronize.
-
Select either Optimized Sync or Complete Sync.
For definitions of each synchronization type, see Recovering a Repository.
-
To display a list of all orcharhino Proxy Servers, on orcharhino Server, enter the following command:
# hammer proxy list
Note the orcharhino Proxy ID of the orcharhino Proxy to which you want to add a lifecycle.
-
Using the ID, verify the details of your orcharhino Proxy:
# hammer proxy info \
--id My_orcharhino_Proxy_ID
-
To view the lifecycle environments available for your orcharhino Proxy Server, enter the following command and note the ID and the organization name:
# hammer proxy content available-lifecycle-environments \
--id My_orcharhino_Proxy_ID
-
Add the lifecycle environment to your orcharhino Proxy Server:
# hammer proxy content add-lifecycle-environment \
--id My_orcharhino_Proxy_ID \
--lifecycle-environment-id My_Lifecycle_Environment_ID \
--organization "My_Organization"
Repeat for each lifecycle environment you want to add to orcharhino Proxy Server.
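The repetition can be scripted with a small loop; a sketch (the function name is illustrative, the organization name is a placeholder, and HAMMER defaults to a dry run that only prints the commands; set HAMMER=hammer to execute them):

```shell
# Sketch: add several lifecycle environments to one orcharhino Proxy.
HAMMER=${HAMMER:-"echo hammer"}  # dry run by default; set HAMMER=hammer to execute

add_lifecycle_environments() {
  proxy_id=$1; shift  # remaining arguments are lifecycle environment IDs
  for env_id in "$@"; do
    $HAMMER proxy content add-lifecycle-environment \
      --id "$proxy_id" \
      --lifecycle-environment-id "$env_id" \
      --organization "My_Organization"
  done
}
```

For example, `add_lifecycle_environments 1 2 3` issues one add-lifecycle-environment call per environment ID for the orcharhino Proxy with ID 1.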
-
Synchronize the content from orcharhino to orcharhino Proxy.
-
To synchronize all content from your orcharhino Server environment to orcharhino Proxy Server, enter the following command:
# hammer proxy content synchronize \
--id My_orcharhino_Proxy_ID
-
To synchronize a specific lifecycle environment from your orcharhino Server to orcharhino Proxy Server, enter the following command:
# hammer proxy content synchronize \
--id My_orcharhino_Proxy_ID \
--lifecycle-environment-id My_Lifecycle_Environment_ID
-
To synchronize all content from your orcharhino Server to your orcharhino Proxy Server without checking metadata:
# hammer proxy content synchronize \
--id My_orcharhino_Proxy_ID \
--skip-metadata-check true
This is equivalent to selecting Complete Sync in the orcharhino management UI.
3.4. Enabling power management on hosts
To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on orcharhino Proxy Server.
-
All hosts must have a network interface of BMC type. orcharhino Proxy Server uses this NIC to pass the appropriate credentials to the host. For more information, see Configuring a baseboard management controller (BMC) interface in Managing hosts.
-
To enable BMC, enter the following command:
# orcharhino-installer \
--foreman-proxy-bmc "true" \
--foreman-proxy-bmc-default-provider "freeipmi"
Appendix A: orcharhino Proxy Server scalability considerations when managing Puppet clients
orcharhino Proxy Server scalability when managing Puppet clients depends on the number of CPUs, the run-interval distribution, and the number of Puppet managed resources. orcharhino Proxy Server has a limit of 100 concurrent Puppet agents running at any single point in time. Running more than 100 concurrent Puppet agents results in an HTTP 503 error.
For example, assuming that Puppet agent runs are evenly distributed with less than 100 concurrent Puppet agents running at any single point during a run-interval, a orcharhino Proxy Server with 4 CPUs has a maximum of 1250 – 1600 Puppet clients with a moderate workload of 10 Puppet classes assigned to each Puppet client. Depending on the number of Puppet clients required, the orcharhino installation can scale out the number of orcharhino Proxy Servers to support them.
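The relationship between client count and the 100-agent concurrency limit follows from simple arithmetic: with evenly distributed runs, expected concurrency is roughly clients multiplied by run duration divided by run interval. A sketch (the 30-minute interval and 100-second run duration are illustrative assumptions, not values from this guide):

```shell
# Sketch: estimate concurrent Puppet agent runs, assuming an even
# distribution of runs across the run interval (all times in seconds).
estimate_concurrency() {
  clients=$1 run_duration=$2 run_interval=$3
  echo $(( clients * run_duration / run_interval ))
}
```

For example, `estimate_concurrency 1600 100 1800` prints 88, which stays under the limit of 100 concurrent agents.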
If you want to scale your orcharhino Proxy Server when managing Puppet clients, the following assumptions are made:
-
There are no external Puppet clients reporting directly to your orcharhino Server.
-
All other Puppet clients report directly to orcharhino Proxy Servers.
-
There is an evenly distributed run-interval of all Puppet agents.
Note: Deviating from the even distribution increases the risk of overloading orcharhino Server. The limit of 100 concurrent requests applies.
The following table describes the scalability limits using the recommended 4 CPUs.
| Puppet Managed Resources per Host | Run-Interval Distribution |
|---|---|
| 1 | 3000 – 2500 |
| 10 | 2400 – 2000 |
| 20 | 1700 – 1400 |
The following table describes the scalability limits using the minimum 2 CPUs.
| Puppet Managed Resources per Host | Run-Interval Distribution |
|---|---|
| 1 | 1700 – 1450 |
| 10 | 1500 – 1250 |
| 20 | 850 – 700 |