1. Preparing your environment for installation
1.1. System requirements
The following requirements apply to the networked base operating system:
- x86_64 architecture
- 4-core 2.0 GHz CPU at a minimum
- A minimum of 12 GB RAM is required for orcharhino Proxy to function. In addition, a minimum of 4 GB of swap space is recommended. orcharhino Proxy running with less than the minimum amount of RAM might not operate correctly.
- Administrative user (root) access
- Full forward and reverse DNS resolution using a fully-qualified domain name
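To spot-check forward and reverse resolution before installing, you can compare lookups with dig; the IP address below is an example:

# hostname -f
# dig +short "$(hostname -f)"
# dig +short -x 192.0.2.10

The first command must print an FQDN rather than a shortname, the second should return this host's IP address (192.0.2.10 in this example), and the third should return the FQDN again.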
orcharhino only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale. For more information about configuring the system locale in Enterprise Linux, see Configuring the system locale.
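For example, on Enterprise Linux you can set and verify the system-wide locale with localectl; adjust the locale value to your needs:

# localectl set-locale LANG=en_US.UTF-8
# localectl status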
orcharhino Server and orcharhino Proxy do not support short names in hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a short name. This does not apply to the clients of an orcharhino.
Before you install orcharhino Proxy, ensure that your environment meets the requirements for installation.
Warning

The version of orcharhino Proxy must match the version of the installed orcharhino Server. For example, orcharhino Proxy 6.7 cannot be registered with orcharhino Server 6.6.
orcharhino Proxy must be installed on a freshly provisioned system that serves no other function except to run orcharhino Proxy. To avoid conflicts with the local users that orcharhino Proxy creates, the freshly provisioned system must not have the following users provided by external identity providers (you can check for existing accounts as shown after the list):
- apache
- foreman-proxy
- postgres
- pulp
- puppet
- redis
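As referenced above, one way to check whether any of these accounts already resolve, for example from an external identity provider, is getent, which prints a line for each account that exists:

# getent passwd apache foreman-proxy postgres pulp puppet redis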
The system clock on the base operating system where you are installing your orcharhino Proxy must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail.
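On Enterprise Linux, time synchronization is typically handled by chrony; assuming chronyd is the time service in use, you can enable it and check synchronization with:

# systemctl enable --now chronyd
# chronyc tracking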
1.2. Supported operating systems
The following operating systems are supported by the installer, have packages, and are tested for deploying orcharhino:
| Operating System | Architecture | Notes |
| --- | --- | --- |
|  | x86_64 only | EPEL is not supported. |
|  | x86_64 only | EPEL is not supported. |
ATIX AG advises against using an existing system because the orcharhino installer will affect the configuration of several components.
1.3. Port and firewall requirements
For the components of orcharhino architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls.
The installation of an orcharhino Proxy fails if the ports between orcharhino Server and orcharhino Proxy are not open before installation starts.
Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol.
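A simple way to spot-check that a required port is reachable through any intermediate firewalls before installation is nc, run from the machine that initiates the connection; the hostnames and ports below are examples:

# nc -zv orcharhino.example.com 443
# nc -zv orcharhino-proxy.example.com 9090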
orcharhino Server has an integrated orcharhino Proxy and any host that is directly connected to orcharhino Server is a Client of orcharhino in the context of this section. This includes the base operating system on which orcharhino Proxy is running.
Hosts that are clients of orcharhino Proxies, other than orcharhino's integrated orcharhino Proxy, do not need access to orcharhino Server. For more information about orcharhino topology, see orcharhino Proxy Networking in Planning for orcharhino.
Required ports can change based on your configuration.
The following tables indicate the destination port and the direction of network traffic:
| Destination Port | Protocol | Service | Source | Required For | Description |
| --- | --- | --- | --- | --- | --- |
| 53 | TCP and UDP | DNS | DNS Servers and clients | Name resolution | DNS (optional) |
| 67 | UDP | DHCP | Client | Dynamic IP | DHCP (optional) |
| 69 | UDP | TFTP | Client | TFTP Server (optional) | |
| 443, 80 | TCP | HTTPS, HTTP | Client | Content Retrieval | Content |
| 443, 80 | TCP | HTTPS, HTTP | Client | Content Host Registration | orcharhino Proxy CA RPM installation |
| 443 | TCP | HTTPS | orcharhino | Content Mirroring | Management |
| 443 | TCP | HTTPS | orcharhino | orcharhino Proxy API | Smart Proxy functionality |
| 443 | TCP | HTTPS | Client | Content Host registration | Initiation, uploading facts, sending installed packages and traces |
| 1883 | TCP | MQTT | Client | Pull based REX (optional) | Content hosts for REX job notification (optional) |
| 8000 | TCP | HTTP | Client | Provisioning templates | Template retrieval for client installers, iPXE or UEFI HTTP Boot |
| 8000 | TCP | HTTP | Client | PXE Boot | Installation |
| 8140 | TCP | HTTPS | Client | Puppet agent | Client updates (optional) |
| 8443 | TCP | HTTPS | Client | Content Host registration | Deprecated and only needed for Client hosts deployed before upgrades |
| 9090 | TCP | HTTPS | orcharhino | orcharhino Proxy API | orcharhino Proxy functionality |
| 9090 | TCP | HTTPS | Client | Register Endpoint | Client registration with an external orcharhino Proxy |
| 9090 | TCP | HTTPS | Client | OpenSCAP | Configure Client (if the OpenSCAP plugin is installed) |
| 9090 | TCP | HTTPS | Discovered Node | Discovery | Host discovery and provisioning (if the discovery plugin is installed) |
| 9090 | TCP | HTTPS | Client | Pull based REX (optional) | Content hosts for REX job notification (optional) |
Any host that is directly connected to orcharhino Server is a client in this context because it is a client of the integrated orcharhino Proxy. This includes the base operating system on which an orcharhino Proxy is running.
A DHCP orcharhino Proxy performs ICMP ping and TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free.
You can turn off this behavior using:

# foreman-installer --foreman-proxy-dhcp-ping-free-ip=false
| Destination Port | Protocol | Service | Destination | Required For | Description |
| --- | --- | --- | --- | --- | --- |
| | ICMP | ping | Client | DHCP | Free IP checking (optional) |
| 7 | TCP | echo | Client | DHCP | Free IP checking (optional) |
| 22 | TCP | SSH | Target host | Remote execution | Run jobs |
| 53 | TCP and UDP | DNS | DNS Servers on the Internet | DNS Server | Resolve DNS records (optional) |
| 53 | TCP and UDP | DNS | DNS Server | orcharhino Proxy DNS | Validation of DNS conflicts (optional) |
| 68 | UDP | DHCP | Client | Dynamic IP | DHCP (optional) |
| 443 | TCP | HTTPS | orcharhino | orcharhino Proxy | orcharhino Proxy configuration management, template retrieval, OpenSCAP, Remote Execution result upload |
| 443 | TCP | HTTPS | orcharhino | Content | Sync |
| 443 | TCP | HTTPS | orcharhino | Client communication | Forward requests from Client to orcharhino |
| 443 | TCP | HTTPS | Infoblox DHCP Server | DHCP management | When using Infoblox for DHCP, management of the DHCP leases (optional) |
| 623 | | | Client | Power management | BMC On/Off/Cycle/Status |
| 7911 | TCP | DHCP, OMAPI | DHCP Server | DHCP | The DHCP target is configured using ISC and |
| 8443 | TCP | HTTPS | Client | Discovery | orcharhino Proxy sends reboot command to the discovered host (optional) |
Note

ICMP to Port 7 UDP and TCP must not be rejected, but can be dropped. The DHCP orcharhino Proxy sends an ECHO REQUEST to the Client network to verify that an IP address is free. A response prevents IP addresses from being allocated.
1.4. Enabling connections from orcharhino Server and clients to an orcharhino Proxy
On the base operating system on which you want to install orcharhino Proxy, you must enable incoming connections from orcharhino Server and clients to orcharhino Proxy and make these rules persistent across reboots.
- Open the ports for clients on orcharhino Proxy:

  # firewall-cmd \
  --add-port="8000/tcp" \
  --add-port="9090/tcp"

- Allow access to services on orcharhino Proxy:

  # firewall-cmd \
  --add-service=dns \
  --add-service=dhcp \
  --add-service=tftp \
  --add-service=http \
  --add-service=https \
  --add-service=puppetmaster

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent

- Verify the firewall settings:

  # firewall-cmd --list-all
For more information, see Using and configuring firewalld in Red Hat Enterprise Linux 9 guide or Using and configuring firewalld in Red Hat Enterprise Linux 8 guide.
2. Installing orcharhino Proxy
Before you install orcharhino Proxy, you must ensure that your environment meets the requirements for installation. For more information, see Preparing your Environment for Installation.
2.1. Optional: Using fapolicyd on orcharhino Proxy
By enabling fapolicyd on your orcharhino Proxy, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn fapolicyd on or off on your orcharhino Server or orcharhino Proxy at any time.
2.1.1. Installing fapolicyd on orcharhino Proxy
You can install fapolicyd together with a new orcharhino Proxy, or add it to an existing orcharhino Proxy. If you install fapolicyd together with a new orcharhino Proxy, the installation process detects fapolicyd on your Enterprise Linux host and deploys the orcharhino Proxy rules automatically.
- Ensure your host has access to the BaseOS repositories of Enterprise Linux.
- For a new installation, install fapolicyd:

  # dnf install fapolicyd

- For an existing installation, install fapolicyd using dnf install:

  # dnf install fapolicyd

- Start the fapolicyd service:

  # systemctl enable --now fapolicyd

- Verify that the fapolicyd service is running correctly:

  # systemctl status fapolicyd
In the case of a new orcharhino Server or orcharhino Proxy installation, follow the standard installation procedures after installing and enabling fapolicyd on your Enterprise Linux host.
For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 9 guide or Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 guide.
2.2. Installing orcharhino Proxy packages
Before installing orcharhino Proxy packages, you must update all packages that are installed on the base operating system.
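On Enterprise Linux, you can update all installed packages with:

# dnf upgrade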
To install orcharhino Proxy, complete the following steps:
2.4. Assigning the correct organization and location to orcharhino Proxy in the orcharhino management UI
After installing orcharhino Proxy packages, if there is more than one organization or location, you must assign the correct organization and location to orcharhino Proxy to make orcharhino Proxy visible in the orcharhino management UI.
- Log into the orcharhino management UI.
- From the Organization list in the upper-left of the screen, select Any Organization.
- From the Location list in the upper-left of the screen, select Any Location.
- In the orcharhino management UI, navigate to Hosts > All Hosts and select orcharhino Proxy.
- From the Select Actions list, select Assign Organization.
- From the Organization list, select the organization where you want to assign this orcharhino Proxy.
- Click Fix Organization on Mismatch.
- Click Submit.
- Select orcharhino Proxy. From the Select Actions list, select Assign Location.
- From the Location list, select the location where you want to assign this orcharhino Proxy.
- Click Fix Location on Mismatch.
- Click Submit.
- In the orcharhino management UI, navigate to Administer > Organizations and click the organization to which you have assigned orcharhino Proxy.
- Click the orcharhino Proxies tab and ensure that orcharhino Proxy is listed under Selected items, then click Submit.
- In the orcharhino management UI, navigate to Administer > Locations and click the location to which you have assigned orcharhino Proxy.
- Click the orcharhino Proxies tab and ensure that orcharhino Proxy is listed under Selected items, then click Submit.
Optionally, verify that orcharhino Proxy is correctly listed in the orcharhino management UI:

- Select the organization from the Organization list.
- Select the location from the Location list.
- In the orcharhino management UI, navigate to Hosts > All Hosts.
- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
3. Performing additional configuration on orcharhino Proxy
After installation, you can configure additional settings on your orcharhino Proxy.
3.1. Configuring orcharhino Proxy for host registration and provisioning
Use this procedure to configure orcharhino Proxy so that you can register and provision hosts using your orcharhino Proxy instead of your orcharhino Server.
- On orcharhino Server, add orcharhino Proxy to the list of trusted proxies. This is required for orcharhino to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by orcharhino Proxy. For security reasons, orcharhino recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of orcharhino Proxies, or network ranges.

  Warning

  Do not use a network range that is too wide, because that poses a potential security risk.

  Enter the following command. Note that the command overwrites the list that is currently stored in orcharhino. Therefore, if you have set any trusted proxies previously, you must include them in the command as well:

  # foreman-installer \
  --foreman-trusted-proxies "127.0.0.1/8" \
  --foreman-trusted-proxies "::1" \
  --foreman-trusted-proxies "My_IP_address" \
  --foreman-trusted-proxies "My_IP_range"

  The localhost entries are required; do not omit them.

- List the current trusted proxies using the full help of the orcharhino installer:

  # foreman-installer --full-help | grep -A 2 "trusted-proxies"

- Verify that the current listing contains all trusted proxies you require.
3.2. Configuring pull-based transport for remote execution
By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from orcharhino Proxy to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to orcharhino Proxy. The use of pull-based transport is not limited to those infrastructures.
The pull-based transport comprises pull-mqtt mode on orcharhino Proxies in combination with a pull client running on hosts.
The mode is configured per orcharhino Proxy. Some orcharhino Proxies can be configured to use pull-mqtt mode while others use SSH. If this is the case, it is possible that one remote job on a given host will use the pull client and the next job on the same host will use SSH. If you want to avoid this scenario, configure all orcharhino Proxies to use the same mode.
- Enable the pull-based transport on your orcharhino Proxy:

  # foreman-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt

- Configure the firewall to allow the MQTT service on port 1883:

  # firewall-cmd --add-service=mqtt

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent

- In pull-mqtt mode, hosts subscribe for job notifications to either your orcharhino Server or any orcharhino Proxy through which they are registered. Ensure that orcharhino Server sends remote execution jobs to that same orcharhino Server or orcharhino Proxy:
  - In the orcharhino management UI, navigate to Administer > Settings.
  - On the Content tab, set the value of Prefer registered through orcharhino Proxy for remote execution to Yes.
- Configure your hosts for the pull-based transport. For more information, see Transport modes for remote execution in Managing hosts.
3.3. Enabling OpenSCAP on orcharhino Proxies
On orcharhino Server and the integrated orcharhino Proxy of your orcharhino Server, OpenSCAP is enabled by default. To use the OpenSCAP plugin and content on external orcharhino Proxies, you must enable OpenSCAP on each orcharhino Proxy.
- To enable OpenSCAP, enter the following command:

  # foreman-installer \
  --enable-foreman-proxy-plugin-openscap \
  --foreman-proxy-plugin-openscap-ansible-module true \
  --foreman-proxy-plugin-openscap-puppet-module true
If you want to use Puppet to deploy compliance policies, you must enable it first. For more information, see Configuring hosts using Puppet.
3.4. Adding lifecycle environments to orcharhino Proxies
If your orcharhino Proxy has the content functionality enabled, you must add an environment so that orcharhino Proxy can synchronize content from orcharhino Server and provide content to host systems.
Do not assign the Library lifecycle environment to your orcharhino Proxy because it triggers an automated orcharhino Proxy sync every time the CDN updates a repository. This might consume multiple system resources on orcharhino Proxies, network bandwidth between orcharhino and orcharhino Proxies, and available disk space on orcharhino Proxies.
You can use Hammer CLI on orcharhino Server or the orcharhino management UI.
- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to add a lifecycle environment to.
- Click Edit and click the Lifecycle Environments tab.
- From the left menu, select the lifecycle environments that you want to add to orcharhino Proxy and click Submit.
- To synchronize the content on orcharhino Proxy, click the Overview tab and click Synchronize.
- Select either Optimized Sync or Complete Sync. For definitions of each synchronization type, see Recovering a Repository.

- To display a list of all orcharhino Proxies, on orcharhino Server, enter the following command:

  # hammer proxy list

  Note the ID of the orcharhino Proxy to which you want to add a lifecycle environment.
- Using the ID, verify the details of your orcharhino Proxy:

  # hammer proxy info \
  --id My_orcharhino_Proxy_ID

- To view the lifecycle environments available for your orcharhino Proxy, enter the following command and note the ID and the organization name:

  # hammer proxy content available-lifecycle-environments \
  --id My_orcharhino_Proxy_ID

- Add the lifecycle environment to your orcharhino Proxy:

  # hammer proxy content add-lifecycle-environment \
  --id My_orcharhino_Proxy_ID \
  --lifecycle-environment-id My_Lifecycle_Environment_ID \
  --organization "My_Organization"

  Repeat for each lifecycle environment you want to add to orcharhino Proxy.
- Synchronize the content from orcharhino to orcharhino Proxy:
  - To synchronize all content from your orcharhino Server environment to orcharhino Proxy, enter the following command:

    # hammer proxy content synchronize \
    --id My_orcharhino_Proxy_ID

  - To synchronize a specific lifecycle environment from your orcharhino Server to orcharhino Proxy, enter the following command:

    # hammer proxy content synchronize \
    --id My_orcharhino_Proxy_ID \
    --lifecycle-environment-id My_Lifecycle_Environment_ID

  - To synchronize all content from your orcharhino Server to your orcharhino Proxy without checking metadata:

    # hammer proxy content synchronize \
    --id My_orcharhino_Proxy_ID \
    --skip-metadata-check true

    This equals selecting Complete Sync in the orcharhino management UI.
3.5. Enabling power management on hosts
To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on orcharhino Proxy.
- All hosts must have a network interface of BMC type. orcharhino Proxy uses this NIC to pass the appropriate credentials to the host. For more information, see Adding a Baseboard Management Controller (BMC) Interface in Managing hosts.
- To enable BMC, enter the following command:

  # foreman-installer \
  --foreman-proxy-bmc "true" \
  --foreman-proxy-bmc-default-provider "freeipmi"
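To verify BMC connectivity, you can query the power status of a host with the freeipmi tools that the freeipmi provider uses; the BMC address and credentials below are placeholders:

# ipmipower --stat -h bmc.example.com -u admin -p My_Password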
3.6. Configuring DNS, DHCP, and TFTP on orcharhino Proxy
To configure the DNS, DHCP, and TFTP services on orcharhino Proxy, use the foreman-installer command with the options appropriate for your environment. Any change to the settings requires entering the foreman-installer command again. You can enter the command multiple times, and each time it updates all configuration files with the changed values.
- You must have the correct network name (dns-interface) for the DNS server.
- You must have the correct interface name (dhcp-interface) for the DHCP server.
- Contact your network administrator to ensure that you have the correct settings.

- Enter the foreman-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services:

  # foreman-installer \
  --foreman-proxy-dns true \
  --foreman-proxy-dns-managed true \
  --foreman-proxy-dns-zone example.com \
  --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa \
  --foreman-proxy-dhcp true \
  --foreman-proxy-dhcp-managed true \
  --foreman-proxy-dhcp-range "192.0.2.100 192.0.2.150" \
  --foreman-proxy-dhcp-gateway 192.0.2.1 \
  --foreman-proxy-dhcp-nameservers 192.0.2.2 \
  --foreman-proxy-tftp true \
  --foreman-proxy-tftp-managed true \
  --foreman-proxy-tftp-servername 192.0.2.3

You can monitor the progress of the foreman-installer command displayed in your prompt. You can view the logs in /var/log/foreman-installer/katello.log.

- For more information about the foreman-installer command, enter foreman-installer --help.
- For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning hosts.
4. Configuring orcharhino Proxy with external services
If you do not want to configure the DNS, DHCP, and TFTP services on orcharhino Proxy, use this section to configure your orcharhino Proxy to work with external DNS, DHCP, and TFTP services.
4.1. Configuring orcharhino Proxy with external DNS
You can configure orcharhino Proxy with external DNS.
orcharhino Proxy uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the foreman-installer command with the options appropriate for your environment.
- You must have a configured external DNS server.
- This guide assumes you have an existing installation.

- Copy the /etc/rndc.key file from the external DNS server to orcharhino Proxy:

  # scp root@dns.example.com:/etc/rndc.key /etc/foreman-proxy/rndc.key

- Configure the ownership, permissions, and SELinux context:

  # restorecon -v /etc/foreman-proxy/rndc.key
  # chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key
  # chmod -v 640 /etc/foreman-proxy/rndc.key

- To test the nsupdate utility, add a host remotely:

  # echo -e "server DNS_IP_Address\n \
  update add aaa.example.com 3600 IN A Host_IP_Address\n \
  send\n" | nsupdate -k /etc/foreman-proxy/rndc.key
  # nslookup aaa.example.com DNS_IP_Address
  # echo -e "server DNS_IP_Address\n \
  update delete aaa.example.com 3600 IN A Host_IP_Address\n \
  send\n" | nsupdate -k /etc/foreman-proxy/rndc.key

- Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file:

  # foreman-installer --foreman-proxy-dns=true \
  --foreman-proxy-dns-managed=false \
  --foreman-proxy-dns-provider=nsupdate \
  --foreman-proxy-dns-server="DNS_IP_Address" \
  --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key

- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
- Locate the orcharhino Proxy and select Refresh from the list in the Actions column.
- Associate the DNS service with the appropriate subnets and domain.
4.2. Configuring orcharhino Proxy with external DHCP
To configure orcharhino Proxy with external DHCP, you must complete the following procedures:
4.2.1. Configuring an external DHCP server to use with orcharhino Proxy
To configure an external DHCP server running Enterprise Linux to use with orcharhino Proxy, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with orcharhino Proxy. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files.
Note

If you use dnsmasq as an external DHCP server, enable the dhcp_no_override setting.
- On your Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages:

  # dnf install dhcp-server bind-utils

- Generate a security token:

  # tsig-keygen -a hmac-md5 omapi_key

- Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen. The following is an example:

  # cat /etc/dhcp/dhcpd.conf
  default-lease-time 604800;
  max-lease-time 2592000;
  log-facility local7;

  subnet 192.168.38.0 netmask 255.255.255.0 {
      range 192.168.38.10 192.168.38.100;
      option routers 192.168.38.1;
      option subnet-mask 255.255.255.0;
      option domain-search "virtual.lan";
      option domain-name "virtual.lan";
      option domain-name-servers 8.8.8.8;
  }

  omapi-port 7911;
  key omapi_key {
      algorithm hmac-md5;
      secret "My_Secret";
  };
  omapi-key omapi_key;

  Note that the option routers value is the IP address of your orcharhino Server or orcharhino Proxy that you want to use with an external DHCP service.

- On orcharhino Server, define each subnet. Do not set DHCP orcharhino Proxy for the defined subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the orcharhino management UI define the reservation range as 192.168.38.101 to 192.168.38.250.
- Configure the firewall for external access to the DHCP server:

  # firewall-cmd --add-service dhcp

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent

- On orcharhino Server, determine the UID and GID of the foreman user:

  # id -u foreman
  993
  # id -g foreman
  990

- On the DHCP server, create the foreman user and group with the same IDs as determined in the previous step:

  # groupadd -g 990 foreman
  # useradd -u 993 -g 990 -s /sbin/nologin foreman

- To ensure that the configuration files are accessible, restore the read and execute flags:

  # chmod o+rx /etc/dhcp/
  # chmod o+r /etc/dhcp/dhcpd.conf
  # chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf

- Enable and start the DHCP service:

  # systemctl enable --now dhcpd

- Export the DHCP configuration and lease files using NFS:

  # dnf install nfs-utils
  # systemctl enable --now nfs-server

- Create directories for the DHCP configuration and lease files that you want to export using NFS:

  # mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp

- To create mount points for the created directories, add the following lines to the /etc/fstab file:

  /var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0
  /etc/dhcp /exports/etc/dhcp none bind,auto 0 0

- Mount the file systems in /etc/fstab:

  # mount -a

- Ensure the following lines are present in /etc/exports:

  /exports 192.168.38.1(rw,async,no_root_squash,fsid=0,no_subtree_check)
  /exports/etc/dhcp 192.168.38.1(ro,async,no_root_squash,no_subtree_check,nohide)
  /exports/var/lib/dhcpd 192.168.38.1(ro,async,no_root_squash,no_subtree_check,nohide)

  Note that the IP address that you enter is the orcharhino or orcharhino Proxy IP address that you want to use with an external DHCP service.

- Reload the NFS server:

  # exportfs -rva

- Configure the firewall for the DHCP OMAPI port 7911:

  # firewall-cmd --add-port=7911/tcp

- Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3.

  # firewall-cmd \
  --add-service mountd \
  --add-service nfs \
  --add-service rpc-bind \
  --zone public

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent
4.2.2. Configuring orcharhino Server with an external DHCP server
You can configure orcharhino Proxy with an external DHCP server.
- Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with orcharhino Proxy. For more information, see Configuring an external DHCP server to use with orcharhino Proxy.

- Install the nfs-utils package:

  # dnf install nfs-utils

- Create the DHCP directories for NFS:

  # mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd

- Change the file owner:

  # chown -R foreman-proxy /mnt/nfs

- Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths:

  # showmount -e DHCP_Server_FQDN
  # rpcinfo -p DHCP_Server_FQDN

- Add the following lines to the /etc/fstab file:

  DHCP_Server_FQDN:/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context="system_u:object_r:dhcp_etc_t:s0" 0 0
  DHCP_Server_FQDN:/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context="system_u:object_r:dhcpd_state_t:s0" 0 0

- Mount the file systems in /etc/fstab:

  # mount -a

- To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files:

  # su foreman-proxy -s /bin/bash
  $ cat /mnt/nfs/etc/dhcp/dhcpd.conf
  $ cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases
  $ exit

- Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file:

  # foreman-installer \
  --enable-foreman-proxy-plugin-dhcp-remote-isc \
  --foreman-proxy-dhcp-provider=remote_isc \
  --foreman-proxy-dhcp-server=My_DHCP_Server_FQDN \
  --foreman-proxy-dhcp=true \
  --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf \
  --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases \
  --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key \
  --foreman-proxy-plugin-dhcp-remote-isc-key-secret=My_Secret \
  --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911

- Associate the DHCP service with the appropriate subnets and domain.
4.3. Configuring orcharhino Proxy with external TFTP
You can configure orcharhino Proxy with external TFTP services.
- Create the TFTP directory for NFS:

  # mkdir -p /mnt/nfs/var/lib/tftpboot

- In the /etc/fstab file, add the following line:

  TFTP_Server_IP_Address:/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context="system_u:object_r:tftpdir_rw_t:s0" 0 0

- Mount the file systems in /etc/fstab:

  # mount -a

- Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file:

  # foreman-installer \
  --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot \
  --foreman-proxy-tftp=true

- If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on:

  # foreman-installer --foreman-proxy-tftp-servername=TFTP_Server_FQDN

- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
- Locate the orcharhino Proxy and select Refresh from the list in the Actions column.
- Associate the TFTP service with the appropriate subnets and domain.
4.4. Configuring orcharhino Proxy with external IdM DNS
When orcharhino Server adds a DNS record for a host, it first determines which orcharhino Proxy is providing DNS for that domain. It then communicates with the orcharhino Proxy that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the orcharhino or orcharhino Proxy that is currently configured to provide a DNS service for the domain you want to manage using the IdM server.
You can configure orcharhino Proxy to use a Red Hat Identity Management (IdM) server to provide DNS service. To do so, use one of the following procedures:
To revert to internal DNS service, use the following procedure:
Note

You are not required to use orcharhino Proxy to manage DNS. When you are using the realm enrollment feature of orcharhino, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring orcharhino Proxy with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see External Authentication for Provisioned Hosts in Installing orcharhino Server.
4.4.1. Configuring dynamic DNS update with GSS-TSIG authentication
You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645. To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the orcharhino Proxy base operating system.
- You must ensure the IdM server is deployed and the host-based firewall is configured correctly.
- You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server.
- You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring orcharhino Server.
To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps:
- Obtain a Kerberos ticket for the account obtained from the IdM administrator:

  # kinit idm_user

- Create a new Kerberos principal for orcharhino Proxy to use to authenticate on the IdM server:

  # ipa service-add orcharhinoproxy/orcharhino.example.com

- On the base operating system of either the orcharhino or orcharhino Proxy that is managing the DNS service for your deployment, install the ipa-client package:

  # dnf install ipa-client

- Configure the IdM client by running the installation script and following the on-screen prompts:

  # ipa-client-install

- Obtain a Kerberos ticket:

  # kinit admin

- Remove any preexisting keytab:

  # rm /etc/foreman-proxy/dns.keytab

- Obtain the keytab for this system:

  # ipa-getkeytab -p orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM \
  -s idm1.example.com -k /etc/foreman-proxy/dns.keytab

  Note

  When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid.

- For the dns.keytab file, set the group and owner to foreman-proxy:

  # chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab

- Optional: To verify that the keytab file is valid, enter the following command:

  # kinit -kt /etc/foreman-proxy/dns.keytab \
  orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM

- Create and configure the zone that you want to manage:
  - Navigate to Network Services > DNS > DNS Zones.
  - Select Add and enter the zone name. For example, example.com.
  - Click Add and Edit.
  - Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list:

    grant orcharhinoproxy\047orcharhino.example.com@EXAMPLE.COM wildcard * ANY;

  - Set Dynamic update to True.
  - Enable Allow PTR sync.
  - Click Save to save the changes.
- Create and configure the reverse zone:
  - Navigate to Network Services > DNS > DNS Zones.
  - Click Add.
  - Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups.
  - Click Add and Edit.
  - Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list:

    grant orcharhinoproxy\047orcharhino.example.com@EXAMPLE.COM wildcard * ANY;

  - Set Dynamic update to True.
  - Click Save to save the changes.
- Configure your orcharhino Server or orcharhino Proxy to connect to your DNS service:

  # foreman-installer \
  --foreman-proxy-dns-managed=false \
  --foreman-proxy-dns-provider=nsupdate_gss \
  --foreman-proxy-dns-server="idm1.example.com" \
  --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab \
  --foreman-proxy-dns-tsig-principal="orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM" \
  --foreman-proxy-dns=true

- For each affected orcharhino Proxy, update the configuration of that orcharhino Proxy in the orcharhino management UI:
  - In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, locate the orcharhino Proxy, and from the list in the Actions column, select Refresh.
  - Configure the domain:
    - In the orcharhino management UI, navigate to Infrastructure > Domains and select the domain name.
    - In the Domain tab, ensure DNS orcharhino Proxy is set to the orcharhino Proxy where the subnet is connected.
  - Configure the subnet:
    - In the orcharhino management UI, navigate to Infrastructure > Subnets and select the subnet name.
    - In the Subnet tab, set IPAM to None.
    - In the Domains tab, select the domain that you want to manage using the IdM server.
    - In the orcharhino Proxies tab, ensure Reverse DNS orcharhino Proxy is set to the orcharhino Proxy where the subnet is connected.
    - Click Submit to save the changes.
4.4.2. Configuring dynamic DNS update with TSIG authentication
You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC 2845.
- You must ensure the IdM server is deployed and the host-based firewall is configured correctly.
- You must obtain root user access on the IdM server.
- You must confirm whether orcharhino Server or orcharhino Proxy is configured to provide DNS service for your deployment.
- You must configure DNS, DHCP, and TFTP services on the base operating system of either the orcharhino or orcharhino Proxy that is managing the DNS service for your deployment.
- You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring orcharhino Server.
To configure dynamic DNS update with TSIG authentication, complete the following steps:
- On the IdM Server, add the following to the top of the /etc/named.conf file:

  ########################################################################
  include "/etc/rndc.key";
  controls {
      inet IdM_Server_IP_Address port 953 allow { orcharhino_IP_Address; } keys { "rndc-key"; };
  };
  ########################################################################

- Reload the named service to make the changes take effect:

  # systemctl reload named

- In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes:
  - Add the following in the BIND update policy box:

    grant "rndc-key" zonesub ANY;

  - Set Dynamic update to True.
  - Click Update to save the changes.
- Copy the /etc/rndc.key file from the IdM server to the base operating system of your orcharhino Server. Enter the following command:

  # scp /etc/rndc.key root@orcharhino.example.com:/etc/rndc.key

- To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command:

  # restorecon -v /etc/rndc.key
  # chown -v root:named /etc/rndc.key
  # chmod -v 640 /etc/rndc.key

- Assign the foreman-proxy user to the named group. Normally, foreman-installer ensures that the foreman-proxy user belongs to the named UNIX group; however, in this scenario orcharhino does not manage users and groups, so you must assign the foreman-proxy user to the named group manually:

  # usermod -a -G named foreman-proxy

- On orcharhino Server, enter the following foreman-installer command to configure orcharhino to use the external DNS server:

  # foreman-installer \
  --foreman-proxy-dns-managed=false \
  --foreman-proxy-dns-provider=nsupdate \
  --foreman-proxy-dns-server="IdM_Server_IP_Address" \
  --foreman-proxy-dns-ttl=86400 \
  --foreman-proxy-dns=true \
  --foreman-proxy-keyfile=/etc/rndc.key

- Ensure that the key in the /etc/rndc.key file on orcharhino Server is the same key file that is used on the IdM server:

  key "rndc-key" {
      algorithm hmac-md5;
      secret "secret-key==";
  };

- On orcharhino Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1:

  # echo -e "server 192.168.25.1\n \
  update add test.example.com 3600 IN A 192.168.25.20\n \
  send\n" | nsupdate -k /etc/rndc.key

- On orcharhino Server, test the DNS entry:

  # nslookup test.example.com 192.168.25.1
  Server:   192.168.25.1
  Address:  192.168.25.1#53
  Name:     test.example.com
  Address:  192.168.25.20

- To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones. Click the name of the zone and search for the host by name.
- If resolved successfully, remove the test DNS entry:

  # echo -e "server 192.168.25.1\n \
  update delete test.example.com 3600 IN A 192.168.25.20\n \
  send\n" | nsupdate -k /etc/rndc.key

- Confirm that the DNS entry was removed:

  # nslookup test.example.com 192.168.25.1

  The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted.
4.4.3. Reverting to internal DNS service
You can revert to using orcharhino Server and orcharhino Proxy as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring orcharhino Server.
On the orcharhino or orcharhino Proxy that you want to configure to manage DNS service for the domain, complete the following steps:
- If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the foreman-installer command:

  # foreman-installer

- If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure orcharhino or orcharhino Proxy as DNS server without using an answer file, enter the following foreman-installer command on orcharhino or orcharhino Proxy:

  # foreman-installer \
  --foreman-proxy-dns-managed=true \
  --foreman-proxy-dns-provider=nsupdate \
  --foreman-proxy-dns-server="127.0.0.1" \
  --foreman-proxy-dns=true

  For more information, see Configuring DNS, DHCP, and TFTP on orcharhino Proxy.

After you run the foreman-installer command to make any changes to your orcharhino Proxy configuration, you must update the configuration of each affected orcharhino Proxy in the orcharhino management UI:

- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
- For each orcharhino Proxy that you want to update, from the Actions list, select Refresh.
- Configure the domain:
  - In the orcharhino management UI, navigate to Infrastructure > Domains and click the domain name that you want to configure.
  - In the Domain tab, set DNS orcharhino Proxy to the orcharhino Proxy where the subnet is connected.
- Configure the subnet:
  - In the orcharhino management UI, navigate to Infrastructure > Subnets and select the subnet name.
  - In the Subnet tab, set IPAM to DHCP or Internal DB.
  - In the Domains tab, select the domain that you want to manage using orcharhino or orcharhino Proxy.
  - In the orcharhino Proxies tab, set Reverse DNS orcharhino Proxy to the orcharhino Proxy where the subnet is connected.
  - Click Submit to save the changes.
5. Managing DHCP using orcharhino Proxy
orcharhino can integrate with a DHCP service using your orcharhino Proxy. An orcharhino Proxy has multiple DHCP providers that you can use to integrate orcharhino with your existing DHCP infrastructure or deploy a new one. You can use the DHCP module of orcharhino Proxy to query for available IP addresses, add new reservations, and delete existing reservations. Note that your orcharhino Proxy cannot manage subnet declarations. The following DHCP providers are available:
- dhcp_infoblox – For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning hosts.
- dhcp_isc – ISC DHCP server over OMAPI. For more information, see Configuring DNS, DHCP, and TFTP on orcharhino Proxy in Installing orcharhino Proxy.
- dhcp_remote_isc – ISC DHCP server over OMAPI with leases mounted through networking. For more information, see Configuring an External DHCP Server to Use with orcharhino Proxy in Installing orcharhino Proxy.
- dhcp_libvirt – dnsmasq DHCP via the libvirt API.
- dhcp_native_ms – Microsoft Active Directory using API.
5.1. Configuring dhcp_libvirt
The dhcp_libvirt plugin manages IP reservations and leases using dnsmasq through the libvirt API. It uses ruby-libvirt to connect to the local or a remote instance of the libvirt daemon.
- You can use foreman-installer to configure dhcp_libvirt:

  # foreman-installer \
  --foreman-proxy-dhcp true \
  --foreman-proxy-dhcp-provider libvirt \
  --foreman-proxy-libvirt-network default \
  --foreman-proxy-libvirt-url qemu:///system
5.2. Securing the dhcpd API
orcharhino Proxy interacts with the DHCP daemon using the dhcpd API to manage DHCP. By default, the dhcpd API listens to any host without access control. You can add an omapi_key to provide basic security.
- On your orcharhino Proxy, install the required packages:

  # dnf install bind-utils

- Generate a key (see the note after this procedure):

  # dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST omapi_key
  # cat Komapi_key.+*.private | grep ^Key | cut -d ' ' -f2-

- Use foreman-installer to secure the dhcpd API:

  # foreman-installer \
  --foreman-proxy-dhcp-key-name "My_Name" \
  --foreman-proxy-dhcp-key-secret "My_Secret"
6. Managing DNS using orcharhino Proxy
orcharhino can manage DNS records using your orcharhino Proxy. DNS management includes updating and removing DNS records from existing DNS zones. An orcharhino Proxy has multiple DNS providers that you can use to integrate orcharhino with your existing DNS infrastructure or deploy a new one.
After you have enabled DNS, your orcharhino Proxy can manipulate any DNS server that complies with RFC 2136 using the dns_nsupdate provider. Other providers offer more direct integration, such as dns_infoblox for Infoblox. The following DNS providers are available:
- dns_dnscmd – Static DNS records in Microsoft Active Directory.
- dns_infoblox – For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning hosts.
- dns_libvirt – dnsmasq DNS via the libvirt API. For more information, see Configuring dns_libvirt.
- dns_nsupdate – Dynamic DNS update using nsupdate. For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning hosts.
- dns_nsupdate_gss – Dynamic DNS update with GSS-TSIG. For more information, see Configuring dynamic DNS update with GSS-TSIG authentication.
- dns_powerdns – PowerDNS. For more information, see Configuring dns_powerdns.
6.1. Configuring dns_libvirt
The dns_libvirt DNS provider manages DNS records using dnsmasq through the libvirt API.
It uses the ruby-libvirt gem to connect to the local or a remote instance of the libvirt daemon.

- You can use foreman-installer to configure dns_libvirt:

  # foreman-installer \
  --foreman-proxy-dns true \
  --foreman-proxy-dns-provider libvirt \
  --foreman-proxy-libvirt-network default \
  --foreman-proxy-libvirt-url qemu:///system
Note that you can only use one network and URL for both dns_libvirt and dhcp_libvirt.
6.2. Configuring dns_powerdns
The dns_powerdns DNS provider manages DNS records using the PowerDNS REST API.
-
You can use
foreman-installer
to configuredns_powerdns
:# foreman-installer \ --foreman-proxy-dns true \ --foreman-proxy-dns-provider powerdns \ --enable-foreman-proxy-plugin-dns-powerdns \ --foreman-proxy-plugin-dns-powerdns-rest-api-key api_key \ --foreman-proxy-plugin-dns-powerdns-rest-url http://localhost:8081/api/v1/servers/localhost
6.3. Configuring dns_route53
Route 53 is a DNS provider by Amazon. For more information, see aws.amazon.com/route53.
- Enable Route 53 DNS on your orcharhino Proxy:

  # foreman-installer \
  --enable-foreman-proxy-plugin-dns-route53 \
  --foreman-proxy-dns true \
  --foreman-proxy-dns-provider route53 \
  --foreman-proxy-plugin-dns-route53-aws-access-key My_AWS_Access_Key \
  --foreman-proxy-plugin-dns-route53-aws-secret-key My_AWS_Secret_Key
Appendix A: orcharhino Proxy scalability considerations
The maximum number of orcharhino Proxies that orcharhino Server can support has no fixed limit. Testing has shown that an orcharhino Server can support 17 orcharhino Proxies with 2 vCPUs. However, scalability is highly variable, especially when managing Puppet clients.
orcharhino Proxy scalability when managing Puppet clients depends on the number of CPUs, the run-interval distribution, and the number of Puppet managed resources. orcharhino Proxy has a limitation of 100 concurrent Puppet agents running at any single point in time. Running more than 100 concurrent Puppet agents results in a 503 HTTP error.
For example, assuming that Puppet agent runs are evenly distributed with fewer than 100 concurrent Puppet agents running at any single point during a run-interval, an orcharhino Proxy with 4 CPUs supports a maximum of 1250 – 1600 Puppet clients with a moderate workload of 10 Puppet classes assigned to each Puppet client. Depending on the number of Puppet clients required, the orcharhino installation can scale out the number of orcharhino Proxies to support them.
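As a rough sanity check, concurrent agents ≈ number of clients × agent run duration ÷ run interval. Assuming, for illustration, 1600 clients, a hypothetical average agent run of 100 seconds, and a 30-minute (1800-second) run interval: 1600 × 100 ÷ 1800 ≈ 89 concurrent agents, which stays below the limit of 100.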
If you want to scale your orcharhino Proxy when managing Puppet clients, the following assumptions are made:
- There are no external Puppet clients reporting directly to the orcharhino integrated orcharhino Proxy.
- All other Puppet clients report directly to an external orcharhino Proxy.
- There is an evenly distributed run-interval of all Puppet agents.
Note

Deviating from the even distribution increases the risk of overloading orcharhino Server. The limit of 100 concurrent requests applies.
The following table describes the scalability limits using the recommended 4 CPUs.
| Puppet Managed Resources per Host | Run-Interval Distribution |
| --- | --- |
| 1 | 3000 – 2500 |
| 10 | 2400 – 2000 |
| 20 | 1700 – 1400 |
The following table describes the scalability limits using the minimum 2 CPUs.
| Puppet Managed Resources per Host | Run-Interval Distribution |
| --- | --- |
| 1 | 1700 – 1450 |
| 10 | 1500 – 1250 |
| 20 | 850 – 700 |
Appendix B: Troubleshooting DNF modules
If a DNF module fails to enable, it can mean that an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows. List the enabled modules:
# dnf module list --enabled
Ruby
If the Ruby module fails to enable, it can mean that an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows:
List the enabled modules:
# dnf module list --enabled
If the Ruby 2.5 module has already been enabled, perform a module reset:
# dnf module reset ruby
PostgreSQL
If the PostgreSQL module fails to enable, it can mean that an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows:
List the enabled modules:
# dnf module list --enabled
If the PostgreSQL 10 module has already been enabled, perform a module reset:
# dnf module reset postgresql
If a database was previously created using PostgreSQL 10, perform an upgrade:
- Enable the DNF modules:

  # dnf module enable orcharhino:el8

- Install the PostgreSQL upgrade package:

  # dnf install postgresql-upgrade

- Perform the upgrade:

  # postgresql-setup --upgrade
Appendix C: dhcp_isc settings
The dhcp_isc provider uses a combination of the ISC DHCP server OMAPI management interface and parsing of configuration and lease files.
This requires it to be run on the same host as the DHCP server.
The following settings are defined in dhcp_isc.yml:

:config: /etc/dhcp/dhcpd.conf
:leases: /var/lib/dhcpd/dhcpd.leases
:key_name: My_OMAPI_Key
:key_secret: My_Key_Secret
:omapi_port: My_DHCP_Server_Port # default: 7911

The server is defined in dhcp.yml:

:server: My_DHCP_Server_FQDN
Appendix D: DHCP options for network configuration
- --foreman-proxy-dhcp – Enables the DHCP service. You can set this option to true or false.
- --foreman-proxy-dhcp-managed – Enables Foreman to manage the DHCP service. You can set this option to true or false.
- --foreman-proxy-dhcp-gateway – The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network.
- --foreman-proxy-dhcp-interface – Sets the interface for the DHCP service to listen for requests. Set this to eth1.
- --foreman-proxy-dhcp-nameservers – Sets the addresses of the nameservers provided to clients through DHCP. Set this to the address for orcharhino Server on eth1.
- --foreman-proxy-dhcp-range – A space-separated DHCP pool range for Discovered and Unmanaged services.
- --foreman-proxy-dhcp-server – Sets the address of the DHCP server to manage.
- --foreman-proxy-dhcp-subnets – Sets the subnets of the DHCP server to manage. Example: --foreman-proxy-dhcp-subnets 192.168.205.0/255.255.255.128 or --foreman-proxy-dhcp-subnets 192.168.205.128/255.255.255.128
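As an illustration, several of these options can be combined in a single foreman-installer run; the interface and addresses below are examples consistent with the provisioning example earlier in this guide:

# foreman-installer \
--foreman-proxy-dhcp true \
--foreman-proxy-dhcp-managed true \
--foreman-proxy-dhcp-interface eth1 \
--foreman-proxy-dhcp-range "192.0.2.100 192.0.2.150" \
--foreman-proxy-dhcp-gateway 192.0.2.1 \
--foreman-proxy-dhcp-nameservers 192.0.2.2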
Run foreman-installer --help to view more options related to DHCP and other orcharhino Proxy services.