1. Preparing your environment for installation
1.1. System requirements
The following requirements apply to the networked base operating system:

- x86_64 architecture
- 4-core 2.0 GHz CPU at a minimum
- A minimum of 12 GB RAM. In addition, a minimum of 4 GB of swap space is recommended. orcharhino Proxy running with less than the minimum amount of RAM might not operate correctly.
- Administrative user (root) access
- Full forward and reverse DNS resolution using a fully-qualified domain name
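As a quick sanity check, the CPU, memory, and FQDN requirements above can be verified with standard tools. This is an illustrative sketch, not part of the product; the thresholds mirror the list above:

```shell
# Illustrative preflight check for the requirements listed above.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "CPU cores: $cores (minimum: 4)"
echo "RAM: ${mem_gb} GB (minimum: 12)"
# A short hostname (no dots) indicates missing FQDN configuration.
fqdn=$(hostname -f 2>/dev/null || hostname)
case "$fqdn" in
  *.*) echo "hostname $fqdn looks like an FQDN" ;;
  *)   echo "WARNING: $fqdn is a short hostname; configure an FQDN" ;;
esac
```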
orcharhino only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.UTF-8 as the system-wide locale. For more information about configuring the system locale in Enterprise Linux, see Configuring the system locale in Red Hat Enterprise Linux 9 Configuring basic system settings.
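On a systemd-based Enterprise Linux host, the locale can be set and verified with localectl, for example (a configuration sketch; run as root):

```shell
# Set the system-wide locale to US English with UTF-8 encoding.
localectl set-locale LANG=en_US.UTF-8
# Verify the active setting.
localectl status
```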
orcharhino Server and orcharhino Proxy do not support short hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a short name. This does not apply to the clients of an orcharhino.
Before you install orcharhino Proxy, ensure that your environment meets the requirements for installation.
Warning

The version of orcharhino Proxy must match the version of the installed orcharhino Server. For example, orcharhino Proxy 7.0 cannot be registered with orcharhino 6.11.
orcharhino Proxy must be installed on a freshly provisioned system that serves no other function except to run orcharhino Proxy. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that orcharhino Proxy creates:
- apache
- foreman-proxy
- postgres
- pulp
- puppet
- redis
The system clock on the base operating system where you are installing your orcharhino Proxy must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail.
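Time synchronization can typically be checked with chrony, the default NTP client on Enterprise Linux (a sketch; the service name may differ on your system):

```shell
# Enable and start the chrony time service.
systemctl enable --now chronyd
# Verify that the clock is synchronized against an NTP source.
chronyc tracking
```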
1.2. Supported operating systems
The following operating systems are supported by the installer, have packages, and are tested for deploying orcharhino:
Operating System | Architecture | Notes
---|---|---
 | x86_64 only | EPEL is not supported.
ATIX AG advises against using an existing system because the orcharhino installer will affect the configuration of several components.
1.3. Port and firewall requirements
For the components of orcharhino architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls.
The installation of an orcharhino Proxy fails if the ports between orcharhino Server and orcharhino Proxy are not open before installation starts.
Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol.
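Before the installation, reachability of individual TCP ports can be probed from the relevant source machine, for example with bash's built-in /dev/tcp device. This is an illustrative sketch; orcharhino-proxy.example.com and the port list are placeholders:

```shell
# Probe a few of the required TCP ports against a target host.
probe() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port closed or filtered"
  fi
}
probe orcharhino-proxy.example.com 443
probe orcharhino-proxy.example.com 9090
```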
orcharhino Server has an integrated orcharhino Proxy and any host that is directly connected to orcharhino Server is a Client of orcharhino in the context of this section. This includes the base operating system on which orcharhino Proxy is running.
Hosts which are clients of orcharhino Proxies, other than orcharhino’s integrated orcharhino Proxy, do not need access to orcharhino Server. For more information on orcharhino Topology, see orcharhino Proxy networking in Planning for orcharhino.
Required ports can change based on your configuration.
The following tables indicate the destination port and the direction of network traffic:
Destination Port | Protocol | Service | Source | Required For | Description
---|---|---|---|---|---
53 | TCP and UDP | DNS | DNS Servers and clients | Name resolution | DNS (optional)
67 | UDP | DHCP | Client | Dynamic IP | DHCP (optional)
69 | UDP | TFTP | Client | TFTP Server (optional) |
443, 80 | TCP | HTTPS, HTTP | Client | Content Retrieval | Content
443, 80 | TCP | HTTPS, HTTP | Client | Content Host Registration | orcharhino Proxy CA RPM installation
443 | TCP | HTTPS | orcharhino | Content Mirroring | Management
443 | TCP | HTTPS | orcharhino | orcharhino Proxy API | Smart Proxy functionality
443 | TCP | HTTPS | Client | Content Host registration | Initiation, uploading facts, sending installed packages and traces
1883 | TCP | MQTT | Client | Pull-based REX (optional) | Content hosts for REX job notification (optional)
8000 | TCP | HTTP | Client | Provisioning templates | Template retrieval for client installers, iPXE or UEFI HTTP Boot
8000 | TCP | HTTP | Client | PXE Boot | Installation
8140 | TCP | HTTPS | Client | Puppet agent | Client updates (optional)
8443 | TCP | HTTPS | Client | Content Host registration | Deprecated and only needed for Client hosts deployed before upgrades
9090 | TCP | HTTPS | orcharhino | orcharhino Proxy API | orcharhino Proxy functionality
9090 | TCP | HTTPS | Client | Register Endpoint | Client registration with an external orcharhino Proxy
9090 | TCP | HTTPS | Client | OpenSCAP | Configure Client (if the OpenSCAP plugin is installed)
9090 | TCP | HTTPS | Discovered Node | Discovery | Host discovery and provisioning (if the discovery plugin is installed)
Any host that is directly connected to orcharhino Server is a client in this context because it is a client of the integrated orcharhino Proxy. This includes the base operating system on which an orcharhino Proxy is running.
A DHCP orcharhino Proxy performs ICMP ping and TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off using foreman-installer --foreman-proxy-dhcp-ping-free-ip=false.
Destination Port | Protocol | Service | Destination | Required For | Description
---|---|---|---|---|---
 | ICMP | ping | Client | DHCP | Free IP checking (optional)
7 | TCP | echo | Client | DHCP | Free IP checking (optional)
22 | TCP | SSH | Target host | Remote execution | Run jobs
53 | TCP and UDP | DNS | DNS Servers on the Internet | DNS Server | Resolve DNS records (optional)
53 | TCP and UDP | DNS | DNS Server | orcharhino Proxy DNS | Validation of DNS conflicts (optional)
68 | UDP | DHCP | Client | Dynamic IP | DHCP (optional)
443 | TCP | HTTPS | orcharhino | orcharhino Proxy | orcharhino Proxy configuration management, template retrieval, OpenSCAP, remote execution result upload
443 | TCP | HTTPS | orcharhino | Content | Sync
443 | TCP | HTTPS | orcharhino | Client communication | Forward requests from Client to orcharhino
443 | TCP | HTTPS | Infoblox DHCP Server | DHCP management | When using Infoblox for DHCP, management of the DHCP leases (optional)
623 | | | Client | Power management | BMC On/Off/Cycle/Status
7911 | TCP | DHCP, OMAPI | DHCP Server | DHCP | The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost. ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI.
8443 | TCP | HTTPS | Client | Discovery | orcharhino Proxy sends reboot command to the discovered host (optional)
Note

ICMP to Port 7 UDP and TCP must not be rejected, but can be dropped. The DHCP orcharhino Proxy sends an ECHO REQUEST to the Client network to verify that an IP address is free. A response prevents IP addresses from being allocated.
1.4. Enabling connections from orcharhino Server and clients to an orcharhino Proxy
On the base operating system on which you want to install orcharhino Proxy, you must enable incoming connections from orcharhino Server and clients to orcharhino Proxy and make these rules persistent across reboots.
- Open the ports for clients on orcharhino Proxy:

  # firewall-cmd \
  --add-port="8000/tcp" \
  --add-port="9090/tcp"

- Allow access to services on orcharhino Proxy:

  # firewall-cmd \
  --add-service=dns \
  --add-service=dhcp \
  --add-service=tftp \
  --add-service=http \
  --add-service=https \
  --add-service=puppetmaster

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent

- Verify the firewall configuration:

  # firewall-cmd --list-all
For more information, see Using and configuring firewalld in Red Hat Enterprise Linux 9 Configuring firewalls and packet filters.
2. Installing orcharhino Proxy
Before you install orcharhino Proxy, you must ensure that your environment meets the requirements for installation. For more information, see Preparing your Environment for Installation.
2.1. Configuring repositories
Ensure the repositories required to install orcharhino Proxy are enabled on your Enterprise Linux host.
2.2. Optional: Using fapolicyd on orcharhino Proxy
By enabling fapolicyd on your orcharhino Proxy, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn fapolicyd on or off on your orcharhino Server or orcharhino Proxy at any point.
2.2.1. Installing fapolicyd on orcharhino Proxy
You can install fapolicyd along with orcharhino Proxy, or add it to an existing orcharhino Proxy. If you are installing fapolicyd along with a new orcharhino Proxy, the installation process detects fapolicyd on your Enterprise Linux host and deploys the orcharhino Proxy rules automatically.
- Ensure your host has access to the BaseOS repositories of Enterprise Linux.
- Install fapolicyd:

  # dnf install fapolicyd

- Start and enable the fapolicyd service:

  # systemctl enable --now fapolicyd

- Verify that the fapolicyd service is running correctly:

  # systemctl status fapolicyd
In case of a new orcharhino Server or orcharhino Proxy installation, follow the standard installation procedures after installing and enabling fapolicyd on your Enterprise Linux host.
For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 9 Security hardening.
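If you need to confirm that fapolicyd is configured and trusting the expected binaries, the fapolicyd-cli utility can help. This is a sketch; it assumes the fapolicyd package is installed and is run as root:

```shell
# Check the rule configuration for syntax errors.
fapolicyd-cli --check-config
# Notify the daemon to reload the trust database after package changes.
fapolicyd-cli --update
```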
2.3. Installing orcharhino Proxy packages
Before installing orcharhino Proxy packages, you must update all packages that are installed on the base operating system.
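The package update mentioned above is typically performed with dnf, for example:

```shell
# Update all packages on the base operating system.
dnf upgrade
# Reboot if a new kernel was installed, then proceed with the installation.
```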
To install orcharhino Proxy, complete the following steps:
2.5. Assigning the correct organization and location to orcharhino Proxy in the orcharhino management UI
After installing orcharhino Proxy packages, if there is more than one organization or location, you must assign the correct organization and location to orcharhino Proxy to make orcharhino Proxy visible in the orcharhino management UI.
- Log in to the orcharhino management UI.
- From the Organization list in the upper-left of the screen, select Any Organization.
- From the Location list in the upper-left of the screen, select Any Location.
- In the orcharhino management UI, navigate to Hosts > All Hosts and select orcharhino Proxy.
- From the Select Actions list, select Assign Organization.
- From the Organization list, select the organization where you want to assign this orcharhino Proxy.
- Click Fix Organization on Mismatch.
- Click Submit.
- Select orcharhino Proxy. From the Select Actions list, select Assign Location.
- From the Location list, select the location where you want to assign this orcharhino Proxy.
- Click Fix Location on Mismatch.
- Click Submit.
- In the orcharhino management UI, navigate to Administer > Organizations and click the organization to which you have assigned orcharhino Proxy.
- Click the orcharhino Proxies tab, ensure that orcharhino Proxy is listed under Selected items, and then click Submit.
- In the orcharhino management UI, navigate to Administer > Locations and click the location to which you have assigned orcharhino Proxy.
- Click the orcharhino Proxies tab, ensure that orcharhino Proxy is listed under Selected items, and then click Submit.
Optionally, you can verify whether orcharhino Proxy is correctly listed in the orcharhino management UI:

- Select the organization from the Organization list.
- Select the location from the Location list.
- In the orcharhino management UI, navigate to Hosts > All Hosts.
- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
3. Performing additional configuration on orcharhino Proxy
After installation, you can configure additional settings on your orcharhino Proxy.
3.1. Configuring orcharhino Proxy for host registration and provisioning
Use this procedure to configure orcharhino Proxy so that you can register and provision hosts using your orcharhino Proxy instead of your orcharhino Server.
- On orcharhino Server, add orcharhino Proxy to the list of trusted proxies.

  This is required for orcharhino to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by orcharhino Proxy. For security reasons, orcharhino recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of orcharhino Proxies, or network ranges.

  Warning: Do not use a network range that is too broad, because that might cause a security risk.

  Enter the following command. Note that the command overwrites the list that is currently stored in orcharhino. Therefore, if you have set any trusted proxies previously, you must include them in the command as well:

  # foreman-installer \
  --foreman-trusted-proxies "127.0.0.1/8" \
  --foreman-trusted-proxies "::1" \
  --foreman-trusted-proxies "My_IP_address" \
  --foreman-trusted-proxies "My_IP_range"

  The localhost entries are required; do not omit them.
- List the current trusted proxies using the full help of the orcharhino installer:

  # foreman-installer --full-help | grep -A 2 "trusted-proxies"

- Verify that the listing contains all trusted proxies you require.
3.2. Configuring pull-based transport for remote execution
By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from orcharhino Proxy to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to orcharhino Proxy. The use of pull-based transport is not limited to those infrastructures.
The pull-based transport comprises pull-mqtt mode on orcharhino Proxies in combination with a pull client running on hosts.
Note

The pull-mqtt mode works only with the Script provider.
The mode is configured per orcharhino Proxy. Some orcharhino Proxies can be configured to use pull-mqtt mode while others use SSH. If this is the case, it is possible that one remote job on a given host will use the pull client and the next job on the same host will use SSH. If you wish to avoid this scenario, configure all orcharhino Proxies to use the same mode.
- Enable the pull-based transport on your orcharhino Proxy:

  # foreman-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt

- Configure the firewall to allow the MQTT service on port 1883:

  # firewall-cmd --add-service=mqtt

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent

- In pull-mqtt mode, hosts subscribe for job notifications to either your orcharhino Server or the orcharhino Proxy through which they are registered. Ensure that orcharhino Server sends remote execution jobs to that same orcharhino Server or orcharhino Proxy:

  - In the orcharhino management UI, navigate to Administer > Settings.
  - On the Content tab, set the value of Prefer registered through orcharhino Proxy for remote execution to Yes.

- Configure your hosts for the pull-based transport. For more information, see Transport modes for remote execution in Managing hosts.
3.3. Enabling OpenSCAP on orcharhino Proxies
On orcharhino Server and the integrated orcharhino Proxy of your orcharhino Server, OpenSCAP is enabled by default. To use the OpenSCAP plugin and content on external orcharhino Proxies, you must enable OpenSCAP on each orcharhino Proxy.
- To enable OpenSCAP, enter the following command:

  # foreman-installer \
  --enable-foreman-proxy-plugin-openscap \
  --foreman-proxy-plugin-openscap-ansible-module true \
  --foreman-proxy-plugin-openscap-puppet-module true
If you want to use Puppet to deploy compliance policies, you must enable it first. For more information, see Configuring hosts by using Puppet.
3.4. Adding lifecycle environments to orcharhino Proxies
If your orcharhino Proxy has the content functionality enabled, you must add an environment so that orcharhino Proxy can synchronize content from orcharhino Server and provide content to host systems.
Do not assign the Library lifecycle environment to your orcharhino Proxy because it triggers an automated orcharhino Proxy sync every time the CDN updates a repository. This might consume significant system resources on orcharhino Proxies, network bandwidth between orcharhino and orcharhino Proxies, and available disk space on orcharhino Proxies.
You can use Hammer CLI on orcharhino Server or the orcharhino management UI.
- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to add a lifecycle environment to.
- Click Edit and click the Lifecycle Environments tab.
- From the left menu, select the lifecycle environments that you want to add to orcharhino Proxy and click Submit.
- To synchronize the content on the orcharhino Proxy, click the Overview tab and click Synchronize.
- Select either Optimized Sync or Complete Sync.

  For definitions of each synchronization type, see Recovering a Repository.
- To display a list of all orcharhino Proxies, on orcharhino Server, enter the following command:

  # hammer proxy list

  Note the orcharhino Proxy ID of the orcharhino Proxy to which you want to add a lifecycle environment.

- Using the ID, verify the details of your orcharhino Proxy:

  # hammer proxy info \
  --id My_orcharhino_Proxy_ID

- To view the lifecycle environments available for your orcharhino Proxy, enter the following command and note the ID and the organization name:

  # hammer proxy content available-lifecycle-environments \
  --id My_orcharhino_Proxy_ID

- Add the lifecycle environment to your orcharhino Proxy:

  # hammer proxy content add-lifecycle-environment \
  --id My_orcharhino_Proxy_ID \
  --lifecycle-environment-id My_Lifecycle_Environment_ID \
  --organization "My_Organization"

  Repeat for each lifecycle environment you want to add to orcharhino Proxy.

- Synchronize the content from orcharhino to orcharhino Proxy:

  - To synchronize all content from your orcharhino Server environment to orcharhino Proxy, enter the following command:

    # hammer proxy content synchronize \
    --id My_orcharhino_Proxy_ID

  - To synchronize a specific lifecycle environment from your orcharhino Server to orcharhino Proxy, enter the following command:

    # hammer proxy content synchronize \
    --id My_orcharhino_Proxy_ID \
    --lifecycle-environment-id My_Lifecycle_Environment_ID

  - To synchronize all content from your orcharhino Server to your orcharhino Proxy without checking metadata:

    # hammer proxy content synchronize \
    --id My_orcharhino_Proxy_ID \
    --skip-metadata-check true

    This equals selecting Complete Sync in the orcharhino management UI.
3.5. Enabling power management on hosts
To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on orcharhino Proxy.
- All hosts must have a network interface of BMC type. orcharhino Proxy uses this NIC to pass the appropriate credentials to the host. For more information, see Configuring a Baseboard Management Controller (BMC) Interface in Managing hosts.
- To enable BMC, enter the following command:

  # foreman-installer \
  --foreman-proxy-bmc "true" \
  --foreman-proxy-bmc-default-provider "freeipmi"
3.6. Configuring DNS, DHCP, and TFTP on orcharhino Proxy
To configure the DNS, DHCP, and TFTP services on orcharhino Proxy, use the foreman-installer command with the options appropriate for your environment. Any change to the settings requires entering the foreman-installer command again. You can enter the command multiple times; each time, it updates all configuration files with the changed values.
- You must have the correct network name (dns-interface) for the DNS server.
- You must have the correct interface name (dhcp-interface) for the DHCP server.
- Contact your network administrator to ensure that you have the correct settings.

Enter the foreman-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services:

# foreman-installer \
--foreman-proxy-dns true \
--foreman-proxy-dns-managed true \
--foreman-proxy-dns-zone example.com \
--foreman-proxy-dns-reverse 2.0.192.in-addr.arpa \
--foreman-proxy-dhcp true \
--foreman-proxy-dhcp-managed true \
--foreman-proxy-dhcp-range "192.0.2.100 192.0.2.150" \
--foreman-proxy-dhcp-gateway 192.0.2.1 \
--foreman-proxy-dhcp-nameservers 192.0.2.2 \
--foreman-proxy-tftp true \
--foreman-proxy-tftp-managed true \
--foreman-proxy-tftp-servername 192.0.2.3
You can monitor the progress of the foreman-installer command displayed in your prompt. You can view the logs in /var/log/foreman-installer/katello.log.
- For more information about the foreman-installer command, enter foreman-installer --help.
- For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning hosts.
4. Configuring orcharhino Proxy with external services
If you do not want to configure the DNS, DHCP, and TFTP services on orcharhino Proxy, use this section to configure your orcharhino Proxy to work with external DNS, DHCP, and TFTP services.
4.1. Configuring orcharhino Proxy with external DNS
You can configure orcharhino Proxy with external DNS.
orcharhino Proxy uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the foreman-installer command with the options appropriate for your environment.
- You must have a configured external DNS server.
- This guide assumes you have an existing installation.
- Copy the /etc/rndc.key file from the external DNS server to orcharhino Proxy:

  # scp root@dns.example.com:/etc/rndc.key /etc/foreman-proxy/rndc.key

- Configure the ownership, permissions, and SELinux context:

  # restorecon -v /etc/foreman-proxy/rndc.key
  # chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key
  # chmod -v 640 /etc/foreman-proxy/rndc.key

- To test the nsupdate utility, add a host remotely:

  # echo -e "server DNS_IP_Address\n \
  update add aaa.example.com 3600 IN A Host_IP_Address\n \
  send\n" | nsupdate -k /etc/foreman-proxy/rndc.key
  # nslookup aaa.example.com DNS_IP_Address
  # echo -e "server DNS_IP_Address\n \
  update delete aaa.example.com 3600 IN A Host_IP_Address\n \
  send\n" | nsupdate -k /etc/foreman-proxy/rndc.key

- Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file:

  # foreman-installer --foreman-proxy-dns=true \
  --foreman-proxy-dns-managed=false \
  --foreman-proxy-dns-provider=nsupdate \
  --foreman-proxy-dns-server="DNS_IP_Address" \
  --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key
- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
- Locate the orcharhino Proxy and select Refresh from the list in the Actions column.
- Associate the DNS service with the appropriate subnets and domain.
4.2. Configuring orcharhino Proxy with external DHCP
To configure orcharhino Proxy with external DHCP, you must complete the following procedures:
4.2.1. Configuring an external DHCP server to use with orcharhino Proxy
To configure an external DHCP server running Enterprise Linux to use with orcharhino Proxy, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with orcharhino Proxy. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files.
Note

If you use dnsmasq as an external DHCP server, enable the dhcp_no_override setting.
- On your Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages:

  # dnf install dhcp-server bind-utils

- Generate a security token:

  # tsig-keygen -a hmac-md5 omapi_key

- Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen. The following is an example:

  # cat /etc/dhcp/dhcpd.conf
  default-lease-time 604800;
  max-lease-time 2592000;
  log-facility local7;

  subnet 192.168.38.0 netmask 255.255.255.0 {
    range 192.168.38.10 192.168.38.100;
    option routers 192.168.38.1;
    option subnet-mask 255.255.255.0;
    option domain-search "virtual.lan";
    option domain-name "virtual.lan";
    option domain-name-servers 8.8.8.8;
  }

  omapi-port 7911;
  key omapi_key {
    algorithm hmac-md5;
    secret "My_Secret";
  };
  omapi-key omapi_key;

  Note that the option routers value is the IP address of your orcharhino Server or orcharhino Proxy that you want to use with an external DHCP service.

- On orcharhino Server, define each subnet. Do not set DHCP orcharhino Proxy for the defined subnet yet.

  To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the orcharhino management UI define the reservation range as 192.168.38.101 to 192.168.38.250.
- Configure the firewall for external access to the DHCP server:

  # firewall-cmd --add-service dhcp

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent

- On orcharhino Server, determine the UID and GID of the foreman user:

  # id -u foreman
  993
  # id -g foreman
  990

- On the DHCP server, create the foreman user and group with the same IDs as determined in the previous step:

  # groupadd -g 990 foreman
  # useradd -u 993 -g 990 -s /sbin/nologin foreman

- To ensure that the configuration files are accessible, restore the read and execute flags:

  # chmod o+rx /etc/dhcp/
  # chmod o+r /etc/dhcp/dhcpd.conf
  # chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf

- Enable and start the DHCP service:

  # systemctl enable --now dhcpd
- Export the DHCP configuration and lease files using NFS:

  # dnf install nfs-utils
  # systemctl enable --now nfs-server

- Create directories for the DHCP configuration and lease files that you want to export using NFS:

  # mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp

- To create mount points for the created directories, add the following lines to the /etc/fstab file:

  /var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0
  /etc/dhcp /exports/etc/dhcp none bind,auto 0 0

- Mount the file systems in /etc/fstab:

  # mount -a

- Ensure the following lines are present in /etc/exports:

  /exports 192.168.38.1(rw,async,no_root_squash,fsid=0,no_subtree_check)
  /exports/etc/dhcp 192.168.38.1(ro,async,no_root_squash,no_subtree_check,nohide)
  /exports/var/lib/dhcpd 192.168.38.1(ro,async,no_root_squash,no_subtree_check,nohide)

  Note that the IP address that you enter is the orcharhino or orcharhino Proxy IP address that you want to use with an external DHCP service.

- Reload the NFS server:

  # exportfs -rva

- Configure the firewall for the DHCP OMAPI port 7911:

  # firewall-cmd --add-port=7911/tcp

- Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3.

  # firewall-cmd \
  --add-service mountd \
  --add-service nfs \
  --add-service rpc-bind \
  --zone public

- Make the changes persistent:

  # firewall-cmd --runtime-to-permanent
4.2.2. Configuring orcharhino Server with an external DHCP server
You can configure orcharhino Proxy with an external DHCP server.
- Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with orcharhino Proxy. For more information, see Configuring an external DHCP server to use with orcharhino Proxy.
- Install the nfs-utils package:

  # dnf install nfs-utils

- Create the DHCP directories for NFS:

  # mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd

- Change the file owner:

  # chown -R foreman-proxy /mnt/nfs

- Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths:

  # showmount -e DHCP_Server_FQDN
  # rpcinfo -p DHCP_Server_FQDN

- Add the following lines to the /etc/fstab file:

  DHCP_Server_FQDN:/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context="system_u:object_r:dhcp_etc_t:s0" 0 0
  DHCP_Server_FQDN:/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context="system_u:object_r:dhcpd_state_t:s0" 0 0

- Mount the file systems in /etc/fstab:

  # mount -a

- To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files:

  # su foreman-proxy -s /bin/bash
  $ cat /mnt/nfs/etc/dhcp/dhcpd.conf
  $ cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases
  $ exit

- Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file:

  # foreman-installer \
  --enable-foreman-proxy-plugin-dhcp-remote-isc \
  --foreman-proxy-dhcp-provider=remote_isc \
  --foreman-proxy-dhcp-server=My_DHCP_Server_FQDN \
  --foreman-proxy-dhcp=true \
  --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf \
  --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases \
  --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key \
  --foreman-proxy-plugin-dhcp-remote-isc-key-secret=My_Secret \
  --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911
- Associate the DHCP service with the appropriate subnets and domain.
4.3. Configuring orcharhino Proxy with external TFTP
You can configure orcharhino Proxy with external TFTP services.
- Create the TFTP directory for NFS:

  # mkdir -p /mnt/nfs/var/lib/tftpboot

- In the /etc/fstab file, add the following line:

  TFTP_Server_IP_Address:/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context="system_u:object_r:tftpdir_rw_t:s0" 0 0

- Mount the file systems in /etc/fstab:

  # mount -a

- Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file:

  # foreman-installer \
  --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot \
  --foreman-proxy-tftp=true

- If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on:

  # foreman-installer --foreman-proxy-tftp-servername=TFTP_Server_FQDN

- In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
- Locate the orcharhino Proxy and select Refresh from the list in the Actions column.
- Associate the TFTP service with the appropriate subnets and domain.
4.4. Configuring orcharhino Proxy with external IdM DNS
When orcharhino Server adds a DNS record for a host, it first determines which orcharhino Proxy is providing DNS for that domain. It then communicates with the orcharhino Proxy that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the orcharhino or orcharhino Proxy that is currently configured to provide a DNS service for the domain you want to manage by using the IdM server.
To configure orcharhino Proxy to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures:
To revert to internal DNS service, use the following procedure:
Note

You are not required to use orcharhino Proxy to manage DNS. When you are using the realm enrollment feature of orcharhino, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring orcharhino Proxy with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see Configuring orcharhino to manage the lifecycle of a host registered to a FreeIPA realm in Installing orcharhino Server.
4.4.1. Configuring dynamic DNS update with GSS-TSIG authentication
You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC 3645. To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the orcharhino Proxy base operating system.
-
You must ensure the IdM server is deployed and the host-based firewall is configured correctly.
-
You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server.
-
You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted.
To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps:
-
Obtain a Kerberos ticket for the account obtained from the IdM administrator:
# kinit idm_user
-
Create a new Kerberos principal for orcharhino Proxy to use to authenticate on the IdM server:
# ipa service-add orcharhino-proxy.example.com
-
On the base operating system of either the orcharhino or orcharhino Proxy that is managing the DNS service for your deployment, install the ipa-client package:
# dnf install ipa-client
-
Configure the IdM client by running the installation script and following the on-screen prompts:
# ipa-client-install
-
Obtain a Kerberos ticket:
# kinit admin
-
Remove any preexisting keytab:
# rm /etc/foreman-proxy/dns.keytab
-
Obtain the keytab for this system:
# ipa-getkeytab -p orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM \
-s idm1.example.com -k /etc/foreman-proxy/dns.keytab
Note: When adding a keytab to a standby system with the same hostname as the original system in service, add the -r option to prevent generating new credentials and rendering the credentials on the original system invalid.
-
For the dns.keytab file, set the group and owner to foreman-proxy:
# chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab
-
Optional: To verify that the keytab file is valid, enter the following command:
# kinit -kt /etc/foreman-proxy/dns.keytab \
orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM
-
Create and configure the zone that you want to manage:
-
Navigate to Network Services > DNS > DNS Zones.
-
Select Add and enter the zone name. For example,
example.com
. -
Click Add and Edit.
-
Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list:
grant orcharhinoproxy\047orcharhino.example.com@EXAMPLE.COM wildcard * ANY;
-
Set Dynamic update to True.
-
Enable Allow PTR sync.
-
Click Save to save the changes.
-
-
Create and configure the reverse zone:
-
Navigate to Network Services > DNS > DNS Zones.
-
Click Add.
-
Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups.
-
Click Add and Edit.
-
Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list:
grant orcharhinoproxy\047orcharhino.example.com@EXAMPLE.COM wildcard * ANY;
-
Set Dynamic update to True.
-
Click Save to save the changes.
-
-
Configure your orcharhino Server or orcharhino Proxy to connect to your DNS service:
# foreman-installer \
--foreman-proxy-dns-managed=false \
--foreman-proxy-dns-provider=nsupdate_gss \
--foreman-proxy-dns-server="idm1.example.com" \
--foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab \
--foreman-proxy-dns-tsig-principal="orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM" \
--foreman-proxy-dns=true
-
For each affected orcharhino Proxy, update the configuration of that orcharhino Proxy in the orcharhino management UI:
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, locate the orcharhino Proxy, and from the list in the Actions column, select Refresh.
-
Configure the domain:
-
In the orcharhino management UI, navigate to Infrastructure > Domains and select the domain name.
-
In the Domain tab, ensure DNS orcharhino Proxy is set to the orcharhino Proxy where the subnet is connected.
-
-
Configure the subnet:
-
In the orcharhino management UI, navigate to Infrastructure > Subnets and select the subnet name.
-
In the Subnet tab, set IPAM to None.
-
In the Domains tab, select the domain that you want to manage using the IdM server.
-
In the orcharhino Proxies tab, ensure Reverse DNS orcharhino Proxy is set to the orcharhino Proxy where the subnet is connected.
-
Click Submit to save the changes.
-
-
4.4.2. Configuring dynamic DNS update with TSIG authentication
You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology, which uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC 2845.
-
You must ensure the IdM server is deployed and the host-based firewall is configured correctly.
-
You must obtain
root
user access on the IdM server. -
You must confirm whether orcharhino Server or orcharhino Proxy is configured to provide DNS service for your deployment.
-
You must configure DNS, DHCP and TFTP services on the base operating system of either the orcharhino or orcharhino Proxy that is managing the DNS service for your deployment.
-
You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted.
To configure dynamic DNS update with TSIG authentication, complete the following steps:
-
On the IdM Server, add the following to the top of the /etc/named.conf file:
########################################################################
include "/etc/rndc.key";
controls {
inet _IdM_Server_IP_Address_ port 953 allow { _orcharhino_IP_Address_; } keys { "rndc-key"; };
};
########################################################################
-
Reload the named service to make the changes take effect:
# systemctl reload named
-
In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes:
-
Add the following in the BIND update policy box:
grant "rndc-key" zonesub ANY;
-
Set Dynamic update to True.
-
Click Update to save the changes.
-
-
Copy the /etc/rndc.key file from the IdM server to the base operating system of your orcharhino Server. Enter the following command:
# scp /etc/rndc.key root@orcharhino.example.com:/etc/rndc.key
-
To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following commands:
# restorecon -v /etc/rndc.key
# chown -v root:named /etc/rndc.key
# chmod -v 640 /etc/rndc.key
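A quick way to confirm the result is to compare the file mode against the expected value. The sketch below checks only the numeric permission bits; the sample file is an illustrative stand-in for /etc/rndc.key (GNU stat assumed):

```shell
# Create a stand-in file with the expected 0640 mode (illustrative path):
touch /tmp/rndc.key.sample
chmod 640 /tmp/rndc.key.sample

# Verify that the key file has the expected permission bits:
mode=$(stat -c '%a' /tmp/rndc.key.sample)
if [ "$mode" = "640" ]; then
  echo "permissions ok"
else
  echo "unexpected mode: $mode"
fi
```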
-
Assign the foreman-proxy user to the named group. Normally, foreman-installer ensures that the foreman-proxy user belongs to the named UNIX group. However, in this scenario orcharhino does not manage users and groups, so you must add the foreman-proxy user to the named group manually:
# usermod -a -G named foreman-proxy
-
On orcharhino Server, enter the following foreman-installer command to configure orcharhino to use the external DNS server:
# foreman-installer \
--foreman-proxy-dns-managed=false \
--foreman-proxy-dns-provider=nsupdate \
--foreman-proxy-dns-server="IdM_Server_IP_Address" \
--foreman-proxy-dns-ttl=86400 \
--foreman-proxy-dns=true \
--foreman-proxy-keyfile=/etc/rndc.key
-
Ensure that the key in the /etc/rndc.key file on orcharhino Server is the same key that is used on the IdM server:
key "rndc-key" {
algorithm hmac-md5;
secret "secret-key==";
};
-
On orcharhino Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1:
# echo -e "server 192.168.25.1\n \
update add test.example.com 3600 IN A 192.168.25.20\n \
send\n" | nsupdate -k /etc/rndc.key
-
On orcharhino Server, test the DNS entry:
# nslookup test.example.com 192.168.25.1
Example output:
Server:   192.168.25.1
Address:  192.168.25.1#53
Name:     test.example.com
Address:  192.168.25.20
-
To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones. Click the name of the zone and search for the host by name.
-
If resolved successfully, remove the test DNS entry:
# echo -e "server 192.168.25.1\n \
update delete test.example.com 3600 IN A 192.168.25.20\n \
send\n" | nsupdate -k /etc/rndc.key
-
Confirm that the DNS entry was removed:
# nslookup test.example.com 192.168.25.1
The nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted.
4.4.3. Reverting to internal DNS service
You can revert to using orcharhino Server and orcharhino Proxy as your DNS providers. If you created a backup of the answer file before configuring external DNS, you can restore that backup; otherwise, you can reconfigure internal DNS directly with foreman-installer options.
On the orcharhino or orcharhino Proxy that you want to configure to manage DNS service for the domain, complete the following steps:
-
If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the foreman-installer command:
# foreman-installer
-
If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure orcharhino or orcharhino Proxy as a DNS server without using an answer file, enter the following foreman-installer command on orcharhino or orcharhino Proxy:
# foreman-installer \
--foreman-proxy-dns-managed=true \
--foreman-proxy-dns-provider=nsupdate \
--foreman-proxy-dns-server="127.0.0.1" \
--foreman-proxy-dns=true
For more information, see Configuring DNS, DHCP, and TFTP on orcharhino Proxy.
After you run the foreman-installer command to make any changes to your orcharhino Proxy configuration, you must update the configuration of each affected orcharhino Proxy in the orcharhino management UI.
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
-
For each orcharhino Proxy that you want to update, from the Actions list, select Refresh.
-
Configure the domain:
-
In the orcharhino management UI, navigate to Infrastructure > Domains and click the domain name that you want to configure.
-
In the Domain tab, set DNS orcharhino Proxy to the orcharhino Proxy where the subnet is connected.
-
-
Configure the subnet:
-
In the orcharhino management UI, navigate to Infrastructure > Subnets and select the subnet name.
-
In the Subnet tab, set IPAM to DHCP or Internal DB.
-
In the Domains tab, select the domain that you want to manage using orcharhino or orcharhino Proxy.
-
In the orcharhino Proxies tab, set Reverse DNS orcharhino Proxy to the orcharhino Proxy where the subnet is connected.
-
Click Submit to save the changes.
-
4.5. Configuring orcharhino to manage the lifecycle of a host registered to a FreeIPA realm
In addition to providing access to orcharhino Server, hosts provisioned with orcharhino can also be integrated with FreeIPA realms. orcharhino has a realm feature that automatically manages the lifecycle of any system registered to a realm or domain provider.
Use this section to configure orcharhino Server or orcharhino Proxy for FreeIPA realm support, then add hosts to the FreeIPA realm group.
-
orcharhino Server that is registered to the Content Delivery Network or an external orcharhino Proxy that is registered to orcharhino Server.
-
A deployed realm or domain provider such as FreeIPA.
To use FreeIPA for provisioned hosts, complete the following steps to install and configure FreeIPA packages on orcharhino Server or orcharhino Proxy:
-
Install the ipa-client package on orcharhino Server or orcharhino Proxy:
# dnf install ipa-client
-
Configure the server as a FreeIPA client:
# ipa-client-install
-
Create a realm proxy user, realm-orcharhino-proxy, and the relevant roles in FreeIPA:
# foreman-prepare-realm admin realm-orcharhino-proxy
Note the principal name that returns and your FreeIPA server configuration details because you require them for the following procedure.
Complete the following procedure on orcharhino and every orcharhino Proxy that you want to use:
-
Copy the /root/freeipa.keytab file to any orcharhino Proxy that you want to include in the same principal and realm:
# scp /root/freeipa.keytab root@orcharhino-proxy.example.com:/etc/foreman-proxy/freeipa.keytab
-
Move the /root/freeipa.keytab file to the /etc/foreman-proxy directory and set the ownership settings to the foreman-proxy user:
# mv /root/freeipa.keytab /etc/foreman-proxy
# chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab
-
Enter the following command on all orcharhino Proxies that you want to include in the realm. If you use the integrated orcharhino Proxy on orcharhino, enter this command on orcharhino Server:
# foreman-installer --foreman-proxy-realm true \
--foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab \
--foreman-proxy-realm-principal realm-orcharhino-proxy@EXAMPLE.COM \
--foreman-proxy-realm-provider freeipa
You can also use these options when you first configure the orcharhino Server.
-
Ensure that the latest version of the ca-certificates package is installed and trust the FreeIPA Certificate Authority:
# cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt
# update-ca-trust enable
# update-ca-trust
-
Optional: If you configure FreeIPA on an existing orcharhino Server or orcharhino Proxy, complete the following steps to ensure that the configuration changes take effect:
-
Restart the foreman-proxy service:
# systemctl restart foreman-proxy
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
-
Locate the orcharhino Proxy you have configured for FreeIPA and from the list in the Actions column, select Refresh.
-
After you configure your integrated or external orcharhino Proxy with FreeIPA, you must create a realm and add the FreeIPA-configured orcharhino Proxy to the realm.
-
In the orcharhino management UI, navigate to Infrastructure > Realms and click Create Realm.
-
In the Name field, enter a name for the realm.
-
From the Realm Type list, select the type of realm.
-
From the Realm orcharhino Proxy list, select orcharhino Proxy where you have configured FreeIPA.
-
Click the Locations tab and from the Locations list, select the location where you want to add the new realm.
-
Click the Organizations tab and from the Organizations list, select the organization where you want to add the new realm.
-
Click Submit.
You must update any host groups that you want to use with the new realm information.
-
In the orcharhino management UI, navigate to Configure > Host Groups, select the host group that you want to update, and click the Network tab.
-
From the Realm list, select the realm you create as part of this procedure, and then click Submit.
FreeIPA supports the ability to set up automatic membership rules based on a system’s attributes.
orcharhino’s realm feature provides administrators with the ability to map orcharhino host groups to the FreeIPA parameter userclass, which allows administrators to configure automembership.
When nested host groups are used, they are sent to the FreeIPA server as they are displayed in the orcharhino User Interface. For example, "Parent/Child/Child".
orcharhino Server or orcharhino Proxy sends updates to the FreeIPA server, however automembership rules are only applied at initial registration.
-
On the FreeIPA server, create a host group:
# ipa hostgroup-add hostgroup_name --desc=hostgroup_description
-
Create an automembership rule:
# ipa automember-add --type=hostgroup hostgroup_name automember_rule
Where you can use the following options:
-
automember-add
flags the group as an automember group. -
--type=hostgroup
identifies that the target group is a host group, not a user group. -
automember_rule
adds the name you want to identify the automember rule by.
-
-
Define an automembership condition based on the userclass attribute:
# ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex=^webserver hostgroup_name
----------------------------------
Added condition(s) to "hostgroup_name"
----------------------------------
Automember Rule: automember_rule
Inclusive Regex: userclass=^webserver
----------------------------
Number of conditions added 1
----------------------------
Where you can use the following options:
-
automember-add-condition
adds regular expression conditions to identify group members. -
--key=userclass
specifies the key attribute asuserclass
. -
--type=hostgroup
identifies that the target group is a host group, not a user group. -
--inclusive-regex=^webserver
identifies matching values with a regular expression pattern. -
hostgroup_name – identifies the target host group’s name.
-
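To see which userclass values the example inclusive regex would pick up, you can test candidate values locally. grep -E only approximates the server-side matching semantics, and the sample values below are made up, so treat this as an illustration:

```shell
# Check sample userclass values against the example pattern ^webserver:
for uc in webserver01 webserver frontend-webserver db01; do
  if printf '%s\n' "$uc" | grep -qE '^webserver'; then
    echo "$uc: matches"
  else
    echo "$uc: no match"
  fi
done
```

Only values that start with "webserver" match; a value like frontend-webserver does not, because the pattern is anchored at the start.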
When a system is added to orcharhino Server’s hostgroup_name host group, it is added automatically to the FreeIPA server’s "hostgroup_name" host group. FreeIPA host groups allow for Host-Based Access Controls (HBAC), sudo policies and other FreeIPA functions.
5. Managing DHCP by using orcharhino Proxy
orcharhino can integrate with a DHCP service by using your orcharhino Proxy. An orcharhino Proxy has multiple DHCP providers that you can use to integrate orcharhino with your existing DHCP infrastructure or to deploy a new one. You can use the DHCP module of orcharhino Proxy to query for available IP addresses, add new reservations, and delete existing reservations. Note that your orcharhino Proxy cannot manage subnet declarations.
-
dhcp_infoblox
– For more information, see Using Infoblox as DHCP and DNS providers. -
dhcp_isc
– ISC DHCP server over OMAPI. For more information, see Configuring DNS, DHCP, and TFTP on orcharhino Proxy. -
dhcp_remote_isc
– ISC DHCP server over OMAPI with leases mounted through networking. For more information, see Configuring orcharhino Proxy with external DHCP. -
dhcp_libvirt
– dnsmasq DHCP via libvirt API -
dhcp_native_ms
– Microsoft Active Directory by using API
5.1. Configuring dhcp_libvirt
The dhcp_libvirt plugin manages IP reservations and leases using dnsmasq
through the libvirt API.
It uses ruby-libvirt
to connect to the local or remote instance of libvirt daemon.
-
You can use foreman-installer to configure dhcp_libvirt:
# foreman-installer \
--foreman-proxy-dhcp true \
--foreman-proxy-dhcp-provider libvirt \
--foreman-proxy-libvirt-network default \
--foreman-proxy-libvirt-url qemu:///system
5.2. Securing the dhcpd API
orcharhino Proxy interacts with the DHCP daemon by using the dhcpd API to manage DHCP.
By default, the dhcpd API listens to any host without access control.
You can add an omapi_key
to provide basic security.
-
On your orcharhino Proxy, install the required packages:
# dnf install bind-utils
-
Generate a key:
# dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST omapi_key
# cat Komapi_key.+*.private | grep ^Key | cut -d ' ' -f2-
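The generated private key file name contains a random identifier, which is why the command globs for it and then pulls out the Key: line. The sketch below shows that extraction against a sample file standing in for the generated one; the secret shown is a made-up value:

```shell
# Sample private key file standing in for Komapi_key.+157+NNNNN.private
# (illustrative path and dummy secret):
cat > /tmp/Komapi_key.sample.private <<'EOF'
Private-key-format: v1.3
Algorithm: 157 (HMAC_MD5)
Key: bXlzZWNyZXRrZXk=
EOF

# Extract only the base64 secret, suitable for --foreman-proxy-dhcp-key-secret:
grep '^Key:' /tmp/Komapi_key.sample.private | cut -d ' ' -f2-
```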
-
Use foreman-installer to secure the dhcpd API:
# foreman-installer \
--foreman-proxy-dhcp-key-name "My_Name" \
--foreman-proxy-dhcp-key-secret "My_Secret"
6. Managing DNS by using orcharhino Proxy
orcharhino can manage DNS records by using your orcharhino Proxy. DNS management includes updating and removing DNS records from existing DNS zones. An orcharhino Proxy has multiple DNS providers that you can use to integrate orcharhino with your existing DNS infrastructure or to deploy a new one.
After you have enabled DNS, your orcharhino Proxy can manipulate any DNS server that complies with RFC 2136 by using the dns_nsupdate
provider.
Other providers provide more direct integration, such as dns_infoblox
for Infoblox.
-
dns_dnscmd
– Static DNS records in Microsoft Active Directory. -
dns_infoblox
– For more information, see Using Infoblox as DHCP and DNS providers. -
dns_libvirt
– Dnsmasq DNS via libvirt API. For more information, see Configuring dns_libvirt. -
dns_nsupdate
– Dynamic DNS update using nsupdate. For more information, see Configuring dns_nsupdate. -
dns_nsupdate_gss
– Dynamic DNS update with GSS-TSIG. For more information, see Configuring dynamic DNS update with GSS-TSIG authentication. -
dns_powerdns
– PowerDNS. For more information, see Configuring dns_powerdns.
6.1. Configuring dns_nsupdate
The dns_nsupdate DNS provider manages DNS records using the nsupdate
utility.
You can use dns_nsupdate with any DNS server compatible with RFC 2136.
By default, dns_nsupdate installs the ISC BIND server.
For installation without ISC BIND, see Configuring orcharhino Proxy with external DNS.
-
Configure dns_nsupdate:
# foreman-installer \
--foreman-proxy-dns true \
--foreman-proxy-dns-provider nsupdate \
--foreman-proxy-dns-managed true \
--foreman-proxy-dns-zone example.com \
--foreman-proxy-dns-reverse 2.0.192.in-addr.arpa
6.2. Configuring dns_libvirt
The dns_libvirt DNS provider manages DNS records using dnsmasq through the libvirt API.
It uses ruby-libvirt
gem to connect to the local or a remote instance of libvirt daemon.
-
You can use foreman-installer to configure dns_libvirt:
# foreman-installer \
--foreman-proxy-dns true \
--foreman-proxy-dns-provider libvirt \
--foreman-proxy-libvirt-network default \
--foreman-proxy-libvirt-url qemu:///system
Note that you can only use one network and URL for both dns_libvirt and dhcp_libvirt.
6.3. Configuring dns_powerdns
The dns_powerdns DNS provider manages DNS records using the PowerDNS REST API.
-
You can use foreman-installer to configure dns_powerdns:
# foreman-installer \
--foreman-proxy-dns true \
--foreman-proxy-dns-provider powerdns \
--enable-foreman-proxy-plugin-dns-powerdns \
--foreman-proxy-plugin-dns-powerdns-rest-api-key api_key \
--foreman-proxy-plugin-dns-powerdns-rest-url http://localhost:8081/api/v1/servers/localhost
6.4. Configuring dns_route53
Route 53 is a DNS provider by Amazon. For more information, see aws.amazon.com/route53.
-
Enable Route 53 DNS on your orcharhino Proxy:
# foreman-installer \
--enable-foreman-proxy-plugin-dns-route53 \
--foreman-proxy-dns true \
--foreman-proxy-dns-provider route53 \
--foreman-proxy-plugin-dns-route53-aws-access-key My_AWS_Access_Key \
--foreman-proxy-plugin-dns-route53-aws-secret-key My_AWS_Secret_Key
7. Using Infoblox as DHCP and DNS providers
You can use orcharhino Proxy to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses.
The supported Infoblox version is NIOS 8.0 or higher.
7.1. Infoblox limitations
All DHCP and DNS records can be managed only in a single Network or DNS view.
After you install the Infoblox modules on orcharhino Proxy and set up the view using the foreman-installer
command, you cannot edit the view.
orcharhino Proxy communicates with a single Infoblox node by using the standard HTTPS web API. If you want to configure clustering and High Availability, make the configurations in Infoblox.
Hosting PXE-related files by using the TFTP functionality of Infoblox is not supported. You must use orcharhino Proxy as a TFTP server for PXE provisioning. For more information, see Preparing networking in Provisioning hosts.
The orcharhino IPAM feature cannot be integrated with Infoblox.
7.2. Infoblox prerequisites
-
You must have Infoblox account credentials to manage DHCP and DNS entries in orcharhino.
-
Ensure that you have Infoblox administration roles with the names:
DHCP Admin
andDNS Admin
. -
The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API.
7.3. Installing the Infoblox CA certificate
You must install the Infoblox HTTPS CA certificate on the base operating system of orcharhino Proxy.
-
Download the certificate from the Infoblox web UI, or use the following OpenSSL commands to download the certificate:
# update-ca-trust enable
# openssl s_client -showcerts -connect infoblox.example.com:443 </dev/null | \
openssl x509 -text > /etc/pki/ca-trust/source/anchors/infoblox.crt
# update-ca-trust extract
The
infoblox.example.com
entry must match the host name for the Infoblox application in the X509 certificate.
-
Test the CA certificate by using a curl query:
# curl -u admin:password https://infoblox.example.com/wapi/v2.0/network
Example positive response:
[
    {
        "_ref": "network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w:infoblox.example.com/24/default",
        "network": "192.168.202.0/24",
        "network_view": "default"
    }
]
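If you save the response to a file, you can pull the network values out even without a JSON tool. This is a rough grep-based sketch against a sample response; on systems where jq is available, `jq -r '.[].network'` is the more robust choice:

```shell
# Sample WAPI response standing in for the real curl output (illustrative):
cat > /tmp/wapi.json <<'EOF'
[
    {
        "_ref": "network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w:infoblox.example.com/24/default",
        "network": "192.168.202.0/24",
        "network_view": "default"
    }
]
EOF

# Extract the CIDR networks from the response:
grep -o '"network": "[^"]*"' /tmp/wapi.json | cut -d '"' -f4
```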
7.4. Installing the DHCP Infoblox module
Install the DHCP Infoblox module on orcharhino Proxy. Note that you cannot manage records in separate views.
You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Installing the DNS Infoblox module.
If you want to use the DHCP and DNS Infoblox modules together, configure the DHCP Infoblox module with the fixedaddress
record type only.
The host
record type causes DNS conflicts and is not supported.
If you configure the DHCP Infoblox module with the host
record type, you have to unset both DNS orcharhino Proxy and Reverse DNS orcharhino Proxy options on your Infoblox-managed subnets, because Infoblox does DNS management by itself.
Using the host
record type leads to creating conflicts and being unable to rename hosts in orcharhino.
-
On orcharhino Proxy, enter the following command:
# foreman-installer --enable-foreman-proxy-plugin-dhcp-infoblox \
--foreman-proxy-dhcp true \
--foreman-proxy-dhcp-provider infoblox \
--foreman-proxy-dhcp-server infoblox.example.com \
--foreman-proxy-plugin-dhcp-infoblox-username admin \
--foreman-proxy-plugin-dhcp-infoblox-password infoblox \
--foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress \
--foreman-proxy-plugin-dhcp-infoblox-dns-view default \
--foreman-proxy-plugin-dhcp-infoblox-network-view default
-
Optional: In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, select the orcharhino Proxy with the DHCP Infoblox module, and ensure that the dhcp feature is listed.
-
In the orcharhino management UI, navigate to Infrastructure > Subnets.
-
For all subnets managed through Infoblox, ensure that the IP address management (IPAM) method of the subnet is set to
DHCP
.
7.5. Installing the DNS Infoblox module
Install the DNS Infoblox module on orcharhino Proxy. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Installing the DHCP Infoblox module.
-
On orcharhino Proxy, enter the following command to configure the Infoblox module:
# foreman-installer --enable-foreman-proxy-plugin-dns-infoblox \
--foreman-proxy-dns true \
--foreman-proxy-dns-provider infoblox \
--foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com \
--foreman-proxy-plugin-dns-infoblox-username admin \
--foreman-proxy-plugin-dns-infoblox-password infoblox \
--foreman-proxy-plugin-dns-infoblox-dns-view default
Optionally, you can change the value of the
--foreman-proxy-plugin-dns-infoblox-dns-view
option to specify an Infoblox DNS view other than the default view. -
Optional: In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, select the orcharhino Proxy with the Infoblox DNS module, and ensure that the dns feature is listed.
-
In the orcharhino management UI, navigate to Infrastructure > Domains.
-
For all domains managed through Infoblox, ensure that the DNS orcharhino Proxy is set for those domains.
-
In the orcharhino management UI, navigate to Infrastructure > Subnets.
-
For all subnets managed through Infoblox, ensure that the DNS orcharhino Proxy and Reverse DNS orcharhino Proxy are set for those subnets.
Appendix A: orcharhino Proxy scalability considerations when managing Puppet clients
orcharhino Proxy scalability when managing Puppet clients depends on the number of CPUs, the run-interval distribution, and the number of Puppet managed resources. orcharhino Proxy has a limitation of 100 concurrent Puppet agents running at any single point in time. Running more than 100 concurrent Puppet agents results in a 503 HTTP error.
For example, assuming that Puppet agent runs are evenly distributed with fewer than 100 concurrent Puppet agents running at any single point during a run-interval, an orcharhino Proxy with 4 CPUs has a maximum of 1250 – 1600 Puppet clients with a moderate workload of 10 Puppet classes assigned to each Puppet client. Depending on the number of Puppet clients required, the orcharhino installation can scale out the number of orcharhino Proxies to support them.
If you want to scale your orcharhino Proxy when managing Puppet clients, the following assumptions are made:
-
There are no external Puppet clients reporting directly to the orcharhino integrated orcharhino Proxy.
-
All other Puppet clients report directly to an external orcharhino Proxy.
-
There is an evenly distributed run-interval of all Puppet agents.
Note
|
Deviating from the even distribution increases the risk of overloading orcharhino Server. The limit of 100 concurrent requests applies. |
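The relationship behind these limits can be sketched as average concurrency = clients × run time ÷ run-interval, capped at 100 concurrent agents. The run-interval and average run-time figures below are illustrative assumptions, not vendor numbers:

```shell
# max_clients: how many evenly distributed Puppet agents keep the average
# concurrency at or below the 100-agent cap, given a run-interval and an
# average agent run time in seconds (both assumed values).
max_clients() {
  awk -v interval="$1" -v run_time="$2" -v limit=100 \
    'BEGIN { printf "%d\n", limit * interval / run_time }'
}

max_clients 1800 60   # 30-minute interval, 60 s average run -> 3000
max_clients 1800 120  # heavier runs halve the supported client count -> 1500
```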
The following table describes the scalability limits using the recommended 4 CPUs.
| Puppet Managed Resources per Host | Run-Interval Distribution |
|---|---|
| 1 | 3000 – 2500 |
| 10 | 2400 – 2000 |
| 20 | 1700 – 1400 |
The following table describes the scalability limits using the minimum 2 CPUs.
| Puppet Managed Resources per Host | Run-Interval Distribution |
|---|---|
| 1 | 1700 – 1450 |
| 10 | 1500 – 1250 |
| 20 | 850 – 700 |
Appendix B: dhcp_isc settings
The dhcp_isc provider uses a combination of the ISC DHCP server OMAPI management interface and parsing of configuration and lease files.
This requires it to be run on the same host as the DHCP server.
The following settings are defined in dhcp_isc.yml
:
:config: /etc/dhcp/dhcpd.conf
:leases: /var/lib/dhcpd/dhcpd.leases
:key_name: My_OMAPI_Key
:key_secret: My_Key_Secret
:omapi_port: My_DHCP_Server_Port # default: 7911
The server is defined in dhcp.yml
:
:server: My_DHCP_Server_FQDN
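Put together, a configured pair of files might look as follows. This is an illustrative sketch: the key name, secret, and server FQDN are placeholder values, not defaults:

```yaml
# dhcp_isc.yml (illustrative values)
:config: /etc/dhcp/dhcpd.conf
:leases: /var/lib/dhcpd/dhcpd.leases
:key_name: omapi_key
:key_secret: bXlzZWNyZXRrZXk=
:omapi_port: 7911

# dhcp.yml (illustrative value)
:server: dhcp.example.com
```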
Appendix C: DHCP options for network configuration
- --foreman-proxy-dhcp
-
Enables the DHCP service. You can set this option to
true
orfalse
. - --foreman-proxy-dhcp-managed
-
Enables Foreman to manage the DHCP service. You can set this option to
true
orfalse
. - --foreman-proxy-dhcp-gateway
-
The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network.
- --foreman-proxy-dhcp-interface
-
Sets the interface for the DHCP service to listen for requests. Set this to
eth1
. - --foreman-proxy-dhcp-nameservers
-
Sets the addresses of the nameservers provided to clients through DHCP. Set this to the address for orcharhino Server on
eth1
. - --foreman-proxy-dhcp-range
-
A space-separated DHCP pool range for Discovered and Unmanaged services.
- --foreman-proxy-dhcp-server
-
Sets the address of the DHCP server to manage.
- --foreman-proxy-dhcp-subnets
-
Sets the subnets of the DHCP server to manage. Example:
--foreman-proxy-dhcp-subnets 192.168.205.0/255.255.255.128
or--foreman-proxy-dhcp-subnets 192.168.205.128/255.255.255.128
Run foreman-installer --help
to view more options related to DHCP and other orcharhino Proxy services.