1. Preparing your environment for installation

1.1. System requirements

The following requirements apply to the networked base operating system:

  • x86_64 architecture

  • 4-core 2.0 GHz CPU at a minimum

  • A minimum of 12 GB RAM is required for orcharhino Proxy to function. In addition, a minimum of 4 GB of swap space is recommended. orcharhino Proxy running with less RAM than the minimum value might not operate correctly.

  • Administrative user (root) access

  • Full forward and reverse DNS resolution using a fully-qualified domain name
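
You can verify forward and reverse resolution with a quick check. This is a sketch that assumes the host utility from the bind-utils package is installed; replace My_IP_Address with the IP address of your host:

    # hostname -f
    # host $(hostname -f)
    # host My_IP_Address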

orcharhino only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale setting. For more information about configuring the system locale in Enterprise Linux, see Configuring the system locale in Red Hat Enterprise Linux 9 Configuring basic system settings.
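
For example, you can set the system-wide locale with the standard localectl utility:

    # localectl set-locale LANG=en_US.UTF-8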

orcharhino Server and orcharhino Proxy do not support short names as hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a short name. This does not apply to the clients of orcharhino.
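
For example, you can inspect the CN of a custom certificate with OpenSSL; the certificate path here is a placeholder:

    # openssl x509 -in /root/custom_cert.crt -noout -subject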

Before you install orcharhino Proxy, ensure that your environment meets the requirements for installation.

Warning

The version of orcharhino Proxy must match the version of the orcharhino it is registered to. For example, orcharhino Proxy 7.0 cannot be registered with orcharhino 6.11.

orcharhino Proxy must be installed on a freshly provisioned system that serves no other function except to run orcharhino Proxy. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that orcharhino Proxy creates:

  • apache

  • foreman-proxy

  • postgres

  • pulp

  • puppet

  • redis

Synchronized system clock

The system clock on the base operating system where you are installing your orcharhino Proxy must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail.
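
For example, on Enterprise Linux you can typically keep the clock synchronized with chrony and check the result, assuming the chrony package is installed:

    # systemctl enable --now chronyd
    # chronyc tracking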

1.2. Supported operating systems

The following operating systems are supported by the installer, have packages, and are tested for deploying orcharhino:

Table 1. Operating systems supported by foreman-installer

Operating System | Architecture | Notes
Enterprise Linux 9 | x86_64 only | EPEL is not supported.

ATIX AG advises against using an existing system because the orcharhino installer will affect the configuration of several components.

1.3. Port and firewall requirements

For the components of orcharhino architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls.

The installation of an orcharhino Proxy fails if the ports between orcharhino Server and orcharhino Proxy are not open before installation starts.

Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol.
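
For example, you can check from the prospective orcharhino Proxy that a required port on orcharhino Server is reachable. The hostname is a placeholder; on Enterprise Linux, nc is provided by the nmap-ncat package:

    # nc -zv orcharhino.example.com 443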

Integrated orcharhino Proxy

orcharhino Server has an integrated orcharhino Proxy and any host that is directly connected to orcharhino Server is a Client of orcharhino in the context of this section. This includes the base operating system on which orcharhino Proxy is running.

Clients of orcharhino Proxy

Hosts that are clients of orcharhino Proxies, other than orcharhino's integrated orcharhino Proxy, do not need access to orcharhino Server. For more information about orcharhino topology, see orcharhino Proxy networking in Planning for orcharhino.

Required ports can change based on your configuration.

The following tables indicate the destination port and the direction of network traffic:

Table 2. orcharhino Proxy incoming traffic
Destination Port | Protocol | Service | Source | Required For | Description
53 | TCP and UDP | DNS | DNS Servers and clients | Name resolution | DNS (optional)
67 | UDP | DHCP | Client | Dynamic IP | DHCP (optional)
69 | UDP | TFTP | Client | TFTP Server (optional) |
443, 80 | TCP | HTTPS, HTTP | Client | Content Retrieval | Content
443, 80 | TCP | HTTPS, HTTP | Client | Content Host Registration | orcharhino Proxy CA RPM installation
443 | TCP | HTTPS | orcharhino | Content Mirroring | Management
443 | TCP | HTTPS | orcharhino | orcharhino Proxy API | Smart Proxy functionality
443 | TCP | HTTPS | Client | Content Host registration | Initiation; Uploading facts; Sending installed packages and traces
1883 | TCP | MQTT | Client | Pull based REX (optional) | Content hosts for REX job notification (optional)
8000 | TCP | HTTP | Client | Provisioning templates | Template retrieval for client installers, iPXE or UEFI HTTP Boot
8000 | TCP | HTTP | Client | PXE Boot | Installation
8140 | TCP | HTTPS | Client | Puppet agent | Client updates (optional)
8443 | TCP | HTTPS | Client | Content Host registration | Deprecated and only needed for Client hosts deployed before upgrades
9090 | TCP | HTTPS | orcharhino | orcharhino Proxy API | orcharhino Proxy functionality
9090 | TCP | HTTPS | Client | Register Endpoint | Client registration with an external orcharhino Proxy
9090 | TCP | HTTPS | Client | OpenSCAP | Configure Client (if the OpenSCAP plugin is installed)
9090 | TCP | HTTPS | Discovered Node | Discovery | Host discovery and provisioning (if the discovery plugin is installed)

Any host that is directly connected to orcharhino Server is a client in this context because it is a client of the integrated orcharhino Proxy. This includes the base operating system on which an orcharhino Proxy is running.

A DHCP orcharhino Proxy performs ICMP ping and TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off.
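
For example, to turn off the free IP check:

    # foreman-installer --foreman-proxy-dhcp-ping-free-ip=false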

Table 3. orcharhino Proxy outgoing traffic
Destination Port | Protocol | Service | Destination | Required For | Description
n/a | ICMP | ping | Client | DHCP | Free IP checking (optional)
7 | TCP | echo | Client | DHCP | Free IP checking (optional)
22 | TCP | SSH | Target host | Remote execution | Run jobs
53 | TCP and UDP | DNS | DNS Servers on the Internet | DNS Server | Resolve DNS records (optional)
53 | TCP and UDP | DNS | DNS Server | orcharhino Proxy DNS | Validation of DNS conflicts (optional)
68 | UDP | DHCP | Client | Dynamic IP | DHCP (optional)
443 | TCP | HTTPS | orcharhino | orcharhino Proxy | Configuration management; Template retrieval; OpenSCAP; Remote Execution result upload
443 | TCP | HTTPS | orcharhino | Content | Sync
443 | TCP | HTTPS | orcharhino | Client communication | Forward requests from Client to orcharhino
443 | TCP | HTTPS | Infoblox DHCP Server | DHCP management | When using Infoblox for DHCP, management of the DHCP leases (optional)
623 | n/a | n/a | Client | Power management | BMC On/Off/Cycle/Status
7911 | TCP | DHCP, OMAPI | DHCP Server | DHCP | The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost; ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI
8443 | TCP | HTTPS | Client | Discovery | orcharhino Proxy sends reboot command to the discovered host (optional)

Note

ICMP and port 7 UDP and TCP traffic must not be rejected, but it can be dropped. The DHCP orcharhino Proxy sends an ECHO REQUEST to the Client network to verify that an IP address is free. A response prevents IP addresses from being allocated.

1.4. Enabling connections from orcharhino Server and clients to an orcharhino Proxy

On the base operating system on which you want to install orcharhino Proxy, you must enable incoming connections from orcharhino Server and clients to orcharhino Proxy and make these rules persistent across reboots.

Procedure
  1. Open the ports for clients on orcharhino Proxy:

    # firewall-cmd \
    --add-port="8000/tcp" \
    --add-port="9090/tcp"
  2. Allow access to services on orcharhino Proxy:

    # firewall-cmd \
    --add-service=dns \
    --add-service=dhcp \
    --add-service=tftp \
    --add-service=http \
    --add-service=https \
    --add-service=puppetmaster
  3. Make the changes persistent:

    # firewall-cmd --runtime-to-permanent
Verification
  • Enter the following command:

    # firewall-cmd --list-all

For more information, see Using and configuring firewalld in Red Hat Enterprise Linux 9 Configuring firewalls and packet filters.

2. Installing orcharhino Proxy

Before you install orcharhino Proxy, you must ensure that your environment meets the requirements for installation. For more information, see Preparing your environment for installation.

2.1. Configuring repositories

Ensure the repositories required to install orcharhino Proxy are enabled on your Enterprise Linux host.
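
For example, to review which repositories are currently enabled on the host:

    # dnf repolist enabled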

2.2. Optional: Using fapolicyd on orcharhino Proxy

By enabling fapolicyd on your orcharhino Proxy, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts.

You can turn fapolicyd on or off on your orcharhino Server or orcharhino Proxy at any point.
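
For example, you can turn fapolicyd off again with standard systemd commands:

    # systemctl disable --now fapolicyd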

2.2.1. Installing fapolicyd on orcharhino Proxy

You can install fapolicyd along with a new orcharhino Proxy or on an existing orcharhino Proxy. If you install fapolicyd along with a new orcharhino Proxy, the installation process detects fapolicyd on your Enterprise Linux host and deploys the orcharhino Proxy rules automatically.

Prerequisites
  • Ensure your host has access to the BaseOS repositories of Enterprise Linux.

Procedure
  1. Install fapolicyd:

    # dnf install fapolicyd
  2. Start and enable the fapolicyd service:

    # systemctl enable --now fapolicyd
Verification
  • Verify that the fapolicyd service is running correctly:

    # systemctl status fapolicyd
New orcharhino Server or orcharhino Proxy installations

For new orcharhino Server or orcharhino Proxy installations, follow the standard installation procedures after installing and enabling fapolicyd on your Enterprise Linux host.

Additional resources

For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 9 Security hardening.

2.3. Installing orcharhino Proxy packages

Before installing orcharhino Proxy packages, you must update all packages that are installed on the base operating system.
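
For example, you can update all packages with dnf:

    # dnf update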

Procedure

To install orcharhino Proxy, complete the following steps:

2.4. Installing orcharhino Proxy

2.5. Assigning the correct organization and location to orcharhino Proxy in the orcharhino management UI

After installing orcharhino Proxy packages, if there is more than one organization or location, you must assign the correct organization and location to orcharhino Proxy to make orcharhino Proxy visible in the orcharhino management UI.

Procedure
  1. Log in to the orcharhino management UI.

  2. From the Organization list in the upper-left of the screen, select Any Organization.

  3. From the Location list in the upper-left of the screen, select Any Location.

  4. In the orcharhino management UI, navigate to Hosts > All Hosts and select orcharhino Proxy.

  5. From the Select Actions list, select Assign Organization.

  6. From the Organization list, select the organization where you want to assign this orcharhino Proxy.

  7. Click Fix Organization on Mismatch.

  8. Click Submit.

  9. Select orcharhino Proxy. From the Select Actions list, select Assign Location.

  10. From the Location list, select the location where you want to assign this orcharhino Proxy.

  11. Click Fix Location on Mismatch.

  12. Click Submit.

  13. In the orcharhino management UI, navigate to Administer > Organizations and click the organization to which you have assigned orcharhino Proxy.

  14. Click the orcharhino Proxies tab and ensure that orcharhino Proxy is listed under the Selected items list, then click Submit.

  15. In the orcharhino management UI, navigate to Administer > Locations and click the location to which you have assigned orcharhino Proxy.

  16. Click the orcharhino Proxies tab and ensure that orcharhino Proxy is listed under the Selected items list, then click Submit.

Verification

Optionally, you can verify whether orcharhino Proxy is listed correctly in the orcharhino management UI.

  1. Select the organization from the Organization list.

  2. Select the location from the Location list.

  3. In the orcharhino management UI, navigate to Hosts > All Hosts.

  4. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.

3. Performing additional configuration on orcharhino Proxy

After installation, you can configure additional settings on your orcharhino Proxy.

3.1. Configuring orcharhino Proxy for host registration and provisioning

Use this procedure to configure orcharhino Proxy so that you can register and provision hosts using your orcharhino Proxy instead of your orcharhino Server.

Procedure
  • On orcharhino Server, add the orcharhino Proxy to the list of trusted proxies.

    This is required for orcharhino to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by orcharhino Proxy. For security reasons, orcharhino recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of orcharhino Proxies, or network ranges.

    Warning

    Do not use a network range that is too broad because that might cause a security risk.

    Enter the following command. Note that the command overwrites the list that is currently stored in orcharhino. Therefore, if you have set any trusted proxies previously, you must include them in the command as well:

    # foreman-installer \
    --foreman-trusted-proxies "127.0.0.1/8" \
    --foreman-trusted-proxies "::1" \
    --foreman-trusted-proxies "My_IP_address" \
    --foreman-trusted-proxies "My_IP_range"

The localhost entries are required; do not omit them.

Verification
  1. List the current trusted proxies using the full help of the orcharhino installer:

    # foreman-installer --full-help | grep -A 2 "trusted-proxies"
  2. Ensure that the current listing contains all the trusted proxies that you require.

3.2. Configuring pull-based transport for remote execution

By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from orcharhino Proxy to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to orcharhino Proxy. The use of pull-based transport is not limited to those infrastructures.

The pull-based transport comprises pull-mqtt mode on orcharhino Proxies in combination with a pull client running on hosts.

Note

The pull-mqtt mode works only with the Script provider. Ansible and other providers will continue to use their default transport settings.

The mode is configured per orcharhino Proxy. Some orcharhino Proxies can be configured to use pull-mqtt mode while others use SSH. If this is the case, it is possible that one remote job on a given host will use the pull client and the next job on the same host will use SSH. If you want to avoid this scenario, configure all orcharhino Proxies to use the same mode.

Procedure
  1. Enable the pull-based transport on your orcharhino Proxy:

    # foreman-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt
  2. Configure the firewall to allow the MQTT service on port 1883:

    # firewall-cmd --add-service=mqtt
  3. Make the changes persistent:

    # firewall-cmd --runtime-to-permanent
  4. In pull-mqtt mode, hosts subscribe for job notifications to either your orcharhino Server or any orcharhino Proxy through which they are registered. Ensure that orcharhino Server sends remote execution jobs to that same orcharhino Server or orcharhino Proxy:

    1. In the orcharhino management UI, navigate to Administer > Settings.

    2. On the Content tab, set the value of Prefer registered through orcharhino Proxy for remote execution to Yes.


3.3. Enabling OpenSCAP on orcharhino Proxies

On orcharhino Server and the integrated orcharhino Proxy of your orcharhino Server, OpenSCAP is enabled by default. To use the OpenSCAP plugin and content on external orcharhino Proxies, you must enable OpenSCAP on each orcharhino Proxy.

Procedure
  • To enable OpenSCAP, enter the following command:

    # foreman-installer \
    --enable-foreman-proxy-plugin-openscap \
    --foreman-proxy-plugin-openscap-ansible-module true \
    --foreman-proxy-plugin-openscap-puppet-module true

    If you want to use Puppet to deploy compliance policies, you must enable it first. For more information, see Configuring hosts by using Puppet.

3.4. Adding lifecycle environments to orcharhino Proxies

If your orcharhino Proxy has the content functionality enabled, you must add a lifecycle environment so that orcharhino Proxy can synchronize content from orcharhino Server and provide content to host systems.

Do not assign the Library lifecycle environment to your orcharhino Proxy because it triggers an automated orcharhino Proxy sync every time the CDN updates a repository. This might consume significant system resources on orcharhino Proxies, network bandwidth between orcharhino and orcharhino Proxies, and available disk space on orcharhino Proxies.

You can use Hammer CLI on orcharhino Server or the orcharhino management UI.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to add a lifecycle environment to.

  2. Click Edit and click the Lifecycle Environments tab.

  3. From the left menu, select the lifecycle environments that you want to add to orcharhino Proxy and click Submit.

  4. To synchronize the content on the orcharhino Proxy, click the Overview tab and click Synchronize.

  5. Select either Optimized Sync or Complete Sync.

    For definitions of each synchronization type, see Recovering a Repository.

CLI procedure
  1. To display a list of all orcharhino Proxies, on orcharhino Server, enter the following command:

    # hammer proxy list

    Note the orcharhino Proxy ID of the orcharhino Proxy to which you want to add a lifecycle.

  2. Using the ID, verify the details of your orcharhino Proxy:

    # hammer proxy info \
    --id My_orcharhino_Proxy_ID
  3. To view the lifecycle environments available for your orcharhino Proxy, enter the following command and note the ID and the organization name:

    # hammer proxy content available-lifecycle-environments \
    --id My_orcharhino_Proxy_ID
  4. Add the lifecycle environment to your orcharhino Proxy:

    # hammer proxy content add-lifecycle-environment \
    --id My_orcharhino_Proxy_ID \
    --lifecycle-environment-id My_Lifecycle_Environment_ID \
    --organization "My_Organization"

    Repeat for each lifecycle environment you want to add to orcharhino Proxy.

  5. Synchronize the content from orcharhino to orcharhino Proxy.

    • To synchronize all content from your orcharhino Server environment to orcharhino Proxy, enter the following command:

      # hammer proxy content synchronize \
      --id My_orcharhino_Proxy_ID
    • To synchronize a specific lifecycle environment from your orcharhino Server to orcharhino Proxy, enter the following command:

      # hammer proxy content synchronize \
      --id My_orcharhino_Proxy_ID \
      --lifecycle-environment-id My_Lifecycle_Environment_ID
    • To synchronize all content from your orcharhino Server to your orcharhino Proxy without checking metadata:

      # hammer proxy content synchronize \
      --id My_orcharhino_Proxy_ID \
      --skip-metadata-check true

      This is equivalent to selecting Complete Sync in the orcharhino management UI.

3.5. Enabling power management on hosts

To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on orcharhino Proxy.

Procedure
  • To enable BMC, enter the following command:

    # foreman-installer \
    --foreman-proxy-bmc "true" \
    --foreman-proxy-bmc-default-provider "freeipmi"

3.6. Configuring DNS, DHCP, and TFTP on orcharhino Proxy

To configure the DNS, DHCP, and TFTP services on orcharhino Proxy, use the foreman-installer command with the options appropriate for your environment.

Any changes to the settings require entering the foreman-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values.

Prerequisites
  • You must have the correct network name (dns-interface) for the DNS server.

  • You must have the correct interface name (dhcp-interface) for the DHCP server.

  • Contact your network administrator to ensure that you have the correct settings.

Procedure
  • Enter the foreman-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services:

    # foreman-installer \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-managed true \
    --foreman-proxy-dns-zone example.com \
    --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa \
    --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-managed true \
    --foreman-proxy-dhcp-range "192.0.2.100 192.0.2.150" \
    --foreman-proxy-dhcp-gateway 192.0.2.1 \
    --foreman-proxy-dhcp-nameservers 192.0.2.2 \
    --foreman-proxy-tftp true \
    --foreman-proxy-tftp-managed true \
    --foreman-proxy-tftp-servername 192.0.2.3

You can monitor the progress of the foreman-installer command in your prompt. You can view the logs in /var/log/foreman-installer/katello.log.
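
For example, to follow the installer log during a run:

    # tail -f /var/log/foreman-installer/katello.log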

Additional resources
  • For more information about the foreman-installer command, enter foreman-installer --help.

  • For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning hosts.

4. Configuring orcharhino Proxy with external services

If you do not want to configure the DNS, DHCP, and TFTP services on orcharhino Proxy, use this section to configure your orcharhino Proxy to work with external DNS, DHCP, and TFTP services.

4.1. Configuring orcharhino Proxy with external DNS

You can configure orcharhino Proxy with external DNS. orcharhino Proxy uses the nsupdate utility to update DNS records on the remote server.

To make any changes persistent, you must enter the foreman-installer command with the options appropriate for your environment.

Prerequisites
  • You must have a configured external DNS server.

  • This guide assumes you have an existing installation.

Procedure
  1. Copy the /etc/rndc.key file from the external DNS server to orcharhino Proxy:

    # scp root@dns.example.com:/etc/rndc.key /etc/foreman-proxy/rndc.key
  2. Configure the ownership, permissions, and SELinux context:

    # restorecon -v /etc/foreman-proxy/rndc.key
    # chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key
    # chmod -v 640 /etc/foreman-proxy/rndc.key
  3. To test the nsupdate utility, add a host remotely:

    # echo -e "server DNS_IP_Address\n \
    update add aaa.example.com 3600 IN A Host_IP_Address\n \
    send\n" | nsupdate -k /etc/foreman-proxy/rndc.key
    # nslookup aaa.example.com DNS_IP_Address
    # echo -e "server DNS_IP_Address\n \
    update delete aaa.example.com 3600 IN A Host_IP_Address\n \
    send\n" | nsupdate -k /etc/foreman-proxy/rndc.key
  4. Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file:

    # foreman-installer --foreman-proxy-dns=true \
    --foreman-proxy-dns-managed=false \
    --foreman-proxy-dns-provider=nsupdate \
    --foreman-proxy-dns-server="DNS_IP_Address" \
    --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key
  5. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.

  6. Locate the orcharhino Proxy and select Refresh from the list in the Actions column.

  7. Associate the DNS service with the appropriate subnets and domain.

4.2. Configuring orcharhino Proxy with external DHCP

To configure orcharhino Proxy with external DHCP, you must complete the following procedures:

4.2.1. Configuring an external DHCP server to use with orcharhino Proxy

To configure an external DHCP server running Enterprise Linux to use with orcharhino Proxy, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with orcharhino Proxy. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files.

Note

If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because orcharhino creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the boot loader and its configuration from the root directory, which might cause an error.
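
A minimal sketch of the corresponding dnsmasq setting, assuming dnsmasq reads its configuration from /etc/dnsmasq.conf:

    # In /etc/dnsmasq.conf, enable the setting by adding this line:
    dhcp-no-override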

Procedure
  1. On your Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages:

    # dnf install dhcp-server bind-utils
  2. Generate a security token:

    # tsig-keygen -a hmac-md5 omapi_key
  3. Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen. The following is an example:

    # cat /etc/dhcp/dhcpd.conf
    default-lease-time 604800;
    max-lease-time 2592000;
    log-facility local7;
    
    subnet 192.168.38.0 netmask 255.255.255.0 {
    	range 192.168.38.10 192.168.38.100;
    	option routers 192.168.38.1;
    	option subnet-mask 255.255.255.0;
    	option domain-search "virtual.lan";
    	option domain-name "virtual.lan";
    	option domain-name-servers 8.8.8.8;
    }
    
    omapi-port 7911;
    key omapi_key {
    	algorithm hmac-md5;
    	secret "My_Secret";
    };
    omapi-key omapi_key;

    Note that the option routers value is the IP address of your orcharhino Server or orcharhino Proxy that you want to use with an external DHCP service.

  4. On orcharhino Server, define each subnet. Do not set DHCP orcharhino Proxy for the defined Subnet yet.

    To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the orcharhino management UI define the reservation range as 192.168.38.101 to 192.168.38.250.

  5. Configure the firewall for external access to the DHCP server:

    # firewall-cmd --add-service dhcp
  6. Make the changes persistent:

    # firewall-cmd --runtime-to-permanent
  7. On orcharhino Server, determine the UID and GID of the foreman user:

    # id -u foreman
    993
    # id -g foreman
    990
  8. On the DHCP server, create the foreman user and group with the same IDs as determined in a previous step:

    # groupadd -g 990 foreman
    # useradd -u 993 -g 990 -s /sbin/nologin foreman
  9. To ensure that the configuration files are accessible, restore the read and execute flags:

    # chmod o+rx /etc/dhcp/
    # chmod o+r /etc/dhcp/dhcpd.conf
    # chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf
  10. Enable and start the DHCP service:

    # systemctl enable --now dhcpd
  11. Export the DHCP configuration and lease files using NFS:

    # dnf install nfs-utils
    # systemctl enable --now nfs-server
  12. Create directories for the DHCP configuration and lease files that you want to export using NFS:

    # mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp
  13. To create mount points for the created directories, add the following lines to the /etc/fstab file:

    /var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0
    /etc/dhcp /exports/etc/dhcp none bind,auto 0 0
  14. Mount the file systems in /etc/fstab:

    # mount -a
  15. Ensure the following lines are present in /etc/exports:

    /exports 192.168.38.1(rw,async,no_root_squash,fsid=0,no_subtree_check)
    
    /exports/etc/dhcp 192.168.38.1(ro,async,no_root_squash,no_subtree_check,nohide)
    
    /exports/var/lib/dhcpd 192.168.38.1(ro,async,no_root_squash,no_subtree_check,nohide)

    Note that the IP address that you enter is the orcharhino or orcharhino Proxy IP address that you want to use with an external DHCP service.

  16. Reload the NFS server:

    # exportfs -rva
  17. Configure the firewall for DHCP omapi port 7911:

    # firewall-cmd --add-port=7911/tcp
  18. Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3.

    # firewall-cmd \
    --add-service mountd \
    --add-service nfs \
    --add-service rpc-bind \
    --zone public
  19. Make the changes persistent:

    # firewall-cmd --runtime-to-permanent

4.2.2. Configuring orcharhino Proxy with an external DHCP server

You can configure orcharhino Proxy with an external DHCP server.

Procedure
  1. Install the nfs-utils package:

    # dnf install nfs-utils
  2. Create the DHCP directories for NFS:

    # mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd
  3. Change the file owner:

    # chown -R foreman-proxy /mnt/nfs
  4. Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths:

    # showmount -e DHCP_Server_FQDN
    # rpcinfo -p DHCP_Server_FQDN
  5. Add the following lines to the /etc/fstab file:

    DHCP_Server_FQDN:/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs
    ro,vers=3,auto,nosharecache,context="system_u:object_r:dhcp_etc_t:s0" 0 0
    
    DHCP_Server_FQDN:/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs
    ro,vers=3,auto,nosharecache,context="system_u:object_r:dhcpd_state_t:s0" 0 0
  6. Mount the file systems in /etc/fstab:

    # mount -a
  7. To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files:

    # su foreman-proxy -s /bin/bash
    $ cat /mnt/nfs/etc/dhcp/dhcpd.conf
    $ cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases
    $ exit
  8. Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file:

    # foreman-installer \
    --enable-foreman-proxy-plugin-dhcp-remote-isc \
    --foreman-proxy-dhcp-provider=remote_isc \
    --foreman-proxy-dhcp-server=My_DHCP_Server_FQDN \
    --foreman-proxy-dhcp=true \
    --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf \
    --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases \
    --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key \
    --foreman-proxy-plugin-dhcp-remote-isc-key-secret=My_Secret \
    --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911
  9. Associate the DHCP service with the appropriate subnets and domain.

4.3. Configuring orcharhino Proxy with external TFTP

You can configure orcharhino Proxy with external TFTP services.

Procedure
  1. Create the TFTP directory for NFS:

    # mkdir -p /mnt/nfs/var/lib/tftpboot
  2. In the /etc/fstab file, add the following line:

    TFTP_Server_IP_Address:/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context="system_u:object_r:tftpdir_rw_t:s0" 0 0
  3. Mount the file systems in /etc/fstab:

    # mount -a
  4. Enter the foreman-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file:

    # foreman-installer \
    --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot \
    --foreman-proxy-tftp=true
  5. If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on:

    # foreman-installer --foreman-proxy-tftp-servername=TFTP_Server_FQDN
  6. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.

  7. Locate the orcharhino Proxy and select Refresh from the list in the Actions column.

  8. Associate the TFTP service with the appropriate subnets and domain.

4.4. Configuring orcharhino Proxy with external IdM DNS

When orcharhino Server adds a DNS record for a host, it first determines which orcharhino Proxy is providing DNS for that domain. It then communicates with the orcharhino Proxy that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the orcharhino or orcharhino Proxy that is currently configured to provide a DNS service for the domain you want to manage by using the IdM server.

To configure orcharhino Proxy to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures:

To revert to internal DNS service, use the following procedure:

Note
You are not required to use orcharhino Proxy to manage DNS. When you are using the realm enrollment feature of orcharhino, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring orcharhino Proxy with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see Configuring orcharhino to manage the lifecycle of a host registered to a FreeIPA realm in Installing orcharhino Server.

4.4.1. Configuring dynamic DNS update with GSS-TSIG authentication

You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645. To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the orcharhino Proxy base operating system.

Prerequisites
  • You must ensure the IdM server is deployed and the host-based firewall is configured correctly.

  • You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server.

  • You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted.

Procedure

To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps:

Creating a Kerberos principal on the IdM server
  1. Obtain a Kerberos ticket for the account obtained from the IdM administrator:

    # kinit idm_user
  2. Create a new Kerberos principal for orcharhino Proxy to use to authenticate on the IdM server:

    # ipa service-add orcharhinoproxy/orcharhino.example.com
Installing and configuring the IdM client
  1. On the base operating system of either the orcharhino or orcharhino Proxy that is managing the DNS service for your deployment, install the ipa-client package:

    # dnf install ipa-client
  2. Configure the IdM client by running the installation script and following the on-screen prompts:

    # ipa-client-install
  3. Obtain a Kerberos ticket:

    # kinit admin
  4. Remove any preexisting keytab:

    # rm /etc/foreman-proxy/dns.keytab
  5. Obtain the keytab for this system:

    # ipa-getkeytab -p orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM \
    -s idm1.example.com -k /etc/foreman-proxy/dns.keytab
    Note

    When adding a keytab to a standby system with the same host name as the original system in service, add the -r option to prevent generating new credentials and rendering the credentials on the original system invalid.
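
    A hedged example, reusing the keytab retrieval command from the previous step with the -r option added:

    # ipa-getkeytab -r -p orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM \
    -s idm1.example.com -k /etc/foreman-proxy/dns.keytab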

  6. For the dns.keytab file, set the group and owner to foreman-proxy:

    # chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab
  7. Optional: To verify that the keytab file is valid, enter the following command:

    # kinit -kt /etc/foreman-proxy/dns.keytab \
    orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM
Configuring DNS zones in the IdM web UI
  1. Create and configure the zone that you want to manage:

    1. Navigate to Network Services > DNS > DNS Zones.

    2. Select Add and enter the zone name. For example, example.com.

    3. Click Add and Edit.

    4. Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list:

      grant orcharhinoproxy\047orcharhino.example.com@EXAMPLE.COM wildcard * ANY;
    5. Set Dynamic update to True.

    6. Enable Allow PTR sync.

    7. Click Save to save the changes.

  2. Create and configure the reverse zone:

    1. Navigate to Network Services > DNS > DNS Zones.

    2. Click Add.

    3. Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups.

    4. Click Add and Edit.

    5. Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list:

      grant orcharhinoproxy\047orcharhino.example.com@EXAMPLE.COM wildcard * ANY;
    6. Set Dynamic update to True.

    7. Click Save to save the changes.

Configuring the orcharhino or orcharhino Proxy that manages the DNS service for the domain
  1. Configure your orcharhino Server or orcharhino Proxy to connect to your DNS service:

    # foreman-installer \
    --foreman-proxy-dns-managed=false \
    --foreman-proxy-dns-provider=nsupdate_gss \
    --foreman-proxy-dns-server="idm1.example.com" \
    --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab \
    --foreman-proxy-dns-tsig-principal="orcharhinoproxy/orcharhino.example.com@EXAMPLE.COM" \
    --foreman-proxy-dns=true
  2. For each affected orcharhino Proxy, update the configuration of that orcharhino Proxy in the orcharhino management UI:

    1. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, locate the orcharhino Proxy, and from the list in the Actions column, select Refresh.

    2. Configure the domain:

      1. In the orcharhino management UI, navigate to Infrastructure > Domains and select the domain name.

      2. In the Domain tab, ensure DNS orcharhino Proxy is set to the orcharhino Proxy where the subnet is connected.

    3. Configure the subnet:

      1. In the orcharhino management UI, navigate to Infrastructure > Subnets and select the subnet name.

      2. In the Subnet tab, set IPAM to None.

      3. In the Domains tab, select the domain that you want to manage using the IdM server.

      4. In the orcharhino Proxies tab, ensure Reverse DNS orcharhino Proxy is set to the orcharhino Proxy where the subnet is connected.

      5. Click Submit to save the changes.

4.4.2. Configuring dynamic DNS update with TSIG authentication

You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845.

Prerequisites
  • You must ensure the IdM server is deployed and the host-based firewall is configured correctly.

  • You must obtain root user access on the IdM server.

  • You must confirm whether orcharhino Server or orcharhino Proxy is configured to provide DNS service for your deployment.

  • You must configure DNS, DHCP, and TFTP services on the base operating system of either the orcharhino or orcharhino Proxy that is managing the DNS service for your deployment.

  • You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted.

Procedure

To configure dynamic DNS update with TSIG authentication, complete the following steps:

Enabling external updates to the DNS zone in the IdM server
  1. On the IdM Server, add the following to the top of the /etc/named.conf file:

    ########################################################################
    
    include "/etc/rndc.key";
    controls  {
    inet IdM_Server_IP_Address port 953 allow { orcharhino_IP_Address; } keys { "rndc-key"; };
    };
    ########################################################################
  2. Reload the named service to make the changes take effect:

    # systemctl reload named
  3. In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes:

    1. Add the following in the BIND update policy box:

      grant "rndc-key" zonesub ANY;
    2. Set Dynamic update to True.

    3. Click Update to save the changes.

  4. Copy the /etc/rndc.key file from the IdM server to the base operating system of your orcharhino Server. Enter the following command:

    # scp /etc/rndc.key root@orcharhino.example.com:/etc/rndc.key
  5. To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command:

    # restorecon -v /etc/rndc.key
    # chown -v root:named /etc/rndc.key
    # chmod -v 640 /etc/rndc.key
  6. Assign the foreman-proxy user to the named group manually. Normally, foreman-installer ensures that the foreman-proxy user belongs to the named UNIX group. However, in this scenario orcharhino does not manage users and groups, so you must assign the foreman-proxy user to the named group manually:

    # usermod -a -G named foreman-proxy
  7. On orcharhino Server, enter the following foreman-installer command to configure orcharhino to use the external DNS server:

    # foreman-installer \
    --foreman-proxy-dns-managed=false \
    --foreman-proxy-dns-provider=nsupdate \
    --foreman-proxy-dns-server="IdM_Server_IP_Address" \
    --foreman-proxy-dns-ttl=86400 \
    --foreman-proxy-dns=true \
    --foreman-proxy-keyfile=/etc/rndc.key
Testing external updates to the DNS zone in the IdM server
  1. Ensure that the key in the /etc/rndc.key file on orcharhino Server is the same key file that is used on the IdM server:

    key "rndc-key" {
            algorithm hmac-md5;
            secret "secret-key==";
    };
  2. On orcharhino Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1.

    # echo -e "server 192.168.25.1\n \
    update add test.example.com 3600 IN A 192.168.25.20\n \
    send\n" | nsupdate -k /etc/rndc.key
  3. On orcharhino Server, test the DNS entry:

    # nslookup test.example.com 192.168.25.1

    Example output:

    Server:		192.168.25.1
    Address:	192.168.25.1#53
    
    Name:	test.example.com
    Address: 192.168.25.20
  4. To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones. Click the name of the zone and search for the host by name.

  5. If resolved successfully, remove the test DNS entry:

    # echo -e "server 192.168.25.1\n \
    update delete test.example.com 3600 IN A 192.168.25.20\n \
    send\n" | nsupdate -k /etc/rndc.key
  6. Confirm that the DNS entry was removed:

    # nslookup test.example.com 192.168.25.1

    The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted.

4.4.3. Reverting to internal DNS service

You can revert to using orcharhino Server and orcharhino Proxy as your DNS providers. You can either restore a backup of the answer file that was created before configuring external DNS, or create a backup of the answer file now and run the foreman-installer command directly.

Procedure

On the orcharhino or orcharhino Proxy that you want to configure to manage DNS service for the domain, complete the following steps:

Configuring orcharhino or orcharhino Proxy as a DNS server
  • If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the foreman-installer command:

    # foreman-installer
  • If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure orcharhino or orcharhino Proxy as DNS server without using an answer file, enter the following foreman-installer command on orcharhino or orcharhino Proxy:

    # foreman-installer \
    --foreman-proxy-dns-managed=true \
    --foreman-proxy-dns-provider=nsupdate \
    --foreman-proxy-dns-server="127.0.0.1" \
    --foreman-proxy-dns=true

After you run the foreman-installer command to make any changes to your orcharhino Proxy configuration, you must update the configuration of each affected orcharhino Proxy in the orcharhino management UI.

Updating the configuration in the orcharhino management UI
  1. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.

  2. For each orcharhino Proxy that you want to update, from the Actions list, select Refresh.

  3. Configure the domain:

    1. In the orcharhino management UI, navigate to Infrastructure > Domains and click the domain name that you want to configure.

    2. In the Domain tab, set DNS orcharhino Proxy to the orcharhino Proxy where the subnet is connected.

  4. Configure the subnet:

    1. In the orcharhino management UI, navigate to Infrastructure > Subnets and select the subnet name.

    2. In the Subnet tab, set IPAM to DHCP or Internal DB.

    3. In the Domains tab, select the domain that you want to manage using orcharhino or orcharhino Proxy.

    4. In the orcharhino Proxies tab, set Reverse DNS orcharhino Proxy to the orcharhino Proxy where the subnet is connected.

    5. Click Submit to save the changes.

4.5. Configuring orcharhino to manage the lifecycle of a host registered to a FreeIPA realm

As well as providing access to orcharhino Server, hosts provisioned with orcharhino can also be integrated with FreeIPA realms. orcharhino has a realm feature that automatically manages the lifecycle of any system registered to a realm or domain provider.

Use this section to configure orcharhino Server or orcharhino Proxy for FreeIPA realm support, then add hosts to the FreeIPA realm group.

Prerequisites
  • orcharhino Server that is registered to the Content Delivery Network or an external orcharhino Proxy that is registered to orcharhino Server.

  • A deployed realm or domain provider such as FreeIPA.

To use FreeIPA for provisioned hosts, complete the following steps to install and configure FreeIPA packages on orcharhino Server or orcharhino Proxy:

  1. Install the ipa-client package on orcharhino Server or orcharhino Proxy:

    # dnf install ipa-client
  2. Configure the server as a FreeIPA client:

    # ipa-client-install
  3. Create a realm proxy user, realm-orcharhino-proxy, and the relevant roles in FreeIPA:

    # foreman-prepare-realm admin realm-orcharhino-proxy

    Note the principal name that the command returns and your FreeIPA server configuration details because you require them for the following procedure.

To configure orcharhino Server or orcharhino Proxy for FreeIPA realm support, complete the following procedure on orcharhino and every orcharhino Proxy that you want to use:

  1. Copy the /root/freeipa.keytab file to any orcharhino Proxy that you want to include in the same principal and realm:

    # scp /root/freeipa.keytab root@orcharhino-proxy.example.com:/etc/foreman-proxy/freeipa.keytab
  2. Move the /root/freeipa.keytab file to the /etc/foreman-proxy directory and set the ownership settings to the foreman-proxy user:

    # mv /root/freeipa.keytab /etc/foreman-proxy
    # chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab
  3. Enter the following command on all orcharhino Proxies that you want to include in the realm. If you use the integrated orcharhino Proxy on orcharhino, enter this command on orcharhino Server:

    # foreman-installer --foreman-proxy-realm true \
    --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab \
    --foreman-proxy-realm-principal realm-orcharhino-proxy@EXAMPLE.COM \
    --foreman-proxy-realm-provider freeipa

    You can also use these options when you first configure the orcharhino Server.

  4. Ensure that the latest version of the ca-certificates package is installed and trust the FreeIPA Certificate Authority:

    # cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt
    # update-ca-trust enable
    # update-ca-trust
  5. Optional: If you configure FreeIPA on an existing orcharhino Server or orcharhino Proxy, complete the following steps to ensure that the configuration changes take effect:

    1. Restart the foreman-proxy service:

      # systemctl restart foreman-proxy
    2. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.

    3. Locate the orcharhino Proxy you have configured for FreeIPA and from the list in the Actions column, select Refresh.

To create a realm for the FreeIPA-enabled orcharhino Proxy

After you configure your integrated or external orcharhino Proxy with FreeIPA, you must create a realm and add the FreeIPA-configured orcharhino Proxy to the realm.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Realms and click Create Realm.

  2. In the Name field, enter a name for the realm.

  3. From the Realm Type list, select the type of realm.

  4. From the Realm orcharhino Proxy list, select orcharhino Proxy where you have configured FreeIPA.

  5. Click the Locations tab and from the Locations list, select the location where you want to add the new realm.

  6. Click the Organizations tab and from the Organizations list, select the organization where you want to add the new realm.

  7. Click Submit.

Updating host groups with realm information

You must update any host groups that you want to use with the new realm information.

  1. In the orcharhino management UI, navigate to Configure > Host Groups, select the host group that you want to update, and click the Network tab.

  2. From the Realm list, select the realm you create as part of this procedure, and then click Submit.

Adding hosts to a FreeIPA host group

FreeIPA supports the ability to set up automatic membership rules based on a system’s attributes. orcharhino’s realm feature provides administrators with the ability to map orcharhino host groups to the FreeIPA userclass parameter, which allows administrators to configure automembership.

When nested host groups are used, they are sent to the FreeIPA server as they are displayed in the orcharhino User Interface. For example, "Parent/Child/Child".

orcharhino Server or orcharhino Proxy sends updates to the FreeIPA server; however, automembership rules are applied only at initial registration.

To add hosts to a FreeIPA host group:
  1. On the FreeIPA server, create a host group:

    # ipa hostgroup-add hostgroup_name --desc=hostgroup_description
  2. Create an automembership rule:

    # ipa automember-add --type=hostgroup hostgroup_name automember_rule

    Where you can use the following options:

    • automember-add flags the group as an automember group.

    • --type=hostgroup identifies that the target group is a host group, not a user group.

    • automember_rule adds the name you want to identify the automember rule by.

  3. Define an automembership condition based on the userclass attribute:

    # ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex=^webserver hostgroup_name
    ----------------------------------
    Added condition(s) to "hostgroup_name"
    ----------------------------------
    Automember Rule: automember_rule
    Inclusive Regex: userclass=^webserver
    ----------------------------
    Number of conditions added 1
    ----------------------------

    Where you can use the following options:

    • automember-add-condition adds regular expression conditions to identify group members.

    • --key=userclass specifies the key attribute as userclass.

    • --type=hostgroup identifies that the target group is a host group, not a user group.

    • --inclusive-regex=^webserver identifies matching values with a regular expression pattern.

    • hostgroup_name identifies the target host group’s name.

When a system is added to orcharhino Server’s hostgroup_name host group, it is added automatically to the FreeIPA server’s "hostgroup_name" host group. FreeIPA host groups allow for Host-Based Access Controls (HBAC), sudo policies and other FreeIPA functions.

5. Managing DHCP by using orcharhino Proxy

orcharhino can integrate with a DHCP service by using your orcharhino Proxy. An orcharhino Proxy has multiple DHCP providers that you can use to integrate orcharhino with your existing DHCP infrastructure or deploy a new one. You can use the DHCP module of orcharhino Proxy to query for available IP addresses, add new reservations, and delete existing reservations. Note that your orcharhino Proxy cannot manage subnet declarations.


5.1. Configuring dhcp_libvirt

The dhcp_libvirt plugin manages IP reservations and leases using dnsmasq through the libvirt API. It uses ruby-libvirt to connect to a local or remote instance of the libvirt daemon.

Procedure
  • You can use foreman-installer to configure dhcp_libvirt:

    # foreman-installer \
    --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-provider libvirt \
    --foreman-proxy-libvirt-network default \
    --foreman-proxy-libvirt-url qemu:///system

5.2. Securing the dhcpd API

orcharhino Proxy interacts with the DHCP daemon using the dhcpd API to manage DHCP. By default, the dhcpd API listens for requests from any host without access control. You can add an omapi_key to provide basic security.

Procedure
  1. On your orcharhino Proxy, install the required packages:

    # dnf install bind-utils
  2. Generate a key:

    # dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST omapi_key
    # cat Komapi_key.+*.private | grep ^Key | cut -d ' ' -f2-
  3. Use foreman-installer to secure the dhcpd API:

    # foreman-installer \
    --foreman-proxy-dhcp-key-name "My_Name" \
    --foreman-proxy-dhcp-key-secret "My_Secret"

6. Managing DNS by using orcharhino Proxy

orcharhino can manage DNS records by using your orcharhino Proxy. DNS management includes updating and removing DNS records from existing DNS zones. An orcharhino Proxy has multiple DNS providers that you can use to integrate orcharhino with your existing DNS infrastructure or deploy a new one.

After you have enabled DNS, your orcharhino Proxy can manipulate any DNS server that complies with RFC 2136 by using the dns_nsupdate provider. Other providers provide more direct integration, such as dns_infoblox for Infoblox.


6.1. Configuring dns_nsupdate

The dns_nsupdate DNS provider manages DNS records using the nsupdate utility. You can use dns_nsupdate with any DNS server compatible with RFC2136. By default, dns_nsupdate installs the ISC BIND server. For installation without ISC BIND, see Configuring orcharhino Proxy with external DNS.

Procedure
  • Configure dns_nsupdate:

    # foreman-installer \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-provider nsupdate \
    --foreman-proxy-dns-managed true \
    --foreman-proxy-dns-zone example.com \
    --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa

6.2. Configuring dns_libvirt

The dns_libvirt DNS provider manages DNS records using dnsmasq through the libvirt API. It uses the ruby-libvirt gem to connect to a local or remote instance of the libvirt daemon.

Procedure
  • You can use foreman-installer to configure dns_libvirt:

    # foreman-installer \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-provider libvirt \
    --foreman-proxy-libvirt-network default \
    --foreman-proxy-libvirt-url qemu:///system

    Note that you can only use one network and URL for both dns_libvirt and dhcp_libvirt.

6.3. Configuring dns_powerdns

The dns_powerdns DNS provider manages DNS records using the PowerDNS REST API.

Procedure
  • You can use foreman-installer to configure dns_powerdns:

    # foreman-installer \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-provider powerdns \
    --enable-foreman-proxy-plugin-dns-powerdns \
    --foreman-proxy-plugin-dns-powerdns-rest-api-key api_key \
    --foreman-proxy-plugin-dns-powerdns-rest-url http://localhost:8081/api/v1/servers/localhost

6.4. Configuring dns_route53

Route 53 is a DNS provider by Amazon. For more information, see aws.amazon.com/route53.

Procedure
  • Enable Route 53 DNS on your orcharhino Proxy:

    # foreman-installer \
    --enable-foreman-proxy-plugin-dns-route53 \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-provider route53 \
    --foreman-proxy-plugin-dns-route53-aws-access-key My_AWS_Access_Key \
    --foreman-proxy-plugin-dns-route53-aws-secret-key My_AWS_Secret_Key
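
    If you have the AWS CLI installed, you can verify before running the installer that the credentials can list your hosted zones. This is a hedged sketch using the placeholder keys from the command above:

    # AWS_ACCESS_KEY_ID=My_AWS_Access_Key \
    AWS_SECRET_ACCESS_KEY=My_AWS_Secret_Key \
    aws route53 list-hosted-zones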

7. Using Infoblox as DHCP and DNS providers

You can use orcharhino Proxy to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses.

The supported Infoblox version is NIOS 8.0 or higher.

7.1. Infoblox limitations

All DHCP and DNS records can be managed only in a single Network or DNS view. After you install the Infoblox modules on orcharhino Proxy and set up the view using the foreman-installer command, you cannot edit the view.

orcharhino Proxy communicates with a single Infoblox node by using the standard HTTPS web API. If you want clustering or high availability, configure it within Infoblox.

Hosting PXE-related files by using the TFTP functionality of Infoblox is not supported. You must use orcharhino Proxy as a TFTP server for PXE provisioning. For more information, see Preparing networking in Provisioning hosts.

The orcharhino IPAM feature cannot be integrated with Infoblox.

7.2. Infoblox prerequisites

  • You must have Infoblox account credentials to manage DHCP and DNS entries in orcharhino.

  • Ensure that you have the Infoblox administration roles DHCP Admin and DNS Admin.

  • The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API.

7.3. Installing the Infoblox CA certificate

You must install the Infoblox HTTPS CA certificate on the base operating system of your orcharhino Proxy.

Procedure
  • Download the certificate from the Infoblox web UI, or use the following OpenSSL commands to download it:

    # update-ca-trust enable
    # openssl s_client -showcerts -connect infoblox.example.com:443 </dev/null | \
    openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt
    # update-ca-trust extract

    The infoblox.example.com entry must match the host name for the Infoblox application in the X509 certificate.

Verification
  • Test the CA certificate by using a curl query:

    # curl -u admin:password https://infoblox.example.com/wapi/v2.0/network

    Example positive response:

    [
        {
            "_ref": "network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w:infoblox.example.com/24/default",
            "network": "192.168.202.0/24",
            "network_view": "default"
        }
    ]

7.4. Installing the DHCP Infoblox module

Install the DHCP Infoblox module on orcharhino Proxy. Note that you cannot manage records in separate views.

You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Installing the DNS Infoblox module.

DHCP Infoblox record type considerations

If you want to use the DHCP and DNS Infoblox modules together, configure the DHCP Infoblox module with the fixedaddress record type only. The host record type causes DNS conflicts and is not supported.

If you configure the DHCP Infoblox module with the host record type, you must unset both the DNS orcharhino Proxy and Reverse DNS orcharhino Proxy options on your Infoblox-managed subnets, because Infoblox performs DNS management by itself. Using the host record type causes conflicts and prevents you from renaming hosts in orcharhino.

Procedure
  1. On orcharhino Proxy, enter the following command:

    # foreman-installer --enable-foreman-proxy-plugin-dhcp-infoblox \
    --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-provider infoblox \
    --foreman-proxy-dhcp-server infoblox.example.com \
    --foreman-proxy-plugin-dhcp-infoblox-username admin \
    --foreman-proxy-plugin-dhcp-infoblox-password infoblox \
    --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress \
    --foreman-proxy-plugin-dhcp-infoblox-dns-view default \
    --foreman-proxy-plugin-dhcp-infoblox-network-view default
  2. Optional: In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, select the orcharhino Proxy with the DHCP Infoblox module, and ensure that the dhcp feature is listed.

  3. In the orcharhino management UI, navigate to Infrastructure > Subnets.

  4. For all subnets managed through Infoblox, ensure that the IP address management (IPAM) method of the subnet is set to DHCP.

7.5. Installing the DNS Infoblox module

Install the DNS Infoblox module on orcharhino Proxy. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Installing the DHCP Infoblox module.

Procedure
  1. On orcharhino Proxy, enter the following command to configure the Infoblox module:

    # foreman-installer --enable-foreman-proxy-plugin-dns-infoblox \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-provider infoblox \
    --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com \
    --foreman-proxy-plugin-dns-infoblox-username admin \
    --foreman-proxy-plugin-dns-infoblox-password infoblox \
    --foreman-proxy-plugin-dns-infoblox-dns-view default

    Optionally, you can change the value of the --foreman-proxy-plugin-dns-infoblox-dns-view option to specify an Infoblox DNS view other than the default view.

  2. Optional: In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, select the orcharhino Proxy with the Infoblox DNS module, and ensure that the dns feature is listed.

  3. In the orcharhino management UI, navigate to Infrastructure > Domains.

  4. For all domains managed through Infoblox, ensure that the DNS Proxy is set for those domains.

  5. In the orcharhino management UI, navigate to Infrastructure > Subnets.

  6. For all subnets managed through Infoblox, ensure that the DNS orcharhino Proxy and Reverse DNS orcharhino Proxy are set for those subnets.

Appendix A: orcharhino Proxy scalability considerations when managing Puppet clients

orcharhino Proxy scalability when managing Puppet clients depends on the number of CPUs, the run-interval distribution, and the number of Puppet-managed resources. orcharhino Proxy supports a maximum of 100 concurrent Puppet agent runs at any single point in time. Running more than 100 concurrent Puppet agents results in an HTTP 503 error.

For example, assuming that Puppet agent runs are evenly distributed with fewer than 100 concurrent Puppet agents running at any single point during a run-interval, an orcharhino Proxy with 4 CPUs can support a maximum of 1250 – 1600 Puppet clients with a moderate workload of 10 Puppet classes assigned to each Puppet client. Depending on the number of Puppet clients required, the orcharhino installation can scale out the number of orcharhino Proxies to support them.
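
As a rough sanity check of these figures, under an even distribution the expected number of concurrent agents is the number of clients multiplied by the ratio of average agent run time to run-interval. For example, with 1600 clients, a 30-minute (1800-second) run-interval, and a hypothetical average agent run time of 110 seconds, you can compute:

    # echo $(( 1600 * 110 / 1800 ))
    97

The result, approximately 97 concurrent agents, stays below the limit of 100. The run time value is a hypothetical illustration, not a measured figure.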

When scaling your orcharhino Proxy for managing Puppet clients, the following assumptions apply:

  • There are no external Puppet clients reporting directly to the orcharhino integrated orcharhino Proxy.

  • All other Puppet clients report directly to an external orcharhino Proxy.

  • There is an evenly distributed run-interval of all Puppet agents.

Note

Deviating from the even distribution increases the risk of overloading orcharhino Server. The limit of 100 concurrent requests applies.

The following table describes the scalability limits using the recommended 4 CPUs.

Table 4. Puppet scalability using 4 CPUs

Puppet Managed Resources per Host    Run-Interval Distribution
1                                    3000 – 2500
10                                   2400 – 2000
20                                   1700 – 1400

The following table describes the scalability limits using the minimum 2 CPUs.

Table 5. Puppet scalability using 2 CPUs

Puppet Managed Resources per Host    Run-Interval Distribution
1                                    1700 – 1450
10                                   1500 – 1250
20                                   850 – 700

Appendix B: dhcp_isc settings

The dhcp_isc provider uses a combination of the ISC DHCP server OMAPI management interface and parsing of configuration and lease files. This requires it to be run on the same host as the DHCP server. The following settings are defined in dhcp_isc.yml:

Configuring the path to the configuration and lease files:

    :config: /etc/dhcp/dhcpd.conf
    :leases: /var/lib/dhcpd/dhcpd.leases

Securing the DHCP server with an omapi_key:

    :key_name: My_OMAPI_Key
    :key_secret: My_Key_Secret

Setting the port on which the DHCP server listens:

    :omapi_port: My_DHCP_Server_Port  # default: 7911

The server is defined in dhcp.yml:

Setting the host on which the DHCP server runs:

    :server: My_DHCP_Server_FQDN
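
For reference, the following is a minimal sketch of a complete dhcp_isc.yml, typically located at /etc/foreman-proxy/settings.d/dhcp_isc.yml, assuming the default ISC DHCP file locations; the key name and secret are placeholders:

    ---
    :config: /etc/dhcp/dhcpd.conf
    :leases: /var/lib/dhcpd/dhcpd.leases
    :key_name: My_OMAPI_Key
    :key_secret: My_Key_Secret
    :omapi_port: 7911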

Appendix C: DHCP options for network configuration

--foreman-proxy-dhcp

Enables the DHCP service. You can set this option to true or false.

--foreman-proxy-dhcp-managed

Enables orcharhino to manage the DHCP service. You can set this option to true or false.

--foreman-proxy-dhcp-gateway

The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network.

--foreman-proxy-dhcp-interface

Sets the interface on which the DHCP service listens for requests, for example, eth1.

--foreman-proxy-dhcp-nameservers

Sets the addresses of the nameservers provided to clients through DHCP, for example, the address of orcharhino Server on eth1.

--foreman-proxy-dhcp-range

A space-separated DHCP pool range for Discovered and Unmanaged services.

--foreman-proxy-dhcp-server

Sets the address of the DHCP server to manage.

--foreman-proxy-dhcp-subnets

Sets the subnets of the DHCP server to manage. Example: --foreman-proxy-dhcp-subnets 192.168.205.0/255.255.255.128 or --foreman-proxy-dhcp-subnets 192.168.205.128/255.255.255.128
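
For example, the following is a sketch of a combined invocation that enables a managed DHCP service on eth1; all addresses and the pool range are hypothetical placeholders for your provisioning network:

    # foreman-installer \
    --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-managed true \
    --foreman-proxy-dhcp-interface eth1 \
    --foreman-proxy-dhcp-gateway 192.168.205.1 \
    --foreman-proxy-dhcp-nameservers 192.168.205.2 \
    --foreman-proxy-dhcp-range "192.168.205.10 192.168.205.250"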

Run foreman-installer --help to view more options related to DHCP and other orcharhino Proxy services.