1. Introduction to provisioning

Provisioning is a process that starts with a bare physical or virtual machine and ends with a fully configured, ready-to-use operating system. Using Foreman, you can define and automate fine-grained provisioning for a large number of hosts.

1.1. Provisioning methods in Foreman

With Foreman, you can provision hosts by using the following methods.

Bare-metal hosts

Foreman provisions bare-metal hosts primarily by using PXE boot and MAC address identification. When provisioning bare-metal hosts with Foreman, you can do the following:

  • Create host entries and specify the MAC address of the physical host to provision.

  • Boot blank hosts to use the Foreman Discovery service, which creates a pool of hosts that are ready for provisioning.

  • Boot and provision hosts by using PXE-less methods.

Cloud providers

Foreman connects to private and public cloud providers to provision instances of hosts from images stored in the cloud environment. When provisioning from cloud with Foreman, you can do the following:

  • Select which hardware profile to use.

  • Provision cloud instances from specific providers by using their APIs.

Virtualization infrastructure

Foreman connects to virtualization infrastructure services, such as oVirt and VMware. When provisioning virtual machines with Foreman, you can do the following:

  • Provision virtual machines from virtual image templates.

  • Use the same PXE-based boot methods that you use to provision bare-metal hosts.

1.2. Supported host platforms in provisioning

Foreman supports the following operating systems and architectures for host provisioning.

Note

The following combinations have been tested. Provisioning templates can be extended for additional systems.

If you decide to extend the templates, please submit your changes to our repository. Thanks for your contribution!

Supported host operating systems

The hosts can use the following operating systems:

  • Amazon Linux

  • Debian

  • Enterprise Linux 9 and 8

  • Fedora

  • OpenSUSE

  • SUSE Linux Enterprise Server

  • Ubuntu

Supported host architectures

The hosts can use the following architectures:

  • AMD and Intel 64-bit architectures

1.3. Supported cloud providers

You can connect the following cloud providers as compute resources to Foreman:

  • OpenStack

  • Amazon EC2

  • Google Compute Engine

  • Microsoft Azure

1.4. Supported virtualization infrastructures

You can connect the following virtualization infrastructures as compute resources to Foreman:

  • KVM (libvirt)

  • oVirt

  • VMware

  • KubeVirt

  • Proxmox

1.5. Network boot provisioning workflow

The provisioning process follows a basic PXE workflow:

  1. You create a host and select a domain and subnet. Foreman requests an available IP address from the DHCP Smart Proxy server that is associated with the subnet or from the PostgreSQL database in Foreman. Foreman loads this IP address into the IP address field in the Create Host window. When you complete all the options for the new host, submit the new host request.

  2. Depending on the configuration specifications of the host and its domain and subnet, Foreman creates the following settings:

    • A DHCP record on the DHCP Smart Proxy server that is associated with the subnet.

    • A forward DNS record on the DNS Smart Proxy server that is associated with the domain.

    • A reverse DNS record on the DNS Smart Proxy server that is associated with the subnet.

    • PXELinux, Grub, Grub2, and iPXE configuration files for the host on the TFTP Smart Proxy server that is associated with the subnet.

    • A Puppet certificate on the associated Puppet server.

    • A realm on the associated identity server.

  3. The host is configured to boot from the network as the first device and HDD as the second device.

  4. The new host requests a DHCP reservation from the DHCP server.

  5. The DHCP server responds to the reservation request and returns TFTP next-server and filename options.

  6. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.

  7. A boot loader is returned over TFTP.

  8. The boot loader fetches the configuration for the host through its provisioning interface MAC address.

  9. The boot loader fetches the operating system installer kernel, init RAM disk, and boot parameters.

  10. The installer requests the provisioning template from Foreman.

  11. Foreman renders the provisioning template and returns the result to the host.

  12. The installer performs installation of the operating system.

    • The installer notifies Foreman of a successful build in the postinstall script.

  13. The PXE configuration files revert to a local boot template.

  14. The host reboots.

  15. The new host requests a DHCP reservation from the DHCP server.

  16. The DHCP server responds to the reservation request and returns TFTP next-server and filename options.

  17. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.

  18. A boot loader is returned over TFTP.

  19. The boot loader fetches the configuration for the host through its provisioning interface MAC address.

  20. The boot loader initiates boot from the local drive.

  21. If you configured the host to use Puppet classes, the host uses the modules to configure itself.
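The DHCP exchange in steps 4 and 5 can be pictured as a host reservation that carries the TFTP boot options. The following ISC dhcpd fragment is a hypothetical sketch of the kind of reservation a DHCP Smart Proxy manages; the host name, MAC address, and IP addresses are illustrative:

```
host host01.example.com {
  hardware ethernet 52:54:00:aa:bb:cc;  # provisioning interface MAC address
  fixed-address 192.168.50.10;          # IP address reserved by Foreman
  next-server 192.168.50.2;             # TFTP Smart Proxy (next-server option)
  filename "pxelinux.0";                # boot loader per the PXELoader setting
}
```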

The fully provisioned host performs the following workflow:

  1. The host is configured to boot from the network as the first device and HDD as the second device.

  2. The new host requests a DHCP reservation from the DHCP server.

  3. The DHCP server responds to the reservation request and returns TFTP next-server and filename options.

  4. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.

  5. A boot loader is returned over TFTP.

  6. The boot loader fetches the configuration settings for the host through its provisioning interface MAC address.

  7. For BIOS hosts:

    • The boot loader reports a non-bootable device, so the BIOS skips to the next device and boots from the HDD.

  8. For EFI hosts:

    • The boot loader finds Grub2 on an EFI System Partition (ESP) and chainboots it.

  9. If the host is unknown to Foreman, a default boot loader configuration is provided. When the Discovery service is enabled, the host boots into discovery; otherwise, it boots from the HDD.

This workflow differs depending on custom options. For example:

Discovery

If you use the discovery service, Foreman automatically detects the MAC address of the new host and restarts the host after you submit a request. Note that TCP port 8443 must be reachable by the Smart Proxy to which the host is attached for Foreman to restart the host.

PXE-less Provisioning

After you submit a new host request, you must boot the specific host with the boot disk that you download from Foreman and transfer by using an external storage device.

Compute Resources

Foreman creates the virtual machine, retrieves its MAC address, and stores the MAC address in Foreman. If you use image-based provisioning, the host does not follow the standard PXE boot and operating system installation. The compute resource creates a copy of the image for the host to use. Depending on the image settings in Foreman, seed data can be passed in for initial configuration, for example by using cloud-init. Foreman can connect to the host by using SSH and execute a template to finish the customization.

1.6. Required boot order for network boot

For physical or virtual BIOS hosts
  1. Set the first boot device to network boot.

  2. Set the second boot device to the hard drive. Foreman manages TFTP boot configuration files, so hosts can be provisioned simply by rebooting.

For physical or virtual EFI hosts
  1. Set the first boot device to network boot.

  2. Depending on the EFI firmware type and configuration, the operating system installer typically configures the operating system boot loader as the first entry.

  3. To reboot into the installer again, use the efibootmgr utility to switch back to booting from the network.
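As a sketch, the efibootmgr commands involved look like the following. The boot entry number is illustrative and differs on every system; run these on the host itself with root privileges:

```
# List boot entries to find the number of the network boot entry
efibootmgr -v

# Boot from the network entry (here 0001) on the next boot only
efibootmgr --bootnext 0001
```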

2. Configuring provisioning resources

2.1. Provisioning contexts

A provisioning context is the combination of an organization and location that you specify for Foreman components. The organization and location that a component belongs to sets the ownership and access for that component.

Organizations divide Foreman components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through Foreman and assign components to each individual organization. This ensures Foreman server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in Managing organizations and locations in Foreman.

Locations function similarly to organizations. The difference is that locations are based on physical or geographical setting. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in Managing organizations and locations in Foreman.

2.2. Setting the provisioning context

When you set a provisioning context, you define which organization and location to use for provisioning hosts.

The organization and location menus are located in the menu bar, on the upper left of the Foreman web UI. If you have not selected an organization and location to use, the menu displays: Any Organization and Any Location.

Procedure
  1. Click Any Organization and select the organization.

  2. Click Any Location and select the location to use.

Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the Foreman web UI and select My account to edit your user account settings.

CLI procedure
  • When using the CLI, include either --organization or --organization-label and --location or --location-id as an option. For example:

    # hammer host list --organization "My_Organization" --location "My_Location"

    This command outputs hosts allocated to My_Organization and My_Location.

2.3. Creating operating systems

An operating system is a collection of resources that define how Foreman server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others.

You can add operating systems using the following procedure. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Operating systems and click New Operating system.

  2. In the Name field, enter a name to represent the operating system entry.

  3. In the Major field, enter the number that corresponds to the major version of the operating system.

  4. In the Minor field, enter the number that corresponds to the minor version of the operating system.

  5. In the Description field, enter a description of the operating system.

  6. From the Family list, select the operating system’s family.

  7. From the Root Password Hash list, select the encoding method for the root password.

  8. From the Architectures list, select the architectures that the operating system uses.

  9. Click the Partition table tab and select the possible partition tables that apply to this operating system.

  10. Click the Installation Media tab and enter the information for the installation media source. For more information, see Adding Installation Media to Foreman.

  11. Click the Templates tab and select a PXELinux template, a Provisioning template, and a Finish template for your operating system to use. You can select other templates, for example an iPXE template, if you plan to use iPXE for provisioning.

  12. Click Submit to save your operating system entry.

CLI procedure
  • Create the operating system using the hammer os create command:

    # hammer os create \
    --architectures "x86_64" \
    --description "My_Operating_System" \
    --family "Redhat" \
    --major 8 \
    --media "Red Hat" \
    --minor 8 \
    --name "Enterprise Linux" \
    --partition-tables "My_Partition_Table" \
    --provisioning-templates "My_Provisioning_Template"

2.4. Creating an operating system for Debian 12

Create an operating system in Foreman to provision hosts running Debian 12. This example creates an operating system entry for Debian 12.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  2. Click Create Operating System to create an operating system entry for Debian.

  3. Enter Debian 12 as the Name of the operating system. Use the name as reported as a fact by Ansible, Puppet, or Salt.

  4. Set the Major Version to 12.

  5. Leave the Minor Version field empty.

  6. Set the Release Name to bookworm for Debian 12.

  7. On the Partition Table tab, select the Preseed partition table.

    For more information, see Creating partition tables.

  8. On the Templates tab, select the Preseed Finish template as Finish template, the Preseed template as Provisioning Template, and the Preseed PXELinux template as PXELinux template.

    For more information, see Creating provisioning templates.

  9. Click Submit to save the operating system entry for Debian.
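Alternatively, you can create a similar entry on the CLI with hammer. The following command mirrors the values from the procedure above; it is a sketch that assumes your hammer version supports the --release-name option, and you still need to associate the Preseed partition table and templates as in steps 7 and 8:

```
# hammer os create \
--name "Debian 12" \
--major 12 \
--release-name "bookworm" \
--family "Debian"
```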

2.5. Creating an operating system for Debian 11

Create an operating system in Foreman to provision hosts running Debian 11. This example creates an operating system entry for Debian 11.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  2. Click Create Operating System to create an operating system entry for Debian.

  3. Enter Debian 11 as the Name of the operating system. Use the name as reported as a fact by Ansible, Puppet, or Salt.

  4. Set the Major Version to 11.

  5. Leave the Minor Version field empty.

  6. Set the Release Name to bullseye for Debian 11.

  7. On the Partition Table tab, select the Preseed partition table.

    For more information, see Creating partition tables.

  8. On the Templates tab, select the Preseed Finish template as Finish template, the Preseed template as Provisioning Template, and the Preseed PXELinux template as PXELinux template.

    For more information, see Creating provisioning templates.

  9. Click Submit to save the operating system entry for Debian.

2.6. Creating an operating system for Ubuntu 24.04

Create an operating system in Foreman to provision hosts running Ubuntu 24.04. This example creates an operating system entry for Ubuntu 24.04.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  2. Click Create Operating System to create an operating system entry for Ubuntu.

  3. Enter Ubuntu 24.04 as the Name of the operating system. Use the name as reported as a fact by Ansible, Puppet, or Salt.

  4. Set the Major Version to 24.04.

  5. Leave the Minor Version field empty.

  6. Set the Release Name to noble for Ubuntu 24.04.

  7. On the Partition Table tab, select the Preseed partition table.

    For more information, see Creating partition tables.

  8. On the Templates tab, select the Preseed Finish template as Finish template, the Preseed template as Provisioning Template, and the Preseed PXELinux template as PXELinux template.

    For more information, see Creating provisioning templates.

  9. Click Submit to save the operating system entry for Ubuntu.

2.7. Creating an operating system for Ubuntu 22.04

Create an operating system in Foreman to provision hosts running Ubuntu 22.04. This example creates an operating system entry for Ubuntu 22.04.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  2. Click Create Operating System to create an operating system entry for Ubuntu.

  3. Enter Ubuntu 22.04 as the Name of the operating system. Use the name as reported as a fact by Ansible, Puppet, or Salt.

  4. Set the Major Version to 22.04.

  5. Leave the Minor Version field empty.

  6. Set the Release Name to jammy for Ubuntu 22.04.

  7. On the Partition Table tab, select the Preseed partition table.

    For more information, see Creating partition tables.

  8. On the Templates tab, select the Preseed Finish template as Finish template, the Preseed template as Provisioning Template, and the Preseed PXELinux template as PXELinux template.

    For more information, see Creating provisioning templates.

  9. Click Submit to save the operating system entry for Ubuntu.

2.8. Updating the details of multiple operating systems

Use this procedure to update the details of multiple operating systems. This example shows you how to assign each operating system a partition table called Kickstart default, a configuration template called Kickstart default PXELinux, and a provisioning template called Kickstart default.

Procedure
  1. On Foreman server, run the following Bash script:

    # Look up the IDs of the Kickstart default partition table and templates
    PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1)
    PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1)
    FOREMAN_ID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1)

    # Associate the partition table and templates with every operating system
    # and set them as the defaults
    for i in $(hammer --no-headers --csv os list | awk -F, '{print $1}')
    do
       hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}"
       hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}"
       hammer os set-default-template --id="${i}" --config-template-id="${PXEID}"
       hammer os add-config-template --id="${i}" --config-template-id="${FOREMAN_ID}"
       hammer os set-default-template --id="${i}" --config-template-id="${FOREMAN_ID}"
    done
  2. Display information about the updated operating system to verify that the operating system is updated correctly:

    # hammer os info --id 1

2.9. Creating architectures

An architecture in Foreman represents a logical grouping of hosts and operating systems. Architectures are created by Foreman automatically when hosts check in with Puppet. The x86_64 architecture is already preset in Foreman.

Use this procedure to create an architecture in Foreman.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Architectures.

  2. Click Create Architecture.

  3. In the Name field, enter a name for the architecture.

  4. From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Provisioning Setup > Operating Systems.

  5. Click Submit.

CLI procedure
  • Enter the hammer architecture create command to create an architecture. Specify its name and operating systems that include this architecture:

    # hammer architecture create \
    --name "My_Architecture" \
    --operatingsystems "My_Operating_System"

2.10. Creating hardware models

Use this procedure to create a hardware model in Foreman so that you can specify which hardware model a host uses.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Hardware Models.

  2. Click Create Model.

  3. In the Name field, enter a name for the hardware model.

  4. Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system.

  5. In the Info field, enter a description of the hardware model.

  6. Click Submit to save your hardware model.

CLI procedure
  • Create a hardware model using the hammer model create command. The only required parameter is --name. Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option:

    # hammer model create \
    --hardware-model "My_Hardware_Model" \
    --info "My_Description" \
    --name "My_Hardware_Model_Name" \
    --vendor-class "My_Vendor_Class"

2.11. Using a synchronized Kickstart repository for a host’s operating system

Foreman contains a set of synchronized Kickstart repositories that you use to install the provisioned host’s operating system. For more information about adding repositories, see Syncing Repositories in Managing content.

Use this procedure to set up a Kickstart repository.

Prerequisites

You must enable both the BaseOS and AppStream Kickstart repositories before provisioning.

Procedure
  1. Add the synchronized Kickstart repository that you want to use to the existing content view, or create a new content view and add the Kickstart repository.

    For Red Hat Enterprise Linux 8, ensure that you add both Red Hat Enterprise Linux 8 for x86_64 - AppStream Kickstart x86_64 8 and Red Hat Enterprise Linux 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories.

  2. Publish a new version of the content view where the Kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing content views in Managing content.

  3. When you create a host, in the Operating System tab, for Media Selection, select the Synced Content checkbox.

To view the Kickstart tree, enter the following command:

# hammer medium list --organization "My_Organization"

2.12. Adding installation media to Foreman

Installation media are sources of packages that Foreman server uses to install a base operating system on a machine from an external repository.

You can view installation media by navigating to Hosts > Provisioning Setup > Installation Media.

Installation media must be in the format of an operating system installation tree and must be accessible from the machine hosting the installer through an HTTP URL.

Note

Other protocols, such as HTTPS or NFS, are known to work but have not been tested. The Foreman community recommends using HTTP.

You can hypothetically use an NFS share for PXE-based provisioning in a semi-automated way by copying the initrd.img and vmlinuz files from the NFS source to /var/lib/tftpboot/boot on the Smart Proxy with the expected names. Only then can PXE booting proceed.

By default, Foreman includes installation media for some official Linux distributions. Note that some of those installation media are targeted at a specific version of an operating system. For example, CentOS mirror (7.x) must be used for CentOS 7 or earlier, and CentOS mirror (8.x) must be used for CentOS 8 or later.

To improve download performance when using installation media to install operating systems on multiple hosts, modify the Path of the installation medium to point to the closest mirror or a local copy.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Installation Media.

  2. Click Create Medium.

  3. In the Name field, enter a name to represent the installation media entry.

  4. In the Path field, enter the URL that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions:

    • $arch – The system architecture.

    • $version – The operating system version.

    • $major – The operating system major version.

    • $minor – The operating system minor version.

      Example HTTP path:

      http://download.example.com/centos/$version/Server/$arch/os/

      Example NFS path:

      nfs://download.example.com/centos/$version/Server/$arch/os/

      Synchronized content on Smart Proxy servers always uses an HTTP path. Smart Proxy server managed content does not support NFS paths.

  5. From the Operating system family list, select the distribution or family of the installation medium. For example, CentOS and Fedora are in the Red Hat family. Debian and Ubuntu belong to the Debian family.

  6. Click the Organizations and Locations tabs to change the provisioning context. Foreman server adds the installation medium to the set provisioning context.

  7. Click Submit to save your installation medium.
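Foreman expands these path variables server-side when it renders the installation medium URL for a host. As a rough, self-contained illustration of that substitution (this is not Foreman's code; the architecture and version values are examples):

```shell
#!/bin/bash
# Illustrate how a medium path template with $arch and $version
# placeholders expands for a particular host (illustrative values).
path_template='http://download.example.com/centos/$version/Server/$arch/os/'

arch="x86_64"
version="7.9"

# Replace each placeholder with the host's value
path="${path_template//\$arch/$arch}"
path="${path//\$version/$version}"

echo "$path"
# → http://download.example.com/centos/7.9/Server/x86_64/os/
```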

CLI procedure
  • Create the installation medium using the hammer medium create command:

    # hammer medium create \
    --locations "My_Location" \
    --name "My_Operating_System" \
    --organizations "My_Organization" \
    --os-family "Redhat" \
    --path 'http://download.example.com/centos/$version/Server/$arch/os/'

2.13. Creating an installation medium for Debian/Ubuntu

Create an installation medium in Foreman to provision hosts running Debian/Ubuntu.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Installation Media.

  2. Click Create Medium to create an installation medium entry.

  3. Enter a Name for the installation medium.

  4. Set the Path to the upstream URL http://deb.debian.org/debian/.

    Alternatively, you can also use a Debian/Ubuntu Mirror. Note that synchronized content from Foreman does not work for two reasons: first, the linux and initrd.gz files are not synchronized; second, the Release file is not signed with the official Debian/Ubuntu GPG private key.

  5. Set the Operating System Family to Debian.

  6. Click Submit to save the installation medium to Foreman.

2.14. Creating an installation medium for Ubuntu 22.04

To provision Ubuntu 22.04, you need to provide both the ISO image and the extracted ISO image on your Foreman server.

Procedure
  1. Connect to your Foreman server using SSH:

    # ssh root@foreman.example.com
  2. Download the ISO image:

    # cd /tmp/
    # wget https://releases.ubuntu.com/22.04/ubuntu-22.04.3-live-server-amd64.iso
  3. Mount the ISO image:

    # mount ubuntu-22.04.3-live-server-amd64.iso /mnt
  4. Provide the ISO image and the extracted directory under foreman.example.com/pub:

    # mkdir -p /var/www/html/pub/installation_media/ubuntu/22.04-x86_64/
    # cp ubuntu-22.04.3-live-server-amd64.iso /var/www/html/pub/installation_media/ubuntu/22.04-x86_64.iso
    # cp -a /mnt/* /var/www/html/pub/installation_media/ubuntu/22.04-x86_64/

    Ensure the path in /pub/ matches the path in your Preseed default PXELinux Autoinstall template.

  5. Unmount and delete the ISO image:

    # umount /mnt/
    # rm -f ubuntu-22.04.3-live-server-amd64.iso

Use http://foreman.example.com/pub/installation_media/ubuntu/22.04-x86_64/ to set up your installation media entry in Foreman. For more information, see Adding installation media to Foreman.

2.15. Provisioning Ubuntu autoinstall through Smart Proxies

Perform these steps to provision hosts running Ubuntu 20.04.3+ or Ubuntu 22.04.

Procedure
  1. Provide the extracted ISO and ISO image itself on each Smart Proxy server. For more information, see Creating an installation medium for Ubuntu 22.04.

  2. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Installation Media.

  3. Create an installation medium. Set the Path to the extracted ISO image on your Smart Proxy server.

  4. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  5. Assign the installation medium to your operating system entry for Ubuntu 22.04.

  6. In the Foreman web UI, navigate to Configure > Host Groups.

  7. Select the host group that you use to deploy Ubuntu 22.04 through a Smart Proxy server and set the installation medium entry accordingly.

2.16. Creating partition tables

A partition table is a type of template that defines the way Foreman server configures the disks available on a new host. A partition table uses the same ERB syntax as provisioning templates. Foreman contains a set of default partition tables to use, including a Kickstart default. You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Partition Tables.

  2. Click Create Partition Table.

  3. In the Name field, enter a name for the partition table.

  4. Select the Default checkbox if you want to set the template to automatically associate with new organizations or locations.

  5. Select the Snippet checkbox if you want to identify the template as a reusable snippet for other partition tables.

  6. From the Operating System Family list, select the distribution or family of the partitioning layout. For example, Red Hat Enterprise Linux, CentOS, and Fedora are in the Red Hat family.

  7. In the Template editor field, enter the layout for the disk partition.

    The format of the layout must match that for the intended operating system. For example, Enterprise Linux requires a layout that matches a Kickstart file, such as:

    zerombr
    clearpart --all --initlabel
    autopart

    For more information, see Dynamic partition example.

    You can also use the file browser in the template editor to import the layout from a file.

  8. In the Audit Comment field, add a summary of changes to the partition layout.

  9. Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. Foreman adds the partition table to the current provisioning context.

  10. Click Submit to save your partition table.

CLI procedure
  1. Create a plain text file, such as ~/My_Partition_Table, that contains the partition layout.

    The format of the layout must match that for the intended operating system. For example, Enterprise Linux requires a layout that matches a Kickstart file, such as:

    zerombr
    clearpart --all --initlabel
    autopart

    For more information, see Dynamic partition example.

  2. Create the partition table using the hammer partition-table create command:

    # hammer partition-table create \
    --file "~/My_Partition_Table" \
    --locations "My_Location" \
    --name "My_Partition_Table" \
    --organizations "My_Organization" \
    --os-family "Redhat" \
    --snippet false

2.17. Associating partition tables with disk encryption

Foreman contains partition tables that encrypt the disk of your host by using Linux Unified Key Setup (LUKS) during host provisioning. Encrypted disks on hosts protect data at rest. Optionally, you can also bind the disk to a Tang server through Clevis for decryption during boot.

Associate the partition table with your operating system entry. Then, you assign the partition table to your host group or select it manually during provisioning.

Prerequisites
  • Your host has access to the AppStream repository to install clevis during provisioning.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  2. Select your Enterprise Linux entry.

  3. On the Partition Table tab, associate Kickstart default encrypted with your operating system entry.

  4. Create a host group that uses the Kickstart default encrypted partition table. For more information, see Creating a host group in Managing hosts.

  5. Decrypt the disk of your host during boot time by using one of the following options:

    • LUKS encryption: Add the host parameter disk_enc_passphrase as type string and your cleartext passphrase of the LUKS container as the value.

    • Clevis and Tang: Add the host parameter disk_enc_tang_servers as type array and your list of Tang servers (example: ["1.2.3.4"] or ["server.example.com", "5.6.7.8"]).

      If you set disk_enc_tang_servers, do not set disk_enc_passphrase because the passphrase slot is removed from the LUKS container after provisioning.
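On the CLI, such host parameters can be set with hammer. The following sketch assumes a host named host01.example.com and a hammer version that supports the --parameter-type option:

```
# hammer host set-parameter \
--host "host01.example.com" \
--name "disk_enc_tang_servers" \
--parameter-type array \
--value '["tang.example.com"]'
```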

2.18. Dynamic partition example

Using an Anaconda Kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the next sequence of events in the provisioning process:

zerombr
clearpart --all --initlabel
autopart <%= host_param('autopart_options') %>

Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer.
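The host_param('autopart_options') macro in the snippet above expands to the value of the autopart_options host parameter, so you can tune partitioning per host or host group without editing the template. A sketch of setting it with hammer; the host name and option value are hypothetical examples:

```shell
# hammer host set-parameter \
--host "host1.example.com" \
--name autopart_options \
--parameter-type string \
--value "--type=lvm"
```

For that host, the rendered Kickstart line becomes autopart --type=lvm.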

If you want to provision servers and use dynamic partitioning, add the following example as a template. When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg that is then included in the Kickstart partitioning section.

#Dynamic (do not remove this line)

MEMORY=$((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024))
if [ "$MEMORY" -lt 2048 ]; then
    SWAP_MEMORY=$(($MEMORY * 2))
elif [ "$MEMORY" -lt 8192 ]; then
    SWAP_MEMORY=$MEMORY
elif [ "$MEMORY" -lt 65536 ]; then
    SWAP_MEMORY=$(($MEMORY / 2))
else
    SWAP_MEMORY=32768
fi

cat <<EOF > /tmp/diskpart.cfg
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size 200 --asprimary
part swap --size "$SWAP_MEMORY"
part / --fstype ext4 --size 1024 --grow
EOF
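The swap-sizing rule in the scriptlet above can be checked outside of provisioning. This is a sketch that extracts the rule into a standalone function; inputs and outputs are in MiB:

```shell
# Sketch of the swap-sizing rule from the scriptlet above, as a function.
swap_size() {
  local mem=$1
  if [ "$mem" -lt 2048 ]; then
    echo $((mem * 2))          # small hosts: swap is double the RAM
  elif [ "$mem" -lt 8192 ]; then
    echo "$mem"                # medium hosts: swap equals RAM
  elif [ "$mem" -lt 65536 ]; then
    echo $((mem / 2))          # large hosts: swap is half the RAM
  else
    echo 32768                 # very large hosts: swap is capped at 32 GiB
  fi
}

swap_size 1024    # prints 2048
swap_size 16384   # prints 8192
```

This makes it easy to confirm the boundaries before deploying the template.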

2.19. Provisioning templates

A provisioning template defines the way Foreman server installs an operating system on a host.

Foreman includes many template examples. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Templates > Provisioning Templates > Create Template > Help.

Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing hosts.

You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in Managing organizations and locations in Foreman.

You can synchronize templates between Foreman server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in Managing hosts.

To view the history of changes applied to a template, navigate to Hosts > Templates > Provisioning Templates, select one of the templates, and click History. Click Revert to override the content with the previous version. You can also revert to an earlier change. Click Show Diff to see information about a specific change:

  • The Template Diff tab displays changes in the body of a provisioning template.

  • The Details tab displays changes in the template description.

  • The History tab displays the user who made a change to the template and date of the change.

2.20. Kinds of provisioning templates

There are various kinds of provisioning templates:

Provision

The main template for the provisioning process. For example, a Kickstart template. For more information about Kickstart syntax and commands, see the Kickstart documentation for your operating system.

PXELinux, PXEGrub, PXEGrub2

PXE-based templates that deploy to the template Smart Proxy associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select the PXELinux template. For UEFI provisioning, select the PXEGrub2 template.

Finish

Post-configuration scripts to execute over an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with a Foreman Discovery ISO, which is sometimes called a Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment.

When a finish script exits with the return code 0, Foreman treats this as a success and the host exits build mode.

Note that a few finish scripts use a build mode with an HTTP callback. These scripts are not used for image-based provisioning, but for post-configuration of operating system installations such as Debian, Ubuntu, and BSD.

user_data

Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. This template does not require Foreman to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image.

Ensure that the image that you want to provision includes software that can read the data, such as cloud-init (which expects YAML input) or Ignition (which expects JSON input), and that this software is set to start during boot.

cloud_init

Some environments, such as VMware, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the Foreman plugin, which attempts to download the template directly from Foreman over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized.

Ensure that you meet the following requirements to use the cloud_init template:

  • Ensure that the image that you want to provision has the software to read the data installed and set to start during boot.

  • A provisioned host is able to reach Foreman from the IP address that matches the host’s provisioning interface IP.

    Note that cloud-init does not work behind NAT.

Bootdisk

Templates for PXE-less boot methods.

Kernel Execution (kexec)

Kernel execution templates for PXE-less boot methods.

Script

An arbitrary script not used by default but useful for custom tasks.

ZTP

Zero Touch Provisioning templates.

POAP

PowerOn Auto Provisioning templates.

iPXE

Templates for iPXE or gPXE environments to use instead of PXELinux.

2.21. Creating provisioning templates

A provisioning template defines the way Foreman server installs an operating system on a host. Use this procedure to create a new provisioning template.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template.

  2. In the Name field, enter a name for the provisioning template.

  3. Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template.

CLI procedure
  1. Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file.

  2. Create the template using the hammer template create command and specify the type with the --type option:

    # hammer template create \
    --file ~/my-template \
    --locations "My_Location" \
    --name "My_Provisioning_Template" \
    --organizations "My_Organization" \
    --type provision

2.22. Cloning provisioning templates

A provisioning template defines the way Foreman server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Find the template that you want to use.

  3. Click Clone to duplicate the template.

  4. In the Name field, enter a name for the provisioning template.

  5. Select the Default checkbox to set the template to associate automatically with new organizations or locations.

  6. In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file.

  7. In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes.

  8. Click the Type tab and if your template is a snippet, select the Snippet checkbox. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates.

  9. From the Type list, select the type of the template. For example, Provisioning template.

  10. Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template.

  11. Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate provisioning template with the host groups and environments.

  12. Click the Organizations and Locations tabs to add any additional contexts to the template.

  13. Click Submit to save your provisioning template.

2.23. Creating custom provisioning snippets

You can execute custom code before and/or after the host provisioning process.

Prerequisites
  • Check your provisioning template to ensure that it supports the custom snippets you want to use.

    You can view all provisioning templates under Hosts > Templates > Provisioning Templates.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template.

  2. In the Name field, enter a name for your custom provisioning snippet. The name must start with the name of a provisioning template that supports including custom provisioning snippets:

    • Append custom pre to the name of a provisioning template to run code before provisioning a host.

    • Append custom post to the name of a provisioning template to run code after provisioning a host.

  3. On the Type tab, select Snippet.

  4. Click Submit to create your custom provisioning snippet.

CLI procedure
  1. Create a plain text file that contains your custom snippet.

  2. Create the template using hammer:

    # hammer template create \
    --file "/path/to/My_Snippet" \
    --locations "My_Location" \
    --name "My_Template_Name_custom_pre" \
    --organizations "My_Organization" \
    --type snippet

2.24. Custom provisioning snippet example for Debian

You can use Custom Post snippets to call external APIs from within the provisioning template directly after provisioning a host.

Preseed default custom post Example for Debian/Ubuntu
echo "Calling API to report successful host deployment"

apt install -y curl ca-certificates

curl -X POST \
-H "Content-Type: application/json" \
-d '{"name": "<%= @host.name %>", "operating_system": "<%= @host.operatingsystem.name %>", "status": "provisioned"}' \
"https://api.example.com/"

2.25. Custom provisioning snippet example for Enterprise Linux

You can use Custom Post snippets to call external APIs from within the provisioning template directly after provisioning a host.

Kickstart default finish custom post Example for Enterprise Linux
echo "Calling API to report successful host deployment"

yum install -y curl ca-certificates

curl -X POST \
-H "Content-Type: application/json" \
-d '{"name": "<%= @host.name %>", "operating_system": "<%= @host.operatingsystem.name %>", "status": "provisioned"}' \
"https://api.example.com/"
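Hand-written JSON payloads like the ones in these snippets are easy to break with a stray character such as a trailing comma. A sketch of validating the rendered payload before putting it into a template, using python3's json.tool with hypothetical substituted values:

```shell
# The payload the snippet would render for a hypothetical host, with the
# ERB placeholders already substituted:
payload='{"name": "host1.example.com", "operating_system": "Debian 12", "status": "provisioned"}'

# json.tool exits non-zero on malformed JSON, such as a trailing comma
# before the closing brace:
echo "$payload" | python3 -m json.tool > /dev/null && echo "valid JSON"
```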

2.26. Associating templates with operating systems

You can associate templates with operating systems in Foreman. The following example adds a provisioning template to an operating system entry.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Select a provisioning template.

  3. On the Association tab, select all applicable operating systems.

  4. Click Submit to save your changes.

CLI procedure
  1. Optional: View all templates:

    # hammer template list
  2. Optional: View all operating systems:

    # hammer os list
  3. Associate a template with an operating system:

    # hammer template add-operatingsystem \
    --id My_Template_ID \
    --operatingsystem-id My_Operating_System_ID

2.27. Creating compute profiles

You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

A default installation of Foreman contains three predefined profiles:

  • 1-Small

  • 2-Medium

  • 3-Large

You can apply compute profiles to all supported compute resources:

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile.

  2. In the Name field, enter a name for the profile.

  3. Click Submit. A new window opens with the name of the compute profile.

  4. In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile.

CLI procedure
  1. Create a new compute profile:

    # hammer compute-profile create --name "My_Compute_Profile"
  2. Set attributes for the compute profile:

    # hammer compute-profile values create \
    --compute-attributes "flavor=m1.small,cpus=2,memory=4GB,cpu_mode=default" \
    --compute-resource "My_Compute_Resource" \
    --compute-profile "My_Compute_Profile" \
    --volume size=40GB
  3. Optional: To update the attributes of a compute profile, specify the attributes you want to change. For example, to change the number of CPUs and memory size:

    # hammer compute-profile values update \
    --compute-resource "My_Compute_Resource" \
    --compute-profile "My_Compute_Profile" \
    --attributes "cpus=2,memory=4GB" \
    --interface "type=network,bridge=br1,index=1" \
    --volume "size=40GB"
  4. Optional: To change the name of the compute profile, use the --new-name attribute:

    # hammer compute-profile update \
    --name "My_Compute_Profile" \
    --new-name "My_New_Compute_Profile"
Additional resources
  • For more information about creating compute profiles by using Hammer, enter hammer compute-profile --help.

2.28. Setting a default encrypted root password for hosts

If you do not want to set a plain text default root password for the hosts that you provision, you can use a default encrypted password.

The default root password can be inherited by a host group and consequently by hosts in that group.

If you change the password and reprovision the hosts in the group that inherits the password, the password will be overwritten on the hosts.

Procedure
  1. Generate an encrypted password:

    $ python3 -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass("Confirm: ")) else exit()'
  2. Copy the password for later use.

  3. In the Foreman web UI, navigate to Administer > Settings.

  4. On the Settings page, select the Provisioning tab.

  5. In the Name column, navigate to Root password, and click Click to edit.

  6. Paste the encrypted password, and click Save.
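The python3 command in step 1 relies on the crypt module, which was removed in Python 3.13. On systems without it, openssl can generate an equivalent SHA-512 crypt hash. This is a sketch; the salt and password are examples only:

```shell
# Generate a SHA-512 crypt hash with openssl; with a fixed salt the
# output is deterministic and starts with the $6$ scheme prefix.
openssl passwd -6 -salt Xh8s0yYt My_Password
```

Omit the -salt option to let openssl pick a random salt.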

2.29. Using noVNC to access virtual machines

You can use your browser to access the VNC console of VMs created by Foreman.

Foreman supports using noVNC on the following virtualization platforms:

  • VMware

  • Libvirt

  • oVirt

Prerequisites
  • You must have a virtual machine created by Foreman.

  • For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC.

  • You must import the Katello root CA certificate into your Foreman server. Adding a security exception in the browser is not enough for using noVNC. For more information, see Installing the Katello Root CA Certificate in Administering Foreman.

Procedure
  1. On your Foreman server, configure the firewall to allow VNC service on ports 5900 to 5930.

    • On operating systems with the iptables command:

      # iptables -A INPUT -p tcp --dport 5900:5930 -j ACCEPT
      # service iptables save
    • On operating systems with the firewalld service:

      # firewall-cmd --add-port=5900-5930/tcp
      # firewall-cmd --add-port=5900-5930/tcp --permanent

      If you do not use firewall-cmd to configure the Linux firewall, open these ports by using a command of your choice.

  2. In the Foreman web UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource.

  3. In the Virtual Machines tab, select the name of your virtual machine. Ensure the machine is powered on and then select Console.

2.30. Removing a virtual machine upon host deletion

By default, when you delete a host provisioned by Foreman, Foreman does not remove the actual VM on the compute resource. You can configure Foreman to remove the VM when deleting the host entry on Foreman.

Note

If you do not remove the associated VM and attempt to create a new VM with the same FQDN later, it will fail because that VM already exists in the compute resource. You can still re-register the existing VM to Foreman.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Prerequisites
  • Your Foreman account has a role that grants the view_settings and edit_settings permissions.

Procedure
  1. In the Foreman web UI, navigate to Administer > Settings > Provisioning.

  2. Change the value of the Destroy associated VM on host delete setting to Yes.

CLI procedure
  • Configure Foreman to remove a VM upon host deletion by using Hammer:

    # hammer settings set \
    --name destroy_vm_on_host_delete \
    --value true \
    --location "My_Location" \
    --organization "My_Organization"
Next steps
  • You can now delete a host, and Foreman also removes its associated VM from the compute resource.

3. Configuring networking

Each provisioning type requires some network configuration. Use this chapter to configure network services in your integrated Smart Proxy on Foreman server.

New hosts must have access to your Smart Proxy server. Smart Proxy server can be either your integrated Smart Proxy on Foreman server or an external Smart Proxy server. You might want to provision hosts from an external Smart Proxy server when the hosts are on isolated networks and cannot connect to Foreman server directly, or when the content is synchronized with Smart Proxy server. Provisioning by using Smart Proxy servers can save on network bandwidth.

Configuring Smart Proxy server has two basic requirements:

  1. Configuring network services. This includes:

    • Content delivery services

    • Network services (DHCP, DNS, and TFTP)

    • Puppet configuration

  2. Defining network resource data in Foreman server to help configure network interfaces on new hosts.

The following instructions apply similarly to configuring a standalone Smart Proxy server that manages a specific network. To configure Foreman to use external DHCP, DNS, and TFTP services, see Configuring External Services in Installing Foreman Server nightly on Debian/Ubuntu.

3.1. Facts and NIC filtering

Facts describe aspects such as total memory, operating system version, or architecture as reported by the host. You can find facts in Monitor > Facts and search hosts through facts or use facts in templates.

Foreman collects facts from multiple sources:

  • subscription manager

  • ansible

  • puppet

Foreman is an inventory system for hosts and network interfaces. For hypervisors or container hosts, adding thousands of interfaces per host and updating the inventory every few minutes is inadequate. For each individual NIC reported, Foreman creates a NIC entry and those entries are never removed from the database. Parsing all the facts and comparing all records in the database makes Foreman extremely slow and unusable. To optimize the performance of various actions, most importantly fact import, you can use the options available on the Facts tab under Administer > Settings.

3.2. Optimizing performance by removing NICs from database

Filter and exclude interfaces by using the Exclude pattern for facts stored in Foreman and Ignore interfaces with matching identifier options. By default, these options are set to cover the most common hypervisors. If you name your virtual interfaces differently, update these filters according to your requirements.

Procedure
  1. In the Foreman web UI, navigate to Administer > Settings and select the Facts tab.

  2. To filter out all interfaces starting with specific names, for example, blu, add blu* to the Ignore interfaces with matching identifier option.

  3. To prevent databases from storing facts related to interfaces starting with specific names, for example, blu, add blu* to the Exclude pattern for facts stored in Foreman option.

    By default, it contains the same list as the Ignore interfaces with matching identifier option. You can override it based on your requirements. This filters out facts completely without storing them.

  4. To remove facts from the database, enter the following command:

    # foreman-rake facts:clean

    This command removes all facts matching with the filter added in Administer > Settings > Facts > the Exclude pattern for facts stored in Foreman option.

  5. To remove interfaces from the database, enter the following command:

    # foreman-rake interfaces:clean

    This command removes all interfaces matching with the filter added in Administer > Settings > Facts > the Ignore interfaces with matching identifier option.

3.3. Network resources

Foreman contains networking resources that you must set up and configure to create a host. It includes the following networking resources:

Domain

You must assign every host that is managed by Foreman to a domain. Using the domain, Foreman can manage A, AAAA, and PTR records. Even if you do not want Foreman to manage your DNS servers, you still must create and associate at least one domain. Domains are included in the naming conventions of Foreman hosts, for example, a host with the name test123 in the example.com domain has the fully qualified domain name test123.example.com.

Subnet

You must assign every host managed by Foreman to a subnet. Using subnets, Foreman can then manage IPv4 reservations. Even if there are no reservation integrations, you still must create and associate at least one subnet. When you manage a subnet in Foreman, you cannot create DHCP records for that subnet outside of Foreman. In Foreman, you can use IP Address Management (IPAM) to manage IP addresses with one of the following options:

  • DHCP: The DHCP Smart Proxy manages the assignment of IP addresses by finding the next available IP address, starting from the first address of the range and skipping all addresses that are reserved. Before assigning an IP address, the Smart Proxy sends ICMP and TCP pings to check whether the IP address is in use. Note that if a host is powered off, or has a firewall configured to block connections, Foreman incorrectly assumes that the IP address is available. Because this check does not work for hosts that are turned off, the DHCP option can only be used with subnets that Foreman controls and that do not have any hosts created externally.

    The Smart Proxy DHCP module retains the offered IP addresses for a short period of time to prevent collisions during concurrent access, so some IP addresses in the IP range might remain temporarily unused.

  • Internal DB: Foreman finds the next available IP address from the Subnet range by excluding all IP addresses from the Foreman database in sequence. The primary source of data is the database, not DHCP reservations. This IPAM is not safe when multiple hosts are being created in parallel; in that case, use DHCP or Random DB IPAM instead.

  • Random DB: Foreman finds the next available IP address from the Subnet range by excluding all IP addresses from the Foreman database randomly. The primary source of data is the database, not DHCP reservations. This IPAM is safe to use with concurrent host creation as IP addresses are returned in random order, minimizing the chance of a conflict.

  • EUI-64: Extended Unique Identifier (EUI) 64bit IPv6 address generation, as per RFC2373, is obtained through the 48-bit MAC address.

  • External IPAM: Delegates IPAM to an external system through Smart Proxy feature. Foreman currently does not ship with any external IPAM implementations, but several plugins are in development.

  • None: IP address for each host must be entered manually.

    Options DHCP, Internal DB and Random DB can lead to DHCP conflicts on subnets with records created externally. These subnets must be under exclusive Foreman control.

    For more information about adding a subnet, see Adding a subnet to Foreman server.

DHCP Ranges

You can define the same DHCP range in Foreman server for both discovered and provisioned systems, but use a separate range for each service within the same subnet.

3.4. Foreman and DHCP options

Foreman manages DHCP reservations through a DHCP Smart Proxy. Foreman also sets the next-server and filename DHCP options.

The next-server option

The next-server option provides the IP address of the TFTP server to boot from. This option is not set by default and must be set for each TFTP Smart Proxy. You can use the foreman-installer command with the --foreman-proxy-tftp-servername option to set the TFTP server in the /etc/foreman-proxy/settings.d/tftp.yml file:

# foreman-installer --foreman-proxy-tftp-servername 1.2.3.4

Each TFTP Smart Proxy then reports this setting through the API and Foreman can retrieve the configuration information when it creates the DHCP record.

When the PXE loader is set to none, Foreman does not populate the next-server option into the DHCP record.

If the next-server option remains undefined, Foreman uses reverse DNS search to find a TFTP server address to assign, but you might encounter the following problems:

  • DNS timeouts during provisioning

  • Querying of incorrect DNS server. For example, authoritative rather than caching

  • Errors about incorrect IP address for the TFTP server. For example, PTR record was invalid

If you encounter these problems, check the DNS setup on both Foreman and Smart Proxy, specifically the PTR record resolution.

The filename option

The filename option contains the full path to the file that downloads and executes during provisioning. The PXE loader that you select for the host or host group defines which filename option to use. When the PXE loader is set to none, Foreman does not populate the filename option into the DHCP record. Depending on the PXE loader option, the filename changes as follows:

PXE loader option: filename entry (notes)

  • PXELinux BIOS: pxelinux.0

  • PXELinux UEFI: pxelinux.efi

  • iPXE Chain BIOS: undionly.kpxe

  • PXEGrub2 UEFI: grub2/grubx64.efi (x64 can differ depending on the architecture)

  • iPXE UEFI HTTP: http://smartproxy.example.com:8000/httpboot/ipxe-x64.efi (requires the httpboot feature; renders the filename as a full URL where smartproxy.example.com is a known host name of Smart Proxy in Foreman)

  • Grub2 UEFI HTTP: http://smartproxy.example.com:8000/httpboot/grub2/grubx64.efi (requires the httpboot feature; renders the filename as a full URL where smartproxy.example.com is a known host name of Smart Proxy in Foreman)

3.5. Troubleshooting DHCP problems in Foreman

Foreman can manage an ISC DHCP server on an internal or external DHCP Smart Proxy. Foreman can list, create, and delete DHCP reservations and leases. However, you might encounter the following problems on occasion.

Out of sync DHCP records

When an error occurs during DHCP orchestration, DHCP records in the Foreman database and the DHCP server might not match. To fix this, you must add missing DHCP records from the Foreman database to the DHCP server and then remove unwanted records from the DHCP server as per the following steps:

Procedure
  1. To preview the DHCP records that are going to be added to the DHCP server, enter the following command:

    # foreman-rake orchestration:dhcp:add_missing subnet_name=NAME
  2. If you are satisfied by the preview changes in the previous step, apply them by entering the above command with the perform=1 argument:

    # foreman-rake orchestration:dhcp:add_missing subnet_name=NAME perform=1
  3. To keep DHCP records in Foreman and in the DHCP server synchronized, you can remove unwanted DHCP records from the DHCP server. Note that Foreman assumes that all managed DHCP servers do not contain third-party records, therefore, this step might delete those unexpected records. To preview what records are going to be removed from the DHCP server, enter the following command:

    # foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME
  4. If you are satisfied by the preview changes in the previous step, apply them by entering the above command with the perform=1 argument:

    # foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME perform=1
PXE loader option change

When the PXE loader option is changed for an existing host, this causes a DHCP conflict. This is a known issue; until Issue 27877 is fixed, the only workaround is to overwrite the DHCP entry.

Incorrect permissions on DHCP files

An operating system update can update the dhcpd package. This causes the permissions of important directories and files to reset so that the DHCP Smart Proxy cannot read the required information.

For more information, see ERF12-6899 - Unable to set DHCP entry.

Changing the DHCP Smart Proxy entry

Foreman manages DHCP records only for hosts that are assigned to subnets with a DHCP Smart Proxy set. If you create a host and then clear or change the DHCP Smart Proxy, when you attempt to delete the host, the action fails.

If you create a host without setting the DHCP Smart Proxy and then try to set the DHCP Smart Proxy, this causes DHCP conflicts.

Deleted hosts entries in the dhcpd.leases file

Any changes to a DHCP lease are appended to the end of the dhcpd.leases file. Because entries are appended, two or more entries of the same lease can exist in the dhcpd.leases file at the same time. When there are two or more entries of the same lease, the last entry in the file takes precedence. Group, subgroup, and host declarations in the lease file are processed in the same manner. If a lease is deleted, { deleted; } is appended to the declaration.
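The append-only behavior can be illustrated with a small sample file. This is a sketch; the lease address and file path are examples, not a real dhcpd.leases file:

```shell
# Build a tiny sample leases file in which the same lease appears twice;
# the later entry (marked deleted) takes precedence.
cat > /tmp/sample.leases <<'EOF'
lease 192.168.140.10 {
  binding state active;
}
lease 192.168.140.10 {
  deleted;
}
EOF

# Count the entries recorded for this lease:
grep -c '^lease 192.168.140.10 ' /tmp/sample.leases   # prints 2
```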

3.6. Prerequisites for image-based provisioning

Post-boot configuration method

Images that use the finish post-boot configuration scripts require a managed DHCP server, such as Foreman’s integrated Smart Proxy or an external Smart Proxy. The host must be created with a subnet associated with a DHCP Smart Proxy, and the IP address of the host must be a valid IP address from the DHCP range.

It is possible to use an external DHCP service, but IP addresses must be entered manually. To enable post-boot configuration, the SSH credentials that correspond to the configuration in the image must be configured in Foreman.

Check the following items when troubleshooting a virtual machine booted from an image that depends on post-configuration scripts:

  • The host has a subnet assigned in Foreman server.

  • The subnet has a DHCP Smart Proxy assigned in Foreman server.

  • The host has a valid IP address assigned in Foreman server.

  • The IP address acquired by the virtual machine by using DHCP matches the address configured in Foreman server.

  • The virtual machine created from an image responds to SSH requests.

  • The virtual machine created from an image authorizes the user and password, over SSH, which is associated with the image being deployed.

  • Foreman server has access to the virtual machine via SSH keys. This is required for the virtual machine to receive post-configuration scripts from Foreman server.

Pre-boot initialization configuration method

Images that use the cloud-init scripts require a DHCP server to avoid having to include the IP address in the image. A managed DHCP Smart Proxy is preferred. The image must have the cloud-init service configured to start when the system boots and fetch a script or configuration data to use in completing the configuration.

Check the following items when troubleshooting a virtual machine booted from an image that depends on initialization scripts included in the image:

  • There is a DHCP server on the subnet.

  • The virtual machine has the cloud-init service installed and enabled.

3.7. Configuring network services

Some provisioning methods use Smart Proxy server services. For example, a network might require Smart Proxy server to act as a DHCP server. A network can also use PXE boot services to install the operating system on new hosts. This requires configuring Smart Proxy server to use the main PXE boot services: DHCP, DNS, and TFTP.

Use the foreman-installer command with the options to configure these services on Foreman server.

Procedure
  1. Enter the foreman-installer command to configure the required network services:

    # foreman-installer --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-gateway "192.168.140.1" \
    --foreman-proxy-dhcp-managed true \
    --foreman-proxy-dhcp-nameservers "192.168.140.2" \
    --foreman-proxy-dhcp-range "192.168.140.10 192.168.140.110" \
    --foreman-proxy-dhcp-server "192.168.140.2" \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-forwarders "8.8.8.8" \
    --foreman-proxy-dns-forwarders "8.8.4.4" \
    --foreman-proxy-dns-managed true \
    --foreman-proxy-dns-reverse "140.168.192.in-addr.arpa" \
    --foreman-proxy-dns-server "127.0.0.1" \
    --foreman-proxy-dns-zone "example.com" \
    --foreman-proxy-tftp true \
    --foreman-proxy-tftp-managed true
  3. Find the Smart Proxy server that you configured:

    # hammer proxy list
  3. Refresh features of Smart Proxy server to view the changes:

    # hammer proxy refresh-features --name "foreman.example.com"
  4. Verify the services configured on Smart Proxy server:

    # hammer proxy info --name "foreman.example.com"
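The --foreman-proxy-dns-reverse value in the example above is the in-addr.arpa zone for the 192.168.140.0/24 network: the network octets in reverse order. A minimal shell sketch of that derivation, using the network address from the example:

```shell
# Derive the in-addr.arpa reverse zone name for a /24 network
# by reversing the three network octets.
network="192.168.140.0"
reverse_zone=$(echo "$network" | awk -F. '{print $3"."$2"."$1".in-addr.arpa"}')
echo "$reverse_zone"   # 140.168.192.in-addr.arpa
```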

3.7.1. Multiple subnets or domains using installer

The foreman-installer options allow for only a single DHCP subnet or DNS domain. One way to define more than one subnet is to use a custom configuration file.

For every additional subnet or domain, create an entry in the /etc/foreman-installer/custom-hiera.yaml file:

dhcp::pools:
  isolated.lan:
    network: 192.168.99.0
    mask: 255.255.255.0
    gateway: 192.168.99.1
    range: 192.168.99.5 192.168.99.49

dns::zones:
  # creates @ SOA $::fqdn root.example.com.
  # creates $::fqdn A $::ipaddress
  example.com: {}

  # creates @ SOA test.example.net. hostmaster.example.com.
  # creates test.example.net A 192.0.2.100
  example.net:
    soa: test.example.net
    soaip: 192.0.2.100
    contact: hostmaster.example.com.

  # creates @ SOA $::fqdn root.example.org.
  # does NOT create an A record
  example.org:
    reverse: true

  # creates @ SOA $::fqdn hostmaster.example.com.
  2.0.192.in-addr.arpa:
    reverse: true
    contact: hostmaster.example.com.

Execute foreman-installer to apply the changes, and verify that /etc/dhcp/dhcpd.conf contains the appropriate entries. You must then define the subnets in the Foreman database.
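After the installer run, the pool from the custom-hiera example above should appear in /etc/dhcp/dhcpd.conf as an entry along these lines (a sketch; the exact layout that the installer generates can differ between versions):

```
subnet 192.168.99.0 netmask 255.255.255.0 {
  pool {
    range 192.168.99.5 192.168.99.49;
  }
  option routers 192.168.99.1;
}
```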

3.7.2. DHCP options for network configuration

--foreman-proxy-dhcp

Enables the DHCP service. You can set this option to true or false.

--foreman-proxy-dhcp-managed

Enables Foreman to manage the DHCP service. You can set this option to true or false.

--foreman-proxy-dhcp-gateway

The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network.

--foreman-proxy-dhcp-interface

Sets the interface for the DHCP service to listen for requests. Set this to eth1.

--foreman-proxy-dhcp-nameservers

Sets the addresses of the nameservers provided to clients through DHCP. Set this to the address for Foreman server on eth1.

--foreman-proxy-dhcp-range

A space-separated DHCP pool range for Discovered and Unmanaged services.

--foreman-proxy-dhcp-server

Sets the address of the DHCP server to manage.

Run foreman-installer --help to view more options related to DHCP and other Smart Proxy services.
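As an illustration of the space-separated --foreman-proxy-dhcp-range format, the range from the earlier example, 192.168.140.10 to 192.168.140.110, covers 101 leasable addresses. A small shell sketch that counts the addresses in such a range:

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip2int() {
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

range="192.168.140.10 192.168.140.110"   # space-separated, as the installer expects
read -r start end <<< "$range"

# Count of addresses in the pool, endpoints included.
echo $(( $(ip2int "$end") - $(ip2int "$start") + 1 ))   # 101
```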

3.7.3. DNS options for network configuration

--foreman-proxy-dns

Enables the DNS feature. You can set this option to true or false.

--foreman-proxy-dns-provider

Selects the provider to be used.

--foreman-proxy-dns-managed

Lets the installer manage ISC BIND. This is only relevant when using the nsupdate and nsupdate_gss providers. You can set this option to true or false.

--foreman-proxy-dns-forwarders

Sets the DNS forwarders. Only used when ISC BIND is managed by the installer. Set this to your DNS recursors.

--foreman-proxy-dns-interface

Sets the interface to listen for DNS requests. Only used when ISC BIND is managed by the installer. Set this to eth1.

--foreman-proxy-dns-reverse

The DNS reverse zone name. Only used when ISC BIND is managed by the installer.

--foreman-proxy-dns-server

Sets the address of the DNS server. Only used by the nsupdate, nsupdate_gss, and infoblox providers.

--foreman-proxy-dns-zone

Sets the DNS zone name. Only used when ISC BIND is managed by the installer.

Run foreman-installer --help to view more options related to DNS and other Smart Proxy services.

3.7.4. TFTP options for network configuration

--foreman-proxy-tftp

Enables TFTP service. You can set this option to true or false.

--foreman-proxy-tftp-managed

Enables Foreman to manage the TFTP service. You can set this option to true or false.

--foreman-proxy-tftp-servername

Sets the TFTP server to use. Ensure that you use the IP address of your Smart Proxy.

Run foreman-installer --help to view more options related to TFTP and other Smart Proxy services.

3.7.5. Using TFTP services through NAT

You can use Foreman TFTP services through NAT. To do this, on all NAT routers or firewalls, you must enable a TFTP service on UDP port 69 and enable the TFTP state tracking feature. For more information, see the documentation for your NAT device.

Using NAT on Linux with firewalld:
  1. Allow the TFTP service in the firewall configuration:

    # firewall-cmd --add-service=tftp
  2. Make the changes persistent:

    # firewall-cmd --runtime-to-permanent
Using NAT on Linux with iptables:
  1. Configure the firewall to allow TFTP service UDP on port 69:

    # iptables -A OUTPUT \
    -o eth0 \
    -p udp \
    -m state \
    --state ESTABLISHED \
    --sport 69 \
    -j ACCEPT
    # service iptables save
  2. Load the ip_conntrack_tftp kernel TFTP state module. In the /etc/sysconfig/iptables-config file, locate IPTABLES_MODULES and add ip_conntrack_tftp as follows:

    IPTABLES_MODULES="ip_conntrack_tftp"

3.8. Adding a domain to Foreman server

Foreman server defines domain names for each host on the network. Foreman server must have information about the domain and Smart Proxy server responsible for domain name assignment.

Checking for existing domains

Foreman server might already have the relevant domain created as part of the Foreman server installation. Switch the context to Any Organization and Any Location, and then check the domain list to see if the domain exists.

DNS server configuration considerations

During DNS record creation, Foreman performs DNS lookups to verify that the host name is not in active use. This check runs against one of the following DNS servers:

  • The system-wide resolver if Administer > Settings > Query local nameservers is set to true.

  • The nameservers that are defined in the subnet associated with the host.

  • The authoritative NS records, which are queried from the SOA record of the domain name associated with the host.

If you experience timeouts during DNS conflict resolution, check the following settings:

  • The subnet nameservers must be reachable from Foreman server.

  • The domain name must have a Start of Authority (SOA) record available from Foreman server.

  • The system resolver in the /etc/resolv.conf file must have a valid and working configuration.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Domains and click Create Domain.

  2. In the DNS Domain field, enter the full DNS domain name.

  3. In the Fullname field, enter the plain text name of the domain.

  4. Click the Parameters tab and configure any domain level parameters to apply to hosts attached to this domain. For example, user defined Boolean or string parameters to use in templates.

  5. Click Add Parameter and fill in the Name and Value fields.

  6. Click the Locations tab, and add the location where the domain resides.

  7. Click the Organizations tab, and add the organization that the domain belongs to.

  8. Click Submit to save the changes.

CLI procedure
  • Use the hammer domain create command to create a domain:

    # hammer domain create \
    --description "My_Domain" \
    --dns-id My_DNS_ID \
    --locations "My_Location" \
    --name "my-domain.tld" \
    --organizations "My_Organization"

In this example, set the --dns-id option to the ID of the Smart Proxy that provides DNS for the domain. For the integrated Smart Proxy on Foreman server, this ID is 1.

3.9. Adding a subnet to Foreman server

You must add information for each of your subnets to Foreman server because Foreman configures interfaces for new hosts. To configure interfaces, Foreman server must have all the information about the network that connects these interfaces.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Subnets, and in the Subnets window, click Create Subnet.

  2. In the Name field, enter a name for the subnet.

  3. In the Description field, enter a description for the subnet.

  4. In the Network address field, enter the network address for the subnet.

  5. In the Network prefix field, enter the network prefix for the subnet.

  6. In the Network mask field, enter the network mask for the subnet.

  7. In the Gateway address field, enter the external gateway for the subnet.

  8. In the Primary DNS server field, enter a primary DNS for the subnet.

  9. In the Secondary DNS server, enter a secondary DNS for the subnet.

  10. From the IPAM list, select the method that you want to use for IP address management (IPAM). For more information about IPAM, see Configuring networking.

  11. Enter the information for the IPAM method that you select.

  12. If you use the remote execution plugin, click the Remote Execution tab and select the Smart Proxy that controls the remote execution.

  13. Click the Domains tab and select the domains that apply to this subnet.

  14. Click the Smart Proxies tab and select the Smart Proxy that applies to each service in the subnet, including DHCP, TFTP, and reverse DNS services.

  15. Click the Parameters tab and configure any subnet level parameters to apply to hosts attached to this subnet. For example, user defined Boolean or string parameters to use in templates.

  16. Click the Locations tab and select the locations that use this Smart Proxy.

  17. Click the Organizations tab and select the organizations that use this Smart Proxy.

  18. Click Submit to save the subnet information.

CLI procedure
  • Create the subnet with the following command:

    # hammer subnet create \
    --boot-mode "DHCP" \
    --description "My_Description" \
    --dhcp-id My_DHCP_ID \
    --dns-id My_DNS_ID \
    --dns-primary "192.168.140.2" \
    --dns-secondary "8.8.8.8" \
    --domains "my-domain.tld" \
    --from "192.168.140.111" \
    --gateway "192.168.140.1" \
    --ipam "DHCP" \
    --locations "My_Location" \
    --mask "255.255.255.0" \
    --name "My_Network" \
    --network "192.168.140.0" \
    --organizations "My_Organization" \
    --tftp-id My_TFTP_ID \
    --to "192.168.140.250"
Note

In this example, set the --dhcp-id, --dns-id, and --tftp-id options to the IDs of the Smart Proxies that provide each service. For the integrated Smart Proxy on Foreman server, the ID is 1.
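The Network prefix and Network mask fields in the web UI procedure describe the same information in two notations. A shell sketch of the conversion, for reference; the /24 prefix corresponds to the 255.255.255.0 mask used in the hammer example:

```shell
# Convert a CIDR prefix length to a dotted-quad network mask.
prefix_to_mask() {
  local p=$1
  local mask=$(( 0xffffffff << (32 - p) & 0xffffffff ))
  printf '%d.%d.%d.%d\n' \
    $(( mask >> 24 & 255 )) $(( mask >> 16 & 255 )) \
    $(( mask >> 8 & 255 )) $(( mask & 255 ))
}

prefix_to_mask 24   # 255.255.255.0
```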

4. Using PXE to provision hosts

You can provision bare-metal instances with Foreman by using one of the following methods:

Unattended Provisioning

New hosts are identified by a MAC address. Foreman server provisions the host by using a PXE boot process.

Unattended Provisioning with Discovery

New hosts use PXE boot to load the Foreman Discovery service. This service identifies hardware information about the host and lists it as an available host to provision. For more information, see Discovering hosts on a network.

PXE-less Provisioning

New hosts are provisioned with a boot disk or PXE-less discovery image that Foreman server generates.

PXE-less Provisioning with Discovery

New hosts use an ISO boot disk that loads the Foreman Discovery service. This service identifies hardware information about the host and lists it as an available host to provision. For more information, see Discovery in PXE-less mode.

Note

Discovery workflows are only available when the Discovery plugin is installed. For more information, see Discovering hosts on a network.

BIOS and UEFI support

With Foreman, you can perform both BIOS and UEFI based PXE provisioning. Both BIOS and UEFI interfaces work as interpreters between the operating system and firmware of a computer, initializing hardware components and starting the operating system at boot time.

PXE loaders

In Foreman provisioning, the PXE loader option defines the DHCP filename option to use during provisioning.

  • For BIOS systems, select the PXELinux BIOS option to enable a provisioned host to download the pxelinux.0 file over TFTP.

  • For UEFI systems, select the Grub2 UEFI option to enable a TFTP client to download the grubx64.efi file, or select the Grub2 UEFI HTTP option to enable a UEFI HTTP client to download grubx64.efi by using the HTTP Boot feature.

Foreman supports UEFI Secure Boot. SecureBoot PXE loaders enable a client to download the shim.efi bootstrap boot loader that then loads the signed grubx64.efi. Use the Grub2 UEFI SecureBoot PXE loader for PXE-boot provisioning or Grub2 UEFI HTTPS SecureBoot for HTTP-boot provisioning.

By default, you can provision operating systems from the vendor of the operating system of your Foreman server on Secure Boot enabled hosts. To provision operating systems from other vendors on Secure Boot enabled hosts, you must provide the signed shim and GRUB2 binaries from the vendor of that operating system. For more information, see Configuring Smart Proxy to provision AlmaLinux on Secure Boot enabled hosts.

Other PXE loaders, such as PXELinux UEFI, Grub2 ELF, or iPXE Chain, require additional configuration. These workflows are not documented.

Template association with operating systems

For BIOS provisioning, you must associate a PXELinux template with the operating system. For UEFI provisioning, you must associate a PXEGrub2 template with the operating system. If you associate both PXELinux and PXEGrub2 templates, Foreman deploys configuration files for both on a TFTP server, so that you can switch between PXE loaders easily.

Bonded network interfaces

You can configure a bonded interface that Foreman will use during the installation process, for example, to download installation content. After provisioning completes, the provisioned system can also use the bonded interface.

Important

Foreman cannot PXE boot a bonded interface that requires configuration on a network switch as well as on your host.

After your host loads the kernel of an installer or the kernel of an operating system, bonding works as expected. Therefore, you can use a boot disk to work around PXE boot limitations when your bonded interface requires configuration on both a switch and your host.

4.1. Prerequisites for bare-metal provisioning

The requirements for bare-metal provisioning include:

  • A Smart Proxy server managing the network for bare-metal hosts. For unattended provisioning and discovery-based provisioning, Foreman server requires PXE server settings.

    For more information about networking requirements, see Configuring networking.

    For more information about the Discovery service, see Discovering hosts on a network.

  • A bare-metal host or a blank VM.

  • An installation medium for the operating systems that you want to use to provision hosts.

For information about the security token for unattended and PXE-less provisioning, see Configuring the security token validity duration.

4.2. Configuring the security token validity duration

When performing any kind of provisioning, as a security measure, Foreman automatically generates a unique token and adds this token to the OS installer recipe URL in the PXE configuration file (PXELinux, Grub2). By default, the token is valid for 360 minutes. When you provision a host, ensure that you reboot the host within this time frame. If the token expires, it is no longer valid and you receive a 404 error and the operating system installer download fails.

Procedure
  1. In the Foreman web UI, navigate to Administer > Settings, and click the Provisioning tab.

  2. Find the Token duration option, click the edit icon, and edit the duration, or enter 0 to disable token generation. If token generation is disabled, an attacker can spoof a client IP address and download the OS installer recipe from Foreman server, including the encrypted root password.

4.3. Creating hosts with unattended provisioning

Unattended provisioning is the simplest form of host provisioning. You enter the host details on Foreman server and boot your host. Foreman server automatically manages the PXE configuration, organizes networking services, and provides the operating system and configuration for the host.

This method of provisioning hosts uses minimal interaction during the process.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. Click the Interfaces tab, and on the interface of the host, click Edit.

  7. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • In the MAC address field, enter a MAC address of the provisioning interface of the host. This ensures the identification of the host during the PXE boot process.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  8. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  9. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  10. Optional: Click Resolve in Provisioning template to check that the new host can identify the correct provisioning templates to use.

    For more information about associating provisioning templates, see Provisioning templates.

  11. Click Submit to save the host details.

    For more information about network interfaces, see Configuring network interfaces in Managing hosts.

This creates the host entry and the relevant provisioning settings. It also creates the necessary directories and files for PXE booting the bare-metal host. If you start the physical host and set its boot mode to PXE, the host detects the DHCP service of Foreman server’s integrated Smart Proxy, receives the HTTP endpoint of the Kickstart tree, and installs the operating system.

CLI procedure
  1. Create the host with the hammer host create command:

    # hammer host create \
    --build true \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --location "My_Location" \
    --mac "My_MAC_Address" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization"
  2. Ensure the network interface options are set using the hammer host interface update command:

    # hammer host interface update \
    --host "My_Host_Name" \
    --managed true \
    --primary true \
    --provision true

4.4. Creating hosts with PXE-less provisioning

Some hardware does not provide a PXE boot interface. In Foreman, you can provision a host without PXE boot. This is also known as PXE-less provisioning and involves generating a boot ISO that hosts can use. Using this ISO, the host can connect to Foreman server, boot the installation media, and install the operating system.

Foreman also provides a PXE-less discovery service that operates without PXE-based services, such as DHCP and TFTP. For more information, see Discovery in PXE-less mode.

Boot ISO types

There are the following types of boot ISOs:

Host image

A boot ISO for a specific host. This image contains only the boot files that are necessary to access the installation media on Foreman server. The user defines the subnet data in Foreman, and the image is created with static networking. Because the image is based on iPXE boot firmware, only a limited number of network cards are supported.

Full host image

A boot ISO that contains the kernel and initial RAM disk image for the specific host. This image is useful if the host fails to chainload correctly. The provisioning template still downloads from Foreman server.

Generic image

A boot ISO that is not associated with a specific host. The ISO sends the host’s MAC address to Foreman server, which matches it against the host entry. The image does not store IP address details and requires access to a DHCP server on the network to bootstrap. This image is also available from the /disks/generic URL on your Foreman server, for example, https://foreman.example.com/disks/generic.

Subnet image

A boot ISO that is not associated with a specific host. The ISO sends the host’s MAC address to Smart Proxy server, which matches it against the host entry. The image does not store IP address details and requires access to a DHCP server on the network to bootstrap. This image is generic to all hosts with a provisioning NIC on the same subnet. Because the image is based on iPXE boot firmware, only a limited number of network cards are supported.

Note

The Full host image is based on SYSLINUX and GRUB and works with most network cards. When you use a Host image, Generic image, or Subnet image, see the supported hardware list on ipxe.org for the network card drivers that are expected to work with an iPXE-based boot disk.

The Host image and Full host image contain provisioning tokens, so the generated image has a limited lifespan. For more information about configuring security tokens, see Configuring the security token validity duration.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. Click the Interfaces tab, and on the interface of the host, click Edit.

  7. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • In the MAC address field, enter a MAC address of the provisioning interface of the host. This ensures the identification of the host during the PXE boot process.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  8. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  9. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  10. Click Resolve in Provisioning Templates to check that the new host can identify the correct provisioning templates to use.

    For more information about associating provisioning templates, see Provisioning templates.

  11. Click Submit to save the host details. This creates a host entry and the host details page appears.

  12. Download the boot disk from Foreman server.

    • For Host image, on the host details page, click the vertical ellipsis and select Host 'My_Host_Name' image.

    • For Full host image, on the host details page, click the vertical ellipsis and select Full host 'My_Host_Name' image.

    • For Generic image, navigate to Infrastructure > Subnets, click Boot disk and select Generic image.

    • For Subnet image, navigate to Infrastructure > Subnets, click the dropdown menu in the Actions column of the required subnet and select Subnet generic image.

  13. Write the ISO to a USB storage device using the dd utility or livecd-tools if required.

  14. When you start the host and boot from the ISO or the USB storage device, the host connects to Foreman server and starts installing operating system from its Kickstart tree.

CLI procedure
  1. Create the host using the hammer host create command.

    # hammer host create \
    --build true \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --location "My_Location" \
    --mac "My_MAC_Address" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization"
  2. Ensure that your network interface options are set using the hammer host interface update command.

    # hammer host interface update \
    --host "My_Host_Name" \
    --managed true \
    --primary true \
    --provision true
  3. Download the boot disk from Foreman server using the hammer bootdisk command:

    • For Host image:

      # hammer bootdisk host --host My_Host_Name
    • For Full host image:

      # hammer bootdisk host \
      --full true \
      --host My_Host_Name
    • For Generic image:

      # hammer bootdisk generic
    • For Subnet image:

      # hammer bootdisk subnet --subnet My_Subnet_Name

    This creates a boot ISO for your host to use.

  4. Write the ISO to a USB storage device using the dd utility or livecd-tools if required.

  5. When you start the physical host and boot from the ISO or the USB storage device, the host connects to Foreman server and starts installing operating system from its Kickstart tree.

4.5. Creating hosts with UEFI HTTP boot provisioning

You can provision hosts from Foreman by using UEFI HTTP boot. This is the only method that you can use to provision hosts in an IPv6 network.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Prerequisites
  • Ensure that you meet the requirements for HTTP booting. For more information, see HTTP Booting Requirements in Planning for Foreman.

Procedure
  1. Enable the foreman-proxy-http, foreman-proxy-httpboot, and foreman-proxy-tftp features.

    # foreman-installer \
    --foreman-proxy-http true \
    --foreman-proxy-httpboot true \
    --foreman-proxy-tftp true
  2. Ensure that the Smart Proxy has the TFTP and HTTPBoot features recognized. In the Foreman web UI, navigate to Infrastructure > Smart Proxies and click your Smart Proxy to see the list of recognized features. Click Refresh Features if any of the features are missing.

  3. Ensure that the Smart Proxy is associated with the provisioning subnet. In the Foreman web UI, navigate to Infrastructure > Subnets > Edit Subnet > Smart Proxies and select the Smart Proxy for both the TFTP and HTTPBoot options.

  4. Click OK to save.

  5. In the Foreman web UI, navigate to Hosts > Create Host.

  6. In the Name field, enter a name for the host.

  7. Optional: Click the Organization tab and change the organization context to match your requirement.

  8. Optional: Click the Location tab and change the location context to match your requirement.

  9. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  10. Click the Interfaces tab, and on the interface of the host, click Edit.

  11. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • In the MAC address field, enter a MAC address of the provisioning interface of the host. This ensures the identification of the host during the PXE boot process.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  12. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  13. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  14. From the PXE Loader list, select Grub2 UEFI HTTP.

  15. Optional: Click Resolve in Provisioning template to check that the new host can identify the correct provisioning templates to use.

    For more information about associating provisioning templates, see Creating provisioning templates.

  16. Click Submit to save the host details.

    For more information about network interfaces, see Configuring network interfaces in Managing hosts.

  17. Set the host to boot in UEFI mode from network.

  18. Start the host.

  19. From the boot menu, select Kickstart default PXEGrub2.

This creates the host entry and the relevant provisioning settings. It also creates the necessary directories and files for UEFI booting the bare-metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives the HTTP endpoint of the Kickstart tree from the Smart Proxy, and installs the operating system.

CLI procedure
  1. Enable the foreman-proxy-http, foreman-proxy-httpboot, and foreman-proxy-tftp features:

    # foreman-installer \
    --foreman-proxy-http true \
    --foreman-proxy-httpboot true \
    --foreman-proxy-tftp true
  2. Create the host with the hammer host create command.

    # hammer host create \
    --build true \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --location "My_Location" \
    --mac "My_MAC_Address" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --pxe-loader "Grub2 UEFI HTTP"
  3. Ensure the network interface options are set using the hammer host interface update command:

    # hammer host interface update \
    --host "My_Host_Name" \
    --managed true \
    --primary true \
    --provision true
  4. Set the host to boot in UEFI mode from network.

  5. Start the host.

  6. From the boot menu, select Kickstart default PXEGrub2.

This creates the host entry and the relevant provisioning settings. It also creates the necessary directories and files for UEFI booting the bare-metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives the HTTP endpoint of the Kickstart tree from the Smart Proxy, and installs the operating system.

4.6. Configuring Smart Proxy to provision AlmaLinux on Secure Boot enabled hosts

Secure Boot follows a chain of trust from the start of the host to the loading of Linux kernel modules. The first shim that is loaded determines which distribution can be booted or loaded by using a kexec system call until the next reboot.

To provision AlmaLinux on Secure Boot enabled hosts with the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders, you have to provide signed shim and GRUB2 binaries provided by the vendor of your operating system.

Important

You have to perform the following configuration steps on each TFTP Smart Proxy for a subnet to provision Secure Boot enabled hosts on that subnet.

The following example works for AlmaLinux on x86_64 architecture.

Prerequisites
  • Ensure that the cpio package is installed on your Smart Proxy.

Procedure
  1. Set the path for the shim and GRUB2 binaries for the operating system of your host:

    # BOOTLOADER_PATH="/var/lib/tftpboot/bootloader-universe/pxegrub2/My_Operating_System_In_Lowercase/default/x86_64"

    If you require specific versions of the shim and GRUB2 binaries for the operating system version of your host, replace default with the major and minor version of the operating system separated by a dot. If the operating system has no minor version, replace default with the major version alone.

    The Foreman community recommends not using version-specific shim and GRUB2 binaries unless it is absolutely necessary.
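For illustration, a version-specific path might look like the following, assuming a hypothetical AlmaLinux 9.4 host; the version segment is only an example:

```shell
# Version-specific variant of the path from step 1 (hypothetical version 9.4):
# the "default" path segment is replaced with "<major>.<minor>".
BOOTLOADER_PATH="/var/lib/tftpboot/bootloader-universe/pxegrub2/almalinux/9.4/x86_64"
echo "$BOOTLOADER_PATH"
# prints /var/lib/tftpboot/bootloader-universe/pxegrub2/almalinux/9.4/x86_64
```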

  2. Create the directory to store the shim and GRUB2 binaries for the operating system of your host:

    # install -o foreman-proxy -g foreman-proxy -d $BOOTLOADER_PATH
  3. Download the shim and GRUB2 packages for the operating system of your host:

    # wget -O /tmp/grub2-efi-x64.rpm https://server.example.com/grub2-efi-x64.rpm
    # wget -O /tmp/shim-x64.rpm https://server.example.com/shim-x64.rpm

    You can download both the grub2-efi-x64 and shim-x64 packages from https://repo.almalinux.org/almalinux/9/BaseOS/x86_64/os/Packages/.

  4. Extract the shim and GRUB2 binaries:

    # rpm2cpio /tmp/grub2-efi-x64.rpm | cpio -idv --directory /tmp
    # rpm2cpio /tmp/shim-x64.rpm | cpio -idv --directory /tmp
  5. Make the shim and GRUB2 binaries available for host provisioning:

    # cp /tmp/boot/efi/EFI/almalinux/grubx64.efi $BOOTLOADER_PATH/grubx64.efi
    # cp /tmp/boot/efi/EFI/almalinux/shimx64.efi $BOOTLOADER_PATH/shimx64.efi
    # ln -sr $BOOTLOADER_PATH/grubx64.efi $BOOTLOADER_PATH/boot.efi
    # ln -sr $BOOTLOADER_PATH/shimx64.efi $BOOTLOADER_PATH/boot-sb.efi
    # chmod 644 $BOOTLOADER_PATH/grubx64.efi
    # chmod 644 $BOOTLOADER_PATH/shimx64.efi
Verification
  • Verify the contents of your boot loader directory:

    # tree /var/lib/tftpboot/bootloader-universe
    /var/lib/tftpboot/bootloader-universe
    └── pxegrub2
        └── My_Operating_System_In_Lowercase
            └── default
                └── x86_64
                    ├── boot.efi -> grubx64.efi
                    ├── boot-sb.efi -> shimx64.efi
                    ├── grubx64.efi
                    └── shimx64.efi
Next steps
  • You can now provision Secure Boot enabled AlmaLinux hosts by using the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders.

4.7. Configuring Smart Proxy to provision Debian on Secure Boot enabled hosts

Secure Boot follows a chain of trust from the start of the host to the loading of Linux kernel modules. The first shim that is loaded determines which distribution can be booted or loaded by using a kexec system call until the next reboot.

To provision Debian on Secure Boot enabled hosts with the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders, you must supply the signed shim and GRUB2 binaries provided by the vendor of your operating system.

Important

You have to perform the following configuration steps on each TFTP Smart Proxy for a subnet to provision Secure Boot enabled hosts on that subnet.

The following example works for Debian on x86_64 architecture.

Prerequisites
  • Ensure that the binutils and xz-utils packages are installed on your Smart Proxy.

Procedure
  1. Set the path for the shim and GRUB2 binaries for the operating system of your host:

    # BOOTLOADER_PATH="/var/lib/tftpboot/bootloader-universe/pxegrub2/My_Operating_System_In_Lowercase/default/x86_64"

    If you require specific versions of the shim and GRUB2 binaries for the operating system version of your host, replace default with the major and minor version of the operating system separated by a dot. If the operating system has no minor version, replace default with the major version alone.

    The Foreman community recommends not using version-specific shim and GRUB2 binaries unless it is absolutely necessary.

  2. Create the directory to store the shim and GRUB2 binaries for the operating system of your host:

    # install -o foreman-proxy -g foreman-proxy -d $BOOTLOADER_PATH
  3. Download the shim and GRUB2 packages for the operating system of your host:

    # wget -O /tmp/grub-efi-amd64-signed.deb https://server.example.com/grub-efi-amd64-signed.deb
    # wget -O /tmp/shim-signed.deb https://server.example.com/shim-signed.deb

    You can download the grub-efi-amd64-signed package from http://security.debian.org/debian-security/pool/updates/main/g/grub-efi-amd64-signed/. You can download the shim-signed package from http://ftp.de.debian.org/debian/pool/main/s/shim-signed/.

  4. Extract the shim and GRUB2 binaries:

    # cd /tmp && ar x /tmp/grub-efi-amd64-signed.deb && tar -xf data.tar.xz && cd -
    # cd /tmp && ar x /tmp/shim-signed.deb && tar -xf data.tar.xz && cd -
  5. Make the shim and GRUB2 binaries available for host provisioning:

    # cp /tmp/usr/lib/grub/x86_64-efi-signed/grubnetx64.efi.signed $BOOTLOADER_PATH/grubx64.efi
    # cp /tmp/usr/lib/shim/shimx64.efi.signed $BOOTLOADER_PATH/shimx64.efi
    # ln -sr $BOOTLOADER_PATH/grubx64.efi $BOOTLOADER_PATH/boot.efi
    # ln -sr $BOOTLOADER_PATH/shimx64.efi $BOOTLOADER_PATH/boot-sb.efi
    # chmod 644 $BOOTLOADER_PATH/grubx64.efi
    # chmod 644 $BOOTLOADER_PATH/shimx64.efi
  6. Link the grub.cfg file from the grub2 directory of the TFTP server to the legacy grub directory:

    # ln --relative --symbolic /var/lib/tftpboot/grub2/grub.cfg /var/lib/tftpboot/grub/grub.cfg
Verification
  • Verify the contents of your boot loader directory:

    # tree /var/lib/tftpboot/bootloader-universe
    /var/lib/tftpboot/bootloader-universe
    └── pxegrub2
        └── My_Operating_System_In_Lowercase
            └── default
                └── x86_64
                    ├── boot.efi -> grubx64.efi
                    ├── boot-sb.efi -> shimx64.efi
                    ├── grubx64.efi
                    └── shimx64.efi
Next steps
  • You can now provision Secure Boot enabled Debian hosts by using the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders.

4.8. Configuring Smart Proxy to provision Rocky Linux on Secure Boot enabled hosts

Secure Boot follows a chain of trust from the start of the host to the loading of Linux kernel modules. The first shim that is loaded determines which distribution can be booted or loaded by using a kexec system call until the next reboot.

To provision Rocky Linux on Secure Boot enabled hosts with the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders, you must supply the signed shim and GRUB2 binaries provided by the vendor of your operating system.

Important

You have to perform the following configuration steps on each TFTP Smart Proxy for a subnet to provision Secure Boot enabled hosts on that subnet.

The following example works for Rocky Linux on x86_64 architecture.

Prerequisites
  • Ensure that the cpio package is installed on your Smart Proxy.

Procedure
  1. Set the path for the shim and GRUB2 binaries for the operating system of your host:

    # BOOTLOADER_PATH="/var/lib/tftpboot/bootloader-universe/pxegrub2/My_Operating_System_In_Lowercase/default/x86_64"

    If you require specific versions of the shim and GRUB2 binaries for the operating system version of your host, replace default with the major and minor version of the operating system separated by a dot. If the operating system has no minor version, replace default with the major version alone.

    The Foreman community recommends not using version-specific shim and GRUB2 binaries unless it is absolutely necessary.

  2. Create the directory to store the shim and GRUB2 binaries for the operating system of your host:

    # install -o foreman-proxy -g foreman-proxy -d $BOOTLOADER_PATH
  3. Download the shim and GRUB2 packages for the operating system of your host:

    # wget -O /tmp/grub2-efi-x64.rpm https://server.example.com/grub2-efi-x64.rpm
    # wget -O /tmp/shim-x64.rpm https://server.example.com/shim-x64.rpm

    You can download the grub2-efi-x64 package from http://dl.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/g/. You can download the shim-x64 package from http://dl.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/s/.

  4. Extract the shim and GRUB2 binaries:

    # rpm2cpio /tmp/grub2-efi-x64.rpm | cpio -idv --directory /tmp
    # rpm2cpio /tmp/shim-x64.rpm | cpio -idv --directory /tmp
  5. Make the shim and GRUB2 binaries available for host provisioning:

    # cp /tmp/boot/efi/EFI/rocky/grubx64.efi $BOOTLOADER_PATH/grubx64.efi
    # cp /tmp/boot/efi/EFI/rocky/shimx64.efi $BOOTLOADER_PATH/shimx64.efi
    # ln -sr $BOOTLOADER_PATH/grubx64.efi $BOOTLOADER_PATH/boot.efi
    # ln -sr $BOOTLOADER_PATH/shimx64.efi $BOOTLOADER_PATH/boot-sb.efi
    # chmod 644 $BOOTLOADER_PATH/grubx64.efi
    # chmod 644 $BOOTLOADER_PATH/shimx64.efi
Verification
  • Verify the contents of your boot loader directory:

    # tree /var/lib/tftpboot/bootloader-universe
    /var/lib/tftpboot/bootloader-universe
    └── pxegrub2
        └── My_Operating_System_In_Lowercase
            └── default
                └── x86_64
                    ├── boot.efi -> grubx64.efi
                    ├── boot-sb.efi -> shimx64.efi
                    ├── grubx64.efi
                    └── shimx64.efi
Next steps
  • You can now provision Secure Boot enabled Rocky Linux hosts by using the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders.

4.9. Configuring Smart Proxy to provision Ubuntu on Secure Boot enabled hosts

Secure Boot follows a chain of trust from the start of the host to the loading of Linux kernel modules. The first shim that is loaded determines which distribution can be booted or loaded by using a kexec system call until the next reboot.

To provision Ubuntu on Secure Boot enabled hosts with the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders, you must supply the signed shim and GRUB2 binaries provided by the vendor of your operating system.

Important

You have to perform the following configuration steps on each TFTP Smart Proxy for a subnet to provision Secure Boot enabled hosts on that subnet.

The following example works for Ubuntu on x86_64 architecture.

Prerequisites
  • Ensure that the binutils, xz-utils, and zstd packages are installed on your Smart Proxy.

Procedure
  1. Set the path for the shim and GRUB2 binaries for the operating system of your host:

    # BOOTLOADER_PATH="/var/lib/tftpboot/bootloader-universe/pxegrub2/My_Operating_System_In_Lowercase/default/x86_64"

    If you require specific versions of the shim and GRUB2 binaries for the operating system version of your host, replace default with the major and minor version of the operating system separated by a dot. If the operating system has no minor version, replace default with the major version alone.

    The Foreman community recommends not using version-specific shim and GRUB2 binaries unless it is absolutely necessary.

  2. Create the directory to store the shim and GRUB2 binaries for the operating system of your host:

    # install -o foreman-proxy -g foreman-proxy -d $BOOTLOADER_PATH
  3. Download the shim and GRUB2 packages for the operating system of your host:

    # wget -O /tmp/grub-efi-amd64-signed.deb https://server.example.com/grub-efi-amd64-signed.deb
    # wget -O /tmp/shim-signed.deb https://server.example.com/shim-signed.deb

    You can download the grub-efi-amd64-signed package from http://security.ubuntu.com/ubuntu/pool/main/g/grub2-signed/. You can download the shim-signed package from http://de.archive.ubuntu.com/ubuntu/pool/main/s/shim-signed/.

  4. Extract the shim and GRUB2 binaries:

    # cd /tmp && ar x /tmp/grub-efi-amd64-signed.deb && tar --use-compress-program=unzstd -xf data.tar.zst && cd -
    # cd /tmp && ar x /tmp/shim-signed.deb && tar -xf data.tar.xz && cd -
  5. Make the shim and GRUB2 binaries available for host provisioning:

    # cp /tmp/usr/lib/grub/x86_64-efi-signed/grubnetx64.efi.signed $BOOTLOADER_PATH/grubx64.efi
    # cp /tmp/usr/lib/shim/shimx64.efi.signed.latest $BOOTLOADER_PATH/shimx64.efi
    # ln -sr $BOOTLOADER_PATH/grubx64.efi $BOOTLOADER_PATH/boot.efi
    # ln -sr $BOOTLOADER_PATH/shimx64.efi $BOOTLOADER_PATH/boot-sb.efi
    # chmod 644 $BOOTLOADER_PATH/grubx64.efi
    # chmod 644 $BOOTLOADER_PATH/shimx64.efi
  6. Link the grub.cfg file from the grub2 directory of the TFTP server to the legacy grub directory:

    # ln --relative --symbolic /var/lib/tftpboot/grub2/grub.cfg /var/lib/tftpboot/grub/grub.cfg
Verification
  • Verify the contents of your boot loader directory:

    # tree /var/lib/tftpboot/bootloader-universe
    /var/lib/tftpboot/bootloader-universe
    └── pxegrub2
        └── My_Operating_System_In_Lowercase
            └── default
                └── x86_64
                    ├── boot.efi -> grubx64.efi
                    ├── boot-sb.efi -> shimx64.efi
                    ├── grubx64.efi
                    └── shimx64.efi
Next steps
  • You can now provision Secure Boot enabled Ubuntu hosts by using the Grub2 UEFI SecureBoot and Grub2 UEFI HTTPS SecureBoot PXE loaders.

4.10. Deploying SSH keys during provisioning

Use this procedure to deploy SSH keys added to a user during provisioning. For information on adding SSH keys to a user, see Managing SSH Keys for a User in Administering Foreman.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Create a provisioning template, or clone and edit an existing template. For more information, see Creating provisioning templates.

  3. In the template, click the Template tab.

  4. In the Template editor field, add the create_users snippet to the %post section:

    <%= snippet('create_users') %>
  5. Select the Default checkbox.

  6. Click the Association tab.

  7. From the Applicable Operating Systems list, select an operating system.

  8. Click Submit to save the provisioning template.

  9. Create a host that is associated with the provisioning template or rebuild a host using the operating system associated with the modified template. For more information, see Creating a Host in Managing hosts.

    The SSH keys of the Owned by user are added automatically when the create_users snippet is executed during the provisioning process. You can set Owned by to an individual user or a user group. If you set Owned by to a user group, the SSH keys of all users in the user group are added automatically.
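The net effect of the snippet on the provisioned host can be sketched locally as follows; the directory and key material below are hypothetical stand-ins, not the literal output of create_users:

```shell
# Local sketch of what the create_users snippet effectively does during %post:
# install the owner's public SSH keys into the provisioned user's
# ~/.ssh/authorized_keys with strict permissions. All names and the key
# material here are hypothetical.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
echo "ssh-ed25519 AAAAC3Nexamplekeyonly admin@workstation" >> "$home/.ssh/authorized_keys"
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
grep -c '^ssh-ed25519' "$home/.ssh/authorized_keys"
# prints 1
```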

5. Using iPXE to reduce provisioning times

iPXE is an open-source network-boot firmware. It provides a full PXE implementation enhanced with additional features, such as booting from an HTTP server. For more information about iPXE, see iPXE website.

You can use iPXE if the following restrictions prevent you from using PXE:

  • A network with unmanaged DHCP servers.

  • A PXE service that is unreachable because of, for example, a firewall restriction.

  • A TFTP UDP-based protocol that is unreliable because of, for example, a low-bandwidth network.

5.1. Prerequisites for using iPXE

You can use iPXE to boot virtual machines in the following cases:

  • Your virtual machines run on a hypervisor that uses iPXE as primary firmware.

  • Your virtual machines are in BIOS mode. In this case, you can configure PXELinux to chainboot iPXE and boot by using the HTTP protocol.

For booting virtual machines in UEFI mode by using HTTP, you can follow Creating hosts with UEFI HTTP boot provisioning instead.

BIOS and UEFI support

Only BIOS systems are known to work reliably. For configuring iPXE with some EFI hosts, read a separate tutorial.

Host requirements
  • The MAC address of the provisioning interface matches the host configuration.

  • The provisioning interface of the host has a valid DHCP reservation.

  • The NIC is capable of PXE booting. See supported hardware on ipxe.org for a list of hardware drivers expected to work with an iPXE-based boot disk.

  • The NIC is compatible with iPXE.

5.2. Configuring iPXE environment

Configure an iPXE environment on all Smart Proxies that you want to use for iPXE provisioning.

Prerequisites
Procedure
  1. Enable the TFTP and HTTPboot services on your Smart Proxy:

    # foreman-installer \
    --foreman-proxy-httpboot true \
    --foreman-proxy-tftp true
  2. Install the ipxe package on your Smart Proxy:

    # apt install ipxe
  3. Copy iPXE firmware to the TFTP directory.

    • Copy the iPXE firmware with the Linux kernel header:

      # cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/
    • Copy the UNDI iPXE firmware:

      # cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0
  4. Set the HTTP URL.

    • If you want to use Foreman server for booting, run the following command on Foreman server:

      # foreman-installer \
      --foreman-proxy-dhcp-ipxefilename "http://foreman.example.com/unattended/iPXE?bootstrap=1"
    • If you want to use Smart Proxy server for booting, run the following command on Smart Proxy server:

      # foreman-installer --foreman-proxy-dhcp-ipxe-bootstrap true
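For illustration, the bootstrap that iPXE downloads from such an HTTP endpoint is a small iPXE script. A minimal hand-written equivalent might look like the following; this is a hypothetical sketch, not the template that Foreman ships:

```text
#!ipxe
# Bring up networking via DHCP, then chain to the host-specific iPXE
# template, passing the MAC address so Foreman can identify the host.
dhcp
chain http://foreman.example.com/unattended/iPXE?mac=${net0/mac}
```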

5.3. Booting virtual machines

Some virtualization hypervisors use iPXE as primary firmware for PXE booting. If you use such a hypervisor, you can boot virtual machines without TFTP and PXELinux.

Booting a virtual machine has the following workflow:

  1. Virtual machine starts.

  2. iPXE retrieves the network credentials, including an HTTP URL, by using DHCP.

  3. iPXE loads the iPXE bootstrap template from Smart Proxy.

  4. iPXE loads the iPXE template with MAC as a URL parameter from Smart Proxy.

  5. iPXE loads the kernel and initial RAM disk of the installer.

Prerequisites
  • Your hypervisor must support iPXE.

  • You have configured your iPXE environment. For more information, see Configuring iPXE environment.

Note

You can use the original templates shipped in Foreman as described below. If you require modification to an original template, clone the template, edit the clone, and associate the clone instead of the original template. For more information, see Cloning provisioning templates.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Search for the required template:

    • The AutoYaST default iPXE template for SUSE Linux Enterprise Server hosts.

    • The Kickstart default iPXE template for Enterprise Linux hosts.

    • The Preseed default iPXE template for Debian/Ubuntu hosts.

  3. Click the name of the template.

  4. Click the Association tab and select the operating systems that your host uses.

  5. Click the Locations tab and add the location where the host resides.

  6. Click the Organizations tab and add the organization that the host belongs to.

  7. Click Submit to save the changes.

  8. In the Foreman web UI, navigate to Hosts > Operating systems and select the operating system of your host.

  9. Click the Templates tab.

  10. From the iPXE template list, select the required template:

    • The AutoYaST default iPXE template for SUSE Linux Enterprise Server hosts.

    • The Kickstart default iPXE template for Enterprise Linux hosts.

    • The Preseed default iPXE template for Debian/Ubuntu hosts.

  11. Click Submit to save the changes.

  12. In the Foreman web UI, navigate to Hosts > All Hosts.

  13. In the Hosts page, select the host that you want to use.

  14. Select the Operating System tab.

  15. Set PXE Loader to iPXE Embedded.

  16. Select the Templates tab.

  17. In Provisioning Templates, click Resolve and verify that the iPXE template resolves to the required template.

  18. Click Submit to save host settings.

5.4. Chainbooting iPXE from PXELinux

You can set up iPXE to use a built-in driver for network communication (ipxe.lkrn) or Universal Network Device Interface (UNDI) (undionly-ipxe.0). You can choose to load either file depending on the networking hardware capabilities and iPXE driver availability.

UNDI is a minimalistic UDP/IP stack that implements a TFTP client but does not support other protocols, such as HTTP. To use HTTP with iPXE, use the iPXE build with built-in drivers (ipxe.lkrn).

Chainbooting iPXE has the following workflow:

  1. Host powers on.

  2. PXE driver retrieves the network credentials by using DHCP.

  3. PXE driver retrieves the PXELinux firmware pxelinux.0 by using TFTP.

  4. PXELinux searches for the configuration file on the TFTP server.

  5. PXELinux chainloads iPXE ipxe.lkrn or undionly-ipxe.0.

  6. iPXE retrieves the network credentials, including an HTTP URL, by using DHCP again.

  7. iPXE chainloads the iPXE template from your Templates Smart Proxy.

  8. iPXE loads the kernel and initial RAM disk of the installer.
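As a note on step 4 of this workflow: PXELinux searches pxelinux.cfg/ on the TFTP server first for a file named after the client MAC address (prefixed with 01- and dash-separated), then for the client IPv4 address in uppercase hexadecimal, progressively shortened by one digit, and finally for default. The hexadecimal form for a sample address can be computed as follows; the IP address is only an example:

```shell
# PXELinux config file name for a sample client IP of 192.168.140.20:
# it would try C0A88C14, then C0A88C1, C0A88C, and so on, before falling
# back to "default".
printf '%02X%02X%02X%02X\n' 192 168 140 20
# prints C0A88C14
```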

Prerequisites
Note

You can use the original templates shipped in Foreman as described below. If you require modification to an original template, clone the template, edit the clone, and associate the clone instead of the original template. For more information, see Cloning provisioning templates.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Search for the required PXELinux template:

    • PXELinux chain iPXE to use ipxe.lkrn

    • PXELinux chain iPXE UNDI to use undionly-ipxe.0

  3. Click the name of the template you want to use.

  4. Click the Association tab and select the operating systems that your host uses.

  5. Click the Locations tab and add the location where the host resides.

  6. Click the Organizations tab and add the organization that the host belongs to.

  7. Click Submit to save the changes.

  8. On the Provisioning Templates page, search for the required template:

    • The AutoYaST default iPXE template for SUSE Linux Enterprise Server hosts.

    • The Kickstart default iPXE template for Enterprise Linux hosts.

    • The Preseed default iPXE template for Debian/Ubuntu hosts.

  9. Click the name of the template.

  10. Click the Association tab and associate the template with the operating system that your host uses.

  11. Click the Locations tab and add the location where the host resides.

  12. Click the Organizations tab and add the organization that the host belongs to.

  13. Click Submit to save the changes.

  14. In the Foreman web UI, navigate to Hosts > Operating systems and select the operating system of your host.

  15. Click the Templates tab.

  16. From the PXELinux template list, select the template you want to use.

  17. From the iPXE template list, select the required template:

    • The AutoYaST default iPXE template for SUSE Linux Enterprise Server hosts.

    • The Kickstart default iPXE template for Enterprise Linux hosts.

    • The Preseed default iPXE template for Debian/Ubuntu hosts.

  18. Click Submit to save the changes.

  19. In the Foreman web UI, navigate to Configure > Host Groups, and select the host group you want to configure.

  20. Select the Operating System tab.

  21. Select the Architecture and Operating system.

  22. Set the PXE Loader:

    • Select PXELinux BIOS to chainboot iPXE (ipxe.lkrn) from PXELinux.

    • Select iPXE Chain BIOS to load undionly-ipxe.0 directly.

6. Discovering hosts on a network

Foreman can detect hosts on a network that are not in your Foreman inventory. These hosts boot the Discovery image that performs hardware detection and relays this information back to Foreman server. This method creates a list of ready-to-provision hosts in Foreman server without needing to enter the MAC address of each host.

6.1. Prerequisites for using Discovery

  • Ensure that the DHCP range of all subnets that you plan to use for Discovery does not overlap with the DHCP lease pool configured for the managed DHCP service. The DHCP range is set in the Foreman web UI, whereas the lease pool range is set by using the foreman-installer command.

    For example, in the 10.1.0.0/16 network range, you can allocate the following IP address blocks:

    • 10.1.0.0 to 10.1.127.255 for leases.

    • 10.1.128.0 to 10.1.255.254 for reservations.

  • Ensure that the host or virtual machine being discovered has at least 1200 MB of memory. Insufficient memory can cause random kernel panics because the Discovery image is extracted into memory.
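Assuming the managed DHCP service is configured through foreman-installer, the lease block from the example above can be set as follows; treat this as a sketch to adapt to your network, not a required command:

```shell
# Hypothetical sketch: restrict the installer-managed DHCP lease pool to the
# lower half of 10.1.0.0/16, leaving the upper half for Foreman reservations.
# Run this on the host that provides the managed DHCP service.
foreman-installer \
--foreman-proxy-dhcp-range "10.1.0.0 10.1.127.255"
```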

6.2. Installing the Discovery service

Enable the Discovery service on Foreman server. Additionally, you can enable the Discovery service on any Smart Proxy servers that provide the TFTP service.

The Discovery service requires a Discovery image, which is provided with Foreman. The Discovery image uses a minimal operating system that is booted on hosts to acquire initial hardware information and check in with Foreman.

Foreman provides multiple versions of the Foreman Discovery image (FDI):

  • FDI version 4.0 and newer are based on CentOS Stream 8.

  • FDI versions older than 4.0 are based on CentOS Stream 7.

Procedure
  1. Install the Discovery plugin on Foreman server:

    # foreman-installer \
    --enable-foreman-plugin-discovery \
    --enable-foreman-proxy-plugin-discovery \
    --foreman-proxy-plugin-discovery-install-images=true

    This command downloads the latest Discovery ISO. If you require another version, add the following argument:

    --foreman-proxy-plugin-discovery-source-url=https://downloads.theforeman.org/discovery/releases/x.y/
  2. If you want to use Smart Proxy server, install the Discovery plugin on Smart Proxy server:

    # foreman-installer \
    --enable-foreman-proxy-plugin-discovery \
    --foreman-proxy-plugin-discovery-install-images=true

    This command downloads the latest Discovery image. If you require another version, add the following argument:

    --foreman-proxy-plugin-discovery-source-url=https://downloads.theforeman.org/discovery/releases/x.y/
  3. Configure the Discovery Smart Proxy for the subnet with discoverable hosts:

    1. In the Foreman web UI, navigate to Infrastructure > Subnets.

    2. Select a subnet.

    3. On the Proxies tab, select the Discovery Proxy that you want to use.

    Perform this for each subnet that you want to use.

6.3. Discovery in PXE mode

Foreman provides a PXE-based Discovery service that uses DHCP and TFTP services. You discover unknown nodes by booting them into the Discovery kernel and initial RAM disk images from Foreman server or Smart Proxy server. When a discovered node is scheduled for installation, it reboots and continues with the configured PXE-based host provisioning.

Discovery workflow in PXE mode
Figure 1. Discovery workflow in PXE mode

6.3.1. Setting Discovery as the default PXE boot option

Set the Discovery service as the default service that boots for hosts that are not present in your current Foreman inventory.

When you start an unknown host in PXE mode, Foreman server or Smart Proxy server provides a boot menu with a default boot option. The boot menu has two basic options: local and discovery. The default setting of the global PXE templates is to select local to boot the host from the local hard drive. Change the setting to select discovery to boot from the Discovery image.

Prerequisites
  • Your Foreman account has the view_settings, edit_settings, and view_provisioning_templates permissions.

Procedure
  1. In the Foreman web UI, navigate to Administer > Settings.

  2. On the Provisioning tab, enter discovery in the Default PXE global template entry field.

  3. Navigate to Hosts > Templates > Provisioning Templates.

  4. Click Build PXE Default.

    The boot menus are built as the following files:

    • /var/lib/tftpboot/pxelinux.cfg/default

    • /var/lib/tftpboot/grub2/grub.cfg

    Foreman propagates the default boot menus to all TFTP Smart Proxies.

6.3.2. Performing Discovery in PXE mode

Discovery in PXE mode uses the Discovery PXE boot images and runs unattended.

Prerequisites
Procedure
  • Power on or reboot your host. After a few minutes, the Discovery image completes booting and the host displays a status screen.

Verification
  • Foreman web UI displays a notification about a new discovered host.

Next steps
  • In the Foreman web UI, navigate to Hosts > Discovered Hosts and view the newly discovered host. For more information about provisioning discovered hosts, see Creating hosts from discovered hosts.

6.3.3. Customizing the Discovery PXE boot

Foreman builds PXE boot menus from the following global provisioning templates:

  • PXELinux global default for BIOS provisioning.

  • PXEGrub global default and PXEGrub2 global default for UEFI provisioning.

The PXE boot menus are available on Foreman server and Smart Proxies that have TFTP enabled.

The Discovery menu item uses a Linux kernel for the operating system and passes kernel parameters to configure the Discovery service. You can customize the passed kernel parameters by changing the following snippets:

  • pxelinux_discovery: This snippet is included in the PXELinux global default template.

    This snippet renders the Discovery boot menu option. The KERNEL and APPEND options boot the Discovery kernel and initial RAM disk. The APPEND option contains kernel parameters.

  • pxegrub_discovery: This snippet is included in the PXEGrub global default template. However, Discovery is not implemented for GRUB 1.x.

  • pxegrub2_discovery: This snippet is included in the PXEGrub2 global default template.

    This snippet renders the Discovery GRUB2 menu entry. The common variable contains kernel parameters.

For information about the kernel parameters, see Kernel parameters for Discovery customization.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Clone and edit the snippet you want to customize. For more information, see Cloning provisioning templates.

  3. Clone and edit the template that contains the original snippet. Include your custom snippet instead of the original snippet. For more information, see Cloning provisioning templates.

  4. Navigate to Administer > Settings.

  5. Click the Provisioning tab.

  6. In the appropriate Global default PXE template setting, select your custom template.

  7. Navigate to Hosts > Templates > Provisioning Templates.

  8. Click Build PXE Default. This refreshes the default PXE boot menus on Foreman server and any TFTP Smart Proxies.

6.4. Discovery in PXE-less mode

Foreman provides a PXE-less Discovery service for environments without DHCP and TFTP services. You discover unknown nodes by using the Discovery ISO from Foreman server. When a discovered node is scheduled for installation, the kexec command reloads a Linux kernel with an operating system installer without rebooting the node.

Known issues
  • The console might freeze during the process.

  • On some hardware, you might experience graphical hardware problems.

Discovery workflow in PXE-less mode
Figure 2. Discovery workflow in PXE-less mode

6.4.1. Performing Discovery in PXE-less mode

Discovery in PXE-less mode uses the Discovery ISO and requires you to attend to the process.

Prerequisites
Procedure
  1. Copy the Discovery ISO to a CD, DVD, or a USB flash drive. For example, to copy to a USB drive at /dev/sdb:

    # dd bs=4M \
    if=/usr/share/foreman-discovery-image/foreman-discovery-image-version.iso \
    of=/dev/sdb
  2. Insert the Discovery boot media into a host, start the host, and boot from the media.

  3. The Discovery image displays options for either Manual network setup or Discovery with DHCP:

    • Manual network setup:

      1. On the Primary interface screen, select the primary network interface that connects to Foreman server or Smart Proxy server. Optionally, enter a VLAN ID. Hit Select to continue.

      2. On the Network configuration screen, enter the Address, Gateway, and DNS. Hit Next to continue.

    • Discovery with DHCP:

      1. On the Primary interface screen, select the primary network interface that connects to Foreman server or Smart Proxy server. Optionally, enter a VLAN ID. Hit Select to continue.

      2. The Discovery image attempts to automatically configure the network interface by using a DHCP server, such as one that a Smart Proxy server provides.

  4. On the Credentials screen, enter the following options:

    • In the Server URL field, enter the URL of Foreman server or Discovery Smart Proxy server. If you refer to a Smart Proxy server, include the Smart Proxy port number.

    • In the Connection type field, select the connection type: Server for Foreman server or Foreman Proxy for Smart Proxy server.

    Press Next to continue.

  5. Optional: On the Custom facts screen, enter custom facts for the Facter tool to relay back to Foreman server. Enter a name and value for each custom fact you need.

  6. Press Confirm to proceed.

Verification
  • The Foreman web UI displays a notification about a newly discovered host.

Next steps
  • In the Foreman web UI, navigate to Hosts > Discovered Hosts and view the newly discovered host. For more information about provisioning discovered hosts, see Creating hosts from discovered hosts.

6.4.2. Customizing the Discovery ISO

You can create a customized Discovery ISO to automate the image configuration process after booting. The Discovery image uses a Linux kernel for the operating system, which passes kernel parameters to configure the Discovery service.

Use the discovery-remaster tool to remaster the image and include custom kernel parameters.

Procedure
  1. Run the discovery-remaster tool. Enter the kernel parameters as a single string. For example:

    # discovery-remaster ~/iso/foreman-discovery-image-version.iso \
    "fdi.pxip=192.168.140.20/24 \
    fdi.pxgw=192.168.140.1 \
    fdi.pxdns=192.168.140.2 \
    proxy.url=https://foreman.example.com:8443 \
    proxy.type=proxy \
    fdi.pxfactname1=My_Custom_Hostname \
    fdi.pxfactvalue1=My_Host \
    fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"

    For more information about kernel parameters, see Kernel parameters for Discovery customization.

  2. Copy the new ISO to a CD, DVD, or a USB flash drive. For example, to copy to a USB flash drive at /dev/sdb:

    # dd bs=4M \
    if=/usr/share/foreman-discovery-image/foreman-discovery-image-version.iso \
    of=/dev/sdb
Next steps
  • Insert the Discovery boot medium into a bare-metal host, start the host, and boot from the medium.

    For more information about provisioning discovered hosts, see Creating hosts from discovered hosts.

6.5. Automatic contexts for discovered hosts

Foreman server assigns an organization and location to discovered hosts automatically according to the following sequence of rules:

  1. If a discovered host uses a subnet defined in Foreman, the host uses the first organization and location associated with the subnet.

  2. If the default location and organization are configured in global settings, discovered hosts are placed in that organization and location. To configure these settings, navigate to Administer > Settings > Discovery and select values for the Discovery organization and Discovery location settings. Ensure that the subnet of the discovered host also belongs to the selected organization and location; otherwise, Foreman refuses to set it for security reasons.

  3. If none of the previous conditions is met, Foreman assigns the first organization and location ordered by name.

You can change the organization or location manually by using the bulk actions on the Discovered Hosts page. Select the discovered hosts to modify and, from the Select Action menu, select Assign Organization or Assign Location.
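If you prefer the CLI, the default Discovery context from rule 2 can also be set with Hammer. This is a sketch; the setting names discovery_organization and discovery_location are assumed to match the web UI labels, so verify them first:

```shell
# Assumed setting names; confirm with: hammer settings list --search 'name ~ discovery'
hammer settings set --name discovery_organization --value "My_Organization"
hammer settings set --name discovery_location --value "My_Location"
```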

6.6. Creating hosts from discovered hosts

Provisioning discovered hosts follows a provisioning process that is similar to PXE provisioning. The main difference is that instead of manually entering the host’s MAC address, you can select the host to provision from the list of discovered hosts.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Prerequisites
  • Configure a domain and subnet on Foreman. For more information about networking requirements, see Configuring networking.

  • You have one or more discovered hosts in your Foreman inventory.

  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • You have associated a Discovery kexec-kind template and provisioning-kind template with the operating system. For more information, see Associating templates with operating systems.

For information about the security tokens, see Configuring the security token validity duration.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Discovered hosts.

  2. Select the host you want to provision and click Provision to the right of the list.

  3. Select one of the following options:

    • To provision a host from a host group, select a host group, organization, and location, and then click Create Host.

    • To provision a host with further customization, click Customize Host and enter the additional details you want to specify for the new host.

  4. Verify that the fields are populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • Foreman server automatically assigns an IP address for the new host.

    • Foreman server automatically populates the MAC address from the Discovery results.

  5. Ensure that Foreman server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  6. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  7. In Provisioning templates, click Resolve to check if the new host can identify the correct provisioning templates.

  8. Click Submit to save the host details.

When the host provisioning is complete, the discovered host moves to Hosts > All Hosts.

CLI procedure
  1. Identify the discovered host to provision:

    # hammer discovery list
  2. Select the host and provision it by using a host group. Set a new host name with the --new-name option:

    # hammer discovery provision \
    --build true \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --new-name "My_New_Host_Name" \
    --organization "My_Organization"

    This removes the host from the discovered host listing and creates a host entry with the provisioning settings. The Discovery image automatically reboots the host to PXE or initiates kernel execution. The host detects the DHCP service and starts installing the operating system. The rest of the process is identical to the normal PXE workflow described in Creating hosts with unattended provisioning.

6.7. Creating Discovery rules

As a method of automating the provisioning process for discovered hosts, Foreman provides a feature to create Discovery rules. These rules define how discovered hosts automatically provision themselves, based on the assigned host group. For example, you can automatically provision hosts with a high CPU count as hypervisors. Likewise, you can provision hosts with large hard disks as storage servers.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

NIC considerations

Auto provisioning does not currently allow configuring network interface cards (NICs). All systems are provisioned with the NIC configuration that was detected during discovery. However, you can set the NIC configuration in the OS installer recipe scriptlet, by using a script, or by using configuration management at a later stage.

Procedure
  1. In the Foreman web UI, navigate to Configure > Discovery rules, and select Create Rule.

  2. In the Name field, enter a name for the rule.

  3. In the Search field, enter the rules to determine whether to provision a host. This field provides suggestions for values you enter and allows operators for multiple rules. For example: cpu_count > 8.

  4. From the Host Group list, select the host group to use as a template for this host.

  5. In the Hostname field, enter the pattern to determine host names for multiple hosts. This uses the same ERB syntax that provisioning templates use. The host name can use the @host attribute for host-specific values and the rand macro for a random number or the sequence_hostgroup_param_next macro for incrementing the value. For more information about provisioning templates, see Provisioning templates and the API documentation.

    • myhost-<%= sequence_hostgroup_param_next("EL7/MyHostgroup", 10, "discovery_host") %>

    • myhost-<%= rand(99999) %>

    • abc-<%= @host.facts['bios_vendor'] %>-<%= rand(99999) %>

    • xyz-<%= @host.hostgroup.name %>

    • srv-<%= @host.discovery_rule.name %>

    • server-<%= @host.ip.gsub('.','-') + '-' + @host.hostgroup.subnet.name %>

      When creating host name patterns, ensure that the resulting host names are unique, do not start with numbers, and do not contain underscores or dots. A good approach is to use unique information provided by Facter, such as the MAC address, BIOS, or serial ID.
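As a quick sanity check, the naming rules in this note can be expressed as a small shell helper. The is_valid_hostname function is hypothetical and not part of Foreman; it only illustrates the constraints:

```shell
# Hypothetical helper: reject names that are empty, start with a digit,
# or contain underscores or dots
is_valid_hostname() {
  case "$1" in
    ''|[0-9]*|*_*|*.*) return 1 ;;
    *) return 0 ;;
  esac
}

is_valid_hostname "myhost-12345" && echo "valid: myhost-12345"
is_valid_hostname "9host" || echo "rejected: 9host"
```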

  6. In the Hosts limit field, enter the maximum number of hosts that you can provision with the rule. Enter 0 for unlimited.

  7. In the Priority field, enter a number to set the precedence the rule has over other rules. Rules with lower values have a higher priority.

  8. From the Enabled list, select whether you want to enable the rule.

  9. To set a different provisioning context for the rule, click the Organizations and Locations tabs and select the contexts you want to use.

  10. Click Submit to save your rule.

  11. In the Foreman web UI, navigate to Hosts > Discovered Host and select one of the following two options:

    • From the Discovered hosts list on the right, select Auto-Provision to automatically provision a single host.

    • On the upper right of the window, click Auto-Provision All to automatically provision all hosts.

CLI procedure
  1. Create the rule by using Hammer:

    # hammer discovery-rule create \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --hostname "hypervisor-<%= rand(99999) %>" \
    --hosts-limit 5 \
    --name "My_Hypervisor" \
    --priority 5 \
    --search "cpu_count > 8"
  2. Automatically provision a host with the hammer discovery auto-provision command:

    # hammer discovery auto-provision --name "macabcdef123456"

6.8. Extending the Discovery image

You can extend the Foreman Discovery image with custom facts, software, or device drivers. You can also provide a compressed archive file containing extra code for the image to use.

Procedure
  1. Create the following directory structure:

    .
    ├── autostart.d
    │   └── 01_zip.sh
    ├── bin
    │   └── ntpdate
    ├── facts
    │   └── test.rb
    └── lib
        ├── libcrypto.so.1.0.0
        └── ruby
            └── test.rb
    • The autostart.d directory contains scripts that are executed in POSIX order by the Discovery kernel when it starts but before the host is registered to Foreman.

    • The bin directory is added to the $PATH variable; you can place binary files in this directory and use them in the autostart scripts.

    • The facts directory is added to the FACTERLIB variable so that custom facts can be configured and sent to Foreman.

    • The lib directory is added to the LD_LIBRARY_PATH variable and lib/ruby is added to the RUBYLIB variable, so that binary files in /bin can be executed correctly.
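The directory structure above can be assembled with a few commands. This is a sketch; the autostart script body is illustrative, and the fact and library files come from your own code:

```shell
# Create the extension skeleton described above
mkdir -p my_extension/autostart.d my_extension/bin my_extension/facts my_extension/lib/ruby

# Illustrative autostart script; it runs before the host registers with Foreman
cat > my_extension/autostart.d/01_zip.sh <<'EOF'
#!/bin/sh
echo "custom extension loaded"
EOF
chmod +x my_extension/autostart.d/01_zip.sh
```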

  2. After creating the directory structure, create a .zip file archive with the following command:

    # zip -r my_extension.zip .
  3. Inform the Discovery kernel of the extensions it must use. Place your zip files on your TFTP server with the Discovery image and customize the Discovery PXE boot with the fdi.zips parameter where the paths are relative to the TFTP root.

    For example, if you have two archives at $TFTP/zip1.zip and $TFTP/boot/zip2.zip, use the following syntax:

    fdi.zips=zip1.zip,boot/zip2.zip

    For more information, see Customizing the Discovery PXE boot.

You can append new directives and options to the existing environment variables (PATH, LD_LIBRARY_PATH, RUBYLIB, and FACTERLIB). The .zip file contents are extracted to the /opt/extension directory on the image, so you can also reference paths under that directory explicitly in your scripts.

You can create multiple .zip files, but be aware that they are extracted to the same location on the Discovery image. Files from later .zip files overwrite files from earlier ones if they have the same file name.

6.9. Building a custom Discovery image

You can build a custom Discovery ISO or rebuild an ISO if you change configuration files.

The Discovery image uses a minimal operating system that is booted on hosts to acquire initial hardware information and check in with Foreman. Discovered hosts keep running the Discovery operating system until they are rebooted into an operating system installer, which then initiates the provisioning process.

Important

Do not use your production Foreman server or Smart Proxy server to perform this procedure. Use either a dedicated environment or copy the repositories and Kickstart file to a separate server.

The following procedure demonstrates the building process on Enterprise Linux 8 and uses CentOS Stream 8 repositories.

Prerequisites
  • Ensure that hardware virtualization is available on your server.

  • Install the following packages:

    # dnf install git-core lorax anaconda pykickstart wget qemu-kvm
  • Clone the foreman-discovery-image repository:

    $ git clone https://github.com/theforeman/foreman-discovery-image.git
    $ cd foreman-discovery-image
Procedure
  1. Replace the contents of the 00-repos-centos8.ks file with the following:

    url --mirrorlist=http://mirrorlist.centos.org/?release=8&arch=$basearch&repo=baseos
    repo --name="AppStream" --mirrorlist=http://mirrorlist.centos.org/?release=8&arch=$basearch&repo=appstream
    repo --name="foreman-el8" --baseurl=http://yum.theforeman.org/releases/nightly/el8/$basearch/
    repo --name="foreman-plugins-el8" --baseurl=http://yum.theforeman.org/plugins/nightly/el8/$basearch/
    module --name=ruby --stream=2.7
    module --name=postgresql --stream=12
    module --name=foreman --stream=el8
  2. Prepare the Kickstart file:

    $ ./build-livecd fdi-centos8.ks
  3. Build the ISO image:

    $ sudo ./build-livecd-root custom ./result "nomodeset nokaslr"

    nomodeset disables mode setting. nokaslr disables address space layout randomization.

    For example, you can build a fully automated Discovery image with a static network configuration:

    $ sudo ./build-livecd-root custom ./result \
    "nomodeset nokaslr \
    fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 \
    fdi.pxdns=192.168.140.2 proxy.url=https://foreman.example.com:8443 \
    proxy.type=foreman fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"

    For more information about configuration options, see Kernel parameters for Discovery customization.

  4. Verify that your ./result/fdi-custom-XXXXXXX.iso file is created:

    $ ls -h ./result/*.iso
  5. Convert the .iso file to an .iso hybrid file for local booting:

    # isohybrid --partok fdi.iso

    If you have grub2 packages installed, you can also use the following command to install a grub2 boot loader:

    # isohybrid --partok --uefi fdi.iso
  6. Add md5 checksum to the .iso file so that it can pass installation media validation tests in Foreman:

    # implantisomd5 fdi.iso
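To confirm that the checksum was implanted correctly, you can read it back with checkisomd5, which ships in the same isomd5sum package as implantisomd5:

```shell
# Verify the implanted md5 checksum before using the image
checkisomd5 --verbose fdi.iso
```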

6.10. Kernel parameters for Discovery customization

Discovery uses a Linux kernel for the operating system and passes kernel parameters to configure the Discovery service. These kernel parameters include the following entries:

fdi.cachefacts

Number of fact uploads without caching. By default, Foreman does not cache any uploaded facts.

fdi.countdown

Number of seconds to wait until the text-user interface is refreshed after the initial discovery attempt. This value defaults to 45 seconds. Increase this value if the status page reports the IP address as N/A.

fdi.dhcp_timeout

NetworkManager DHCP timeout. The default value is 300 seconds.

fdi.dns_nameserver

Nameserver to use for DNS SRV record.

fdi.dns_ndots

ndots option to use for DNS SRV record.

fdi.dns_search

Search domain to use for DNS SRV record.

fdi.initnet

By default, the image initializes all network interfaces (value all). When this setting is set to bootif, only the network interface it was network-booted from will be initialized.

fdi.ipv4.method

By default, the NetworkManager IPv4 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv4 stack. This option works only in DHCP mode.

fdi.ipv6.method

By default, the NetworkManager IPv6 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv6 stack. This option works only in DHCP mode.

fdi.ipwait

Duration in seconds to wait for an IP address to become available when the HTTP proxy starts with an SSL certificate. By default, Foreman waits for 120 seconds.

fdi.nmwait

The nmcli --wait option for NetworkManager. By default, nmcli waits for 120 seconds.

fdi.proxy_cert_days

Number of days the self-signed HTTPS cert is valid for. By default, the certificate is valid for 999 days.

fdi.pxauto

To set automatic or semi-automatic mode. If set to 0, the image uses semi-automatic mode, which allows you to confirm your choices through a set of dialog options. If set to 1, the image uses automatic mode and proceeds without any confirmation.

fdi.pxfactname1, fdi.pxfactname2 …​ fdi.pxfactnameN

Use to specify custom fact names.

fdi.pxfactvalue1, fdi.pxfactvalue2 …​ fdi.pxfactvalueN

The values for each custom fact. Each value corresponds to a fact name. For example, fdi.pxfactvalue1 sets the value for the fact named with fdi.pxfactname1.

fdi.pxip, fdi.pxgw, fdi.pxdns

Manually configure the IP address (fdi.pxip), the gateway (fdi.pxgw), and the DNS server (fdi.pxdns) for the primary network interface. If you omit these parameters, the image uses DHCP to configure the network interface. You can add multiple DNS entries in a comma-separated list, for example fdi.pxdns=192.168.1.1,192.168.200.1.

fdi.pxmac

The MAC address of the primary interface in the format of AA:BB:CC:DD:EE:FF. This is the interface you aim to use for communicating with Smart Proxy server. In automated mode, the first NIC (using network identifiers in alphabetical order) with a link is used. In semi-automated mode, a screen appears and requests you to select the correct interface.

fdi.rootpw

By default, the root account is locked. Use this option to set a root password. You can enter both clear and encrypted passwords.

fdi.ssh

By default, the SSH service is disabled. Set this to 1 or true to enable SSH access.

fdi.uploadsleep

Duration in seconds between facter runs. By default, facter runs every 30 seconds.

fdi.vlan.primary

VLAN tagging ID to set for the primary interface. If you want to use tagged VLAN provisioning and you want the Discovery service to send a discovery request, add the following parameter to the Discovery snippet:

fdi.vlan.primary=My_VLAN_ID
fdi.zips

Filenames with extensions to be downloaded and started during boot. For more information, see Extending the Discovery image.

fdi.zipserver

TFTP server to use to download extensions from. For more information, see Extending the Discovery image.

net.ifnames and biosdevname

Because network interface names are not expected to always be the same between major versions of Red Hat Enterprise Linux, hosts can be created with incorrect network configurations. You can disable the new naming scheme with a kernel command line parameter:

  • For Dell servers, use the biosdevname=1 parameter.

  • For other hardware or virtual machines, use the net.ifnames=1 parameter.

proxy.type

The proxy type. By default, this parameter is set to foreman, where communication goes directly to Foreman server. Set this parameter to proxy if you point to Smart Proxy in proxy.url.

proxy.url

The URL of the server providing the Discovery service. By default, this parameter contains the foreman_server_url macro as its argument. This macro resolves to the full URL of Foreman server. There is no macro for a Smart Proxy URL. You have to set a Smart Proxy explicitly. For example:

proxy.url=https://smartproxy.example.com:8443 proxy.type=proxy

You can use an IP address or FQDN in this parameter. Add an SSL port number if you point to a Smart Proxy server.

6.11. Troubleshooting Discovery

If a machine is not listed in the Foreman web UI in Hosts > Discovered Hosts, it means that Discovery has failed. Inspect the following configuration areas to help isolate the problem:

Inspecting prerequisites
Inspecting problems on Foreman
  • Ensure that you have set Discovery as the default boot option and built the PXE boot configuration files. For more information, see Setting Discovery as the default PXE boot option.

  • Verify that these configuration files are present on your TFTP Smart Proxy and have discovery set as the default boot option:

    • /var/lib/tftpboot/pxelinux.cfg/default

    • /var/lib/tftpboot/grub2/grub.cfg

  • Verify the values of the proxy.url and proxy.type options in the PXE Discovery snippet you are using. The default snippets are named pxelinux_discovery, pxegrub_discovery, or pxegrub2_discovery.
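    For orientation, a rendered pxelinux_discovery entry typically resembles the following sketch. The exact kernel and initrd file names depend on your Discovery image version, so treat this as illustrative only:

    ```
    DEFAULT discovery

    LABEL discovery
      KERNEL boot/fdi-image/vmlinuz0
      APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force proxy.url=https://foreman.example.com proxy.type=foreman
      IPAPPEND 2
    ```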

Inspecting problems with networking
  • Ensure adequate network connectivity between hosts, Smart Proxy server, and Foreman server.

  • Ensure that the DHCP server provides IP addresses to the booted Discovery image correctly.

  • Ensure that DNS is working correctly for the discovered hosts or use an IP address in the proxy.url option in the PXE Discovery snippet included in the PXE template you are using.

Inspecting problems on the host
  • If the host boots into the Discovery image but Discovery is not successful, enable the root account and SSH access on the Discovery image. You can enable SSH and set the root password by using the following Discovery kernel options:

    fdi.ssh=1 fdi.rootpw=My_Password
  • Using TTY2 or higher, log in to a Discovery-booted host to review system logs. For example, these logs are useful for troubleshooting:

    discover-host

    Initial facts upload

    foreman-discovery

    Facts refresh, reboot remote commands

    nm-prepare

    Boot script which pre-configures NetworkManager

    NetworkManager

    Networking information

  • For gathering important system facts, use the discovery-debug command on the Discovery-booted host. It prints out system logs, network configuration, list of facts, and other information on the standard output. You can redirect this output to a file and copy it with the scp command for further investigation.
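For example, to capture the output and copy it off the host for analysis (the workstation host name is a placeholder, and SSH access must be enabled as described above):

```shell
# On the Discovery-booted host: collect logs, network configuration, and facts
discovery-debug > /tmp/discovery-debug.out

# Copy the capture to another machine for investigation
scp /tmp/discovery-debug.out user@workstation.example.com:/tmp/
```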

Additional resources

7. Using a Lorax Composer image for provisioning

In Foreman, you can enable integration with Cockpit to perform actions and monitor your hosts. Using Cockpit, you can access Lorax Composer and build images, upload them to an HTTP server, and use them to provision hosts. When you configure Foreman for image provisioning, the Anaconda installer partitions disks, downloads and mounts the image, and copies files over to the host. The preferred image type is TAR.

Ensure that your blueprint to build the TAR image includes a kernel package.

Prerequisites
  • An existing TAR image created using Lorax Composer.

Procedure
  1. Copy the TAR image to an existing HTTP server which installed hosts can reach.

  2. In the Foreman web UI, navigate to Configure > Host Groups, and select the host group that you want to use.

  3. Click the Parameters tab, and then click Add Parameter.

  4. In the Name field, enter kickstart_liveimg.

  5. From the Type list, select string.

  6. In the Value field, enter the absolute path, or a relative path in the format custom/product/repository/image_name, that points to the exact location where you store the image.

  7. Click Submit to save your changes.
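For reference, the kickstart_liveimg parameter ends up as an Anaconda liveimg directive in the rendered Kickstart, along the lines of the following sketch (the URL is an assumed example):

```
liveimg --url=http://server.example.com/custom/product/repository/image_name.tar
```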

You can use this image for bare-metal provisioning and provisioning using a compute resource. For more information about bare-metal provisioning, see Using PXE to provision hosts. For more information about provisioning with different compute resources, see the relevant chapter for the compute resource that you want to use.

8. Provisioning virtual machines on KVM (libvirt)

Kernel-based Virtual Machines (KVMs) use an open source virtualization daemon and API called libvirt running on Red Hat Enterprise Linux. Foreman can connect to the libvirt API on a KVM server, provision hosts on the hypervisor, and control certain virtualization functions.

Foreman can manage only virtual machines that were created through Foreman. Virtual machines with storage pool types other than directory are unsupported.

You can use KVM provisioning to create hosts over a network connection or from an existing image.

Prerequisites
  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • A Smart Proxy server managing a network on the KVM server. Ensure no other DHCP services run on this network to avoid conflicts with Smart Proxy server. For more information about network service configuration for Smart Proxy servers, see Configuring Networking in Provisioning hosts.

  • A server running KVM virtualization tools (libvirt daemon).

  • A virtual network running on the libvirt server. Only NAT and isolated virtual networks can be managed through Foreman.

  • An existing virtual machine image if you want to use image-based provisioning. Ensure that this image exists in a storage pool on the KVM host. The default storage pool is usually located in /var/lib/libvirt/images. Only directory pool storage types can be managed through Foreman.

  • Optional: The examples in these procedures use the root user for KVM. If you want to use a non-root user on the KVM server, you must add the user to the libvirt group on the KVM server:

    # usermod -a -G libvirt non_root_user
Additional resources

8.1. Configuring Foreman server for KVM connections

Before adding the KVM connection, create an SSH key pair for the foreman user to ensure a secure connection between Foreman server and KVM.

Procedure
  1. On Foreman server, switch to the foreman user:

    # su foreman -s /bin/bash
  2. Generate the key pair:

    $ ssh-keygen
  3. Copy the public key to the KVM server:

    $ ssh-copy-id root@kvm.example.com
  4. Exit the bash shell for the foreman user:

    $ exit
  5. Install the libvirt-client package:

    # apt install libvirt-clients
  6. Use the following command to test the connection to the KVM server:

    # su foreman -s /bin/bash -c 'virsh -c qemu+ssh://root@kvm.example.com/system list'

8.2. Adding a KVM connection to Foreman server

Use this procedure to add KVM as a compute resource in Foreman. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the new compute resource.

  3. From the Provider list, select Libvirt.

  4. In the Description field, enter a description for the compute resource.

  5. In the URL field, enter the connection URL to the KVM server. For example:

     qemu+ssh://root@kvm.example.com/system
  6. From the Display type list, select either VNC or Spice.

  7. Optional: To secure console access for new hosts with a randomly generated password, select the Set a randomly generated password on the display connection checkbox. You can retrieve the password for the VNC console to access the guest virtual machine console from the output of the following command executed on the KVM server:

    # virsh edit your_VM_name
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'>

    The password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager.

  8. Click Test Connection to ensure that Foreman server connects to the KVM server without fault.

  9. Verify that the Locations and Organizations tabs are automatically set to your current context. If you want, add additional contexts to these tabs.

  10. Click Submit to save the KVM connection.

CLI procedure
  • To create a compute resource, enter the hammer compute-resource create command:

    # hammer compute-resource create --name "My_KVM_Server" \
    --provider "Libvirt" --description "KVM server at kvm.example.com" \
    --url "qemu+ssh://root@kvm.example.com/system" --locations "New York" \
    --organizations "My_Organization"

8.3. Adding KVM images to Foreman server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Foreman server.

Note that you can manage only directory pool storage types through Foreman.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click the name of the KVM connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the base operating system of the image.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. In the Image path field, enter the full path that points to the image on the KVM server. For example:

     /var/lib/libvirt/images/TestImage.qcow2
  9. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the KVM server.

    # hammer compute-resource image create \
    --name "KVM Image" \
    --compute-resource "My_KVM_Server" \
    --operatingsystem "RedHat version" \
    --architecture "x86_64" \
    --username root \
    --user-data false \
    --uuid "/var/lib/libvirt/images/KVMimage.qcow2"

8.4. Adding KVM details to a compute profile

Use this procedure to add KVM hardware settings to a compute profile. When you create a host on KVM using this compute profile, these settings are automatically populated.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the KVM compute resource.

  4. In the CPUs field, enter the number of CPUs to allocate to the new host.

  5. In the Memory field, enter the amount of memory to allocate to the new host.

  6. From the Image list, select the image to use if performing image-based provisioning.

  7. From the Network Interfaces list, select the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to a Smart Proxy-managed network.

  8. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host.

  9. Click Submit to save the settings to the compute profile.

CLI procedure
  1. To create a compute profile, enter the following command:

    # hammer compute-profile create --name "Libvirt CP"
  2. To add the values for the compute profile, enter the following command:

    # hammer compute-profile values create --compute-profile "Libvirt CP" \
    --compute-resource "My_KVM_Server" \
    --interface "compute_type=network,compute_model=virtio,compute_network=examplenetwork" \
    --volume "pool_name=default,capacity=20G,format_type=qcow2" \
    --compute-attributes "cpus=1,memory=1073741824"

8.5. Creating hosts on KVM

In Foreman, you can use KVM provisioning to create hosts over a network connection or from an existing image:

  • If you want to create a host over a network connection, the new host must be able to access either Foreman server’s integrated Smart Proxy or an external Smart Proxy server on a KVM virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the KVM server to create and start a virtual machine. If the virtual machine detects the defined Smart Proxy server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

  • If you want to create a host with an existing image, the new host entry triggers the KVM server to create the virtual machine using a pre-existing image as a basis for the new volume.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

DHCP conflicts

For network-based provisioning, if you use a virtual network on the KVM server for provisioning, select a network that does not provide DHCP assignments. A network that provides its own DHCP assignments conflicts with the DHCP service on Foreman server when booting new hosts.
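As a sketch of a suitable libvirt network, the following definition omits the `<dhcp>` element, so libvirt hands out no leases and the Smart Proxy can manage DHCP instead. The network name and addresses are illustrative:

```xml
<network>
  <name>examplenetwork</name>
  <forward mode='nat'/>
  <!-- no <dhcp> element inside <ip>, so libvirt provides no DHCP leases -->
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>
```

You can apply a definition like this with `virsh net-define` followed by `virsh net-start examplenetwork`.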

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the KVM connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. The KVM-specific fields are populated with settings from your compute profile. Modify these settings if required.

  8. Click the Interfaces tab, and on the interface of the host, click Edit.

  9. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. KVM assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  10. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  11. Click the Operating System tab, and confirm that all fields automatically contain values.

  12. Select the Provisioning Method that you want to use:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

  13. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  14. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  15. Click Submit to save the host entry.

CLI procedure
  • To use network-based provisioning, create the host with the hammer host create command and include --provision-method build. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --build true \
    --compute-attributes="cpus=1,memory=1073741824" \
    --compute-resource "My_KVM_Server" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --provision-method "build" \
    --root-password "My_Password" \
    --volume="pool_name=default,capacity=20G,format_type=qcow2"
  • To use image-based provisioning, create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --compute-attributes="cpus=1,memory=1073741824" \
    --compute-resource "My_KVM_Server" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_KVM_Image" \
    --interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --provision-method "image" \
    --volume="pool_name=default,capacity=20G,format_type=qcow2"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

9. Provisioning virtual machines on oVirt

oVirt is an enterprise-grade server and desktop virtualization platform. In Foreman, you can manage virtualization functions through oVirt’s REST API. This includes creating virtual machines and controlling their power states.

You can use oVirt provisioning to create virtual machines over a network connection or from an existing image.

You can use cloud-init to configure the virtual machines that you provision. Using cloud-init avoids the need for special network configuration, such as managed DHCP and TFTP, to finish the installation of the virtual machine. This method also does not require Foreman to connect to the provisioned virtual machine over SSH to run the finish script.

Prerequisites
  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • A Smart Proxy server managing a logical network on the oVirt environment. Ensure no other DHCP services run on this network to avoid conflicts with Smart Proxy server. For more information, see Configuring Networking in Provisioning hosts.

  • An existing template, other than the blank template, if you want to use image-based provisioning. For more information about creating templates for virtual machines, see Templates in the oVirt documentation.

  • An administration-like user on oVirt for communication with Foreman server. Do not use the admin@internal user for this communication. Instead, create a new oVirt user with the following permissions:

    • System > Configure System > Login Permissions

    • Network > Configure vNIC Profile > Create

    • Network > Configure vNIC Profile > Edit Properties

    • Network > Configure vNIC Profile > Delete

    • Network > Configure vNIC Profile > Assign vNIC Profile to VM

    • Network > Configure vNIC Profile > Assign vNIC Profile to Template

    • Template > Provisioning Operations > Import/Export

    • VM > Provisioning Operations > Create

    • VM > Provisioning Operations > Delete

    • VM > Provisioning Operations > Import/Export

    • VM > Provisioning Operations > Edit Storage

    • Disk > Provisioning Operations > Create

    • Disk > Disk Profile > Attach Disk Profile

      For more information about how to create a user and add permissions in oVirt, see Users and Roles in the oVirt documentation.


9.1. Installing the oVirt plugin

Install the oVirt plugin to attach an oVirt compute resource provider to Foreman. This allows you to manage and deploy hosts to oVirt.

Procedure
  • Install the oVirt compute resource provider on your Foreman server:

    # foreman-installer --enable-foreman-compute-ovirt
Verification
  • Optional: In the Foreman web UI, navigate to Administer > About and select the Compute resources tab to verify the installation of the oVirt plugin.

9.2. Adding the oVirt connection to Foreman server

Use this procedure to add oVirt as a compute resource in Foreman. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the new compute resource.

  3. From the Provider list, select oVirt.

  4. In the Description field, enter a description for the compute resource.

  5. In the URL field, enter the connection URL for the oVirt Engine’s API in the following form: https://ovirt.example.com/ovirt-engine/api/v4.

  6. In the User field, enter the name of a user with permissions to access oVirt Engine’s resources.

  7. In the Password field, enter the password of the user.

  8. Click Load Datacenters to populate the Datacenter list with data centers from your oVirt environment.

  9. From the Datacenter list, select a data center.

  10. From the Quota ID list, select a quota to limit resources available to Foreman.

  11. In the X509 Certification Authorities field, enter the certificate authority for SSL/TLS access. Alternatively, if you leave the field blank, a self-signed certificate is generated on the first API request by the server.

  12. Click the Locations tab and select the location you want to use.

  13. Click the Organizations tab and select the organization you want to use.

  14. Click Submit to save the compute resource.

CLI procedure
  • Enter the hammer compute-resource create command with Ovirt for --provider and the name of the data center you want to use for --datacenter.

    # hammer compute-resource create \
    --name "My_oVirt" --provider "Ovirt" \
    --description "oVirt server at ovirt.example.com" \
    --url "https://ovirt.example.com/ovirt-engine/api/v4" \
    --user "Foreman_User" --password "My_Password" \
    --locations "New York" --organizations "My_Organization" \
    --datacenter "My_Datacenter"

9.3. Preparing cloud-init images in oVirt

To use cloud-init during provisioning, you must prepare an image with cloud-init installed in oVirt, and then import the image to Foreman to use for provisioning.

Procedure
  1. In oVirt, create a virtual machine to use for image-based provisioning in Foreman.

  2. On the virtual machine, install cloud-init:

    • On Debian or Ubuntu:

      # apt install cloud-init
    • On Enterprise Linux 8+:

      # dnf install cloud-init
    • On OpenSUSE and SUSE Linux Enterprise Server:

      # zypper install cloud-init
  3. To the /etc/cloud/cloud.cfg file, add the following information:

    datasource_list: ["NoCloud", "ConfigDrive"]
  4. In oVirt, create an image from this virtual machine.

When you add this image to Foreman, ensure that you select the User Data checkbox.
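The datasource setting from step 3 can also be kept in a drop-in file, because cloud-init merges any `*.cfg` files under `/etc/cloud/cloud.cfg.d/` with the main configuration. The file name below is an arbitrary choice:

```yaml
# /etc/cloud/cloud.cfg.d/10-foreman-datasource.cfg (file name is illustrative)
datasource_list: ["NoCloud", "ConfigDrive"]
```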

9.4. Adding oVirt images to Foreman server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Foreman server.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click the name of the oVirt connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the base operating system of the image.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. From the Image list, select an image from the oVirt compute resource.

  9. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid option to specify the UUID of the template on the oVirt server.

    # hammer compute-resource image create \
    --name "oVirt_Image" \
    --compute-resource "My_oVirt" \
    --operatingsystem "RedHat version" \
    --architecture "x86_64" \
    --username root \
    --uuid "9788910c-4030-4ae0-bad7-603375dd72b1"

9.5. Preparing a cloud-init template

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates, and click Create Template.

  2. In the Name field, enter a name for the template.

  3. In the Editor field, enter the following template details:

    <%#
    kind: user_data
    name: Cloud-init
    -%>
    #cloud-config
    hostname: <%= @host.shortname %>
    
    <%# Allow user to specify additional SSH key as host parameter -%>
    <% if @host.params['sshkey'].present? || @host.params['remote_execution_ssh_keys'].present? -%>
    ssh_authorized_keys:
    <% if @host.params['sshkey'].present? -%>
      - <%= @host.params['sshkey'] %>
    <% end -%>
    <% if @host.params['remote_execution_ssh_keys'].present? -%>
    <% @host.params['remote_execution_ssh_keys'].each do |key| -%>
      - <%= key %>
    <% end -%>
    <% end -%>
    <% end -%>
    runcmd:
      - |
        #!/bin/bash
    <%= indent 4 do
        snippet 'subscription_manager_registration'
    end %>
    <% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
      <%= indent 4 do
        snippet 'freeipa_register'
      end %>
    <% end -%>
    <% unless @host.operatingsystem.atomic? -%>
        # update all the base packages from the updates repository
        yum -t -y -e 0 update
    <% end -%>
    <%
        # safemode renderer does not support unary negation
        non_atomic = @host.operatingsystem.atomic? ? false : true
        pm_set = @host.puppetmaster.empty? ? false : true
        puppet_enabled = non_atomic && (pm_set || @host.params['force-puppet'])
    %>
    <% if puppet_enabled %>
        yum install -y puppet
        cat > /etc/puppet/puppet.conf << EOF
      <%= indent 4 do
        snippet 'puppet.conf'
      end %>
        EOF
        # Setup puppet to run on system reboot
        /sbin/chkconfig --level 345 puppet on
    
        /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : "--server #{@host.puppetmaster}" %> --no-daemonize
        /sbin/service puppet start
    <% end -%>
    phone_home:
     url: <%= foreman_url('built') %>
     post: []
     tries: 10
  4. Click the Type tab and from the Type list, select User data template.

  5. Click the Association tab, and from the Applicable Operating Systems list, select the operating system that you want associate with the template.

  6. Click the Locations tab, and from the Locations list, select the location that you want to associate with the template.

  7. Click the Organizations tab, and from the Organization list, select the organization that you want to associate with the template.

  8. Click Submit.

  9. In the Foreman web UI, navigate to Hosts > Provisioning Setup > Operating Systems.

  10. Select the operating system you want to associate with the template.

  11. Click the Templates tab, and from the User data template list, select the name of the new template.

  12. Click Submit.
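As an illustration of what this template produces, for a host with no SSH key parameters, no realm, a non-Atomic operating system, and no Puppet server, the rendered user data is roughly the following. The hostname and URL are placeholders, and the output of the subscription_manager_registration snippet is omitted:

```yaml
#cloud-config
hostname: myhost
runcmd:
  - |
    #!/bin/bash
    # update all the base packages from the updates repository
    yum -t -y -e 0 update
phone_home:
 url: https://foreman.example.com/unattended/built
 post: []
 tries: 10
```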

9.6. Adding oVirt details to a compute profile

Use this procedure to add oVirt hardware settings to a compute profile. When you create a host on oVirt using this compute profile, these settings are automatically populated.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the oVirt compute resource.

  4. From the Cluster list, select the target host cluster in the oVirt environment.

  5. From the Template list, select the oVirt template to use for the Cores and Memory settings.

  6. In the Cores field, enter the number of CPU cores to allocate to the new host.

  7. In the Memory field, enter the amount of memory to allocate to the new host.

  8. From the Image list, select the image to use for image-based provisioning.

  9. In the Network Interfaces area, enter the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to a Smart Proxy-managed network. For each network interface, enter the following details:

    1. In the Name field, enter the name of the network interface.

    2. From the Network list, select the logical network that you want to use.

  10. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host. For each volume, enter the following details:

    1. In the Size (GB) field, enter the size, in GB, for the new volume.

    2. From the Storage domain list, select the storage domain for the volume.

    3. From the Preallocate disk list, select either thin provisioning or preallocation of the full disk.

    4. From the Bootable list, select whether you want a bootable or non-bootable volume.

  11. Click Submit to save the compute profile.

CLI procedure
  1. To create a compute profile, enter the following command:

    # hammer compute-profile create --name "oVirt CP"
  2. To set the values for the compute profile, enter the following command:

    # hammer compute-profile values create --compute-profile "oVirt CP" \
    --compute-resource "My_oVirt" \
    --interface "compute_interface=Interface_Type,compute_name=eth0,compute_network=satnetwork" \
    --volume "size_gb=20G,storage_domain=Data,bootable=true" \
    --compute-attributes "cluster=Default,cores=1,memory=1073741824,start=true"

9.7. Creating hosts on oVirt

In Foreman, you can use oVirt provisioning to create hosts over a network connection or from an existing image:

  • If you want to create a host over a network connection, the new host must be able to access either Foreman server’s integrated Smart Proxy or an external Smart Proxy server on an oVirt virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the oVirt server to create and start a virtual machine. If the virtual machine detects the defined Smart Proxy server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

  • If you want to create a host with an existing image, the new host entry triggers the oVirt server to create the virtual machine using a pre-existing image as a basis for the new volume.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

DHCP conflicts

For network-based provisioning, if you use a virtual network on the oVirt server for provisioning, select a network that does not provide DHCP assignments. A network that provides its own DHCP assignments conflicts with the DHCP service on Foreman server when booting new hosts.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the oVirt connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. The oVirt-specific fields are populated with settings from your compute profile. Modify these settings if required.

  8. Click the Interfaces tab, and on the interface of the host, click Edit.

  9. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. oVirt assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  10. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  11. Click the Operating System tab, and confirm that all fields automatically contain values.

  12. Select the Provisioning Method that you want to use:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

  13. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  14. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  15. Click Submit to save the host entry.

CLI procedure
  • To use network-based provisioning, create the host with the hammer host create command and include --provision-method build. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --build true \
    --compute-attributes="cluster=Default,cores=1,memory=1073741824,start=true" \
    --compute-resource "My_oVirt" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --interface "managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --provision-method build \
    --volume="size_gb=20G,storage_domain=Data,bootable=true"
  • To use image-based provisioning, create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --compute-attributes="cluster=Default,cores=1,memory=1073741824,start=true" \
    --compute-resource "My_oVirt" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_oVirt_Image" \
    --interface "managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --provision-method "image" \
    --volume="size_gb=20G,storage_domain=Data,bootable=true"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

10. Provisioning virtual machines in VMware vSphere

VMware vSphere is an enterprise-level virtualization platform from VMware. Foreman can interact with the vSphere platform, including creating new virtual machines and controlling their power management states.

10.1. Installing VMware plugin

Install the VMware plugin to attach the VMware compute resource provider to Foreman. You can use the VMware provider to manage and deploy hosts to VMware.

Procedure
  1. Install the VMware compute resource provider on your Foreman server:

    # foreman-installer --enable-foreman-compute-vmware
Verification
  1. In the Foreman web UI, navigate to Administer > About.

  2. On the System Status card, select the Available Providers tab to verify that the VMware provider is installed.

10.2. Prerequisites for VMware provisioning

The requirements for VMware vSphere provisioning include:

  • A supported version of VMware vCenter Server. The following versions have been fully tested with Foreman:

    • vCenter Server 8.0

    • vCenter Server 7.0

    • vCenter Server 6.7 (EOL)

    • vCenter Server 6.5 (EOL)

  • A Smart Proxy server managing a network on the vSphere environment. Ensure no other DHCP services run on this network to avoid conflicts with Smart Proxy server. For more information, see Configuring networking.

  • An existing VMware template if you want to use image-based provisioning.

  • Provide the installation medium for the operating systems that you want to use to provision hosts.

10.3. Creating a VMware user

The VMware vSphere server requires an administration-like user for Foreman server communication. For security reasons, do not use the administrator user for such communication. Instead, create a user with the following permissions:

For VMware vCenter Server version 8.0, 7.0, or 6.7, set the following permissions:

  • All Privileges → Datastore → Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations

  • All Privileges → Network → Assign Network

  • All Privileges → Resource → Assign virtual machine to resource pool

  • All Privileges → Virtual Machine → Change Config (All)

  • All Privileges → Virtual Machine → Interaction (All)

  • All Privileges → Virtual Machine → Edit Inventory (All)

  • All Privileges → Virtual Machine → Provisioning (All)

  • All Privileges → Virtual Machine → Guest Operations (All)

For VMware vCenter Server version 6.5, set the following permissions:

  • All Privileges → Datastore → Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations

  • All Privileges → Network → Assign Network

  • All Privileges → Resource → Assign virtual machine to resource pool

  • All Privileges → Virtual Machine → Configuration (All)

  • All Privileges → Virtual Machine → Interaction (All)

  • All Privileges → Virtual Machine → Inventory (All)

  • All Privileges → Virtual Machine → Provisioning (All)

  • All Privileges → Virtual Machine → Guest Operations (All)

10.4. Adding a VMware connection to Foreman server

Use this procedure to add a VMware vSphere connection in Foreman server’s compute resources. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Prerequisites
  • Ensure that the host and network-based firewalls are configured to allow communication from Foreman server to vCenter on TCP port 443.

  • Verify that Foreman server and vCenter can resolve each other’s host names.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources, and in the Compute Resources window, click Create Compute Resource.

  2. In the Name field, enter a name for the resource.

  3. From the Provider list, select VMware.

  4. In the Description field, enter a description for the resource.

  5. In the VCenter/Server field, enter the IP address or host name of the vCenter server.

  6. In the User field, enter the user name with permission to access the vCenter’s resources.

  7. In the Password field, enter the password for the user.

  8. Click Load Datacenters to populate the list of data centers from your VMware vSphere environment.

  9. From the Datacenter list, select the data center that you want to manage.

  10. In the Fingerprint field, ensure that this field is populated with the fingerprint from the data center.

  11. From the Display Type list, select a console type, for example, VNC or VMRC. Note that VNC consoles are unsupported on VMware ESXi 6.5 and later.

  12. Optional: In the VNC Console Passwords field, select the Set a randomly generated password on the display connection checkbox to secure console access for new hosts with a randomly generated password. You can retrieve the password for the VNC console from the libvirtd host in the output of the following command:

    # virsh edit your_VM_name
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'/>

    A new password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager.

  13. From the Enable Caching list, you can select whether to enable caching of compute resources. For more information, see Caching of compute resources.

  14. Click the Locations and Organizations tabs and verify that the values are automatically set to your current context. You can also add additional contexts.

  15. Click Submit to save the connection.

CLI procedure
  • Create the connection with the hammer compute-resource create command. Select Vmware for the --provider option and specify the data center with the --datacenter option:

    # hammer compute-resource create \
    --datacenter "My_Datacenter" \
    --description "vSphere server at vsphere.example.com" \
    --locations "My_Location" \
    --name "My_vSphere" \
    --organizations "My_Organization" \
    --password "My_Password" \
    --provider "Vmware" \
    --server "vsphere.example.com" \
    --user "My_User"

10.5. Adding VMware images to Foreman server

VMware vSphere uses templates as images for creating new virtual machines. If using image-based provisioning to create new hosts, you need to add VMware template details to your Foreman server. This includes access details and the template name.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your VMware compute resource.

  3. Click Create Image.

  4. In the Name field, enter a name for the image.

  5. From the Operating System list, select the base operating system of the image.

  6. From the Architecture list, select the operating system architecture.

  7. In the Username field, enter the SSH user name for image access. By default, this is set to root.

  8. If your image supports user data input, such as cloud-init data, select the User data checkbox.

  9. Optional: In the Password field, enter the SSH password to access the image.

  10. From the Image list, select an image from VMware.

  11. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid option to specify the relative path of the template in the vSphere environment:

    # hammer compute-resource image create \
    --architecture "My_Architecture" \
    --compute-resource "My_VMware" \
    --name "My_Image" \
    --operatingsystem "My_Operating_System" \
    --username root \
    --uuid "My_UUID"

10.6. Adding VMware details to a compute profile

You can predefine certain hardware settings for virtual machines on VMware vSphere. You achieve this through adding these hardware settings to a compute profile. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. Select a compute profile.

  3. Select a VMware compute resource.

  4. In the CPUs field, enter the number of CPUs to allocate to the host.

  5. In the Cores per socket field, enter the number of cores to allocate to each CPU.

  6. In the Memory field, enter the amount of memory in MiB to allocate to the host.

  7. In the Firmware field, select either BIOS or UEFI as the firmware for the host. By default, this is set to automatic.

  8. In the Cluster list, select the name of the target host cluster on the VMware environment.

  9. From the Resource pool list, select an available resource allocation for the host.

  10. From the Folder list, select the folder in which to organize the host.

  11. From the Guest OS list, select the operating system you want to use in VMware vSphere.

  12. From the Virtual H/W version list, select the underlying VMware hardware abstraction to use for virtual machines.

  13. If you want to add more memory while the virtual machine is powered on, select the Memory hot add checkbox.

  14. If you want to add more CPUs while the virtual machine is powered on, select the CPU hot add checkbox.

  15. If you want to add a CD-ROM drive, select the CD-ROM drive checkbox.

  16. From the Boot order list, define the order in which the virtual machine tries to boot.

  17. Optional: In the Annotation Notes field, enter an arbitrary description.

  18. If you use image-based provisioning, select the image from the Image list.

  19. From the SCSI controller list, select the disk access method for the host.

  20. If you want to use eager zero thick provisioning, select the Eager zero checkbox. By default, the disk uses lazy zero thick provisioning.

  21. From the Network Interfaces list, select the network parameters for the host’s network interface. At least one interface must point to a Smart Proxy-managed network.

  22. Optional: Click Add Interface to create another network interface.

  23. Click Submit to save the compute profile.

CLI procedure
  1. Create a compute profile:

    # hammer compute-profile create --name "My_Compute_Profile"
  2. Set VMware details to a compute profile:

    # hammer compute-profile values create \
    --compute-attributes "cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --compute-profile "My_Compute_Profile" \
    --compute-resource "My_VMware" \
    --interface "compute_type=VirtualE1000,compute_network=mynetwork" \
    --volume "size_gb=20G,datastore=Data,name=myharddisk,thin=true"
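The --compute-attributes option takes a flat, comma-separated list of key=value pairs. As a quick local sanity check that requires no Foreman server, you can print each pair of the attribute string from the example above on its own line:

```shell
# Split the comma-separated key=value attribute string (local check only)
ATTRS="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true"
echo "$ATTRS" | tr ',' '\n'
```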

10.7. Creating hosts on VMware

The VMware vSphere provisioning process provides the option to create hosts over a network connection or using an existing image.

For network-based provisioning, you must create a host to access either Foreman server’s integrated Smart Proxy or an external Smart Proxy server on a VMware vSphere virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the VMware vSphere server to create the virtual machine. If the virtual machine detects the defined Smart Proxy server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

DHCP conflicts

If you use a virtual network on the VMware vSphere server for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. A virtual network that provides DHCP assignments causes DHCP conflicts with Foreman server when booting new hosts.

For image-based provisioning, use the pre-existing image as a basis for the new volume.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the VMware vSphere connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  8. Click the Interfaces tab, and on the interface of the host, click Edit.

  9. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. VMware assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  10. In the interface window, review the VMware-specific fields that are populated with settings from your compute profile. Modify these settings to suit your needs.

  11. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  12. Click the Operating System tab, and confirm that all fields automatically contain values.

  13. Select the Provisioning Method that you want:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

    • If the foreman_bootdisk plugin is installed, and you want to use boot-disk provisioning, click Boot disk based.

  14. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  15. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements.

  16. Click Submit to provision your host on VMware.

CLI procedure
  • Create the host from a network with the hammer host create command and include --provision-method build to use network-based provisioning:

    # hammer host create \
    --build true \
    --compute-attributes="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --compute-resource "My_VMware" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --interface "managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host" \
    --organization "My_Organization" \
    --provision-method build \
    --volume="size_gb=20G,datastore=Data,name=myharddisk,thin=true"
  • Create the host from an image with the hammer host create command and include --provision-method image to use image-based provisioning:

    # hammer host create \
    --compute-attributes="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --compute-resource "My_VMware" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_VMware_Image" \
    --interface "managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host" \
    --organization "My_Organization" \
    --provision-method image \
    --volume="size_gb=20G,datastore=Data,name=myharddisk,thin=true"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

10.8. Using VMware cloud-init and userdata templates for provisioning

You can use VMware with the Cloud-init and Userdata templates to insert user data into the new virtual machine, to make further VMware customization, and to enable the VMware-hosted virtual machine to call back to Foreman.

You can use the same procedures to set up a VMware compute resource within Foreman, with a few modifications to the workflow.

VMware template provisioning
Figure 3. VMware cloud-init provisioning overview

When you set up the compute resource and images for VMware provisioning in Foreman, the following sequence of provisioning events occurs:

  • The user provisions one or more virtual machines using the Foreman web UI, API, or hammer.

  • Foreman calls the VMware vCenter to clone the virtual machine template.

  • The Foreman userdata provisioning template adds customized identity information.

  • When provisioning completes, the Cloud-init provisioning template instructs the virtual machine to call back to Smart Proxy when cloud-init runs.

  • VMware vCenter clones the template to the virtual machine.

  • VMware vCenter applies customization for the virtual machine’s identity, including the host name, IP, and DNS.

  • The virtual machine builds, cloud-init is invoked and calls back to Foreman on port 80, which then redirects to 443.

Prerequisites
  • Configure port and firewall settings to open any necessary connections. Because of the cloud-init service, the virtual machine always calls back to Foreman even if you register the virtual machine to Smart Proxy. For more information, see Port and firewall requirements in Installing Foreman Server nightly on Debian/Ubuntu and Port and firewall requirements in Installing a Smart Proxy Server nightly on Debian/Ubuntu.

  • If you want to use Smart Proxy servers instead of your Foreman server, ensure that you have configured your Smart Proxy servers accordingly. For more information, see Configuring Smart Proxy for Host Registration and Provisioning in Installing a Smart Proxy Server nightly on Debian/Ubuntu.

  • Back up the following configuration files:

    • /etc/cloud/cloud.cfg.d/01_network.cfg

    • /etc/cloud/cloud.cfg.d/10_datasource.cfg

    • /etc/cloud/cloud.cfg

Associating the Userdata and Cloud-init templates with the operating system
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Search for the CloudInit default template and click its name.

  3. Click the Association tab.

  4. Select all operating systems to which the template applies and click Submit.

  5. Repeat the steps above for the UserData open-vm-tools template.

  6. Navigate to Hosts > Provisioning Setup > Operating Systems.

  7. Select the operating system that you want to use for provisioning.

  8. Click the Templates tab.

  9. From the Cloud-init template list, select CloudInit default.

  10. From the User data template list, select UserData open-vm-tools.

  11. Click Submit to save the changes.

Preparing an image to use the cloud-init template

To prepare an image, you must first configure the settings that you require on a virtual machine that you can then save as an image to use in Foreman.

To use the cloud-init template for provisioning, you must configure a virtual machine so that cloud-init is installed, enabled, and configured to call back to Foreman server.

For security purposes, you must install a CA certificate to use HTTPS for all communication. This procedure includes steps to clean the virtual machine so that no unwanted information transfers to the image you use for provisioning.

If you have an image with cloud-init, you must still follow this procedure to enable cloud-init to communicate with Foreman because cloud-init is disabled by default.

These instructions are for Enterprise Linux or Fedora. Follow similar steps for other Linux distributions.

Procedure
  1. On the virtual machine that you use to create the image, install the required packages:

    # dnf install cloud-init open-vm-tools perl-interpreter perl-File-Temp
  2. Disable network configuration by cloud-init:

    # cat << EOM > /etc/cloud/cloud.cfg.d/01_network.cfg
    network:
      config: disabled
    EOM
  3. Configure cloud-init to fetch data from Foreman:

    # cat << EOM > /etc/cloud/cloud.cfg.d/10_datasource.cfg
    datasource_list: [NoCloud]
    datasource:
      NoCloud:
        seedfrom: https://foreman.example.com/userdata/
    EOM

    If you intend to provision through Smart Proxy server, use the URL of your Smart Proxy server in the seedfrom option, such as https://smartproxy.example.com:8443/userdata/.

  4. Configure modules to use in cloud-init:

    # cat << EOM > /etc/cloud/cloud.cfg
    cloud_init_modules:
     - bootcmd
     - ssh
    
    cloud_config_modules:
     - runcmd
    
    cloud_final_modules:
     - scripts-per-once
     - scripts-per-boot
     - scripts-per-instance
     - scripts-user
     - phone-home
    
    system_info:
      distro: rhel
      paths:
        cloud_dir: /var/lib/cloud
        templates_dir: /etc/cloud/templates
      ssh_svcname: sshd
    EOM
  5. Enable the CA certificates for the image:

    # update-ca-trust enable
  6. Copy the CA certificate from the Apache configuration to /etc/pki/ca-trust/source/anchors/cloud-init-ca.crt.

  7. Update the record of certificates:

    # update-ca-trust extract
  8. Clean the image:

    # systemctl stop rsyslog
    # systemctl stop auditd
    # package-cleanup --oldkernels --count=1
    # dnf clean all
  9. Reduce logspace, remove old logs, and truncate logs:

    # logrotate -f /etc/logrotate.conf
    # rm -f /var/log/*-???????? /var/log/*.gz
    # rm -f /var/log/dmesg.old
    # rm -rf /var/log/anaconda
    # cat /dev/null > /var/log/audit/audit.log
    # cat /dev/null > /var/log/wtmp
    # cat /dev/null > /var/log/lastlog
    # cat /dev/null > /var/log/grubby
  10. Remove udev hardware rules:

    # rm -f /etc/udev/rules.d/70*
  11. Remove the ifcfg scripts related to existing network configurations:

    # rm -f /etc/sysconfig/network-scripts/ifcfg-ens*
    # rm -f /etc/sysconfig/network-scripts/ifcfg-eth*
  12. Remove the SSH host keys:

    # rm -f /etc/ssh/ssh_host_*
  13. Remove root user’s SSH history:

    # rm -rf ~root/.ssh/known_hosts
  14. Remove root user’s shell history:

    # rm -f ~root/.bash_history
    # unset HISTFILE
  15. Create an image from this virtual machine.

  16. Add your image to Foreman.

10.9. Deleting a VM on VMware

You can delete VMs running on VMware from within Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your VMware provider.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the VMware compute resource while retaining any associated hosts within Foreman. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.

10.10. Importing a virtual machine from VMware into Foreman

You can import existing virtual machines running on VMware into Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your VMware compute resource.

  3. On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The page that opens is identical to the Create Host page, with the compute resource already selected. For more information, see Creating a host in Foreman in Managing hosts.

  4. Click Submit to import the virtual machine into Foreman.

10.11. Caching of compute resources

Caching of compute resources speeds up rendering of VMware information.

10.11.1. Enabling caching of compute resources

To enable or disable caching of compute resources:

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Click the Edit button to the right of the VMware server you want to update.

  3. Select the Enable caching checkbox.

10.11.2. Refreshing the compute resources cache

Refresh the cache of compute resources to update compute resources information.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select a VMware server you want to refresh the compute resources cache for and click Refresh Cache.

CLI procedure
  • Use this API call to refresh the compute resources cache:

    # curl -H "Accept:application/json" \
    -H "Content-Type:application/json" -X PUT \
    -u username:password -k \
    https://foreman.example.com/api/compute_resources/compute_resource_id/refresh_cache

    Use hammer compute-resource list to determine the ID of the VMware server you want to refresh the compute resources cache for.
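As a sketch of how the request URL is assembled, the snippet below substitutes a compute resource ID into the path. The ID value 3 is hypothetical; use the ID reported by hammer compute-resource list for your VMware server:

```shell
# Hypothetical compute resource ID taken from `hammer compute-resource list`
CR_ID=3
URL="https://foreman.example.com/api/compute_resources/${CR_ID}/refresh_cache"
echo "$URL"
```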

11. Provisioning virtual machines in Proxmox

Proxmox Virtual Environment is an open-source server management platform for enterprise virtualization. Proxmox tightly integrates the KVM hypervisor and Linux Containers (LXC). Foreman can interact with Proxmox, including creating virtual machines and controlling their power management states.

11.1. Installing the Proxmox plugin

Install the Proxmox plugin to attach a Proxmox compute resource provider to Foreman. This allows you to manage and deploy hosts to Proxmox.

Procedure
  1. Install the Proxmox plugin on your Foreman server:

    # foreman-installer --enable-foreman-plugin-proxmox
  2. Optional: In the Foreman web UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Proxmox plugin.

11.2. Adding a Proxmox connection to Foreman server

Use this procedure to add a Proxmox connection to Foreman. If your Proxmox instance consists of multiple nodes, you have to create a Proxmox connection to Foreman for each node.

Prerequisites
  • Ensure that the host and network-based firewalls are configured to allow communication from Foreman to Proxmox on TCP port 443.

  • Verify that Foreman server is able to resolve the host name of your Proxmox compute resource and Proxmox is able to resolve the host name of Foreman server.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Click Create Compute Resource.

  3. In the Name field, enter a name for the compute resource.

  4. From the Provider list, select Proxmox.

  5. Optional: In the Description field, enter a description for the compute resource.

  6. Optional: In the Authentication method list, select User token if you do not want to use access tickets to authenticate on Proxmox.

  7. In the Url field, enter the IP address or host name of your Proxmox node. Ensure that you specify its port and append /api2/json to the path.

  8. In the Username field, enter the user name to access Proxmox.

  9. In the Password field, enter the password for the user.

  10. Optional: Select the SSL verify peer checkbox to verify the certificate used for the encrypted connection from Foreman to Proxmox.

  11. In the X509 Certificate Authorities field, enter a certificate authority or a correctly ordered chain of certificate authorities.

  12. Optional: Click Test Connection to verify the URL and its corresponding credentials.

  13. Click Submit to save the connection.
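For example, assuming a Proxmox node reachable at the placeholder host name proxmox.example.com on the default Proxmox API port 8006, the Url value would take this shape:

```shell
# Hypothetical Proxmox API URL: host name, default API port 8006, /api2/json path
PROXMOX_URL="https://proxmox.example.com:8006/api2/json"
echo "$PROXMOX_URL"
```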

11.3. Adding Proxmox images to Foreman server

Proxmox uses templates as images for creating new virtual machines. When you use image-based provisioning to create new hosts, you need to add Proxmox template details to your Foreman server. This includes access details and the template name.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Proxmox compute resource.

  3. Click Create Image.

  4. In the Name field, enter a name for the image.

  5. From the Operating System list, select the base operating system of the image.

  6. From the Architecture list, select the operating system architecture.

  7. In the Username field, enter the SSH user name for image access. By default, this is set to root. This is only necessary if the image has an inbuilt user account.

  8. From the User data list, select whether you want the image to support user data input, such as cloud-init data.

  9. In the Password field, enter the SSH password for image access.

  10. In the Image field, enter the relative path and name of the template on the Proxmox environment. Do not include the data center in the relative path.

  11. Click Submit to save the image details.

11.4. Adding Proxmox details to a compute profile

You can predefine certain hardware settings for virtual machines on Proxmox by adding these hardware settings to a compute profile.

Note that this procedure assumes you are using KVM/qemu-based virtualization.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. Select a compute profile.

  3. Select a Proxmox compute resource.

  4. From the Type list, select a virtualization option:

    • LXC container: Linux containers that focus on isolating applications and reusing the host kernel

    • KVM/Qemu server: Kernel-based virtual machines that run their own kernel and let you boot various guest operating systems

  5. Click the General tab.

  6. From the Node list, select a Proxmox node.

  7. From the Image list, select an image that is available on your Proxmox compute resource.

  8. From the Pool list, select a resource allocation pool.

  9. Optional: Click Advanced Options, Hardware, Network Interfaces, or Storage to configure compute profile settings.

  10. On the Storage tab, the Foreman community recommends selecting either SCSI or VirtIO Block in the Controller field as the type of the disk controller for your host in Proxmox.

  11. On the Storage tab, the Foreman community recommends that you do not enable caching.

  12. Click Submit to save the compute profile.

11.5. Creating hosts on Proxmox

The Proxmox provisioning process provides the option to create hosts over a network connection or using an existing image.

For network-based provisioning, you must create a host with access to either the integrated Smart Proxy of your Foreman server or an external Smart Proxy server on a Proxmox virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the Proxmox node to create the virtual machine. If the virtual machine detects the defined Smart Proxy server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

DHCP conflicts

If you use a virtual network on the Proxmox node for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. A virtual network that provides DHCP assignments causes DHCP conflicts with Foreman server when booting new hosts.

For image-based provisioning, use the pre-existing image as a basis for the new volume.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select your Proxmox compute resource.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  8. Click the Interfaces tab, and on the interface of the host, click Edit.

  9. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. Proxmox assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  10. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  11. On the Operating System tab, confirm that all fields automatically contain values.

  12. Select the Provisioning Method that you want:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

    • If the foreman_bootdisk plugin is installed and you want to use boot-disk provisioning, click Boot disk based.

  13. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  14. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements.

  15. Click Submit to provision your host on Proxmox.

11.6. Deleting a virtual machine on Proxmox

You can delete virtual machines running on Proxmox from within Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Proxmox compute resource.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from your Proxmox compute resource while retaining any associated hosts within Foreman.

  4. Optional: If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.

11.7. Importing a virtual machine from Proxmox into Foreman

You can import existing virtual machines running on Proxmox into Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Proxmox compute resource.

  3. On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The page that opens is identical to the Create Host page, with the compute resource already selected. For more information, see Creating a host in Foreman in Managing hosts.

  4. Click Submit to import the virtual machine into Foreman.

11.8. Adding a remote console connection for a host on Proxmox

Use this procedure to add a remote console connection using VNC for a host on Proxmox.

Limitation
  • This procedure only works for hosts based on qemu. It does not work for LXC containers on Proxmox.

Prerequisites
  • Ensure that your host has Standard VGA selected from the VGA list.

Procedure
  1. On your Proxmox, open the required port for VNC:

    # firewall-cmd --add-port=My_Port/tcp --permanent

    The port is the sum of 5900 and the VMID of your host. You can view the VMID on the VM tab for your host in the Foreman web UI. For example, if your host has the VMID 142, open port 6042 on your Proxmox.

    To enable a remote console connection for multiple hosts, open a range of ports using firewall-cmd --add-port=5900-My_Upper_Port_Range_Limit/tcp --permanent.

  2. Reload the firewall configuration:

    # firewall-cmd --reload
  3. On your Proxmox, enable VNC for your host in /etc/pve/nodes/My_Proxmox_Instance/qemu-server/My_VMID.conf:

    args: -vnc 0.0.0.0:My_VMID

    For more information, see VNC Client Access in the Proxmox documentation.

  4. Restart your host on Proxmox.

  5. Optional: Verify that the required port for VNC is open:

    # netstat -ltunp
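Because the VNC port is 5900 plus the VMID, you can compute the port to open in shell. This sketch uses the example VMID 142 from above and only prints the firewall-cmd invocation instead of running it, since firewall changes must happen on the Proxmox node itself:

```shell
VMID=142                      # VMID shown on the host's VM tab in the Foreman web UI
PORT=$((5900 + VMID))         # VNC port = 5900 + VMID
echo "firewall-cmd --add-port=${PORT}/tcp --permanent"
```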

12. Provisioning virtual machines on KubeVirt

KubeVirt addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing virtual machine (VM) workloads that cannot be easily containerized. This technology provides a unified development platform where developers can build, modify, and deploy applications residing in application containers and VMs in a shared environment. These capabilities support rapid application modernization across the open hybrid cloud.

You can create a compute resource for KubeVirt so that you can provision and manage virtual machines in Kubernetes by using Foreman.

Prerequisites
  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • You must have cluster-admin permissions for the Kubernetes cluster.

  • A Smart Proxy server managing a network on the Kubernetes cluster. Ensure that no other DHCP services run on this network to avoid conflicts with Smart Proxy server. For more information about network service configuration for Smart Proxy servers, see Configuring Networking in Provisioning hosts.

12.1. Adding a KubeVirt connection to Foreman server

Use this procedure to add KubeVirt as a compute resource in Foreman.

Procedure
  1. Enter the following foreman-installer command to enable the KubeVirt plugin for Foreman:

    # foreman-installer --enable-foreman-plugin-kubevirt
  2. Obtain a token to use for HTTP and HTTPS authentication:

    1. Log in to the Kubernetes cluster and list the secrets that contain tokens:

      $ kubectl get secrets
    2. Obtain the token for your secret:

      $ kubectl get secrets MY_SECRET -o jsonpath='{.data.token}' | base64 -d | xargs
    3. Record the token to use later in this procedure.

  3. In the Foreman web UI, navigate to Infrastructure > Compute Resources, and click Create Compute Resource.

  4. In the Name field, enter a name for the new compute resource.

  5. From the Provider list, select KubeVirt.

  6. In the Description field, enter a description for the compute resource.

  7. In the Hostname field, enter the FQDN, hostname, or IP address of the Kubernetes cluster.

  8. In the API Port field, enter the port number that you want to use for provisioning requests from Foreman to KubeVirt.

  9. In the Namespace field, enter the namespace on the Kubernetes cluster in which to create virtual machines.

  10. In the Token field, enter the bearer token for HTTP and HTTPS authentication.

  11. Optional: In the X509 Certification Authorities field, enter a certificate to enable client certificate authentication for API server calls.
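The token retrieved in step 2 is stored base64-encoded in the secret, which is why the jsonpath output is piped through base64 -d. The following local illustration of that decoding step uses a stand-in value rather than a real Kubernetes secret:

```shell
# Stand-in for the .data.token field of a Kubernetes secret (not a real token)
ENCODED=$(printf 'my-bearer-token' | base64)
# Decode it the same way as in the procedure above
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```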

13. Provisioning cloud instances on OpenStack

OpenStack provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads. In Foreman, you can interact with OpenStack REST API to create cloud instances and control their power management states.

Prerequisites
  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • A Smart Proxy server managing a network in your OpenStack environment. For more information, see Configuring Networking in Provisioning hosts.

  • An image added to OpenStack Image Storage (glance) service for image-based provisioning. For more information, see the OpenStack Instances and Images Guide.

13.1. Adding an OpenStack connection to Foreman server

You can add OpenStack as a compute resource in Foreman. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Click Create Compute Resource.

  3. In the Name field, enter a name for the new compute resource.

  4. From the Provider list, select RHEL OpenStack Platform.

  5. Optional: In the Description field, enter a description for the compute resource.

  6. In the URL field, enter the URL for the OpenStack Authentication keystone service’s API at the tokens resource, such as http://openstack.example.com:5000/v2.0/tokens or http://openstack.example.com:5000/v3/auth/tokens.

  7. In the Username and Password fields, enter the user authentication for Foreman to access the environment.

  8. Optional: In the Project (Tenant) name field, enter the name of your tenant (v2) or project (v3) for Foreman server to manage.

  9. In the User domain field, enter the user domain for v3 authentication.

  10. In the Project domain name field, enter the project domain name for v3 authentication.

  11. In the Project domain ID field, enter the project domain ID for v3 authentication.

  12. Optional: Select Allow external network as main network to use external networks as primary networks for hosts.

  13. Optional: Click Test Connection to verify that Foreman can connect to your compute resource.

  14. Click the Locations and Organizations tabs and verify that the location and organization that you want to use are set to your current context. Add any additional contexts that you want to these tabs.

  15. Click Submit to save the OpenStack connection.

CLI procedure
  • To create a compute resource, enter the hammer compute-resource create command:

    # hammer compute-resource create --name "My_OpenStack" \
    --provider "OpenStack" \
    --description "My OpenStack environment at openstack.example.com" \
    --url "http://openstack.example.com:5000/v3/auth/tokens" \
    --user "My_Username" --password "My_Password" \
    --tenant "My_Openstack" --domain "My_User_Domain" \
    --project-domain-id "My_Project_Domain_ID" \
    --project-domain-name "My_Project_Domain_Name" \
    --locations "New York" --organizations "My_Organization"

13.2. Adding OpenStack images to Foreman server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Foreman server.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click the name of the OpenStack connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the base operating system of the image.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. From the Image list, select an image from the OpenStack compute resource.

  9. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the OpenStack server.

    # hammer compute-resource image create \
    --name "OpenStack Image" \
    --compute-resource "My_OpenStack_Platform" \
    --operatingsystem "RedHat version" \
    --architecture "x86_64" \
    --username root \
    --user-data true \
    --uuid "/path/to/OpenstackImage.qcow2"

13.3. Adding OpenStack details to a compute profile

Use this procedure to add OpenStack hardware settings to a compute profile. When you create a host on OpenStack using this compute profile, these settings are automatically populated.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the OpenStack compute resource.

  4. From the Flavor list, select the hardware profile on OpenStack to use for the host.

  5. From the Availability zone list, select the target cluster to use within the OpenStack environment.

  6. From the Image list, select the image to use for image-based provisioning.

  7. From the Tenant list, select the tenant or project for the OpenStack instance.

  8. From the Security Group list, select the cloud-based access rules for ports and IP addresses.

  9. From the Internal network list, select the private networks for the host to join.

  10. From the Floating IP network list, select the external networks for the host to join and assign a floating IP address.

  11. From the Boot from volume list, select whether a volume is created from the image. If not selected, the instance boots the image directly.

  12. In the New boot volume size (GB) field, enter the size, in GB, of the new boot volume.

  13. Click Submit to save the compute profile.

CLI procedure
  • Set OpenStack details to a compute profile:

    # hammer compute-profile values create \
    --compute-resource "My_OpenStack" \
    --compute-profile "My_Compute_Profile" \
    --compute-attributes "availability_zone=My_Zone,image_ref=My_Image,flavor_ref=m1.small,tenant_id=openstack,security_groups=default,network=My_Network,boot_from_volume=false"
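
The --compute-attributes option packs several settings into a single comma-separated key=value string, which is easy to get wrong by hand. The following sketch (all values are the same placeholders as in the example above) assembles the string from shell variables and prints the resulting hammer command for review before you run it:

```shell
#!/bin/sh
# Placeholder values; replace with your OpenStack details.
ZONE="My_Zone"
IMAGE="My_Image"
FLAVOR="m1.small"
TENANT="openstack"
NETWORK="My_Network"

# Join the settings into the comma-separated form that hammer expects.
ATTRS="availability_zone=${ZONE},image_ref=${IMAGE},flavor_ref=${FLAVOR},tenant_id=${TENANT},security_groups=default,network=${NETWORK},boot_from_volume=false"

# Print the command for review; remove 'echo' to apply it.
echo hammer compute-profile values create \
  --compute-resource "My_OpenStack" \
  --compute-profile "My_Compute_Profile" \
  --compute-attributes "$ATTRS"
```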

13.4. Creating image-based hosts on OpenStack

In Foreman, you can use OpenStack provisioning to create hosts from an existing image. The new host entry triggers the OpenStack server to create the instance using the pre-existing image as a basis for the new volume.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the OpenStack connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  8. From the Lifecycle Environment list, select the environment.

  9. Click the Interfaces tab, and on the interface of the host, click Edit.

  10. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. OpenStack assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  11. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  12. Click the Operating System tab, and confirm that all fields automatically contain values.

  13. If you want to change the image that populates automatically from your compute profile, from the Images list, select a different image to base the new host’s root volume on.

  14. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use.

  15. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  16. Click Submit to save the host entry.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --compute-attributes="flavor_ref=m1.small,tenant_id=openstack,security_groups=default,network=mynetwork" \
    --compute-resource "My_OpenStack_Platform" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_OpenStack_Image" \
    --interface "managed=true,primary=true,provision=true" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --provision-method image

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.
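
Because each host entry is a single hammer host create call, creating several similar hosts can be scripted. This is a minimal sketch, not a supported workflow: the host names web01 to web03 are hypothetical, the remaining options mirror the example above, and each command is only printed so that you can review it before removing the echo:

```shell
#!/bin/sh
# Hypothetical host names; adjust the list and options for your environment.
HOSTS="web01 web02 web03"

for NAME in $HOSTS; do
  # Print the command for review; remove 'echo' to create the hosts.
  echo hammer host create \
    --compute-resource "My_OpenStack_Platform" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_OpenStack_Image" \
    --interface "managed=true,primary=true,provision=true" \
    --location "My_Location" \
    --managed true \
    --organization "My_Organization" \
    --provision-method image \
    --name "$NAME"
done
```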

14. Provisioning cloud instances in Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides public cloud compute resources. Using Foreman, you can interact with Amazon EC2’s public API to create cloud instances and control their power management states. Use the procedures in this chapter to add a connection to an Amazon EC2 account and provision a cloud instance.

14.1. Prerequisites for Amazon EC2 provisioning

The requirements for Amazon EC2 provisioning include:

  • A Smart Proxy server managing a network in your EC2 environment. Use a Virtual Private Cloud (VPC) to ensure a secure network between the hosts and Smart Proxy server.

  • An Amazon Machine Image (AMI) for image-based provisioning.

  • Provide the installation medium for the operating systems that you want to use to provision hosts.

14.2. Installing Amazon EC2 plugin

Install the Amazon EC2 plugin to attach an EC2 compute resource provider to Foreman. This allows you to manage and deploy hosts to EC2.

Procedure
  1. Install the EC2 compute resource provider on your Foreman server:

    # foreman-installer --enable-foreman-compute-ec2
  2. Optional: In the Foreman web UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Amazon EC2 plugin.

14.3. Adding an Amazon EC2 connection to the Foreman server

Use this procedure to add the Amazon EC2 connection in Foreman server’s compute resources. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Prerequisites
  • The AWS EC2 user that performs this procedure requires the AmazonEC2FullAccess permissions. You can attach this policy in AWS.

Time settings and Amazon Web Services

Amazon Web Services uses time settings as part of the authentication process. Ensure that Foreman server’s time is correctly synchronized. Ensure that an NTP service, such as ntpd or chronyd, is running properly on Foreman server. Failure to provide the correct time to Amazon Web Services can lead to authentication failures.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and in the Compute Resources window, click Create Compute Resource.

  2. In the Name field, enter a name to identify the Amazon EC2 compute resource.

  3. From the Provider list, select EC2.

  4. In the Description field, enter information that helps distinguish the resource for future use.

  5. Optional: From the HTTP proxy list, select an HTTP proxy to connect to external API services. You must add HTTP proxies to Foreman before you can select a proxy from this list. For more information, see Using an HTTP proxy with compute resources.

  6. In the Access Key and Secret Key fields, enter the access keys for your Amazon EC2 account. For more information, see Managing Access Keys for your AWS Account on the Amazon documentation website.

  7. Optional: Click Load Regions to populate the Regions list.

  8. From the Region list, select the Amazon EC2 region or data center to use.

  9. Click the Locations tab and ensure that the location you want to use is selected, or add a different location.

  10. Click the Organizations tab and ensure that the organization you want to use is selected, or add a different organization.

  11. Click Submit to save the Amazon EC2 connection.

  12. Select the new compute resource and then click the SSH keys tab, and click Download to save a copy of the SSH keys to use for SSH authentication. Until BZ1793138 is resolved, you can download a copy of the SSH keys only immediately after creating the Amazon EC2 compute resource. If you require SSH keys at a later stage, follow the procedure in Connecting to an Amazon EC2 instance using SSH.

CLI procedure
  • Create the connection with the hammer compute-resource create command. Use --user and --password options to add the access key and secret key respectively.

    # hammer compute-resource create \
    --description "Amazon EC2 Public Cloud" \
    --locations "My_Location" \
    --name "My_EC2_Compute_Resource" \
    --organizations "My_Organization" \
    --password "My_Secret_Key" \
    --provider "EC2" \
    --region "My_Region" \
    --user "My_User_Name"
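
To keep the secret key out of your shell history, you can supply the credentials from environment variables instead of typing them on the command line. A hedged sketch: the variable names follow the common AWS convention but are not required by hammer, and they are assigned inline here only to keep the example self-contained:

```shell
#!/bin/sh
# In practice, export these in your session or source them from a file
# readable only by you, rather than assigning them inline.
AWS_ACCESS_KEY_ID="My_User_Name"
AWS_SECRET_ACCESS_KEY="My_Secret_Key"

# Print the command for review; remove 'echo' to create the resource.
echo hammer compute-resource create \
  --name "My_EC2_Compute_Resource" \
  --provider "EC2" \
  --region "My_Region" \
  --user "$AWS_ACCESS_KEY_ID" \
  --password "$AWS_SECRET_ACCESS_KEY"
```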

14.4. Using an HTTP proxy with compute resources

In some cases, the EC2 compute resource that you use might require a specific HTTP proxy to communicate with Foreman. In Foreman, you can create an HTTP proxy and then assign the HTTP proxy to your EC2 compute resource.

However, if you configure an HTTP proxy for Foreman in Administer > Settings, and then add another HTTP proxy for your compute resource, the HTTP proxy that you define in Administer > Settings takes precedence.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > HTTP Proxies, and select New HTTP Proxy.

  2. In the Name field, enter a name for the HTTP proxy.

  3. In the URL field, enter the URL for the HTTP proxy, including the port number.

  4. Optional: Enter a username and password to authenticate to the HTTP proxy, if your HTTP proxy requires authentication.

  5. Click Test Connection to ensure that you can connect to the HTTP proxy from Foreman.

  6. Click the Locations tab and add a location.

  7. Click the Organization tab and add an organization.

  8. Click Submit.

14.5. Creating an image for Amazon EC2

You can create images for Amazon EC2 from within Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Amazon EC2 provider.

  3. Click Create Image.

    • In the Name field, enter a meaningful and unique name for your EC2 image.

    • From the Operating System list, select an operating system to associate with the image.

    • From the Architecture list, select an architecture to associate with the image.

    • In the Username field, enter the username needed to SSH into the machine.

    • In the Image ID field, enter the image ID provided by Amazon or an operating system vendor. You can find the ID within Amazon AWS or on operating system specific pages such as debian.org or ubuntu.com.

    • Optional: Select the User Data check box to enable support for user data input.

    • Optional: Set an IAM Role for Fog to use when creating this image.

    • Click Submit to save your changes to Foreman.

14.6. Adding Amazon EC2 images to Foreman server

Amazon EC2 uses image-based provisioning to create hosts. You must add image details to your Foreman server. This includes access details and image location.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and select an Amazon EC2 connection.

  2. Click the Images tab, and then click Create Image.

  3. In the Name field, enter a name to identify the image for future use.

  4. From the Operating System list, select the operating system that corresponds with the image you want to add.

  5. From the Architecture list, select the operating system’s architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. In the Image ID field, enter the Amazon Machine Image (AMI) ID for the image. This is usually in the following format: ami-xxxxxxxx.

  9. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. If you enable user data, the Finish scripts are automatically disabled. This also applies in reverse: if you enable the Finish scripts, this disables user data.

  10. Optional: In the IAM role field, enter the Amazon security role used for creating the image.

  11. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the Amazon EC2 server.

    # hammer compute-resource image create \
    --architecture "My_Architecture" \
    --compute-resource "My_EC2_Compute_Resource" \
    --name "My_Amazon_EC2_Image" \
    --operatingsystem "My_Operating_System" \
    --user-data true \
    --username root \
    --uuid "ami-My_AMI_ID"
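
AMI IDs consist of the ami- prefix followed by 8 hexadecimal characters, or 17 characters in newer AWS accounts. A small, hypothetical pre-flight check such as the following can catch typos before the hammer call fails:

```shell
#!/bin/sh
# Return 0 if the argument looks like an AMI ID (ami- plus 8 or 17
# lowercase hexadecimal characters), non-zero otherwise.
is_ami_id() {
  echo "$1" | grep -Eq '^ami-([0-9a-f]{8}|[0-9a-f]{17})$'
}

is_ami_id "ami-0abcdef1234567890" && echo "looks valid"
is_ami_id "ami-123" || echo "not a valid AMI ID"
```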

14.7. Adding Amazon EC2 details to a compute profile

You can add hardware settings for instances on Amazon EC2 to a compute profile.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles and click the name of your profile, then click an EC2 connection.

  2. From the Flavor list, select the hardware profile on EC2 to use for the host.

  3. From the Image list, select the image to use for image-based provisioning.

  4. From the Availability zone list, select the target cluster to use within the chosen EC2 region.

  5. From the Subnet list, add the subnet for the EC2 instance. If you have a VPC for provisioning new hosts, use its subnet.

  6. From the Security Groups list, select the cloud-based access rules for ports and IP addresses to apply to the host.

  7. From the Managed IP list, select either a Public IP or a Private IP.

  8. Click Submit to save the compute profile.

CLI procedure
  • Set Amazon EC2 details to a compute profile:

    # hammer compute-profile values create \
    --compute-resource "My_EC2_Compute_Resource" \
    --compute-profile "My_Compute_Profile" \
    --compute-attributes "flavor_id=1,availability_zone=My_Zone,subnet_id=1,security_group_ids=1,managed_ip=public_ip"

14.8. Creating image-based hosts on Amazon EC2

The Amazon EC2 provisioning process creates hosts from existing images on the Amazon EC2 server. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the EC2 connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  8. Click the Interfaces tab, and on the interface of the host, click Edit.

  9. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. EC2 assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  10. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  11. Click the Operating System tab and confirm that all fields are populated with values.

  12. Click the Virtual Machine tab and confirm that all fields are populated with values.

  13. Click Submit to save your changes.

This new host entry triggers the Amazon EC2 server to create the instance, using the pre-existing image as a basis for the new volume.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image to use image-based provisioning.

    # hammer host create \
    --compute-attributes="flavor_id=m1.small,image_id=TestImage,availability_zones=us-east-1a,security_group_ids=Default,managed_ip=Public" \
    --compute-resource "My_EC2_Compute_Resource" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_Amazon_EC2_Image" \
    --interface "managed=true,primary=true,provision=true,subnet_id=EC2" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --organization "My_Organization" \
    --provision-method image

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

14.9. Connecting to an Amazon EC2 instance using SSH

You can connect remotely to an Amazon EC2 instance from Foreman server using SSH. However, to connect to any Amazon Web Services EC2 instance that you provision through Foreman, you must first access the private key that is associated with the compute resource in the Foreman database, and use this key for authentication.

Procedure
  1. On your Foreman server base system, list the compute resources and note the ID of the compute resource that you want to use:

    # hammer compute-resource list
  2. Connect to the Foreman database as the user postgres:

    # su - postgres -c "psql foreman"
  3. Select the secret from the key_pairs table, replacing 3 with the ID of the compute resource that you noted earlier:

    foreman=# select secret from key_pairs where compute_resource_id = 3;
  4. Copy the key from after -----BEGIN RSA PRIVATE KEY----- until -----END RSA PRIVATE KEY-----.

  5. Create a .pem file and paste your key into the file:

    # vim Keyname.pem
  6. Ensure that you restrict access to the .pem file:

    # chmod 600 Keyname.pem
  7. To connect to the Amazon EC2 instance, enter the following command:

    ssh -i Keyname.pem ec2-user@example.aws.com
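
Steps 3 to 6 can also be done non-interactively: an awk range pattern keeps only the lines between the BEGIN and END markers, which strips the psql column header and row-count footer. This sketch makes the same assumptions as the procedure above (database foreman, table key_pairs, compute resource ID 3) and is demonstrated here on sample psql output:

```shell
#!/bin/sh
# Keep only the lines between the private key markers. In practice,
# pipe the psql output from step 3 into this filter, for example:
#   su - postgres -c "psql -t foreman -c \"select secret from key_pairs where compute_resource_id = 3;\"" | extract_key > Keyname.pem
extract_key() {
  awk '/-----BEGIN RSA PRIVATE KEY-----/,/-----END RSA PRIVATE KEY-----/'
}

# Demonstration on sample output; a real key body replaces "....".
SAMPLE='secret
-----BEGIN RSA PRIVATE KEY-----
....
-----END RSA PRIVATE KEY-----
(1 row)'

echo "$SAMPLE" | extract_key > Keyname.pem
chmod 600 Keyname.pem
```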

14.10. Configuring a finish template for an Amazon Web Service EC2 environment

You can use Foreman finish templates during the provisioning of Linux instances in an Amazon EC2 environment.

If you want to use a Finish template with SSH, Foreman must reside within the EC2 environment and in the correct security group. Foreman currently performs SSH finish provisioning directly, not using Smart Proxy server. If Foreman server does not reside within EC2, the EC2 virtual machine reports an internal IP rather than the necessary external IP with which it can be reached.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Templates > Provisioning Templates.

  2. In the Provisioning Templates page, enter Kickstart default finish into the search field and click Search.

  3. On the Kickstart default finish template, select Clone.

  4. In the Name field, enter a unique name for the template.

  5. In the template, prefix each command that requires root privileges with sudo, except for yum or equivalent commands, or add the following lines to run the entire template body as root:

    sudo -s << EOS
    _Template_ _Body_
    EOS
  6. Click the Association tab, and associate the template with a Red Hat Enterprise Linux operating system that you want to use.

  7. Click the Locations tab, and add the location where the host resides.

  8. Click the Organizations tab, and add the organization that the host belongs to.

  9. Make any additional customizations or changes that you require, then click Submit to save your template.

  10. In the Foreman web UI, navigate to Hosts > Operating systems and select the operating system that you want for your host.

  11. Click the Templates tab, and from the Finish Template list, select your finish template.

  12. In the Foreman web UI, navigate to Hosts > Create Host.

  13. In the Name field, enter a name for the host.

  14. Optional: Click the Organization tab and change the organization context to match your requirement.

  15. Optional: Click the Location tab and change the location context to match your requirement.

  16. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  17. Click the Parameters tab and navigate to Host parameters.

  18. If you have the Remote Execution plugin installed, in Host parameters, click Add Parameter.

    1. In the Name field, enter remote_execution_ssh_user. In the corresponding Value field, enter ec2-user.

  19. Click Submit to save the changes.
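
The heredoc wrapper in step 5 runs the whole template body in one privileged shell instead of requiring a sudo prefix on every command. You can try the construct as follows; plain sh stands in for sudo here (an assumption, so the sketch runs without root), whereas in the template the first line reads sudo -s << EOS:

```shell
#!/bin/sh
# All commands between the EOS markers execute in a single child shell;
# with 'sudo -s' that child shell is a root shell.
sh -s << EOS
echo "template body runs in one child shell"
date > /tmp/finish_demo_marker
EOS
```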

14.11. Deleting a virtual machine on Amazon EC2

You can delete virtual machines running on Amazon EC2 from within Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Amazon EC2 provider.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Amazon EC2 compute resource while retaining any associated hosts within Foreman. If you want to delete an orphaned host, navigate to Hosts > All Hosts and delete the host manually.

14.12. Uninstalling Amazon EC2 plugin

If you have previously installed the Amazon EC2 plugin but do not use it anymore to manage and deploy hosts to EC2, you can uninstall it from your Foreman server.

Procedure
  1. Uninstall the EC2 compute resource provider from your Foreman server:

    # apt remove foreman-ec2
    # foreman-installer --no-enable-foreman-compute-ec2
  2. Optional: In the Foreman web UI, navigate to Administer > About and select the Available Providers tab to verify the removal of the Amazon EC2 plugin.

14.13. More information about Amazon Web Services and Foreman

For information about how to locate Red Hat Gold Images on Amazon Web Services EC2, see How to Locate Red Hat Cloud Access Gold Images on AWS EC2.

For information about how to install and use the Amazon Web Service Client on Linux, see Install the AWS Command Line Interface on Linux in the Amazon Web Services documentation.

For information about importing and exporting virtual machines in Amazon Web Services, see VM Import/Export in the Amazon Web Services documentation.

15. Provisioning cloud instances on Google Compute Engine

Foreman can interact with Google Compute Engine (GCE), including creating new virtual machines and controlling their power management states.

Prerequisites
  • Configure a domain and subnet on Foreman. For more information about networking requirements, see Configuring networking.

  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • In your GCE project, configure a service account with the necessary IAM Compute role. For more information, see Compute Engine IAM roles in the GCE documentation.

  • In your GCE project-wide metadata, set enable-oslogin to FALSE. For more information, see Enabling or disabling OS Login in the GCE documentation.

  • Optional: If you want to use Puppet with GCE hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs.

  • Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information, see Provisioning Templates in Provisioning hosts.

15.1. Installing Google GCE plugin

Install the Google GCE plugin to attach a GCE compute resource provider to Foreman. This allows you to manage and deploy hosts to GCE.

Procedure
  1. Install the Google GCE compute resource provider on your Foreman server:

    # foreman-installer \
    --enable-foreman-cli-google \
    --enable-foreman-plugin-google
  2. Optional: In the Foreman web UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Google GCE plugin.

15.2. Adding a Google GCE connection to Foreman server

Use this procedure to add Google Compute Engine (GCE) as a compute resource in Foreman. To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In Google GCE, generate a service account key in JSON format. For more information, see Create and manage service account keys in the GCE documentation.

  2. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  3. In the Name field, enter a name for the compute resource.

  4. From the Provider list, select Google.

  5. Optional: In the Description field, enter a description for the resource.

  6. In the JSON key field, click Choose File and locate your service account key for upload from your local machine.

  7. Click Load Zones to populate the list of zones from your GCE environment.

  8. From the Zone list, select the GCE zone to use.

  9. Click Submit.

CLI procedure
  1. In Google GCE, generate a service account key in JSON format. For more information, see Create and manage service account keys in the GCE documentation.

  2. Copy the file from your local machine to Foreman server:

    # scp My_GCE_Key.json root@foreman.example.com:/etc/foreman/My_GCE_Key.json
  3. On Foreman server, change the owner for your service account key to the foreman user:

    # chown root:foreman /etc/foreman/My_GCE_Key.json
  4. On Foreman server, configure permissions for your service account key to ensure that the file is readable:

    # chmod 0640 /etc/foreman/My_GCE_Key.json
  5. Use the hammer compute-resource create command to add a GCE compute resource to Foreman:

    # hammer compute-resource create \
    --key-path "/etc/foreman/My_GCE_Key.json" \
    --name "My_GCE_Compute_Resource" \
    --provider "gce" \
    --zone "My_Zone"
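
Steps 3 and 4 matter because the foreman user must be able to read the service account key while other users must not. The resulting ownership and mode can be verified with stat. This sketch uses a temporary file so it runs anywhere; on Foreman server you would point it at /etc/foreman/My_GCE_Key.json instead (stat -c assumes GNU coreutils):

```shell
#!/bin/sh
# Create a stand-in for the key file and apply the mode from step 4.
KEY=$(mktemp)
chmod 0640 "$KEY"

# %U:%G is owner:group, %a is the octal mode. On Foreman server the
# expected output for the real key file is root:foreman 640.
stat -c '%U:%G %a' "$KEY"

MODE=$(stat -c '%a' "$KEY")
if [ "$MODE" = "640" ]; then
  echo "mode OK"
fi
rm -f "$KEY"
```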

15.3. Adding Google Compute Engine images to Foreman server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Foreman server.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click the name of the Google Compute Engine connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the base operating system of the image.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. Specify a user other than root, because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers.

  7. From the Image list, select an image from the Google Compute Engine compute resource.

  8. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.

  9. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. With the --username option, specify a user other than root, because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers.

    # hammer compute-resource image create \
    --name 'gce_image_name' \
    --compute-resource 'gce_cr' \
    --operatingsystem-id 1 \
    --architecture-id 1 \
    --uuid '3780108136525169178' \
    --username 'admin'

15.4. Adding Google GCE details to a compute profile

Use this procedure to add Google GCE hardware settings to a compute profile. When you create a host on Google GCE using this compute profile, these settings are automatically populated.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the GCE compute resource.

  4. From the Machine Type list, select the machine type to use for provisioning.

  5. From the Image list, select the image to use for provisioning.

  6. From the Network list, select the Google GCE network to use for provisioning.

  7. Optional: Select the Associate Ephemeral External IP checkbox to assign a dynamic ephemeral IP address that Foreman uses to communicate with the host. This public IP address changes when you reboot the host. If you need a permanent IP address, reserve a static public IP address on Google GCE and attach it to the host.

  8. In the Size (GB) field, enter the size of the storage to create on the host.

  9. Click Submit to save the compute profile.

CLI procedure
  1. Create a compute profile to use with the Google GCE compute resource:

    # hammer compute-profile create --name My_GCE_Compute_Profile
  2. Add GCE details to the compute profile:

    # hammer compute-profile values create \
    --compute-attributes "machine_type=f1-micro,associate_external_ip=true,network=default" \
    --compute-profile "My_GCE_Compute_Profile" \
    --compute-resource "My_GCE_Compute_Resource" \
    --volume "size_gb=20"

15.5. Creating image-based hosts on Google Compute Engine

In Foreman, you can use Google Compute Engine provisioning to create hosts from an existing image. The new host entry triggers the Google Compute Engine server to create the instance using the pre-existing image as a basis for the new volume.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the Google Compute Engine connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  8. From the Lifecycle Environment list, select the environment.

  9. Click the Interfaces tab, and on the interface of the host, click Edit.

  10. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. Google Compute Engine assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • The Domain field is populated with the required domain.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  11. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  12. Click the Operating System tab, and confirm that all fields automatically contain values.

  13. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use.

  14. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  15. Click Submit to save the host entry.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --architecture x86_64 \
    --compute-profile "My_Compute_Profile" \
    --compute-resource "My_Compute_Resource" \
    --image "My_GCE_Image" \
    --interface "type=interface,domain_id=1,managed=true,primary=true,provision=true" \
    --location "My_Location" \
    --name "My_Host_Name" \
    --operatingsystem "My_Operating_System" \
    --organization "My_Organization" \
    --provision-method "image" \
    --puppet-ca-proxy-id My_Puppet_CA_Proxy_ID \
    --puppet-environment-id My_Puppet_Environment_ID \
    --puppet-proxy-id My_Puppet_Proxy_ID \
    --root-password "My_Root_Password"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

15.6. Deleting a VM on Google GCE

You can delete VMs running on Google Compute Engine from within Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Google GCE provider.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Google GCE compute resource while retaining any associated hosts within Foreman. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.
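The orphaned host entry can also be removed with Hammer instead of the web UI; a minimal sketch, where the host name is a placeholder for your environment:

```shell
# Delete the orphaned host entry after the VM has been removed from
# the compute resource. "My_Host_Name" is a placeholder.
hammer host delete --name "My_Host_Name"
```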

16. Provisioning cloud instances on Microsoft Azure Resource Manager

Foreman can interact with Microsoft Azure Resource Manager, including creating new virtual machines and controlling their power management states. Only image-based provisioning is supported for creating Azure hosts. This includes provisioning by using Marketplace images, custom images, and shared image gallery.

For more information about Azure Resource Manager concepts, see Azure Resource Manager documentation.

Prerequisites
  • Provide the installation medium for the operating systems that you want to use to provision hosts.

  • Ensure that you have the correct permissions to create an Azure Active Directory application. For more information, see Check Azure AD permissions in the Microsoft identity platform (Azure Active Directory for developers) documentation.

  • You must create and configure an Azure Active Directory application and service principal to obtain the Application (client) ID, Directory (tenant) ID, and client secret. For more information, see Use the portal to create an Azure AD application and service principal that can access resources in the Microsoft identity platform (Azure Active Directory for developers) documentation.

  • Optional: If you want to use Puppet with Azure hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs.

  • Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information about provisioning templates, see Provisioning Templates.

  • Optional: If you want the virtual machine to use a static private IP address, create a subnet in Foreman with the Network Address field matching the Azure subnet address.

  • Before creating RHEL BYOS images, you must accept the image terms either in the Azure CLI or Portal so that the image can be used to create and manage virtual machines for your subscription.

16.1. Installing Microsoft Azure plugin

Install the Microsoft Azure plugin to attach an Azure compute resource provider to Foreman. This allows you to manage and deploy hosts to Azure.

Procedure
  1. Install the Azure compute resource provider on your Foreman server:

    # foreman-installer --enable-foreman-plugin-azure
  2. Optional: In the Foreman web UI, navigate to Administer > About and select the compute resources tab to verify the installation of the Microsoft Azure plugin.

16.2. Adding a Microsoft Azure Resource Manager connection to Foreman server

Use this procedure to add Microsoft Azure as a compute resource in Foreman. Note that you must add a separate compute resource for each Microsoft Azure region that you want to use.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the compute resource.

  3. From the Provider list, select Azure Resource Manager.

  4. Optional: In the Description field, enter a description for the resource.

  5. By default, the Cloud is set to Public/Standard. Azure Government Cloud supports the following regions:

    • US Government

    • China

    • Germany

  6. In the Client ID field, enter your Application or client ID.

  7. In the Client Secret field, enter your client secret.

  8. In the Subscription ID field, enter your subscription ID.

  9. In the Tenant ID field, enter your Directory or tenant ID.

  10. Click Load Regions. This tests if your connection to Azure Resource Manager is successful and loads the regions available in your subscription.

  11. From the Azure Region list, select the Azure region to use.

  12. Click Submit.

CLI procedure
  • Use hammer compute-resource create to add an Azure compute resource to Foreman.

    # hammer compute-resource create \
    --app-ident My_Client_ID \
    --name My_Compute_Resource_Name \
    --provider azurerm \
    --region "My_Region" \
    --secret-key My_Client_Secret \
    --sub-id My_Subscription_ID \
    --tenant My_Tenant_ID

    Note that the value for the --region option must be in lowercase and must not contain special characters.

Important

If you are using Azure Government Cloud, you must also pass the --cloud parameter. The values for the --cloud parameter are:

Name of Azure Government Cloud    Value for hammer --cloud
US Government                     azureusgovernment
China                             azurechina
Germany                           azuregermancloud
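For example, a compute resource in the US Government cloud could be created as follows; all values, including the region, are placeholders for your environment:

```shell
# Sketch: add an Azure Government Cloud compute resource.
# All IDs are placeholders; "usgovvirginia" is an example region name
# (lowercase, no special characters).
hammer compute-resource create \
--app-ident My_Client_ID \
--cloud azureusgovernment \
--name My_Compute_Resource_Name \
--provider azurerm \
--region "usgovvirginia" \
--secret-key My_Client_Secret \
--sub-id My_Subscription_ID \
--tenant My_Tenant_ID
```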

16.3. Adding Microsoft Azure Resource Manager images to Foreman server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Foreman server.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources and click the name of the Microsoft Azure Resource Manager connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the base operating system of the image.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. You cannot use the root user.

  7. Optional: In the Password field, enter a password to authenticate with.

  8. In the Azure Image Name field, enter an image name in the format prefix://UUID.

    • For a custom image, use the prefix custom. For example, custom://image-name.

    • For a shared gallery image, use the prefix gallery. For example, gallery://image-name.

    • For public and RHEL Bring Your Own Subscription (BYOS) images, use the prefix marketplace. For example, marketplace://OpenLogic:CentOS:7.5:latest.

  9. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.
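The three accepted prefixes can be checked with a small helper before saving the image; this function is illustrative only and is not part of Foreman or Hammer:

```shell
# Hypothetical helper: succeed only if the image name starts with
# custom://, gallery://, or marketplace://.
valid_azure_image_name() {
  case "$1" in
    custom://*|gallery://*|marketplace://*) return 0 ;;
    *) return 1 ;;
  esac
}

valid_azure_image_name "marketplace://OpenLogic:CentOS:7.5:latest" && echo "valid"
# prints "valid"
```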

CLI procedure
  • Create the image with the hammer compute-resource image create command. Note that the username that you enter for the image must match the username that you use when you create a host with this image. The --password option is optional when creating an image. You cannot use the root user.

    # hammer compute-resource image create \
    --name Azure_image_name \
    --compute-resource azure_cr_name \
    --uuid 'marketplace://RedHat:RHEL:7-RAW:latest' \
    --username 'azure_username' \
    --user-data no

16.4. Adding Microsoft Azure Resource Manager details to a compute profile

Use this procedure to add Microsoft Azure hardware settings to a compute profile. When you create a host on Microsoft Azure using this compute profile, these settings are automatically populated.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the Azure compute resource.

  4. From the Resource group list, select the resource group to provision to.

  5. From the VM Size list, select a size of a virtual machine to provision.

  6. From the Platform list, select Linux.

  7. In the Username field, enter a user name to authenticate with. Note that the username that you enter for the compute profile must match the username that you used when creating the image.

  8. To authenticate the user, use one of the following options:

    • To authenticate using a password, enter a password in the Password field.

    • To authenticate using an SSH key, enter an SSH key in the SSH Key field.

  9. Optional: If you want the virtual machine to use a premium virtual machine disk, select the Premium OS Disk checkbox.

  10. From the OS Disk Caching list, select the disk caching setting.

  11. Optional: In the Custom Script Command field, enter commands to perform on the virtual machine when the virtual machine is provisioned.

  12. Optional: If you want to run custom scripts when provisioning finishes, in the Comma separated file URIs field, enter comma-separated file URIs of the scripts to use. The scripts must begin with sudo, because Foreman downloads the files to the /var/lib/waagent/custom-script/download/0/ directory on the host and the scripts require sudo privileges to run.

  13. Optional: To add an NVIDIA driver, select the NVIDIA driver / CUDA checkbox. For more information, see the Microsoft Azure documentation.

  14. Optional: If you want to create an additional volume on the VM, click the Add Volume button, enter the Size in GB and select the Data Disk Caching method.

    • Note that the maximum number of these disks depends on the VM Size selected. For more information on Microsoft Azure VM storage requirements, see the Microsoft Azure documentation.

  15. Click Add Interface.

    Important

    The maximum number of interfaces depends on the VM Size selected. For more information, see the Microsoft Azure documentation.

  16. From the Azure Subnet list, select the Azure subnet to provision to.

  17. From the Public IP list, select the public IP setting.

  18. Optional: If you want the virtual machine to use a static private IP, select the Static Private IP checkbox.

  19. Click Submit.

CLI procedure
  1. Create a compute profile to use with the Azure Resource Manager compute resource:

    # hammer compute-profile create --name compute_profile_name
  2. Add Azure details to the compute profile. With the username setting, enter the SSH user name for image access. Note that the username that you enter for the compute profile must match the username that you used when creating the image.

    # hammer compute-profile values create \
    --compute-attributes="resource_group=resource_group,vm_size=Standard_B1s,username=azure_user,password=azure_password,platform=Linux,script_command=touch /var/tmp/text.txt" \
    --compute-profile "compute_profile_name" \
    --compute-resource azure_cr_name \
    --interface="compute_public_ip=Dynamic,compute_network=mysubnetID,compute_private_ip=false" \
    --volume="disk_size_gb=5,data_disk_caching=None"

    Optional: If you want to run scripts on the virtual machine after provisioning, specify the following settings:

    • To enter the script directly, with the script_command setting, enter a command to be executed on the provisioned virtual machine.

    • To run a script from a URI, with the script_uris setting, enter comma-separated file URIs of the scripts to use. The scripts must begin with sudo, because Foreman downloads the files to the /var/lib/waagent/custom-script/download/0/ directory on the host and therefore the scripts require sudo privileges to run.
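A script passed through the Comma separated file URIs field or the script_uris setting might look like the following sketch; the paths and commands are examples only:

```shell
#!/bin/sh
# Example custom script (illustrative only). Foreman downloads it to
# /var/lib/waagent/custom-script/download/0/ on the host and runs it
# without root, so every privileged command must invoke sudo itself.
sudo mkdir -p /opt/myapp
sudo sh -c 'echo "provisioned by Foreman" > /opt/myapp/install.log'
```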

16.5. Creating image-based hosts on Microsoft Azure Resource Manager

In Foreman, you can use Microsoft Azure Resource Manager provisioning to create hosts from an existing image. The new host entry triggers the Microsoft Azure Resource Manager server to create the instance using the pre-existing image as a basis for the new volume.

To use the CLI instead of the Foreman web UI, see the CLI procedure.

Procedure
  1. In the Foreman web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. Foreman populates the form with values from that host group.

  6. From the Deploy on list, select the Microsoft Azure Resource Manager connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  8. From the Lifecycle Environment list, select the environment.

  9. Click the Interfaces tab, and on the interface of the host, click Edit.

  10. Verify that the fields are populated with values. Note in particular:

    • Foreman automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. Microsoft Azure Resource Manager assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • The Azure Subnet field is populated with the required Azure subnet.

    • Optional: If you want to use a static private IP address, from the IPv4 Subnet list select the Foreman subnet with the Network Address field matching the Azure subnet address. In the IPv4 Address field, enter an IPv4 address within the range of your Azure subnet.

    • Ensure that Foreman automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  11. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  12. Click the Operating System tab, and confirm that all fields automatically contain values.

  13. For Provisioning Method, ensure Image Based is selected.

  14. From the Image list, select the Azure Resource Manager image that you want to use for provisioning.

  15. In the Root Password field, enter the root password to authenticate with.

  16. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  17. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  18. Click Submit to save the host entry.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --architecture x86_64 \
    --compute-profile "My_Compute_Profile" \
    --compute-resource "My_Compute_Resource" \
    --domain "My_Domain" \
    --image "My_Azure_Image" \
    --location "My_Location" \
    --name "My_Host_Name" \
    --operatingsystem "My_Operating_System" \
    --organization "My_Organization" \
    --provision-method "image"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

16.6. Deleting a VM on Microsoft Azure

You can delete VMs running on Microsoft Azure from within Foreman.

Procedure
  1. In the Foreman web UI, navigate to Infrastructure > Compute Resources.

  2. Select your Microsoft Azure provider.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Microsoft Azure compute resource while retaining any associated hosts within Foreman. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.

16.7. Uninstalling Microsoft Azure plugin

If you previously installed the Microsoft Azure plugin but no longer use it to manage and deploy hosts to Azure, you can uninstall it from your Foreman server.

Procedure
  1. Uninstall the Azure compute resource provider from your Foreman server:

    # apt remove rubygem-foreman_azure_rm rubygem-ms_rest_azure
    # foreman-installer --no-enable-foreman-plugin-azure
  2. Optional: In the Foreman web UI, navigate to Administer > About and select the Available Providers tab to verify the removal of the Microsoft Azure plugin.

Appendix A: Provisioning FIPS-compliant hosts

Foreman supports provisioning hosts that comply with the National Institute of Standards and Technology’s Security Requirements for Cryptographic Modules standard, reference number FIPS 140-2, referred to here as FIPS.

To enable the provisioning of hosts that are FIPS-compliant, complete the following tasks:

  • Change the provisioning password hashing algorithm for the operating system

  • Create a host group and set a host group parameter to enable FIPS

For more information, see Creating a Host Group in Managing hosts.

The provisioned hosts have the FIPS-compliant settings applied. To confirm that these settings are enabled, complete the steps in Verifying FIPS mode is enabled.

A.1. Changing the provisioning password hashing algorithm

To provision FIPS-compliant hosts, you must first set the password hashing algorithm that you use in provisioning to SHA256. This configuration setting must be applied for each operating system you want to deploy as FIPS-compliant.

Procedure
  1. Identify the Operating System IDs:

    # hammer os list
  2. Update each operating system’s password hash value.

    # hammer os update \
    --password-hash SHA256 \
    --title "My_Operating_System"

    Note that you cannot use a comma-separated list of values.
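Because --title and --id accept only a single value, updating several operating systems takes one command each; a sketch using placeholder IDs taken from the hammer os list output:

```shell
# Sketch: update the password hash for several operating systems.
# The IDs 1, 2, and 3 are placeholders from `hammer os list`.
for os_id in 1 2 3; do
  hammer os update --id "$os_id" --password-hash SHA256
done
```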

A.2. Setting the FIPS-enabled parameter

To provision a FIPS-compliant host, you must create a host group and set the host group parameter fips_enabled to true. If this is not set to true, or is absent, the FIPS-specific changes do not apply to the system. You can set this parameter when you provision a host or for a host group.

To set this parameter when provisioning a host, append --parameters fips_enabled=true to the hammer host create command. To set this parameter for a host group, enter the following command:

# hammer hostgroup set-parameter \
--hostgroup "My_Host_Group" \
--name fips_enabled \
--value "true"
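When creating an individual host instead, the same parameter can be appended directly; a minimal sketch with placeholder values, with most other hammer host create options omitted:

```shell
# Sketch: set fips_enabled at host creation time. The name and host
# group are placeholders; add the remaining options that your
# environment requires.
hammer host create \
--name "My_Host_Name" \
--hostgroup "My_Host_Group" \
--parameters fips_enabled=true
```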

For more information, see the output of the command hammer hostgroup set-parameter --help.

A.3. Verifying FIPS mode is enabled

To verify these FIPS compliance changes have been successful, you must provision a host and check its configuration.

Procedure
  1. Log in to the host as root or with an admin-level account.

  2. Enter the following command:

    $ cat /proc/sys/crypto/fips_enabled

    A value of 1 confirms that FIPS mode is enabled.

Appendix B: Host parameter hierarchy

You can access host parameters when provisioning hosts. Hosts inherit their parameters from the following locations, in order of increasing precedence:

Parameter Level                      Set in Foreman web UI

Globally defined parameters          Configure > Global parameters
Organization-level parameters        Administer > Organizations
Location-level parameters            Administer > Locations
Domain-level parameters              Infrastructure > Domains
Subnet-level parameters              Infrastructure > Subnets
Operating system-level parameters    Hosts > Provisioning Setup > Operating Systems
Host group-level parameters          Configure > Host Groups
Host parameters                      Hosts > All Hosts
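For example, a host-level parameter overrides a globally defined parameter of the same name, because host parameters have the highest precedence; a sketch using placeholder names and values:

```shell
# Sketch: the host-level value wins over the global value for this host.
hammer global-parameter set --name ntp_server --value "pool.example.com"
hammer host set-parameter \
--host "My_Host_Name" \
--name ntp_server \
--value "ntp1.example.com"
```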

Appendix C: Permissions required to provision hosts

The following list provides an overview of the permissions a non-admin user requires to provision hosts.

  • Activation Keys: view_activation_keys

  • Ansible role: view_ansible_roles. Required if Ansible is used.

  • Architecture: view_architectures

  • Compute profile: view_compute_profiles

  • Compute resource:

    • view_compute_resources, create_compute_resources, destroy_compute_resources, power_compute_resources. Required to provision bare-metal hosts.

    • view_compute_resources_vms, create_compute_resources_vms, destroy_compute_resources_vms, power_compute_resources_vms. Required to provision virtual machines.

  • Domain: view_domains

  • Environment: view_environments

  • Host:

    • view_hosts, create_hosts, edit_hosts, destroy_hosts, build_hosts, power_hosts, play_roles_on_host

    • view_discovered_hosts, submit_discovered_hosts, auto_provision_discovered_hosts, provision_discovered_hosts, edit_discovered_hosts, destroy_discovered_hosts. Required if the Discovery service is enabled.

  • Hostgroup: view_hostgroups, create_hostgroups, edit_hostgroups, play_roles_on_hostgroup

  • Image: view_images

  • Location: view_locations

  • Medium: view_media

  • Operatingsystem: view_operatingsystems

  • Organization: view_organizations

  • Parameter: view_params, create_params, edit_params, destroy_params

  • Provisioning template: view_provisioning_templates

  • Ptable: view_ptables

  • Smart Proxy:

    • view_smart_proxies, view_smart_proxies_puppetca

    • view_openscap_proxies. Required if the OpenSCAP plugin is enabled.

  • Subnet: view_subnets
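To grant a set of these permissions to a non-admin user, the permissions are typically collected into a role; a sketch using placeholder names:

```shell
# Sketch: create a role and attach some of the listed permissions to it
# with a filter. The role name and permission set are placeholders.
hammer role create --name "My_Provisioning_Role"
hammer filter create \
--role "My_Provisioning_Role" \
--permissions view_hosts,create_hosts,edit_hosts,build_hosts,power_hosts
```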

1. NetworkManager expects ; as a list separator but currently also accepts ,. For more information, see man nm-settings-keyfile and Shell-like scripting in GRUB