orcharhino overview and concepts

orcharhino is a centralized tool for provisioning, remote management, and monitoring of multiple Enterprise Linux deployments. With orcharhino, you can deploy, configure, and maintain your systems across physical, virtual, and cloud environments.

1. Content and patch management with orcharhino

With orcharhino, you can provide content and apply patches to hosts systematically in all lifecycle stages.

1.1. Content flow in orcharhino

Content flow in orcharhino involves management and distribution of content from external sources to hosts.

Content in orcharhino flows from external content sources to orcharhino Server. orcharhino Proxies mirror the content from orcharhino Server to hosts.

External content sources

You can configure many content sources with orcharhino. The supported content sources include the Red Hat Customer Portal, custom Deb and Yum repositories, Git repositories, Ansible collections, Docker Hub, SCAP repositories, or internal data stores of your organization.

orcharhino Server

On your orcharhino Server, you plan and manage the content lifecycle.

orcharhino Proxies

By creating orcharhino Proxies, you can establish content sources in various locations based on your needs. For example, you can establish a content source for each geographical location or multiple content sources for a data center with separate networks.

Hosts

By assigning a host system to a orcharhino Proxy or directly to your orcharhino Server, you ensure that the host receives the content that the Proxy or Server provides. Hosts can be physical or virtual.

The graphics in this section are Red Hat illustrations. Non-Red Hat illustrations are welcome. If you want to contribute alternative images, raise a pull request in the Foreman Documentation GitHub page. Note that in Red Hat terminology, "Satellite" refers to Foreman and "Capsule" refers to Smart Proxy.

Content flow in orcharhino
Additional resources

1.2. Content views in orcharhino

A content view is a deliberately curated subset of content that your hosts can access. By creating a content view, you can define the software versions used by a particular environment or orcharhino Proxy.

Each content view creates a set of repositories across each environment. Your orcharhino Server stores and manages these repositories. For example, you can create content views in the following ways:

  • A content view with older package versions for a production environment and another content view with newer package versions for a development environment.

  • A content view with a package repository required by an operating system and another content view with a package repository required by an application.

  • A composite content view for a modular approach to managing content views. For example, you can use one content view for operating system content and another content view for application content. By creating a composite content view that combines both content views, you create a new repository that merges the repositories from each of the content views. However, the repositories of the underlying content views still exist, and you can continue to manage them separately.
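
If you manage content views from the command line, a minimal Hammer sketch of this modular approach could look as follows. The organization and content view names are placeholders, and subcommands such as content-view component add can differ between orcharhino versions; check hammer content-view --help on your orcharhino Server.

    # hammer content-view create --organization "Example" --name "OS content view"
    # hammer content-view create --organization "Example" --name "App content view"
    # hammer content-view publish --organization "Example" --name "OS content view"
    # hammer content-view publish --organization "Example" --name "App content view"
    # hammer content-view create --organization "Example" --name "OS and App composite" --composite
    # hammer content-view component add --organization "Example" --composite-content-view "OS and App composite" --component-content-view "OS content view" --latest
    # hammer content-view component add --organization "Example" --composite-content-view "OS and App composite" --component-content-view "App content view" --latest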

Default Organization View

A Default Organization View is an application-controlled content view for all content that is synchronized to orcharhino. You can register a host to the Library environment on orcharhino to consume the Default Organization View without configuring content views and lifecycle environments.

You can access unprotected repositories in the Default Organization View content view. The URL consists of your orcharhino Proxy FQDN, /pulp/content/, your organization label, /Library/custom/, your product label, /, your repository label, and a trailing /, for example, https://orcharhino.example.com/pulp/content/Example/Library/custom/AlmaLinux_9/BaseOS/.
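
As a quick check, you can fetch such an unprotected repository over HTTPS from any machine that can reach your orcharhino Server or orcharhino Proxy. The URL below is the example from this section; depending on your certificate setup, you might need to pass the orcharhino CA certificate to curl or use --insecure for a throwaway test.

    # curl https://orcharhino.example.com/pulp/content/Example/Library/custom/AlmaLinux_9/BaseOS/
    # curl https://orcharhino.example.com/pulp/content/Example/Library/custom/AlmaLinux_9/BaseOS/repodata/repomd.xml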

Promoting a content view across environments

When you promote a content view from one environment to the next environment in the application lifecycle, orcharhino updates the repository and publishes the packages.

Example 1. Promoting a package from Development to Testing

The repositories for Testing and Production contain the my-software-1.0-0.noarch.rpm package:

Environment    Version of the content view    Contents of the content view
Development    Version 2                      my-software-1.1-0.noarch.rpm
Testing        Version 1                      my-software-1.0-0.noarch.rpm
Production     Version 1                      my-software-1.0-0.noarch.rpm

If you promote Version 2 of the content view from Development to Testing, the repository for Testing updates to contain the my-software-1.1-0.noarch.rpm package:

Environment    Version of the content view    Contents of the content view
Development    Version 2                      my-software-1.1-0.noarch.rpm
Testing        Version 2                      my-software-1.1-0.noarch.rpm
Production     Version 1                      my-software-1.0-0.noarch.rpm

This ensures that hosts remain assigned to a specific environment but receive updated content when that environment uses a new version of the content view.
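
With Hammer, the promotion from this example might look like the following sketch. The content view name is a placeholder, and the version number and lifecycle environment correspond to the example above; exact option names can vary between orcharhino versions.

    # hammer content-view version promote --organization "Example" --content-view "my-content-view" --version 2 --to-lifecycle-environment "Testing"
    # hammer content-view version list --organization "Example" --content-view "my-content-view"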

With Distribute archived content view versions enabled, you can access unprotected repositories in published content view versions. The URL consists of your orcharhino Proxy FQDN, /pulp/content/, your organization label, /content_views/, your content view, /, your content view version, /custom/, your product label, /, your repository label, and a trailing /, for example, https://orcharhino.example.com/pulp/content/Example/content_views/AlmaLinux_9/2.1/custom/AlmaLinux_9/BaseOS/.

If you want to access the latest published content view, the URL consists of your orcharhino Proxy FQDN, /pulp/content/, your organization label, /, your lifecycle environment, /, your content view, /custom/, your product label, /, your repository label, and a trailing /, for example, https://orcharhino.example.com/pulp/content/Example/Production/AlmaLinux_9/custom/AlmaLinux_9/AlmaLinux_9/.
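
For example, you can verify that such repositories are reachable before pointing hosts at them. Both URLs below are the examples from this section; replace the labels, content view, version, and lifecycle environment with your own values.

    # curl https://orcharhino.example.com/pulp/content/Example/content_views/AlmaLinux_9/2.1/custom/AlmaLinux_9/BaseOS/
    # curl https://orcharhino.example.com/pulp/content/Example/Production/AlmaLinux_9/custom/AlmaLinux_9/AlmaLinux_9/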

You can use these URLs to provide versioned orcharhino Clients during host registration.

Additional resources

1.3. Lifecycle environments and environment paths

Application lifecycles are divided into lifecycle environments which represent each stage of the application lifecycle. By linking lifecycle environments, you create an environment path.

You can promote content along the environment path to the next lifecycle environment when required. When you promote a content view from one environment to the next environment in the application lifecycle, orcharhino updates the repository and publishes the packages. For example, if development ends on a particular version of an application, you can promote this version to the testing environment and start development on the next version.
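
A minimal Hammer sketch of building such an environment path follows. It chains Development, Testing, and Production after the default Library environment; the organization name is a placeholder and option names can differ slightly between orcharhino versions.

    # hammer lifecycle-environment create --organization "Example" --name "Development" --prior "Library"
    # hammer lifecycle-environment create --organization "Example" --name "Testing" --prior "Development"
    # hammer lifecycle-environment create --organization "Example" --name "Production" --prior "Testing"
    # hammer lifecycle-environment paths --organization "Example"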

The graphics in this section are Red Hat illustrations. Non-Red Hat illustrations are welcome. If you want to contribute alternative images, raise a pull request in the Foreman Documentation GitHub page. Note that in Red Hat terminology, "Satellite" refers to Foreman and "Capsule" refers to Smart Proxy.

An environment path containing four environments
Figure 1. An environment path containing four environments

1.4. Content types in orcharhino

With orcharhino, you can import and manage many content types. You can use content from Red Hat as well as from Canonical, Oracle, SUSE, and other custom content.

For example, orcharhino supports the following content types:

RPM packages

Import RPM packages from any repository, for example from Amazon, Oracle, Red Hat, SUSE, and custom repositories. orcharhino Server downloads the RPM packages and stores them locally. You can use these repositories and their RPM packages in content views.

Deb packages

Import Deb packages from repositories, for example, for Debian or Ubuntu. You can also import single Deb packages or synchronize custom repositories. You can use these repositories and their Deb files in content views.

Kickstart trees

Import Kickstart trees to provision hosts. New systems access these Kickstart trees over a network and use them as base content for their installation. orcharhino contains predefined Kickstart templates. You can also create your own Kickstart templates.

Provisioning templates

Templates to provision hosts running Enterprise Linux based on synchronized content, and hosts running Debian, Ubuntu, or SUSE Linux Enterprise Server based on local installation media. orcharhino contains predefined AutoYaST, Kickstart, and Preseed templates, and you can also create your own templates to provision systems and customize the installation.

ISO and KVM images

Download and manage media for installation and provisioning. For example, orcharhino downloads, stores, and manages ISO images and guest images for specific Enterprise Linux operating systems.

Custom file type

Manage custom content for any type of file you require, such as SSL certificates, ISO images, and OVAL files.

1.5. Additional resources

  • For information about how to manage content with orcharhino, see Managing content.

2. Provisioning management with orcharhino

With orcharhino, you can provision hosts on various compute resources with many provisioning methods from a unified interface.

2.1. Provisioning methods in orcharhino

With orcharhino, you can provision hosts by using the following methods.

Bare-metal hosts

orcharhino provisions bare-metal hosts primarily by using PXE boot and MAC address identification. When provisioning bare-metal hosts with orcharhino, you can do the following:

  • Create host entries and specify the MAC address of the physical host to provision.

  • Boot blank hosts to use the orcharhino Discovery service, which creates a pool of hosts that are ready for provisioning.

  • Boot and provision hosts by using PXE-less methods.

Cloud providers

orcharhino connects to private and public cloud providers to provision instances of hosts from images stored in the cloud environment. When provisioning from cloud with orcharhino, you can do the following:

  • Select which hardware profile to use.

  • Provision cloud instances from specific providers by using their APIs.

Virtualization infrastructure

orcharhino connects to virtualization infrastructure services, such as oVirt and VMware. When provisioning virtual machines with orcharhino, you can do the following:

  • Provision virtual machines from virtual image templates.

  • Use the same PXE-based boot methods that you use to provision bare-metal hosts.

For more information, see compute resources.

2.2. Additional resources

3. Major orcharhino components

A typical orcharhino deployment consists of the following components: a orcharhino Server, orcharhino Proxies that mirror content from orcharhino Server, and hosts that receive content and configuration from orcharhino Server and orcharhino Proxies.

3.1. orcharhino Server overview

orcharhino Server is the central component of a orcharhino deployment where you plan and manage the content lifecycle.

A typical orcharhino deployment includes one orcharhino Server on which you perform the following operations:

  • Content lifecycle management

  • Configuration of orcharhino Proxies

  • Configuration of hosts

  • Host provisioning

  • Patch management

  • Subscription management

orcharhino Server delegates content distribution, host provisioning, and communication to orcharhino Proxies. orcharhino Server itself also includes a orcharhino Proxy.

orcharhino Server also contains a fine-grained authentication system. You can grant orcharhino users permissions to access precisely the parts of the infrastructure for which they are responsible.

Additional resources

3.2. orcharhino Proxy overview

With orcharhino Proxies, you can extend the reach and scalability of your orcharhino deployment. orcharhino Proxies provide the following functionalities in a orcharhino deployment:

  • Mirroring content from orcharhino Server to establish content sources in various geographical or logical locations. By registering a host to a orcharhino Proxy, you can configure this host to receive content and configuration from the orcharhino Proxy in its location instead of from the central orcharhino Server.

  • Running localized services to discover, provision, control, and configure hosts.

By using content views, you can specify the exact subset of content that orcharhino Proxy makes available to hosts. For more information, see Content and patch management with orcharhino.

3.3. Overview of hosts in orcharhino

A host is any Linux client that orcharhino manages. Hosts can be physical or virtual.

You can deploy virtual hosts on any platform supported by orcharhino, such as Amazon EC2, Google Compute Engine, libvirt, Microsoft Azure, Oracle Linux Virtualization Manager, oVirt, Proxmox, RHV, and VMware vSphere.

With orcharhino, you can manage hosts at scale, including monitoring, provisioning, remote execution, configuration management, software management, and subscription management.

3.4. List of key open source components of orcharhino Server

orcharhino consists of several open source projects integrated with each other, such as the following:

Foreman

Foreman is a lifecycle management application for physical and virtual systems. It helps manage hosts throughout their lifecycle, from provisioning and configuration to orchestration and monitoring.

Katello

Katello is an optional plugin of Foreman that extends Foreman capabilities with additional features for content, subscription, and repository management. Katello enables orcharhino to subscribe to repositories and to download content.

Candlepin

Candlepin is a service for subscription management.

Pulp

Pulp is a service for repository and content management.

3.5. orcharhino Proxy features

orcharhino Proxies provide local host management services and can mirror content from orcharhino Server.

To mirror content from orcharhino Server, orcharhino Proxy provides the following functionalities:

Repository synchronization

orcharhino Proxies pull content for selected lifecycle environments from orcharhino Server and make this content available to the hosts they manage.

Content delivery

Hosts configured to use orcharhino Proxy download content from that orcharhino Proxy rather than from orcharhino Server.

Host action delivery

orcharhino Proxy executes scheduled actions on hosts.

Red Hat Subscription Management (RHSM) proxy

Hosts are registered to their associated orcharhino Proxies rather than to the central orcharhino Server or the Red Hat Customer Portal.

You can use orcharhino Proxy to run the following services for infrastructure and host management:

DHCP

orcharhino Proxy can manage a DHCP server, including integration with an existing solution, such as ISC DHCP servers, Active Directory, and Libvirt instances.

DNS

orcharhino Proxy can manage a DNS server, including integration with an existing solution, such as ISC BIND and Active Directory.

TFTP

orcharhino Proxy can integrate with any UNIX-based TFTP server.

Realm

orcharhino Proxy can manage Kerberos realms or domains so that hosts can join them automatically during provisioning. orcharhino Proxy can integrate with an existing infrastructure, including FreeIPA and Active Directory.

Puppet server

orcharhino Proxy can act as a configuration management server by running a Puppet server.

Puppet Certificate Authority

orcharhino Proxy can integrate with the Puppet certificate authority (CA) to provide certificates to hosts.

Baseboard Management Controller (BMC)

orcharhino Proxy can provide power management for hosts by using the Intelligent Platform Management Interface (IPMI) or Redfish standards.

Provisioning template proxy

orcharhino Proxy can serve provisioning templates to hosts.

OpenSCAP

orcharhino Proxy can perform security compliance scans on hosts.

Remote Execution (REX)

orcharhino Proxy can run remote job execution on hosts.

You can configure a orcharhino Proxy for a specific limited purpose by enabling only selected features on that orcharhino Proxy. Common configurations include the following:

Infrastructure orcharhino Proxies: DNS + DHCP + TFTP

orcharhino Proxies with these services provide infrastructure services for hosts and have all necessary services for provisioning new hosts.

Content orcharhino Proxies: Pulp

orcharhino Proxies with this service provide content synchronized from orcharhino Server to hosts.

Configuration orcharhino Proxies: Pulp + Puppet + PuppetCA

orcharhino Proxies with these services provide content and run configuration services for hosts.

orcharhino Proxies with DNS + DHCP + TFTP + Pulp + Puppet + PuppetCA

orcharhino Proxies with these services provide a full set of orcharhino Proxy features. By configuring a orcharhino Proxy with all these features, you can isolate hosts assigned to that orcharhino Proxy by providing a single point of connection for the hosts.

3.6. orcharhino Proxy networking

The communication between orcharhino Server and hosts registered to a orcharhino Proxy is routed through that orcharhino Proxy. orcharhino Proxy also provides orcharhino services to hosts.

Many of the services that orcharhino Proxy manages use dedicated network ports. However, orcharhino Proxy ensures that all communications from the host to orcharhino Server use a single source IP address, which simplifies firewall administration.

orcharhino topology with hosts connecting to a orcharhino Proxy

In this topology, orcharhino Proxy provides a single endpoint for all host network communications so that in remote network segments, only firewall ports to the orcharhino Proxy itself must be open.

The graphics in this section are Red Hat illustrations. Non-Red Hat illustrations are welcome. If you want to contribute alternative images, raise a pull request in the Foreman Documentation GitHub page. Note that in Red Hat terminology, "Satellite" refers to Foreman and "Capsule" refers to Smart Proxy.

orcharhino topology with a host
Figure 2. How orcharhino components interact when hosts connect to a orcharhino Proxy
orcharhino topology with hosts connecting directly to orcharhino Server

In this topology, hosts connect to orcharhino Server rather than to a orcharhino Proxy. This also applies to orcharhino Proxies themselves, because each orcharhino Proxy is itself a host of orcharhino Server.

orcharhino topology with a direct host
Figure 3. How orcharhino components interact when hosts connect directly to orcharhino Server
Additional resources

You can find complete instructions for configuring the host-based firewall to open the required ports in the following documents:

3.7. Additional resources

4. orcharhino infrastructure organization concepts

You can use several elements to structure and organize the resources within your orcharhino environment.

4.1. Organizations and locations in orcharhino

On your orcharhino Server, you can define organizations and locations to help organize content, hosts, and configurations. Organizations and locations enable you to arrange orcharhino resources into logically structured groups. For example, you can create groups based on ownership, purpose, content, or security level. You can create and manage multiple organizations through orcharhino, then divide and assign subscriptions to each individual organization.

Organizations

Organizations typically represent different business units, departments, or teams, such as Finance, Marketing, or Web Development. Each organization requires a separate Red Hat subscription manifest.

By creating organizations, you can create logical containers to isolate and manage their configurations separately according to their specific requirements.

Locations

Locations typically represent physical locations, such as countries or cities.

By creating locations, you can define geographical sites where hosts are located. For example, this is useful in environments with multiple data centers.

You can use locations to map the network infrastructure to prevent incorrect host placement or configuration. While you cannot assign a subnet, domain, or compute resources directly to a orcharhino Proxy, you can assign them to a location.

Unlike organizations, locations can have a hierarchical structure.

orcharhino Server defines all locations and organizations. Each orcharhino Proxy synchronizes content and handles configuration of hosts in a different location.

Your orcharhino Server retains the management function, while the content and configuration is synchronized between your orcharhino Server and orcharhino Proxies assigned to certain locations.

Example 2. Example of using organizations and locations in orcharhino

The structure of a multi-national company includes the Finance, Marketing, and Sales departments. The company operates in the United States, the United Kingdom, and Japan.

The system administrator creates the following organizations on their orcharhino Server:

  • Finance

  • Marketing

  • Sales

Additionally, the administrator creates the following locations on their orcharhino Server:

  • United States

  • United Kingdom

  • Japan

The administrator can define a nested location hierarchy to divide the United States location into additional locations based on specific cities:

  • Boston

  • Phoenix

  • San Francisco
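
If you prefer the CLI, a sketch of creating these organizations and locations with Hammer follows. The names mirror the example above, and the parent ID for the nested Boston location is a placeholder; check hammer location create --help for the exact option names in your orcharhino version.

    # hammer organization create --name "Finance"
    # hammer organization create --name "Marketing"
    # hammer organization create --name "Sales"
    # hammer location create --name "United States"
    # hammer location create --name "United Kingdom"
    # hammer location create --name "Japan"
    # hammer location create --name "Boston" --parent-id 2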

4.2. Host groups overview

A host group acts as a template for common host settings.

With host groups, you can define many settings for hosts, such as lifecycle environment, content view, or Ansible roles that are available to the hosts. Instead of defining the settings individually for each host, you can use host groups to define common settings once and apply them to multiple hosts.

You can create nested host groups.

Important

When you change the settings of an existing host group, the new settings do not propagate to the hosts assigned to the host group. Only Puppet class settings get updated on hosts after you change them in the host group.
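
For example, a nested host group setup with Hammer could look like the following sketch. All names are placeholders, and depending on your orcharhino version the parent host group might need to be passed by title or by ID.

    # hammer hostgroup create --name "base" --organizations "Example" --locations "United States"
    # hammer hostgroup create --name "web-servers" --parent "base"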

4.3. Host collections overview

A host collection is a group of content hosts.

With host collections, you can perform the same action on multiple hosts at once. These actions include the installation, removal, and update of packages and errata, change of assigned lifecycle environment, and change of content view.

For example, you can use host collections to group hosts by function, department, or business unit.

4.4. Additional resources

5. Tools for administration of orcharhino

You can use multiple tools to manage orcharhino.

5.1. orcharhino management UI overview

You can manage and monitor your orcharhino infrastructure from a browser with the orcharhino management UI. For example, you can use the following navigation features in the orcharhino management UI:


Organization dropdown

Choose the organization you want to manage.

Location dropdown

Choose the location you want to manage.

Monitor

Provides summary dashboards and reports.

Content

Provides content management tools. This includes content views, activation keys, and lifecycle environments.

Hosts

Provides host inventory and provisioning configuration tools.

Configure

Provides general configuration tools and data, including host groups and Ansible content.

Infrastructure

Provides tools for configuring how orcharhino interacts with the environment.

Notification bell

Provides event notifications to keep administrators informed of important environment changes.

Administer

Provides advanced configuration for settings such as users, role-based access control (RBAC), and general settings.

5.2. Hammer CLI overview

You can configure and manage your orcharhino Server with CLI commands by using Hammer.

Using Hammer has the following benefits:

  • Create shell scripts based on Hammer commands for basic task automation.

  • Redirect output from Hammer to other tools.

  • Use the --debug option with Hammer to test responses to API calls before applying the API calls in a script. For example: hammer --debug organization list.

To issue Hammer commands, a user must have access to your orcharhino Server.
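
For example, you can combine Hammer with standard shell redirection, which is a minimal illustration of the scripting benefits listed above. The file names are placeholders.

    # hammer organization list > organizations.txt
    # hammer --debug organization list > hammer-debug.log 2>&1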

Note

To ensure a user-friendly and intuitive experience, the orcharhino management UI takes priority when developing new functionality. Therefore, some features that are available in the orcharhino management UI might not yet be available for Hammer.

In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, scripts that use API commands communicate directly with the orcharhino API and establish the binding only once.

Additional resources

5.3. orcharhino API overview

You can write custom scripts and external applications that access the orcharhino API over HTTP with the Representational State Transfer (REST) API provided by orcharhino Server. Use the REST API to integrate with enterprise IT systems and third-party applications, perform automated maintenance or error checking tasks, and automate repetitive tasks with scripts.

Using the REST API has the following benefits:

  • Configure any programming language, framework, or system with support for the HTTP protocol to use the API.

  • Create client applications that require minimal knowledge of the orcharhino infrastructure because users discover many details at runtime.

  • Adopt the resource-based REST model for intuitively managing a virtualization platform.

Scripts based on API commands communicate directly with the orcharhino API, which makes them faster than scripts based on Hammer commands or on Ansible Playbooks that rely on modules from the theforeman.foreman collection.
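
For example, you can query the REST API directly with curl. The credentials and host name below are placeholders; /api/hosts is a standard Foreman API endpoint, and in a test environment you might need to provide the orcharhino CA certificate to curl or use --insecure.

    # curl --user admin:changeme --request GET "https://orcharhino.example.com/api/hosts?per_page=10"
    # curl --user admin:changeme --request GET "https://orcharhino.example.com/api/hosts/client.example.com"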

Important

API commands differ between versions of orcharhino. When you prepare to upgrade orcharhino Server, update all the scripts that contain orcharhino API commands.

Additional resources

5.4. Remote execution in orcharhino

With remote execution, you can run jobs on hosts from orcharhino Proxies by using shell scripts or Ansible roles and playbooks.

Use remote execution for the following benefits in orcharhino:

  • Run jobs on multiple hosts at once.

  • Use variables in your commands for more granular control over the jobs you run.

  • Use host facts and parameters to populate the variable values.

  • Specify custom values for templates when you run the command.

Communication for remote execution occurs through orcharhino Proxy, which means that orcharhino Server does not require direct access to the target host, and can scale to manage many hosts.

To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times.

orcharhino uses job templates written in ERB syntax. For more information, see Template Writing Reference in Managing hosts.

By default, orcharhino includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts.
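
For example, you can start a remote execution job from the CLI with Hammer. The job template name below is one of the default script templates shipped with recent Foreman versions, and the search query and command are placeholders; adjust them to your environment.

    # hammer job-invocation create --job-template "Run Command - Script Default" --search-query "name ~ web" --inputs command="uptime"
    # hammer job-invocation list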

Additional resources

5.5. Managing orcharhino with Ansible collections

orcharhino Ansible Collections is a set of Ansible modules that interact with the orcharhino API. You can manage and automate many aspects of orcharhino with orcharhino Ansible collections.
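
For example, on an Ansible control node you can install the collection from Ansible Galaxy and list the modules it provides. The available modules depend on the collection version.

    # ansible-galaxy collection install theforeman.foreman
    # ansible-doc --list theforeman.foreman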

5.6. Kickstart workflow

You can automate the installation process of a orcharhino Server or orcharhino Proxy by creating a Kickstart file that contains all the information that is required for the installation.

When you run a orcharhino Kickstart script, the script performs the following actions:

  1. It specifies the installation location of a orcharhino Server or a orcharhino Proxy.

  2. It installs the predefined packages.

  3. It installs Subscription Manager.

  4. It uses Activation Keys to subscribe the hosts to orcharhino.

  5. It installs Puppet and configures the puppet.conf file to point to the orcharhino Server or orcharhino Proxy instance.

  6. It enables Puppet to run and request a certificate.

  7. It runs user defined snippets.

Additional resources

For more information about Kickstart, see Automated installation workflow in Automatically installing RHEL 8.

6. Supported usage and versions of orcharhino components

orcharhino supports the following use cases, architectures, and versions.

6.1. Client operating systems

Using orcharhino, you can manage multiple operating systems that have orcharhino clients:

  • AlmaLinux

  • Amazon Linux

  • CentOS

  • Debian

  • Oracle Linux

  • Red Hat Enterprise Linux

  • Rocky Linux

  • SUSE Linux Enterprise Server

  • Ubuntu

orcharhino can integrate with the following client features:

  • Ansible

  • OpenSCAP

  • OpenSSH

  • Puppet

  • Salt

  • Windows Remote Management (WinRM)

  • Operating system installers that can perform unattended installations, such as Anaconda or Debian-installer

The Katello plugin provides functionality for content and subscription management. The following utilities are provided for supported client operating systems:

  • Katello host tools

  • Subscription Manager

  • Tracer utility

orcharhino deployment planning

7. Deployment path for orcharhino

During installation and initial configuration of orcharhino, you can customize your deployment to fit your specific needs and operational environment. By customizing each stage of the deployment process, you can choose deployment options that meet the requirements of your organization.

7.1. Installing a orcharhino Server

Installing an instance of orcharhino Server on a dedicated server is the first step to a working orcharhino infrastructure.

Additional resources
  • For complete information on installing a orcharhino Server, including prerequisites and predefined tuning profiles, see Installing orcharhino Server.

7.1.1. Configuring orcharhino Server with external database

The foreman-installer command, which installs orcharhino Server, also installs PostgreSQL databases on the same server. However, you can configure your orcharhino Server to use external databases instead. Moving to external databases distributes the workload and can reduce overall orcharhino memory usage.

Consider using external databases if you plan to use your orcharhino deployment for the following scenarios:

  • Frequent remote execution tasks. These create a high volume of records in PostgreSQL and generate heavy database workloads.

  • High disk I/O workloads from frequent repository synchronization or content view publishing. This requires orcharhino to create a record in PostgreSQL for each job.

  • High volume of hosts.

  • High volume of synchronized content.
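
For illustration only, the following foreman-installer options point orcharhino at an externally hosted PostgreSQL database. The host name, database name, and credentials are placeholders, and the exact option names can differ between versions; run foreman-installer --full-help to confirm them for your installation.

    # foreman-installer \
        --foreman-db-manage false \
        --foreman-db-host postgres.example.com \
        --foreman-db-database foreman \
        --foreman-db-username foreman \
        --foreman-db-password changeme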

Additional resources

7.1.2. Configuring DNS, DHCP, and TFTP

You can manage DNS, DHCP, and TFTP centrally within the orcharhino environment, or you can manage them independently after disabling their maintenance on orcharhino.

Additional resources

7.2. Configuring external authentication in orcharhino

orcharhino includes native support for authentication with a username and password. If you require additional methods of authentication, configure your orcharhino Server to use an external authentication source.

Table 1. External authentication sources supported by orcharhino and the authentication features they provide
Authentication source                   Username and password                    SSO                      OTP   TOTP   PIV cards
Active Directory (direct integration)   Yes                                      Yes                      No    No     No
FreeIPA                                 Yes (Linux and Active Directory users)   Yes (Linux users only)   No    No     No
Quarkus-based Keycloak                  Yes                                      Yes                      Yes   Yes    Yes
Wildfly-based Keycloak                  Yes                                      Yes                      Yes   Yes    Yes
LDAP                                    Yes                                      No                       No    No     No

SSO: single sign-on, OTP: one-time password, TOTP: time-based one-time password

Additional resources

7.3. Planning organization and location context

Context in orcharhino consists of organizations and locations. You can associate most resources, for example hosts, subnets, and domains, with at least one organization and location context.

Resources and users can generally only access resources within their own context, which makes configuring organizations and locations an integral part of access management in orcharhino.

Important

If you use host groups to bundle provisioning and configuration information, avoid mismatching resources from mutually exclusive contexts. For example, setting a subnet from one organization or location and a compute resource from a different organization or location creates an invalid host group.

Additional resources

7.4. Installing orcharhino Proxies

By installing orcharhino Proxies, you extend the reach and scalability of your orcharhino deployment. Setting up a orcharhino Proxy registers the base operating system on which you are installing to orcharhino Server and configures the new orcharhino Proxy to provide the required services within your orcharhino deployment.

You can install a orcharhino Proxy in each of your geographic locations. By assigning a orcharhino Proxy to each location, you decrease the load on orcharhino Server, increase redundancy, and reduce bandwidth usage.

Note

The maximum number of orcharhino Proxies that orcharhino Server can support has no fixed limit. It was tested that a orcharhino Server can support 17 orcharhino Proxies with 2 vCPUs.

Decide what services you want to enable on each orcharhino Proxy. You can configure the DNS, DHCP, and TFTP services on one of your orcharhino Proxies or you can use an external server to provide these services to your orcharhino Proxies.

Additional resources

7.5. Adding a Red Hat subscription to orcharhino

Note

If you want to manage hosts running Red Hat Enterprise Linux, import a Red Hat manifest. For more information, see Managing Red Hat subscriptions in Managing content.

A Red Hat subscription manifest is a set of encrypted files that contains your subscription information. orcharhino Server uses this information to access the Red Hat CDN and find what repositories are available for the associated subscription.

Warning

Deleting a subscription manifest removes all the subscriptions attached to running hosts and activation keys.

Additional resources

7.6. Defining your content library

To ensure that your orcharhino Server can manage software and provide it to your hosts, you must create repositories and synchronize them.

Red Hat content

The Red Hat subscription manifest determines what Red Hat repositories your orcharhino Server can access. Red Hat content is already organized into products.

For example, Red Hat Enterprise Linux Server is a product in orcharhino. The repositories for the Red Hat Enterprise Linux Server product consist of different versions, architectures, and add-ons. When you enable a Red Hat repository, orcharhino automatically creates an associated product.

SUSE content

You can use orcharhino to manage hosts running SUSE Linux Enterprise Server. For more information, see Managing SUSE content in Managing content.

Other sources of content

To distribute content from custom sources, you must create products and repositories manually. You can organize other content into custom products however you want.

For example, you can create an EPEL (Extra Packages for Enterprise Linux) product and add an "EPEL 9 x86_64" repository to it.

Creating repositories allows you to choose the specific software required for your environment. By creating only the necessary repositories, you avoid downloading unnecessary content.

Synchronizing repositories downloads the content from Red Hat CDN or another source to your orcharhino Server. The synchronized content is stored on your orcharhino Server, eliminating the need for hosts to access the repositories. You can synchronize repositories manually, or you can create a sync plan to ensure synchronization runs on a regular basis.
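
For example, creating and synchronizing the EPEL repository mentioned above with Hammer could look like the following sketch. The organization name, repository URL, and sync schedule are placeholders; verify the available options with hammer repository create --help and hammer sync-plan create --help.

    # hammer product create --organization "Example" --name "EPEL"
    # hammer repository create --organization "Example" --product "EPEL" --name "EPEL 9 x86_64" --content-type yum --url "https://dl.fedoraproject.org/pub/epel/9/Everything/x86_64/"
    # hammer repository synchronize --organization "Example" --product "EPEL" --name "EPEL 9 x86_64"
    # hammer sync-plan create --organization "Example" --name "Daily sync" --interval daily --enabled true --sync-date "2025-01-01 03:00:00"
    # hammer product set-sync-plan --organization "Example" --name "EPEL" --sync-plan "Daily sync"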

Additional resources
  • For more information, including procedures for enabling and synchronizing repositories, see Importing custom content in Managing content.

7.7. Defining content access strategies for hosts

When defining your content lifecycle in orcharhino, you can use content views and lifecycle environments to define which hosts can access which content and content versions. By default, orcharhino includes the Default Organization View content view and the Library lifecycle environment.

Default Organization View

The Default Organization View is the default content view in orcharhino that contains all the content that is synchronized to orcharhino. After you update your content, such as by adding or removing a repository, the update is immediately reflected in Default Organization View.

Library

The Library lifecycle environment is the default lifecycle environment in orcharhino. Every newly published content view version is automatically published to the Library lifecycle environment. You can also promote specific content view versions to the Library lifecycle environment if needed.

In smaller deployments or when you do not require content versioning and environment promotion, you can associate a host to the Library environment under the Default Organization View without configuring additional lifecycle environments.

Additional resources

7.8. Defining role-based access control policies

Users in orcharhino can have one or more roles assigned. These roles are associated with permissions that enable users to perform specified administrative actions in orcharhino. Permission filters define the actions allowed for a certain resource type.

orcharhino provides a set of predefined roles with permissions sufficient for standard tasks. You can also configure custom roles.

Note

One of the predefined roles is the Default role. orcharhino assigns the Default role to every user in the system. By default, the Default role grants only a limited set of permissions. Be aware that if you add a permission to the Default role, every orcharhino user gains that permission. Assigning a different role to a user does not remove the Default role from the user.

The following types of roles are commonly defined within various orcharhino deployments:

Roles related to applications or parts of infrastructure

For example, roles for owners of the Enterprise Linux operating system as opposed to roles for owners of application servers and database servers.

Roles related to a particular stage of the software lifecycle

For example, roles divided among the development, testing, and production phases, where each phase has one or more owners.

Roles related to specific tasks

For example, you can create a role for security managers and a role for license managers, depending on the specific tasks users need to be able to perform within your organization.
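
For example, a small task-specific role could be sketched with Hammer as follows. The role name, permission, and user login are placeholders; you can list the available permissions with hammer filter available-permissions.

    # hammer role create --name "Host viewer"
    # hammer filter create --role "Host viewer" --permissions view_hosts
    # hammer user add-role --login jsmith --role "Host viewer"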

Additional resources
  • For more information, including details about creating custom roles and granting permissions to roles, see Managing users and roles in Administering orcharhino.

7.8.1. Best practices for role-based access control in orcharhino

  • Define the expected tasks and responsibilities: Define the subset of the orcharhino infrastructure that you want the role to access as well as actions permitted on this subset. Think of the responsibilities of the role and how it differs from other roles.

  • Use predefined roles whenever possible: orcharhino provides several sample roles that you can use. Copying and editing an existing role can be a good start for creating a custom role.

  • Adopt a granular approach to user role management: Define roles with specific and well-scoped permissions. Note that each user can have multiple roles assigned and that permissions from these roles are cumulative.

  • Add permissions gradually and test the results: When creating a custom role, start with a limited set of permissions and add permissions one by one, testing continuously to verify that the custom role works as intended.

  • Consider areas of interest and granting read-only access: Even though a role has a limited area of responsibility, it might need a wider set of permissions. Therefore, you can grant the role read-only access to the parts of the orcharhino infrastructure that influence its area of responsibility.

7.9. Configuring provisioning

After your basic orcharhino infrastructure is in place, you can start configuring provisioning to ensure that orcharhino can seamlessly create, configure, and manage hosts.

The process depends on whether you want to provision bare-metal hosts, virtual machines, or cloud instances, but it includes defining installation media, configuring provisioning templates, and other tasks. If you are provisioning virtual machines or cloud instances, you must also integrate your compute provider with orcharhino by connecting the provider as a compute resource to orcharhino.

The following orcharhino features support automating the provisioning of your hosts:

  • Provisioning templates enable you to define the way orcharhino installs an operating system on your hosts.

  • The Discovery service enables you to detect unknown hosts and virtual machines on the provisioning network.

  • Host groups enable you to standardize provisioning of host configurations.

Additional resources

7.10. Planning for disaster recovery

Back up your orcharhino data regularly so that you can recover your orcharhino deployment in case of a disaster.

To create backups of your orcharhino Server and orcharhino Proxies, use the foreman-maintain backup command. For more information, see Backing up orcharhino Server and orcharhino Proxy in Administering orcharhino.
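
For example, an offline backup to a dedicated directory might look like the following; the target directory is a placeholder and must have enough free space for your PostgreSQL data and Pulp content. An online backup runs while orcharhino services stay up, whereas an offline backup stops services for the duration of the backup.

    # foreman-maintain backup offline /var/backup/
    # foreman-maintain backup online /var/backup/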

To back up your hosts, you can use remote execution to configure recurring backup tasks that orcharhino runs on the hosts. For more information, see Configuring and setting up remote jobs in Managing hosts.

To create snapshots of hosts, you can use the Snapshot Management plugin. For more information, see Creating snapshots of a host in Managing hosts.

7.11. Additional deployment tasks

orcharhino offers a range of additional capabilities that you can use to further enhance your orcharhino deployment. For example:

Remote execution commands on hosts

With remote execution, you can perform various tasks on multiple hosts simultaneously. orcharhino supports the following modes of transport for remote execution: pull-based mode (over MQTT/HTTPS) and push-based mode (over SSH).

For more information, see Configuring and setting up remote jobs in Managing hosts.

Automating tasks with a configuration management tool

By integrating orcharhino with a configuration management tool, you can automate repetitive tasks and ensure consistent configuration of your hosts.

For more information on using Ansible with orcharhino, see Configuring hosts by using Ansible. You will need to enable the Ansible plugin on your orcharhino Server.

For more information on using Puppet with orcharhino, see Configuring hosts by using Puppet.

For more information on using Salt with orcharhino, see Configuring hosts by using Salt. You will need to enable the Salt plugin on your orcharhino Server.

Security management with OpenSCAP

You can enable the OpenSCAP plugin on your orcharhino Server and any orcharhino Proxies. With OpenSCAP, you can manage compliance policies and run security compliance scans on your hosts. After the scan completes, a compliance report is uploaded to your orcharhino Server.

For more information, see Managing security compliance.

Load balancing

With load balancing configured on your orcharhino Proxies, you can improve the performance of your orcharhino Proxies as well as the performance and stability of host connections to orcharhino.

Incident management with Red Hat Insights

With Red Hat Insights enabled on your orcharhino Server, you can identify key risks to stability, security, and performance.

For more information, see Using Red Hat Insights with orcharhino Server in Installing orcharhino Server.

8. Common deployment scenarios

This section provides a brief overview of common deployment scenarios for orcharhino. Note that many variations and combinations of the following layouts are possible.

8.1. Single location with segregated subnets

Your infrastructure might require multiple isolated subnets even if orcharhino is deployed in a single geographic location. This can be achieved for example by deploying multiple orcharhino Proxies with DHCP and DNS services, but the recommended way is to create segregated subnets using a single orcharhino Proxy. This orcharhino Proxy is then used to manage hosts and compute resources in those segregated networks to ensure they only have to access the orcharhino Proxy for provisioning, configuration, errata, and general management. For more information on configuring subnets, see Managing hosts.

8.2. Multiple locations

ATIX AG recommends creating at least one orcharhino Proxy per geographic location. This practice can save bandwidth because hosts obtain content from a local orcharhino Proxy. Synchronization of content from remote repositories is done only by the orcharhino Proxy, not by each host in a location. In addition, this layout makes the provisioning infrastructure more reliable and easier to configure.

The graphics in this section are Red Hat illustrations. Non-Red Hat illustrations are welcome. If you want to contribute alternative images, raise a pull request in the Foreman Documentation GitHub page. Note that in Red Hat terminology, "Satellite" refers to Foreman and "Capsule" refers to Smart Proxy.

Content Flow in orcharhino

8.3. Content view scenarios

The following section provides general scenarios for deploying content views as well as lifecycle environments.

The default lifecycle environment called Library gathers content from all connected sources. It is not recommended to associate hosts directly with the Library as it prevents any testing of content before making it available to hosts. Instead, create a lifecycle environment path that suits your content workflow. The following scenarios are common:

  • A single lifecycle environment – content from Library is promoted directly to the production stage. This approach limits the complexity but still allows for testing the content within the Library before making it available to hosts.

    A single lifecycle environment
  • A single lifecycle environment path – both operating system and applications content is promoted through the same path. The path can consist of several stages (for example Development, QA, Production), which enables thorough testing but requires additional effort.

    A single lifecycle environment path
  • Application specific lifecycle environment paths – each application has a separate path, which allows for individual application release cycles. You can associate specific compute resources with application lifecycle stages to facilitate testing. On the other hand, this scenario increases the maintenance complexity.

    Application specific lifecycle environment paths

The following content view scenarios are common:

  • All in one content view – a content view that contains all necessary content for the majority of your hosts. Reducing the number of content views is an advantage in deployments with constrained resources (time, storage space) or with uniform host types. However, this scenario limits content view capabilities such as time based snapshots or intelligent filtering. Any change in content sources affects a large proportion of your hosts.

  • Host specific content view – a dedicated content view for each host type. This approach can be useful in deployments with a small number of host types (up to 30). However, it prevents sharing content across host types as well as separation based on criteria other than the host type (for example between operating system and applications). With critical updates every content view has to be updated, which increases maintenance efforts.

  • Host specific composite content view – a dedicated combination of content views for each host type. This approach enables separating host specific and shared content, for example you can have dedicated content views for the operating system and application content. By using a composite, you can manage your operating system and applications separately and at different frequencies.

  • Component based content view – a dedicated content view for a specific application. For example a database content view can be included into several composite content views. This approach allows for greater standardization but it leads to an increased number of content views.

The optimal solution depends on the nature of your host environment. Avoid creating a large number of content views, but keep in mind that the size of a content view affects the speed of related operations (publishing, promoting). Also make sure that when creating a subset of packages for the content view, all dependencies are included as well. Note that Kickstart repositories should not be added to content views, as they are used for host provisioning only.

8.4. orcharhino Server with multiple manifests

If you plan to have more than one Red Hat Network account, or if you want to manage systems belonging to another entity that is also a Red Hat Network account holder, then you and the other account holder can assign subscriptions, as required, to manifests. A customer that does not have a orcharhino subscription can create a Subscription Asset Manager manifest, which can be used with orcharhino, if they have other valid subscriptions. You can then use the multiple manifests in one orcharhino Server to manage multiple organizations.

If you must manage systems but do not have access to the subscriptions for the RPMs, you must use Red Hat Enterprise Linux orcharhino Add-On. For more information, see orcharhino Add-On.

The following diagram shows two Red Hat Network account holders who want their systems to be managed by the same orcharhino installation. In this scenario, Example Corporation 1 can allocate any subset of their 60 subscriptions, in this example 30, to a manifest. This manifest can be imported into orcharhino as a distinct organization. This allows system administrators to manage Example Corporation 1’s systems using orcharhino completely independently of Example Corporation 2’s organizations (R&D, Operations, and Engineering).

orcharhino Server with multiple manifests
Figure 4. orcharhino Server with multiple manifests

When creating a Red Hat subscription manifest:

  • Add the subscription for orcharhino Server to the manifest if planning a disconnected or self-registered orcharhino Server. This is not necessary for a connected orcharhino Server that is subscribed using the Subscription Manager utility on the base system.

  • Add subscriptions for all orcharhino Proxies you want to create.

  • Add subscriptions for all Red Hat products you want to manage with orcharhino.

  • Note the date when the subscriptions are due to expire and plan for their renewal before the expiry date.

  • Create one manifest per organization. You can use multiple manifests and they can be from different Red Hat subscriptions.

orcharhino allows the use of future-dated subscriptions in the manifest. This enables uninterrupted access to repositories when future-dated subscriptions are added to a manifest before the expiry date of existing subscriptions.

Note that the Red Hat subscription manifest can be modified and reloaded to orcharhino Server in case of any changes in your infrastructure, or when adding more subscriptions. Manifests should not be deleted. If you delete the manifest from the Red Hat Customer Portal or in the orcharhino management UI, all of your content hosts are unregistered.

8.5. Host group structures

Because host groups can be nested to inherit parameters from each other, you can design host group hierarchies that fit particular workflows. A well planned host group structure can help to simplify the maintenance of host settings. This section outlines four approaches to organizing host groups.

Host group structuring examples
Figure 5. Host group structuring examples
Flat structure

The advantage of a flat structure is limited complexity, as inheritance is avoided. In a deployment with few host types, this scenario is the best option. However, without inheritance there is a risk of high duplication of settings between host groups.

Lifecycle environment based structure

In this hierarchy, the first host group level is reserved for parameters specific to a lifecycle environment. The second level contains operating system related definitions, and the third level contains application specific settings. Such structure is useful in scenarios where responsibilities are divided among lifecycle environments (for example, a dedicated owner for the Development, QA, and Production lifecycle stages).

Application based structure

This hierarchy is based on roles of hosts in a specific application. For example, it enables defining network settings for groups of back-end and front-end servers. The selected characteristics of hosts are segregated, which supports Puppet-focused management of complex configurations. However, the content views can only be assigned to host groups at the bottom level of this hierarchy.

Location based structure

In this hierarchy, the distribution of locations is aligned with the host group structure. In a scenario where the location (orcharhino Proxy) topology determines many other attributes, this approach is the best option. On the other hand, this structure complicates sharing parameters across locations, therefore in complex environments with a large number of applications, the number of host group changes required for each configuration change increases significantly.

9. Provisioning concepts

An important feature of orcharhino is unattended provisioning of hosts. To achieve this, orcharhino uses DNS and DHCP infrastructures, PXE booting, TFTP, and Kickstart. Use this chapter to understand the working principle of these concepts.

9.1. PXE booting

Preboot execution environment (PXE) provides the ability to boot a system over a network. Instead of using local hard drives or a CD-ROM, PXE uses DHCP to provide the host with standard information about the network, to discover a TFTP server, and to download a boot image.

9.1.1. PXE sequence

  1. The host boots the PXE image if no other bootable image is found.

  2. A NIC of the host sends a broadcast request to the DHCP server.

  3. The DHCP server receives the request and sends standard information about the network: IP address, subnet mask, gateway, DNS, the location of a TFTP server, and a boot image.

  4. The host obtains the boot loader image pxelinux.0 and the configuration file that matches its MAC address from the pxelinux.cfg/ directory on the TFTP server.

  5. The host configuration specifies the location of a kernel image, an initrd, and a Kickstart file.

  6. The host downloads the files and installs the image.

For an example of using PXE Booting by orcharhino Server, see Provisioning Workflow in Provisioning hosts.

9.1.2. PXE booting requirements

To provision machines using PXE booting, ensure that you meet the following requirements:

Network requirements
  • Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent and point to the DHCP server.

Client requirements
  • Ensure that all the network-based firewalls are configured to allow clients on the subnet to access the orcharhino Proxy. For more information, see orcharhino Proxy networking.

  • Ensure that your client has access to the DHCP and TFTP servers.

orcharhino requirements
  • Ensure that both orcharhino Server and orcharhino Proxy have DNS configured and are able to resolve provisioned host names.

  • Ensure that the UDP ports 67 and 68 are accessible by the client to enable the client to receive a DHCP offer with the boot options.

  • Ensure that the UDP port 69 is accessible by the client so that the client can access the TFTP server on the orcharhino Proxy.

  • Ensure that the TCP port 80 is accessible by the client to allow the client to download files and Kickstart templates from the orcharhino Proxy.

  • Ensure that the host provisioning interface subnet has a DHCP orcharhino Proxy set.

  • Ensure that the host provisioning interface subnet has a TFTP orcharhino Proxy set.

  • Ensure that the host provisioning interface subnet has a Templates orcharhino Proxy set.

  • Ensure that DHCP with the correct subnet is enabled using the orcharhino installer.

  • Enable TFTP using the orcharhino installer.
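
For example, on an orcharhino Proxy that uses firewalld, you can open the ports listed above with commands similar to the following. The exact set of required ports depends on the services enabled on your orcharhino Proxy, so treat this as a sketch rather than a complete firewall configuration.

    # firewall-cmd --add-port=67-68/udp --add-port=69/udp --add-port=80/tcp
    # firewall-cmd --runtime-to-permanent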

9.2. HTTP booting

You can use HTTP booting to boot systems over a network using HTTP.

9.2.1. HTTP booting requirements with managed DHCP

To provision machines through HTTP booting, ensure that you meet the following requirements:

Client requirements

For HTTP booting to work, ensure that your environment has the following client-side configurations:

  • All the network-based firewalls are configured to allow clients on the subnet to access the orcharhino Proxy. For more information, see orcharhino Proxy networking.

  • Your client has access to the DHCP and DNS servers.

  • Your client has access to the HTTP UEFI Boot orcharhino Proxy.

Network requirements
  • Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent to point to the DHCP server.

orcharhino requirements

Although the TFTP protocol is not used for HTTP UEFI booting, orcharhino uses the TFTP orcharhino Proxy API to deploy the boot loader configuration.

For HTTP booting to work, ensure that orcharhino has the following configurations:

  • Both orcharhino Server and orcharhino Proxy have DNS configured and are able to resolve provisioned host names.

  • The UDP ports 67 and 68 are accessible by the client so that the client can send and receive a DHCP request and offer.

  • The TCP port 8000 is open for the client to download the boot loader and Kickstart templates from the orcharhino Proxy. See the firewall example after this list.

  • The TCP port 9090 is open for the client to download the boot loader from the orcharhino Proxy using the HTTPS protocol.

  • The subnet that functions as the host’s provisioning interface has a DHCP orcharhino Proxy, an HTTP Boot orcharhino Proxy, a TFTP orcharhino Proxy, and a Templates orcharhino Proxy set.

  • The grub2-efi package is updated to the latest version. To update the grub2-efi package and run the installer, which copies the current boot loader from the /boot directory into the /var/lib/tftpboot directory, enter the following commands:

    # dnf upgrade grub2-efi
    # foreman-installer
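
If the orcharhino Proxy base system uses firewalld, a sketch like the following opens the ports mentioned above. This assumes firewalld is the active firewall and that no other rules conflict:

    # firewall-cmd --permanent --add-port=8000/tcp --add-port=9090/tcp
    # firewall-cmd --reload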

9.2.2. HTTP booting requirements with unmanaged DHCP

To provision machines through HTTP booting without managed DHCP, ensure that you meet the following requirements:

Client requirements
  • HTTP UEFI Boot URL must be set to one of:

    • http://orcharhino-proxy.example.com:8000

    • https://orcharhino-proxy.example.com:9090

  • Ensure that your client has access to the DHCP and DNS servers.

  • Ensure that your client has access to the HTTP UEFI Boot orcharhino Proxy.

  • Ensure that all the network-based firewalls are configured to allow clients on the subnet to access the orcharhino Proxy. For more information, see orcharhino Proxy networking.

Network requirements
  • An unmanaged DHCP server available for clients.

  • An unmanaged DNS server available for clients. If DNS is not available, use IP addresses to configure clients.

orcharhino requirements

Although the TFTP protocol is not used for HTTP UEFI booting, orcharhino uses the TFTP orcharhino Proxy API to deploy the boot loader configuration.

  • Ensure that both orcharhino Server and orcharhino Proxy have DNS configured and are able to resolve provisioned host names.

  • Ensure that the UDP ports 67 and 68 are accessible by the client so that the client can send and receive a DHCP request and offer.

  • Ensure that the TCP port 8000 is open for the client to download the boot loader and Kickstart templates from the orcharhino Proxy.

  • Ensure that the TCP port 9090 is open for the client to download the boot loader from the orcharhino Proxy through HTTPS.

  • Ensure that the host provisioning interface subnet has an HTTP Boot orcharhino Proxy set.

  • Ensure that the host provisioning interface subnet has a TFTP orcharhino Proxy set.

  • Ensure that the host provisioning interface subnet has a Templates orcharhino Proxy set.

  • Update the grub2-efi package to the latest version and execute the installer to copy the recent boot loader from the /boot directory into the /var/lib/tftpboot directory:

    # dnf upgrade grub2-efi
    # foreman-installer
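
To associate the required orcharhino Proxies with the provisioning subnet from the command line, a Hammer call along the following lines can be used. The subnet name and the IDs are illustrative assumptions; check hammer subnet update --help for the options available in your version:

    # hammer subnet update --name "provisioning-network" \
        --httpboot-id 1 --tftp-id 1 --template-id 1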

9.3. Secure boot

When orcharhino is installed on Enterprise Linux using foreman-installer, the grub2 and shim boot loaders signed by Red Hat are deployed into the TFTP and HTTP UEFI Boot directories. PXE loader options named "SecureBoot" configure hosts to load shim.efi.

On Debian and Ubuntu operating systems, the grub2 boot loader is created unsigned by using grub2-mkimage. To perform Secure Boot, you must manually sign the boot loader and enroll its key into the EFI firmware. Alternatively, you can copy grub2 from Ubuntu or Enterprise Linux to perform booting.
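
As a hedged sketch of the manual signing workflow on Debian or Ubuntu, assuming the sbsigntool package is installed and you have generated a Machine Owner Key (MOK.key, MOK.crt, MOK.der):

    # sbsign --key MOK.key --cert MOK.crt --output grubx64.efi.signed grubx64.efi
    # mokutil --import MOK.der
    # reboot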

Grub2 in Enterprise Linux 8.0-8.3 was updated to mitigate the Boot Hole vulnerability, and the keys of existing Enterprise Linux kernels were invalidated. To boot any of the affected Enterprise Linux kernels (or operating system installers), you must manually enroll keys into the EFI firmware for each host:

# pesign -P -h -i /boot/vmlinuz-<version>
# mokutil --import-hash <hash value returned from pesign>
# reboot

Appendix A: Technical users provided and required by orcharhino

During the installation of orcharhino, system accounts are created. They are used to manage files and process ownership of the components integrated into orcharhino. Some of these accounts have fixed UIDs and GIDs, while others take the next available UID and GID on the system instead. To control the UIDs and GIDs assigned to accounts, you can define accounts before installing orcharhino. Because some of the accounts have hard-coded UIDs and GIDs, it is not possible to do this with all accounts created during orcharhino installation.

The following table lists all the accounts created by orcharhino during installation. You can predefine accounts that have Yes in the Flexible UID and GID column with custom UID and GID before installing orcharhino.
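
For example, to predefine the foreman account with a specific UID and GID before installation, you can create the group and user manually. The UID and GID values here are arbitrary examples; the home directory and shell match the table below:

    # groupadd --gid 2001 foreman
    # useradd --uid 2001 --gid foreman --home-dir /usr/share/foreman --shell /sbin/nologin foreman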

Do not change the home directories and shells of the system accounts because orcharhino requires them to work correctly.

Because of potential conflicts with local users that orcharhino creates, you cannot use external identity providers for the system users of the orcharhino base operating system.

Table 2. Technical users provided and required by orcharhino
User name     | UID | Group name    | GID | Flexible UID and GID | Home                                      | Shell
foreman       | N/A | foreman       | N/A | yes                  | /usr/share/foreman                        | /sbin/nologin
foreman-proxy | N/A | foreman-proxy | N/A | yes                  | /usr/share/foreman-proxy                  | /sbin/nologin
apache        | 48  | apache        | 48  | no                   | /usr/share/httpd                          | /sbin/nologin
postgres      | 26  | postgres      | 26  | no                   | /var/lib/pgsql                            | /bin/bash
pulp          | N/A | pulp          | N/A | no                   | N/A                                       | /sbin/nologin
puppet        | 52  | puppet        | 52  | no                   | /opt/puppetlabs/server/data/puppetserver  | /sbin/nologin
saslauth      | N/A | saslauth      | 76  | no                   | /run/saslauthd                            | /sbin/nologin
tomcat        | 53  | tomcat        | 53  | no                   | /usr/share/tomcat                         | /bin/nologin
unbound       | N/A | unbound       | N/A | yes                  | /etc/unbound                              | /sbin/nologin

Appendix B: Glossary of terms used in orcharhino

orcharhino is a complete lifecycle management tool for physical hosts, virtual machines, and cloud instances. Key features include automated host provisioning, configuration management, and content management including patch and errata management. You can automate tasks and quickly provision hosts, all through a single unified interface.

This alphabetically ordered glossary provides an overview of orcharhino related technical terms.

Activation key

Activation keys are used by Subscription Manager to register hosts to orcharhino. They define content view and lifecycle environment associations, content overrides, system purpose attributes, and other parameters to be associated with a newly created host.

They are associated with exactly one lifecycle environment and exactly one content view, though this may be a composite content view. You can use them on multiple machines and they behave like configuration information rather than traditional software license keys. You can also use multiple activation keys with a single host. When you register a host using an activation key, certain content from orcharhino is provided to the host. The content that is made available depends on the content in the activation key’s content view and lifecycle environment, any content overrides present, any repository-level restrictions such as operating system or architecture, and system purpose attributes such as release version.
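
For example, a host can be registered with an activation key by using Subscription Manager; the organization and key names are placeholders:

    # subscription-manager register --org="Example" --activationkey="example-key"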

Activation key
Ansible

Ansible is an agentless open-source automation engine. For hosts running Linux, Ansible uses SSH to connect to hosts. For hosts running Microsoft Windows, Ansible relies on WinRM. It uses playbooks and roles to describe and bundle tasks. Within orcharhino, you can use Ansible to configure hosts and perform remote execution.

For more information about using Ansible to configure hosts, see Configuring hosts by using Ansible. For more information about automating orcharhino using orcharhino Ansible collection, see Managing orcharhino with Ansible collections in Administering orcharhino.

Answer file

A configuration file that defines settings for an installation scenario. Answer files are defined in the YAML format and stored in the /etc/foreman-installer/scenarios.d/ directory. To see the default values for installation scenario parameters, use the foreman-installer --full-help command on your orcharhino Server.
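
As a rough, hedged illustration of the format, an answer file maps installer modules to parameter values; the keys shown here are examples and may not match your installation scenario:

    foreman_proxy:
      tftp: true
      dhcp: false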

ARF report

Asset Reporting Format (ARF) reports are the result of OpenSCAP compliance scans on hosts that have a policy assigned. ARF reports summarize the security compliance of hosts managed by orcharhino. They list compliance criteria and whether the scanned host has passed or failed.

Audits

Provide a report on changes made by a specific user. Audits can be viewed in the orcharhino management UI under Monitor > Audits.

Baseboard management controller (BMC)

Enables remote power management of bare-metal hosts. In orcharhino, you can create a BMC interface to manage selected hosts.

Boot disk

An ISO image used for PXE-less provisioning. This ISO enables the host to connect to orcharhino Server, boot the installation media, and install the operating system. There are several kinds of boot disks: host image, full host image, generic image, and subnet image.

Catalog

A document that describes the desired system state for one specific host managed by Puppet. It lists all of the resources that need to be managed, as well as any dependencies between those resources. Catalogs are compiled by a Puppet server from Puppet Manifests and data from Puppet agents.

Candlepin

A service within Katello responsible for subscription management.

Compliance policy

Compliance policies refer to the application of SCAP content to hosts by using orcharhino with its OpenSCAP plugin. You can create compliance policies by using the orcharhino management UI, Hammer CLI, or API. A compliance policy requires the setting of a specific XCCDF profile from a SCAP content, optionally using a tailoring file. You can set up scheduled tasks on orcharhino that check your hosts for compliance against SCAP content. When a compliance policy scan completes, the host sends an ARF report to orcharhino.

Compute profile

Specifies default attributes for new virtual machines on a compute resource.

Compute resource

A compute resource is an external virtualization or cloud infrastructure that you can attach to orcharhino. orcharhino can provision, configure, and manage hosts within attached compute resources. Examples of compute resources include VMware or libvirt and cloud providers such as Microsoft Azure or Amazon EC2.

Configuration Management

Configuration management describes the task of configuring and maintaining hosts. In orcharhino, you can use Ansible, Puppet, and Salt to configure and maintain hosts with orcharhino as a single source of infrastructure truth.

Container (Docker container)

An isolated application sandbox that contains all runtime dependencies required by an application. orcharhino supports container provisioning on a dedicated compute resource.

Container image

A static snapshot of the container’s configuration. orcharhino supports various methods of importing container images as well as distributing images to hosts through content views.

Content

A general term for everything orcharhino distributes to hosts. Content includes software packages such as .rpm packages, errata, or Docker images. Content is synchronized into the Library and then promoted into lifecycle environments using content views so that hosts can consume it.

Content delivery network (CDN)

The mechanism used to deliver Red Hat content to orcharhino Server.

Content view

Content views are named and versioned collections of repositories. When you publish a content view, orcharhino creates a new content view version. This content view version is a frozen snapshot of the current state of the repositories within the content view. Any subsequent changes to the underlying repositories will no longer affect the published content view version. Once a content view is published, it can be promoted through the lifecycle environment path, or modified using incremental upgrades.

Composite content view

Composite content views contain content views, which allows for a more modular approach to managing and versioning content. You can choose which version of each content view is used in a composite content view.

Discovered host

A bare-metal host detected on the provisioning network by the Discovery plugin.

Discovery image

Refers to the minimal operating system based on Enterprise Linux that is PXE-booted on hosts to acquire initial hardware information and to communicate with orcharhino Server before starting the provisioning process.

Discovery plugin

Enables automatic bare-metal discovery of unknown hosts on the provisioning network. The plugin consists of three components: services running on orcharhino Server and orcharhino Proxy, and the Discovery image running on the host.

Discovery rule

A set of predefined provisioning rules that assign a host group to discovered hosts and trigger provisioning automatically.

Docker tag

A mark used to differentiate container images, typically by the version of the application stored in the image. In the orcharhino management UI, you can filter images by tag under Content > Docker Tags.

Enterprise Linux

An umbrella term for the following Red Hat Enterprise Linux-like operating systems:

  • AlmaLinux

  • CentOS Linux

  • CentOS Stream

  • Oracle Linux

  • Red Hat Enterprise Linux

  • Rocky Linux

ERB

Embedded Ruby (ERB) is a template syntax used in provisioning and job templates.
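
A hedged one-line illustration of ERB syntax in a template, assuming the @host variable that orcharhino exposes to templates:

    <%= @host.name %> runs <%= @host.operatingsystem %>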

Errata

Updated packages containing security fixes, bug fixes, and enhancements. In relation to a host, an erratum is applicable if it updates a package installed on the host and installable if it is present in the host’s content view, which means it is accessible for installation on the host.

External node classifier (ENC)

A construct that provides additional data for a server to use when configuring hosts. When Puppet obtains information about nodes from an external source instead of its own database, that external source is called an external node classifier. If the Puppet plugin is installed, orcharhino can act as an external node classifier to Puppet servers in a orcharhino deployment.

Facter

A program that provides information (facts) about the system on which it is run; for example, Facter can report total memory, operating system version, architecture, and more. Puppet modules enable specific configurations based on host data gathered by Facter.

Facts

Host parameters such as total memory, operating system version, or architecture. Facts are reported by Facter and used by Puppet.

Foreman

Foreman is an open-source component to provision and manage hosts. Foreman is the core upstream component of orcharhino.

Full host image

A boot disk used for PXE-less provisioning of a specific host. The full host image contains an embedded Linux kernel and init RAM disk of the associated operating system installer.

Generic image

A boot disk for PXE-less provisioning that is not tied to a specific host. The generic image sends the MAC address of your host to orcharhino Server, which matches it against the host entry.

Hammer

Hammer is a command-line interface tool for orcharhino. You can execute Hammer commands from the command line or utilize it in scripts. You can use Hammer to automate certain recurring tasks as an alternative to orcharhino Ansible collection or orcharhino API.
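
For example, the following commands list hosts and display the available subcommands; both are standard Hammer invocations:

    # hammer host list
    # hammer --help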

Host

A host is a physical, virtual, or cloud instance registered to orcharhino.

Host collection

A user-defined group of one or more hosts used for bulk actions such as errata installation.

Host group

A host group is a template to build hosts that holds shared parameters, such as subnet or lifecycle environment. It helps to unify configuration management in Ansible, Puppet, and Salt by grouping hosts. You can nest host groups to create a hierarchical structure. For more information, see Working with host groups in Managing hosts.

Host image

A boot disk used for PXE-less provisioning of a specific host. The host image only contains the boot files necessary to access the installation media on orcharhino Server.

Incremental upgrade (of a content view)

The act of creating a new (minor) content view version in a lifecycle environment. Incremental upgrades provide a way to make in-place modification of an already published content view. Useful for rapid updates, for example when applying security errata.

Installation Media

Installation media are sets of installation files used to install the base operating system during the provisioning process. An installation medium in orcharhino represents the installation files for one or more operating systems, which must be accessible over the network, either through a URL or an NFS server location. It is usually either a mirror or a CD or DVD image. Pointing the URL of the installation medium to a local copy, for example http://orcharhino.example.com/pub/installation_media/, can improve provisioning time and reduce network load.

Every operating system is associated with exactly one installation medium path, whereas one installation medium path can serve multiple operating systems at the same time. You can use operating system parameters such as $version, $major, and $minor to parameterize the URL.
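
For example, a parameterized installation medium path might look like the following; the mirror host name is an illustrative assumption:

    http://mirror.example.com/almalinux/$major/BaseOS/x86_64/os/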

Job

A command executed remotely on a host from orcharhino Server. Every job is defined in a job template.

Katello

Katello is an open-source plugin to perform content management and subscription handling. It depends on Pulp for content management, which fetches software from repositories and stores various versions of it. It also depends on Candlepin for host registration and managing subscription manifests. The Katello plugin is always installed on orcharhino.

Lazy sync

The ability to change the default download policy of a repository from Immediate to On Demand. The On Demand setting saves storage space and synchronization time by only downloading the packages when requested by a host.

Location

A collection of default settings that represent a physical place. Location is a tag mostly used for geographical separation of hosts within orcharhino. Examples include different cities or different data centers.

Library

A container for content from all synchronized repositories on orcharhino Server. Libraries exist by default for each organization as the root of every lifecycle environment path and the source of content for every content view.

Lifecycle environment

A lifecycle environment represents a step in the lifecycle environment path. It defines the stage in which certain versions of content are available to hosts, such as development, testing, and production. This way, new versions of software can be developed and tested before being deployed in a production environment, thus reducing the risk of disruption by prematurely rolled out updates. Content moves through lifecycle environments by publishing and promoting content views.

Lifecycle environment path

A sequence of lifecycle environments through which content views are promoted. You can promote a content view through a typical promotion path, for example, from development to test to production. All lifecycle environment paths originate from the Library environment, which is always present by default.

Manifest (Red Hat subscription manifest)

A mechanism for transferring subscriptions from the Red Hat Customer Portal to orcharhino. Do not confuse it with a Puppet manifest.

Migrating orcharhino

The process of moving an existing orcharhino installation to a new instance.

OpenSCAP

A project implementing security compliance auditing according to the Security Content Automation Protocol (SCAP). OpenSCAP is integrated in orcharhino to provide compliance auditing for hosts.

orcharhino Customer Center (OCC)

orcharhino Customer Center provides all content from ATIX AG to install, update, and run orcharhino Server and orcharhino Proxies. It also provides all orcharhino Clients. For more information, see Registering orcharhino Server to OCC and Adding orcharhino Clients manually in the ATIX Service Portal.

orcharhino Proxy

orcharhino Proxies can provide DHCP, DNS, and TFTP services and act as an Ansible control node, Puppet server, or Salt Master in separate networks. They interact with orcharhino Server in a client-server model. orcharhino Server always comes bundled with an integrated orcharhino Proxy.

orcharhino Proxies are required in orcharhino deployments that manage IT infrastructure spanning multiple networks and are useful for orcharhino deployments across various geographical locations.

Organization

An isolated collection of systems, content, and other functionality within orcharhino. Organization is a tag used for organizational separation of hosts within orcharhino. Examples include different teams or business units.

Parameter

Defines the behavior of orcharhino components during provisioning. Depending on the parameter scope, we distinguish between global, domain, host group, and host parameters. Depending on the parameter complexity, we distinguish between simple parameters (key-value pair) and smart parameters (conditional arguments, validation, overrides).

Parametrized class (smart class parameter)

A parameter created by importing a class from Puppet server.

Patch and release management

Patch and release management describes the process of acquiring, managing, and installing patches and software updates to your infrastructure. Using orcharhino, you can keep control of the package versions available to your hosts and deliver applicable errata.

Permission

Defines an action related to a selected part of orcharhino infrastructure (resource type). Each resource type is associated with a set of permissions, for example the Architecture resource type has the following permissions: view_architectures, create_architectures, edit_architectures, and destroy_architectures. You can group permissions into roles and associate them with users or user groups.

Product

Products are named collections of one or more repositories. If you manage Red Hat content and upload a Red Hat manifest, orcharhino automatically groups Red Hat content within products. If you manage SUSE content using SCC Manager plugin, orcharhino automatically groups SUSE content within products. For more information, see Managing SUSE content in Managing content.

Promote (a content view)

The act of moving a content view from one lifecycle environment to another. For more information, see Promoting a content view in Managing content.

Provisioning

The provisioning of a host is the deployment of the base operating system on the host and registration of the host to orcharhino. Optionally, the process continues with the supply of content and configuration. This process is ideally automated. Provisioned hosts run on compute resources or bare metal, never on orcharhino Server or orcharhino Proxies.

Provisioning template

Provisioning templates are templates that automate deployment of an operating system on hosts. orcharhino contains provisioning templates for all supported host operating system families:

  • AutoYaST for SUSE Linux Enterprise Server

  • Kickstart for AlmaLinux, Amazon Linux, CentOS, Oracle Linux, Red Hat Enterprise Linux, and Rocky Linux

  • Preseed files for Debian and Ubuntu

Publish (a content view)

The act of making a content view version available in a lifecycle environment and usable by hosts.

Pulp

A service within Katello responsible for repository and content management.

Puppet

Puppet is a configuration management tool utilizing a declarative language in a server-client architecture. For more information about using Puppet to configure hosts, see Configuring hosts by using Puppet.

Puppet agent

A service running on a host that applies configuration changes to that host.

Puppet environment

An isolated set of Puppet agent nodes that can be associated with a specific set of Puppet Modules.

Puppet manifest

Refers to Puppet scripts, which are files with the .pp extension. The files contain code to define a set of necessary resources, such as packages, services, files, users and groups, and so on, using a set of key-value pairs for their attributes.

Puppet server

A orcharhino Proxy component that provides a Puppet catalog to hosts for execution by the Puppet agent.

Puppet module

A self-contained bundle of code (Puppet Manifests) and data (facts) that you can use to manage resources such as users, files, and services.

PXE

PXE stands for preboot execution environment and is used to boot operating systems received from the network rather than a local disk. It requires a compatible network interface card (NIC) and relies on DHCP and TFTP.

Recurring logic

A job executed automatically according to a schedule. In the orcharhino management UI, you can view those jobs under Monitor > Recurring logics.

Registry

An archive of container images. orcharhino supports importing images from local and external registries. orcharhino itself can act as an image registry for hosts. However, hosts cannot push changes back to the registry.

Remote execution (REX)

Remote execution is the process of using orcharhino to run commands on registered hosts.

Repository

A repository is a single source and the smallest unit of content in orcharhino. You can either synchronize a repository with a URL or manually upload content to orcharhino. orcharhino supports multiple content types. For more information, see Content types in orcharhino in Managing content. One or more repositories form a product.

Resource type

Refers to a part of orcharhino infrastructure, for example host, orcharhino Proxy, or architecture. Used in permission filtering.

Role

Specifies a collection of permissions that are applied to a set of resources, such as hosts. Roles can be assigned to users and user groups. orcharhino provides a number of predefined roles.

Salt

Salt is a configuration management tool used to maintain hosts in certain defined states, for example have packages installed or services running. It is designed to be idempotent. For more information about using Salt to configure hosts, see Configuring hosts by using Salt.

SCAP content

SCAP stands for Security Content Automation Protocol and refers to .xml files containing the configuration and security baseline against which hosts are checked. orcharhino uses SCAP content in compliance policies.

Subnet image

A type of generic image for PXE-less provisioning that communicates through orcharhino Proxy.

Subscription

An entitlement for receiving content and service from Red Hat.

Subscription Manager

Subscription Manager is a client application to register hosts to orcharhino. subscription-manager uses activation keys to consume content on hosts.

SUSE Subscription

You can use orcharhino to manage SUSE content. For more information, see Managing SUSE content in Managing content.

Synchronization

Synchronization describes the process of fetching content from external repositories into the orcharhino Server.

Sync plan

Sync plans describe the scheduled synchronization of content from external content sources.

Tailoring files

Tailoring files specify a set of modifications to existing SCAP content. They adapt SCAP content to your particular needs without changing the original SCAP content itself.

Task

A background process executed on orcharhino Server or orcharhino Proxy, such as repository synchronization or content view publishing. You can monitor the task status in the orcharhino management UI under Monitor > orcharhino Tasks > Tasks.

Updating orcharhino

The process of advancing your orcharhino Server and orcharhino Proxy installations from one patch release to the next, for example orcharhino 7.0.0 to orcharhino 7.0.1.

Upgrading orcharhino

The process of advancing your orcharhino Server and orcharhino Proxy installations from one minor release to the next, for example orcharhino 6.11 to orcharhino 7.0.

User group

A collection of roles which can be assigned to a collection of users.

User

Anyone registered to use orcharhino. Authentication and authorization are possible through built-in logic, through external resources (LDAP, Identity Management, or Active Directory), or with Kerberos.

Virtualization

Virtualization describes the process of running multiple operating systems with various applications on a single hardware host using hypervisors like VMware, Proxmox, or libvirt. It facilitates scalability and cost savings. You can attach virtualization infrastructure as compute resources to orcharhino. Enable appropriate plugins to access this feature.

virt-who

An agent for retrieving IDs of virtual machines from the hypervisor. When used with orcharhino, virt-who reports those IDs to orcharhino Server so that it can provide subscriptions for hosts provisioned on virtual machines.

XCCDF profiles

Extensible Configuration Checklist Description Format (XCCDF) profiles are a component of SCAP content. XCCDF is a language to write security checklists and benchmarks. An XCCDF file contains security configuration rules for lists of hosts.

Appendix C: CLI help

orcharhino offers multiple user interfaces: the orcharhino management UI, Hammer CLI, the API, and the Ansible collection theforeman.foreman. If you want to administer orcharhino on the command line, see the following help output.

orcharhino services

A set of services that orcharhino Server and orcharhino Proxies use for operation. You can use the foreman-maintain tool to manage these services. To see the full list of services, enter the foreman-maintain service list command on the machine where orcharhino or orcharhino Proxy is installed. For more information, run foreman-maintain --help on your orcharhino Server or orcharhino Proxy.

orcharhino plugins

You can extend orcharhino by installing plugins. For more information, run foreman-installer --full-help on your orcharhino Server or orcharhino Proxy.

Hammer CLI

You can manage orcharhino on the command line using hammer. For more information on using Hammer CLI, see Using the Hammer CLI tool or run hammer --help on your orcharhino Server or orcharhino Proxy.