This page describes the requirements for the nodes where your apps and services will be installed.
In this section, “user cluster” refers to a cluster running your apps, which should be separate from the cluster (or single node) running Rancher.
If Rancher is installed on a high-availability Kubernetes cluster, the Rancher server cluster and user clusters have different requirements. For Rancher installation requirements, refer to the node requirements in the installation section.
Make sure the nodes for your user clusters fulfill the following requirements:
Rancher should work with any modern Linux distribution and any modern Docker version. Linux is required for the etcd and controlplane nodes of all downstream clusters. Worker nodes may run Linux or Windows Server. The capability to use Windows worker nodes in downstream clusters was added in Rancher v2.3.0.
Rancher has been tested with and supports downstream clusters running Ubuntu, CentOS, Oracle Linux, RancherOS, and Red Hat Enterprise Linux. For details on which OS and Docker versions were tested with each Rancher version, refer to the support and maintenance terms.
All supported operating systems are 64-bit x86.
If you plan to use ARM64, see Running on ARM64 (Experimental).
For information on how to install Docker, refer to the official Docker documentation.
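As a quick, non-authoritative sketch, one common way to install Docker on a Linux node and confirm it is running is Docker's convenience script; for production nodes, follow the installation method recommended for your distribution:

# Install Docker via the convenience script (review the script before piping it to a shell)
curl -fsSL https://get.docker.com | sh

# Start Docker on boot and verify the client and daemon versions
sudo systemctl enable --now docker
sudo docker version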
Some distributions of Linux derived from RHEL, including Oracle Linux, may have default firewall rules that block communication with Helm. This how-to guide shows how to check the default firewall rules and how to open the ports with firewalld if necessary.
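As an illustrative sketch (zone names and ports depend on your environment), you could inspect and adjust the rules with firewalld like this:

# Show the rules in the currently active zone
sudo firewall-cmd --list-all

# Permanently open a required port (443/tcp is only an example), then reload the rules
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload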
SUSE Linux may have a firewall that blocks all ports by default. In that situation, follow these steps to open the ports needed for adding a host to a custom cluster.
Windows worker nodes can be used as of Rancher v2.3.0
Nodes with Windows Server must run Docker Enterprise Edition.
Windows nodes can be used for worker nodes only. See Configuring Custom Clusters for Windows.
The hardware requirements for nodes with the worker role mostly depend on your workloads. The minimum to run the Kubernetes node components is 1 CPU (core) and 1GB of memory.
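To confirm a candidate worker node meets this minimum, you can check its resources from a shell, for example:

# Number of CPU cores available to the node
nproc

# Total and available memory
free -h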
Regarding CPU and memory, it is recommended that the different planes of Kubernetes clusters (etcd, controlplane, and workers) be hosted on different nodes so that they can scale separately from each other.
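For example, when registering custom nodes into a Rancher-launched cluster, the role flags appended to the node registration command determine which plane each node hosts. The command below is only a sketch with placeholder values; the actual command should always be copied from the Rancher UI:

# Hypothetical registration of a dedicated etcd/controlplane node
# (a worker-only node would use --worker by itself instead)
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://<rancher-server> --token <registration-token> \
  --etcd --controlplane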
For hardware recommendations for large Kubernetes clusters, refer to the official Kubernetes documentation on building large clusters.
For hardware recommendations for etcd clusters in production, refer to the official etcd documentation.
For a production cluster, we recommend that you restrict traffic by opening only the ports defined in the port requirements below.
The ports that need to be open differ depending on how the user cluster is launched. Each of the sections below lists the ports that need to be opened for the different cluster creation options.
For a breakdown of the port requirements for etcd nodes, controlplane nodes, and worker nodes in a Kubernetes cluster, refer to the port requirements for the Rancher Kubernetes Engine.
Details on which ports are used in each situation are found in the following sections:
If security isn’t a large concern and you’re okay with opening a few additional ports, you can use this table as your port reference instead of the comprehensive tables in the following sections.
These ports are typically opened on your Kubernetes nodes, regardless of what type of cluster it is.
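As a simple hedged check that a required port is reachable between nodes, you can probe it from another node with a tool such as netcat; the address and port below are placeholders:

# Test TCP connectivity to the Kubernetes API server port on a control plane node
nc -zv 10.0.0.10 6443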
If you are launching a Kubernetes cluster on your existing infrastructure, refer to these port requirements.
The following table depicts the port requirements for Rancher Launched Kubernetes with custom nodes.
If you are launching a Kubernetes cluster on nodes that are in an infrastructure provider such as Amazon EC2, Google Container Engine, DigitalOcean, Azure, or vSphere, these port requirements apply.
These required ports are automatically opened by Rancher during creation of clusters using cloud providers.
The following table depicts the port requirements for Rancher Launched Kubernetes with nodes created in an Infrastructure Provider.
Note: The required ports are automatically opened by Rancher during creation of clusters in cloud providers like Amazon EC2 or DigitalOcean.
When using the AWS EC2 node driver to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called rancher-nodes. The following rules are automatically added to this security group.
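If you want to review the rules Rancher added, one approach (assuming the AWS CLI is configured for the relevant account and region) is to describe the security group by name:

# List the rancher-nodes security group and its ingress/egress rules
aws ec2 describe-security-groups --filters Name=group-name,Values=rancher-nodes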
If you are launching a cluster with a hosted Kubernetes provider such as Google Kubernetes Engine, Amazon EKS, or Azure Kubernetes Service, refer to these port requirements.
The following table depicts the port requirements for nodes in hosted Kubernetes clusters.
If you are importing an existing cluster, refer to these port requirements.
The following table depicts the port requirements for imported clusters.
Ports marked as local traffic (e.g., 9099 TCP) in the port requirements are used for Kubernetes healthchecks (livenessProbe and readinessProbe). These healthchecks are executed on the node itself. In most cloud environments, this local traffic is allowed by default.
However, this traffic may be blocked when:
- You have applied strict host firewall policies on the node.
- You are using nodes that have multiple interfaces (multihomed).
In these cases, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (e.g., AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as the source or destination in a security group rule, the rule only applies to the private interface of the nodes/instances.
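For example, on AWS you might allow this healthcheck traffic between instances that share a security group; the group name below (rancher-nodes) is only an assumption and should match your actual setup:

# Allow TCP 9099 between instances that are members of the same security group
aws ec2 authorize-security-group-ingress --group-name rancher-nodes \
  --protocol tcp --port 9099 --source-group rancher-nodes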
If you want to provision a Kubernetes cluster that is compliant with the CIS (Center for Internet Security) Kubernetes Benchmark, we recommend following our hardening guide to configure your nodes before installing Kubernetes.
For more information on the hardening guide and details on which version of the guide corresponds to your Rancher and Kubernetes versions, refer to the security section.
SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed for adding the host to a custom cluster:

1. Edit /etc/sysconfig/SuSEfirewall2 and add the required ports:

FW_SERVICES_EXT_TCP="22 80 443 2376 2379 2380 6443 9099 9796 10250 10254 30000:32767"
FW_SERVICES_EXT_UDP="8472 30000:32767"
FW_ROUTE=yes

2. Restart the firewall with the new ports:

SuSEfirewall2

Result: The node has the open ports required to be added to a custom cluster.