Available as of v2.3.0
When provisioning a custom cluster in Rancher, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure.
You can use a mix of Linux and Windows hosts as your cluster nodes. Windows nodes can only be used for deploying workloads, while Linux nodes are required for cluster management.
You can only add Windows nodes to a cluster if Windows support is enabled. Windows support can be enabled for new custom clusters that use Kubernetes 1.15+ and the Flannel network provider. Windows support cannot be enabled for existing clusters.
Windows clusters have more requirements than Linux clusters. For example, Windows nodes must have 50 GB of disk space. Make sure your Windows cluster fulfills all of the requirements.
For a summary of Kubernetes features supported in Windows, see the Kubernetes documentation on supported functionality and limitations for using Kubernetes with Windows or the guide for scheduling Windows containers in Kubernetes.
Before provisioning a new cluster, be sure that you have already installed Rancher on a device that accepts inbound network traffic. This is required in order for the cluster nodes to communicate with Rancher. If you have not already installed Rancher, please refer to the installation documentation before proceeding with this guide.
Note on Cloud Providers: If you set a Kubernetes cloud provider in your cluster, some additional steps are required. You might want to set a cloud provider if you want to leverage a cloud provider's capabilities, for example, to automatically provision storage, load balancers, or other infrastructure for your cluster. Refer to this page for details on how to configure a cloud provider for a cluster of nodes that meet the prerequisites.
For a custom cluster, the general node requirements for networking, operating systems, and Docker are the same as the node requirements for a Rancher installation.
In order to add Windows worker nodes to a cluster, each node must be running one of the following Windows Server versions and the corresponding version of Docker Engine - Enterprise Edition (EE):
Notes:
- If you are using AWS, Rancher recommends Microsoft Windows Server 2019 Base with Containers as the Amazon Machine Image (AMI).
- If you are using GCE, Rancher recommends Windows Server 2019 Datacenter for Containers as the OS image.

Notes:
The hosts in the cluster need to have at least:
- 2-core CPUs
- 5 GB of memory
- 50 GB of disk space

Rancher will not provision the node if the node does not meet these requirements.
Rancher only supports Windows using Flannel as the network provider.
There are two network options: Host Gateway (L2bridge) and VXLAN (Overlay). The default option is VXLAN (Overlay) mode.
For Host Gateway (L2bridge) networking, it’s best to use the same Layer 2 network for all nodes. Otherwise, you need to configure the route rules for them. For details, refer to the documentation on configuring cloud-hosted VM routes. You will also need to disable private IP address checks if you are using Amazon EC2, Google GCE, or Azure VM.
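For example, on Amazon EC2 the private IP address check can be disabled through the AWS CLI. This is a minimal sketch, assuming the AWS CLI is configured for your account and `<INSTANCE_ID>` is replaced with the ID of each node's instance:

```
# Disable the EC2 source/destination check so the instance can forward
# traffic on behalf of other hosts, as Host Gateway (L2bridge) requires.
aws ec2 modify-instance-attribute --instance-id <INSTANCE_ID> --no-source-dest-check
```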
For VXLAN (Overlay) networking, the KB4489899 hotfix must be installed. Most cloud-hosted VMs already have this hotfix.
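To confirm the hotfix is present on a given Windows host, you can check for it with the standard `Get-HotFix` PowerShell cmdlet, for example:

```powershell
# Returns the hotfix entry if KB4489899 is installed; reports an error if it is not.
Get-HotFix -Id KB4489899
```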
The Kubernetes cluster management nodes (etcd and controlplane) must be run on Linux nodes.
The worker nodes, where your workloads are deployed, will typically be Windows nodes, but there must be at least one worker node that runs on Linux in order to run the Rancher cluster agent, DNS, metrics server, and Ingress-related containers.
We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy:

| Node | Operating System | Kubernetes Cluster Role |
| --- | --- | --- |
| Node 1 | Linux | Control plane, etcd |
| Node 2 | Linux | Worker |
| Node 3 | Windows | Worker |
Windows requires that containers be built on the same Windows Server version that they are deployed on. Therefore, containers must be built on Windows Server core version 1809 or above. If you have existing containers built for an earlier Windows Server core version, they must be re-built on Windows Server core version 1809 or above.
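As an illustration, a Windows container image targeting these nodes would pin its base image to a matching Windows Server core version. The Dockerfile below is a hypothetical sketch, not a required configuration:

```dockerfile
# Hypothetical example: base the image on Windows Server core 1809
# so it matches the Windows Server version of the worker nodes.
FROM mcr.microsoft.com/windows/servercore:1809
CMD ["cmd", "/c", "echo hello from a Windows container"]
```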
This tutorial describes how to create a Rancher-provisioned cluster with the three nodes in the recommended architecture.
When you provision a custom cluster with Rancher, you will add nodes to the cluster by installing the Rancher agent on each one. When you create or edit your cluster from the Rancher UI, you will see a Customize Node Run Command that you can run on each server to add it to your custom cluster.
To set up a custom cluster with support for Windows nodes and containers, you will need to complete the tasks below.
To begin provisioning a custom cluster with Windows support, prepare your hosts.
Your hosts can be:
- Cloud-hosted VMs
- VMs from virtualization clusters
- Bare-metal servers
You will provision three nodes:
- One Linux node, which runs the Kubernetes control plane and stores your cluster data (the Control Plane and etcd roles)
- One Linux worker node, which supports the Rancher cluster agent, DNS, metrics server, and Ingress
- One Windows worker node, which runs your Windows containers
If your nodes are hosted by a cloud provider and you want automation support such as load balancers or persistent storage devices, your nodes have additional configuration requirements. For details, see Selecting Cloud Providers.
The instructions for creating a custom cluster that supports Windows nodes are very similar to the general instructions for creating a custom cluster with some Windows-specific requirements.
Windows support can only be enabled if the cluster uses Kubernetes v1.15+ and the Flannel network provider.
1. From the Global view, click on the Clusters tab and click Add Cluster.
2. Click From existing nodes (Custom).
3. Enter a name for your cluster in the Cluster Name text box.
4. In the Kubernetes Version dropdown menu, select v1.15 or above.
5. In the Network Provider field, select Flannel.
6. In the Windows Support section, click Enable.
7. Optional: After you enable Windows support, you will be able to choose the Flannel backend. There are two network options: Host Gateway (L2bridge) and VXLAN (Overlay). The default option is VXLAN (Overlay) mode.
8. Click Next.
Important: For Host Gateway (L2bridge) networking, it’s best to use the same Layer 2 network for all nodes. Otherwise, you need to configure the route rules for them. For details, refer to the documentation on configuring cloud-hosted VM routes. You will also need to disable private IP address checks if you are using Amazon EC2, Google GCE, or Azure VM.
This section describes how to register your Linux and Windows nodes to your custom cluster.
The first node in your cluster should be a Linux host that has both the Control Plane and etcd roles. At a minimum, both of these roles must be enabled for this node, and this node must be added to your cluster before you can add Windows hosts.
In this section, we will fill out a form in the Rancher UI to get a custom command that installs the Rancher agent on the Linux master node. Then we will copy the command and run it on our Linux master node to register the node in the cluster.
1. In the Node Operating System section, click Linux.
2. In the Node Role section, choose at least etcd and Control Plane. We recommend selecting all three roles.
3. Optional: If you click Show advanced options, you can customize the settings for the Rancher agent and node labels.
4. Copy the command displayed on the screen to your clipboard.
5. SSH into your Linux host and run the command that you copied to your clipboard.
6. When you are finished provisioning your Linux node(s), select Done.
Result: Your cluster is created and assigned a state of Provisioning. You can access your cluster once its state is updated to Active. Active clusters are assigned two Projects: Default (containing the namespace default) and System (containing the namespaces cattle-system, ingress-nginx, kube-public, and kube-system).
It may take a few minutes for the node to be registered in your cluster.
After the initial provisioning of your custom cluster, your cluster only has a single Linux host. Next, we add another Linux worker host, which will be used to support the Rancher cluster agent, metrics server, DNS, and Ingress for your cluster.
1. From the Global view, click Clusters.
2. Go to the custom cluster that you created and click Ellipsis (…) > Edit.
3. Scroll down to Node Operating System. Choose Linux.
4. In the Customize Node Run Command section, go to the Node Options and select the Worker role.
5. Copy the command displayed on screen to your clipboard.
6. Log in to your Linux host using a remote Terminal connection. Run the command copied to your clipboard.
7. From Rancher, click Save.
Result: The Worker role is installed on your Linux host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.
Note: Taints on Linux Worker Nodes

For each Linux worker node added to the cluster, the following taint is applied. Because of this taint, any workloads added to the Windows cluster will be automatically scheduled to the Windows worker nodes. If you want to schedule workloads specifically onto a Linux worker node, you will need to add tolerations to those workloads, as shown in the sketch below.

| Taint Key | Taint Value | Taint Effect |
| --- | --- | --- |
| cattle.io/os | linux | NoSchedule |
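For example, a workload that should run on the Linux worker node would add a toleration for this taint and, typically, a node selector on the standard Linux OS label (kubernetes.io/os, available on Kubernetes 1.14+). This is a minimal sketch; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod      # placeholder name
spec:
  nodeSelector:
    kubernetes.io/os: linux # schedule only onto Linux nodes
  tolerations:
  - key: cattle.io/os       # tolerate the taint Rancher adds to Linux workers
    operator: Equal
    value: linux
    effect: NoSchedule
  containers:
  - name: app
    image: nginx            # placeholder image
```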
You can add Windows hosts to a custom cluster by editing the cluster and choosing the Windows option.
1. Scroll down to Node Operating System. Choose Windows. Note: You will see that the worker role is the only available role.
2. Copy the command displayed on screen to your clipboard.
3. Log in to your Windows host using your preferred tool, such as Microsoft Remote Desktop. Run the command copied to your clipboard in the Command Prompt (CMD).
4. Optional: Repeat these instructions if you want to add more Windows nodes to your cluster.
Result: The Worker role is installed on your Windows host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster. You now have a Windows Kubernetes cluster.
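Once the cluster is active, you can verify from a terminal that all three nodes registered and report the expected operating systems, assuming kubectl is configured to connect to the cluster:

```
# Lists the nodes along with their roles, OS image, and internal IP.
kubectl get nodes -o wide
```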
After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend also setting up alternate ways of accessing your cluster, such as with the kubectl CLI.
If you are using Azure VMs for your nodes, you can use Azure files as a storage class for the cluster.
In order to have the Azure platform create the required storage resources, follow these steps:
1. Configure the Azure cloud provider.
2. Configure kubectl to connect to your cluster.
3. Copy the ClusterRole and ClusterRoleBinding manifest for the service account:

   ```yaml
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: system:azure-cloud-provider
   rules:
   - apiGroups: ['']
     resources: ['secrets']
     verbs: ['get', 'create']
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: system:azure-cloud-provider
   roleRef:
     kind: ClusterRole
     apiGroup: rbac.authorization.k8s.io
     name: system:azure-cloud-provider
   subjects:
   - kind: ServiceAccount
     name: persistent-volume-binder
     namespace: kube-system
   ```

4. Create these resources in your cluster using the following command:

   ```
   kubectl create -f <MANIFEST>
   ```
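With the manifest applied and the Azure cloud provider configured, you can then define a storage class backed by Azure files and claim storage from it. The following is a minimal sketch using the built-in kubernetes.io/azure-file provisioner; the names azurefile and azurefile-pvc, the SKU, and the requested size are illustrative placeholders:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile             # placeholder name
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS       # Azure storage account SKU
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc         # placeholder name
spec:
  accessModes:
    - ReadWriteMany           # Azure files supports shared read-write access
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi            # placeholder size
```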