Rancher is installed either on a single Docker node or on a Kubernetes cluster. If you are installing Rancher on a single node, the main architecture recommendation that applies to your installation is that the node running Rancher should be separate from downstream clusters.
This section covers the following topics:

- Separation of Rancher and user clusters
- Why a high-availability installation is recommended for production
- Recommended load balancer configuration for Kubernetes installations
- Environment for Kubernetes installations
- Recommended node roles for Kubernetes installations
- Architecture for an authorized cluster endpoint
A user cluster is a downstream Kubernetes cluster that runs your apps and services.
If you have a Docker installation of Rancher, the node running the Rancher server should be separate from your downstream clusters.
In Kubernetes installations of Rancher, the Rancher server cluster should also be separate from the user clusters.
We recommend installing the Rancher server on a three-node Kubernetes cluster for production, primarily because it protects the Rancher server data. The Rancher server stores its data in etcd in both single-node and Kubernetes installations.
When Rancher is installed on a single node, if the node goes down, there is no copy of the etcd data available on other nodes and you could lose the data on your Rancher server.
By contrast, in a high-availability installation, the etcd data is replicated across the three nodes, so the Rancher server data is preserved if one of the nodes fails.
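As an illustration, Rancher is typically installed on such a three-node cluster with the Rancher Helm chart. The commands below are a minimal sketch only: they assume cert-manager is already installed, and rancher.example.com is a placeholder hostname.

```
# Add the Helm repository for stable Rancher releases
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Install Rancher into the cattle-system namespace with three replicas,
# so a copy of the Rancher server runs on each node of the cluster.
# (The default certificate configuration requires cert-manager to be installed first.)
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.example.com \
  --set replicas=3
```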
We recommend the following configuration for the load balancer and Ingress controllers:

- The DNS for Rancher should resolve to a Layer 4 load balancer (TCP).
- The load balancer should forward port TCP/80 and TCP/443 to all three nodes in the Kubernetes cluster.
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.

Figure: Rancher installed on a Kubernetes cluster with a Layer 4 load balancer (TCP), depicting SSL termination at the ingress controllers.
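One common way to provide this kind of Layer 4 (TCP) load balancer is NGINX running in stream mode. The configuration below is only a sketch with placeholder node IPs; any TCP-capable load balancer, such as a cloud provider's L4 load balancer, can fill the same role.

```
worker_processes 4;

events {
    worker_connections 8192;
}

stream {
    # Forward TCP/80 to all three nodes in the Rancher server cluster
    upstream rancher_servers_http {
        least_conn;
        server <NODE_1_IP>:80 max_fails=3 fail_timeout=5s;
        server <NODE_2_IP>:80 max_fails=3 fail_timeout=5s;
        server <NODE_3_IP>:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    # Forward TCP/443 to all three nodes; SSL is not terminated here but
    # passed through to the ingress controllers on the nodes
    upstream rancher_servers_https {
        least_conn;
        server <NODE_1_IP>:443 max_fails=3 fail_timeout=5s;
        server <NODE_2_IP>:443 max_fails=3 fail_timeout=5s;
        server <NODE_3_IP>:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
```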
It is strongly recommended to install Rancher on a Kubernetes cluster running on hosted infrastructure such as Amazon EC2 or Google Compute Engine.
For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can create or import clusters for running your workloads.
It is not recommended to install Rancher on top of a managed Kubernetes service such as Amazon’s EKS or Google Kubernetes Engine. These hosted Kubernetes solutions do not expose etcd to a degree that is manageable for Rancher, and their customizations can interfere with Rancher operations.
We recommend installing Rancher on a Kubernetes cluster in which each node has all three Kubernetes roles: etcd, controlplane, and worker.
Our recommendation for node roles on the Rancher server cluster contrasts with our recommendations for the downstream user clusters that run your apps and services. We recommend that each node in a user cluster have a single role for stability and scalability.
Kubernetes only requires at least one node with each role and does not require nodes to be restricted to one role. However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don’t interfere with the Kubernetes master or cluster data as your services scale.
We recommend that downstream user clusters should have at least:

- Three nodes with only the etcd role, to maintain an etcd quorum
- Two nodes with only the controlplane role, to make the Kubernetes master components highly available
- One or more nodes with only the worker role, to run the Kubernetes node components as well as the workloads for your apps and services

With that said, it is safe to use all three roles on three nodes when setting up the Rancher server because:

- It limits the number of nodes required for the installation.
- The three nodes still maintain a quorum for etcd, so the Rancher server data remains protected if one node fails.
Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of user clusters.
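As an illustrative sketch, a Rancher server cluster provisioned with RKE following this recommendation would list three nodes, each carrying all three roles. The addresses and SSH user below are placeholders.

```
# cluster.yml (RKE) -- three nodes, each with all three Kubernetes roles
nodes:
  - address: 10.0.0.1               # placeholder IP
    user: ubuntu                    # placeholder SSH user
    role: [controlplane, worker, etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 10.0.0.3
    user: ubuntu
    role: [controlplane, worker, etcd]
```

A downstream user cluster, by contrast, would list separate nodes for the etcd, controlplane, and worker roles, as described above.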
For more best practices for user clusters, refer to the production checklist or our best practices guide.
If you are using an authorized cluster endpoint, we recommend creating an FQDN pointing to a load balancer which balances traffic across your nodes with the controlplane role.
If you are using certificates signed by a private CA on the load balancer, you must supply the CA certificate, which will be included in the generated kubeconfig file so that the certificate chain can be validated. See the documentation on kubeconfig files and API keys for more information.
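For illustration, the relevant portion of such a generated kubeconfig file might look like the fragment below; the cluster name, FQDN, and base64-encoded CA certificate are placeholders.

```
# Fragment of a kubeconfig cluster entry that uses the authorized cluster endpoint
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster-fqdn                            # placeholder cluster name
    cluster:
      # FQDN of the load balancer in front of the controlplane nodes
      server: https://my-cluster.example.com:6443
      # CA certificate used to validate the certificate chain
      certificate-authority-data: <base64-encoded private CA certificate>
```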