This section describes how to install a Kubernetes cluster on your three nodes according to our best practices for the Rancher server environment. This cluster should be dedicated to running only the Rancher server. We recommend using RKE to install Kubernetes on this cluster. Hosted Kubernetes providers such as EKS should not be used.
For systems without direct internet access, refer to Air Gap: Kubernetes install.
Single-node Installation Tip: In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. To set up a single-node cluster, configure only one node in the cluster.yml when provisioning the cluster with RKE. The single node should have all three roles: etcd, controlplane and worker. Then Rancher can be installed with Helm on the cluster in the same way that it would be installed on any other cluster.
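As a sketch, a single-node cluster.yml could look like the following. The address, user, and key path are placeholders for your own environment, not values from this guide:

```yaml
# Hypothetical single-node RKE cluster configuration: one node carries
# all three roles, so Rancher can later be installed on it with Helm.
nodes:
  - address: 203.0.113.10              # placeholder public IP or DNS name
    user: ubuntu                       # a user that can run Docker commands
    role: [controlplane, worker, etcd] # all three roles on the single node
```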
Using the sample below, create the rancher-cluster.yml file. Replace the IP addresses in the nodes list with the IP addresses or DNS names of the three nodes you created.
If your nodes have both public and internal addresses, it is recommended to set internal_address: so that Kubernetes uses it for intra-cluster communication. Some services, such as AWS EC2, require setting internal_address: if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
  - address: 165.227.114.63
    internal_address: 172.16.22.12
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 165.227.116.167
    internal_address: 172.16.32.37
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 165.227.127.226
    internal_address: 172.16.42.73
    user: ubuntu
    role: [controlplane, worker, etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

# Required for external TLS termination with
# ingress-nginx v0.22+
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
```
Common directives used in the nodes section:

- address - the public DNS name or IP address of the node.
- user - a user on the node that can run Docker commands.
- role - the list of Kubernetes roles assigned to the node (controlplane, etcd, worker).
- internal_address - the private DNS name or IP address used for intra-cluster traffic.
- ssh_key_path - the path to the SSH private key used to connect to the node (defaults to ~/.ssh/id_rsa).
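If your nodes use a non-default SSH key, ssh_key_path can be set once at the cluster level and overridden per node. The key paths in this sketch are placeholders, not values from this guide:

```yaml
# Hypothetical example: a cluster-wide default key with a per-node override.
ssh_key_path: ~/.ssh/rancher_nodes_rsa     # placeholder default for all nodes
nodes:
  - address: 165.227.114.63
    user: ubuntu
    role: [controlplane, worker, etcd]
    ssh_key_path: ~/.ssh/special_node_rsa  # placeholder per-node override
```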
RKE has many configuration options for customizing the install to suit your specific environment.
Please see the RKE Documentation for the full list of options and capabilities.
For tuning your etcd cluster for larger Rancher installations see the etcd settings guide.
Run RKE to provision the cluster:

```
rke up --config ./rancher-cluster.yml
```
When finished, the output should end with the line: Finished building Kubernetes cluster successfully.
RKE should have created a file named kube_config_rancher-cluster.yml. This file contains the credentials for kubectl and helm.
Note: If you used a file name other than rancher-cluster.yml, the kube config file will be named kube_config_<FILE_NAME>.yml.
You can copy this file to $HOME/.kube/config or, if you are working with multiple Kubernetes clusters, set the KUBECONFIG environment variable to the path of kube_config_rancher-cluster.yml:
```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
```
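Alternatively, here is a small sketch of making this kubeconfig the default for all shell sessions; it assumes the file is in the current directory:

```shell
# Copy the RKE-generated kubeconfig into kubectl's default location.
# The copy is guarded so it only happens when the file actually exists.
mkdir -p "$HOME/.kube"
if [ -f kube_config_rancher-cluster.yml ]; then
  cp kube_config_rancher-cluster.yml "$HOME/.kube/config"
fi
```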
Test your connectivity with kubectl and check that all of your nodes are in the Ready state.
```
kubectl get nodes
NAME              STATUS   ROLES                      AGE   VERSION
165.227.114.63    Ready    controlplane,etcd,worker   11m   v1.13.5
165.227.116.167   Ready    controlplane,etcd,worker   11m   v1.13.5
165.227.127.226   Ready    controlplane,etcd,worker   11m   v1.13.5
```
Check that all the required pods and containers are healthy and ready before you continue. Pods should be in a Running or Completed state. For pods with a STATUS of Running, the READY column should show all containers running (for example, 3/3). Pods with a STATUS of Completed are run-once jobs; for these pods, READY should be 0/1.
```
kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-tnsn4            1/1     Running     0          30s
ingress-nginx   nginx-ingress-controller-tw2ht            1/1     Running     0          30s
ingress-nginx   nginx-ingress-controller-v874b            1/1     Running     0          30s
kube-system     canal-jp4hz                               3/3     Running     0          30s
kube-system     canal-z2hg8                               3/3     Running     0          30s
kube-system     canal-z6kpw                               3/3     Running     0          30s
kube-system     kube-dns-7588d5b5f5-sf4vh                 3/3     Running     0          30s
kube-system     kube-dns-autoscaler-5db9bbb766-jz2k6      1/1     Running     0          30s
kube-system     metrics-server-97bc649d5-4rl2q            1/1     Running     0          30s
kube-system     rke-ingress-controller-deploy-job-bhzgm   0/1     Completed   0          30s
kube-system     rke-kubedns-addon-deploy-job-gl7t4        0/1     Completed   0          30s
kube-system     rke-metrics-addon-deploy-job-7ljkc        0/1     Completed   0          30s
kube-system     rke-network-plugin-deploy-job-6pbgj       0/1     Completed   0          30s
```
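If anything looks off, one way to surface pods that are neither running nor completed is kubectl's field selector. This is a convenience sketch, not part of the official install steps; Completed pods have the phase Succeeded:

```
# List pods in any namespace whose phase is neither Running nor Succeeded.
kubectl get pods --all-namespaces \
  --field-selector=status.phase!=Running,status.phase!=Succeeded
```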
Important: The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.

Save a copy of the following files in a secure location:

- rancher-cluster.yml: The RKE cluster configuration file.
- kube_config_rancher-cluster.yml: The kubeconfig file for the cluster. This file contains credentials for full access to the cluster.
- rancher-cluster.rkestate: The Kubernetes cluster state file. This file contains credentials for full access to the cluster.
Note: The "rancher-cluster" parts of the two latter file names depend on how you name the RKE cluster configuration file.
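As a minimal sketch, these files could be gathered into a local backup directory before being moved to secure storage. The cluster-backup path is a hypothetical location; substitute your own secure destination:

```shell
# Collect the three cluster files into a local backup directory.
# Each copy is guarded so that missing files are simply skipped.
mkdir -p cluster-backup
for f in rancher-cluster.yml kube_config_rancher-cluster.yml rancher-cluster.rkestate; do
  if [ -f "$f" ]; then
    cp "$f" cluster-backup/
  fi
done
```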
If you encounter issues or errors, see the Troubleshooting page.