kubeadm helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass the Kubernetes Conformance tests. kubeadm also supports other cluster lifecycle functions, such as upgrades, downgrades, and managing bootstrap tokens.
Because you can install kubeadm on various types of machines (for example, a laptop, a server, or a Raspberry Pi), it's well suited for integration with provisioning systems such as Terraform or Ansible.
kubeadm's simplicity means it can serve a wide range of use cases:

- New users can start with kubeadm to try Kubernetes out for the first time.
- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications.
- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools.
kubeadm’s overall feature state is Beta.
See “Installing kubeadm”.
Note: If you have already installed kubeadm, run apt-get update && apt-get upgrade or yum update to get the latest version of kubeadm.

When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal. After you initialize your master, the kubelet runs normally.
The master is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).
Depending on which third-party pod network provider you choose, you might need to set --pod-network-cidr to a provider-specific value. See Installing a pod network add-on.

Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to advertise the master's IP. To use a different network interface, pass the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=fd00::101.
Now run:
kubeadm init <args>
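For example, a minimal sketch combining the flags discussed above (both values are illustrative; use the CIDR your chosen network add-on requires):

# Advertise the API server on a specific address and reserve a pod network
# range for the add-on (example values, not required defaults).
kubeadm init \
  --apiserver-advertise-address=192.168.0.10 \
  --pod-network-cidr=192.168.0.0/16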
For more information about kubeadm init arguments, see the kubeadm reference guide.
For a complete list of configuration options, see the configuration file documentation.
To customize control plane components, including optionally assigning IPv6 addresses to the liveness probes of the control plane components and the etcd server, provide extra arguments to each component as documented in custom arguments.
To run kubeadm init again, you must first tear down the cluster.
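A sketch of starting over, assuming you want to discard the existing cluster state (kubeadm reset is covered in more detail in the tear down section below):

# Remove state installed by a previous kubeadm init or kubeadm join,
# then initialize again with your chosen arguments.
sudo kubeadm reset
sudo kubeadm init <args>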
If you join a node with a different architecture to your cluster, create a separate
Deployment or DaemonSet for kube-proxy
and kube-dns
on the node. This is because the Docker images for these
components do not currently support multi-architecture.
kubeadm init
first runs a series of prechecks to ensure that the machine
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. kubeadm init
then downloads and installs the cluster control plane components. This may take several minutes.
The output should look like:
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
... (log output of initialization workflow) ...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
To make kubectl work for your non-root user, run these commands, which are
also part of the kubeadm init
output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root
user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
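Either way, you can verify that kubectl is wired up correctly with a quick query; the master should list itself:

# Confirm that kubectl can reach the new control plane.
kubectl get nodes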
Make a record of the kubeadm join
command that kubeadm init
outputs. You
need this command to join nodes to your cluster.
The token is used for mutual authentication between the master and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this
token can add authenticated nodes to your cluster. These tokens can be listed,
created, and deleted with the kubeadm token
command. See the
kubeadm reference guide.
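For example, a short sketch of token management on the master (--print-join-command assumes a reasonably recent kubeadm version):

# List existing bootstrap tokens.
kubeadm token list

# Create a new token and print the full kubeadm join command that uses it.
kubeadm token create --print-join-command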
Caution: This section contains important information about installation and deployment order. Read it carefully before proceeding.
You must install a pod network add-on so that your pods can communicate with each other.
The network must be deployed before any applications. An internal helper service, kube-dns, will not start up before a network is installed. kubeadm supports only Container Network Interface (CNI) based networks. It does not support kubenet.
Several projects provide Kubernetes pod networks using CNI, some of which also support Network Policy. See the add-ons page for a complete list of available network add-ons. Note that IPv6 support was added in CNI v0.6.0, and that CNI bridge and local-ipam are the only supported IPv6 network plugins in Kubernetes version 1.9.
Note that kubeadm sets up a more secure cluster by default and enforces use of RBAC. Make sure that your network manifest supports RBAC.
You can install a pod network add-on with the following command:
kubectl apply -f <add-on.yaml>
You can install only one pod network per cluster.
Select one of the following third-party Pod Network Providers and follow its installation instructions.
For more information about using Calico, see Quickstart for Calico on Kubernetes, Installing Calico for policy and networking, and other related resources.
In order for Network Policy to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init. Note that Calico works on amd64 only.
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Canal uses Calico for policy and Flannel for networking. Refer to the Calico documentation for the official getting started guide.
For Canal to work correctly, --pod-network-cidr=10.244.0.0/16 has to be passed to kubeadm init. Note that Canal works on amd64 only.
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
For flannel to work correctly, --pod-network-cidr=10.244.0.0/16 has to be passed to kubeadm init. Note that flannel works on amd64, arm, arm64, and ppc64le. For it to work on a platform other than amd64, you must manually download the manifest and replace amd64 occurrences with your chosen platform.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work; for more information, see here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
For more information about flannel, see the CoreOS flannel repository on GitHub.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work; for more information, see here.
Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use kubeadm init with the --pod-network-cidr flag.
Kube-router provides pod networking, network policy, and high-performing IP Virtual Server(IPVS)/Linux Virtual Server(LVS) based service proxy.
For information on setting up a Kubernetes cluster with Kube-router using kubeadm, please see the official setup guide.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work; for more information, see here.
The official Romana set-up guide is here.
Romana works on amd64 only.
kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containerize/specs/romana-kubeadm.yml
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work; for more information, see here.
The official Weave Net set-up guide is here.
Weave Net works on amd64, arm, arm64, and ppc64le without any extra action required.
Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
if they don’t know their PodIP.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Once you install a pod network, you can confirm that it works by checking that the kube-dns pod is Running in the output of kubectl get pods --all-namespaces.
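For example:

# Look for the kube-dns pod (in the kube-system namespace) to be Running.
kubectl get pods --all-namespaces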
Once the kube-dns pod is up and running, you can continue by joining your nodes.
If your network is not working or kube-dns is not in the Running state, check out our troubleshooting docs.
By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
With output looking something like:
node "test-01" untainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
This will remove the node-role.kubernetes.io/master
taint from any nodes that
have it, including the master node, meaning that the scheduler will then be able
to schedule pods everywhere.
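To confirm, a quick check (the node name test-01 matches the example output above):

# Show any taints remaining on the node; the master taint should be gone.
kubectl describe node test-01 | grep -i taint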
The nodes are where your workloads (containers and pods, etc.) run. To add new nodes to your cluster, do the following for each machine:

- SSH to the machine.
- Become root (e.g. sudo su -).
- Run the command that was output by kubeadm init. For example:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
Note: To specify an IPv6 tuple for <master-ip>:<master-port>, the IPv6 address must be enclosed in square brackets, for example: [fd00::101]:2073.
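For instance, a join against an IPv6 master endpoint might look like this (the token and hash are placeholders, and the bracketed address and port follow the note above):

kubeadm join --token <token> [fd00::101]:2073 --discovery-token-ca-cert-hash sha256:<hash>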
The output should look something like:
[preflight] Running pre-flight checks
... (log output of join workflow) ...
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
A few seconds later, you should notice this node in the output from kubectl get nodes when run on the master.
In order to get a kubectl on some other computer (e.g. laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your master to your workstation like this:
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Note: The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.

The admin.conf file gives the user superuser privileges over the cluster. This file should be used sparingly. For normal users, it's recommended to generate a unique credential to which you whitelist privileges. You can do this with the kubeadm alpha phase kubeconfig user --client-name <CN> command. That command will print out a KubeConfig file to STDOUT, which you should save to a file and distribute to your user. After that, whitelist privileges by using kubectl create (cluster)rolebinding.
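A sketch of that flow, where the user name alice and the built-in view ClusterRole are illustrative choices:

# Generate a kubeconfig for the new user and save it to a file.
kubeadm alpha phase kubeconfig user --client-name alice > alice.conf

# Grant that user read-only access cluster-wide.
kubectl create clusterrolebinding alice-view --clusterrole=view --user=alice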
If you want to connect to the API Server from outside the cluster, you can use kubectl proxy:
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
You can now access the API Server locally at http://localhost:8001/api/v1
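With the proxy running, a quick check from the same machine (the nodes endpoint is just one example):

# Query the API server through the local proxy.
curl http://localhost:8001/api/v1/nodes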
To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.
Talking to the master with the appropriate credentials, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm installed state:
kubeadm reset
If you wish to start over, simply run kubeadm init or kubeadm join with the appropriate arguments.
More options and information are available in the kubeadm reset command reference.
Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades, etc.) can be found here.
See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.
Learn more about Kubernetes concepts and kubectl.

Configure log rotation. When using Docker, you can specify log rotation options for the Docker daemon, for example --log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5. See Configure and troubleshoot the Docker daemon for more details.

The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1). kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to v1.8.
Please also check our installation guide for more information on the version skew between kubelets and the control plane.
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the multi-platform proposal.
Only some of the network providers offer solutions for all platforms. Please consult the list of network providers above or the documentation from each provider to figure out whether the provider supports your chosen platform.
Please note: kubeadm is a work in progress and these limitations will be addressed in due course.
The cluster created here has a single master, with a single etcd database running on it. This means that if the master fails, your cluster may lose data and may need to be recreated from scratch. Workaround: regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the master.
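A minimal sketch of such a backup, assuming the default kubeadm path and that a file-level copy is acceptable for your setup (for a consistent copy, stop etcd first or use an etcd-aware snapshot tool):

# Archive the etcd data directory to a dated tarball (illustrative approach).
sudo tar -czf etcd-backup-$(date +%F).tar.gz -C /var/lib/etcd .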
If you are running into difficulties with kubeadm, please consult our troubleshooting docs.