This guide is for people who want to craft a custom Kubernetes cluster. If you can find an existing Getting Started Guide on this list that meets your needs, then we recommend using it, as you will be able to benefit from the experience of others. However, if you have specific IaaS, networking, configuration management, or operating system requirements not met by any of those guides, then this guide will provide an outline of the steps you need to take. Note that it requires considerably more effort than using one of the pre-defined guides.
This guide is also useful for those wanting to understand at a high level some of the steps that existing cluster setup scripts take.
You should have kubectl installed on your desktop. This will happen as a side effect of completing one of the other Getting Started Guides. If not, follow the instructions here.
Kubernetes has the concept of a Cloud Provider, which is a module that provides an interface for managing TCP Load Balancers, Nodes (Instances), and Networking Routes. The interface is defined in pkg/cloudprovider/cloud.go. It is possible to
create a custom cluster without implementing a cloud provider (for example if using
bare-metal), and not all parts of the interface need to be implemented, depending
on how flags are set on various components.
Kubernetes has a distinctive networking model.
Kubernetes allocates an IP address to each pod. When creating a cluster, you need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest approach is to allocate a different block of IPs to each node in the cluster as the node is added. A process in one pod should be able to communicate with another pod using the IP of the second pod. This connectivity can be accomplished in two ways: by using an overlay network, which hides the underlying network architecture from the pod network through traffic encapsulation, or without an overlay network, by configuring the underlying network fabric (switches, routers, and so on) to be aware of pod IP addresses.
Which method you choose depends on your environment and requirements. There are various ways to implement either of the above options.
You will need to select an address range for the Pod IPs. For example, on GCE each project has its own 10.0.0.0/8; you can carve off a /16 for each Kubernetes cluster from that space, which leaves room for several clusters, and each node gets a further subdivision of this space. A /24 per node supports 254 pods per machine and is a common choice. If IPs are scarce, a /26 (62 pods per machine) or even a /27 (30 pods) may be sufficient. For example, use 10.10.0.0/16 as the range for the cluster, with up to 256 nodes using 10.10.0.0/24 through 10.10.255.0/24, respectively.
Kubernetes also allocates an IP to each service. However,
service IPs do not necessarily need to be routable. The kube-proxy takes care
of translating Service IPs to Pod IPs before traffic leaves the node. You do
need to allocate a block of IPs for services. Call this
SERVICE_CLUSTER_IP_RANGE
. For example, you could set
SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"
, allowing 65534 distinct services to
be active at once. Note that you can grow the end of this range, but you
cannot move it without disrupting the services and pods that already use it.
Also, you need to pick a static IP for the master node. Call this MASTER_IP. Open any firewalls to allow access to the apiserver ports 80 and/or 443, and enable IPv4 forwarding on the nodes with the sysctl net.ipv4.ip_forward = 1.
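To keep these choices together, here is a minimal sketch of recording them as shell variables and enabling forwarding on a node; the specific addresses are illustrative assumptions, not requirements, and CLUSTER_SUBNET here refers to the pod IP block used later in this guide:
# Illustrative values only; substitute the ranges and address you selected.
CLUSTER_SUBNET="10.10.0.0/16"            # block allocated for Pod IPs
SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"   # block allocated for Service IPs
MASTER_IP="192.0.2.10"                   # static IP chosen for the master node
# Enable IPv4 forwarding on each node.
sudo sysctl -w net.ipv4.ip_forward=1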
Kubernetes enables the definition of fine-grained network policy between Pods using the NetworkPolicy resource.
Not all networking providers support the Kubernetes NetworkPolicy API, see Using Network Policy for more information.
You should pick a name for your cluster. Pick a short name for each cluster which is unique from future cluster names; it will be used, for example, by kubectl to distinguish between the clusters you have access to, and to identify cloud provider resources created on behalf of the cluster. Call this CLUSTER_NAME.
You will need binaries for etcd, a container runner (one of docker or rkt), and the Kubernetes components: kubelet, kube-proxy, kube-apiserver, kube-controller-manager, and kube-scheduler.
A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd. You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the Developer Documentation. Only using a binary release is covered in this guide.
Download the latest binary release and unzip it.
Server binary tarballs are no longer included in the Kubernetes final tarball, so you will need to locate and run
./kubernetes/cluster/get-kube-binaries.sh
to download the client and server binaries.
Then locate ./kubernetes/server/kubernetes-server-linux-amd64.tar.gz
and unzip that.
Then, within the second set of unzipped files, locate ./kubernetes/server/bin
, which contains
all the necessary binaries.
You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers, so you need an image to be built.
You have several choices for Kubernetes images:
You can use images hosted on Google Container Registry (GCR), for example k8s.gcr.io/hyperkube:$TAG, where TAG is the latest release tag, which can be found on the latest releases page. Ensure $TAG is the same tag as the release you are using for the kubelet and kube-proxy. The hyperkube binary is an all-in-one binary: hyperkube kubelet ... runs the kubelet, hyperkube apiserver ... runs an apiserver, and so on.
Alternatively, you can build your own images, which is useful if you are using a private registry. The release contains the file ./kubernetes/server/bin/kube-apiserver.tar, which can be converted into a docker image using a command like docker load -i kube-apiserver.tar. You can verify that the image loaded with the expected repository and tag using docker images.
For etcd, you can: use the etcd image hosted on Google Container Registry (GCR), such as k8s.gcr.io/etcd:2.2.1; use the etcd image hosted on Docker Hub or Quay.io, such as quay.io/coreos/etcd:v2.2.1; use the etcd binary included in your OS distro; or build your own image by running cd kubernetes/cluster/images/etcd; make.
We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
were tested extensively with this version of etcd and not with any other version.
The recommended version number can also be found as the value of TAG
in kubernetes/cluster/images/etcd/Makefile
.
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
HYPERKUBE_IMAGE=k8s.gcr.io/hyperkube:$TAG
ETCD_IMAGE=k8s.gcr.io/etcd:$ETCD_VERSION
There are two main options for security: access the apiserver over HTTP, relying on a firewall-protected network for security (easier to set up), or access the apiserver over HTTPS, with certs and credentials for clients (the recommended approach).
If following the HTTPS approach, you will need to prepare certs and credentials.
You need to prepare several certs: the master needs a cert to act as an HTTPS server, and the kubelets optionally need certs to identify themselves as clients of the master and to serve their own API over HTTPS.
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs. How to do this is described in the authentication documentation.
You will end up with the following files (we will use these variables later on):
CA_CERT, put on the node where the apiserver runs, for example in /srv/kubernetes/ca.crt.
MASTER_CERT, signed by CA_CERT, put on the node where the apiserver runs, for example in /srv/kubernetes/server.crt.
MASTER_KEY, put on the node where the apiserver runs, for example in /srv/kubernetes/server.key.
KUBELET_CERT, optional.
KUBELET_KEY, optional.
The admin user (and any other users) need a token or a password to identify them. Tokens are just long alphanumeric strings, for example 32 chars, which can be generated with:
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/[:space:]" | dd bs=32 count=1 2>/dev/null)
Your tokens and passwords need to be stored in a file for the apiserver
to read. This guide uses /var/lib/kube-apiserver/known_tokens.csv
.
The format for this file is described in the authentication documentation.
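For example, here is a minimal sketch of recording the generated token for an assumed admin user, using the token,user,uid column layout from the authentication documentation:
sudo mkdir -p /var/lib/kube-apiserver
# Using "admin" as both the user name and uid is an illustrative assumption.
echo "${TOKEN},admin,admin" | sudo tee -a /var/lib/kube-apiserver/known_tokens.csv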
For distributing credentials to clients, the convention in Kubernetes is to put the credentials into a kubeconfig file.
The kubeconfig file for the administrator can be created as follows. If you have already used Kubernetes with a non-custom cluster (for example, by following another Getting Started Guide), you will already have a $HOME/.kube/config file. You need to add certs, keys, and the master IP to that kubeconfig file. If using the firewall-only security option, set the apiserver this way:
kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true
Otherwise, set the apiserver IP, client certs, and user credentials, and make your cluster the default to use:
kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP
kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN
kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER
kubectl config use-context $CONTEXT_NAME
Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how many distinct files to make: you can reuse the same credentials and kubeconfig as the admin, create one token and kubeconfig for all kubelets, one for kube-proxy, and one for the admin, or create different credentials for every kubelet. You can make the files by copying the $HOME/.kube/config file or by using the following template:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    token: ${KUBELET_TOKEN}
clusters:
- name: local
  cluster:
    certificate-authority: /srv/kubernetes/ca.crt
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in /var/lib/kube-proxy/kubeconfig
and
/var/lib/kubelet/kubeconfig
.
This section discusses how to configure machines to be Kubernetes nodes.
You should run three daemons on every node:
You will also need to do assorted other configuration on top of a base OS install.
Tip: One possible starting point is to set up a cluster using an existing Getting Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that cluster, and then modify them for use on your custom cluster.
The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
If you previously had Docker installed on a node without setting Kubernetes-specific options, you may have a Docker-created bridge and iptables rules. You may want to remove these as follows before proceeding to configure Docker for Kubernetes.
iptables -t nat -F
ip link set docker0 down
ip link delete docker0
The way you configure docker will depend on whether you have chosen the routable-vip or overlay-network approach for your network. Some suggested docker options:
Create your own bridge for the per-node CIDR ranges, call it cbr0, and set the --bridge=cbr0 option on docker.
Set --iptables=false so docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions), so that kube-proxy can manage iptables instead of docker.
Set --ip-masq=false if you have made the Pod IPs routable; otherwise docker will rewrite the Pod IP source address to a node IP.
Set --mtu= when required by your network plugin (for example, an overlay such as Flannel adds encapsulation overhead).
Set --insecure-registry $CLUSTER_SUBNET to connect to a private registry, if you set one up, without using SSL.
You may want to increase the number of open files for docker:
DOCKER_NOFILE=1000000
Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses /etc/default/docker.
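Putting the suggested options together, a sketch of such a file for a Debian-style install follows; the MTU value and the exact option set are assumptions to adapt to your network:
# /etc/default/docker (location varies by OS, as noted above)
# Substitute your actual cluster CIDR for the --insecure-registry value.
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false --mtu=1460 --insecure-registry=10.10.0.0/16"
DOCKER_NOFILE=1000000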
Ensure docker is working correctly on your system before proceeding with the rest of the installation, by following examples given in the Docker documentation.
rkt is an alternative to Docker. You only need to install one of Docker or rkt. The minimum version required is v0.5.6.
systemd is required on your node to run rkt. The minimum version required to match rkt v0.5.6 is systemd 215.
The rkt metadata service is also required for rkt networking support. You can start the rkt metadata service with a command like:
sudo systemd-run rkt metadata-service
Then you need to configure your kubelet with flag:
--container-runtime=rkt
All nodes should run kubelet. See Software Binaries.
Arguments to consider (a combined invocation sketch follows this list):
--kubeconfig=/var/lib/kubelet/kubeconfig
--config=/etc/kubernetes/manifests
--cluster-dns= set to the address of the DNS server you will set up (see Starting Cluster Services).
--cluster-domain= set to the DNS domain prefix to use for cluster DNS addresses.
--docker-root=
--root-dir=
--pod-cidr= the CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master.
--register-node (described in the Node documentation).
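Pulling those flags together, a minimal kubelet invocation might look like the sketch below; the binary path, DNS address, and cluster domain are illustrative assumptions:
# Typically wrapped in an init.d script or systemd unit rather than run by hand.
/usr/local/bin/kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local \
  --register-node=true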
All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not strictly required, but being consistent is easier.) Obtain a binary as described for kubelet.
Arguments to consider:
If following the HTTPS security approach: --master=https://$MASTER_IP and --kubeconfig=/var/lib/kube-proxy/kubeconfig
Otherwise, if taking the firewall-based security approach: --master=http://$MASTER_IP
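A corresponding kube-proxy invocation for the HTTPS approach might look like this sketch (the binary path is an assumption):
/usr/local/bin/kube-proxy \
  --master=https://${MASTER_IP} \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig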
Note that on some Linux platforms, you may need to manually install the
conntrack
package which is a dependency of kube-proxy, or else kube-proxy
cannot be started successfully.
For more details on debugging kube-proxy problems, please refer to Debug Services.
Each node needs to be allocated its own CIDR range for pod networking.
Call this NODE_X_POD_CIDR
.
A bridge called cbr0
needs to be created on each node. The bridge is explained
further in the networking documentation. The bridge itself
needs an address from $NODE_X_POD_CIDR
- by convention the first IP. Call
this NODE_X_BRIDGE_ADDR
. For example, if NODE_X_POD_CIDR
is 10.0.0.0/16
,
then NODE_X_BRIDGE_ADDR
is 10.0.0.1/16
. NOTE: this retains the /16
suffix
because of how this is used later.
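A sketch of creating the bridge by hand on one node follows; the MTU value is an assumption to adjust for your network:
ip link add name cbr0 type bridge
ip link set dev cbr0 mtu 1460
ip addr add ${NODE_X_BRIDGE_ADDR} dev cbr0
ip link set dev cbr0 up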
If you have turned off Docker’s IP masquerading to allow pods to talk to each other, then you may need to do masquerading just for destination IPs outside the cluster network. For example:
iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE
This will rewrite the source address from the PodIP to the Node IP for traffic bound outside the cluster, and kernel connection tracking will ensure that responses destined to the node still reach the pod.
NOTE: This is environment specific. Some environments will not need any masquerading at all. Others, such as GCE, will not allow pod IPs to send traffic to the internet, but have no problem with them inside your GCE Project.
You may need to install other packages on each node for the volume types you plan to use, for example glusterfs-client for GlusterFS volumes.
The previous steps all involved "conventional" system administration techniques for setting up machines. You may want to use a Configuration Management system to automate the node configuration process. There are examples of Saltstack, Ansible, Juju, and CoreOS Cloud Config in the various Getting Started Guides.
While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using traditional system administration/automation approaches, the remaining master components of Kubernetes are all configured and managed by Kubernetes: their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or systemd unit, and they are kept running by Kubernetes rather than by init.
You will need to run one or more instances of etcd.
Not highly available, but easy to restore - Run one etcd instance, with its log written to a directory backed by durable storage (RAID, GCE PD).
Note: May result in operations outages in case of instance outage.
Highly available - Run 3 or 5 etcd instances with non-durable storage.
Note: Log can be written to non-durable storage because storage is replicated.
See cluster-troubleshooting for more discussion on factors affecting cluster availability.
To run an etcd instance: copy cluster/gce/manifests/etcd.manifest, make any modifications needed, and start the pod by putting it into the kubelet manifest directory.
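For example, a sketch assuming the release tree layout and the kubelet manifest directory used elsewhere in this guide:
cp kubernetes/cluster/gce/manifests/etcd.manifest /etc/kubernetes/manifests/
# The kubelet watching that directory starts the etcd pod automatically.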
The apiserver, controller manager, and scheduler will each run as a pod on the master node.
For each of these components, the steps to start them running are similar: start with a provided pod template, set HYPERKUBE_IMAGE to the value chosen in Selecting Images, determine which flags are needed for your cluster, set the flags as individual strings in the command array (for example, $ARG1 through $ARGN below), and start the pod by placing the completed template into the kubelet manifest directory. Apiserver pod template:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-apiserver"
},
"spec": {
"hostNetwork": true,
"containers": [
{
"name": "kube-apiserver",
"image": "${HYPERKUBE_IMAGE}",
"command": [
"/hyperkube",
"apiserver",
"$ARG1",
"$ARG2",
...
"$ARGN"
],
"ports": [
{
"name": "https",
"hostPort": 443,
"containerPort": 443
},
{
"name": "local",
"hostPort": 8080,
"containerPort": 8080
}
],
"volumeMounts": [
{
"name": "srvkube",
"mountPath": "/srv/kubernetes",
"readOnly": true
},
{
"name": "etcssl",
"mountPath": "/etc/ssl",
"readOnly": true
}
],
"livenessProbe": {
"httpGet": {
"scheme": "HTTP",
"host": "127.0.0.1",
"port": 8080,
"path": "/healthz"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"volumes": [
{
"name": "srvkube",
"hostPath": {
"path": "/srv/kubernetes"
}
},
{
"name": "etcssl",
"hostPath": {
"path": "/etc/ssl"
}
}
]
}
}
Here are some apiserver flags you may need to set:
--cloud-provider=
see cloud providers--cloud-config=
see cloud providers--address=${MASTER_IP}
or --bind-address=127.0.0.1
and --address=127.0.0.1
if you want to run a proxy on the master node.--service-cluster-ip-range=$SERVICE_CLUSTER_IP_RANGE
--etcd-servers=http://127.0.0.1:4001
--tls-cert-file=/srv/kubernetes/server.crt
--tls-private-key-file=/srv/kubernetes/server.key
--enable-admission-plugins=$RECOMMENDED_LIST
--allow-privileged=true, only if you trust your cluster user to run pods as root.
If you are following the firewall-only security approach, then use these arguments:
--token-auth-file=/dev/null
--insecure-bind-address=$MASTER_IP
--advertise-address=$MASTER_IP
If you are using the HTTPS approach, then set:
--client-ca-file=/srv/kubernetes/ca.crt
--token-auth-file=/srv/kubernetes/known_tokens.csv
--basic-auth-file=/srv/kubernetes/basic_auth.csv
This pod mounts several node file system directories using the hostPath
volumes. Their purposes are:
/etc/ssl
mount allows the apiserver to find the SSL root certs so it can
authenticate external services, such as a cloud provider.
/srv/kubernetes
mount allows the apiserver to read certs and credentials stored on the
node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
Optionally, you may want to mount /var/log as well and redirect output there (not shown in the template).
TODO document proxy-ssh setup.
Apiserver supports several cloud providers.
Options for the --cloud-provider flag are aws, azure, cloudstack, fake, gce, mesos, openstack, ovirt, rackspace, vsphere, or unset.
Some cloud providers require a config file. If so, you need to put the config file into the apiserver image or mount it through hostPath.
--cloud-config= set if cloud provider requires a config file. Used by aws, gce, mesos, openstack, ovirt, and rackspace.
Complete this template for the scheduler pod:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-scheduler"
},
"spec": {
"hostNetwork": true,
"containers": [
{
"name": "kube-scheduler",
"image": "$HYPERKUBE_IMAGE",
"command": [
"/hyperkube",
"scheduler",
"--master=127.0.0.1:8080",
"$SCHEDULER_FLAG1",
...
"$SCHEDULER_FLAGN"
],
"livenessProbe": {
"httpGet": {
"scheme": "HTTP",
"host": "127.0.0.1",
"port": 10251,
"path": "/healthz"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
]
}
}
Typically, no additional flags are required for the scheduler.
Optionally, you may want to mount /var/log
as well and redirect output there.
Template for controller manager pod:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-controller-manager"
},
"spec": {
"hostNetwork": true,
"containers": [
{
"name": "kube-controller-manager",
"image": "$HYPERKUBE_IMAGE",
"command": [
"/hyperkube",
"controller-manager",
"$CNTRLMNGR_FLAG1",
...
"$CNTRLMNGR_FLAGN"
],
"volumeMounts": [
{
"name": "srvkube",
"mountPath": "/srv/kubernetes",
"readOnly": true
},
{
"name": "etcssl",
"mountPath": "/etc/ssl",
"readOnly": true
}
],
"livenessProbe": {
"httpGet": {
"scheme": "HTTP",
"host": "127.0.0.1",
"port": 10252,
"path": "/healthz"
},
"initialDelaySeconds": 15,
"timeoutSeconds": 15
}
}
],
"volumes": [
{
"name": "srvkube",
"hostPath": {
"path": "/srv/kubernetes"
}
},
{
"name": "etcssl",
"hostPath": {
"path": "/etc/ssl"
}
}
]
}
}
Flags to consider using with controller manager:
--cluster-cidr=
, the CIDR range for pods in cluster.--allocate-node-cidrs=
, if you are using --cloud-provider=
, allocate and set the CIDRs for pods on the cloud provider.--cloud-provider=
and --cloud-config
as described in apiserver section.--service-account-private-key-file=/srv/kubernetes/server.key
, used by the service account feature.--master=127.0.0.1:8080
Place each completed pod template into the kubelet config dir
(whatever --config=
argument of kubelet is set to, typically
/etc/kubernetes/manifests
). The order does not matter: scheduler and
controller manager will retry reaching the apiserver until it is up.
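For example, a sketch with assumed file names for the completed templates:
sudo cp kube-apiserver.json kube-scheduler.json kube-controller-manager.json /etc/kubernetes/manifests/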
Use ps
or docker ps
to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
$ sudo docker ps | grep apiserver
5783290746d5 k8s.gcr.io/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
Then try to connect to the apiserver:
$ echo $(curl -s http://localhost:8080/healthz)
ok
$ curl -s http://localhost:8080/api
{
"versions": [
"v1"
]
}
If you have selected the --register-node=true
option for kubelets, they will now begin self-registering with the apiserver.
You should soon be able to see all your nodes by running the kubectl get nodes
command.
Otherwise, you will need to manually create node objects.
You will want to complete your Kubernetes clusters by adding cluster-wide services. These are sometimes called addons, and an overview of their purpose is in the admin guide.
Notes for setting up each cluster service, such as cluster DNS and the cluster UI, are given in its respective addon documentation.
cluster/validate-cluster.sh
is used by cluster/kube-up.sh
to determine if
the cluster start succeeded.
Example usage and output:
KUBECTL_PATH=$(which kubectl) NUM_NODES=3 KUBERNETES_PROVIDER=local cluster/validate-cluster.sh
Found 3 node(s).
NAME STATUS AGE VERSION
node1.local Ready 1h v1.6.9+a3d1dfa6f4335
node2.local Ready 1h v1.6.9+a3d1dfa6f4335
node3.local Ready 1h v1.6.9+a3d1dfa6f4335
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
Cluster validation succeeded
Try to run through the “Inspect your cluster” section in one of the other Getting Started Guides, such as GCE. You should see some services. You should also see “mirror pods” for the apiserver, scheduler and controller-manager, plus any add-ons you started.
At this point you should be able to run through one of the basic examples, such as the nginx example.
You may want to try to run the Conformance test. Any failures may give a hint as to areas that need more attention.
The nodes must be able to connect to each other using their private IP. Verify this by pinging or SSH-ing from one node to another.
If you run into trouble, please see the section on troubleshooting, post to the kubernetes-users group, or come ask questions on Slack.
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level |
---|---|---|---|---|---|---|
any | any | any | any | docs | | Community (@erictune) |
For support level information on all solutions, see the Table of solutions chart.