

Note: This tutorial uses version 1.22 of Kubernetes, the official supported version at the time of this article's publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.

Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.

In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

Your cluster will include the following physical resources:

The control plane node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data among components that schedule workloads to worker nodes.

Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run your workload once it's assigned, even if the control plane goes down once scheduling is complete. A cluster's capacity can be increased by adding workers.

After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application, including web applications, databases, daemons, and command line tools, can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node. Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.

Prerequisites

An SSH key pair on your local Linux/macOS/BSD machine. If you haven't used SSH keys before, you can learn how to set them up by following this explanation of how to set up SSH keys on your local machine.

Three servers running Ubuntu 20.04 with at least 2GB RAM and 2 vCPUs each. You should be able to SSH into each server as the root user with your SSH key pair.

Note: If you haven't SSH'd into each of these servers at least once prior to following this tutorial, you may be prompted to accept their host fingerprints at an inconvenient time later on. You should do this now, or as an alternative, you can disable host key checking.

Ansible installed on your local machine. If you're running Ubuntu 20.04 as your OS, follow the "Step 1 - Installing Ansible" section in How to Install and Configure Ansible on Ubuntu 20.04 to install Ansible. For installation instructions on other platforms like macOS or Rocky Linux, follow the official Ansible installation documentation.

Familiarity with Ansible playbooks. For review, check out Configuration Management 101: Writing Ansible Playbooks.

Knowledge of how to launch a container from a Docker image. Look at "Step 5 - Running a Docker Container" in How To Install and Use Docker on Ubuntu 20.04 if you need a refresher.

Step 1 - Setting Up the Workspace Directory and Ansible Inventory File

In this section, you will create a directory on your local machine that will serve as your workspace. You will configure Ansible locally so that it can communicate with and execute commands on your remote servers.
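The workspace and inventory setup described in Step 1 can be sketched as follows. This is a minimal illustration, not the tutorial's exact files: the directory name `kube-cluster`, the group names, and the placeholder IP addresses (203.0.113.x, a documentation address range) are assumptions you should replace with your own values.

```shell
# Create a local workspace directory for the cluster files.
mkdir -p "$HOME/kube-cluster"

# Sketch of an Ansible inventory ("hosts") file. The group names and
# placeholder IPs are illustrative; substitute your servers' real IPs.
cat > "$HOME/kube-cluster/hosts" <<'EOF'
[control_plane]
control1 ansible_host=203.0.113.1 ansible_user=root

[workers]
worker1 ansible_host=203.0.113.2 ansible_user=root
worker2 ansible_host=203.0.113.3 ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3
EOF
```

With a layout like this, later playbook runs can target the `control_plane` and `workers` groups separately, which is what makes per-role cluster tasks straightforward.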

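The prerequisites note mentions disabling host key checking as an alternative to accepting each server's fingerprint up front. One documented way to do that is Ansible's `host_key_checking` setting in an `ansible.cfg` file; a sketch follows, where the `kube-cluster` workspace path is an assumption for illustration.

```shell
# Sketch: disable SSH host key checking for Ansible runs started from
# this workspace. The kube-cluster path is an example location.
mkdir -p "$HOME/kube-cluster"
cat > "$HOME/kube-cluster/ansible.cfg" <<'EOF'
[defaults]
host_key_checking = False
EOF
```

Note that this trades away protection against SSH man-in-the-middle attacks for convenience; accepting each fingerprint once up front is the safer option.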