Deploy a Kubernetes cluster on Google Compute Engine (not GKE) using Terraform and Ansible playbooks in 3 minutes.
Environment:
- Google Cloud Compute Engine
- Terraform for IaC
- Ansible for Configuration management
- Jump host - CentOS 7, with Terraform and Ansible installed
- Kubernetes master node - CentOS 7 - k8s-master - 10.138.0.15
- Kubernetes worker1 node - CentOS 7 - k8s-worker1 - 10.138.0.16
- Kubernetes worker2 node - CentOS 7 - k8s-worker2 - 10.138.0.17
Prerequisites:
Install Ansible on jump host
# yum install ansible -y
# ansible --version
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Install Terraform on jump host
Download Terraform package from https://www.terraform.io/downloads.html
Download terraform_0.12.17_linux_amd64.zip, unzip it, and move the terraform binary to a directory on your system's PATH.
# terraform -v
Terraform v0.12.17
Your version of Terraform is out of date! The latest version
is 0.12.20. You can update by downloading from https://www.terraform.io/downloads.html
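The download-and-install steps above can be sketched as follows; the version number and install directory are examples, adjust them to your environment (run as root on the jump host):

```shell
# Example: install Terraform 0.12.x on the jump host
TF_VERSION=0.12.17
curl -sLO "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
unzip -o "terraform_${TF_VERSION}_linux_amd64.zip"
mv terraform /usr/local/bin/terraform   # /usr/local/bin is typically already on PATH
terraform -v
```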
Project directory tree:
# tree
.
├── ansible_playbook
│ ├── group_vars
│ │ └── env_variables
│ ├── hosts
│ ├── playbooks
│ │ ├── configure_master_node.yml
│ │ ├── configure_worker_nodes.yml
│ │ ├── prerequisites.yml
│ │ └── setting_up_nodes.yml
│ ├── setup_master_node.yml
│ └── setup_worker_nodes.yml
├── clean.sh
├── graph.svg
├── main.tf
├── modules
│ ├── gce-k8s-master
│ │ ├── main.tf
│ │ └── variables.tf
│ └── gce-k8s-worker
│ ├── main.tf
│ └── variables.tf
├── README.md
├── terraform.tfstate
├── terraform.tfstate.backup
└── variables.tf
The three directories under the project root are used as follows:
ansible_playbook - Ansible playbooks used to:
- Deploy the Kubernetes master and worker nodes
- Create the Kubernetes cluster
modules/gce-k8s-master - Terraform module used to:
- Deploy the k8s-master GCE instance with a CentOS 7 image
- Assign IP 10.138.0.15
- Create a firewall rule allowing HTTP (port 80)
- Add an SSH key for passwordless authentication
modules/gce-k8s-worker - Terraform module used to:
- Deploy the k8s-worker GCE instances with a CentOS 7 image
- Assign IPs 10.138.0.16 and 10.138.0.17
- Create a firewall rule allowing HTTP (port 80)
- Add an SSH key for passwordless authentication
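The root main.tf is not reproduced in this post. Based on the module layout and the plan output later in this walkthrough, its wiring looks roughly like the sketch below; the module arguments, ordering mechanism, and playbook entry points are assumptions, so check the repository for the exact code:

```hcl
# Sketch (assumed) of the root main.tf wiring
module "gce-k8s-master" {
  source = "./modules/gce-k8s-master"
}

module "gce-k8s-worker" {
  source = "./modules/gce-k8s-worker"
}

# Hands off to Ansible from the jump host once the instances exist.
# The exact ordering mechanism (depends_on/triggers) is omitted in this sketch.
resource "null_resource" "ansible-play" {
  provisioner "local-exec" {
    command = "cd ${var.path} && ansible-playbook -i hosts setup_master_node.yml && ansible-playbook -i hosts setup_worker_nodes.yml"
  }
}
```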
Download Terraform code and Ansible playbook from Git repository:
# git clone https://github.com/vdsridevops/terraform-k8s-gcp.git
Cloning into 'terraform-k8s-gcp'...
remote: Enumerating objects: 41, done.
remote: Counting objects: 100% (41/41), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 41 (delta 4), reused 38 (delta 4), pack-reused 0
Unpacking objects: 100% (41/41), done.
Modify the code according to your environment:
../ansible_playbook/group_vars/env_variables
ad_addr: 10.138.0.15 # IP of K8s-master
cidr_v: 10.244.0.0/16 # K8s cluster CIDR
path: /terraform/k8s-gcp/ansible_playbook
packages:
- docker
- kubelet-1.11.3
- kubeadm-1.11.3
- kubectl-1.11.3
- kubernetes-cni-0.6.0
services:
- docker
- kubelet
token_file: join_token
../ansible_playbook/hosts
[k8smaster]
k8s-master ansible_host=10.138.0.15
[k8sworkers]
k8s-worker1 ansible_host=10.138.0.16
k8s-worker2 ansible_host=10.138.0.17
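You can sanity-check the inventory before Terraform runs, and verify SSH connectivity once the instances exist, with standard Ansible ad-hoc commands (run from the ansible_playbook directory):

```shell
# List the hosts Ansible parses from this inventory (no SSH needed)
ansible -i hosts all --list-hosts

# After the instances are up, verify SSH connectivity
ansible -i hosts all -m ping
```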
../variables.tf (project root)
variable "project" {
description = " Google Cloud Platform - Project "
type = string
default = "devopsbar20"
}
variable "region" {
description = " Google Cloud Platform - Region "
type = string
default = "us-west1"
}
variable "machine_type" {
description = " Machine type "
type = string
default = "n1-standard-1"
}
variable "zone" {
description = " Zone "
type = string
default = "us-west1-a"
}
variable "image" {
description = " Image "
type = string
default = "centos-7"
}
variable "path" {
description = "Ansible playbook path"
type = string
default = "/terraform/k8s-gcp/ansible_playbook"
}
../modules/gce-k8s-master/variables.tf
variable "project" {
description = " Google Cloud Platform - Project "
type = string
default = "devopsbar20"
}
variable "region" {
description = " Google Cloud Platform - Region "
type = string
default = "us-west1"
}
variable "machine_type" {
description = " Machine type "
type = string
default = "n1-standard-2"
}
variable "zone" {
description = " Zone "
type = string
default = "us-west1-a"
}
variable "image" {
description = " Image "
type = string
default = "centos-7"
}
variable "instance_name" {
description = "Instance Names"
type = string
default = "k8s-master"
}
variable "ip" {
description = "IP Address"
type = string
default = "10.138.0.15"
}
variable "passwd" {
description = "Password"
type = string
default = "ansible123"
}
../modules/gce-k8s-worker/variables.tf
variable "project" {
description = " Google Cloud Platform - Project "
type = string
default = "devopsbar20"
}
variable "region" {
description = " Google Cloud Platform - Region "
type = string
default = "us-west1"
}
variable "machine_type" {
description = " Machine type "
type = string
default = "n1-standard-2"
}
variable "zone" {
description = " Zone "
type = string
default = "us-west1-a"
}
variable "image" {
description = " Image "
type = string
default = "centos-7"
}
variable "instance_name" {
description = "Instance Names"
type = list(string)
default = ["k8s-worker1", "k8s-worker2"]
}
variable "ip" {
description = "IP Address"
type = list(string)
default = ["10.138.0.16", "10.138.0.17"]
}
variable "passwd" {
description = "Password"
type = string
default = "ansible123"
}
Deploy environment:
# terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.null: version = "~> 2.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
# terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.ansible-play will be created
+ resource "null_resource" "ansible-play" {
+ id = (known after apply)
}
# module.gce-k8s-master.google_compute_firewall.default will be created
+ resource "google_compute_firewall" "default" {
+ creation_timestamp = (known after apply)
+ destination_ranges = (known after apply)
+ direction = (known after apply)
+ id = (known after apply)
+ name = "k8s-master-allow-http"
+ network = "https://www.googleapis.com/compute/v1/projects/devopsbar20/global/networks/default"
+ priority = 1000
+ project = (known after apply)
+ self_link = (known after apply)
+ source_ranges = (known after apply)
+ target_tags = [
+ "master-http",
]
+ allow {
+ ports = [
+ "80",
]
+ protocol = "tcp"
}
}
# module.gce-k8s-master.google_compute_instance.vm_instance will be created
+ resource "google_compute_instance" "vm_instance" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "n1-standard-2"
+ metadata = {
+ "startup-script" = <<~EOT
echo "ansible123" | passwd --stdin root
sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
EOT
}
+ metadata_fingerprint = (known after apply)
+ min_cpu_platform = (known after apply)
+ name = "k8s-master"
+ project = (known after apply)
+ self_link = (known after apply)
+ tags = [
+ "k8s-master",
]
+ tags_fingerprint = (known after apply)
+ zone = (known after apply)
+ boot_disk {
+ auto_delete = true
+ device_name = (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ kms_key_self_link = (known after apply)
+ mode = "READ_WRITE"
+ source = (known after apply)
+ initialize_params {
+ image = "centos-7"
+ labels = (known after apply)
+ size = (known after apply)
+ type = (known after apply)
}
}
+ network_interface {
+ name = (known after apply)
+ network = "default"
+ network_ip = "10.138.0.15"
+ subnetwork = "default"
+ subnetwork_project = (known after apply)
+ access_config {
+ nat_ip = (known after apply)
+ network_tier = (known after apply)
}
}
+ scheduling {
+ automatic_restart = (known after apply)
+ on_host_maintenance = (known after apply)
+ preemptible = (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
# module.gce-k8s-worker.google_compute_firewall.default will be created
+ resource "google_compute_firewall" "default" {
+ creation_timestamp = (known after apply)
+ destination_ranges = (known after apply)
+ direction = (known after apply)
+ id = (known after apply)
+ name = "k8s-worker-allow-http"
+ network = "https://www.googleapis.com/compute/v1/projects/devopsbar20/global/networks/default"
+ priority = 1000
+ project = (known after apply)
+ self_link = (known after apply)
+ source_ranges = (known after apply)
+ target_tags = [
+ "worker-http",
]
+ allow {
+ ports = [
+ "80",
]
+ protocol = "tcp"
}
}
# module.gce-k8s-worker.google_compute_instance.vm_instance[0] will be created
+ resource "google_compute_instance" "vm_instance" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "n1-standard-2"
+ metadata = {
+ "startup-script" = <<~EOT
echo "ansible123" | passwd --stdin root
sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
EOT
}
+ metadata_fingerprint = (known after apply)
+ min_cpu_platform = (known after apply)
+ name = "k8s-worker1"
+ project = (known after apply)
+ self_link = (known after apply)
+ tags = [
+ "k8s-worker1",
]
+ tags_fingerprint = (known after apply)
+ zone = (known after apply)
+ boot_disk {
+ auto_delete = true
+ device_name = (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ kms_key_self_link = (known after apply)
+ mode = "READ_WRITE"
+ source = (known after apply)
+ initialize_params {
+ image = "centos-7"
+ labels = (known after apply)
+ size = (known after apply)
+ type = (known after apply)
}
}
+ network_interface {
+ name = (known after apply)
+ network = "default"
+ network_ip = "10.138.0.16"
+ subnetwork = "default"
+ subnetwork_project = (known after apply)
+ access_config {
+ nat_ip = (known after apply)
+ network_tier = (known after apply)
}
}
+ scheduling {
+ automatic_restart = (known after apply)
+ on_host_maintenance = (known after apply)
+ preemptible = (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
# module.gce-k8s-worker.google_compute_instance.vm_instance[1] will be created
+ resource "google_compute_instance" "vm_instance" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "n1-standard-2"
+ metadata = {
+ "startup-script" = <<~EOT
echo "ansible123" | passwd --stdin root
sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
EOT
}
+ metadata_fingerprint = (known after apply)
+ min_cpu_platform = (known after apply)
+ name = "k8s-worker2"
+ project = (known after apply)
+ self_link = (known after apply)
+ tags = [
+ "k8s-worker2",
]
+ tags_fingerprint = (known after apply)
+ zone = (known after apply)
+ boot_disk {
+ auto_delete = true
+ device_name = (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ kms_key_self_link = (known after apply)
+ mode = "READ_WRITE"
+ source = (known after apply)
+ initialize_params {
+ image = "centos-7"
+ labels = (known after apply)
+ size = (known after apply)
+ type = (known after apply)
}
}
+ network_interface {
+ name = (known after apply)
+ network = "default"
+ network_ip = "10.138.0.17"
+ subnetwork = "default"
+ subnetwork_project = (known after apply)
+ access_config {
+ nat_ip = (known after apply)
+ network_tier = (known after apply)
}
}
+ scheduling {
+ automatic_restart = (known after apply)
+ on_host_maintenance = (known after apply)
+ preemptible = (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
Plan: 6 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
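As the note suggests, saving the plan to a file guarantees that apply executes exactly the actions that were reviewed:

```shell
terraform plan -out=tfplan
terraform apply tfplan    # applies exactly the saved plan, without re-planning
```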
# terraform apply
(The execution plan printed by terraform apply is identical to the terraform plan output above.)
Plan: 6 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
.......
null_resource.ansible-play (local-exec): PLAY RECAP *********************************************************************
null_resource.ansible-play (local-exec): k8s-master : ok=23 changed=22 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
null_resource.ansible-play (local-exec): k8s-worker1 : ok=14 changed=13 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
null_resource.ansible-play (local-exec): k8s-worker2 : ok=14 changed=13 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
null_resource.ansible-play (local-exec): PLAY [k8sworkers] **************************************************************
null_resource.ansible-play (local-exec): TASK [Copying token to worker nodes] *******************************************
null_resource.ansible-play (local-exec): changed: [k8s-worker2]
null_resource.ansible-play (local-exec): changed: [k8s-worker1]
null_resource.ansible-play (local-exec): TASK [Joining worker nodes with kubernetes master] *****************************
null_resource.ansible-play: Still creating... [3m0s elapsed]
null_resource.ansible-play: Still creating... [3m10s elapsed]
null_resource.ansible-play (local-exec): changed: [k8s-worker1]
null_resource.ansible-play (local-exec): changed: [k8s-worker2]
null_resource.ansible-play (local-exec): PLAY RECAP *********************************************************************
null_resource.ansible-play (local-exec): k8s-worker1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
null_resource.ansible-play (local-exec): k8s-worker2 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
null_resource.ansible-play: Creation complete after 3m11s [id=3551740635995781484]
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
Verification:
Verify that the Google Compute Engine instances were created. Then log in to the Kubernetes master node (k8s-master) and check the cluster status:
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://10.138.0.15:6443
KubeDNS is running at https://10.138.0.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
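A couple of further checks are worth running on the master; output is omitted here. The nodes should report Ready once the pod network add-on (flannel, implied by the 10.244.0.0/16 cluster CIDR above) is running:

```shell
kubectl get nodes -o wide        # all three nodes should eventually report Ready
kubectl get pods -n kube-system  # core components should be in Running state
```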