main.tf variables.tf

$ cat variables.tf

variable "github_token" {

default = "630bc9696d0b2f4ce164b1cabb118eaaa1909838"

}

$ cat main.tf

provider "github" {

token = "${var.github_token}"

}

(agile-aleph-203917) $ ./terraform init

(agile-aleph-203917) $ ./terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Now let's create a managing organization: Settings -> Organizations -> New organization -> Create organization. … Using the Terraform github_repository resource documentation (www.terraform.io/docs/providers/github/r/repository.html), add a description of the repository to the config:

(agile-aleph-203917) $ cat main.tf

provider "github" {

token = "${var.github_token}"

}

resource "github_repository" "terraform_repo" {

name = "terraform-repo"

description = "my terraform repo"

auto_init = true

}

Now it remains to apply it, look at the plan for creating the repository, and agree with it:

(agile-aleph-203917) $ ./terraform apply

provider.github.organization

The GitHub organization name to manage.

Enter a value: essch2

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create

Terraform will perform the following actions:

+ github_repository.terraform_repo

id: <computed>

allow_merge_commit: "true"

allow_rebase_merge: "true"

allow_squash_merge: "true"

archived: "false"

auto_init: "true"

default_branch: <computed>

description: "my terraform repo"

etag: <computed>

full_name: <computed>

git_clone_url: <computed>

html_url: <computed>

http_clone_url: <computed>

name: "terraform-repo"

ssh_clone_url: <computed>

svn_url: <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.

Enter a value: yes

github_repository.terraform_repo: Creating …

allow_merge_commit: "" => "true"

allow_rebase_merge: "" => "true"

allow_squash_merge: "" => "true"

archived: "" => "false"

auto_init: "" => "true"

default_branch: "" => "<computed>"

description: "" => "my terraform repo"

etag: "" => "<computed>"

full_name: "" => "<computed>"

git_clone_url: "" => "<computed>"

html_url: "" => "<computed>"

http_clone_url: "" => "<computed>"

name: "" => "terraform-repo"

ssh_clone_url: "" => "<computed>"

svn_url: "" => "<computed>"

github_repository.terraform_repo: Creation complete after 4s (ID: terraform-repo)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed

Now you can see an empty terraform-repo repository in the web interface. Reapplying will not create another repository, because Terraform only applies changes that have not been made yet:

(agile-aleph-203917) $ ./terraform apply

provider.github.organization

The GitHub organization name to manage.

Enter a value: essch2

github_repository.terraform_repo: Refreshing state … (ID: terraform-repo)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

But if I change the name, Terraform will try to apply the change by deleting the existing repository and creating a new one with the new name. It is important to note that any data we had pushed into this repository would be deleted along with it. To check how an update will be performed, you can first ask for the list of planned actions with the ./terraform plan command. So, let's get started:

(agile-aleph-203917) $ cat main.tf

provider "github" {

token = "${var.github_token}"

}

resource "github_repository" "terraform_repo" {

name = "terraform-repo2"

description = "my terraform repo"

auto_init = true

}

(agile-aleph-203917) $ ./terraform plan

provider.github.organization

The GitHub organization name to manage.

Enter a value: essch

Refreshing Terraform state in-memory prior to plan …

The refreshed state will be used to calculate this plan, but will not be

persisted to local or remote state storage.

github_repository.terraform_repo: Refreshing state … (ID: terraform-repo)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create

Terraform will perform the following actions:

+ github_repository.terraform_repo

id: <computed>

allow_merge_commit: "true"

allow_rebase_merge: "true"

allow_squash_merge: "true"

archived: "false"

auto_init: "true"

default_branch: <computed>

description: "my terraform repo"

etag: <computed>

full_name: <computed>

git_clone_url: <computed>

html_url: <computed>

http_clone_url: <computed>

name: "terraform-repo2"

ssh_clone_url: <computed>

svn_url: <computed>

Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that exactly these actions will be performed if "terraform apply" is subsequently run.

esschtolts@cloudshell:~/terraform (agile-aleph-203917)$ ./terraform apply

provider.github.organization

The GitHub organization name to manage.

Enter a value: essch2

github_repository.terraform_repo: Refreshing state … (ID: terraform-repo)

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ github_repository.terraform_repo (new resource required)

id: "terraform-repo" => <computed> (forces new resource)

allow_merge_commit: "true" => "true"

allow_rebase_merge: "true" => "true"

allow_squash_merge: "true" => "true"

archived: "false" => "false"

auto_init: "true" => "true"

default_branch: "master" => <computed>

description: "my terraform repo" => "my terraform repo"

etag: "W/\"a92e0b300d8c8d2c869e5f271da6c2ab\"" => <computed>

full_name: "essch2/terraform-repo" => <computed>

git_clone_url: "git://github.com/essch2/terraform-repo.git" => <computed>

html_url: "https://github.com/essch2/terraform-repo" => <computed>

http_clone_url: "https://github.com/essch2/terraform-repo.git" => <computed>

name: "terraform-repo" => "terraform-repo2" (forces new resource)

ssh_clone_url: "git@github.com:essch2/terraform-repo.git" => <computed>

svn_url: "https://github.com/essch2/terraform-repo" => <computed>

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.

Enter a value: yes

github_repository.terraform_repo: Destroying … (ID: terraform-repo)

github_repository.terraform_repo: Destruction complete after 0s

github_repository.terraform_repo: Creating …

allow_merge_commit: "" => "true"

allow_rebase_merge: "" => "true"

allow_squash_merge: "" => "true"

archived: "" => "false"

auto_init: "" => "true"

default_branch: "" => "<computed>"

description: "" => "my terraform repo"

etag: "" => "<computed>"

full_name: "" => "<computed>"

git_clone_url: "" => "<computed>"

html_url: "" => "<computed>"

http_clone_url: "" => "<computed>"

name: "" => "terraform-repo2"

ssh_clone_url: "" => "<computed>"

svn_url: "" => "<computed>"

github_repository.terraform_repo: Creation complete after 5s (ID: terraform-repo2)

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

For the sake of clarity, I have created a big security hole: I put the token in the configuration file, and therefore into the repository, and now anyone who can access it can delete all the repositories. Terraform provides several other ways to set variables besides the one used here. I will simply recreate the token and override it with one passed on the command line:

(agile-aleph-203917) $ rm variables.tf

(agile-aleph-203917) $ sed -i 's/terraform-repo2/terraform-repo3/' main.tf

./terraform apply -var="github_token=f7602b82e02efcbae7fc915c16eeee518280cf2a"
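Besides -var, the token can be kept out of the repository entirely: Terraform also reads variables from any file passed with -var-file and from environment variables prefixed with TF_VAR_. A minimal sketch (the secret.tfvars file name and the token value are placeholders; such a file should be added to .gitignore):

$ cat secret.tfvars
github_token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$ ./terraform apply -var-file="secret.tfvars"
# or via an environment variable, with no file at all:
$ export TF_VAR_github_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$ ./terraform apply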

Building infrastructure in GCP with Terraform

Each cloud has its own set of services and its own APIs for them. To simplify the transition from one cloud to another, both in terms of learning and in terms of rewriting code, there are universal libraries that implement the Facade pattern. A facade here is a universal API that hides the peculiarities of the systems behind it.

One representative of such cloud API facades is KOPS. KOPS is a tool for deploying Kubernetes to GCP, AWS and Azure. KOPS is similar to kubectl: it is a binary, it can be driven both by commands and by a YAML config, and it has a similar syntax, but unlike kubectl it creates not PODs but cluster nodes. Another example is Terraform, which specializes in deployment by configuration, following the IaC (Infrastructure as Code) concept.
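For example, a KOPS run might look like the following sketch (the cluster name, the S3 bucket for its state and the zone are placeholder assumptions):

$ kops create cluster --name=cluster.example.com --state=s3://kops-state-bucket --zones=us-east-1a --node-count=2
$ kops update cluster cluster.example.com --state=s3://kops-state-bucket --yes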

To create the infrastructure we need a token; in GCP it is created for a service account to which access is granted. To do this, I went along the path IAM and administration -> Service accounts -> Create a service account, assigned the Owner role on creation (full access, for test purposes), created a key in JSON format with the Create key button and renamed the downloaded key to key.json. To describe the infrastructure, I used the documentation at www.terraform.io/docs/providers/google/index.html:
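The same service account and key can also be created from the command line; a gcloud sketch (the account name terraform is an assumption):

$ gcloud iam service-accounts create terraform --display-name "terraform"
$ gcloud projects add-iam-policy-binding agile-aleph-203917 --member serviceAccount:terraform@agile-aleph-203917.iam.gserviceaccount.com --role roles/owner
$ gcloud iam service-accounts keys create key.json --iam-account terraform@agile-aleph-203917.iam.gserviceaccount.com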

 

(agil7e-aleph-20391) $ cat main.tf

provider "google" {

credentials = "${file("key.json")}"

project = "agile-aleph-203917"

region = "us-central1"

}

resource "google_compute_instance" "terraform" {

name = "terraform"

machine_type = "n1-standard-1"

zone = "us-central1-a"

boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}

network_interface {

network = "default"

}

}

Let's check the user rights:

(agile-aleph-203917) $ gcloud auth list

Credentialed Accounts

ACTIVE ACCOUNT

* esschtolts@gmail.com

To set the active account, run:

$ gcloud config set account `ACCOUNT`

Let's select the project as the current one (you can also make it the default):

$ gcloud config set project agil7e-aleph-20391;

(agil7e-aleph-20391) $ ./terraform init | grep success

Terraform has been successfully initialized!

Now, after copying the key into the key.json file in the Terraform directory, let's create an instance (it will also appear in the web console):

machine_type: "" => "n1-standard-1"

metadata_fingerprint: "" => "<computed>"

name: "" => "terraform"

network_interface.#: "" => "1"

network_interface.0.address: "" => "<computed>"

network_interface.0.name: "" => "<computed>"

network_interface.0.network: "" => "default"

network_interface.0.network_ip: "" => "<computed>"

project: "" => "<computed>"

scheduling.#: "" => "<computed>"

self_link: "" => "<computed>"

tags_fingerprint: "" => "<computed>"

zone: "" => "us-central1-a"

google_compute_instance.terraform: Still creating … (10s elapsed)

google_compute_instance.terraform: Still creating … (20s elapsed)

google_compute_instance.terraform: Still creating … (30s elapsed)

google_compute_instance.terraform: Still creating … (40s elapsed)

google_compute_instance.terraform: Creation complete after 40s (ID: terraform)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

That's it, we have created a server instance. Now let's remove it:

~/terraform (agil7e-aleph-20391)$ ./terraform apply

google_compute_instance.terraform: Refreshing state … (ID: terraform)

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

- destroy

Terraform will perform the following actions:

- google_compute_instance.terraform

Plan: 0 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.

Enter a value: yes

google_compute_instance.terraform: Destroying … (ID: terraform)

google_compute_instance.terraform: Still destroying … (ID: terraform, 10s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 20s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 30s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 40s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 50s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m0s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m10s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m20s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m30s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m40s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m50s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 2m0s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 2m10s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 2m20s elapsed)

google_compute_instance.terraform: Destruction complete after 2m30s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.

Building infrastructure in AWS

To create the AWS cluster configuration, let's create a separate folder for it, and move the previous configuration into a parallel one:

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ mkdir gcp

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ mv main.tf gcp/main.tf

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ mkdir aws

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ cd aws

A role is an analogue of a user, only not for people but for services, such as AWS services; in our case these are the EKS servers. Still, to me roles are analogous not to users but to groups, for example a group for creating a cluster, a group for working with a database, and so on. Only one role can be assigned to a server, and a role can contain multiple rights (policies). As a result we do not need to work with logins and passwords, tokens or certificates (store them, transfer them, restrict access to them); we only specify the rights in the web console (IAM) or via the API (and, derivatively, in the configuration). Our cluster needs these rights so that it can configure and replicate itself, since it consists of standard AWS services. To manage the cluster components AWS EC2 (servers), AWS ELB (Elastic Load Balancer) and AWS KMS (Key Management Service, key management and encryption), the AmazonEKSClusterPolicy access is required; for monitoring, AmazonEKSServicePolicy is required, which uses CloudWatch Logs (log-based monitoring), Route 53 (DNS entries in the zone) and IAM (rights management). I did not describe the role in the config but created it through IAM following the documentation: https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html#create-service-role.

For greater reliability, the nodes of the Kubernetes cluster should be located in different zones, that is, in different data centers. Each region contains several zones to maintain fault tolerance while keeping latency (server response time) minimal for the local population. It is important to note that some regions may be represented several times within the same country, for example us-east-1 in US East (N. Virginia) and us-east-2 in US East (Ohio); such regions are numbered. At the time of writing, EKS cluster creation is available only in the us-east regions.

For the developer, the VPC in its simplest form boils down to declaring a subnet as a dedicated resource.

Let's write the configuration according to the documentation at www.terraform.io/docs/providers/aws/r/eks_cluster.html:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ cat main.tf

provider "aws" {

access_key = "${var.token}"

secret_key = "${var.key}"

region = "us-east-1"

}

# Params

variable "token" {

default = ""

}

variable "key" {

default = ""

}

# EKS

resource "aws_eks_cluster" "example" {

enabled_cluster_log_types = ["api", "audit"]

name = "exapmle"

role_arn = "arn:aws:iam::177510963163:role/ServiceRoleForAmazonEKS2"

vpc_config {

subnet_ids = ["${aws_subnet.subnet_1.id}", "${aws_subnet.subnet_2.id}"]

}

}

output "endpoint" {

value = "${aws_eks_cluster.example.endpoint}"

}

output "kubeconfig-certificate-authority-data" {

value = "${aws_eks_cluster.example.certificate_authority.0.data}"

}

# Role

data "aws_iam_policy_document" "eks-role-policy" {

statement {

actions = ["sts:AssumeRole"]

principals {

type = "Service"

identifiers = ["eks.amazonaws.com"]

}

}

}

resource "aws_iam_role" "tf_role" {

name = "tf_role"

assume_role_policy = "${data.aws_iam_policy_document.eks-role-policy.json}"

tags = {

tag-key = "tag-value"

}

}

resource "aws_iam_role_policy_attachment" "attach-cluster" {

role = "tf_role"

policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"

}

resource "aws_iam_role_policy_attachment" "attach-service" {

role = "tf_role"

policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"

}

# Subnet

resource "aws_subnet" "subnet_1" {

vpc_id = "${aws_vpc.main.id}"

cidr_block = "10.0.1.0/24"

availability_zone = "us-east-1a"

tags = {

Name = "Main"

}

}

resource "aws_subnet" "subnet_2" {

vpc_id = "${aws_vpc.main.id}"

cidr_block = "10.0.2.0/24"

availability_zone = "us-east-1b"

tags = {

Name = "Main"

}

}

resource "aws_vpc" "main" {

cidr_block = "10.0.0.0/16"

}

After 9 minutes 44 seconds, I got a ready-made self-supporting infrastructure for a Kubernetes cluster:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform apply -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlin1ViR9+Mo

Now let's delete (it took me 10 minutes 23 seconds):

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform destroy -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlin1ViR9+Mo

Destroy complete! Resources: 7 destroyed.

Establishing the CI/CD process

Amazon provides (aws.amazon.com/ru/devops/) a wide range of DevOps tools designed for cloud infrastructure:

* AWS CodePipeline – the service lets you build, in a visual editor, a chain of stages from a set of services that the code must pass through before it reaches production, for example build and test.

* AWS CodeBuild – the service provides an auto-scaling build queue, which may be needed for compiled languages, when adding features or making changes requires a long recompilation of the whole application and a single build server becomes a bottleneck when rolling out changes.

* AWS CodeDeploy – automates deployment and rollback in case of errors.

* AWS CodeStar – the service combines the main features of the previous services.

Setting up remote control

artifact server

aws s3 ls s3://name_bucket

aws s3 sync s3://name_bucket name_folder --exclude "*.tmp" # files from the bucket will be downloaded to the folder, for example, a website
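The same bucket can serve as a simple artifact server: the build result is uploaded with aws s3 cp and picked up on the target machine with aws s3 sync (the bucket and file names below are placeholders):

aws s3 mb s3://name_bucket # create the bucket
aws s3 cp ./build/app.tar.gz s3://name_bucket/artifacts/app.tar.gz # upload the artifact
aws s3 sync s3://name_bucket/artifacts ./artifacts # download artifacts on the target machine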

Now, we need to download the AWS plugin:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform init | grep success

Terraform has been successfully initialized!

Now we need to get access to AWS. To do that, click on your user name in the header of the web interface; next to My account the My Security Credentials item will appear, and by selecting it we go to Access Key -> Create New Access Key. Let's create the EKS (Elastic Kubernetes Service):

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform apply

-var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlinAlR9+MoU1ViY7"

Delete everything:

$ ../terraform destroy

Creating a cluster in GCP

node pool – a group of cluster nodes that share the same configuration:

resource "google_container_cluster" "primary" {

name = "tf"

location = "us-central1"

initial_node_count = 1

}
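A separately managed pool of worker nodes can then be attached to such a cluster with the google_container_node_pool resource; a minimal sketch (the pool name, machine type and node count are assumptions):

resource "google_container_node_pool" "primary_nodes" {
  name = "tf-pool"
  cluster = "${google_container_cluster.primary.name}"
  location = "us-central1"
  node_count = 2
  node_config {
    machine_type = "n1-standard-1"
  }
}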

$ cat main.tf # configuration state

terraform {

required_version = "> 0.10.0"

}

terraform {

backend "s3" {

bucket = "foo-terraform"

key = "bucket/terraform.tfstate"

region = "us-east-1"

encrypt = "true"

}

}

$ cat cloud.tf # cloud configuration

provider "hcloud" {

token = "${var.hcloud_token}"

}

$ cat variables.tf # variables and getting tokens

variable "hcloud_token" {}

$ cat instances.tf # create resources

resource "hcloud_server" "server" {....

$ terraform import aws_acm_certificate.cert arn:aws:acm:eu-central-1:123456789012:certificate/7e7a28d2-163f-4b8f-b9cd-822f96c08d6a
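Note that terraform import only writes the existing object into the state; the configuration must already contain a matching resource block, which is then filled in by hand until terraform plan shows no differences, for example a stub like:

resource "aws_acm_certificate" "cert" {
  # arguments (for example domain_name and validation_method) are added
  # after the import so that the config matches the real certificate
}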

 

$ terraform init # Initialize configs

$ terraform plan # Check actions

$ terraform apply # Running actions

Debugging:

essh@kubernetes-master:~/graylog$ sudo docker run --name graylog --link graylog_mongo:mongo --link graylog_elasticsearch:elasticsearch \

-p 9000:9000 -p 12201:12201 -p 1514:1514 \

-e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9000/" \

-d graylog/graylog:3.0

0f21f39192440d9a8be96890f624c1d409883f2e350ead58a5c4ce0e91e54c9d

docker: Error response from daemon: driver failed programming external connectivity on endpoint graylog (714a6083b878e2737bd4d4577d1157504e261c03cb503b6394cb844466fb4781): Bind for 0.0.0.0:9000 failed: port is already allocated.

essh@kubernetes-master:~/graylog$ sudo netstat -nlp | grep 9000

tcp6 0 0 :::9000 :::* LISTEN 2505/docker-proxy

essh@kubernetes-master:~/graylog$ docker rm graylog

graylog

essh@kubernetes-master:~/graylog$ sudo docker run --name graylog --link graylog_mongo:mongo --link graylog_elasticsearch:elasticsearch \

-p 9001:9000 -p 12201:12201 -p 1514:1514 \

-e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9001/" \

-d graylog/graylog:3.0

e5aefd6d630a935887f494550513d46e54947f897e4a64b0703d8f7094562875

https://blog.maddevs.io/terrafom-hetzner-a2f22534514b

For example, let's create one instance:

$ cat aws/provider.tf

provider "aws" {

region = "us-west-1"

}

resource "aws_instance" "my_ec2" {

ami = "${data.aws_ami.ubuntu.id}"

instance_type = "t2.micro"

}
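The config references data.aws_ami.ubuntu, which is not shown in the listing; a typical definition, following the provider documentation pattern (the exact image name filter is an assumption), looks like this:

data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
  filter {
    name = "virtualization-type"
    values = ["hvm"]
  }
  owners = ["099720109477"] # Canonical
}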

$ cd aws

$ aws configure

$ terraform init

$ terraform apply --auto-approve

$ cd ..

provider "aws" {

region = "us-west-1"

}

resource "aws_sqs_queue" "terraform_queue" {

name = "terraform-queue"

delay_seconds = 90

max_message_size = 2048

message_retention_seconds = 86400

receive_wait_time_seconds = 10

}

data "aws_route53_zone" "vuejs_phalcon" {

name = "test.com."

private_zone = true

}

resource "aws_route53_record" "www" {

zone_id = "${data.aws_route53_zone.vuejs_phalcon.zone_id}"

name = "www.${data.aws_route53_zone.vuejs_phalcon.name}"

type = "A"

ttl = "300"

records = ["10.0.0.1"]

}

resource "aws_elasticsearch_domain" "example" {

domain_name = "example"

elasticsearch_version = "1.5"

cluster_config {

instance_type = "r4.large.elasticsearch"

}

snapshot_options {

automated_snapshot_start_hour = 23

}

}

resource "aws_eks_cluster" "eks_vuejs_phalcon" {

name = "eks_vuejs_phalcon"

role_arn = "${aws_iam_role.eks_vuejs_phalcon.arn}"

vpc_config {

subnet_ids = ["${aws_subnet.eks_vuejs_phalcon.id}", "${aws_subnet.example2.id}"]

}

}

output "endpoint" {

value = "${aws_eks_cluster.eks_vuejs_phalcon.endpoint}"

}

output "kubeconfig-certificate-authority-data" {

value = "${aws_eks_cluster.eks_vuejs_phalcon.certificate_authority.0.data}"

}

provider "google" {

credentials = "${file("account.json")}"

project = "my-project-id"

region = "us-central1"

}

resource "google_container_cluster" "primary" {

name = "my-gke-cluster"

location = "us-central1"

remove_default_node_pool = true

initial_node_count = 1

master_auth {

username = ""

password = ""

}

}

output "client_certificate" {

value = "${google_container_cluster.primary.master_auth.0.client_certificate}"

}

output "client_key" {

value = "${google_container_cluster.primary.master_auth.0.client_key}"

}

output "cluster_ca_certificate" {

value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}"

}

$ cat deployment.yml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: phalcon-vuejs

  namespace: development

spec:

  selector:

    matchLabels:

      app: vuejs

  replicas: 1

  template:

    metadata:

      labels:

        app: vuejs

    spec:

      initContainers:

      - name: vuejs-build

        image: vuejs/ci

        volumeMounts:

        - name: app

          mountPath: /app/public

        command:

        - /bin/bash

        - -c

        - |

          cd /app/public

          git clone essch/vuejs_phalcon:1.0 .

          npm test

          npm build

      containers:

      - name: healthcheck

        image: mileschou/phalcon:7.2-cli

        args:

        - /bin/sh

        - -c

        - cd /usr/src/app && git clone essch/app_phalcon:1.0 && touch /tmp/healthy && sleep 10 && php script.php

        readinessProbe:

          exec:

            command:

            - cat

            - /tmp/healthy

          initialDelaySeconds: 5

          periodSeconds: 5

        livenessProbe:

          exec:

            command:

            - cat

            - /tmp/healthy

          initialDelaySeconds: 15

          periodSeconds: 5

      volumes:

      - name: app

        emptyDir: {}

So we created an AWS EC2 instance. We omitted specifying the keys because the aws utility is already authorized (via aws configure) and Terraform will use that authorization.

Also, for code reuse, Terraform supports variables, data sources, and modules.
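A module is simply a folder with its own *.tf files that is called with parameters; a minimal sketch (the ./modules/network path and its variables are hypothetical):

module "network" {
  # hypothetical local module with its own variables.tf, main.tf and outputs.tf
  source = "./modules/network"
  cidr_block = "190.160.0.0/16"
}
# outputs declared inside the module are then read as "${module.network.vpc_id}"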

Let's create a separate network:

resource "aws_vpc" "my_vpc" {

cidr_block = "190.160.0.0/16"

instance_tenancy = "default"

}

resource "aws_subnet" "my_subnet" {

vpc_id = "${aws_vpc.my_vpc.id}"

cidr_block = "190.160.1.0/24"

}

$ cat gce/provider.tf

provider "google" {

credentials = "${file("account.json")}"

project = "my-project-id"

region = "us-central1"

}

resource "google_compute_instance" "default" {

name = "test"

machine_type = "n1-standard-1"

zone = "us-central1-a"

}

$ cd gce

$ terraform init

$ terraform apply

$ cd ..

For distributed work, let's put the state of the infrastructure in AWS S3 (you can also put other data there), but, for security, in a different region:

terraform {

backend "s3" {

bucket = "tfstate"

key = "terraform.tfstate"

region = "us-east-2"

}

}
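Other configurations (and other team members) can then read outputs from this shared state through the terraform_remote_state data source; a sketch in the 0.11-style syntax used here, assuming the state exposes an output named vpc_id:

data "terraform_remote_state" "infra" {
  backend = "s3"
  config {
    bucket = "tfstate"
    key = "terraform.tfstate"
    region = "us-east-2"
  }
}
# referenced elsewhere as "${data.terraform_remote_state.infra.vpc_id}"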

provider "kubernetes" {

host = "https://104.196.242.174"

username = "ClusterMaster"

password = "MindTheGap"

}

resource "kubernetes_pod" "my_pod" {

spec {

container {

image = "nginx:1.7.9"

name = "nginx"

port {

container_port = 80

}

}

}

}

Commands:

terraform init # downloading dependencies according to configs, checking them

terraform validate # syntax check

terraform plan # to see in detail how the infrastructure will be changed and why, for example,

whether only the service metadata will be changed or the service itself will be re-created, which is often unacceptable for databases.

terraform apply # applying changes
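To guarantee that exactly the reviewed actions are applied, the plan can be saved to a file and passed to apply:

terraform plan -out=tfplan # save the reviewed plan to a file
terraform apply tfplan # apply exactly that saved plan, without re-planning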

The common part for all providers is the core.

$ which aws

$ aws configure # https://www.youtube.com/watch?v=IxA1IPypzHs

$ cat aws.tf

# https://www.terraform.io/docs/providers/aws/r/instance.html

resource "aws_instance" "ec2instance" {

ami = "${var.ami}"

instance_type = "t2.micro"

}

resource "aws_security_group" "instance_gc" {

}

$ cat run.sh

export AWS_ACCESS_KEY_ID="anaccesskey"

export AWS_SECRET_ACCESS_KEY="asecretkey"

export AWS_DEFAULT_REGION="us-west-2"

terraform plan

terraform apply

$ cat gce.tf # https://www.terraform.io/docs/providers/google/index.html#

# Google Cloud Platform Provider

provider "google" {

credentials = "${file("account.json")}"

project = "phalcon"

region = "us-central1"

}

# https://www.terraform.io/docs/providers/google/r/app_engine_application.html

resource "google_project" "my_project" {

name = "My Project"

project_id = "your-project-id"

org_id = "1234567"

}

resource "google_app_engine_application" "app" {

project = "${google_project.my_project.project_id}"

location_id = "us-central"

}

# google_compute_instance

resource "google_compute_instance" "default" {

name = "test"

machine_type = "n1-standard-1"

zone = "us-central1-a"

tags = ["foo", "bar"]

boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}

// Local SSD disk

scratch_disk {

}

network_interface {

network = "default"

access_config {

// Ephemeral IP

}

}

metadata = {

foo = "bar"

}

metadata_startup_script = "echo hi > /test.txt"

service_account {

scopes = ["userinfo-email", "compute-ro", "storage-ro"]

}

}

Extensibility is possible using an external data source, whose handler can be any executable, for example a Bash or Python script:

data "external" "python3" {

program = ["python3"]

}
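The external program must read a JSON object from stdin and print a JSON object of string values to stdout; its fields then become available as data.external.<name>.result. A sketch with a hypothetical handler.py script:

data "external" "handler" {
  program = ["python3", "${path.module}/handler.py"] # hypothetical script
  query = {
    cluster = "node-cluster"
  }
}
output "external_status" {
  # handler.py is expected to print something like {"status": "ok"}
  value = "${data.external.handler.result.status}"
}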

Building a cluster of machines with Terraform

Clustering with Terraform is covered in the section Building infrastructure in GCP. Now let's pay more attention to the cluster itself rather than to the tools for creating it. I will create a project called node-cluster through the GCE admin panel (displayed in the interface header). For Kubernetes I downloaded a key via IAM and administration -> Service accounts -> Create a service account, selecting the Owner role when creating it, and saved it into the project folder as kubernetes_key.json:

essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.json ./kubernetes_key.json

Downloaded terraform:

essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip > /dev/null 2> /dev/null

essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip

Archive: terraform_0.12.2_linux_amd64.zip

inflating: terraform

essh@kubernetes-master:~/node-cluster$ ./terraform version

Terraform v0.12.2

I added the GCE provider and started downloading its plugins ("drivers"):

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster"

region = "us-central1"

}

essh@kubernetes-master:~/node-cluster$ ./terraform init

Initializing the backend …

Initializing provider plugins …

– Checking for available provider plugins …

– Downloading plugin for provider "google" (terraform-providers / google) 2.8.0 …

The following providers do not have any version constraints in configuration,

so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking

changes, it is recommended to add version = "…" constraints to the

corresponding provider blocks in configuration, with the constraint strings

suggested below.

* provider.google: version = "~> 2.8"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see

any changes that are required for your infrastructure. All Terraform commands

should now work.

If you ever set or change modules or backend configuration for Terraform,

rerun this command to reinitialize your working directory. If you forget, other

commands will detect it and remind you to do so if necessary.
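Following that recommendation, the provider version can be pinned right in the provider block:

provider "google" {
  version = "~> 2.8"
  credentials = "${file("kubernetes_key.json")}"
  project = "node-cluster"
  region = "us-central1"
}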

Add a virtual machine:

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_compute_instance" "cluster" {
