Kubernetes Deployment scripts on AWS not working

Hello community. I am building infrastructure on AWS with Terraform and trying to create a Kubernetes deployment with the following Terraform scripts.

    # The script for the deployment

    provider "kubernetes" {
      load_config_file       = "false"
      host                   = data.aws_eks_cluster.cluster.endpoint
      token                  = data.aws_eks_cluster_auth.cluster.token
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    }

    resource "kubernetes_deployment" "nginx" {
      metadata {
        name = "scalable-nginx"
        labels = {
          App = "ScalableNginx"
        }
      }

      spec {
        replicas = 4
        selector {
          match_labels = {
            App = "ScalableNginx"
          }
        }
        template {
          metadata {
            labels = {
              App = "ScalableNginx"
            }
          }
          spec {
            container {
              image = "markoshust/magento-nginx:1.18-2"
              name  = "magento-nginx"

              port {
                container_port = 81
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }
      }
    }

    resource "kubernetes_service" "nginx" {
      metadata {
        name = "nginx"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 81
          target_port = 81
        }

        type = "NodePort"
      }
    }

    resource "kubernetes_deployment" "magento2-php" {
      metadata {
        name = "scalable-magento2-php"
        labels = {
          App = "Scalablemagento2-php"
        }
      }

      spec {
        replicas = 4
        selector {
          match_labels = {
            App = "Scalablemagento2-php"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalablemagento2-php"
            }
          }
          spec {
            container {
              image = "markoshust/magento-php:7.4-fpm-0"
              name  = "magento2-php"

              port {
                container_port = 80
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }
      }
    }

    resource "kubernetes_service" "magento2-php" {
      metadata {
        name = "magento2-php"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 80
          target_port = 80
        }

        type = "NodePort"
      }
    }

    resource "kubernetes_deployment" "redis" {
      metadata {
        name = "scalable-redis"
        labels = {
          App = "Scalableredis"
        }
      }

      spec {
        replicas = 4
        selector {
          match_labels = {
            App = "Scalableredis"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalableredis"
            }
          }
          spec {
            container {
              image = "redis"
              name  = "redis"

              port {
                container_port = 80
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }
      }
    }

    resource "kubernetes_service" "redis" {
      metadata {
        name = "redis"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 80
          target_port = 80
        }

        type = "NodePort"
      }
    }

    resource "kubernetes_deployment" "varnish" {
      metadata {
        name = "scalable-varnish"
        labels = {
          App = "Scalablevarnish"
        }
      }

      spec {
        replicas = 4
        selector {
          match_labels = {
            App = "Scalablevarnish"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalablevarnish"
            }
          }
          spec {
            container {
              image = "varnish"
              name  = "varnish"

              port {
                container_port = 80
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }
      }
    }

    resource "kubernetes_service" "varnish" {
      metadata {
        name = "redis"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 80
          target_port = 80
        }

        type = "NodePort"
      }
    }

    resource "kubernetes_deployment" "mysqlpercona" {
      metadata {
        name = "scalable-mysqlpercona"
        labels = {
          App = "Scalablemysqlpercona"
        }
      }

      spec {
        replicas = 4
        selector {
          match_labels = {
            App = "Scalablemysqlperconah"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalablemysqlpercona"
            }
          }
          spec {
            container {
              image = "percona"
              name  = "percona"

              port {
                container_port = 3306
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }
      }
    }

    resource "kubernetes_service" "mysqlpercona" {
      metadata {
        name = "redis"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 3306
          target_port = 3306
        }

        type = "NodePort"
      }
    }

The script does not deploy, and I cannot find the images in the AWS cluster. I would really appreciate the community's help in getting around this problem. Thanks.

Hi @topeawolwo

Welcome to the Terraform forums.

Could you please tell us:

  • Is the EKS cluster already created, or are you creating the aws_eks_cluster in the same Terraform configuration files?
  • What error are you getting?

Please, when posting code, use the “<>” icon at the top of the editor to format the code.


Hi @javierruizjimenez

I am creating the aws_eks_cluster in another Terraform file.

What error are you getting? After running all the scripts, the AWS EKS cluster was created, but the image was not created in the EKS cluster.
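
For reference, the provider block in my first post assumes matching EKS data sources in the same Terraform configuration; a minimal sketch (the cluster name "my-eks-cluster" is a placeholder) looks like this:

    # Hypothetical data sources matching the provider block above;
    # replace "my-eks-cluster" with the real cluster name.
    data "aws_eks_cluster" "cluster" {
      name = "my-eks-cluster"
    }

    data "aws_eks_cluster_auth" "cluster" {
      # Issues a short-lived token for authenticating to the cluster
      name = "my-eks-cluster"
    }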

Thanks @topeawolwo

I have been able to deploy your code (1) in my Kubernetes cluster.

I had issues with the number of CPUs: the deployments requested more CPU than was available in my Kubernetes cluster, so Terraform kept “waiting” for the Pods to be created. I have reduced all replicas to 1.

  • Have you checked the Kubernetes logs or the Dashboard for errors?
  • Does Terraform tell you everything was created, or what message do you get after applying?
  • What do you get when running kubectl get pods --all-namespaces? (2)

I also found problems with scalable-magento2-php; it was in “Pending” state (waiting for resources):

    $ kubectl get pods --all-namespaces
    NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
    default                color-blue-dep-5fb96494d7-jv6zw              1/1     Running   1          2d16h
    default                color-blue-dep-5fb96494d7-k94xm              1/1     Running   1          2d16h
    default                color-blue-dep-5fb96494d7-mjj2k              1/1     Running   1          2d16h
    default                scalable-magento2-php-69c8fdbdd6-6jcpw       0/1     Pending   0          2m28s
    default                scalable-nginx-5f7d59d89b-c9rhx              1/1     Running   0          21m
    default                scalable-nginx-5f7d59d89b-dw485              1/1     Running   0          21m
    default                scalable-nginx-5f7d59d89b-kcv9n              1/1     Running   0          21m
    default                scalable-nginx-5f7d59d89b-rxxk4              1/1     Running   0          21m
    default                scalable-redis-586c5884bc-2tr22              1/1     Running   0          21m
    default                scalable-redis-586c5884bc-2zjrc              1/1     Running   0          21m
    default                scalable-redis-586c5884bc-dgb58              1/1     Running   0          21m
    default                scalable-redis-586c5884bc-mfgxh              1/1     Running   0          21m
    default                scalable-varnish-7d9d578f5f-9wsvv            1/1     Running   0          21m
    default                scalable-varnish-7d9d578f5f-fhwkq            1/1     Running   0          21m
    default                scalable-varnish-7d9d578f5f-px4wf            1/1     Running   0          21m
    default                scalable-varnish-7d9d578f5f-v26g6            0/1     Pending   0          21m
    kube-system            calico-kube-controllers-578894d4cd-nf4pm     1/1     Running   1          2d21h
    kube-system            calico-node-8pv65                            1/1     Running   1          2d20h
    kube-system            calico-node-nmqmr                            1/1     Running   1          2d21h
    kube-system            calico-node-zcdd4                            1/1     Running   1          2d21h
    kube-system            coredns-66bff467f8-6jnkg                     1/1     Running   1          2d21h
    kube-system            coredns-66bff467f8-qs54m                     1/1     Running   1          2d21h
    kube-system            etcd-k8s-m-1                                 1/1     Running   1          2d21h
    kube-system            kube-apiserver-k8s-m-1                       1/1     Running   1          2d21h
    kube-system            kube-controller-manager-k8s-m-1              1/1     Running   1          2d21h
    kube-system            kube-proxy-j8gwm                             1/1     Running   1          2d21h
    kube-system            kube-proxy-jxww4                             1/1     Running   1          2d21h
    kube-system            kube-proxy-lvkd2                             1/1     Running   1          2d20h
    kube-system            kube-scheduler-k8s-m-1                       1/1     Running   1          2d21h
    kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-ttqtt   1/1     Running   1          2d20h
    kubernetes-dashboard   kubernetes-dashboard-7b544877d5-5hjr6        1/1     Running   4          2d20h

After deleting other Pods, the magento2 Pod was able to get the needed resources and start:

    $ kubectl logs -f scalable-magento2-php-69c8fdbdd6-6jcpw
  [03-Aug-2020 14:49:44] NOTICE: fpm is running, pid 1
  [03-Aug-2020 14:49:44] NOTICE: ready to handle connections
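
If the cluster is short on capacity, an alternative to deleting other Pods is lowering the CPU/memory requests in the deployments so the scheduler can place all replicas; a sketch of the container's resources block with smaller, illustrative values:

    resources {
      limits {
        cpu    = "250m"  # half of the original 0.5-CPU limit
        memory = "256Mi"
      }
      requests {
        cpu    = "100m"  # the scheduler only reserves this much per Pod
        memory = "50Mi"
      }
    }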

1: Your code, edited to replace the curly quotes with straight quotes and to use my config_context:

    terraform {
      required_version = "~> 0.12" 
    }

    provider "kubernetes" {
      #Context to choose from the config file.
      config_context = "kubernetes-admin@ditwl-k8s-01"
      version        = "~> 1.12"
    }

    resource "kubernetes_deployment" "nginx" {
      metadata {
        name = "scalable-nginx"
        labels = {
          App = "ScalableNginx"
        }
      }

      spec {
        replicas = 1
        selector {
          match_labels = {
            App = "ScalableNginx"
          }
        }
        template {
          metadata {
            labels = {
              App = "ScalableNginx"
            }
          }
          spec {
            container {
              image = "markoshust/magento-nginx:1.18-2"
              name  = "magento-nginx"

              port {
                container_port = 81
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }

      }
    }

    resource "kubernetes_service" "nginx" {
      metadata {
        name = "nginx"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 81
          target_port = 81
        }

        type = "NodePort"

      }
    }

    resource "kubernetes_deployment" "magento2-php" {
      metadata {
        name = "scalable-magento2-php"
        labels = {
          App = "Scalablemagento2-php"
        }
      }

      spec {
        replicas = 1
        selector {
          match_labels = {
            App = "Scalablemagento2-php"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalablemagento2-php"
            }
          }
          spec {
            container {
              image = "markoshust/magento-php:7.4-fpm-0"
              name  = "magento2-php"

              port {
                container_port = 80
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }

      }
    }

    resource "kubernetes_service" "magento2-php" {
      metadata {
        name = "magento2-php"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 80
          target_port = 80
        }

        type = "NodePort"

      }
    }

    resource "kubernetes_deployment" "redis" {
      metadata {
        name = "scalable-redis"
        labels = {
          App = "Scalableredis"
        }
      }

      spec {
        replicas = 1
        selector {
          match_labels = {
            App = "Scalableredis"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalableredis"
            }
          }
          spec {
            container {
              image = "redis"
              name  = "redis"

              port {
                container_port = 80
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }

      }
    }

    resource "kubernetes_service" "redis" {
      metadata {
        name = "redis"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 80
          target_port = 80
        }

        type = "NodePort"

      }
    }

    resource "kubernetes_deployment" "varnish" {
      metadata {
        name = "scalable-varnish"
        labels = {
          App = "Scalablevarnish"
        }
      }

      spec {
        replicas = 1
        selector {
          match_labels = {
            App = "Scalablevarnish"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalablevarnish"
            }
          }
          spec {
            container {
              image = "varnish"
              name  = "varnish"

              port {
                container_port = 80
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }

      }
    }

    resource "kubernetes_service" "varnish" {
      metadata {
        name = "redis"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 80
          target_port = 80
        }

        type = "NodePort"

      }
    }

    resource "kubernetes_deployment" "mysqlpercona" {
      metadata {
        name = "scalable-mysqlpercona"
        labels = {
          App = "Scalablemysqlpercona"
        }
      }

      spec {
        replicas = 1
        selector {
          match_labels = {
            App = "Scalablemysqlperconah"
          }
        }
        template {
          metadata {
            labels = {
              App = "Scalablemysqlpercona"
            }
          }
          spec {
            container {
              image = "percona"
              name  = "percona"

              port {
                container_port = 3306
              }

              resources {
                limits {
                  cpu    = "0.5"
                  memory = "512Mi"
                }
                requests {
                  cpu    = "250m"
                  memory = "50Mi"
                }
              }
            }
          }
        }

      }
    }

    resource "kubernetes_service" "mysqlpercona" {
      metadata {
        name = "redis"
      }
      spec {
        selector = {
          App = kubernetes_deployment.nginx.spec.0.template.0.metadata[0].labels.App
        }
        port {
          node_port   = 30201
          port        = 3306
          target_port = 3306
        }

        type = "NodePort"

      }
    }


Thanks @javierruizjimenez for helping me through this.

Answering your question (“Does Terraform tell you everything was created, or what message do you get after applying?”):

Terraform init, plan, and apply all ran well; see the attached diagram.

I have deployed the versioned code you sent. I do have the following questions, please:

(1) How do I access the Magento container in AWS EKS via an endpoint URL?
(2) The idea of the project was to use Terraform for a Kubernetes deployment with an AWS RDS database and a containerized Magento CMS.

Thanks so much for your response.

Hi @topeawolwo

Great to see that you were able to deploy :smiley:

(1) How do I access the Magento container in AWS EKS via an endpoint URL?

I don’t have experience (yet) with AWS EKS endpoints. I believe you need to create a Kubernetes LoadBalancer, as shown in the AWS docs:

See: Network load balancing on Amazon EKS - Amazon EKS
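
I haven't tried this on EKS myself, but a sketch of such a Service in Terraform, reusing the ScalableNginx labels from your code (the service name and external port are illustrative), could look like this:

    resource "kubernetes_service" "nginx_lb" {
      metadata {
        name = "nginx-lb"
      }
      spec {
        selector = {
          App = "ScalableNginx"
        }
        port {
          port        = 80 # external port on the load balancer
          target_port = 81 # the nginx container listens on 81
        }
        type = "LoadBalancer"
      }
    }

Once the load balancer is provisioned, kubectl get service nginx-lb should show its external hostname in the EXTERNAL-IP column.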

(2) The idea of the project was to use Terraform for a Kubernetes deployment with an AWS RDS database and a containerized Magento CMS.

AWS RDS is great; you can find a tutorial that uses Terraform to create an AWS RDS MariaDB on my website.

Hi @javierruizjimenez

I want to say thank you for yesterday. I resolved that issue.

I am now having an issue with my vpc.tf and main.tf; Terraform reports “no matching VPC found”.

The main.tf file is:


    ######################################
    # Data sources to get VPC and subnets
    ######################################
    data "aws_vpc" "default" {
      default = true
    }

    data "aws_subnet_ids" "all" {
      vpc_id = data.aws_vpc.default.id
    }

    #############
    # RDS Aurora
    #############
    module "aurora" {
      source                              = "terraform-aws-modules/rds-aurora/aws"
      version                             = "~> 2.0"
      name                                = "aurora-example-mysql"
      engine                              = "aurora-mysql"
      engine_version                      = "5.7.12"
      subnets                             = data.aws_subnet_ids.all.ids
      vpc_id                              = data.aws_vpc.default.id
      replica_count                       = 0
      instance_type                       = "db.t2.medium"
      apply_immediately                   = true
      skip_final_snapshot                 = true
      db_parameter_group_name             = aws_db_parameter_group.aurora_db_57_parameter_group.id
      db_cluster_parameter_group_name     = aws_rds_cluster_parameter_group.aurora_57_cluster_parameter_group.id
      iam_database_authentication_enabled = true
      enabled_cloudwatch_logs_exports     = ["audit", "error", "general", "slowquery"]
      allowed_cidr_blocks                 = ["10.20.0.0/20", "20.20.0.0/20"]

      create_security_group = true
    }

    resource "aws_db_parameter_group" "aurora_db_57_parameter_group" {
      name        = "test-aurora-db-57-parameter-group"
      family      = "aurora-mysql5.7"
      description = "test-aurora-db-57-parameter-group"
    }

    resource "aws_rds_cluster_parameter_group" "aurora_57_cluster_parameter_group" {
      name        = "test-aurora-57-cluster-parameter-group"
      family      = "aurora-mysql5.7"
      description = "test-aurora-57-cluster-parameter-group"
    }

    ############################
    # Example of security group
    ############################
    resource "aws_security_group" "app_servers" {
      name_prefix = "app-servers-"
      description = "For application servers"
      vpc_id      = data.aws_vpc.default.id
    }

    resource "aws_security_group_rule" "allow_access" {
      type                     = "ingress"
      from_port                = module.aurora.this_rds_cluster_port
      to_port                  = module.aurora.this_rds_cluster_port
      protocol                 = "tcp"
      source_security_group_id = aws_security_group.app_servers.id
      security_group_id        = module.aurora.this_security_group_id
    }

    # IAM Policy for use with iam_database_authentication_enabled = true
    resource "aws_iam_policy" "aurora_mysql_policy_iam_auth" {
      name = "test-aurora-db-57-policy-iam-auth"

      policy = <<POLICY
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "rds-db:connect"
          ],
          "Resource": [
            "arn:aws:rds-db:us-east-2:123456789012:dbuser:${module.aurora.this_rds_cluster_resource_id}/jane_doe"
          ]
        }
      ]
    }
    POLICY
    }

The vpc.tf file is:

    variable "region" {
      default     = "us-east-2"
      description = "AWS region"
    }

    provider "aws" {
      version = ">= 2.28.1"
      region  = "us-east-2"
    }

    data "aws_availability_zones" "available" {}

    locals {
      cluster_name = "training-eks-${random_string.suffix.result}"
    }

    resource "random_string" "suffix" {
      length  = 8
      special = false
    }

    module "vpc" {
      source  = "terraform-aws-modules/vpc/aws"
      version = "2.6.0"

      name                 = "training-vpc"
      cidr                 = "10.0.0.0/16"
      azs                  = data.aws_availability_zones.available.names
      private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
      public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
      enable_nat_gateway   = true
      single_nat_gateway   = true
      enable_dns_hostnames = true

      tags = {
        "kubernetes.io/cluster/${local.cluster_name}" = "shared"
      }

      public_subnet_tags = {
        "kubernetes.io/cluster/${local.cluster_name}" = "shared"
        "kubernetes.io/role/elb"                      = "1"
      }

      private_subnet_tags = {
        "kubernetes.io/cluster/${local.cluster_name}" = "shared"
        "kubernetes.io/role/internal-elb"             = "1"
      }
    }

Please help me resolve this. Thanks.

Hi @topeawolwo

Great!!

Do me a big favor :pray: so I can keep helping: please format your code!! I don't have superpowers :man_superhero: like @apparentlymart.

It is really difficult to read code when it is not properly marked up for the forum software.

When posting code, start the block with a line containing ```terraform, put your code on the lines below, and end the block with a line containing just ```.

Example (the block below started with a line with ``` and ended with a line with ```):

    resource "random_string" "suffix" {
      length  = 8
      special = false
    }
    ....

I can see that you are using TWO VPCs, and I believe that you don't want that.

VPCs

  1. You are creating a NEW VPC using a module:
    module "vpc" {
      source  = "terraform-aws-modules/vpc/aws"
      version = "2.6.0"

      name                 = "training-vpc"
      cidr                 = "10.0.0.0/16"
      azs                  = data.aws_availability_zones.available.names
      private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
      public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
      enable_nat_gateway   = true
      single_nat_gateway   = true
      enable_dns_hostnames = true
      ....
    }
  2. And you are using a data source to get the Default VPC. The Default VPC is a VPC (a network) that AWS creates by itself in each region:
    data "aws_vpc" "default" {
      default = true
    }

    data "aws_subnet_ids" "all" {
      vpc_id = data.aws_vpc.default.id
    }

I suggest that you forget about the “data source to get the Default VPC” and write your whole Terraform plan using the VPC that is created by the “vpc” module.

Check AWS VPC Terraform module documentation to see module outputs:

https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/2.44.0#outputs

You will be using the VPC ID, and other outputs like the subnets.

For example, if you need the VPC ID, your code will use:

    module "aurora" {
      ...
      subnets = module.vpc.private_subnets # was data.aws_subnet_ids.all.ids
      vpc_id  = module.vpc.vpc_id          # was data.aws_vpc.default.id
      replica_count = 0
      ....
      create_security_group = true
    }

:warning: I consider that using modules is great, but if you didn't write the modules, please review security and understand everything that the modules and your own code do, e.g., create a dedicated RDS subnet group instead of using ALL VPC subnets.
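
For example, a dedicated DB subnet group built from only the private subnets of the "vpc" module (the resource name here is illustrative) could look like:

    # Sketch: restrict the database to the private subnets created by the "vpc" module
    resource "aws_db_subnet_group" "aurora" {
      name       = "aurora-example-mysql"
      subnet_ids = module.vpc.private_subnets # private subnets only, not all VPC subnets
    }

Passing only private subnets (as the aurora example above already does with subnets = module.vpc.private_subnets) keeps the database off the public subnets.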

And please… do me a big favor :pray: and format the code!

Thanks so much… I will format my code going forward. I do have a question: I got the infrastructure working, thank you. I want to run Magento CMS on the created cluster. Do you have resources or documentation, please? Thanks.