Setting Up an AWS EKS Cluster (Hands-On)
EKS Cluster Creation

Variable Settings

The cluster is configured with the following variables:
# Basic Information
account_alias = "id"
product       = "eks"

# Cluster information
cluster_version = "1.30"
release_version = "1.30.4-20240917"

# Service CIDR
service_ipv4_cidr = "172.20.0.0/16"

# Addon information
# https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
coredns_version = "v1.11.1-eksbuild.9"
# https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
kube_proxy_version = "v1.30.0-eksbuild.3"
# https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
vpc_cni_version = "v1.18.3-eksbuild.1"
# https://github.com/kubernetes-sigs/aws-ebs-csi-driver
ebs_csi_driver_version = "v1.34.0-eksbuild.1"
# https://github.com/aws/eks-pod-identity-agent
pod_identity_agent_version = "v1.3.2-eksbuild.2"

# Enable Public Access
enable_public_access = true

# Fargate Information
fargate_enabled      = false
fargate_profile_name = ""

# Node Group configuration
node_group_configurations = [
  {
    name                = "ondemand_1_30_4-20240917"
    spot_enabled        = false
    release_version     = "1.30.4-20240917"
    disk_size           = 20
    ami_type            = "AL2023_x86_64_STANDARD"
    node_instance_types = ["t3.large"]
    node_min_size       = 2
    node_desired_size   = 2
    node_max_size       = 2
    labels = {
      "cpu_chip" = "intel"
    }
  },
  {
    name                = "spot_1_30_4-20240917"
    spot_enabled        = true
    disk_size           = 20
    release_version     = "1.30.4-20240917"
    ami_type            = "AL2023_x86_64_STANDARD"
    node_instance_types = ["t3.large"]
    node_min_size       = 2
    node_desired_size   = 2
    node_max_size       = 10
    labels = {
      "cpu_chip" = "intel"
    }
  },
]

additional_security_group_ingress = [
  {
    from_port   = 443
    to_port     = 443
    protocol    = "TCP"
    cidr_blocks = ["10.10.0.0/16"]
  }
]

# Cluster Access
aws_auth_master_users_arn = [
  "arn:aws:iam::<account-id>:user/xxxxx"
]
aws_auth_master_roles_arn = [
  "${data.terraform_remote_state.iam.outputs.demo_arn}"
]
aws_auth_viewer_roles_arn = [
]

# Specified KMS ARNs accessed by ExternalSecrets
external_secrets_access_kms_arns = [
  "${var.aws_kms_arn}"
]

# Specified SSM ARNs accessed by ExternalSecrets
external_secrets_access_ssm_arns = [
  "*"
]

# Specified SecretsManager ARNs accessed by ExternalSecrets
external_secrets_access_secretsmanager_arns = [
  "${data.terraform_remote_state.secretsmanager.outputs.aws_secretsmanager_id}"
]

Deployment

Running terraform plan previews the resources to be created (output truncated):

  # module.eks.kubernetes_config_map.aws_auth will be created
  + resource "kubernetes_config_map" "aws_auth" {
      + data = (known after apply)
      + id   = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "aws-auth"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }
Plan: 56 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ aws_iam_openid_connect_provider_arn = (known after apply)
+ aws_iam_openid_connect_provider_url = (known after apply)
+ aws_security_group_eks_cluster_default_id = (known after apply)
+ aws_security_group_eks_cluster_id = (known after apply)
+ aws_security_group_eks_node_group_id = (known after apply)
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
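As that note says, saving the plan to a file pins the exact set of actions that a later apply will take; a minimal workflow sketch (file name tfplan is arbitrary):

```shell
# Save the plan, review it, then apply exactly that saved plan
terraform plan -out=tfplan
terraform apply tfplan
```

This avoids the race where resources change between an unsaved plan and the apply.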
After reviewing the plan, run terraform apply. The addons take a minute or two to provision:

module.eks.aws_eks_addon.ebs_csi: Still creating... [30s elapsed]
module.eks.aws_eks_addon.ebs_csi: Still creating... [40s elapsed]
module.eks.aws_eks_addon.ebs_csi: Still creating... [50s elapsed]
module.eks.aws_eks_addon.ebs_csi: Creation complete after 55s [id=eksdapne2-aolu:aws-ebs-csi-driver]
Apply complete! Resources: 56 added, 0 changed, 0 destroyed.
Outputs:
aws_iam_openid_connect_provider_arn = "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.ap-northeast-2.amazonaws.com/id/<oidc-id>"
aws_iam_openid_connect_provider_url = "oidc.eks.ap-northeast-2.amazonaws.com/id/<oidc-id>"
aws_security_group_eks_cluster_default_id = "sg-0f11de133a8bd16f3"
aws_security_group_eks_cluster_id = "sg-030b9ed1097d127c9"
aws_security_group_eks_node_group_id = "sg-0d95d73be64aa27d8"

Follow-up Tasks

With the cluster created, update the local kubeconfig and verify access:

aws eks update-kubeconfig --name eksdapne2-aolu
> Updated context arn:aws:eks:ap-northeast-2:<account-id>:cluster/eksdapne2-aolu in /Users/test/.kube/config

kubectl get nodes
> NAME                                              STATUS   ROLES    AGE     VERSION
  ip-10-20-109-16.ap-northeast-2.compute.internal   Ready    <none>   6m6s    v1.30.4-eks-a737599
  ip-10-20-80-110.ap-northeast-2.compute.internal   Ready    <none>   6m12s   v1.30.4-eks-a737599
  ip-10-20-88-229.ap-northeast-2.compute.internal   Ready    <none>   6m13s   v1.30.4-eks-a737599
  ip-10-20-98-126.ap-northeast-2.compute.internal   Ready    <none>   6m7s    v1.30.4-eks-a737599
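To confirm that the cpu_chip label from node_group_configurations landed on the nodes, and to see which nodes came from the spot group, the label values can be printed as extra columns (eks.amazonaws.com/capacityType is set automatically on managed node groups, with values ON_DEMAND or SPOT):

```shell
# Show the custom label and the managed-node-group capacity type per node
kubectl get nodes -L cpu_chip -L eks.amazonaws.com/capacityType
```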
kubectl get ns
> NAME              STATUS   AGE
  default           Active   27m
  kube-node-lease   Active   27m
  kube-public       Active   27m
  kube-system       Active   27m
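The addon versions pinned in the variables can also be cross-checked against what the cluster is actually running; a quick sketch using the cluster name from the outputs above:

```shell
# List installed addons, then inspect the active CoreDNS version
aws eks list-addons --cluster-name eksdapne2-aolu
aws eks describe-addon --cluster-name eksdapne2-aolu \
  --addon-name coredns --query 'addon.addonVersion' --output text
```

The reported version should match coredns_version ("v1.11.1-eksbuild.9") set earlier.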