I am getting the below error with my Terraform config.
Error: Post "https://35.224.178.141/api/v1/namespaces": x509: certificate signed by unknown authority
on main.tf line 66, in resource "kubernetes_namespace" "example":
66: resource "kubernetes_namespace" "example" {
Here is my config; all I want to do for now is create a cluster, auth with it, and create a namespace.
I have searched everywhere and can't see where anyone else has run into this problem.
It is most likely something stupid I am doing. I thought this would be relatively simple, but it's turning out to be a pain. I don't want to have to wrap gcloud commands in my build script.
provider "google" {
project = var.project
region = var.region
zone = var.zone
credentials = "google-key.json"
}
terraform {
backend "gcs" {
bucket = "tf-state-bucket-devenv"
prefix = "terraform"
credentials = "google-key.json"
}
}
resource "google_container_cluster" "my_cluster" {
name = var.kube-clustername
location = var.zone
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = var.kube-poolname
location = var.zone
cluster = google_container_cluster.my_cluster.name
node_count = var.kube-nodecount
node_config {
preemptible = var.kube-preemptible
machine_type = "n1-standard-1"
disk_size_gb = 10
disk_type = "pd-standard"
metadata = {
disable-legacy-endpoints = "true",
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
data "google_client_config" "provider" {}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
token = "{data.google_client_config.provider.access_token}"
}
resource "kubernetes_namespace" "example" {
metadata {
name = "my-first-namespace"
}
}
TL;DR
Change the provider definition to:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.provider.access_token
}
What changed?
The wrapping quotes and braces ("{ ... }") were deleted from the cluster_ca_certificate and token values, so they are evaluated as expressions rather than literal strings.
I included the explanation below.
I used your original Terraform file and received the same error as you. I then modified (simplified) your Terraform file and added output definitions:
resource "google_container_cluster" "my_cluster" {
OMMITED
}
data "google_client_config" "provider" {}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
token = "{data.google_client_config.provider.access_token}"
}
output "cert" {
value = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
}
output "token" {
value = "{data.google_client_config.provider.access_token}"
}
Running the above file showed:
$ terraform apply --auto-approve
data.google_client_config.provider: Refreshing state...
google_container_cluster.my_cluster: Creating...
google_container_cluster.my_cluster: Creation complete after 2m48s [id=projects/PROJECT-NAME/locations/europe-west3-c/clusters/gke-terraform]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
cert = {base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}
token = {data.google_client_config.provider.access_token}
As you can see, the values were interpreted by the provider as literal strings and were not evaluated to get the needed values. To fix that, you will need to change the provider definition to:
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.provider.access_token
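As an aside, the difference between the three forms in Terraform 0.12+ is easiest to see with a tiny, self-contained sketch (the local value here is hypothetical, just for illustration):

locals {
  example = "hello"
}

output "literal" {
  value = "{local.example}"   # no "$", so this is the literal string {local.example}
}

output "interpolated" {
  value = "${local.example}"  # interpolates, but the wrapping quotes are redundant in 0.12+
}

output "expression" {
  value = local.example       # preferred: the expression itself
}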
Running $ terraform apply --auto-approve once again:
data.google_client_config.provider: Refreshing state...
google_container_cluster.my_cluster: Creation complete after 3m18s [id=projects/PROJECT-NAME/locations/europe-west3-c/clusters/gke-terraform]
kubernetes_namespace.example: Creating...
kubernetes_namespace.example: Creation complete after 0s [id=my-first-namespace]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
cert = -----BEGIN CERTIFICATE-----
MIIDKzCCAhOgAwIBAgIRAO2bnO3FU6HZ0T2u3XBN1jgwDQYJKoZIhvcNAQELBQAw
<--OMITTED-->
a9Ybow5tZGu+fqvFHnuCg/v7tln/C3nVuTbwa4StSzujMsPxFv4ONVl4F4UaGw0=
-----END CERTIFICATE-----
token = ya29.a0AfH6SMBx<--OMITTED-->fUvCeFg
As you can see, the namespace was created. You can check it by running:
$ gcloud container clusters get-credentials CLUSTER-NAME --zone=ZONE
$ kubectl get namespace my-first-namespace
Output:
NAME STATUS AGE
my-first-namespace Active 3m14s
Additional resources:
Terraform.io: Docs: Configuration: Outputs
Terraform.io: Docs: Configuration: Variables
Related
I am getting the below error while creating a Firewall Manager policy for a CloudFront distribution.
The documentation provides little detail on how to deploy a policy for a CloudFront distribution, which is a global resource.
This is the error I get while executing my code:
aws_fms_policy.xxxx: Creating...
╷
│ Error: error creating FMS Policy: InternalErrorException:
│
│ with aws_fms_policy.xxxx,
│ on r_wafruleset.tf line 1, in resource "aws_fms_policy" "xxxx":
│ 1: resource "aws_fms_policy" "xxxx" {
│
╵
Releasing state lock. This may take a few moments...
main.tf looks like this with provider information:
provider "aws" {
region = "ap-southeast-2"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
r_fms.tf looks like this:
resource "aws_fms_policy" "xxxx" {
name = "xxxx"
exclude_resource_tags = true
resource_tags = var.exclude_tags
remediation_enabled = true
provider = aws.us_east_1
include_map {
account = ["123123123"]
}
resource_type = "AWS::CloudFront::Distribution"
security_service_policy_data {
type = "WAFV2"
managed_service_data = jsonencode(
{
type = "WAFV2"
defaultAction = {
type = "ALLOW"
}
overrideCustomerWebACLAssociation = false
postProcessRuleGroups = []
preProcessRuleGroups = [
{
excludeRules = []
managedRuleGroupIdentifier = {
vendorName = "AWS"
managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
version = true
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
{
excludeRules = []
managedRuleGroupIdentifier = {
managedRuleGroupName = "AWSManagedRulesWindowsRuleSet"
vendorName = "AWS"
version = null
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
]
sampledRequestsEnabledForDefaultActions = true
})
}
}
I have tried to follow this thread, but I am still getting the above error:
https://github.com/hashicorp/terraform-provider-aws/issues/17821
Terraform Version:
Terraform v1.1.7
on windows_386
+ provider registry.terraform.io/hashicorp/aws v4.6.0
There is an open issue in the Terraform AWS provider for this.
A workaround for this issue is to remove the 'version' attribute.
AWS has recently introduced versioning for WAF policies managed by Firewall Manager, which is what is causing this weird error.
Though a permanent fix is in progress (refer to my earlier post), we can remove the attribute to avoid this error.
Another approach is to use the new attribute versionEnabled = true in case you want versioning enabled, as sketched below.
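Applied to the config in the question, the first rule group identifier would then look roughly like this (a sketch of the workaround only, not the full policy):

managedRuleGroupIdentifier = {
  vendorName           = "AWS"
  managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
  # 'version' removed as the workaround; to opt in to versioning instead,
  # the new attribute mentioned above can reportedly be set:
  # versionEnabled = true
}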
I'm trying to deploy a cluster with self-managed node groups. No matter what config options I use, I always end up with the following error:
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refusedwith module.eks-ssp.kubernetes_config_map.aws_auth[0]on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":resource "kubernetes_config_map" "aws_auth" {
The .tf file looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 20
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 2
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
Providers:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
Based on the example provided in the GitHub repo [1], my guess is that the provider configuration blocks are missing for this to work as expected. Looking at the code provided in the question, it seems that the following needs to be added:
data "aws_region" "current" {}
data "aws_eks_cluster" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
region = data.aws_region.current.id
alias = "default" # this should match the named profile you used if at all
}
provider "kubernetes" {
experiments {
manifest_resource = true
}
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
If helm is also required, I think the following block [2] needs to be added as well:
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
}
Provider argument reference for kubernetes and helm is in [3] and [4] respectively.
[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47
[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55
[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference
[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference
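As a side note, if the token from aws_eks_cluster_auth turns out to be too short-lived for long applies, the kubernetes provider also supports exec-based credentials. The kubernetes provider block could instead look roughly like this; a sketch only, assuming the AWS CLI is installed on the machine running Terraform (the api_version may need to be client.authentication.k8s.io/v1alpha1 on older client versions):

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  # Fetch a fresh token on every run instead of using data.aws_eks_cluster_auth
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
  }
}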
The above answer from Marko E seems to fix this; I just ran into this issue. After applying the above code, all together in a separate providers.tf file, Terraform now makes it past the error. I will post later as to whether the deployment makes it fully through.
For reference, I was able to go from 65 resources created down to 42 resources created before I hit this error. This was using the exact best-practice / sample configuration recommended at the top of the README from AWS Consulting here: https://github.com/aws-samples/aws-eks-accelerator-for-terraform
In my case, I was trying to deploy to a Kubernetes cluster (GKE) using Terraform. I replaced the kubeconfig path with the kubeconfig file's absolute path.
From
provider "kubernetes" {
config_path = "~/.kube/config"
#config_context = "my-context"
}
TO
provider "kubernetes" {
config_path = "/Users/<username>/.kube/config"
#config_context = "my-context"
}
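As a related option (an assumption on my part, not part of the original fix), Terraform's built-in pathexpand() function expands the ~ to the current user's home directory, which avoids hard-coding a user-specific absolute path:

provider "kubernetes" {
  # pathexpand resolves "~" to the home directory of the user running Terraform
  config_path = pathexpand("~/.kube/config")
  #config_context = "my-context"
}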
I can't create a VM on GCP using Terraform. I want to attach a KMS key via the attribute "kms_key_self_link", but when the machine is being created it hangs, and after about 2 minutes of waiting (in every case) a 503 error appears. I'm going to share my script; it is worth saying that with the attribute "kms_key_self_link" disabled, the script runs OK.
data "google_compute_image" "tomcat_centos" {
name = var.vm_img_name
}
data "google_kms_key_ring" "keyring" {
name = "keyring-example"
location = "global"
}
data "google_kms_crypto_key" "cmek-key" {
name = "crypto-key-example"
key_ring = data.google_kms_key_ring.keyring.self_link
}
data "google_project" "project" {}
resource "google_kms_crypto_key_iam_member" "key_user" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/owner"
member = "serviceAccount:service-${data.google_project.project.number}#compute-system.iam.gserviceaccount.com"
}
resource "google_compute_instance" "vm-hsbc" {
name = var.vm_name
machine_type = var.vm_machine_type
zone = var.zone
allow_stopping_for_update = true
can_ip_forward = false
deletion_protection = false
boot_disk {
kms_key_self_link = data.google_kms_crypto_key.cmek-key.self_link
initialize_params {
type = var.disk_type
#GCP-CE-CTRL-22
image = data.google_compute_image.tomcat_centos.self_link
}
}
network_interface {
network = var.network
}
#GCP-CE-CTRL-2-...-5, 7, 8
service_account {
email = var.service_account_email
scopes = var.scopes
}
#GCP-CE-CTRL-31
shielded_instance_config {
enable_secure_boot = true
enable_vtpm = true
enable_integrity_monitoring = true
}
}
And this is the complete error:
Error creating instance: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: '5C54C97EB5265.AA25590.F4046F68'), backendError
I solved this issue by granting the Compute Engine service account the encrypter/decrypter role through this resource:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/cloudkms.cryptoKeyEncrypter"
members = [
"serviceAccount:service-${data.google_project.gcp_project.number}#compute-system.iam.gserviceaccount.com",
]
}
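One follow-up worth noting (an assumption on my part, not something stated in the answer above): Terraform does not know the instance depends on that IAM binding, so an explicit depends_on on the instance keeps the boot disk from being created before the key grant exists:

resource "google_compute_instance" "vm-hsbc" {
  # ... same arguments as in the question ...

  # Hypothetical addition: create the encrypter/decrypter grant before the disk
  depends_on = [google_kms_crypto_key_iam_binding.key_iam_binding]
}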
I am using Terraform to create an Elastic Beanstalk environment and application. The supported-platforms page on the AWS docs does not list the solution stack name I am using for the current time frame [8th September 2020].
Consequently, I keep getting the error No Solution Stack named '64bit Amazon Linux 2 v3.1.2 running Go 1' found. when I try to run terraform apply.
Also, I am using a module provided by Cloud Posse to get my infrastructure up and running, but I doubt that Cloud Posse is at fault here. Any help is highly appreciated. Thank you.
Update: here's the Terraform source code to create the Elastic Beanstalk resources.
provider "aws" {
  region = var.region
}

module "vpc" {
  source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=master"

  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  cidr_block = "172.16.0.0/16"
}

module "subnets" {
  source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"

  availability_zones   = ["us-east-1a"]
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  vpc_id               = module.vpc.vpc_id
  igw_id               = module.vpc.igw_id
  cidr_block           = module.vpc.vpc_cidr_block
  nat_gateway_enabled  = false
  nat_instance_enabled = false
}

module "elastic_beanstalk_application" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=master"

  namespace   = var.namespace
  stage       = var.stage
  name        = var.name
  description = "Sentinel Staging"
}

module "elastic_beanstalk_environment" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=master"

  namespace   = var.namespace
  stage       = var.stage
  name        = var.name
  description = "Test elastic_beanstalk_environment"
  region      = var.region

  availability_zone_selector         = "Any 2"
  dns_zone_id                        = var.dns_zone_id
  elastic_beanstalk_application_name = module.elastic_beanstalk_application.elastic_beanstalk_application_name

  instance_type           = "t3.small"
  autoscale_min           = 1
  autoscale_max           = 2
  updating_min_in_service = 0
  updating_max_batch      = 1

  loadbalancer_type       = "application"
  vpc_id                  = module.vpc.vpc_id
  loadbalancer_subnets    = module.subnets.public_subnet_ids
  application_subnets     = module.subnets.public_subnet_ids
  allowed_security_groups = [module.vpc.vpc_default_security_group_id]

  // https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html
  // https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.docker
  solution_stack_name = "64bit Amazon Linux 2 v3.1.2 running Go 1"

  additional_settings = [
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "DB_URI"
      value     = var.db_uri
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "USER"
      value     = var.user
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "PASSWORD"
      value     = var.password
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "PORT"
      value     = "5000"
    }
  ]
}
And here's the exact error taken from the console when I try terraform apply with suitable variables passed as command line arguments.
module.subnets.aws_network_acl.public[0]: Refreshing state... [id=acl-0a62395908205c288]
module.subnets.data.aws_availability_zones.available[0]: Reading... [id=2020-09-07 06:34:08.002137232 +0000 UTC]
module.subnets.data.aws_availability_zones.available[0]: Read complete after 0s [id=2020-09-07 06:34:12.416467455 +0000 UTC]
module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: Creating...
Error: InvalidParameterValue: No Solution Stack named '64bit Amazon Linux 2 v3.1.2 running Go 1' found.
status code: 400, request id: 631d5204-f667-4355-aa65-d617536a00b7
on .terraform/modules/elastic_beanstalk_environment/main.tf line 505, in resource "aws_elastic_beanstalk_environment" "default":
505: resource "aws_elastic_beanstalk_environment" "default" {
I am using Terraform to create a NAT gateway using this module:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/1.1.3
Using this code:
module "nat" {
source = "GoogleCloudPlatform/nat-gateway/google"
region = "${var.gcloud-region}"
network = "${google_compute_network.vpc-network.name}"
subnetwork = "${google_compute_subnetwork.vpc-subnetwork-public.name}"
machine_type = "${var.vm-type-nat-gateway}"
}
Other snippets:
variable "gcloud-region" { default = "europe-west1" }
variable "vm-type-nat-gateway" { default = "n1-standard-2" }

resource "google_compute_network" "vpc-network" {
  name                    = "foobar-vpc-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "vpc-subnetwork-public" {
  name                     = "foobar-vpc-subnetwork-public"
  ip_cidr_range            = "10.0.1.0/24"
  network                  = "${google_compute_network.vpc-network.self_link}"
  region                   = "${var.gcloud-region}"
  private_ip_google_access = false
}
================
module.nat.google_compute_route.nat-gateway: 1 error(s) occurred:
module.nat.google_compute_route.nat-gateway: element: element() may not be used with an empty list in:
${element(split("/", element(module.nat-gateway.instances[0], 0)),
10)}
The above error comes up and the whole Terraform run stops; I am then unable to run
terraform apply or terraform destroy with any changes.
Any idea what could be causing this?