I have a Deployment Manager template that creates a bunch of network assets and VMs. It runs fine with no errors reported, however no VPC peerings are ever created. It works fine if I create a peering via the console or on the CLI via gcloud.
Peering fails (with no error msg):
# Create the required routes to talk to prod project
- name: mytest-network
  type: compute.v1.network
  properties:
    name: mytest
    autoCreateSubnetworks: false
    peerings:
    - name: mytest-to-prod
      network: projects/my-prod-project/global/networks/default
      autoCreateRoutes: true
Peering Works:
$ gcloud compute networks peerings create mytest-to-prod --project=myproject --network=default --peer-network=projects/my-prod-project/global/networks/default --auto-create-routes
The Peering cannot be done at network creation time as per the API reference.
First the network needs to be created and once it has been created successfully, the addPeering method needs to be called.
This explains why your YAML definition created the network but not the peering, and why it worked after running the gcloud command, which calls the addPeering method.
It is possible to create the networks and do the peering in a single YAML file by using Deployment Manager actions:
resources:
- name: mytest-network1
  type: compute.v1.network
  properties:
    name: mytest1
    autoCreateSubnetworks: false
- name: mytest-network2
  type: compute.v1.network
  properties:
    name: mytest2
    autoCreateSubnetworks: false
- name: addPeering2-1
  action: gcp-types/compute-v1:compute.networks.addPeering
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - mytest-network1
    - mytest-network2
  properties:
    network: $(ref.mytest-network2.name)
    name: vpc-2-1
    autoCreateRoutes: true
    peerNetwork: $(ref.mytest-network1.selfLink)
- name: addPeering1-2
  action: gcp-types/compute-v1:compute.networks.addPeering
  metadata:
    runtimePolicy:
    - CREATE
    dependsOn:
    - mytest-network1
    - mytest-network2
  properties:
    network: $(ref.mytest-network1.name)
    name: vpc-1-2
    autoCreateRoutes: true
    peerNetwork: $(ref.mytest-network2.selfLink)
You can copy-paste the YAML above and create the deployment, and the peering should be done. The actions use the dependsOn option to make sure the networks are created first; when the deployment is deleted, the peerings are removed by calling the removePeering method and then the networks are deleted.
Note: Deployment Manager actions are not yet documented, but there are several examples in the GoogleCloudPlatform/deploymentmanager-samples repository, such as this and this.
Since it works as expected from gcloud, please update your YAML file to use "peerings[].network" when specifying the list of peered network resources.
I just started to study GCP deployment-manager and I'm creating a file to create one network and one subnetwork. I did a test using 2 different files (one for each) and it worked fine. Now, when I combine the creation of the network and the subnetwork, there are 2 problems:
During creation, when the network finishes creating and the subnetwork step starts, it looks like the network info is not yet available and I get a "resource not found" error. But if I run an update again, the subnet is created.
During the delete, deployment-manager tries to delete the network before the subnetwork and I get the message "resource is in use, you can't delete it".
So, I would like to get some help here with best practices about this. Many thanks.
My config:
main.yml
imports:
- path: network.jinja
- path: subnetwork.jinja
resources:
- name: network
  type: network.jinja
- name: subnetwork
  type: subnetwork.jinja
network.jinja
resources:
- type: gcp-types/compute-v1:networks
  name: network-{{ env["deployment"] }}
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: false
subnetwork.jinja
resources:
- type: gcp-types/compute-v1:subnetworks
  name: subnetwork-{{ env["deployment"] }}
  properties:
    region: us-central1
    network: https://www.googleapis.com/compute/v1/projects/XXXXXXXX/global/networks/network-{{ env["deployment"] }}
    ipCidrRange: 10.10.10.0/24
    privateIpGoogleAccess: false
It is likely your issue occurred because Deployment Manager didn't recognize that there were dependencies between your resources. Here is a working YAML that I used:
resources:
- type: gcp-types/compute-v1:networks
  name: network-mynet
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: false
- type: gcp-types/compute-v1:subnetworks
  name: subnetwork-mynet
  properties:
    region: us-central1
    network: $(ref.network-mynet.selfLink)
    ipCidrRange: 10.10.10.0/24
    privateIpGoogleAccess: false
I believe that the major difference is that in this example, the network element within the subnetworks definition uses a Deployment Manager reference. By leveraging this technique, we have more of a declarative solution and the relationships between resources can be deduced.
I'm using a .NET Core Web API with the Dockerfile below:
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "DummyService.dll"]
In my CloudFormation template, the ECS part looks like this:
dummyWebApiEcsTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Ref AWS::StackName
    TaskRoleArn: !GetAtt dummyWebApiIamRole.Arn
    ContainerDefinitions:
      - Name: !Ref AWS::StackName
        Image: MY IMAGE URL
        DnsSearchDomains:
          - !Join [".", [{"Fn::ImportValue": !Sub "${accountStackName}-${AWS::Region}-envName"}, "connected", !If [chinaPartition, "TEST", "CORP"], "cloud"]]
        LogConfiguration:
          LogDriver: splunk
          Options:
            splunk-token: {"Fn::ImportValue": !Sub "${splunkHECStackName}-${AWS::Region}-SplunkHECToken"}
            splunk-url: "http://splunk-forwarder:8088"
            splunk-insecureskipverify: True
            tag: !Ref AWS::StackName
            splunk-format: json
            splunk-source: !Ref AWS::StackName
            splunk-sourcetype: AWS:ECS
        EntryPoint: []
        PortMappings:
          - ContainerPort: 5000
        Command: []
        Cpu: 0
        Environment:
          - Name: BindAddress
            Value: http://0.0.0.0:5000
          - Name: MinLogLevel
            Value: !If [isProduction, "Information", "Debug"]
        Ulimits: []
        DnsServers: []
        MountPoints: []
        DockerSecurityOptions: []
        Memory: 512
        VolumesFrom: []
        Essential: true
        ExtraHosts: []
        ReadonlyRootFilesystem: false
        DockerLabels: {}
        Privileged: false
dummyEcsService:
  Type: AWS::ECS::Service
  DependsOn:
    - dummyWebApiIamRole
    - dummyInternalAlb
    - dummyAlbTargetGroup
  Properties:
    Cluster:
      Fn::ImportValue: !Sub "cld-core-ecs-${AWS::Region}-ECSCluster"
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 50
    DesiredCount: 2
    LoadBalancers:
      - ContainerName: !Ref AWS::StackName
        ContainerPort: 5000
        TargetGroupArn: !Ref dummyAlbTargetGroup
    PlacementStrategies:
      - Type: spread
        Field: attribute:ecs.availability-zone
    TaskDefinition: !Ref dummyWebApiEcsTaskDefinition
    ServiceName: !Ref AWS::StackName
    Role: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
The deployment couldn't finish, and I can see this error in the ECS Service Events tab:
service cld-dummy-test was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
I eventually got this figured out. The error message below indicates that there are no EC2 instances in this cluster, and hence no container can be started. We are not using Fargate.
service cld-dummy-test was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
To register an EC2 instance to a cluster, you need to follow this AWS article:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Please be aware that the EC2 instance you start needs to have the user data below in order for it to be registered:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
Once the above is completed, you shouldn't see the error about "no container" any more. However, if you are like me and have the Splunk logging section in the template, you will hit a different issue that says something like "no container can be used for the task because it is missing an attribute". This is quite a vague message, and the attribute can be anything that is listed at the bottom of your task definition page.
In my case it was the Splunk logging: the splunk log driver needs to be made available on the EC2 instance. I later found out that we don't need Splunk anymore, so I simply removed the splunk section. But if you do want to keep it, you probably need to add the line below to your user data (it also goes into /etc/ecs/ecs.config):
ECS_AVAILABLE_LOGGING_DRIVERS=["splunk","awslogs"]
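For illustration, if you manage the container instances with CloudFormation like the rest of the template above, a minimal sketch of wiring both settings into the instance user data could look like the following; the resource name, AMI, instance type and instance profile are placeholders, not values from the original stack:
dummyEcsLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-xxxxxxxx              # placeholder: an ECS-optimized AMI for your region
    InstanceType: t3.medium            # placeholder
    IamInstanceProfile: !Ref dummyEcsInstanceProfile   # placeholder: profile carrying the ECS instance role
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # Register the instance with the cluster and allow the splunk/awslogs log drivers
        echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
        echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["splunk","awslogs"]' >> /etc/ecs/ecs.config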
I hope this helps someone.
AWS ECS has two launch types:
Fargate
EC2
With Fargate you cannot access the underlying instances; with the EC2 launch type you manage the container instances yourself.
So a possible cause of the issue is the launch type configuration: if there is no capacity for the chosen launch type, you are not able to spin up the task. From the ECS dashboard you can choose the launch type and also the task definition.
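As a purely illustrative fragment (not taken from the original template), the launch type can also be made explicit on a CloudFormation service definition like the one in the question:
dummyEcsService:
  Type: AWS::ECS::Service
  Properties:
    LaunchType: EC2        # or FARGATE; must match the capacity available in the cluster
    # ... remaining service properties as in the question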
I was also having the same error, but I was using ecs-cli to create the cluster, tasks and service, so manually registering the EC2 instance to the cluster had already been done (as suggested by sheepinwild's answer).
What solved the issue for us was making sure the IAM role assigned to the instance had the AWS managed policy AmazonEC2ContainerServiceforEC2Role attached. I only discovered this because we had another ECS instance running successfully that I could compare against. If you're using ecs-cli, this is the role you pass like so: ecs-cli up --instance-role HERE. Alternatively, you can also pass --capability-iam and that will create a new role with the correct policies and assign it to your instance for you. More info is in the AWS KB for ecs-cli.
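For reference, if the container instances are defined in CloudFormation rather than via ecs-cli, a rough sketch of a role carrying that managed policy might look like this (the resource names are placeholders and not part of the original template):
dummyEcsInstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
dummyEcsInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - !Ref dummyEcsInstanceRole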
I'm planning to move GCP service creation entirely onto Deployment Manager. But based on the documentation, I cannot see any option for making the nodes created in the cluster preemptible.
I am hoping that there is a way that is just not written in the documentation, as in my experience there are often options that are not documented.
Below is the jinja template for it:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    zone: asia-east2-a
    cluster:
      name: practice-gke-clusters
      network: $(ref.practice-gke-network.selfLink)
      subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)
      initialNodeCount: 1
      loggingService: logging.googleapis.com
      monitoringService: monitoring.googleapis.com
      nodeConfig:
        oauthScopes:
        - https://www.googleapis.com/auth/compute
        - https://www.googleapis.com/auth/devstorage.read_only
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring
Preemptible VMs are in the Beta stage at Google Kubernetes Engine (GKE). As per the documentation, it seems you need to set the preemptible value to true in the node configuration of the deployment script, such as this.
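For illustration, a minimal sketch based on the template in the question; the only addition is the preemptible flag under nodeConfig, and it assumes the field is accepted by the API version you deploy with:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    zone: asia-east2-a
    cluster:
      name: practice-gke-clusters
      initialNodeCount: 1
      nodeConfig:
        preemptible: true   # request preemptible VMs for the default node pool
        oauthScopes:
        - https://www.googleapis.com/auth/compute
        - https://www.googleapis.com/auth/devstorage.read_only
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring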
I'm trying to create a GKE regional cluster (a beta feature) with GCP Deployment Manager.
But I got an error. Is there any way to use GKE beta features (including regional clusters) with Deployment Manager?
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in
Operation [operation-1525054837836-56b077fdf48e0-a571296c-604523fb]:
errors:
- code: RESOURCE_ERROR
location: /deployments/test-cluster1/resources/source-cluster
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"v1 API cannot be used to access GKE regional clusters. See https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta for more information.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/project_id/zones/us-central1/clusters","httpMethod":"POST"}}'
The error message links to this GCP help page:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta
I configured things as described there, but the error still appears.
My Deployment Manager YAML file looks like this:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1
    cluster:
      name: source
      initialNodeCount: 3
Yet, a zonal cluster works completely fine, so I think it's related to the usage of the container v1beta API in the deployment-manager commands. The following zonal configuration works:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
Thanks.
The error message you are receiving appears to be related to the fact that you are attempting to use a beta feature, but you are specifying the Deployment Manager resource as using the v1 API (i.e. container.v1.cluster). This means there's an inconsistency between the beta resource you are trying to create and the specified resource type.
I've had a look into this and discovered that the ability to add regional clusters via Deployment Manager is a very recent addition to Google Cloud Platform as detailed in this public feature request which has only recently been implemented.
It seems you would need to specify the API type as 'gcp-types/container-v1beta1:projects.locations.clusters' for this to work, and rather than using the 'zone' or 'region' key in the YAML, you would instead use a parent property that includes the location.
So your YAML would look something like this (replace PROJECT_ID with your own).
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/us-central1
    cluster:
      name: source
      initialNodeCount: 3
I'm using a slightly customized Terraform configuration to generate my Kubernetes cluster on AWS. The configuration includes an EFS instance attached to the cluster nodes and master. In order for Kubernetes to use this EFS instance for volumes, my Kubernetes YAML needs the id and endpoint/domain of the EFS instance generated by Terraform.
Currently, my Terraform outputs the EFS id and DNS name, and I need to manually edit my Kubernetes YAML with these values after terraform apply and before I kubectl apply the YAML.
How can I automate passing these Terraform output values to Kubernetes?
I don't know what you mean by a YAML to set up a Kubernetes cluster in AWS; but then, I've always set up my AWS clusters using kops. Additionally, I don't understand why you would want to mount an EFS instance to the master and/or nodes instead of to the containers.
But in direct answer to your question: you could write a script that dumps your Terraform outputs into a Helm values file and use that to generate the k8s config (see the sketch below).
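For illustration only, here is a minimal sketch of that idea, assuming a hypothetical values.yaml produced by a wrapper script around terraform output, plus a chart template that consumes it; the file names and keys are made up for the example:
# values.yaml - written by a wrapper script from `terraform output` (hypothetical keys)
efs:
  id: fs-12345678
  dns: fs-12345678.efs.us-east-1.amazonaws.com
# templates/deployment.yaml (excerpt) - reads the generated values into the container environment
env:
  - name: EFS_ID
    value: {{ .Values.efs.id | quote }}
  - name: EFS_DNS
    value: {{ .Values.efs.dns | quote }}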
I stumbled upon this question when searching for a way to get Terraform outputs into environment variables specified in Kubernetes, and I expect more people will. I also suspect that that was really your question as well, or at least that it can be a way to solve your problem. So:
You can use the Kubernetes Terraform provider to connect to your cluster and then use the kubernetes_config_map resource to create configmaps.
provider "kubernetes" {}
resource "kubernetes_config_map" "efs_configmap" {
"metadata" {
name = "efs_config" // this will be the name of your configmap
}
data {
efs_id = "${aws_efs_mount_target.efs_mt.0.id}"
efs_dns = "${aws_efs_mount_target.efs_mt.0.dns_name}"
}
}
If you have secret parameters, use the kubernetes_secret resource:
resource "kubernetes_secret" "some_secrets" {
  metadata {
    name = "some-secrets"
  }
  data {
    s3_iam_access_secret = "${aws_iam_access_key.someresourcename.secret}"
    rds_password         = "${aws_db_instance.someresourcename.password}"
  }
}
You can then consume these in your k8s yaml when setting your environment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-app-deployment
spec:
  selector:
    matchLabels:
      app: some
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
        - name: some-app-container
          image: some-app-image
          env:
            - name: EFS_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-config
                  key: efs_id
            - name: RDS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: some-secrets
                  key: rds_password