I'm creating a database cluster and a DMS endpoint in the same stack. I'm using Secrets Manager to pass the connection properties of the DB Cluster to the Endpoint. I've added a dependency between the Endpoint and the DB Cluster, but when I try to deploy the stack I get an error like this:
11/14 | 10:08:49 AM | CREATE_COMPLETE | AWS::RDS::DBCluster | FooDBCluster
11/14 | 10:08:51 AM | CREATE_IN_PROGRESS | AWS::DMS::Endpoint | fooendpoint
11/14 | 10:08:51 AM | CREATE_IN_PROGRESS | AWS::SecretsManager::SecretTargetAttachment | FooDBSecretAttachment (FooDBSecretAttachmentE2E5F50F)
12/14 | 10:08:52 AM | CREATE_FAILED | AWS::DMS::Endpoint | fooendpoint Could not find a value associated with JSONKey in SecretString
The same does not happen if I have completed the deployment of the DB Cluster some time before starting to deploy the Endpoint. This implies the host and port are not present in the Secret right after the DB Cluster has been created. Indeed, in CDK they are declared after the DB Cluster.
However, I cannot add a dependency between the Endpoint and the SecretTargetAttachment, as SecretTargetAttachment is not the CfnResource type expected by the CfnEndpoint addDependsOn method.
You can add the dependency if you access the underlying node, like so:
if (secretsAttachment.node.defaultChild) {
  endPoint.node.addDependency(secretsAttachment.node.defaultChild);
}
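This works because node.defaultChild exposes the underlying CfnSecretTargetAttachment, a low-level Cfn resource, so the construct-level dependency becomes a CloudFormation DependsOn on the attachment. Alternatively, here is a sketch that assumes endPoint is the low-level CfnEndpoint and secretsmanager is the imported aws-secretsmanager module:
// Sketch: cast defaultChild to the L1 attachment and depend on it directly
if (secretsAttachment.node.defaultChild) {
  endPoint.addDependsOn(secretsAttachment.node.defaultChild as secretsmanager.CfnSecretTargetAttachment);
}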
I have created an EKS cluster using terraform-aws-modules/vpc/aws with Terraform. I use one VPC with 3 private subnets, one in each AZ in Frankfurt. I've created two services (tomcat and psql) plus deployments, which are exposed via LoadBalancer and accessible via the internet. It looks fine so far.
But the problem is that this is only one environment (DEV). I would like to create multiple environments like stage, test and more inside one VPC and inside one cluster. How can I do that with Terraform? Should I create new files per environment? That would not make sense, but nothing else comes to mind... I was also considering workspaces, but the problem is that a new workspace requires new state, which means I would need to create a new VPC with a new cluster per workspace! Maybe I should split my Terraform files so that there is something like a "general" workspace holding the VPC and cluster configuration, and then create a new workspace for each of the environments? Do you have any ideas or better solutions?
VPC - 172.26.0.0/16
+----------------------+----------------------------------+
| |
| |
| KUBERNETES CLUSTER |
| +-------------------------------------------------+ |
| | | |
| | | |
| | | |
| | +------------------+ +-----------------+ | |
| | | | | | | |
| | | TEST ENV | | DEV ENV | | |
| | | +------+ +-----+ | | +-----+ +-----+ | | |
| | | |tomcat| |psql | | | |tomcat |psql | | | |
| | | +------+ +-----+ | | +-----+ +-----+ | | |
| | | | | | | |
| | +------------------+ +-----------------+ | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| +-------------------------------------------------+ |
| |
+---------------------------------------------------------+
It is possible to create multiple environments in a single K8s cluster. You can use namespaces for this. To access the different environments from outside the cluster, you can use a different domain name for each environment.
For example, dev.abc.com to access the development environment and test.abc.com to access the test environment.
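As a minimal sketch with plain kubectl (the manifest file names here are hypothetical), each environment gets its own namespace and the same manifests are applied into each one:
kubectl create namespace dev
kubectl create namespace test
# deploy the same workloads into each environment's namespace
kubectl apply -n dev -f tomcat.yaml -f psql.yaml
kubectl apply -n test -f tomcat.yaml -f psql.yaml
DNS entries such as dev.abc.com and test.abc.com can then point at the LoadBalancer (or Ingress) of the matching namespace.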
You can "separate the vpc" in its own state file. And then have a workspace for each EKS cluster. For the EKS you can pull the VPC info one of two ways, either from AWS data source by tag or from the state file.
Your tree structure would look something like this:
├── vpc
│ ├── main.tf
│ └── outputs.tf
└── eks
└── main.tf
Add the following to the backend settings in vpc/main.tf:
terraform {
  backend "s3" {
    ...
    key                  = "vpc/terraform.tfstate"
    workspace_key_prefix = "vpc"
    ...
  }
}
and eks/main.tf:
terraform {
  backend "s3" {
    ...
    key                  = "eks/terraform.tfstate"
    workspace_key_prefix = "eks"
    ...
  }
}
Passing the VPC to the EKS section:
Option 1 (pull from the aws_vpc data source by tag, ref https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc):
data "aws_vpc" "selected" {
filter {
...
}
}
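For example, a concrete filter might match on the VPC's Name tag (the tag value below is hypothetical; use whatever tag your VPC module applies):
data "aws_vpc" "selected" {
  filter {
    name   = "tag:Name"
    values = ["my-vpc-${terraform.workspace}"]
  }
}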
Option 2 (pull from state file):
data "terraform_remote_state" "vpc" {
backend = "s3"
config = {
...
key = "vpc/terraform.tfstate"
workspace_key_prefix = "vpc"
...
}
}
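Either way, the EKS configuration can then reference the VPC attributes. For example (a sketch; the output names vpc_id and private_subnets are assumptions and depend on what you expose in vpc/outputs.tf):
locals {
  # Option 1: read attributes from the aws_vpc data source
  vpc_id_from_data  = data.aws_vpc.selected.id

  # Option 2: read outputs from the vpc state file
  vpc_id_from_state = data.terraform_remote_state.vpc.outputs.vpc_id
  subnet_ids        = data.terraform_remote_state.vpc.outputs.private_subnets
}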
It's not good practice to manage your applications inside Terraform; you can use Terraform just to create your cluster infrastructure (EC2, EKS, VPC, ...), and for what runs inside the cluster you can use helm/kubectl/... to manage your pods. For example, you can have two repositories, one for the Terraform IaC and the other for the projects, and then manage your environments (dev, staging, prod, ...) by namespaces.
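A minimal sketch of that split using Helm 3 (the chart path and values files are hypothetical): Terraform creates the cluster, and each environment is a namespace installed from the same chart with its own values:
helm install myapp ./myapp-chart --namespace staging --create-namespace -f values-staging.yaml
helm install myapp ./myapp-chart --namespace prod --create-namespace -f values-prod.yaml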
I'm having trouble managing an INACTIVE ECS task definition using CDK. Furthermore, the CloudFormation drift detection seems to miss this "deregistration".
Repro
Put the following into app.py:
from aws_cdk import (
    aws_ecs as ecs,
    core,
)


class ExampleStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        task_defn = ecs.FargateTaskDefinition(self, "ExampleTaskDefinition", cpu=256, memory_limit_mib=512)
        task_defn.add_container("ExampleContainer", image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"))


app = core.App()
ExampleStack(app, "ExampleStack", env={"region": "us-east-1"})
app.synth()
Deploy it:
$ cdk deploy
...
2/4 | 7:10:50 PM | CREATE_IN_PROGRESS | AWS::ECS::TaskDefinition | ExampleTaskDefinition (ExampleTaskDefinition47549670)
2/4 | 7:10:50 PM | CREATE_IN_PROGRESS | AWS::ECS::TaskDefinition | ExampleTaskDefinition (ExampleTaskDefinition47549670) Resource creation Initiated
3/4 | 7:10:51 PM | CREATE_COMPLETE | AWS::ECS::TaskDefinition | ExampleTaskDefinition (ExampleTaskDefinition47549670)
4/4 | 7:10:52 PM | CREATE_COMPLETE | AWS::CloudFormation::Stack | ExampleStack
✅ ExampleStack
Stack ARN:
arn:aws:cloudformation:us-east-1:865458870989:stack/ExampleStack/47a559b0-2386-11ea-a6f0-0a4fdb0c1726
...
Confirm the task definition:
$ aws ecs list-task-definitions --region=us-east-1
{
"taskDefinitionArns": [
"arn:aws:ecs:us-east-1:865458870989:task-definition/ExampleStackExampleTaskDefinition169C2730:1"
]
}
Deregister the task definition:
$ aws ecs deregister-task-definition --task-definition arn:aws:ecs:us-east-1:865458870989:task-definition/ExampleStackExampleTaskDefinition169C2730:1 --region=us-east-1
...
"status": "INACTIVE",
...
Issue 1: At this point, the CloudFormation drift detection continues to show the task definition as "IN_SYNC".
My guess is that this is because of the way ECS Task Definitions remain discoverable (https://docs.aws.amazon.com/AmazonECS/latest/userguide/deregister-task-definition.html):
"At this time, INACTIVE task definitions remain discoverable in your account indefinitely"
Question 1: Is there a way for CloudFormation to detect this as drift?
Issue 2: the CDK is unable to deploy this stack again. For example, if I add an ECS Cluster and Service to app.py:
from aws_cdk import (
    aws_ecs as ecs,
    core,
)


class ExampleStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        task_defn = ecs.FargateTaskDefinition(self, "ExampleTaskDefinition", cpu=256, memory_limit_mib=512)
        task_defn.add_container("ExampleContainer", image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"))
        cluster = ecs.Cluster(self, "ExampleServiceCluster")
        ecs.FargateService(
            self,
            "ExampleService",
            cluster=cluster,
            task_definition=task_defn,
            assign_public_ip=False,
            desired_count=1,
            min_healthy_percent=0,
            max_healthy_percent=100,
            service_name="ExampleService"
        )


app = core.App()
ExampleStack(app, "ExampleStack", env={"region": "us-east-1"})
app.synth()
I'll now get the following error:
$ cdk deploy
...
16/27 | 7:18:30 PM | CREATE_FAILED | AWS::ECS::Service | ExampleService/Service (ExampleServiceC7919DA2) TaskDefinition is inactive (Service: AmazonECS; Status Code: 400; Error Code: ClientException; Request ID: dbb927b6-b2ba-495b-be47-b202a2802465)
new BaseService (/tmp/jsii-kernel-9athIS/node_modules/@aws-cdk/aws-ecs/lib/base/base-service.js:98:25)
\_ new FargateService (/tmp/jsii-kernel-9athIS/node_modules/@aws-cdk/aws-ecs/lib/fargate/fargate-service.js:35:9)
...
Question 2: Is there an easy way to get this stack to work again without destroying it completely? I suppose I could also change something about the TaskDefinition (e.g., change the name to "ExampleTaskDefinition1") to trigger the creation of a new ARN, but this seems a bit clunky. Any ideas on a better approach?
Thanks in advance!
As far as I know, Amazon RDS supports stopping and starting of database instances.
I am running the command from a Mac (macOS Sierra).
I want to start a DB instance using the AWS Command Line Interface (following this tutorial: http://docs.aws.amazon.com/cli/latest/reference/rds/start-db-instance.html)
But somehow I got an error:
MacBook-Pro-de-lopes:~ lopes$ aws rds start-db-instance lopesdbtest
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
add-source-identifier-to-subscription | add-tags-to-resource
apply-pending-maintenance-action | authorize-db-security-group-ingress
copy-db-cluster-snapshot | copy-db-parameter-group
copy-db-snapshot | copy-option-group
create-db-cluster | create-db-cluster-parameter-group
create-db-cluster-snapshot | create-db-instance
create-db-instance-read-replica | create-db-parameter-group
create-db-security-group | create-db-snapshot
create-db-subnet-group | create-event-subscription
create-option-group | delete-db-cluster
delete-db-cluster-parameter-group | delete-db-cluster-snapshot
delete-db-instance | delete-db-parameter-group
delete-db-security-group | delete-db-snapshot
delete-db-subnet-group | delete-event-subscription
delete-option-group | describe-account-attributes
describe-certificates | describe-db-cluster-parameter-groups
describe-db-cluster-parameters | describe-db-cluster-snapshots
describe-db-clusters | describe-db-engine-versions
describe-db-instances | describe-db-log-files
describe-db-parameter-groups | describe-db-parameters
describe-db-security-groups | describe-db-snapshot-attributes
describe-db-snapshots | describe-db-subnet-groups
describe-engine-default-cluster-parameters | describe-engine-default-parameters
describe-event-categories | describe-event-subscriptions
describe-events | describe-option-group-options
describe-option-groups | describe-orderable-db-instance-options
describe-pending-maintenance-actions | describe-reserved-db-instances
describe-reserved-db-instances-offerings | download-db-log-file-portion
failover-db-cluster | list-tags-for-resource
modify-db-cluster | modify-db-cluster-parameter-group
modify-db-instance | modify-db-parameter-group
modify-db-snapshot-attribute | modify-db-subnet-group
modify-event-subscription | promote-read-replica
purchase-reserved-db-instances-offering | reboot-db-instance
remove-source-identifier-from-subscription | remove-tags-from-resource
reset-db-cluster-parameter-group | reset-db-parameter-group
restore-db-cluster-from-snapshot | restore-db-cluster-to-point-in-time
restore-db-instance-from-db-snapshot | restore-db-instance-to-point-in-time
revoke-db-security-group-ingress | add-option-to-option-group
remove-option-from-option-group | wait
help
Invalid choice: 'start-db-instance', maybe you meant:
* reboot-db-instance
* create-db-instance
You need to update to the latest version of the AWS CLI tool. The version you currently have installed was released before the RDS start/stop feature was available.
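For example, assuming the CLI was installed with pip (adjust for Homebrew or the bundled installer):
pip install --upgrade awscli
aws --version
aws rds start-db-instance --db-instance-identifier lopesdbtest
Note that start-db-instance takes the instance name via --db-instance-identifier rather than as a positional argument.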
It is a new feature (announced on June 1, 2017). You have to upgrade your AWS CLI.
Amazon RDS Supports Stopping and Starting of Database Instances
I want to launch an instance with two vNICs: one vNIC to use for a private network and the other vNIC for a public network. How can we do that in OpenStack?
You need to boot the image with the --nic net-id= argument given twice, as in the example below:
nova boot --image cirros-0.3.3-x86_64 --flavor tiny_ram_small_disk --nic net-id=b7ab2080-a71a-44f6-9f66-fde526bb73d3 --nic net-id=120a6fde-7e2d-4856-90ee-5609a5f3035f --security-group default --key-name bob-key CIRROSone
Here is the result:
root@columbo:~# nova list
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
| d75ef5b3-060d-4ec0-9ddf-a3685a7f1199 | CIRROSone | ACTIVE | - | Running | SecondVlan=5.5.5.4; SERVER_VLAN_1=10.255.1.16 |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
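To double-check the ports attached to the instance (assuming the same nova client is available), nova interface-list shows each port with its network ID and fixed IP:
nova interface-list CIRROSone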
Here is the documentation for the Activity data type.
However, I think I've seen 4 status codes for the responses:
'Successful'
'Cancelled'
'InProgress'
'PreInProgress'
Are there any others?
It looks like they have updated the documentation at the same URL you shared:
Valid Values: WaitingForSpotInstanceRequestId | WaitingForSpotInstanceId | WaitingForInstanceId | PreInService | InProgress | Successful | Failed | Cancelled