VPN Using AWS CDK - amazon-web-services

I've been working on creating a VPN using AWS's CDK. I had to use lower-level CloudFormation resources, as there don't seem to be any constructs for this yet. I believe I have the code set up correctly, as cdk diff doesn't show any errors. However, when running cdk deploy I get the following error:
CREATE_FAILED | AWS::EC2::ClientVpnEndpoint | ClientVpnEndpoint2
Mutual authentication is required but is missing in the request (Service: AmazonEC2; Status Code: 400; Error Code: MissingParameter; Request ID: 5384a1d9-ff60-4ac4-a1bc-df3a4db9146b; Proxy: null)
Which is odd... because I wouldn't think I'd need mutual authentication in order to create a VPN that uses mutual authentication. And if that is the case, how do I get the AWS CDK stack to use mutual authentication on deployment? Here is the relevant code I have:
client_cert = certificate_manager.Certificate.from_certificate_arn(
    self,
    "ClientCertificate",
    self.client_arn,
)
server_cert = certificate_manager.Certificate.from_certificate_arn(
    self,
    "ServerCertificate",
    self.cert_arn,
)
log_group = logs.LogGroup(
    self,
    "ClientVpnLogGroup",
    retention=logs.RetentionDays.ONE_MONTH,
)
log_stream = log_group.add_stream("ClientVpnLogStream")
endpoint = ec2.CfnClientVpnEndpoint(
    self,
    "ClientVpnEndpoint2",
    description="VPN",
    authentication_options=[{
        "type": "certificate-authentication",
        "mutual_authentication": {
            "client_root_certificate_chain_arn": client_cert.certificate_arn
        }
    }],
    tag_specifications=[{
        "resourceType": "client-vpn-endpoint",
        "tags": [{
            "key": "Name",
            "value": "Swyp VPN CDK created"
        }]
    }],
    client_cidr_block="10.27.0.0/20",
    connection_log_options={
        "enabled": True,
        "cloudwatch_log_group": log_group.log_group_name,
        "cloudwatch_log_stream": log_stream.log_stream_name,
    },
    server_certificate_arn=server_cert.certificate_arn,
    split_tunnel=False,
    vpc_id=vpc.vpc_id,
    dns_servers=["8.8.8.8", "8.8.4.4"],
)
dependables = core.ConcreteDependable()
for i, subnet in enumerate(vpc.isolated_subnets):
    network_asc = ec2.CfnClientVpnTargetNetworkAssociation(
        self,
        "ClientVpnNetworkAssociation-" + str(i),
        client_vpn_endpoint_id=endpoint.ref,
        subnet_id=subnet.subnet_id,
    )
    dependables.add(network_asc)
auth_rule = ec2.CfnClientVpnAuthorizationRule(
    self,
    "ClientVpnAuthRule",
    client_vpn_endpoint_id=endpoint.ref,
    target_network_cidr="0.0.0.0/0",
    authorize_all_groups=True,
    description="Allow all",
)
# add routes for subnets in order to surf the internet (useful while split_tunnel is off)
for i, subnet in enumerate(vpc.isolated_subnets):
    ec2.CfnClientVpnRoute(
        self,
        "CfnClientVpnRoute" + str(i),
        client_vpn_endpoint_id=endpoint.ref,
        destination_cidr_block="0.0.0.0/0",
        description="Route to all",
        target_vpc_subnet_id=subnet.subnet_id,
    ).node.add_dependency(dependables)
Maybe this is something simple, like needing to update IAM policies? I'm fairly new to AWS, the CDK/CloudFormation, and DevOps in general, so any insight would be much appreciated!
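One thing worth checking (a hypothesis, not a confirmed fix): within the same call, the tag_specifications dict uses camelCase keys ("resourceType"), while the authentication_options dict uses snake_case ("mutual_authentication"). If the untyped dict is passed through with keys the deploy-time layer doesn't recognize, the MutualAuthentication block would be silently dropped from the request, which matches the "required but is missing" error exactly. A camelCase version of the same options would look like this (the ARN is a placeholder):

```python
# Hypothesis only: use camelCase keys, matching the tag_specifications block
# above, so the mutual-authentication block is not silently dropped.
authentication_options = [{
    "type": "certificate-authentication",
    "mutualAuthentication": {
        # Placeholder ARN -- substitute client_cert.certificate_arn
        "clientRootCertificateChainArn": "arn:aws:acm:us-east-1:111111111111:certificate/client-root",
    },
}]
```

A way to sidestep the key-casing question entirely is to use the typed property classes (e.g. ec2.CfnClientVpnEndpoint.ClientAuthenticationRequestProperty) instead of plain dicts, so a mistyped key fails at synth time instead of being dropped.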

Related

How to send an HTTP PATCH request for a Google Cloud Deployment Manager resource using a Python template

I'm creating an HA VPN using Google Cloud Deployment Manager, following this guide:
https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-ha-vpn#api_4
As part of the guide, there is a requirement to send a PATCH to the existing cloud router already created; however, I haven't been able to find a way to make a PATCH request in my Python template.
The resource is currently set up as below in my Python template:
resources.extend([
    {
        # Cloud Router resource for HA VPN.
        'name': 'cloud_router',
        # https://cloud.google.com/compute/docs/reference/rest/v1/routers
        'type': 'gcp-types/compute-v1:routers',
        'properties': {
            'router': cloud_router,
            'name': cloud_router,
            'project': project_id,
            'network': network,
            'region': context.properties['region'],
            'interfaces': [{
                'name': f"{cloud_router}-bgp-int-0",
                'linkedVpnTunnel': 'vpn_tunnel',
                'ipRange': context.properties['bgp_ip_0'] + context.properties['subnet_mask_0'],
            }],
        },
        'metadata': {
            'dependsOn': [
                f"{vpn_tunnel}0",
                f"{vpn_tunnel}1",
                cloud_router,
            ],
        },
    },
])
The rest of the resources (vpn_tunnel, vpnGateway, ExternalVPNGateway, cloud router) all create fine as POST requests on the Deployment Manager console.
The error I receive is related to the "linkedVpnTunnel" value, which is the name of the VPN tunnel used as per the how-to guide. If I remove this field, the resource is created via the POST request; however, the BGP peer isn't associated with the tunnel as required, because of the missing field.
code: RESOURCE_ERROR
location: /deployments/ha-vpn-test/resources/cr-bgp-int
message: "{"ResourceType":"gcp-types/compute-v1:routers","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid value for field 'resource.interfaces[0].linkedVpnTunnel': 'vpn-tunnel-0'. The URL is malformed.","reason":"invalid"}],"message":"Invalid value for field 'resource.interfaces[0].linkedVpnTunnel': 'vpn-tunnel-0'. The URL is malformed.","statusMessage":"Bad Request","requestPath":"https://compute.googleapis.com/compute/v1/projects/dev-test/regions/asia-southeast1/routers","httpMethod":"POST"}}"
Found the problem.
The methods listed on the API site can be appended directly to the end of the 'type' field; alternatively, the 'action' field can be used, but that isn't recommended.
This allowed me to send an HTTP PATCH request:
'type': 'gcp-types/compute-v1:compute.routers.patch'
Previously I had the below, which resulted in a POST:
'type': 'gcp-types/compute-v1:routers'
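Applied to the router resource above, the fix is a one-line change to the 'type' field. A minimal sketch (resource and property names here are illustrative, not taken from the actual deployment):

```python
# Sketch: the same router resource, with the method name appended to 'type'
# so Deployment Manager issues a PATCH instead of the default POST (insert).
router_patch = {
    'name': 'cloud_router_bgp_patch',
    # 'gcp-types/compute-v1:routers' alone results in a POST;
    # appending the method name selects PATCH instead.
    'type': 'gcp-types/compute-v1:compute.routers.patch',
    'properties': {
        'router': 'my-cloud-router',  # illustrative name
        'interfaces': [{
            'name': 'my-cloud-router-bgp-int-0',
            'linkedVpnTunnel': 'vpn-tunnel-0',
        }],
    },
}
```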

What is the integration service name for CloudWatch

I am trying to create an AWS API Gateway with an AWS service integration with CloudWatch, using AWS CDK/CloudFormation. But I am getting errors like "AWS service of type cloudwatch not supported". When I use CloudWatch Logs it works, but not plain CloudWatch.
Code
new AwsIntegrationProps
{
    Region = copilotFoundationalInfrastructure.Region,
    Options = new IntegrationOptions {
        PassthroughBehavior = PassthroughBehavior.WHEN_NO_TEMPLATES,
        CredentialsRole = Role.FromRoleArn(this, "CloudWatchAccessRole", "arn:aws:iam::800524210815:role/APIGatewayCloudWatchRole"),
        RequestParameters = new Dictionary<string, string>()
        {
            { "integration.request.header.Content-Encoding", "'amz-1.0'" },
            { "integration.request.header.Content-Type", "'application/json'" },
            { "integration.request.header.X-Amz-Target", "'GraniteServiceVersion20100801.PutMetricData'" },
        },
    },
    IntegrationHttpMethod = "POST",
    Service = "cloudwatch", // this is working with s3 and logs
    Action = "PutMetricData"
}
What is the correct service name for CloudWatch to call PutMetricData?
new AwsIntegrationProps
{
    Region = copilotFoundationalInfrastructure.Region,
    Options = new IntegrationOptions {
        PassthroughBehavior = PassthroughBehavior.WHEN_NO_TEMPLATES,
        CredentialsRole = Role.FromRoleArn(this, "CloudWatchAccessRole", "arn:aws:iam::800524210815:role/APIGatewayCloudWatchRole"),
        RequestParameters = new Dictionary<string, string>()
        {
            { "integration.request.header.Content-Encoding", "'amz-1.0'" },
            { "integration.request.header.Content-Type", "'application/json'" },
            { "integration.request.header.X-Amz-Target", "'GraniteServiceVersion20100801.PutMetricData'" },
        },
    },
    IntegrationHttpMethod = "POST",
    Service = "", // What will be the correct value for cloudwatch
    Action = "PutMetricData"
}
What will be the correct value for CloudWatch?
For CloudWatch Logs you put "logs", right? So for CloudWatch, it is "monitoring"... I got it from GitHub code but cannot find it anymore.
There are several ways to configure CloudWatch to monitor your API Gateway. First, you can create an AWS CloudWatch metric to monitor specific outputs produced by your API Gateway. The second way is to use the default configuration.
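Building on the "monitoring" suggestion above: "monitoring" is CloudWatch's endpoint prefix (monitoring.&lt;region&gt;.amazonaws.com), which is the form API Gateway AWS integrations expect, whereas "cloudwatch" is not a service endpoint. A language-neutral sketch of the integration settings as data (keys mirror the C# AwsIntegrationProps above; the role ARN and headers are copied from the question):

```python
# Sketch: the same integration settings, with the CloudWatch service name set
# to its endpoint prefix "monitoring" instead of "cloudwatch".
integration_props = {
    "Service": "monitoring",  # CloudWatch's endpoint prefix, per the answer above
    "Action": "PutMetricData",
    "IntegrationHttpMethod": "POST",
    "RequestParameters": {
        "integration.request.header.Content-Encoding": "'amz-1.0'",
        "integration.request.header.Content-Type": "'application/json'",
        "integration.request.header.X-Amz-Target": "'GraniteServiceVersion20100801.PutMetricData'",
    },
}
```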

AWS CDK: enabling access logging for classical load balancer

We are using a Classic Load Balancer in our infrastructure, deployed via CDK. For deploying the load balancer we are using level 2 constructs. The code is like this:
const lb = new elb.LoadBalancer(this, 'LB', {
  vpc: vpcRef,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
});
lb.addListener({
  externalPort: 80,
});
We are not able to find any property with which we can enable access logging. Someone suggested using AccessLoggingPolicyProperty; I checked and found that this property can be used with level 1 constructs only. Can someone please guide me on how we can enable access logs via CDK on a Classic Load Balancer using level 2 constructs?
As per the documentation, you need an S3 bucket with the right permissions configured. With that, you can follow the aws-cdk documentation on how to get access to the L1 construct.
It is going to look roughly like the following code:
const lbLogs = new Bucket(this, 'LB Logs');
const elbAccountId = 'TODO: find right account for your region in docs';
lbLogs.grantPut(new AccountPrincipal(elbAccountId));
lbLogs.grantPut(
  new ServicePrincipal('delivery.logs.amazonaws.com', {
    conditions: {
      StringEquals: {
        's3:x-amz-acl': 'bucket-owner-full-control',
      },
    },
  })
);
lbLogs.grantRead(new ServicePrincipal('delivery.logs.amazonaws.com'));
const cfnLoadBalancer = lb.node.defaultChild as CfnLoadBalancer;
cfnLoadBalancer.accessLoggingPolicy = {
  enabled: true,
  s3BucketName: lbLogs.bucketName,
};

AWS CDK access Denied when trying to create connection AWS glue

I have written CDK code for creating a Glue connection, but I am getting an error while creating it:
User: arn:aws:iam::XXXXXXX:root is not authorized to perform: glue:CreateConnection on resource: arn:aws:glue:us-east-2:Connectionnew:catalog (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: a8702efb-4467-4ffb-8fe0-18468f336299)
Below is my simple Code:
glue_connection = glue.CfnConnection(self, "Connectionnew",
    catalog_id = "Connectionnew",
    connection_input = {
        "connectionType": "JDBC",
        "Name": "JDBCConnection",
        "connectionProperties": {
            "JDBC_CONNECTION_URL": "jdbc:redshift://non-prod-royalties2.xxxxxxx.us-east-1.redshift.amazonaws.com:xxx/xxxxx",
            "USERNAME": "xxxxxx",
            "Password": "xxxxxxxx"
        }
    }
)
Please help me with this
Since you are using the root account (which is not advisable), it's not an issue of your active AWS user having incorrect permissions.
Likely, the connection details you are providing are incorrect. The username/password might be correct, but the formatting of the JSON is questionable. I'd check whether the JDBC keys are case-sensitive, because that could be your issue.
I was able to get this issue resolved by putting the AWS account number in catalog_id, as below:
glue_connection = glue.CfnConnection(self, "Connectionnew",
    catalog_id = "AWSAccountNumber",
    connection_input = {
        "connectionType": "JDBC",
        "Name": "JDBCConnection",
        "connectionProperties": {
            "JDBC_CONNECTION_URL": "jdbcredshiftlink",
            "USERNAME": "xxxxxx",
            "PASSWORD": "xxxxxxxx"
        }
    }
)

Idle delete configuration for PySpark Cluster on GCP

I am trying to define a create-cluster function to create a cluster on Cloud Dataproc. While going through the reference material, I came across an idle-delete parameter (idleDeleteTtl), which auto-deletes the cluster if it is not in use for the amount of time defined. When I try to include it in the cluster config, it gives me: ValueError: Protocol message ClusterConfig has no "lifecycleConfig" field.
The create cluster function for reference:
def create_cluster(dataproc, project, zone, region, cluster_name, pip_packages):
    """Create the cluster."""
    print('Creating cluster...')
    zone_uri = \
        'https://www.googleapis.com/compute/v1/projects/{}/zones/{}'.format(
            project, zone)
    cluster_data = {
        'project_id': project,
        'cluster_name': cluster_name,
        'config': {
            'initialization_actions': [{
                'executable_file': 'gs://<some_path>/python/pip-install.sh'
            }],
            'gce_cluster_config': {
                'zone_uri': zone_uri,
                'metadata': {
                    'PIP_PACKAGES': pip_packages
                }
            },
            'master_config': {
                'num_instances': 1,
                'machine_type_uri': 'n1-standard-1'
            },
            'worker_config': {
                'num_instances': 2,
                'machine_type_uri': 'n1-standard-1'
            },
            'lifecycleConfig': {  #### PROBLEM AREA ####
                'idleDeleteTtl': '30m'
            }
        }
    }
    cluster = dataproc.create_cluster(project, region, cluster_data)
    cluster.add_done_callback(callback)
    global waiting_callback
    waiting_callback = True
I want this auto-delete functionality, ideally in this same function. I already have a manual delete function defined, but I want clusters to be deleted automatically when not in use.
You are calling the v1 API while passing a parameter that is part of the v1beta2 API.
Change your endpoint from:
https://www.googleapis.com/compute/v1/projects/{}/zones/{}
To this:
https://www.googleapis.com/compute/v1beta2/projects/{}/zones/{}