How to send an HTTP PATCH request for a Google Cloud Deployment Manager resource using a Python template - google-cloud-platform

I'm creating an HA VPN with Google Cloud Deployment Manager, following this guide:
https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-ha-vpn#api_4
As part of the guide there is a requirement to send a PATCH request to the existing Cloud Router that was already created, but I haven't been able to find a way to issue a PATCH request from my Python template.
The resource is currently set up as below in my Python template:
resources.extend([
    {
        # Cloud Router resource for HA VPN.
        'name': 'cloud_router',
        # https://cloud.google.com/compute/docs/reference/rest/v1/routers
        'type': 'gcp-types/compute-v1:routers',
        'properties': {
            'router': cloud_router,
            'name': cloud_router,
            'project': project_id,
            'network': network,
            'region': context.properties['region'],
            'interfaces': [{
                "name": f"{cloud_router}-bgp-int-0",
                "linkedVpnTunnel": "vpn_tunnel",
                "ipRange": context.properties["bgp_ip_0"] + context.properties["subnet_mask_0"]
            }],
        },
        'metadata': {
            'dependsOn': [
                f"{vpn_tunnel}0",
                f"{vpn_tunnel}1",
                cloud_router,
            ]
        }
    }
])
The rest of the resources (vpn_tunnel, vpnGateway, ExternalVPNGateway, cloud router) all create fine as a POST request in the Deployment Manager console.
The error I receive relates to the "linkedVpnTunnel" value, which is the name of the VPN tunnel as per the how-to guide. If I remove this field the resource is created via the POST request, but the BGP peer isn't associated with the tunnel as required because of the missing field.
code: RESOURCE_ERROR
location: /deployments/ha-vpn-test/resources/cr-bgp-int
message: "{"ResourceType":"gcp-types/compute-v1:routers","ResourceErrorCode":"400",
  "ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid value for
  field 'resource.interfaces[0].linkedVpnTunnel': 'vpn-tunnel-0'. The URL is malformed.",
  "reason":"invalid"}],"message":"Invalid value for field 'resource.interfaces[0].linkedVpnTunnel':
  'vpn-tunnel-0'. The URL is malformed.","statusMessage":"Bad Request",
  "requestPath":"https://compute.googleapis.com/compute/v1/projects/dev-test/regions/asia-southeast1/routers",
  "httpMethod":"POST"}}"

Found the problem.
The methods listed on the API reference can be appended directly to the end of the 'type' field; alternatively the 'action' field can be used, but that isn't recommended.
This allowed me to send an HTTP PATCH request:
'type': 'gcp-types/compute-v1:compute.routers.patch'
Previously I had the below, which resulted in a POST:
'type': 'gcp-types/compute-v1:routers'
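For completeness, here is a minimal sketch of what the patching resource could look like in the template. It reuses the variable names from the question (cloud_router, project_id, vpn_tunnel, context) and assumes the linked tunnel is referenced via its selfLink; treat the exact body as an assumption rather than the guide's definitive solution.

# Hedged sketch: PATCH the existing Cloud Router to attach the BGP interface.
# Variable names follow the question's template and are assumptions about the
# surrounding code, not values defined here.
resources.append({
    'name': 'cloud-router-patch',
    # Appending the method name to the type makes Deployment Manager issue a PATCH.
    'type': 'gcp-types/compute-v1:compute.routers.patch',
    'properties': {
        'router': cloud_router,
        'project': project_id,
        'region': context.properties['region'],
        'interfaces': [{
            'name': f"{cloud_router}-bgp-int-0",
            # Referencing the tunnel resource resolves to a full URL, which avoids
            # the "URL is malformed" error produced by a bare tunnel name.
            'linkedVpnTunnel': f"$(ref.{vpn_tunnel}0.selfLink)",
            'ipRange': context.properties["bgp_ip_0"] + context.properties["subnet_mask_0"],
        }],
    },
    'metadata': {
        'dependsOn': [f"{vpn_tunnel}0", cloud_router],
    },
})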

Related

How can I specify certificate authority when creating a model in Sagemaker with boto3

We are trying to create a model in SageMaker with boto3, based on an image we saved in a private Artifactory. We found a way to set up the basic credentials according to the docs (https://docs.amazonaws.cn/en_us/sagemaker/latest/dg/your-algorithms-containers-inference-private.html), but I can't find a solution to pass the certificate authority.
The code we use to create the model is:
import boto3

client = boto3.client('sagemaker')
response = client.create_model(
    ModelName='model-name',
    PrimaryContainer={
        'Image': 'local-repo.my_artifactory_endpoint/train:latest',
        'ImageConfig': {
            'RepositoryAccessMode': 'Vpc',
        },
        'Environment': {
            'SAGEMAKER_PROGRAM': 'train.py',
            'SAGEMAKER_SUBMIT_DIRECTORY': '/opt/ml/model/code',
            'SAGEMAKER_CONTAINER_LOG_LEVEL': '20',
            'SAGEMAKER_REGION': <my-region>,
            'MMS_DEFAULT_RESPONSE_TIMEOUT': '500'
        }
    },
    ExecutionRoleArn=<my-role>,
    VpcConfig={
        'SecurityGroupIds': [
            ...
        ],
        'Subnets': [
            ...
        ]
    },
    EnableNetworkIsolation=False
)
Afterwards, when we try to create an endpoint based on that model, we end up with the error:
Attempt to pull model image local-repo.my_artifactory_endpoint/train:latest failed due to error constructing client to call registry: constructing authentication challenge manager: Get "https://local-repo.my_artifactory_endpoint/v2/": x509: certificate signed by unknown authority.
We can't find a way to pass this certificate or to skip the SSL verification.

VPN Using AWS CDK

I've been working on creating a VPN using AWS's CDK. I had to use the lower-level CloudFormation resources, as there don't seem to be any constructs yet. I believe I have the code set up correctly, as cdk diff doesn't show any errors. However, when running cdk deploy I get the following error:
CREATE_FAILED | AWS::EC2::ClientVpnEndpoint | ClientVpnEndpoint2
Mutual authentication is required but is missing in the request (Service: AmazonEC2; Status Code: 400; Error Code: MissingParameter; Request ID: 5384a1d9-ff60-4ac4-a1bc-df3a4db9146b; Proxy: null)
Which is odd... because I wouldn't think I'd need mutual authentication in order to create a VPN that uses mutual authentication. And if that is the case, then how do I get the aws cdk stack to use mutual authentication on deployment? Here is the relevant code I have:
client_cert = certificate_manager.Certificate.from_certificate_arn(
    self,
    "ServerCertificate",
    self.cert_arn,
)
server_cert = certificate_manager.Certificate.from_certificate_arn(
    self,
    "ClientCertificate",
    self.client_arn,
)
log_group = logs.LogGroup(
    self,
    "ClientVpnLogGroup",
    retention=logs.RetentionDays.ONE_MONTH
)
log_stream = log_group.add_stream("ClientVpnLogStream")
endpoint = ec2.CfnClientVpnEndpoint(
    self,
    "ClientVpnEndpoint2",
    description="VPN",
    authentication_options=[{
        "type": "certificate-authentication",
        "mutual_authentication": {
            "client_root_certificate_chain_arn": client_cert.certificate_arn
        }
    }],
    tag_specifications=[{
        "resourceType": "client-vpn-endpoint",
        "tags": [{
            "key": "Name",
            "value": "Swyp VPN CDK created"
        }]
    }],
    client_cidr_block="10.27.0.0/20",
    connection_log_options={
        "enabled": True,
        "cloudwatch_log_group": log_group.log_group_name,
        "cloudwatch_log_stream": log_stream.log_stream_name,
    },
    server_certificate_arn=server_cert.certificate_arn,
    split_tunnel=False,
    vpc_id=vpc.vpc_id,
    dns_servers=["8.8.8.8", "8.8.4.4"],
)
dependables = core.ConcreteDependable()
for i, subnet in enumerate(vpc.isolated_subnets):
    network_asc = ec2.CfnClientVpnTargetNetworkAssociation(
        self,
        "ClientVpnNetworkAssociation-" + str(i),
        client_vpn_endpoint_id=endpoint.ref,
        subnet_id=subnet.subnet_id,
    )
    dependables.add(network_asc)
auth_rule = ec2.CfnClientVpnAuthorizationRule(
    self,
    "ClientVpnAuthRule",
    client_vpn_endpoint_id=endpoint.ref,
    target_network_cidr="0.0.0.0/0",
    authorize_all_groups=True,
    description="Allow all"
)
# add routes for subnets in order to surf internet (useful while splitTunnel is off)
for i, subnet in enumerate(vpc.isolated_subnets):
    ec2.CfnClientVpnRoute(
        self,
        "CfnClientVpnRoute" + str(i),
        client_vpn_endpoint_id=endpoint.ref,
        destination_cidr_block="0.0.0.0/0",
        description="Route to all",
        target_vpc_subnet_id=subnet.subnet_id,
    ).node.add_dependency(dependables)
Maybe this is something simple like needing to update IAM policies? I'm fairly new to AWS, AWS CDK/CloudFormation, and DevOps in general, so any insight would be much appreciated!

How to ingest variable data like passwords into compute instance when deploying from template

We are trying to figure out how we can create a Compute Engine template and set some information like passwords, with the help of variables, at the moment the final instance is generated by Deployment Manager, not in the base image.
When deploying something from the Marketplace you can see that passwords are generated by "password.py" and stored as metadata in the VM's template. But I can't find the code that writes this data into the VM's disk image.
Could someone explain how this can be achieved?
Edit:
I found out that startup scripts are able to read the instance's metadata: https://cloud.google.com/compute/docs/storing-retrieving-metadata. Is this how they do it in Marketplace click-to-deploy scripts like https://console.cloud.google.com/marketplace/details/click-to-deploy-images/wordpress ? Or is there an even better way to accomplish this?
The best way is to use the metadata server.
In a startup script, use this to retrieve all the attributes of your VM:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"
Then do what you want with them.
Don't forget to delete the secrets from metadata after use, or change them on the instance. Secrets must stay secret.
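For illustration, here is a minimal Python sketch of reading a single attribute from inside the VM, equivalent to the curl call above; the attribute name admin-password is an assumption for the example, not something defined by the question.

# Minimal sketch: read one metadata attribute from inside the instance.
# The attribute name 'admin-password' is an assumption for illustration.
import urllib.request

METADATA_URL = ('http://metadata.google.internal/computeMetadata/v1/'
                'instance/attributes/admin-password')

request = urllib.request.Request(METADATA_URL, headers={'Metadata-Flavor': 'Google'})
with urllib.request.urlopen(request) as response:
    secret = response.read().decode('utf-8')

# Use the value, then consider removing it from metadata as noted above.
print(secret)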
By the way, I'd recommend having a look at another tool: Berglas. Berglas is made by a Google Developer Advocate specialized in security, Seth Vargo. In summary, the principle is:
Bootstrap a bucket with Berglas
Create a secret in this bucket with Berglas
Pass the reference to this secret in your compute metadata (berglas://<my_bucket>/<my secret name>)
Use Berglas in the startup script to resolve the secret.
All these actions are possible from the command line, so integration in a script is possible.
You can use Python templates, which gives you more flexibility. In your YAML you can call the Python script to fill in the necessary information. From the documentation:
imports:
- path: vm-template.py

resources:
- name: vm-1
  type: vm-template.py
- name: a-new-network
  type: compute.v1.network
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: true
where vm-template.py is a Python script:
"""Creates the virtual machine."""
COMPUTE_URL_BASE = 'https://www.googleapis.com/compute/v1/'
def GenerateConfig(unused_context):
"""Creates the first virtual machine."""
resources = [{
'name': 'the-first-vm',
'type': 'compute.v1.instance',
'properties': {
'zone': 'us-central1-f',
'machineType': ''.join([COMPUTE_URL_BASE, 'projects/[MY_PROJECT]',
'/zones/us-central1-f/',
'machineTypes/f1-micro']),
'disks': [{
'deviceName': 'boot',
'type': 'PERSISTENT',
'boot': True,
'autoDelete': True,
'initializeParams': {
'sourceImage': ''.join([COMPUTE_URL_BASE, 'projects/',
'debian-cloud/global/',
'images/family/debian-9'])
}
}],
'networkInterfaces': [{
'network': '$(ref.a-new-network.selfLink)',
'accessConfigs': [{
'name': 'External NAT',
'type': 'ONE_TO_ONE_NAT'
}]
}]
}
}]
return {'resources': resources}
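To connect this back to the question about passwords, here is a hedged sketch of how a value passed to the template could be attached as instance metadata, which a startup script can then read from the metadata server. The property name password and the metadata key admin-password are illustrative assumptions, not part of the documentation example.

# Hedged sketch: expose a template property as instance metadata.
# 'password' and 'admin-password' are illustrative names, not from the docs example.
def GenerateConfig(context):
    resources = [{
        'name': 'the-first-vm',
        'type': 'compute.v1.instance',
        'properties': {
            # ... zone, machineType, disks, networkInterfaces as in the example above ...
            'metadata': {
                'items': [{
                    'key': 'admin-password',
                    'value': context.properties['password'],
                }]
            },
        }
    }]
    return {'resources': resources}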
Now, for the password, it depends on which OS the VM runs, Windows or Linux.
On Linux you can add a startup script which injects an SSH public key.
On Windows you can first prepare the proper key; see this: Automate password generation

RDS generate_presigned_url does not support the DestinationRegion parameter

I was trying to set up an encrypted RDS replica in another region, but I got stuck on generating the pre-signed URL.
It seems that boto3/botocore does not allow the DestinationRegion parameter, which is defined as a requirement in the AWS API (link) when we want to generate a PreSignedUrl.
Versions used:
boto3 (1.4.7)
botocore (1.7.10)
Output:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "DestinationRegion", must be one of: DBInstanceIdentifier, SourceDBInstanceIdentifier, DBInstanceClass, AvailabilityZone, Port, AutoMinorVersionUpgrade, Iops, OptionGroupName, PubliclyAccessible, Tags, DBSubnetGroupName, StorageType, CopyTagsToSnapshot, MonitoringInterval, MonitoringRoleArn, KmsKeyId, PreSignedUrl, EnableIAMDatabaseAuthentication, SourceRegion
Example code:
import boto3

url = boto3.client('rds', 'eu-east-1').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DestinationRegion': 'eu-east-1',
        'SourceDBInstanceIdentifier': 'abc',
        'KmsKeyId': '1234',
        'DBInstanceIdentifier': 'someidentifier'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
The same issue was already reported but got closed.
Thanks for help,
Petar
Generate the pre-signed URL from the source region, then populate create_db_instance_read_replica with that URL.
The presigned URL must be a valid request for the CreateDBInstanceReadReplica API action that can be executed in the source AWS Region that contains the encrypted source DB instance
PreSignedUrl (string) --
The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica API action in the source AWS Region that contains the source DB instance.
import boto3

session = boto3.Session(profile_name='profile_name')

url = session.client('rds', 'SOURCE_REGION').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DBInstanceIdentifier': 'db-1-read-replica',
        'SourceDBInstanceIdentifier': 'database-source',
        'SourceRegion': 'SOURCE_REGION'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
print(url)

source_db = session.client('rds', 'SOURCE_REGION').describe_db_instances(
    DBInstanceIdentifier='database-SOURCE'
)
print(source_db)

response = session.client('rds', 'DESTINATION_REGION').create_db_instance_read_replica(
    SourceDBInstanceIdentifier="arn:aws:rds:SOURCE_REGION:account_number:db:database-SOURCE",
    DBInstanceIdentifier="db-1-read-replica",
    KmsKeyId='DESTINATION_REGION_KMS_ID',
    PreSignedUrl=url,
    SourceRegion='SOURCE_REGION'
)
print(response)

Amazon API Gateway Swagger importer tool does not import the minItems field from Swagger

I am trying the API Gateway validation example from https://github.com/rpgreen/apigateway-validation-demo . I observed that, from the given swagger.json file, minItems is not imported into the models that were created during the Swagger import.
"CreateOrders": {
"title": "Create Orders Schema",
"type": "array",
"minItems" : 1,
"items": {
"type": "object",
"$ref" : "#/definitions/Order"
}
}
Because of this, when you give an empty array [] as input, instead of throwing an error about the minimum number of items in the array, the API responds with the message 'created orders successfully'.
When I manually add the same from the API Gateway console UI, it seems to work as expected. Am I missing something, or is this a bug in the importer?
This is a known issue with the Swagger import feature of API Gateway.
From http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html
The maxItems and minItems tags are not included in simple request validation. To work around this, update the model after import before doing validation.
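As an illustration of that workaround, here is a hedged boto3 sketch that re-adds minItems to the imported model; the REST API ID ('abc123') and model name ('CreateOrders') are placeholders for your own values, and the exact schema adjustment depends on your API.

# Hedged sketch: restore minItems on a model after the Swagger import.
# 'abc123' and 'CreateOrders' are placeholder values for illustration.
import json
import boto3

apigw = boto3.client('apigateway')

# Fetch the schema that the import created for the model.
model = apigw.get_model(restApiId='abc123', modelName='CreateOrders', flatten=False)
schema = json.loads(model['schema'])

# Re-add the constraint that the importer dropped.
schema['minItems'] = 1

# Replace the model schema with the corrected version.
apigw.update_model(
    restApiId='abc123',
    modelName='CreateOrders',
    patchOperations=[{
        'op': 'replace',
        'path': '/schema',
        'value': json.dumps(schema),
    }],
)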