Right now I'm learning how to set up a website served from a GCP bucket using Pulumi; however, I'm stuck at the last step: exposing an IP address and attaching it to the LB. Everything looks good except for the warning "This load balancer has no frontend configured".
I think a ForwardingRule is what I need, but it doesn't accept the BackendBucket (see code and output below).
Any suggestions on how to move forward?
import pulumi
import pulumi_gcp as gcp

####### WEBSITE ##########
web_bucket = gcp.storage.Bucket('web',
    project="myproj",
    cors=[gcp.storage.BucketCorArgs(
        max_age_seconds=3600,
        methods=[
            "GET",
        ],
        origins=["https://myproj.com", "https://sandbox.myproj.com"],
        response_headers=["*"],
    )],
    force_destroy=True,
    location="US",
    uniform_bucket_level_access=True,
    website=gcp.storage.BucketWebsiteArgs(
        main_page_suffix="index.html",
        not_found_page="404.html",
    ),
)
pulumi.export('web bucket', web_bucket.url)
ssl_certificate = gcp.compute.SSLCertificate("SSLCertificate",
    project="myproj",
    name_prefix="certificate-",
    private_key=open("ssl/private.key").read(),
    certificate=open("ssl/certificate.crt").read(),
)
# Legacy HTTP health check (note: nothing below references it;
# backend buckets don't take health checks)
http_health_check = gcp.compute.HttpHealthCheck("httphealthcheck",
    project="myproj",
    request_path="/",
    check_interval_sec=1,
    timeout_sec=1,
)
# Backend Bucket Service
web_backend = gcp.compute.BackendBucket("web-backend",
    project="myproj",
    description="Serves website",
    bucket_name=web_bucket.name,
    enable_cdn=True,
)
# LB backend host path and rules
url_map = gcp.compute.URLMap("urlmap",
    project="myproj",
    description="URL mapping",
    default_service=web_backend.id,
    host_rules=[gcp.compute.URLMapHostRuleArgs(
        hosts=["myproj.io"],
        path_matcher="allpaths",
    )],
    path_matchers=[gcp.compute.URLMapPathMatcherArgs(
        name="allpaths",
        default_service=web_backend.id,
        path_rules=[gcp.compute.URLMapPathMatcherPathRuleArgs(
            paths=["/*"],
            service=web_backend.id,
        )],
    )],
)
# Route to backend (bucket backend)
target_https_proxy = gcp.compute.TargetHttpsProxy("targethttpsproxy",
    project="myproj",
    url_map=url_map.id,
    ssl_certificates=[ssl_certificate.id],
)
# Forwarding rule for External Network Load Balancing using Backend Services
web_forward = gcp.compute.ForwardingRule("webforward",
    project="myproj",
    region="us-central1",
    port_range="80",
    backend_service=web_backend.id,  # this doesn't work
)
Diagnostics:
  gcp:compute:ForwardingRule (default):
    error: 1 error occurred:
    * Error creating ForwardingRule: googleapi: Error 400: Invalid value for field 'resource.backendService': 'https://compute.googleapis.com/compute/beta/projects/myproj/global/backendBuckets/web-backend-576fa1b'. Unexpected resource collection 'backendBuckets'., invalid
I was using the wrong forwarding rule class: because this LB setup is a global HTTPS load balancer, a regional ForwardingRule was the wrong resource.
# Forwarding rule for the global external HTTPS load balancer
web_forward = gcp.compute.GlobalForwardingRule("webforward",
    project="myproj",
    port_range="443",
    target=target_https_proxy.self_link,
)
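One addition worth making: without an explicit ip_address, GCP hands the rule an ephemeral IP. Here is a minimal sketch (the GlobalAddress resource and its names are my addition, not from the original post) that reserves a static global IP and attaches it to the same rule:
# Reserve a static global IP so the LB frontend keeps its address.
web_ip = gcp.compute.GlobalAddress("webip", project="myproj")

web_forward = gcp.compute.GlobalForwardingRule("webforward",
    project="myproj",
    ip_address=web_ip.address,
    port_range="443",
    target=target_https_proxy.self_link,
)
pulumi.export("web ip", web_ip.address)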
Related
I'm trying to get an AWS Lightsail Debian server to automatically renew certificates with certbot. My DNS is with Namecheap.
I'm following the steps at https://blog.bryanroessler.com/2019-02-09-automatic-certbot-namecheap-acme-dns/, but I keep getting a no-permission error.
I run:
sudo certbot certonly -d "*.example.com" --agree-tos --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --preferred-challenges dns --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --debug-challenges
I see:
Failed authorization procedure. example.com (dns-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: No TXT record found at _acme-challenge.example.com
It says I need to open port 53. I followed Amazon's Lightsail instructions, but neither iptables nor ufw seems to be installed, and when I nmap my machine I don't see port 53 open. I even installed ufw for lack of a better idea, to no avail.
My /etc/acme-dns/config.cfg is as follows:
#/etc/acme-dns/config.cfg
[general]
# DNS interface
listen = ":53"
protocol = "udp"
# domain name to serve the requests off of
domain = "acme.example.com"
# zone name server
nsname = "ns1.acme.example.com"
# admin email address, where @ is substituted with .
nsadmin = "example.example.com"
# predefined records served in addition to the TXT
records = [
    "acme.example.com. A <public ip>",
    "ns1.acme.example.com. A <public ip>",
    "acme.example.com. NS ns1.acme.example.com.",
]
debug = false
[database]
engine = "sqlite3"
connection = "/var/lib/acme-dns/acme-dns.db"
[api]
api_domain = ""
ip = "127.0.0.1"
disable_registration = false
autocert_port = "80"
port = "8082"
tls = "none"
corsorigins = [
    "*"
]
use_header = false
header_name = "X-Forwarded-For"
[logconfig]
loglevel = "debug"
logtype = "stdout"
logformat = "text"
For the listen value, I also tried 127.0.0.1:53 and :53
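Before re-running certbot, it may be worth confirming from another machine that acme-dns actually answers on UDP port 53; here is a small diagnostic sketch, assuming the dnspython package (the IP and names are placeholders for yours):
import dns.resolver

ACME_DNS_IP = "203.0.113.10"  # placeholder: your Lightsail public IP

# 1) certbot's dns-01 check follows the CNAME delegation; does it exist?
try:
    answer = dns.resolver.resolve("_acme-challenge.example.com", "CNAME")
    print("CNAME:", answer[0].target)
except Exception as exc:
    print("No CNAME delegation found:", exc)

# 2) Does acme-dns itself answer on port 53? Query it directly.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [ACME_DNS_IP]
try:
    answer = resolver.resolve("acme.example.com", "A")
    print("acme-dns answered:", answer[0])
except Exception as exc:
    print("acme-dns not reachable on 53/udp:", exc)
If the second query times out, the problem is likely the Lightsail firewall or the listen address rather than certbot itself.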
The settings portion of /etc/letsencrypt/acme-dns-auth.py:
# URL to acme-dns instance
ACMEDNS_URL = "http://127.0.0.1:8082"
# Path for acme-dns credential storage
STORAGE_PATH = "/etc/letsencrypt/acmedns.json"
# Whitelist for address ranges to allow the updates from
# Example: ALLOW_FROM = ["192.168.10.0/24", "::1/128"]
ALLOW_FROM = []
# Force re-registration. Overwrites the already existing acme-dns accounts.
FORCE_REGISTER = False
Thanks for any help you can provide.
If you don't want to maintain your own acme-dns server, I built and use this script to automatically renew Namecheap wildcard certs with certbot. I hope it helps:
https://github.com/scribe777/letsencrypt-namecheap-dns-auth
I have an ECS container which exposes two endpoints on two different ports.
I configured a network load balancer in front of it with two listeners, each with its own target group.
The AWS CDK code for my stack is below (note: I changed the construct names in this example).
from aws_cdk import Duration, Stack
from aws_cdk.aws_certificatemanager import Certificate
from aws_cdk.aws_ec2 import Peer, Port, SecurityGroup, SubnetSelection, Vpc
from aws_cdk.aws_ecr import Repository
from aws_cdk.aws_ecs import (
    Cluster, ContainerDefinition, ContainerImage,
    FargateService, FargateTaskDefinition, PortMapping,
)
from aws_cdk.aws_elasticloadbalancingv2 import (
    HealthCheck, ListenerCertificate, NetworkListener,
    NetworkLoadBalancer, Protocol, SslPolicy,
)
from constructs import Construct

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, certificate: Certificate,
                 vpc: Vpc, repository: Repository, subnets: SubnetSelection, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        cluster: Cluster = Cluster(self, "MyCluster", vpc=vpc, container_insights=True)
        image: ContainerImage = ContainerImage.from_ecr_repository(repository=repository, tag="latest")
        task_definition: FargateTaskDefinition = FargateTaskDefinition(
            self, "MyTaskDefinition", cpu=512, memory_limit_mib=1024,
        )
        container: ContainerDefinition = task_definition.add_container(
            "MyContainer", image=image, environment={}
        )
        # As you can see, here I add two port mappings on my container
        container.add_port_mappings(PortMapping(container_port=9876, host_port=9876))
        container.add_port_mappings(PortMapping(container_port=8000, host_port=8000))
        load_balancer: NetworkLoadBalancer = NetworkLoadBalancer(
            self, "MyNetworkLoadBalancer",
            load_balancer_name="my-nlb",
            vpc=vpc,
            vpc_subnets=subnets,
            internet_facing=False
        )
        security_group: SecurityGroup = SecurityGroup(
            self, "MyFargateServiceSecurityGroup",
            vpc=vpc,
            allow_all_outbound=True,
            description="My security group"
        )
        security_group.add_ingress_rule(
            Peer.any_ipv4(), Port.tcp(9876), "Allow a connection on port 9876 from anywhere"
        )
        security_group.add_ingress_rule(
            Peer.any_ipv4(), Port.tcp(8000), "Allow a connection on port 8000 from anywhere"
        )
        service: FargateService = FargateService(
            self, "MyFargateService",
            cluster=cluster,
            task_definition=task_definition,
            desired_count=1,
            health_check_grace_period=Duration.seconds(30),
            vpc_subnets=subnets,
            security_groups=[security_group]
        )
        # Listener 1 is open to incoming connections on port 9876
        listener_9876: NetworkListener = load_balancer.add_listener(
            "My9876Listener",
            port=9876,
            protocol=Protocol.TLS,
            certificates=[ListenerCertificate(certificate.certificate_arn)],
            ssl_policy=SslPolicy.TLS12_EXT
        )
        # Incoming connections on 9876 are redirected to the container on 9876
        # A health check is done on 8000/health
        listener_9876.add_targets(
            "My9876TargetGroup", targets=[service], port=9876, protocol=Protocol.TCP,
            health_check=HealthCheck(port="8000", protocol=Protocol.HTTP, enabled=True, path="/health")
        )
        # Listener 2 is open to incoming connections on port 443
        listener_443: NetworkListener = load_balancer.add_listener(
            "My443Listener",
            port=443,
            protocol=Protocol.TLS,
            certificates=[ListenerCertificate(certificate.certificate_arn)],
            ssl_policy=SslPolicy.TLS12_EXT
        )
        # Incoming connections on 443 are redirected to the container on 8000
        # A health check is done on 8000/health
        listener_443.add_targets(
            "My443TargetGroup", targets=[service], port=8000, protocol=Protocol.TCP,
            health_check=HealthCheck(port="8000", protocol=Protocol.HTTP, enabled=True, path="/health")
        )
Now I deploy this stack successfully, but the result is not what I expected:
two target groups directing traffic to my container, but both on port 9876.
I read in the documentation that it is possible to have a load balancer direct traffic to different ports via different target groups.
Am I doing something wrong, or does AWS CDK not support this?
I double-checked the synthesized CloudFormation template: it properly generates two target groups, one with port 9876 and one with port 8000.
Hi, you need to create a target from the service, then add it as a target to the listener:
const target = service.loadBalancerTarget({
  containerName: 'MyContainer',
  containerPort: 8000
});
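In the Python CDK used in the question, the equivalent is FargateService.load_balancer_target(): when you pass the bare service, CDK falls back to the container's first port mapping (9876 here), which is why both target groups ended up on that port. A sketch reusing the question's names:
# Build an explicit per-port target instead of passing the service itself.
target_8000 = service.load_balancer_target(
    container_name="MyContainer", container_port=8000
)

listener_443.add_targets(
    "My443TargetGroup", targets=[target_8000], port=8000, protocol=Protocol.TCP,
    health_check=HealthCheck(port="8000", protocol=Protocol.HTTP, enabled=True, path="/health")
)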
Is it possible to create AWS ALBs in bulk using a PowerShell script?
If someone could provide a PowerShell script template, that would be great.
Absolutely: you can install the AWS Tools for PowerShell. Check the link below; there are examples there.
https://aws.amazon.com/powershell/
# Create HTTP listener
$HTTPListener = New-Object -TypeName 'Amazon.ElasticLoadBalancing.Model.Listener'
$HTTPListener.Protocol = 'http'
$HTTPListener.InstancePort = 80
$HTTPListener.LoadBalancerPort = 80

# Create HTTPS listener
$HTTPSListener = New-Object -TypeName 'Amazon.ElasticLoadBalancing.Model.Listener'
$HTTPSListener.Protocol = 'https'
$HTTPSListener.InstancePort = 443
$HTTPSListener.LoadBalancerPort = 443
$HTTPSListener.SSLCertificateId = 'YourSSL'

# Create load balancer
New-ELBLoadBalancer -LoadBalancerName 'YourLoadBalancerName' `
    -Listeners @($HTTPListener, $HTTPSListener) -SecurityGroups @('SecurityGroupId') `
    -Subnets @('subnetId1', 'subnetId2') -Scheme 'internet-facing'

# Associate instances with the load balancer
Register-ELBInstanceWithLoadBalancer -LoadBalancerName 'YourLoadBalancerName' `
    -Instances @('instance1ID', 'instance2ID')

# Create an application cookie stickiness policy
New-ELBAppCookieStickinessPolicy -LoadBalancerName 'YourLoadBalancerName' `
    -PolicyName 'SessionName' -CookieName 'CookieName'

# Attach the stickiness policy to the port 80 listener
Set-ELBLoadBalancerPolicyOfListener -LoadBalancerName 'YourLoadBalancerName' `
    -LoadBalancerPort 80 -PolicyNames 'SessionName'
This script is just for one ELB. How do I transform it to create ELBs in bulk?
Also, where do I provide AWS account credentials?
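Two notes on that follow-up. The cmdlets above drive the classic ELB API; for Application Load Balancers you would loop over the ELBv2 cmdlets (e.g. New-ELB2LoadBalancer) instead, and credentials normally come from Set-AWSCredential, a stored profile, or an instance role rather than the script itself. If PowerShell isn't a hard requirement, bulk creation is also a short loop in Python with boto3; a sketch with placeholder subnet and security-group IDs:
import boto3

elbv2 = boto3.client("elbv2")  # credentials resolved from env vars, ~/.aws config, or an instance role

subnets = ["subnet-aaaa1111", "subnet-bbbb2222"]   # placeholders
security_groups = ["sg-0123456789abcdef0"]         # placeholder

# Create several ALBs in one pass; names must be unique per region.
for name in ["app-lb-1", "app-lb-2", "app-lb-3"]:
    lb = elbv2.create_load_balancer(
        Name=name,
        Subnets=subnets,
        SecurityGroups=security_groups,
        Scheme="internet-facing",
        Type="application",
    )["LoadBalancers"][0]
    print(name, "->", lb["DNSName"])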
Given the Datomic CloudFormation template (described here and here), I can deploy a Datomic instance in AWS. I can also use Terraform to automate this.
Using Terraform, how do we put a load balancer in front of the instances in the CloudFormation template?
Using Terraform, how do we put a Route53 domain name in front of the Datomic instance (or load balancer)?
The Datomic CloudFormation template looks like this:
cf.json
{"Resources":
{"LaunchGroup":
{"Type":"AWS::AutoScaling::AutoScalingGroup",
"Properties":
{"MinSize":{"Ref":"GroupSize"},
"Tags":
[{"Key":"Name",
"Value":{"Ref":"AWS::StackName"},
"PropagateAtLaunch":"true"}],
"MaxSize":{"Ref":"GroupSize"},
"AvailabilityZones":{"Fn::GetAZs":""},
"LaunchConfigurationName":{"Ref":"LaunchConfig"}}},
"LaunchConfig":
{"Type":"AWS::AutoScaling::LaunchConfiguration",
"Properties":
{"ImageId":
{"Fn::FindInMap":
["AWSRegionArch2AMI", {"Ref":"AWS::Region"},
{"Fn::FindInMap":
["AWSInstanceType2Arch", {"Ref":"InstanceType"}, "Arch"]}]},
"UserData":
{"Fn::Base64":
{"Fn::Join":
["\n",
["exec > >(tee \/var\/log\/user-data.log|logger -t user-data -s 2>\/dev\/console) 2>&1",
{"Fn::Join":["=", ["export XMX", {"Ref":"Xmx"}]]},
{"Fn::Join":["=", ["export JAVA_OPTS", {"Ref":"JavaOpts"}]]},
{"Fn::Join":
["=",
["export DATOMIC_DEPLOY_BUCKET",
{"Ref":"DatomicDeployBucket"}]]},
{"Fn::Join":
["=", ["export DATOMIC_VERSION", {"Ref":"DatomicVersion"}]]},
"cd \/datomic", "cat <<EOF >aws.properties",
"host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/local-ipv4`",
"alt-host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/public-ipv4`",
"aws-dynamodb-region=us-east-1\naws-transactor-role=datomic-aws-transactor-10\naws-peer-role=datomic-aws-peer-10\nprotocol=ddb\nmemory-index-max=256m\nport=4334\nmemory-index-threshold=32m\nobject-cache-max=128m\nlicense-key=\naws-dynamodb-table=your-system-name",
"EOF", "chmod 744 aws.properties",
"AWS_ACCESS_KEY_ID=\"${DATOMIC_READ_DEPLOY_ACCESS_KEY_ID}\" AWS_SECRET_ACCESS_KEY=\"${DATOMIC_READ_DEPLOY_AWS_SECRET_KEY}\" aws s3 cp \"s3:\/\/${DATOMIC_DEPLOY_BUCKET}\/${DATOMIC_VERSION}\/startup.sh\" startup.sh",
"chmod 500 startup.sh", ".\/startup.sh"]]}},
"InstanceType":{"Ref":"InstanceType"},
"InstanceMonitoring":{"Ref":"InstanceMonitoring"},
"SecurityGroups":{"Ref":"SecurityGroups"},
"IamInstanceProfile":{"Ref":"InstanceProfile"},
"BlockDeviceMappings":
[{"DeviceName":"\/dev\/sdb", "VirtualName":"ephemeral0"}]}}},
"Mappings":
{"AWSInstanceType2Arch":
{"m3.large":{"Arch":"64h"},
"c4.8xlarge":{"Arch":"64h"},
"t2.2xlarge":{"Arch":"64h"},
"c3.large":{"Arch":"64h"},
"hs1.8xlarge":{"Arch":"64h"},
"i2.xlarge":{"Arch":"64h"},
"r4.4xlarge":{"Arch":"64h"},
"m1.small":{"Arch":"64p"},
"m4.large":{"Arch":"64h"},
"m4.xlarge":{"Arch":"64h"},
"c3.8xlarge":{"Arch":"64h"},
"m1.xlarge":{"Arch":"64p"},
"cr1.8xlarge":{"Arch":"64h"},
"m4.10xlarge":{"Arch":"64h"},
"i3.8xlarge":{"Arch":"64h"},
"m3.2xlarge":{"Arch":"64h"},
"r4.large":{"Arch":"64h"},
"c4.xlarge":{"Arch":"64h"},
"t2.medium":{"Arch":"64h"},
"t2.xlarge":{"Arch":"64h"},
"c4.large":{"Arch":"64h"},
"c3.2xlarge":{"Arch":"64h"},
"m4.2xlarge":{"Arch":"64h"},
"i3.2xlarge":{"Arch":"64h"},
"m2.2xlarge":{"Arch":"64p"},
"c4.2xlarge":{"Arch":"64h"},
"cc2.8xlarge":{"Arch":"64h"},
"hi1.4xlarge":{"Arch":"64p"},
"m4.4xlarge":{"Arch":"64h"},
"i3.16xlarge":{"Arch":"64h"},
"r3.4xlarge":{"Arch":"64h"},
"m1.large":{"Arch":"64p"},
"m2.4xlarge":{"Arch":"64p"},
"c3.4xlarge":{"Arch":"64h"},
"r3.large":{"Arch":"64h"},
"c4.4xlarge":{"Arch":"64h"},
"r3.xlarge":{"Arch":"64h"},
"m2.xlarge":{"Arch":"64p"},
"r4.16xlarge":{"Arch":"64h"},
"t2.large":{"Arch":"64h"},
"m3.xlarge":{"Arch":"64h"},
"i2.4xlarge":{"Arch":"64h"},
"r4.8xlarge":{"Arch":"64h"},
"i3.large":{"Arch":"64h"},
"r3.8xlarge":{"Arch":"64h"},
"c1.medium":{"Arch":"64p"},
"r4.2xlarge":{"Arch":"64h"},
"i2.8xlarge":{"Arch":"64h"},
"m3.medium":{"Arch":"64h"},
"r3.2xlarge":{"Arch":"64h"},
"m1.medium":{"Arch":"64p"},
"i3.4xlarge":{"Arch":"64h"},
"m4.16xlarge":{"Arch":"64h"},
"i3.xlarge":{"Arch":"64h"},
"r4.xlarge":{"Arch":"64h"},
"c1.xlarge":{"Arch":"64p"},
"t1.micro":{"Arch":"64p"},
"c3.xlarge":{"Arch":"64h"},
"i2.2xlarge":{"Arch":"64h"},
"t2.small":{"Arch":"64h"}},
"AWSRegionArch2AMI":
{"ap-northeast-1":{"64p":"ami-eb494d8c", "64h":"ami-81f7cde6"},
"ap-northeast-2":{"64p":"ami-6eb66a00", "64h":"ami-f594489b"},
"ca-central-1":{"64p":"ami-204bf744", "64h":"ami-5e5be73a"},
"us-east-2":{"64p":"ami-5b42643e", "64h":"ami-896c4aec"},
"eu-west-2":{"64p":"ami-e52d3a81", "64h":"ami-55091e31"},
"us-west-1":{"64p":"ami-97cbebf7", "64h":"ami-442a0a24"},
"ap-southeast-1":{"64p":"ami-db1492b8", "64h":"ami-3e90165d"},
"us-west-2":{"64p":"ami-daa5c6ba", "64h":"ami-cb5030ab"},
"eu-central-1":{"64p":"ami-f3f02b9c", "64h":"ami-d564bcba"},
"us-east-1":{"64p":"ami-7f5f1e69", "64h":"ami-da5110cc"},
"eu-west-1":{"64p":"ami-66001700", "64h":"ami-77465211"},
"ap-southeast-2":{"64p":"ami-32cbdf51", "64h":"ami-66647005"},
"ap-south-1":{"64p":"ami-82126eed", "64h":"ami-723c401d"},
"sa-east-1":{"64p":"ami-afd7b9c3", "64h":"ami-ab9af4c7"}}},
"Parameters":
{"InstanceType":
{"Description":"Type of EC2 instance to launch",
"Type":"String",
"Default":"c3.large"},
"InstanceProfile":
{"Description":"Preexisting IAM role \/ instance profile",
"Type":"String",
"Default":"datomic-aws-transactor-10"},
"Xmx":
{"Description":"Xmx setting for the JVM",
"Type":"String",
"AllowedPattern":"\\d+[GgMm]",
"Default":"2625m"},
"GroupSize":
{"Description":"Size of machine group",
"Type":"String",
"Default":"1"},
"InstanceMonitoring":
{"Description":"Detailed monitoring for store instances?",
"Type":"String",
"Default":"true"},
"JavaOpts":
{"Description":"Options passed to Java launcher",
"Type":"String",
"Default":""},
"SecurityGroups":
{"Description":"Preexisting security groups.",
"Type":"CommaDelimitedList",
"Default":"datomic"},
"DatomicDeployBucket":
{"Type":"String",
"Default":"deploy-a0dbc565-faf2-4760-9b7e-29a8e45f428e"},
"DatomicVersion":{"Type":"String", "Default":"0.9.5561.50"}},
"Description":"Datomic Transactor Template"}
samples/cf-template.properties
#################################################################
# AWS instance and group settings
#################################################################
# required
# AWS instance type. See http://aws.amazon.com/ec2/instance-types/ for
# a list of legal instance types.
aws-instance-type=c3.large
# required, see http://docs.amazonwebservices.com/general/latest/gr/rande.html#ddb_region
aws-region=us-east-1
# required
# Enable detailed monitoring of AWS instances.
aws-instance-monitoring=true
# required
# Set group size >1 to create a standby pool for High Availability.
aws-autoscaling-group-size=1
# required, default = 70% of AWS instance RAM
# Passed to java launcher via -Xmx
java-xmx=
#################################################################
# Java VM options
#
# If you set the java-opts property, it will entirely replace the
# value used by bin/transactor, which you should consult as a
# starting point if you are configuring GC.
#
# Note that the single-quoting is necessary due to the whitespace
# between options.
#################################################################
# java-opts='-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly'
#################################################################
# security settings
#
# You must specify at least one of aws-ingress-groups or
# aws-ingress-cidrs to allow peers to connect!
#################################################################
# required
# The transactor needs to run in a security group that opens the
# transactor port to legal peers. If you specify a security group,
# `bin/transactor ensure-cf ...` will ensure that security group
# allows ingress on the transactor port.
aws-security-group=datomic
# Comma-delimited list of security groups. Security group syntax:
# group-name or aws-account-id:group-name
aws-ingress-groups=datomic
# Comma-delimited list of CIDRS.
# aws-ingress-cidrs=0.0.0.0/0
#################################################################
# datomic deployment settings
#################################################################
# required, default = VERSION number of Datomic you deploy from
# Which Datomic version to run.
datomic-version=
# required
# download Datomic from this bucket on startup. You typically will not change this.
datomic-deploy-s3-bucket=some-value
I wouldn't recommend mixing CloudFormation with Terraform unless you can't easily avoid it, because it makes a lot of things painful. Normally I'd only recommend it for the rare occasions where CloudFormation covers a resource that Terraform doesn't.
If you do need to do this, you should be in luck, because your CloudFormation template tags the autoscaling group containing your instance(s); you can use that tag to link a load balancer to the autoscaling group and have the instances attach themselves to the load balancer as they are created (and detach as they are deleted).
Unfortunately the CloudFormation template doesn't simply output the autoscaling group name, so you'll probably need to do this in two separate terraform apply actions (probably keeping the configuration in separate folders).
Assuming something like this for your CloudFormation stack:
resource "aws_cloudformation_stack" "datomic" {
  name = "datomic-stack"
  ...
}
Then a minimal example looks something like this:
data "aws_autoscaling_groups" "datomic" {
  filter {
    name   = "key"
    values = ["AWS::StackName"]
  }

  filter {
    name   = "value"
    values = ["datomic-stack"]
  }
}

resource "aws_lb_target_group" "datomic" {
  name     = "datomic-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"
}

resource "aws_lb" "datomic" {
  name            = "datomic-lb"
  internal        = false
  security_groups = ["${var.security_group_id}"]
  subnets         = ["${var.subnet_id}"]
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = "${data.aws_autoscaling_groups.datomic.names[0]}"
  alb_target_group_arn   = "${aws_lb_target_group.datomic.arn}"
}

resource "aws_lb_listener" "datomic" {
  load_balancer_arn = "${aws_lb.datomic.arn}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = "${aws_lb_target_group.datomic.arn}"
    type             = "forward"
  }
}
The above config will find the autoscaling group created by the CloudFormation template and attach it to an application load balancer that listens for HTTP traffic and forwards it to the Datomic instances.
From here it's trivial to add a Route53 record pointing at the load balancer, but because your instances are in an autoscaling group you can't easily add Route53 records for the instances themselves (and probably shouldn't need to).
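For reference, that last step can also be scripted outside Terraform (in Terraform it's an aws_route53_record with an alias block); a hedged boto3 sketch, where the hosted zone ID and domain are placeholders:
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Look up the load balancer created above for its DNS name and zone ID.
lb = elbv2.describe_load_balancers(Names=["datomic-lb"])["LoadBalancers"][0]

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # placeholder: your Route53 hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "datomic.example.com",  # placeholder domain
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": lb["CanonicalHostedZoneId"],
                "DNSName": lb["DNSName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)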
My application is hosted on Amazon Web Services, and I'm starting to script the creation of all of my app's infrastructure (VPC, security group, Beanstalk, etc.). I did not find the proper way to create an RDS Aurora cluster, and I failed to reproduce the RDS wizard (which helps you create the DB instances and the cluster) in Python with Boto3. Maybe I lack knowledge of infrastructure and networking, but I think creating an Aurora cluster should be within my reach.
So here is my question:
Let's say I have a VPC id, a security group id, and some database info (user, password...). What is the minimum set of API calls I have to make to create a cluster and make it usable by my application? The procedure must end with a cluster reader/writer endpoint and a reader-only endpoint.
Here is how I create an Aurora MySQL cluster in Python/Boto3. You have to implement some missing helper functions yourself.
import boto3
import botocore

# Note: psa.printf and psa.tagsKeyValueToAWStags are the author's own helpers
# (printf-style logging and dict-to-Tags conversion); implement them yourself.

def create_aurora(
    instance_identifier,    # used for instance name and cluster name
    db_username,
    db_password,
    db_name,
    db_port,
    vpc_id,
    vpc_sg,                 # must be an array
    dbsubnetgroup_name,
    public_access=False,
    AZ=None,
    instance_type="db.t2.small",
    multi_az=True,
    nb_instance=1,
    extratags=[]
):
    rds = boto3.client('rds')
    # Assumes a DB subnet group associated with the subnets of your cluster's
    # VPC already exists; AWS will find it automatically.
    #
    # Search for an existing cluster
    try:
        db_cluster = rds.describe_db_clusters(
            DBClusterIdentifier=instance_identifier
        )['DBClusters']
        db_cluster = db_cluster[0]
    except botocore.exceptions.ClientError:
        psa.printf("Creating empty cluster\r\n")
        cluster_kwargs = {}
        if AZ is not None:
            cluster_kwargs["AvailabilityZones"] = AZ
        res = rds.create_db_cluster(
            DBClusterIdentifier=instance_identifier,
            Engine="aurora",
            MasterUsername=db_username,
            MasterUserPassword=db_password,
            DatabaseName=db_name,
            Port=db_port,
            DBSubnetGroupName=dbsubnetgroup_name,
            VpcSecurityGroupIds=vpc_sg,
            **cluster_kwargs
        )
        db_cluster = res['DBCluster']
    cluster_name = db_cluster['DBClusterIdentifier']
    instance_identifier = db_cluster['DBClusterIdentifier']
    psa.printf("Cluster identifier : %s, status : %s, members : %d\n",
               instance_identifier, db_cluster['Status'], len(db_cluster['DBClusterMembers']))
    if db_cluster['Status'] == 'deleting':
        psa.printf("  Please wait for the cluster to be deleted and try again.\n")
        return None
    psa.printf("  Writer Endpoint : %s\n", db_cluster['Endpoint'])
    psa.printf("  Reader Endpoint : %s\n", db_cluster['ReaderEndpoint'])
    # Now create the instances:
    # loop over the requested number of instances, balancing them across AZs
    for i in range(1, nb_instance + 1):
        instance_kwargs = {}
        if AZ is not None:
            the_AZ = AZ[(i - 1) % len(AZ)]
            instance_kwargs["AvailabilityZone"] = the_AZ
            dbinstance_id = instance_identifier + "-" + str(i) + "-" + the_AZ
        else:
            the_AZ = None
            dbinstance_id = instance_identifier + "-" + str(i)
        psa.printf("Creating instance %d named '%s' in AZ %s\n", i, dbinstance_id, the_AZ)
        try:
            res = rds.create_db_instance(
                DBInstanceIdentifier=dbinstance_id,
                DBInstanceClass=instance_type,
                Engine='aurora',
                PubliclyAccessible=public_access,
                DBSubnetGroupName=dbsubnetgroup_name,
                DBClusterIdentifier=instance_identifier,
                Tags=psa.tagsKeyValueToAWStags(extratags),
                **instance_kwargs
            )['DBInstance']
            psa.printf("  DbiResourceId=%s\n", res['DbiResourceId'])
        except botocore.exceptions.ClientError:
            psa.printf("  Instance seems to exist already.\n")
            res = rds.describe_db_instances(DBInstanceIdentifier=dbinstance_id)['DBInstances']
            psa.printf("  Status is %s\n", res[0]['DBInstanceStatus'])
    return db_cluster
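For example, a call like this (the identifiers are hypothetical; the DB subnet group must already exist and the psa helpers must be defined) stands up a two-instance cluster and returns the description with the writer/reader endpoints the question asks for:
cluster = create_aurora(
    instance_identifier="myapp-aurora",
    db_username="admin",
    db_password="change-me",
    db_name="myapp",
    db_port=3306,
    vpc_id="vpc-0abc1234",
    vpc_sg=["sg-0abc1234"],
    dbsubnetgroup_name="myapp-db-subnets",
    nb_instance=2,
)
print(cluster["Endpoint"], cluster["ReaderEndpoint"])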
Yeah, you are on the right track. Here is the boto3 documentation for creating an Aurora RDS cluster.
Further, to address the bigger-picture problem (i.e. managing your entire infrastructure as code), you should look at options like Terraform.
Check out their Git repo (Terraform Git Repo); you can accomplish the same task of creating the Aurora cluster using Terraform with this template.