I'm using Terraform to spin up my infrastructure on AWS and keep state in the .tfstate file. My application has to VPN into other networks, where the admin of the other network has defined parameters on the IPsec ESP connection that the VPN connection on my end must adhere to.
Using Terraform I create a VPN Gateway and a Customer Gateway with the remote network's parameters to the extent that's possible. Then I create a VPN connection and the appropriate route. Here's my VPN code in Terraform:
resource "aws_vpn_gateway" "vpn_gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags {
    Name      = "default"
    Terraform = true
  }
}

resource "aws_customer_gateway" "customer_gw" {
  bgp_asn    = 65000
  ip_address = "172.0.0.1"
  type       = "ipsec.1"
}

resource "aws_vpn_connection" "default" {
  vpn_gateway_id      = "${aws_vpn_gateway.vpn_gw.id}"
  customer_gateway_id = "${aws_customer_gateway.customer_gw.id}"
  type                = "ipsec.1"
  static_routes_only  = true
}

resource "aws_vpn_connection_route" "office" {
  destination_cidr_block = "192.168.10.0/24"
  vpn_connection_id      = "${aws_vpn_connection.default.id}"
}
I have to be able to set the following parameters on my VPN tunnel for phase 1 and phase 2 of the connection:
Phase 1
Authentication Method e.g. Pre-shared Secret
Encryption Scheme e.g. IKE
Diffie-Hellman Group e.g. Group 2
Encryption Algorithm e.g. AES-256
Hashing Algorithm e.g. SHA-1
Main or Aggressive Mode e.g. Main Mode
Lifetime (for renegotiation) e.g. 86400
Phase 2
Encapsulation (ESP or AH) e.g. ESP
Encryption Algorithm e.g. AES-256
Authentication Algorithm e.g. SHA-1
Perfect Forward Secrecy e.g. NO-PFS
Lifetime (for renegotiation) e.g. 3600
The docs on the VPN Customer Gateway show that you can't set that many parameters yourself: https://www.terraform.io/docs/providers/aws/r/customer_gateway.html
The Boto API also doesn't allow any additional parameters to be set.
Is there any way of setting these parameters (programmatically)?
Unfortunately, the answer to your question is no. This isn't a Terraform issue as such; it's a limitation of the service provided by AWS. You don't get that level of configuration with AWS's managed VPN solution.
To achieve what you are looking for, you'll need to spin up an EC2 instance and either
Roll your own using OpenSwan, VyOS, etc., e.g., https://aws.amazon.com/marketplace/pp/B00JK5UPF6 (just the AWS costs)
Use a VPN appliance from the AWS marketplace, e.g., https://aws.amazon.com/marketplace/pp/B00OCG4OAA (you'll need to pay a licensing fee to the vendor, as well as your AWS costs)
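As an illustration of the first option, here's a minimal Terraform sketch of an EC2-based appliance. The AMI ID, subnet, and route table references are placeholders, and the actual IPsec phase 1/phase 2 parameters would be configured inside the appliance itself rather than in Terraform:

```hcl
resource "aws_instance" "vpn_appliance" {
  ami               = "ami-00000000" # e.g. an OpenSwan or VyOS image
  instance_type     = "t2.medium"
  subnet_id         = "${aws_subnet.public.id}"
  source_dest_check = false # required so the instance can forward traffic
}

# Route traffic for the remote office network through the appliance
resource "aws_route" "office" {
  route_table_id         = "${aws_route_table.private.id}"
  destination_cidr_block = "192.168.10.0/24"
  instance_id            = "${aws_instance.vpn_appliance.id}"
}
```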
Related
AWS ECS allows for new services to automatically use Service Discovery, as mentioned in the documentation here:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-servicediscovery.html
If I understand correctly, there should always be a namespace and a service created in CloudMap, before a service instance can register itself into it. After registering, the service instance can then be discovered using DNS records, which are kept in Route 53, which is a global service. The namespace has its own private zone and applications from VPCs associated with this zone can query the records and discover the service needed, regardless of the region they are in.
However, if I understand correctly, CloudMap resources themselves are regional.
Let's consider the following scenario: there is a CloudMap namespace and a service X defined in region A. For redundancy reasons I would like instances of service X to run in region A, but also in region B. However, when configuring service discovery for ECS in region B, it is not possible to use a namespace from region A.
How, then, can CloudMap service discovery be used in a multi-region environment? Should corresponding namespaces be created in both regions?
Redundancy can be built in within a single region. I have not yet seen a regulator that expects more than what multiple Availability Zones in a single region offer, but if you still want to achieve what you are asking, you would need to set up some kind of VPC network peering: https://docs.aws.amazon.com/vpc/latest/peering/peering-scenarios.html#peering-scenarios-full
I don't have experience with how Cloud Map behaves in this context, though. Assuming DNS resolution is possible, it would presumably still work. But AWS services work best (cheaper, more stable, lower latency) when used within their own region, targeting the region-specific API: https://docs.aws.amazon.com/general/latest/gr/cloud_map.html
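If you do go multi-region, the "namespace per region" idea from the question can be sketched in Terraform with provider aliases. The region names, namespace name, and VPC references below are placeholders:

```hcl
provider "aws" {
  alias  = "region_a"
  region = "eu-west-1"
}

provider "aws" {
  alias  = "region_b"
  region = "us-east-1"
}

# One private DNS namespace per region; ECS service discovery in each
# region then registers services into its local namespace.
resource "aws_service_discovery_private_dns_namespace" "region_a" {
  provider = aws.region_a
  name     = "services.local"
  vpc      = aws_vpc.region_a.id
}

resource "aws_service_discovery_private_dns_namespace" "region_b" {
  provider = aws.region_b
  name     = "services.local"
  vpc      = aws_vpc.region_b.id
}
```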
Background: We have a VPC with an Internet Gateway attached.
I would like to get the InternetGatewayId of the VPC via aws-cdk
vpc := awsec2.Vpc_FromLookup(stack, jsii.String(viper.GetString(`vpc.id`)), &awsec2.VpcLookupOptions{
	VpcId: jsii.String(viper.GetString(`vpc.id`)),
}) // Here it returns awsec2.IVpc
But according to the code, only awsec2.Vpc has an InternetGatewayId() method. How can I convert awsec2.IVpc to awsec2.Vpc?
The IVpc value returned from Vpc_FromLookup is a CDK convenience that caches a limited set of VPC attributes at synth time. Unfortunately, the Internet Gateway ID isn't one of them:
Currently you're unable to get the [Internet Gateway] ID for imported VPCs. To do that you'd have to specifically look up the Internet Gateway by name, which would require knowing the name beforehand.
A simple, deterministic workaround is to manually store the ID as an SSM Parameter Store parameter. At synth time, StringParameter_ValueFromLookup looks up the IGW ID and caches the value as context in cdk.context.json:
igwID := awsssm.StringParameter_ValueFromLookup(stack, jsii.String("/my-params/vpc/igw-id"))
A more advanced CDK-only solution is to look up the ID with a deploy-time CustomResource, which "can do arbitrary lookups or modifications during a CloudFormation deployment" (typically by making SDK calls from a Lambda). Note that this is not necessarily a better solution, because it introduces non-determinism into the deployment.
I am trying to set up Serverless VPC access
Serverless VPC Access enables you to connect from your Cloud Functions directly to Compute Engine VM instances, Memorystore instances, Cloud SQL instances,
Sounds great. But the documentation is not super friendly to a beginner. Step 2 is to create a connector, about which I have a couple of questions:
In the Network field, select the VPC network to connect to.
My dropdown here contains only "Default". Is this normal? What should I expect to see here?
In the IP range field, enter an unused CIDR /28 IP range. Addresses in this range are used as source addresses for traffic sent through the connector. This IP range must not overlap with any existing IP address reservations in your VPC network.
I don't know what to do here. Using the information in the linked document, I first tried entering an IP range from the region I had selected, and then one from outside that region. Both resulted in connectors created with the error "Connector is in a bad state, manual deletion is recommended".
The documentation continues with a couple of troubleshooting steps if the creation fails:
Specify an IP range that does not overlap with any existing IP address reservations in the VPC network.
I don't know what this means. My guess is that if I had other connectors, I should make sure the IP range for the new one doesn't overlap with theirs; but in any case, I have none.
Grant your project permission to use Compute Engine VM images from the project with ID serverless-vpc-access-images. See Setting image access constraints for information on how to update your organization policy accordingly.
This leads me to another document about updating my organization's "Image Policy". This one has me so out of my depth, I don't even think I should be here.
This has all started with just wanting to connect to a SQL Server instance from Firebase. Creating the VPC connector seems like a good step, but I've just fallen at every hurdle. Can a cloud-dweller please help me with a few of these points of confusion?
I think you've resolved the issue but I will write an answer to summarize all the steps for future reference.
1. Create a Serverless VPC Access
I think the best reference is to follow the steps in this doc. In step 7, it says the following:
In the IP range field, enter an unreserved CIDR /28 IP range.
The IP range you can use is, for example, 10.8.0.0/28 or even 10.64.0.0/28, on the condition that it is not in use by any other network. You can check which IPs are in use by going to VPC Network > VPC networks. In the Network field you will have the "default" option, so that's okay.
This can take some minutes, so in the meantime you can create your SQL Server/MySQL/PostgreSQL instance.
2. Creating a CloudSQL instance
Create your desired instance (MySQL/PostgreSQL/SQL Server); in your case it will be a SQL Server instance. Also check these steps to configure the Private IP for your instance at creation time, or, if you have already created an instance, check this. Take note of the Private IP, as you will use it later.
3. Create a Cloud function
Before creating your Cloud Function, you have to grant permission to the CF service account to use the VPC. Please follow these steps.
Then follow these steps to configure the connector of your function to use the VPC. In step 5 it says the following:
In the VPC connector field, enter the fully-qualified name of your connector in the following format:
projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
It is not necessary to enter your connector in this format; there is already a list where you can choose it. Finally, deploy your function.
I wrote a little function to test the connection. I would have preferred to use Python, but it needs more system dependencies than Node.js.
index.js:
var express = require('express');
var app = express();
var sql = require("mssql");

exports.helloWorld = (req, res) => {
  var config = {
    user: 'sqlserver',
    password: 'password',
    server: 'Your.SQL.Private.IP',
    database: 'dbname'
  };

  // connect to your database
  sql.connect(config, function (err) {
    if (err) console.log(err);

    // create a Request object
    var request = new sql.Request();

    // query the database and get the records
    request.query('select * from a_table', function (err, recordset) {
      if (err) console.log(err);

      // send records as the response
      res.send(recordset);
    });
  });
};
package.json:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "express": "4.17.1",
    "mssql": "6.0.1"
  }
}
And that's all! :D
It's important to mention that this procedure is more about connecting Cloud Functions to SQL Server, as there is already an easier way to connect CF to PostgreSQL and MySQL.
I discovered that there is a hard limit on how many IPs you can use for such connectors. You can increase the quota, or you can switch to another region.
The hard limit on IPs is imposed by quota on the free tier: https://console.cloud.google.com/iam-admin/quotas.
When not on the free tier, you can request a quota increase.
The goal is to visualise the relationships of resources within an AWS account (which may have multiple VPCs).
This would help daily operations. For example: finding the resources affected after modifying a security group.
Each resource has an ARN assigned in the AWS cloud.
Below are some example relationships among resources:
Route table has-many relationship with subnets
NACL has-many relationship with subnets
Availability zone has-many relationship with subnets
IAM resource has-many regions
has-many is something like a composition relation
A security group has an association relation with any resource in a VPC
A NACL has an association relation with subnets only
We also have VPC flow logs to find relationships
Using the AWS SDKs,
1)
For on-prem networks, we take an IP range and send ICMP requests to verify the existence of devices in that range, then send SNMP queries to classify each device (Windows/Linux/router/gateway, etc.).
How to find the list of resources allocated within an AWS account? How to classify resources?
2)
What parameters need to be queried from AWS resources (IAM, VPC, subnet, route table, NACL, IGW, etc.) to help create a relationship view of the resources within an AWS account?
You don't have to stitch your resources together by yourself in your app. You can use the Resource Groups Tagging API from AWS. Take a look at Resource Groups in your AWS console: there you can group things based on tags, and then tag the groups themselves. Querying with the boto3 Python library will give you a bunch of information. Read up on boto3, it's huge! Another thing that might be interesting for you is AWS Config: there you get compliance, configuration history, relationships between resources, and plenty of other things!
Also, check out AWS CloudWatch for health monitoring.
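The Resource Groups Tagging API mentioned above can be driven from boto3. A minimal sketch (the function name is hypothetical, and the client is passed in rather than created inside so the function is easy to test):

```python
def arns_by_tag(client, tag_key, tag_value):
    """Collect the ARNs of every resource carrying the given tag via the
    Resource Groups Tagging API (covers most AWS services).

    `client` is expected to be boto3.client("resourcegroupstaggingapi").
    """
    arns = []
    paginator = client.get_paginator("get_resources")
    for page in paginator.paginate(
        TagFilters=[{"Key": tag_key, "Values": [tag_value]}]
    ):
        arns.extend(m["ResourceARN"] for m in page["ResourceTagMappingList"])
    return arns
```

With credentials configured, you would call it as, e.g., arns_by_tag(boto3.client("resourcegroupstaggingapi"), "Terraform", "true") and feed the resulting ARNs into your relationship view.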
I am new to Terraform and AWS. I have a requirement to provision ElastiCache Redis with cluster mode disabled. I have gone through the documentation of the aws_elasticache_replication_group resource, and it specifies primary_endpoint_address as the address of the endpoint for the primary node in the replication group, if cluster mode is disabled.
And according to the aws docs:
For Redis (cluster mode disabled) clusters, use the Primary Endpoint for all write operations. Use the Reader Endpoint to evenly split incoming connections to the endpoint between all read replicas. Use the individual Node Endpoints for read operations (In the API/CLI these are referred to as Read Endpoints).
My question is: how can we get the reader endpoint address from aws_elasticache_replication_group?
It seems the Terraform provider doesn't support this field. I'll try to suggest a PR adding it.
If you have the Primary Endpoint address, you can derive the Reader Endpoint address from it by adding an -ro suffix ("ro" stands for read-only) to the first label of the hostname.
Primary Endpoint: xxxx.xxxx.xx.xxxx.xxxx.cache.amazonaws.com
Reader Endpoint: xxxx-ro.xxxx.xx.xxxx.xxxx.cache.amazonaws.com
Port is same for both.
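So the transformation can be done with a small helper. A sketch in Python (the split on the first dot is an assumption based on the endpoint layout shown above, and the function name is made up):

```python
def reader_endpoint(primary_endpoint: str) -> str:
    """Derive the ElastiCache reader endpoint from the primary endpoint
    by appending "-ro" to the first hostname label, e.g.
    mygroup.abc123.ng.0001.usw2.cache.amazonaws.com
    -> mygroup-ro.abc123.ng.0001.usw2.cache.amazonaws.com
    """
    first_label, rest = primary_endpoint.split(".", 1)
    return f"{first_label}-ro.{rest}"
```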