I've been trying to set up a Terraform module to create a private cluster, and I'm struggling with a strange situation.
When creating a cluster with a master authorized network, if I do it through the GCP console, I can create the private cluster just fine. But when I do it with Terraform, I get a strange error:
Invalid master authorized networks: network "<cidr>" is not a reserved network, which is required for private endpoints.
The interesting parts of the code are as follows:
....
master_authorized_networks_config {
  cidr_blocks {
    cidr_block = "<my-network-cidr>"
  }
}

private_cluster_config {
  enable_private_endpoint = true
  enable_private_nodes    = true
  master_ipv4_cidr_block  = "<cidr>"
}
....
Is there something I'm forgetting here?
According to the Google Cloud Platform documentation here, it should be possible to have both private and public endpoints, and the master_authorized_networks_config argument should list the networks that can reach either of those endpoints.
If setting the enable_private_endpoint argument to false means that the private endpoint is still created, but a public endpoint is created as well, then that is a horribly misnamed argument: enable_private_endpoint is effectively toggling the public endpoint off and on, not the private one. Apparently, specifying a private_cluster_config at all is what enables the private endpoint, and the flag controls the public endpoint, if the reported behavior is to be believed.
That is certainly the experience I had: specifying my local IP address in master_authorized_networks_config caused cluster creation to fail when enable_private_endpoint was true. When I set it to false, I got both endpoints and the config was not rejected.
I've had the same issue recently.
The solution I found is to set enable_private_endpoint = false.
In that case the private endpoint is created anyway, but you are allowed to add CIDRs with external addresses to the master authorized networks; a minimal sketch of that variant is shown below.
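This is only a hedged sketch, not the asker's full module: the resource name, region, and CIDR values are placeholders, and unrelated required settings are reduced to a minimum.

resource "google_container_cluster" "private" {
  name               = "example-private-cluster"   # placeholder name
  location           = "europe-west1"              # placeholder region
  initial_node_count = 1

  # Private clusters must be VPC-native
  ip_allocation_policy {}

  # External ranges (e.g. an office or VPN CIDR) are only accepted while the
  # public endpoint exists, i.e. while enable_private_endpoint = false.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.0/24"   # placeholder external CIDR
      display_name = "office"
    }
  }

  private_cluster_config {
    enable_private_endpoint = false     # keeps the public endpoint reachable
    enable_private_nodes    = true
    master_ipv4_cidr_block  = "172.16.0.0/28"   # placeholder master range
  }
}

Alternatively, keeping enable_private_endpoint = true and leaving master_authorized_networks_config empty also passes validation, since no external CIDR is whitelisted: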
master_authorized_networks_config {
}

private_cluster_config {
  enable_private_endpoint = true
  enable_private_nodes    = true
  master_ipv4_cidr_block  = "<cidr>"
}
This should create the private endpoint, and it won't complain about invalid master authorized networks. The configuration you tried passes an external CIDR to the whitelist for the public endpoint while at the same time asking for a strictly private cluster.
I was able to figure it out by myself; I guess I should have read all the documentation on the GCP side in detail.
The problem here is that I'm adding a master authorized network CIDR range to enable local network access, which is an external address, and the GCP documentation says:
You cannot include external IP addresses in the list of master authorized networks, because access to the public endpoint is disabled.
If you are curious and want to know more, click here.
I am trying to set up a secure connection to Azure Synapse Studio using a private link hub and private endpoint, as described in the doc below:
https://learn.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network
However, it throws an error.
"Unable to connect to serverless SQL pool because of a
network/firewall issue"
Please note: we use a VPN to connect from home to the on-premises company network, and we access the dedicated pool using SQL authentication. That works absolutely fine.
The private endpoint and link hub are deployed in the same subnet as the one we use for the dedicated pool, so I don't think there is a problem with allowing certain ports for the serverless pool. Please correct me if I'm wrong.
What am I missing here?
Follow these instructions for troubleshooting the network and firewall issue:
When creating your workspace, the managed virtual network should be enabled, and make sure to allow all IP addresses.
Note: if you do not enable it, Synapse Studio will not be able to create a private endpoint. If you fail to do this during Synapse workspace creation, you will not be able to change it later and will be forced to recreate the Synapse workspace.
Create a managed private endpoint, connect it to your data source, and check whether the managed private endpoint has been approved.
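If the workspace is provisioned with Terraform, a hedged sketch of the creation-time setting looks roughly like this; the names, filesystem ID, and credentials are all placeholders, and the key point is managed_virtual_network_enabled, which can only be set when the workspace is created.

resource "azurerm_synapse_workspace" "this" {
  name                                 = "syn-example-dev"               # placeholder
  resource_group_name                  = "rg-example"                    # placeholder
  location                             = "westeurope"                    # placeholder
  storage_data_lake_gen2_filesystem_id = "<adls-gen2-filesystem-id>"     # placeholder
  sql_administrator_login              = "sqladminuser"                  # placeholder
  sql_administrator_login_password     = "<secret>"                      # placeholder

  managed_virtual_network_enabled = true   # cannot be changed after creation

  identity {
    type = "SystemAssigned"
  }
}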
For more details, please refer to these links:
https://www.thedataguy.blog/azure-synapse-understanding-private-endpoints/
https://www.c-sharpcorner.com/article/how-to-setup-azure-synapse-analytics-with-private-endpoint/
How to set up access control for your Azure Synapse workspace - Azure Synapse Analytics | Microsoft Docs
This is what resolved it for me. Hope this helps someone out there.
I used a Terraform for_each loop to deploy the private endpoints. The Synapse workspace is using a managed private network. In order to disable public network access, the private link hub plus the three Synapse-specific endpoints (for the sub-resources) are required.
Pre-Reqs:
Private DNS Zones need to exist
Private Link Hub (deployed via TF in same resource group as the Synapse Workspace)
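For the second prerequisite, a hedged sketch of the link hub resource; the name is a placeholder (it must be alphanumeric), and main.tf below refers to it as azurerm_synapse_private_link_hub.this.

resource "azurerm_synapse_private_link_hub" "this" {
  name                = "plhsynexample"          # placeholder, letters and numbers only
  resource_group_name = var.resource_group_name  # same RG as the Synapse workspace
  location            = var.location
  tags                = var.tags
}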
main.tf
# Loop through Synapse subresource names, and create Private Endpoints to each of them
resource "azurerm_private_endpoint" "this" {
  for_each = var.endpoints

  name                = lower("pep-syn-${var.location}-${var.environment}-${each.value["alias"]}")
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = data.azurerm_subnet.subnet.id

  private_service_connection {
    name                           = lower("pep-syn-${var.location}-${var.environment}-${each.value["alias"]}")
    private_connection_resource_id = each.key == "web" ? azurerm_synapse_private_link_hub.this.id : azurerm_synapse_workspace.this.id
    subresource_names              = [each.key]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = each.value["source_dns_zone_group_name"]
    private_dns_zone_ids = [var.private_dns_zone_config[each.value["source_dns_zone_group_name"]]]
  }

  tags = var.tags

  lifecycle {
    ignore_changes = [
      tags
    ]
  }
}
variables.tf
variable "endpoints" {
description = "Private Endpoint Connections required. 'web' (case-sensitive) is for the Workspace to the Private Link Hub, and Sql/Dev/SqlOnDemand (case-sensitive) are from the Synapse workspace"
type = map(map(string))
default = {
"Dev" = {
alias = "studio"
source_dns_zone_group_name = "privatelink_dev_azuresynapse_net"
}
"Sql" = {
alias = "sqlpool"
source_dns_zone_group_name = "privatelink_sql_azuresynapse_net"
}
"SqlOnDemand" = {
alias = "sqlondemand"
source_dns_zone_group_name = "privatelink_sql_azuresynapse_net"
}
"web" = {
alias = "pvtlinkhub"
source_dns_zone_group_name = "privatelink_azuresynapse_net"
}
}
}
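A hypothetical example of calling this as a module; every value shown is a placeholder, and the private_dns_zone_config map keys must match the source_dns_zone_group_name values above.

module "synapse_private_endpoints" {
  source              = "./modules/synapse-private-endpoints"   # hypothetical path
  location            = "westeurope"
  environment         = "dev"
  resource_group_name = "rg-synapse-dev"
  tags                = { owner = "data-platform" }

  # Map each DNS zone group name used in var.endpoints to an existing
  # Private DNS Zone resource ID.
  private_dns_zone_config = {
    privatelink_dev_azuresynapse_net = "<zone-id-for-privatelink.dev.azuresynapse.net>"
    privatelink_sql_azuresynapse_net = "<zone-id-for-privatelink.sql.azuresynapse.net>"
    privatelink_azuresynapse_net     = "<zone-id-for-privatelink.azuresynapse.net>"
  }
}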
Appendix:
https://learn.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network#step-4-create-private-endpoints-for-your-workspace-resource
https://learn.microsoft.com/en-gb/azure/private-link/private-endpoint-overview#private-link-resource
When deploying an RDS database via Terraform, my Default target is unavailable.
Running the following command:
aws rds describe-db-proxy-targets --db-proxy-name <my_proxy_name_here>
I get two errors:
initially it's in state: PENDING_PROXY_CAPACITY
eventually that times out with the following error: DBProxy Target unavailable due to an internal error
Following extensive research, a two-hour call with AWS support, and very few search results for the error PENDING_PROXY_CAPACITY, I stumbled across the following discussion: https://github.com/hashicorp/terraform-provider-aws/issues/16379
I had a couple of issues with my config:
The outbound rules for my RDS proxy security group were limited to internal traffic only. This causes problems, as you need public internet access to reach AWS Secrets Manager!
At the time of writing, the Terraform documentation suggests you can pass a "username" option to the auth block of the aws_db_proxy resource (see: https://registry.terraform.io/providers/hashicorp/aws/4.26.0/docs/resources/db_proxy). This does not work and returns an error stating that the username option is not expected, because the proxy expects all of the auth information to be contained in one JSON object inside the secret whose ARN you provide. For this reason I created a second secret containing all the auth information, like so:
resource "aws_secretsmanager_secret_version" "lambda_rds_test_proxy_creds" {
secret_id = aws_secretsmanager_secret.lambda_rds_test_proxy_creds.id
secret_string = jsonencode({
"username" = aws_db_instance.lambda_rds_test.username
"password" = module.lambda_rds_secret.secret
"engine" = "postgres"
"host" = aws_db_instance.lambda_rds_test.address
"port" = 5432
"dbInstanceIdentifier" = aws_db_instance.lambda_rds_test.id
})
}
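For completeness, a hedged sketch of how the proxy consumes that combined secret: the auth block points at the combined-JSON secret rather than at individual username/password options. The role and subnet references here are placeholders, not my exact config.

resource "aws_db_proxy" "lambda_rds_test" {
  name           = "lambda-rds-test-proxy"
  engine_family  = "POSTGRESQL"
  role_arn       = aws_iam_role.rds_proxy.arn   # hypothetical proxy role
  vpc_subnet_ids = var.private_subnet_ids       # hypothetical variable
  require_tls    = true

  auth {
    auth_scheme = "SECRETS"
    iam_auth    = "DISABLED"
    secret_arn  = aws_secretsmanager_secret.lambda_rds_test_proxy_creds.arn
  }
}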
Fixing both issues still gave me an authentication error for the credentials; this required fixing the IAM permissions (discussed in the GitHub issue above). Because I had created a new secret to hold all the information the proxy requires, the proxy's role no longer had access to that secret, so I updated my IAM role for the newly created resource.
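One hedged way to express that permission update in Terraform; the role and secret references are assumptions based on the snippets above.

resource "aws_iam_role_policy" "rds_proxy_secret_access" {
  name = "rds-proxy-secret-access"
  role = aws_iam_role.rds_proxy.id   # hypothetical proxy role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      Resource = [aws_secretsmanager_secret.lambda_rds_test_proxy_creds.arn]
      # kms:Decrypt may also be required if the secret uses a customer-managed KMS key
    }]
  })
}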
I am posting this here because the GitHub issue is archived and I am unable to add a comment with some of my search terms, which might help others searching for the same problem find it sooner; there is very little information out there about the RDS proxy errors described here.
I'm creating a Square Client object like this:
const squareClient = new Client({
  environment: Environment.Sandbox,
  accessToken: "The_correct_sandbox_token_goes_here",
});

const paymentsApi = squareClient.paymentsApi;
and calling the createPayment method from within a lambda function with a body like the one below:
{
  "sourceId": "cnon:CBASEHY1uZmmlYRYagaqS7yd9Zo",
  "amountMoney": {
    "amount": "12500",
    "currency": "USD"
  },
  "locationId": "Location_ID_here",
  "idempotencyKey": "6a36e49c-914d-4934-bc34-c183ba0a08c5"
}
This works fine on my local machine (using serverless offline), but when deployed to AWS, the call to createPayment times out after six seconds. Is there something extra that needs to be done to call createPayment from a lambda function?
The timeout you experienced might be due to the function being attached to a VPC.
If an AWS Lambda function is configured to use a VPC, then the function only has access to the Internet if the following have been configured:
Lambda function is connected to a private subnet
A NAT Gateway or NAT Instance is running in a public subnet in the same VPC
A Route Table on the private subnet directs Internet-bound traffic to the NAT
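If the VPC is managed with Terraform, a hedged sketch of that wiring could look like the following; every name and ID here is a placeholder.

resource "aws_eip" "nat" {
  domain = "vpc"   # use `vpc = true` on older AWS provider versions
}

resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id   # NAT lives in a public subnet
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.this.id
}

# Internet-bound traffic from the private (Lambda) subnet goes via the NAT
resource "aws_route" "private_internet" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id   # subnet the Lambda is attached to
  route_table_id = aws_route_table.private.id
}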
If the Lambda function does not require access to any resources in the VPC, do not attach the function to a VPC. This will automatically grant direct access to the Internet.
Alternatively, you might try increasing the timeout of the Lambda function in case the external service simply needed more time.
I am running a Node.js (12.x) Lambda in AWS. The purpose of this Lambda is to interact with CloudFormation stacks, and I'm doing that via the aws-sdk. When testing this Lambda locally using lambda-local, it executes successfully and the stack can be seen in a CREATING state in the AWS console.
However, when I push and run this lambda in AWS, it fails after 15 seconds, and I get this error:
{"errorType":"TimeoutError","errorMessage":"Socket timed out without establishing a connection","code":"TimeoutError","message":"Socket timed out without establishing a connection","time":"2020-06-29T03:10:27.668Z","region":"us-east-1","hostname":"cloudformation.us-east-1.amazonaws.com","retryable":true,"stack":["TimeoutError: Socket timed out without establishing a connection"," at Timeout.connectTimeout [as _onTimeout] (/var/task/node_modules/aws-sdk/lib/http/node.js:69:15)"," at listOnTimeout (internal/timers.js:549:17)"," at processTimers (internal/timers.js:492:7)"]}
This led me to investigate the Lambda timeout and the possible configuration changes I could make, found in https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-retry-timeout-sdk/ and https://aws.amazon.com/premiumsupport/knowledge-center/lambda-vpc-troubleshoot-timeout/, but nothing worked.
I found a couple of similar issues, such as "AWS Lambda: Task timed out", which suggest things like the Lambda timeout and Lambda memory limits, but I've set mine to 30 seconds and the logs show max memory used is 88 MB out of a possible 128 MB. I tried with an increase anyway, and no luck.
The curious part is that it fails without establishing a connection to hostname cloudformation.us-east-1.amazonaws.com. How is that possible when the role assigned to the Lambda has full CloudFormation privileges? I'm completely out of ideas, so any help would be greatly appreciated. Here's my code:
TEST EVENT:
{
  "stackName": "mySuccessfulStack",
  "app": "test"
}
Function my handler calls (createStack):
const AWS = require('aws-sdk');

const templates = {
  "test": {
    TemplateURL: "https://<bucket>.s3.amazonaws.com/<path_to_file>/test.template",
    Capabilities: ["CAPABILITY_IAM"],
    Parameters: {
      "HostingBucket": "test-hosting-bucket"
    }
  }
}

async function createStack(event) {
  AWS.config.update({
    maxRetries: 2,
    httpOptions: {
      timeout: 30000,
      connectTimeout: 5000
    }
  });

  const cloudformation = new AWS.CloudFormation();
  const { app, stackName } = event;

  let stackParams = templates[app];
  stackParams['StackName'] = app + "-" + stackName;

  let formattedTemplateParams = [];
  for (let [key, value] of Object.entries(stackParams.Parameters)) {
    formattedTemplateParams.push({ "ParameterKey": key, "ParameterValue": value })
  }
  stackParams['Parameters'] = formattedTemplateParams;

  const result = await cloudformation.createStack(stackParams).promise();
  return result;
}
A Lambda function in a VPC has no public IP address and no internet access. From the docs:
Connect your function to private subnets to access private resources. If your function needs internet access, use NAT. Connecting a function to a public subnet does not give it internet access or a public IP address.
There are two common solutions for that:
Place the Lambda function in a private subnet and set up a NAT gateway in a public subnet. Then add a route from the private subnet's route table to the NAT device. This will enable the Lambda to access the internet and, subsequently, the CloudFormation service.
Set up a VPC interface endpoint for CloudFormation. This will allow your Lambda function in the private subnet to access CloudFormation without the internet.
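A hedged Terraform sketch of the second option; the VPC, subnet, security group, and region are placeholders.

resource "aws_vpc_endpoint" "cloudformation" {
  vpc_id              = aws_vpc.this.id
  service_name        = "com.amazonaws.us-east-1.cloudformation"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private.id]          # subnets the Lambda uses
  security_group_ids  = [aws_security_group.endpoint.id] # must allow 443 from the Lambda
  private_dns_enabled = true
}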
I'm using Terraform to build an API + corresponding lambda functions.
I have some other infrastructure, which I'd like to think is nicely set up (maybe I'm wrong?):
2 VPCs (let's just call 'em test and prod)
Private & public subnets in each VPC
RDS DBs launched in the private subnets
All resources are identical on both VPCs; e.g. there's a test-private-subnet and a prod-private-subnet with exactly the same specs, same for DBs, etc.
Now, I'm working on the API and the lambdas that will power said API.
I don't feel like I need test & prod API gateways and test & prod lambdas:
the lambda code will be the same, just acting on a different DB
you can use API stage_variables, with different IPs, to achieve test vs. prod environments for the API
But when I try to set up a lambda with the vpc_config block (because I need it to be associated with the security group that's allowed ingress on the DBs), I get the following error:
Error applying plan:
1 error(s) occurred:
* module.lambdas.aws_lambda_function.api-lambda-users: 1 error(s) occurred:
* aws_lambda_function.api-lambda-users: Error creating Lambda function: InvalidParameterValueException: Security Groups are required to be in the same VPC.
status code: 400, request id: xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx
My lambda config looks like this:
resource "aws_lambda_function" "api-lambda-users" {
provider = "PROVIDER"
function_name = "users"
s3_key = "users/${var.lambda-package-name}"
s3_bucket = "${var.api-lambdas-bucket}"
role = "${aws_iam_role.lambda-role.arn}"
handler = "${var.handler-name}"
runtime = "${var.lambda-runtime}"
vpc_config {
security_group_ids = [
//"${data.aws_security_group.prod-lambda.id}",
"${data.aws_security_group.test-lambda.id}"
]
subnet_ids = [
//"${data.aws_subnet.prod-primary.id}",
"${data.aws_subnet.test-primary.id}"
]
}
}
Notice I'd ideally like to just specify them, together, in their corresponding lists.
Am I missing something?
Suggestions?
Any help, related or not, is much appreciated.
A Lambda running inside a VPC is subject to the same networking "rules" as EC2 instances, so it can't "exist" in two VPCs. If the Lambda function needs to talk to VPC resources in two separate VPCs, you could use something like VPC peering, or just run two copies of the function in the two different VPCs.
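If your Terraform version supports resource for_each (0.12.6+), a hedged sketch of the "two copies" option might look like the following; the per-environment data sources are the ones already referenced in your config (the prod ones are currently commented out), and the names are placeholders.

locals {
  environments = {
    test = {
      subnet_id         = data.aws_subnet.test-primary.id
      security_group_id = data.aws_security_group.test-lambda.id
    }
    prod = {
      subnet_id         = data.aws_subnet.prod-primary.id
      security_group_id = data.aws_security_group.prod-lambda.id
    }
  }
}

# One function per environment, each wired into its own VPC
resource "aws_lambda_function" "api-lambda-users" {
  for_each = local.environments

  function_name = "users-${each.key}"
  s3_bucket     = var.api-lambdas-bucket
  s3_key        = "users/${var.lambda-package-name}"
  role          = aws_iam_role.lambda-role.arn
  handler       = var.handler-name
  runtime       = var.lambda-runtime

  vpc_config {
    subnet_ids         = [each.value.subnet_id]
    security_group_ids = [each.value.security_group_id]
  }
}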
When you add a VPC configuration to a Lambda function, it can only access resources in that VPC. If a Lambda function needs to access both VPC resources and the public internet, the VPC needs a Network Address Translation (NAT) instance or gateway inside it; to reach resources in a second VPC, you would additionally need a VPC peering connection.