I have two accounts, account A and account B. Account A has a VPC that contains resources needed by my Lambda function in account B. I am trying to connect my Lambda function to the VPC in account A.
I created a peering connection between the VPCs in account A and account B, which succeeded. I then added routes for the peering connection to the route tables, and tried to connect the Lambda function in account B to the VPC in account A using this CLI command:
aws lambda update-function-configuration --function-name <myLambdaName> --vpc-config SubnetIds=<subnet-in-vpc-account-A>,SecurityGroupIds=<sec-group-in-vpc-account-A>
However, this failed with the following error:
An error occurred (InvalidParameterValueException) when calling the UpdateFunctionConfiguration operation: Error occurred while DescribeSecurityGroups. EC2 Error Code: InvalidGroup.NotFound. EC2 Error Message: The security group <sec-group-in-vpc-account-A> does not exist
This suggests my VPCs are not connected. Is it even possible for a Lambda function to connect to a cross-account VPC?
Related
I'm trying to connect a Spring Boot application running on AWS EKS to AWS OpenSearch, both of which reside in a VPC. Though the connection is successful, I'm unable to write any data to the index.
All the AWS resources (EKS and OpenSearch) are configured using Terraform. I have included the Elasticsearch subnet CIDR in the egress rule attached to the application. The application also correctly assumes the EKS service account and the pod role, which I specified in the services stanza for Elasticsearch. In the policy attached to the pod role, I see all the required permissions: ESHttpPost, ESHttpGet, ESHttpPut, etc.
This is the error I get:
{"error":{"root_cause": [{"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-
hellodemo-role-1,backend_roles=
[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo
role-1], requested
Tenant=null]"}],"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld demo-eks-PodRle-
hellodemo-role-1,
backend_roles=[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-
PodRle-hellodemo role-1], requested Tenant=null]"},"status":403}
Is there anything I'm missing in the configuration?
This error can be resolved by assigning the pod role to the additional_roles key in the Elasticsearch Terraform. AWS STS takes care of this internally when it receives a request from EKS.
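The Terraform in question isn't shown above, but the underlying change is a roles mapping in OpenSearch fine-grained access control: the pod role must appear as a backend role. A minimal sketch of the equivalent API call in Python, where the endpoint, region, and the all_access target role are assumptions:
# Sketch: map the EKS pod role as an OpenSearch backend role.
# The endpoint, region, and target role here are placeholders, not
# values from the original setup.
import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "us-east-1"
host = "https://vpc-mydomain-abc123.us-east-1.es.amazonaws.com"  # hypothetical endpoint

creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, region, "es",
                   session_token=creds.token)

# Replace the mapping for all_access (or a narrower custom role) with the pod role.
resp = requests.put(
    f"{host}/_plugins/_security/api/rolesmapping/all_access",
    auth=awsauth,
    json={"backend_roles": [
        "arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo-role-1"
    ]},
)
resp.raise_for_status()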
I accidentally deleted all the default subnets in AWS, and I want to recreate them. I ran this CLI command: "aws ec2 create-default-subnet --availability-zone us-west-2a"
but I always get the error message
"An error occurred (DefaultSubnetAlreadyExistsInAvailabilityZone) when calling the CreateDefaultSubnet operation: 'subnet-015c449cab525d947' is already the default subnet in us-west-2d."
How do I solve this problem?
Only one default subnet can exist in each availability zone, and it seems you already have yours. Log in to your AWS account, go to VPC > Subnets, and delete the default subnet you have there; then you can re-create it with this command:
aws ec2 create-default-subnet --availability-zone us-west-2a
Check the AWS documentation for more info:
https://aws.amazon.com/premiumsupport/knowledge-center/recreate-default-vpc/
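If you'd rather script the check and cleanup, here is a minimal boto3 sketch; the region and the list of zones are assumptions:
# Sketch: find which AZs already have a default subnet, then create the missing ones.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an assumption

# Availability zones that already have a default subnet.
existing = ec2.describe_subnets(
    Filters=[{"Name": "default-for-az", "Values": ["true"]}]
)["Subnets"]
covered = {s["AvailabilityZone"] for s in existing}

for zone in ("us-west-2a", "us-west-2b", "us-west-2c", "us-west-2d"):
    if zone not in covered:
        ec2.create_default_subnet(AvailabilityZone=zone)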
I am trying to set up cross-account data transfer from an AWS Lambda function in AWS account A to an SQS queue in AWS account B using boto3. Below are the steps I have followed.
Created an IAM role in account A which has "SendMessage" access to the SQS queue in account B (given the ARN of the SQS queue of account B).
Added the account ID of AWS account B to the trust relationship of the IAM role in account A.
Attached this IAM role to the Lambda function and wrote code to send a message to the SQS queue using the queue URL.
Created an SQS queue in account B.
In the SQS queue access policy, I wrote a policy which allows the Lambda role of account A to send messages to the queue (a sketch of such a policy follows this list).
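For illustration, a queue policy along the lines described in the last step might look like this; the account IDs, role name, queue name, and region are all placeholders, not actual values from my setup:
# Sketch of the account-B queue policy; every ID and name here is a placeholder.
import boto3
import json

queue_url = "https://sqs.us-east-1.amazonaws.com/222222222222/my-queue"  # account B

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAccountALambdaRole",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/my-lambda-role"},  # account A
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:222222222222:my-queue",
    }],
}

sqs = boto3.client("sqs", region_name="us-east-1")
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})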
After that, when I try to test my Lambda function, it gives me the error below.
[ERROR] ClientError: An error occurred (AccessDenied) when calling the SendMessage operation: Access to the resource https://queue.amazonaws.com/ is denied.
Can anybody please help me understand what's wrong here?
This error can occur if you are attempting to access SQS via the boto3 Python library (e.g. the OP's Lambda) from inside a VPC with private DNS enabled.
Per AWS documentation:
Private DNS doesn't support legacy endpoints such as queue.amazonaws.com or us-east-2.queue.amazonaws.com.
To solve this error:
Create a VPC endpoint for com.amazonaws.<region>.sqs in your VPC (a sketch of this call appears below)
Pass the appropriate service endpoint URL to the boto3.client() constructor:
import boto3

region = "us-east-1"  # the region your queue lives in
client = boto3.client("sqs", endpoint_url=f"https://sqs.{region}.amazonaws.com")
IAM permissions are left as an exercise to the reader.
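For the first step, the endpoint can be created with a call along these lines; the VPC, subnet, and security group IDs are placeholders:
# Sketch: create an interface VPC endpoint for SQS (all IDs are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # required for the non-legacy sqs.<region> hostname to resolve
)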
I am intermittently getting the following error while running an AWS Glue job:
Error downloading script: fatal error: An error occurred (404) when calling the HeadObject operation:
Not sure why it would be intermittent, but this is likely an issue connecting to S3. A few things to check:
Glue Jobs run with an IAM role. You can check your Job details to see what it's currently set to. You should make sure that role has privileges to access the S3 bucket that has your job code in it.
Glue Jobs require a VPC endpoint. You should check to make sure that you have one properly created for the VPC you're using.
It is possible to configure a VPC endpoint without associating it with any subnets. Check your VPC Endpoint for the correct routing.
Below is a bit of reference code written with AWS CDK, in case it's helpful.
IAM Role
new iam.Role(this, `GlueJobRole`, {
  // Allow the Glue service to assume this role
  assumedBy: new iam.ServicePrincipal(`glue.amazonaws.com`),
  // Attach the AWS-managed Glue service policy
  managedPolicies: [
    iam.ManagedPolicy.fromAwsManagedPolicyName(
      `service-role/AWSGlueServiceRole`
    ),
  ],
});
VPC Endpoint
const vpc = ec2.Vpc.fromLookup(this, `VPC`, { vpcId: VPC_ID });
new ec2.GatewayVpcEndpoint(this, `S3VpcEndpoint`, {
  service: ec2.GatewayVpcEndpointAwsService.S3,
  // GatewayVpcEndpointProps expects SubnetSelection[], not ISubnet[]
  subnets: [{ subnets: vpc.publicSubnets }],
  vpc,
});
Maybe your bucket has a customer managed KMS key enabled. You need to grant the Glue role access to that key in KMS to fix this issue.
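A minimal sketch of that grant with boto3, assuming the key ARN and Glue role ARN below (both hypothetical):
# Sketch: append a key-policy statement letting the Glue role use the bucket's CMK.
import boto3
import json

kms = boto3.client("kms")
key_id = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE"  # the bucket's CMK (assumption)

policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "AllowGlueRoleUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111111111111:role/my-glue-job-role"},  # hypothetical
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))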
I've created a Node.js application which connects to DynamoDB. Everything works fine locally. Now I'm trying to set it up on AWS servers.
First, I created the DynamoDB tables from the AWS DynamoDB console. That works fine.
Then I created a new role from the IAM management console > Roles to access DynamoDB, and attached that role to the EC2 instance.
But when I ran any aws dynamodb CLI command, it gave me an error telling me to specify the region.
So I went to the IAM management console > Users and created an access key for my admin-type user.
I then logged in to the EC2 instance as ec2-user and ran aws configure with the previously generated access key:
AWS Access Key ID [None]: ACCESS KEY
AWS Secret Access Key [None]: SECRET
Default region name [None]: us-east-1
Default output format [None]: json
But when I use the command aws dynamodb list-tables, it gives no output and no error.
As I commented, the main issue was the outbound rules of the attached security group. Here are the necessary things to do:
Set a security group outbound rule allowing HTTPS (see the sketch at the end of this answer)
Setup Credentials
Create Access Key from IAM management console > Users.
SSH to EC2 instance.
Configure the credentials to EC2 instance using aws configure command or directly modify ~/.aws/credentials file.
Attach Role
Create Role from IAM management console > Roles. Select the role that is necessary to perform operations on the AWS service, e.g. AmazonDynamoDBFullAccess.
Open the EC2 console and select the EC2 instance.
Attach the role from the Actions menu.
It is good, though optional, to create a VPC endpoint. If you face an UnauthorizedOperation error while creating the endpoint, assign the AmazonEC2FullAccess permission to the user from the IAM console; remove it later if you don't need it.
To use the AWS service from your application, find the relevant endpoint from this list.
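As a rough sketch of the outbound HTTPS rule from the first step (the security group ID is a placeholder):
# Sketch: allow outbound HTTPS from the instance's security group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)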
It sounds like you are having problems connecting to DynamoDB because of the way you have configured your VPC.
There are some options but if you would prefer to keep your VPC isolated from the internet then you could enable VPC endpoints for DynamoDB. That way you can access DynamoDB from within your VPC without those connections going over the public internet.
There is a step-by-step guide for how to do that here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
Essentially, it involves the following steps:
get the VPC ID of the VPC where your EC2 instance is located
create a VPC endpoint for DynamoDB, specifying the VPC ID and the regional DynamoDB service name:
aws ec2 create-vpc-endpoint --service-name com.amazonaws.<region>.dynamodb --vpc-id <yourvpcid>
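Once the endpoint is in place, a quick check from the instance might look like this; the region is an assumption:
# Quick check that DynamoDB is reachable from the instance.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
print(dynamodb.list_tables()["TableNames"])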