I want to allow CodeBuild to run my database migrations. I am configuring my CodeBuild project to be in the VPC and subnet of my RDS. But what do I put for security group? Is this security group to allow/deny access to my CodeBuild? Or should I understand it as the security group I want my CodeBuild to access?
Quote from the CodeBuild documentation:
"For Security Groups, choose the security groups that AWS CodeBuild uses to allow access to resources in the VPCs."
In other words, this security group is attached to the network interfaces that CodeBuild creates in your VPC, so its outbound rules govern what CodeBuild can reach. For the migrations to work, your RDS instance's security group must also allow inbound traffic from this CodeBuild security group on the database port.
Learn more about using VPC with CodeBuild: https://docs.aws.amazon.com/codebuild/latest/userguide/vpc-support.html
I need to update a config file in a shared EFS drive with all of the private IP addresses of the current autoscaling group.
The approach I'm thinking of is to run a user data script that queries the ASG for the private IP addresses and echoes them into the config file. To do that, the EC2 instance needs AWS CLI credentials with appropriate read-only access. Ideally, I don't want to store any credentials on this instance.
Is there another way? Possibly VPC Endpoint or something?
Thanks!
You are asking two questions.
How do I provide credentials securely to an EC2 instance?
You use IAM Roles and assign the role to your EC2 instances. Then use the instance credentials in your code. The CLI examples below will automatically pick up these credentials.
Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances
How do I get the private IP address of EC2 instances in an Auto Scaling Group (ASG)?
You need to get a list of instances attached to your ASG.
For each instance in your ASG call the describe API and extract the private IP address.
Example commands:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name my-auto-scaling-group
aws ec2 describe-instances --instance-ids i-1234567890abcdef0
You can filter the command output. For example, add the following to the second command to display just the private IP address:
--query 'Reservations[*].Instances[*].PrivateIpAddress'
Recommendation:
I would use the Python SDK and write a simple program that provides these features and updates your config file.
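As a sketch of that approach, assuming boto3 is available and the instance role grants autoscaling:DescribeAutoScalingGroups and ec2:DescribeInstances (the ASG name and config path are placeholders):

```python
def private_ips_from_reservations(reservations):
    # Flatten a DescribeInstances response's Reservations list into
    # a list of private IP addresses, skipping instances without one
    # (e.g. terminated instances).
    return [
        inst["PrivateIpAddress"]
        for res in reservations
        for inst in res["Instances"]
        if "PrivateIpAddress" in inst
    ]


def write_asg_ips(asg_name, config_path):
    # boto3 is imported lazily so the parsing helper above stays usable
    # without it; credentials are picked up from the instance role.
    import boto3

    asg = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
    instance_ids = [
        inst["InstanceId"]
        for group in groups["AutoScalingGroups"]
        for inst in group["Instances"]
    ]
    resp = ec2.describe_instances(InstanceIds=instance_ids)
    ips = private_ips_from_reservations(resp["Reservations"])
    with open(config_path, "w") as f:
        f.write("\n".join(ips) + "\n")
```

You would run this from user data (or a cron job, since the ASG membership changes over time) on an instance that mounts the EFS drive.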
I'm trying to create a Beanstalk app with a Network Load Balancer in a new VPC (one public subnet, one private subnet, an internet gateway, a NAT gateway...).
I can create it successfully in my personal AWS account.
With my organization's account, I get the error "VPC does not exist":
eb create Dev-Price-Availability-API-App-Dev -i t2.micro --vpc --vpc.id vpc-e753b89d
Do you want to associate a public IP address? (Y/n): n
Enter a comma-separated list of Amazon EC2 subnets: subnet-2903f417
Enter a comma-separated list of Amazon ELB subnets: subnet-2903f417
Enter a comma-separated list of Amazon VPC security groups: sg-c382d588
Do you want the load balancer to be public? (Select no for internal) (Y/n): n
NOTE: The current directory does not contain any source code. Elastic Beanstalk is launching the sample application instead.
ERROR: ServiceError - Configuration validation exception: The VPC 'vpc-e753b89d' does not exist.
I have tried to reproduce this many times (creating new VPCs, etc.); the script always runs successfully in my personal AWS account but fails with the same error in the organization account. All subnets of the VPC and the Beanstalk environment are in the same region (us-east-1).
Sometimes the script instead throws "subnet does not exist" or "securitygroups does not exists".
Does anyone have the same issue, could you give me some ideas?
I think this is a bug in the EB CLI. I am currently using EB CLI 3.14.1 (Python 3.6.5).
When I run the EB CLI, eb always uses my default AWS profile, even though I have set my profile to the organization one:
[default]
region=us-west-2
aws_access_key_id=....
aws_secret_access_key=...
[myorganization]
aws_access_key_id=...
aws_secret_access_key=...
region=us-east-1
output=json
So what I did was:
- Back up my default profile
- Rename my organization profile to default
Then I could run eb commands successfully without the error.
Thanks
I want to create a DB security group before creating an RDS instance using Ansible. There is a module called ec2_group which creates security groups for a VPC, but I want to create a security group for the DB instance only.
Is there a separate module for it, or do I have to use the ec2_group module to create a DB security group?
Also, correct me if I'm wrong: an EC2 security group and a DB security group are two different things, right?
I'm afraid there is no Ansible module for that.
DB SGs and VPC SGs are indeed different things: you create a DB SG and allow traffic from one of your VPC SGs.
If you need to automate DB SG creation, your option is the AWS CLI.
Run the local commands aws rds create-db-security-group and aws rds authorize-db-security-group-ingress with the Ansible command module.
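A minimal sketch of that approach, assuming the AWS CLI is configured on the control host; the group name and CIDR are placeholders, and classic DB security groups only apply to accounts that still support EC2-Classic:

```yaml
- name: Create DB security group
  command: >
    aws rds create-db-security-group
    --db-security-group-name mydbsecuritygroup
    --db-security-group-description "My DB security group"

- name: Authorize an IP range
  command: >
    aws rds authorize-db-security-group-ingress
    --db-security-group-name mydbsecuritygroup
    --cidrip 203.0.113.5/32
```

Note that these tasks are not idempotent: rerunning the playbook will fail once the group exists, so in practice you would guard them with a describe call or ignore the "already exists" error.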
I submitted a PR last year to add two AWS modules: boto3 and boto3_wait.
These 2 modules allow you to interact with AWS API using boto3.
For instance, you could create a DB security group by calling the create_db_security_group and authorize_db_security_group_ingress methods on the RDS service:
- name: Create DB security group
  boto3:
    name: rds
    region: us-east-1
    operation: create_db_security_group
    parameters:
      DBSecurityGroupName: mydbsecuritygroup
      DBSecurityGroupDescription: My DB security group

- name: Authorize IP range
  boto3:
    name: rds
    region: us-east-1
    operation: authorize_db_security_group_ingress
    parameters:
      DBSecurityGroupName: mydbsecuritygroup
      CIDRIP: 203.0.113.5/32
If you're interested in this feature, feel free to vote on the PR. :)
I'm trying to deploy a Docker container image to AWS using ECS, but the EC2 instance is not being created. I have scoured the internet looking for an explanation for the following error:
"A client error (InvalidParameterException) occurred when calling the RunTask operation: No Container Instances were found in your cluster."
Here are my steps:
1. Pushed a docker image FROM Ubuntu to my Amazon ECS repo.
2. Registered an ECS Task Definition:
aws ecs register-task-definition --cli-input-json file://path/to/my-task.json
3. Ran the task:
aws ecs run-task --task-definition my-task
Yet, it fails.
Here is my task:
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "environment": [],
      "name": "my-container",
      "image": "my-namespace/my-image",
      "cpu": 10,
      "memory": 500,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80
        }
      ],
      "entryPoint": [
        "java",
        "-jar",
        "my-jar.jar"
      ],
      "essential": true
    }
  ]
}
I have also tried using the management console to configure a cluster and services, yet I get the same error.
How do I configure the cluster to have EC2 instances, and what kind of container instances do I need to use? I thought this whole process was supposed to create the EC2 instances to begin with!
I figured this out after a few more hours of investigating. Amazon, if you are listening, you should state this somewhere in your management console when creating a cluster or adding instances to the cluster:
"Before you can add ECS instances to a cluster you must first go to the EC2 Management Console and create ecs-optimized instances with an IAM role that has the AmazonEC2ContainerServiceforEC2Role policy attached"
Here is the rigmarole:
1. Go to your EC2 Dashboard, and click the Launch Instance button.
2. Under Community AMIs, search for ecs-optimized and select the one that best fits your project needs; any will work. Click Next.
3. When you get to Configure Instance Details, click on the create new IAM role link and create a new role called ecsInstanceRole.
4. Attach the AmazonEC2ContainerServiceforEC2Role policy to that role.
5. Then finish configuring your ECS instance. NOTE: If you are creating a web server, you will want a security group that allows access to port 80.
After a few minutes, when the instance is initialized and running, you can refresh the ECS Instances tab of the cluster you are trying to add instances to.
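The console steps above can also be sketched programmatically. This builds RunInstances parameters under the assumptions that the ecsInstanceRole instance profile already exists and that ami_id is an ECS-optimized AMI for your region; the cluster name and AMI ID are placeholders:

```python
def ecs_instance_params(ami_id, cluster_name, instance_type="t2.micro"):
    # RunInstances parameters for a single ECS container instance.
    # The user data registers the instance with the named cluster via
    # /etc/ecs/ecs.config; the instance profile must carry the
    # AmazonEC2ContainerServiceforEC2Role managed policy.
    user_data = (
        "#!/bin/bash\n"
        "echo ECS_CLUSTER={} >> /etc/ecs/ecs.config\n".format(cluster_name)
    )
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "IamInstanceProfile": {"Name": "ecsInstanceRole"},
        "UserData": user_data,
    }


# Usage (hypothetical values):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**ecs_instance_params("ami-0123456789abcdef0", "my-cluster"))
```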
I ran into this issue when using Fargate. I fixed it by explicitly passing launchType="FARGATE" when calling run_task.
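For reference, a sketch of what that call can look like with boto3 (the cluster, task, subnet, and security group values are placeholders; Fargate tasks also need an awsvpc network configuration):

```python
def fargate_run_task_params(cluster, task_definition, subnets, security_groups):
    # Without an explicit launchType, run_task defaults to the EC2 launch
    # type and fails with "No Container Instances were found" when the
    # cluster has no registered container instances.
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,
        "launchType": "FARGATE",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "securityGroups": security_groups,
                "assignPublicIp": "ENABLED",
            }
        },
    }


# Usage (hypothetical IDs):
# import boto3
# ecs = boto3.client("ecs")
# ecs.run_task(**fargate_run_task_params(
#     "my-cluster", "my-task",
#     ["subnet-0123456789abcdef0"], ["sg-0123456789abcdef0"]))
```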
Currently, the Amazon AWS web interface can automatically create instances with the correct AMI and the correct name so it'll register to the correct cluster.
Even though all instances were created by Amazon with the correct settings, my instances wouldn't register. On the Amazon AWS forums I found a clue: the container instances need internet access, and if your private VPC does not have an internet gateway, they won't be able to connect to ECS.
The fix
In the VPC dashboard you should create a new Internet Gateway and connect it to the VPC used by the cluster.
Once it is attached, you must update (or create) the route table for the VPC and add the following as the last line:
0.0.0.0/0 igw-24b16740
where igw-24b16740 is the ID of your freshly created internet gateway.
Other suggested checks
Selecting the suggested AMI specified for the given region solved my problem.
To find the AMI, check Launching an Amazon ECS Container Instance.
By default, all EC2 instances are added to the default cluster, so the name of the cluster also matters.
See point 10 at Launching an Amazon ECS Container Instance.
More information available in this thread.
Just in case someone else is blocked by this problem as I was...
I tried everything here and nothing worked for me.
Besides what was said above regarding the EC2 instance role, in my case it only worked once I also gave the EC2 instance some basic configuration via User Data, with an initial script like this:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=quarkus-ec2
EOF
Putting the name of the related ECS cluster into this ECS config file resolved my problem. Without this config, the ECS agent log on the EC2 instance showed an error that it could not connect to ECS; with it, the EC2 instance became visible to the ECS cluster.
After doing this, I could see the EC2 instance available in my ECS cluster.
The AWS documentation says this part is optional, but in my case it didn't work without this "optional" configuration.
When this happens, you need to check the following:
Your EC2 instances should have a role with the AmazonEC2ContainerServiceforEC2Role managed policy attached.
Your EC2 instances should be running an ECS-optimized AMI (you can check this in the EC2 dashboard).
Your container instances need a network path to ECS: if your VPC's private subnets have no public IPs assigned, you need either an interface VPC endpoint or a NAT gateway set up.
Most of the time, this issue appears because of a misconfigured VPC. According to the documentation:
QUOTE: If you do not have an interface VPC endpoint configured and your container instances do not have public IP addresses, then they must use network address translation (NAT) to provide this access.
To create a VPC endpoint: follow the documentation here
To create a NAT gateway: follow the documentation here
These are the reasons why you don't see the EC2 instances listed in the ECS dashboard.
If you have come across this issue after creating the cluster:
Go to the ECS instance in the EC2 instances list and check the IAM role assigned to the instance. You can identify the instances easily: the instance name starts with ECS Instance.
After that, click on the IAM role and it will take you to the IAM console. Attach the AmazonEC2ContainerServiceforEC2Role policy to the role and save it.
Your instances will be available in the cluster shortly after you save the role.
The real issue is a lack of permissions. As long as you create and assign an IAM role with the AmazonEC2ContainerServiceforEC2Role policy, the problem goes away.
I realize this is an older thread, but I stumbled on it after seeing the error the OP mentioned while following this tutorial.
Changing to an ecs-optimized AMI image did not help. My VPC already had a route for 0.0.0.0/0. My instances were added to the correct cluster, and they had the proper permissions.
Thanks to #sanath_p's link to this thread, I found a solution and took these steps:
Copied my Autoscaling Group's configuration
Set IP address type under the Advanced settings to "Assign a public IP address to every instance"
Updated my Autoscaling Group to use this new configuration.
Refreshed my instances under the Instance refresh tab.
Another possible cause that I ran into was updating my ECS cluster AMI to an "Amazon Linux 2" AMI instead of an "Amazon Linux AMI", which caused my EC2 user_data launch script to not work.
For an instance image other than the ECS-optimized one, do the following steps:
Install the ECS agent (ECS Agent download link)
Add the following line to /etc/ecs/ecs.config:
ECS_CLUSTER=REPLACE_YOUR_CLUSTER_NAME
The container instances in the VPC need to be able to reach ECR.
To do this, the security group attached to the instances needs an outbound rule allowing traffic to 0.0.0.0/0.