How can I tell if an EC2 instance is inside my VPC? - amazon-web-services

My client has many EC2 instances running, and a VPC (virtual private cloud) running.
I'm using a platform called Starcluster to launch nodes, and I need to know if they're in the VPC or just ordinary EC2 nodes. How can I do that?
Amazon's VPC console at this address:
https://console.aws.amazon.com/vpc/home?region=us-east-1
shows:
1 VPC
3 Running Instances
but some of those running instances are non-VPC instances, as far as I know. Hints?

You can see it in the AWS Console:

When you select an instance on the EC2 Instances screen, you can see a number of fields under the Description tab. Look for the field called "VPC ID": if it has no value, the instance is not in a VPC.
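If you prefer the command line, the same check can be done with the AWS CLI; this lists each instance with its VPC ID, and an empty value means the instance is not in a VPC:

# List every instance in the region together with its VPC ID (None = not in a VPC / EC2-Classic)
aws ec2 describe-instances \
    --query "Reservations[].Instances[].[InstanceId,VpcId]" \
    --output table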

Related

Amazon Linux 2 instances won't appear in Systems Manager

I think I've done everything listed as a pre-req for this, but I just can't get the instances to appear in Systems Manager as managed instances.
I've picked an AMI which I believe should have the agent installed by default.
ami-032598fcc7e9d1c7a
PS C:\Users\*> aws ec2 describe-images --image-ids ami-032598fcc7e9d1c7a
{
    "Images": [
        {
            "ImageLocation": "amazon/amzn2-ami-hvm-2.0.20200520.1-x86_64-gp2",
            "Description": "Amazon Linux 2 AMI 2.0.20200520.1 x86_64 HVM gp2",
I've also created my own role and included the following policy, which I've used previously to get instances into Systems Manager.
Finally, I've attached the role to the instances.
I've got Systems Manager set to a 30-minute schedule and have waited this out, but the instances don't appear. I've clearly missed something here and would appreciate suggestions as to what.
Does the agent use some sort of backplane to communicate, or should I have allowed some communication back to AWS in the security groups?
Could this be because the instances have private IPs only? Previous working examples had public IPs, but I don't want that for this cluster.
Besides the instance role for the EC2 instances, SSM also needs to be able to assume a role to securely run commands on the instances; you have only done the first step. All the steps are described in the AWS documentation for SSM.
However, I strongly recommend you use the Quick Setup feature in Systems Manager to set everything up for you in no time!
In AWS Console:
Go to Systems Manager
Click on Quick Setup
Leave all the defaults
In the Targets box at the bottom, select Choose instances manually and tick your ec2 instance(s)
Finish the setup
It will automatically create the AmazonSSMRoleForInstancesQuickSetup role, assign it to the selected EC2 instance(s), and also create the proper assume-role permissions for SSM
Go to the EC2 console, find the EC2 instance(s), right-click and reboot by choosing Instance State > Reboot
Wait for a couple of minutes
Refresh the page and try to connect via the Session Manager tab
Notes:
It's totally fine, and recommended, to create your EC2 instances in private subnets if you don't need them to be accessible from the internet. However, make sure the private subnet itself has internet access via NAT. It's a hidden requirement of SSM!
Some of the Amazon Linux 2 images, like amzn2-ami-hvm-2.0.20200617.0-x86_64-gp2, do not have a working SSM Agent pre-installed. If the above steps didn't work, recreate your instance from a different AMI and try again.
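Once the instance has rebooted, a quick sanity check from the CLI (not part of the official steps) is to list the instances the agent has registered:

# Shows every registered managed instance; a PingStatus of "Online" means the agent is healthy
aws ssm describe-instance-information \
    --query "InstanceInformationList[].[InstanceId,PingStatus,PlatformName]" \
    --output table

If your instance is missing from this list, the agent never managed to reach the Systems Manager endpoints.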
Could this be because the instances have private IPs only? Previous working examples had public IPs, but I don't want that for this cluster.
If you place your instance in a private subnet (or in a public subnet but without a public IP), then the SSM agent can't connect to the SSM service and thus can't register with it.
There are two solutions to this issue:
Set up VPC interface endpoints for Systems Manager in the private subnet. With these, your instances will be able to reach the SSM service without internet access (see the CLI sketch after this list).
Create a public subnet with a NAT gateway/instance, and set up route tables to route internet traffic from the private subnets to the NAT gateway. This way your private instances will be able to reach the SSM service over the internet through the NAT device.
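For the first option, a minimal CLI sketch; the VPC, subnet and security group IDs and the region below are placeholders, and SSM needs interface endpoints for the ssm, ec2messages and ssmmessages services:

# Create one interface endpoint per SSM-related service in the private subnet
for svc in ssm ec2messages ssmmessages; do
  aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Interface \
      --service-name com.amazonaws.eu-west-1.$svc \
      --subnet-ids subnet-0123456789abcdef0 \
      --security-group-ids sg-0123456789abcdef0 \
      --private-dns-enabled
done

The security group attached to the endpoints must allow inbound HTTPS (port 443) from the instances' subnet.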

My VPC is greyed out when trying to create an EC2 instance?

I am trying to create an instance in my already created VPC network. For some reason, in the middle of the launch, I cannot select my VPC and it only lets me select the default VPC, which is not what I want.
The reason for the grey-out is that the VPC was created with "dedicated" tenancy. Changing the VPC to "default" tenancy will solve the issue. Apparently, there is no option to make that change in the AWS GUI: you either have to delete and re-create the VPC with default tenancy, or modify the tenancy value using the AWS CLI.
To modify the instance tenancy attribute of a VPC using the AWS CLI
Use the modify-vpc-tenancy command to specify the ID of the VPC and instance tenancy value. The only supported value is default.
aws ec2 modify-vpc-tenancy --vpc-id vpc-1a2b3c4d --instance-tenancy default
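To confirm the tenancy value before and after the change (a quick check, reusing the same placeholder VPC ID):

# Returns "default" or "dedicated"
aws ec2 describe-vpcs --vpc-ids vpc-1a2b3c4d --query "Vpcs[0].InstanceTenancy"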
Alright, so the solution was that the AMI was not compatible with the dedicated tenancy of my VPC, so I had to delete and re-create the entire VPC with default tenancy instead of dedicated.
You probably do not have any subnets, private or public, in your VPC. Can you please confirm?
I also want to point out that the AZ may not offer the instance type you selected, which will cause the VPC to be greyed out as well. I was trying to create a t2.medium or smaller in us-east-1c and kept seeing my VPC greyed out until I changed to an M or T3 instance type.

Why can't my ECS service register available EC2 instances with my ELB?

I've got an EC2 launch configuration that uses the ECS-optimized AMI. I've got an auto scaling group that ensures I've got at least two available instances at all times. Finally, I've got a load balancer.
I'm trying to create an ECS service that distributes my tasks across the instances in the load balancer.
After reading the documentation for ECS load balancing, it's my understanding that my ASG should not automatically register my EC2 instances with the ELB, because ECS takes care of that. So, my ASG does not specify an ELB. Likewise, my ELB does not have any registered EC2 instances.
When I create my ECS service, I choose the ELB and also select the ecsServiceRole. After creating the service, I never see any instances available in the ECS Instances tab. The service also fails to start any tasks, with a very generic error of ...
service was unable to place a task because the resources could not be found.
I've been at this for about two days now and can't seem to figure out what configuration settings are not properly configured. Does anybody have any ideas as to what might be causing this to not work?
Update (06/25/2015):
I think this may have something to do with the ECS_CLUSTER user data setting.
In my EC2 auto scaling launch configuration, if I leave the user data input completely empty, the instances are created with an ECS_CLUSTER value of "default". When this happens, I see an automatically-created cluster, named "default". In this default cluster, I see the instances and can register tasks with the ELB as expected. My ELB health check (HTTP) passes once the tasks are registered with the ELB and all is good in the world.
But, if I change that ECS_CLUSTER setting to something custom I never see a cluster created with that name. If I manually create a cluster with that name, the instances never become visible within the cluster. I can't ever register tasks with the ELB in this scenario.
Any ideas?
I had similar symptoms but ended up finding the answer in the log files:
/var/log/ecs/ecs-agent.2016-04-06-03:
2016-04-06T03:05:26Z [ERROR] Error registering: AccessDeniedException: User: arn:aws:sts::<removed>:assumed-role/<removed>/<removed> is not authorized to perform: ecs:RegisterContainerInstance on resource: arn:aws:ecs:us-west-2:<removed>:cluster/MyCluster-PROD
status code: 400, request id: <removed>
In my case, the resource existed but was not accessible. It sounds like OP is pointing at a resource that doesn't exist or isn't visible. Are your clusters and instances in the same region? The logs should confirm the details.
In response to other posts:
You do NOT need public IP addresses.
You do need the ecsServiceRole or an equivalent IAM role assigned to the EC2 instance in order to talk to the ECS service. You must also specify the ECS cluster, which can be done via user data at instance launch or in the launch configuration definition, like so:
#!/bin/bash
echo ECS_CLUSTER=GenericSericeECSClusterPROD >> /etc/ecs/ecs.config
If you fail to do this when the instance launches, you can add it after the instance has launched and then restart the agent service.
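For reference, how you restart the agent depends on which generation of the ECS-optimized AMI you are on; roughly:

# Amazon Linux 1 ECS-optimized AMI (upstart)
sudo stop ecs && sudo start ecs
# Amazon Linux 2 ECS-optimized AMI (systemd)
sudo systemctl restart ecs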
In the end, it ended up being that my EC2 instances were not being assigned public IP addresses. It appears ECS needs to be able to directly communicate with each EC2 instance, which would require each instance to have a public IP. I was not assigning my container instances public IP addresses because I thought I'd have them all behind a public load balancer, and each container instance would be private.
Another problem that might arise is not assigning a role with the proper policy to the Launch Configuration. My role didn't have the AmazonEC2ContainerServiceforEC2Role policy (or the permissions that it contains) as specified here.
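If that is the problem, attaching the managed policy to the instance role fixes it; the role name below is a placeholder for whatever role your launch configuration uses:

# Attach the ECS container-instance managed policy to the instance role
aws iam attach-role-policy \
    --role-name myEcsInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role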
You definitely do not need public IP addresses for each of your private instances. The correct (and safest) way to do this is to set up a NAT gateway and add a route to that gateway in the route table attached to your private subnet.
This is documented in detail in the VPC documentation, specifically Scenario 2: VPC with Public and Private Subnets (NAT).
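A minimal CLI sketch of that setup, with placeholder IDs (the NAT gateway goes in a public subnet, and the route is added to the private subnet's route table):

# Create the NAT gateway in a public subnet, using an existing Elastic IP allocation
aws ec2 create-nat-gateway \
    --subnet-id subnet-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0

# Add a default route in the private subnet's route table pointing at the NAT gateway
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0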
Another gotcha is that the ECS agent stores its state, including the cluster name, in a file under /var/lib/ecs/data.
If the agent first started up with the cluster name 'default', you'll need to delete this file and then restart the agent.
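Something like the following, run on the container instance, should clear the cached state; this assumes the ECS-optimized Amazon Linux AMI and the state directory mentioned above:

sudo stop ecs                     # stop the agent (sudo systemctl stop ecs on Amazon Linux 2)
sudo rm -rf /var/lib/ecs/data/*   # remove the cached agent state, including the stale cluster name
sudo start ecs                    # start the agent so it re-registers using /etc/ecs/ecs.config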
There were several layers of problems in our case. I will list them out, as it might give you some idea of the issues to pursue.
My goal was to have one ECS cluster on one host, but ECS forces you to have two subnets in your VPC, each with one Docker host instance. I was trying to have just one Docker host in one availability zone and could not get it to work.
The other issue was that only one of the subnets had an internet-facing gateway attached to it, so one of them was not publicly accessible.
The end result was that DNS was serving two IPs for my ELB; one of them would work and the other did not, so I was seeing random 404s when accessing the NLB via its public DNS name.

EC2 - Remove EC2 from VPC

I was wondering if it's possible to remove an EC2 instance from a VPC. If so, how can I do it? I was doing some tests and I would like to remove my instances without terminating all of them.
Thanks in advance.
You cannot move an instance between VPC and non-VPC, only launch new instances.
The 'opposite' of VPC is called EC2-Classic. As you can tell by the name, Amazon is deprecating this mode. All new accounts since December 2013 are VPC only. Several new features only work in a VPC (for example, 'Enhanced Networking').
The writing is on the wall: You will need to move to VPC sooner or later. During the migration, you can use EC2 ClassicLink to let your EC2 Classic boxes talk with boxes in your VPC groups.
Remember that when you create an instance, you specify the VPC that it will be launched in. It is not possible to change the VPC without terminating the instance and re-launching it in the new one.
One possible option would be to create an AMI of your currently running instance, and relaunch it in your preferred VPC using that AMI.
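A rough outline of that approach with the CLI; all IDs below are placeholders, and the new instance will get a new instance ID and new IP addresses:

# 1. Create an AMI from the running instance
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "migrate-to-vpc"

# 2. Launch a new instance from that AMI into a subnet of the target VPC
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --subnet-id subnet-0123456789abcdef0 \
    --key-name my-key-pair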

(AWS) Can't launch RDS in my chosen VPC

I'm following AWS's instructions in Scenario 2: VPC with Public and Private Subnets and am having issues at the point where I try to launch a DB server.
When I launch my EC2 instance, all is fine and I am able to assign it to my newly created VPC. However, when it comes to launching the RDS instance, the only VPC available (on step 4, Configure Advanced Settings) is the default VPC (i.e. not the one I created as per their instructions).
Has anyone any idea about this or indeed how to resolve it?
RDS requires a little more setup than an EC2 instance if you want to launch it within a VPC.
Specifically, you need to create:
a DB subnet group within the VPC
a VPC security group for the RDS instance
The documentation is a little buried in the AWS RDS documents. It can be found here:
Creating a DB Instance in a VPC
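As a sketch of those two steps with the CLI (names, IDs, engine and credentials below are placeholders; the DB subnet group needs subnets in at least two Availability Zones):

# 1. Create a DB subnet group from two subnets of your VPC
aws rds create-db-subnet-group \
    --db-subnet-group-name my-vpc-db-subnets \
    --db-subnet-group-description "Subnets for RDS in my VPC" \
    --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210

# 2. Launch the DB instance into that subnet group with a VPC security group
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password ChangeMe12345 \
    --db-subnet-group-name my-vpc-db-subnets \
    --vpc-security-group-ids sg-0123456789abcdef0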