Not able to update EC2 Linux instance with command 'sudo yum update' - amazon-web-services

When I try to update my EC2 Amazon Linux instance, I get the following error:
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Could not retrieve mirrorlist http://amazonlinux.ap-south-1.amazonaws.com/2/core/latest/x86_64/mirror.list error was
12: Timeout on http://amazonlinux.ap-south-1.amazonaws.com/2/core/latest/x86_64/mirror.list: (28, 'Connection timed out after 5000 milliseconds')
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
Any help would be much appreciated.

Your instance does not have access to the internet.
You can resolve this in the following ways:
If your instance is running in a public subnet, make sure it has a public IP attached. Also check that the route table associated with that subnet has a route 0.0.0.0/0 pointing to an internet gateway (a CLI sketch of this check is shown below).
If your instance is running in a private subnet, make sure you have created a NAT Gateway in a public subnet, that the private subnet's route table has a route 0.0.0.0/0 pointing to the NAT Gateway, and that the subnet is associated with that route table.
Check that the security group associated with the instance allows outbound traffic.
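For reference, a rough AWS CLI sketch of that route check; the subnet, route table, and internet gateway IDs below are placeholders to replace with your own:
# find the route table associated with the instance's subnet
aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0"
# if the default route is missing, add one pointing at the internet gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0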

You are probably in a private subnet (i.e. a subnet without a 0.0.0.0/0 route to the outside world).
If you want to connect to the outside world, you need a NAT gateway in a public subnet, which in turn has a route to an Internet Gateway.
EC2 -> NAT -> IGW
This is the best AWS troubleshooting page I've found (early 2021)
If you don't want to connect to the outside world, you need a VPC endpoint which allows connectivity to specific AWS services from a private subnet. I have never got this to work.
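For completeness, creating such an endpoint for S3 (which is where the Amazon Linux repositories live) looks roughly like this; the VPC ID, region, and route table ID are placeholders:
# gateway endpoint for S3, attached to the private subnet's route table
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.ap-south-1.s3 --route-table-ids rtb-0123456789abcdef0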

Verify that the security group attached to the instance is allowing all inbound and outbound connections.
I don't know what specific network rule was missing for these updates, but public SSH, HTTP, and HTTPS weren't enough for me, so I simply allowed all traffic for a brief time to run the updates.
(Yum pulls its packages over HTTP/HTTPS rather than FTP, so outbound 80/443 in the security group should normally be enough; if it isn't, check whether a stateless network ACL is dropping the return traffic on ephemeral ports. Feel free to edit this answer if you know specifically what was missing in your case.)
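If you'd rather not leave everything open, a narrower sketch is to allow only outbound HTTP/HTTPS, which is what yum itself uses; the security group ID is a placeholder:
# allow outbound HTTP and HTTPS from the instance's security group
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,IpRanges=[{CidrIp=0.0.0.0/0}]'
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=0.0.0.0/0}]'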

If you have an S3 gateway endpoint on your subnet's route table with a restrictive policy, it can cause yum to fail, because the Amazon Linux repositories are served from S3. To fix this, try adding the following policy to the S3 endpoint (a CLI sketch for attaching it follows the policy):
{
  "Statement": [
    {
      "Sid": "Amazon Linux AMI Repository Access",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::packages.*.amazonaws.com/*",
        "arn:aws:s3:::repo.*.amazonaws.com/*"
      ]
    }
  ]
}
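If you prefer the CLI to the console, attaching that policy looks roughly like this; save the JSON above as policy.json, and note the endpoint ID is a placeholder:
aws ec2 modify-vpc-endpoint --vpc-endpoint-id vpce-0123456789abcdef0 --policy-document file://policy.json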

Related

AWS App Runner service cannot access Internet when added to a VPC

I've set up an AWS App Runner service, which works fine. Currently for networking it's configured as public access, but I'd like to change this to a VPC so that I can connect the service to an RDS instance without having to open the database up to the world.
When I change the networking config to use my default security group, the service is unable to access the Internet. Cloning a git repo from Bitbucket brings up the error: ssh: Could not resolve hostname bitbucket.org: Try again
... and trying to run npm install brings up:
npm ERR! network request to https://registry.npmjs.org/gulp failed, reason: connect ETIMEDOUT 104.16.24.35:443
My security group has an outgoing rule allowing all traffic out to any destination. My RDS instance is in the same VPC/security group and I'm able to connect to this without issue (currently I've opened up port 3306 to the world). Everything else I've read from a bunch of Googling seems fine: route tables, internet gateways, firewall rules, etc.
Any help would be much appreciated!
Probably too late to be really helpful, but moving App Runner into a VPC sends all outgoing traffic through the VPC.
The two options given in the docs are:
Adding NAT gateways to each VPC
Setting up VPC endpoints
This is documented in the first bullet point of the "Considerations when selecting a subnet" section:
https://docs.aws.amazon.com/apprunner/latest/dg/network-vpc.html

AWS EKS NodeGroup "Create failed": Instances failed to join the kubernetes cluster

I am able to create an EKS cluster but when I try to add nodegroups, I receive a "Create failed" error with details:
"NodeCreationFailure": Instances failed to join the kubernetes cluster
I tried a variety of instance types and larger volume sizes (60 GB) without luck.
Looking at the EC2 instances, I only see the problem below. However, it is difficult to do anything about it since I'm not directly launching the EC2 instances (the EKS NodeGroup UI wizard is doing that).
How would one move forward given the failure happens even before I can jump into the ec2 machines and "fix" them?
Amazon Linux 2
Kernel 4.14.198-152.320.amzn2.x86_64 on an x86_64
ip-187-187-187-175 login: [ 54.474668] cloud-init[3182]: One of the
configured repositories failed (Unknown),
[ 54.475887] cloud-init[3182]: and yum doesn't have enough cached
data to continue. At this point the only
[ 54.478096] cloud-init[3182]: safe thing yum can do is fail. There
are a few ways to work "fix" this:
[ 54.480183] cloud-init[3182]: 1. Contact the upstream for the
repository and get them to fix the problem.
[ 54.483514] cloud-init[3182]: 2. Reconfigure the baseurl/etc. for
the repository, to point to a working
[ 54.485198] cloud-init[3182]: upstream. This is most often useful
if you are using a newer
[ 54.486906] cloud-init[3182]: distribution release than is
supported by the repository (and the
[ 54.488316] cloud-init[3182]: packages for the previous
distribution release still work).
[ 54.489660] cloud-init[3182]: 3. Run the command with the
repository temporarily disabled
[ 54.491045] cloud-init[3182]: yum --disablerepo=<repoid> ...
[ 54.491285] cloud-init[3182]: 4. Disable the repository
permanently, so yum won't use it by default. Yum
[ 54.493407] cloud-init[3182]: will then just ignore the repository
until you permanently enable it
[ 54.495740] cloud-init[3182]: again or use --enablerepo for
temporary usage:
[ 54.495996] cloud-init[3182]: yum-config-manager --disable
Adding another reason to the list:
In my case the nodes were running in private subnets and I hadn't configured a private endpoint under API server endpoint access.
After updating that setting, the node groups weren't updated automatically, so I had to recreate them.
In my case, the problem was that I was deploying my node group in a private subnet, but this private subnet had no NAT gateway associated, and hence no internet access. What I did was (see the CLI sketch after this list):
Create a NAT gateway in a public subnet
Create a new route table with the following routes (the second one is the internet-access route, through the NAT gateway):
Destination: VPC-CIDR-block Target: local
Destination: 0.0.0.0/0 Target: NAT-gateway-id
Associate the private subnet with the route table created in the second step.
After that, the node groups joined the cluster without problems.
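A rough CLI equivalent of those steps; all IDs are placeholders, and the local route to the VPC CIDR is added to every route table automatically:
# 1. NAT gateway in a public subnet, using an existing Elastic IP allocation
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-0123456789abcdef0
# 2. route table for the private subnet with a default route through the NAT gateway
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0
# 3. associate the private subnet with that route table
aws ec2 associate-route-table --route-table-id rtb-PRIVATE --subnet-id subnet-PRIVATE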
I noticed there was no answer here, but about 2k visits to this question over the last six months. There seem to be a number of reasons why you could be seeing these failures. To regurgitate the AWS documentation found here:
https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
The aws-auth-cm.yaml file does not have the correct IAM role ARN for
your nodes. Ensure that the node IAM role ARN (not the instance
profile ARN) is specified in your aws-auth-cm.yaml file. For more
information, see Launching self-managed Amazon Linux nodes.
The ClusterName in your node AWS CloudFormation template does not
exactly match the name of the cluster you want your nodes to join.
Passing an incorrect value to this field results in an incorrect
configuration of the node's /var/lib/kubelet/kubeconfig file, and the
nodes will not join the cluster.
The node is not tagged as being owned by the cluster. Your nodes must
have the following tag applied to them, where <cluster-name> is
replaced with the name of your cluster.
Key: kubernetes.io/cluster/<cluster-name>
Value: owned
The nodes may not be able to access the cluster using a public IP
address. Ensure that nodes deployed in public subnets are assigned a
public IP address. If not, you can associate an Elastic IP address to
a node after it's launched. For more information, see Associating an
Elastic IP address with a running instance or network interface. If
the public subnet is not set to automatically assign public IP
addresses to instances deployed to it, then we recommend enabling that
setting. For more information, see Modifying the public IPv4
addressing attribute for your subnet. If the node is deployed to a
private subnet, then the subnet must have a route to a NAT gateway
that has a public IP address assigned to it.
The STS endpoint for the Region that you're deploying the nodes to is
not enabled for your account. To enable the region, see Activating and
deactivating AWS STS in an AWS Region.
The worker node does not have a private DNS entry, resulting in the
kubelet log containing a node "" not found error. Ensure that the VPC
where the worker node is created has values set for domain-name and
domain-name-servers as Options in a DHCP options set. The default
values are domain-name:<region>.compute.internal and
domain-name-servers:AmazonProvidedDNS. For more information, see DHCP
options sets in the Amazon VPC User Guide.
I myself had an issue with the tagging where I needed an uppercase letter. In reality, if you can use another avenue to deploy your EKS cluster, I would recommend it (eksctl, the AWS CLI, or even Terraform).
I will try to keep the answer short by highlighting a few things that commonly go wrong up front.
1. Add the IAM role which is attached to the EKS worker nodes to the aws-auth config map in the kube-system namespace. Ref
2. Log in to the worker node that was created but failed to join the cluster, and try connecting to the API server from inside using nc. E.g.: nc -vz 9FCF4EA77D81408ED82517B9B7E60D52.yl4.eu-north-1.eks.amazonaws.com 443
3. If you are not using the EKS node AMI from the drop-down in the AWS Console (which means you are using a launch template or launch configuration in EC2), don't forget to add the user data section in the launch template. Ref
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
4. Check the EKS worker node IAM role policy and see that it has the appropriate permissions added. AmazonEKS_CNI_Policy is a must.
5. Your nodes must have the following tag applied to them, where <cluster-name> is replaced with the name of your cluster:
kubernetes.io/cluster/<cluster-name>: owned
I hope your problem lies within this list.
Ref: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
https://aws.amazon.com/premiumsupport/knowledge-center/resolve-eks-node-failures/
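Two quick checks that go with points 1 and 4, assuming kubectl is configured against the cluster; the node role name is a placeholder:
# confirm the node IAM role is mapped in the aws-auth ConfigMap
kubectl -n kube-system get configmap aws-auth -o yaml
# check which policies are attached to the node role
aws iam list-attached-role-policies --role-name my-eks-node-role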
Initially I had the NAT gateway in my private subnet. Then I moved the NAT gateway back to a public subnet, which fixed the issue.
The Terraform code is as follows:
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.dev-vpc.id
tags = {
Name = "dev-IG"
}
}
resource "aws_eip" "lb" {
depends_on = [aws_internet_gateway.gw]
vpc = true
}
resource "aws_nat_gateway" "natgw" {
allocation_id = aws_eip.lb.id
subnet_id = aws_subnet.dev-public-subnet.id
depends_on = [aws_internet_gateway.gw]
tags = {
Name = "gw NAT"
}
}
Try adding a tag to your private subnets where the worker nodes are deployed.
kubernetes.io/cluster/<cluster_name> = shared
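For example, via the CLI; the subnet ID and cluster name are placeholders:
aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags "Key=kubernetes.io/cluster/my-cluster,Value=shared"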
We need to check what type of NAT gateway we configured. It should be a public one, but in my case I had configured it as private.
Once I changed it from private to public, the issue was resolved.
The Auto Scaling group logs showed that we hit a quota limit:
Launching a new EC2 instance. Status Reason: You've reached your quota for maximum Fleet Requests for this account. Launching EC2 instance failed.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/fleet-quotas.html
I had a similar issue and none of the provided solutions worked. After some investigation, running:
journalctl -f -u kubelet
showed this in the log:
Error: failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false
So naturally, the solution is to disable swap with:
swapoff -a
After that it worked fine; the node was registered and the output looked fine when checked with journalctl and systemctl status kubelet.
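If anyone needs it, the disable-swap step I used looked roughly like this; the sed line is optional and simply comments out swap entries so the setting survives a reboot:
sudo swapoff -a
# keep swap off across reboots by commenting out any swap lines in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab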
The main problem here is the network subnets: there are public and private subnets. Check that your private subnets route through a NAT gateway. If they do not, add a route to the NAT gateway for the private subnets, and also check that the public subnets are attached to the internet gateway.

Cannot connect internet with EC2 instance in private subnet

I am trying to install Docker on my EC2 instance in a private subnet, which I SSH into via a jump box. I even tried allowing ALL TRAFFIC in my security group, but it still didn't work.
sudo yum update -y
Loaded plugins: priorities, update-motd, upgrade-helper
Could not retrieve mirrorlist http://repo.us-west-1.amazonaws.com/latest/main/mirror.list error was
12: Timeout on http://repo.us-west-1.amazonaws.com/latest/main/mirror.list: (28, 'Connection timed out after 5001 milliseconds')
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable <repoid>
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: amzn-main/latest
An Amazon EC2 instance in a private subnet cannot directly communicate with the Internet. This is intentional, since it is a private subnet.
To allow such connectivity:
Create a NAT Gateway in a public subnet in the same VPC
Modify the Route Table for the private subnet to direct traffic destination 0.0.0.0/0 to the NAT Gateway
When the EC2 instance tries to access the Internet, its request will be sent to the NAT Gateway. The NAT Gateway will make the request on behalf of the instance and will send the response back to the instance. This allows outbound connectivity to the Internet while protecting the instance from inbound connectivity.
It is not strictly necessary to use private subnets. Security Groups can perform a similar function at the instance level rather than at the subnet level.
In this situation, when the EC2 instance is inside a VPC and we want to allow it to connect to the outside world through the internet, we need to add outbound rules to the instance's security group.
For example, I wanted to download Docker onto the EC2 instance from the Amazon repository, so I added HTTP outbound rules to the security group.

Secrets manager extremely slow in EC2s via awscli and boto3

I'm writing a Flask API in PyCharm. When I run my code locally, requests using boto3 to get secrets from Secrets Manager take less than a second. However, when I put my code on an EC2 instance, it takes about 3 minutes (tried on both t2.micro and m5.large).
At first I thought it could be a Python issue, so I ran it on my EC2s through the AWS CLI using:
aws secretsmanager get-secret-value --secret-id secretname
It still took about 3 minutes. Why does this happen? Shouldn't this in theory be faster on an EC2 instance than on my local machine?
EDIT: This only happens when the EC2 is inside a VPC different than the default VPC.
After fighting with this same issue on our local machines for almost two months, we finally had some forward progress today.
It turns out the problem is related to IPv6.
If you're using IPv6, then the secrets manager domain will resolve to an IPv6 address. For some reason the cli is unable to make a secure connection using IPv6. After it times out, the cli falls back to IPv4 and then it succeeds.
To verify if you're resolving to an IPv6 address, just ping secretsmanager.us-east-1.amazonaws.com. Don't worry about the ping response, you just want to see the IP address the domain resolves to.
To fix this problem, you now have 3 options:
Figure out your networking issues. This could be something on your machine or router. If in an AWS VPC, check your routing tables and security groups. Make sure you allow outbound IPv6 traffic (::/0).
Reduce the CLI connect timeout to make the IPv6 call fail faster. This will make the IPv4 fallback happen sooner. You may want to tune the timeout value, but the general idea is to add something like this: --cli-connect-timeout 1
Disable IPv6. You can either disable IPv6 on your machine/router altogether, or you can adjust your machine to prefer IPv4 for this specific address (See: https://superuser.com/questions/436574/ipv4-vs-ipv6-priority-in-windows-7).
Ultimately, option 1 is the real solution, but since it is so broad, the others might be easier.
Hopefully this helps someone else maintain a bit of sanity when they hit this.
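Two quick commands for checking and working around the issue; the region and secret name are placeholders:
# see which address the endpoint resolves to
ping -c 1 secretsmanager.us-east-1.amazonaws.com
# make the IPv6 attempt fail fast so the CLI falls back to IPv4 sooner
aws secretsmanager get-secret-value --secret-id secretname --cli-connect-timeout 1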
I had this issue when working from home through the Cisco AnyConnect VPN client. Apparently it blocks anything IPv6.
The solution for me was to disable IPv6 altogether on my laptop.
To do so on macOS:
networksetup -setv6off Wi-Fi # wireless
networksetup -setv6off Ethernet # wired
To re-enable:
networksetup -setv6automatic Wi-Fi # wireless
networksetup -setv6automatic Ethernet # wired
I ran the following commands from my own computer and from an Amazon EC2 t2.nano instance in the ap-southeast-2 region:
aws secretsmanager create-secret --name foo --secret-string 'bar' --region ap-southeast-2
aws secretsmanager get-secret-value --secret-id foo --region ap-southeast-2
aws secretsmanager delete-secret --secret-id foo --region ap-southeast-2
In both cases, each command returned within a second.
Additional:
To test your situation, I did the following (in the Sydney region):
Created a new VPC using the VPC Wizard (just a public subnet)
Launched a new Amazon EC2 instance in the new VPC, with a Role granting permission to access Secrets Manager
Upgraded the AWS CLI on the instance (the installed version didn't know about secretsmanager)
Ran the above commands
They all returned immediately.
Therefore, the problem lies with something to do with your instances or your VPC.
I made a hotspot from my phone and it worked.

Amazon EC2 instance can't update or use yum

I am using Amazon's tutorial for installing a LAMP server. The first several instructions involve using yum, but every single way I have tried to do it has resulted in the same message. I have found a few other recent questions about the same issue, none of which change anything on my setup.
Here is the message:
Loaded plugins: priorities, update-motd, upgrade-helper
Could not retrieve mirrorlist http://repo.us-east-1.amazonaws.com/latest/main/mirror.list error was
12: Timeout on http://repo.us-east-1.amazonaws.com/latest/main/mirror.list: (28, 'Connection timed out after 10001 milliseconds')
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable <repoid>
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: amzn-main/latest
I have done this same thing before without running into any problems, using the same tutorial, but it was several months ago. I don't know what has changed but my meager experience is keeping me from figuring it out.
Looks like the host is having trouble contacting the yum server. Make sure the instance has outbound internet access (check security groups, etc.). If the instance is in a VPC and the security groups look good, you may need to use a NAT appliance or attach an Elastic IP.
Good luck-
If you have an S3 endpoint on your VPC, it can cause yum to fail, since the repo files are stored in S3. To fix this, add the following policy to the S3 VPC endpoint:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::repo.eu-west-1.amazonaws.com",
        "arn:aws:s3:::repo.eu-west-1.amazonaws.com/*"
      ]
    }
  ]
}
Replace eu-west-1 with the relevant region code that your S3 endpoint is in.
A lot of first-time users of Amazon EC2 run into this issue. In my experience, it's usually the result of not setting the allowed outgoing connections in the instance's security group. The tutorial Amazon has for configuring Amazon Linux instances only mentions setting the incoming connections, so it's easy to forget that you never set the allowed outgoing ones. Simply allowing HTTP and HTTPS requests to any IP address should fix the issue.
I had the same problem and it was related to name resolution. I used the following to correct it:
EC2 instance has no public DNS
This is the explanation from Mat:
Go to console.aws.amazon.com
Go To Services -> VPC
Open Your VPCs
select your VPC connected to your EC2 and
Edit Summary ---> change DNS hostnames to Yes (the CLI equivalent is shown below)
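The CLI equivalent, assuming a placeholder VPC ID:
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"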
If you're using a NACL on the subnet where the EC2 instance is located:
Quick fix
You will have to open inbound ephemeral ports for the yum update.
For example, add an inbound rule like rule #100 in the original screenshot, allowing the ephemeral TCP port range (e.g. 1024-65535) from 0.0.0.0/0.
Notice that this is still necessary even if the outbound rules allow all traffic.
Why did I have to do this?
When yum opens an outbound connection on ports like 80/443, the response comes back on a random high port (an ephemeral port).
Network ACLs are stateless (unlike security groups) and will not allow the returning connection by default.
You can read more here.
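For example, via the CLI; the NACL ID is a placeholder and 1024-65535 is the usual ephemeral range to open:
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 100 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow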
Check whether the outbound entries have been deleted or modified in the assigned security group. Normally the outbound entries are set to "All traffic" and allow any IP.
In my case, the outbound entries had been deleted. I set them back to "All traffic" and it worked.
Just assign the default security group along with the one you may have created. This solved my problem. ;)
I had the same problem, and the way I solved it was by allowing inbound traffic for the HTTPS protocol (port 443) in the security group of the NAT instance. Most of the repositories use HTTPS. Make sure you haven't missed this.
I had the same problem; it turned out another sysadmin had decided to route outbound internet traffic through a proxy. I found this by noticing some weird proxy env settings, dug a little deeper, and then noticed an entry in my /etc/yum.conf file.
I commented out the proxy= line and everything worked again.
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release
#proxy=http://pos-proxy-in-my-way-of-doing-actual-real-work:666
Following up on chadneal's comment, it is also necessary to set DNS resolution to Yes.
Go to console.aws.amazon.com
Go To Services -> VPC
Open Your VPCs
Select your VPC connected to your EC2
Click Edit DNS resolution and set it to Yes (a CLI equivalent is shown below)
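Or from the CLI, with a placeholder VPC ID:
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"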
I was getting the same exact error message from yum as described in the question. In my case I had a NACL that allowed all outgoing traffic but restricted incoming traffic to HTTP/HTTPS, SSH, and All ICMP. Since NACLs are stateless, running yum failed because the incoming ephemeral-port connections that yum relies on were not explicitly allowed and were therefore dropped.
I was also getting the same error since last week and tried almost everything, but was not able to install the server and start the httpd service.
I resolved it by allowing all traffic in/out in both the security group and the NACL. Try it; it will definitely be resolved.
Check internet connectivity on your EC2 instance by pinging:
ping google.com
You will get a response if you have working internet there.
If not, go to the /etc/resolv.conf file and add the lines below:
nameserver 8.8.8.8
nameserver 1.1.1.1
nameserver 1.0.0.1
Now check whether the internet is working.
If yes, you can easily resume your work.
Also, if you are unable to get any DNS working, check your DHCP options set. I had left an old one in place, and when I cleaned up a project involving Active Directory integrations, it broke. The answer was simply to change back to the original/saved options.
The problem can occur at both levels, security groups and NACLs. In my case, I found that even after modifying the security group the update failed; it succeeded only once the NACLs were also modified.
I ran the following command with sudo (you can't run yum alone if you're not root) and it fixed the issue.
yum-config-manager --save --setopt=dev.mysql.com_downloads_repo_yum_.skip_if_unavailable=true
I had the same problem. In my case, I mistakenly deleted the outbound rules of my security group. Adding outbound rule to allow all traffic solved the problem.
Please follow the steps below (a CLI sketch follows at the end):
Step 1: Go to AWS VPC
Step 2: Find the DHCP options sets
Step 3: If you don't have any DHCP options set, create a new one
Step 4: Add domain name = ap-south-1.compute.internal (if you are using another region, use that region's name)
Step 5: Add domain name servers = AmazonProvidedDNS
Step 6: Then select your VPC --> Actions --> Edit DHCP options set --> select the DHCP options set you just created --> Save
Step 7: Then reboot your instance
Step 8: Log in to your instance and type yum list installed; it will definitely give you the list of installed packages
Thank you
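A rough CLI sketch of the same steps; the region, DHCP options ID, and VPC ID are placeholders:
aws ec2 create-dhcp-options --dhcp-configurations "Key=domain-name,Values=ap-south-1.compute.internal" "Key=domain-name-servers,Values=AmazonProvidedDNS"
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0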
Don't worry, this is a simple error; it does not necessarily mean the instance has no internet connection.
Just create a new file with the vi editor:
vi /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
and then type this to save and quit vi:
:wq
I am using the default VPC, and DNS host resolution is enabled by default, so that wasn't my issue. I followed the advice to add the default security group and that resolved my issue.
The network ACL in your VPC is separate from the instance's inbound and outbound security group rules. I see the VPC's ACL catch people out multiple times every day.
Check for a private hosted zone such as "eu-west-1.s3.eu-west-1.amazonaws.com", and make sure the EC2 instance has internet access; for instance, if your EC2 instance is in a private subnet, make sure its routes point to a NAT gateway or NAT instance.
For me, checking these helped:
NACL
Security Groups
Routing table
This problem is usually caused by not being able to connect to the internet.
Do the following basic test: ping google.com. If there is no answer, it is simple: your server is not connecting to the internet.
To solve this, edit resolv.conf (nano /etc/resolv.conf). When you open the file you may see that it is empty; in my case I wrote these lines:
; generated by /usr/sbin/dhclient-script
search ec2.internal
options timeout:2 attempts:5
nameserver 172.31.0.2
Do this on yours, save the file, and test the ping to google.com again. If it responds normally, you can run yum update -y and it will work.
Hope this helps.
In my case I followed this troubleshooting guide (https://aws.amazon.com/premiumsupport/knowledge-center/ec2-troubleshoot-yum-errors-al1-al2/) and found that the file /etc/yum/vars/awsregion had invalid content. After setting the correct region, yum worked fine.
I experienced the very same issue but the problem was not my Security Group or NACL.
Background:
I added a domain name via Route53.
The domain name continues to be hosted with DiscountASP.net.
The VPC was created manually (no wizard or default).
I created a DHCP options set with my domain name and the four name server IP addresses given to me by Route53.
Analysis:
First, I needed to prove that the problem was not the Security Group or the NACL.
I did this by attaching the default DHCP options set to my new VPC. It worked!
I could do the yum update and "curl http://www.google.com". No problem.
I then created a new DHCP Option Set using my domain name and the Google DNS Servers.
8.8.8.8 & 8.8.4.4
This also worked.
I then took 1 of the 4 DNS Servers IPs provided by Route 53 and used it with my domain name in a new DHCP Option Set.
I ran a test and it failed. I repeated the same test with two of the remaining three DNS server IPs, creating two separate DHCP option sets.
I ran the tests and they both failed.
After checking the spelling of my domain name I could only conclude that the problem was the domain name servers.
Solution:
Amazon Virtual Private Cloud User Guide (PDF page 222)
Amazon DNS Server (Sub topic)
"When you create a VPC, we automatically create a set of DHCP options and associate them with the VPC. This set includes two options: domain-name-servers = AmazonProvidedDNS, and domain-name=domainname-for-your-region. AmazonProvidedDNS is an Amazon DNS server, and this option enables DNS
for instances that need to communicate over the VPC's Internet gateway. The string AmazonProvidedDNS maps to a DNS server running on a reserved IP address at the base of the VPC IPv4 network range, plus two. For example, the DNS Server on a 10.0.0.0/16 network is located at 10.0.0.2."
From page 221:
DHCP: domain-name-servers
Option Name Description
"The IP addresses of up to four domain name servers, or AmazonProvidedDNS. The default DHCP option set specifies AmazonProvidedDNS. If specifying more than one domain name server, separate them with commas."
The IP addresses that it's referring to are for external domain name servers (excluding the possibility that you have created a custom DNS).
So I created my final DHCP Option Set using my domain name and domain-name-servers=AmazonProvidedDNS. It worked!
By the way the VPC DNS Resolution = yes & DNS Hostname = no.
Go to the security group for which the EC2 instance is configured, and verify the fields below in its inbound rules. If these fields are not there, add them by clicking the Edit inbound rules button.
Type: All traffic
Protocol: All
Port range: All
Destination: 0.0.0.0/0
Hope this resolves the issue.
Hey! Here is the perfect answer I found:
go to the outbound rules and add
All Traffic
That's it.