I'm writing a Flask API in PyCharm. When I run my code locally, requests that use boto3 to fetch secrets from Secrets Manager take less than a second. However, when I put my code on an EC2 instance, the same call takes about 3 minutes (tried on both t2.micro and m5.large).
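Roughly, the call in question looks like this (a minimal sketch; the secret name and region are placeholders):

import boto3

# Fetch a secret by name; the client picks up credentials from the
# instance profile or my local AWS config.
client = boto3.client("secretsmanager", region_name="us-east-1")
response = client.get_secret_value(SecretId="secretname")
print(response["SecretString"])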
At first I thought it could be a Python issue, so I ran the same request on my EC2 instances through the AWS CLI:
aws secretsmanager get-secret-value --secret-id secretname
It still took about 3 minutes. Why does this happen? Shouldn't this in theory be faster on an EC2 instance than on my local machine?
EDIT: This only happens when the EC2 instance is inside a VPC other than the default VPC.
After fighting with this same issue on our local machines for almost two months, we finally had some forward progress today.
It turns out the problem is related to IPv6.
If your network prefers IPv6, the Secrets Manager domain will resolve to an IPv6 address. For some reason the CLI is unable to make a secure connection over IPv6; only after that attempt times out does it fall back to IPv4 and succeed.
To verify whether you're resolving to an IPv6 address, just ping secretsmanager.us-east-1.amazonaws.com. Don't worry about the ping response; you just want to see which IP address the domain resolves to.
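If you'd rather check from Python, here is a minimal sketch (the region endpoint is an example, adjust to yours):

import socket

# List every address family the endpoint resolves to, in resolver preference order.
host = "secretsmanager.us-east-1.amazonaws.com"
for family, _, _, _, sockaddr in socket.getaddrinfo(host, 443):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])

If IPv6 addresses appear at the top, clients that honour that ordering will try them before IPv4.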
To fix this problem, you now have 3 options:
Figure out your networking issues. This could be something on your machine or router. If in an AWS VPC, check your routing tables and security groups. Make sure you allow outbound IPv6 traffic (::/0).
Reduce the CLI connect timeout to make the IPv6 call fail faster. This will make the IPv4 fallback happen sooner. You may want to give a better timeout value, but the general idea is to add something like this: --cli-connect-timeout 1 (a boto3 equivalent is sketched just after this list).
Disable IPv6. You can either disable IPv6 on your machine/router altogether, or you can adjust your machine to prefer IPv4 for this specific address (See: https://superuser.com/questions/436574/ipv4-vs-ipv6-priority-in-windows-7).
Ultimately, option 1 is the real solution, but since it is so broad, the others might be easier.
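Since the original question uses boto3 rather than the CLI, the analogous tweak for option 2 is a short connect timeout on the client. A minimal sketch, assuming credentials and region come from the usual default chain:

import boto3
from botocore.config import Config

# Fail the (possibly IPv6) connection attempt quickly so the fallback/retry happens sooner.
config = Config(connect_timeout=2, retries={"max_attempts": 3})
client = boto3.client("secretsmanager", config=config)
secret = client.get_secret_value(SecretId="secretname")
print(secret["SecretString"])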
Hopefully this helps someone else maintain a bit of sanity when they hit this.
I had this issue when working from home through the Cisco AnyConnect VPN client. Apparently it blocks anything IPv6.
The solution for me was to disable IPv6 altogether on my laptop.
To do so on macOS:
networksetup -setv6off Wi-Fi # wireless
networksetup -setv6off Ethernet # wired
To re-enable:
networksetup -setv6automatic Wi-Fi # wireless
networksetup -setv6automatic Ethernet # wired
I ran the following commands from my own computer and from an Amazon EC2 t2.nano instance in the ap-southeast-2 region:
aws secretsmanager create-secret --name foo --secret-string 'bar' --region ap-southeast-2
aws secretsmanager get-secret-value --secret-id foo --region ap-southeast-2
aws secretsmanager delete-secret --secret-id foo --region ap-southeast-2
In both cases, each command returned within a second.
Additional:
To test your situation, I did the following (in the Sydney region):
Created a new VPC using the VPC Wizard (just a public subnet)
Launched a new Amazon EC2 instance in the new VPC, with a Role granting permission to access Secrets Manager
Upgraded the AWS CLI on the instance (the installed version didn't know about secretsmanager)
Ran the above commands
They all returned immediately.
Therefore, the problem lies in something specific to your instances or your VPC configuration.
I switched to my phone's hotspot and it worked.
Related
I'm trying to access lambda functions from a Windows VM I have created in EC2 for dev purposes but even a simple 'list functions' command fails to connect
I have tried using the AWS CLI through PowerShell, the dotnet sdk and the VS AWS Toolkit but each of these times out after a long waiting period. I can, however, list other services such as my databases and S3 buckets.
(screenshot: AWS CLI failure message)
(screenshot: VS Toolkit failure message)
I have tried creating a new VM with the same results. I've disabled windows firewall altogether, allowed all traffic through the security group and have VPC endpoints for my subnet (ssm, ec2messages, lambda, ec2).
I have no trouble connecting to the lambda service through my own computer. On the VM, I have modified the .aws/credentials file to match the one on my computer for both the admin and current user but I still can't connect. This tells me that the problem isn't related to my access key credentials.
I'm reaching the end of the troubleshooting options I can think of so any help would be very much appreciated!
Update: using telnet, I cannot connect to lambda.ap-southeast-2, but I can connect to s3.ap-southeast-2 and lambda.ap-southeast-1. It seems lambda.ap-southeast-2 is being blocked somewhere, but it isn't Windows Firewall because that's off, and the same problem happens on Ubuntu VMs.
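The same check can be scripted instead of using telnet; a minimal sketch, using the regional service hostnames on port 443:

import socket

# Compare TCP reachability of the endpoints mentioned above.
for host in ("lambda.ap-southeast-2.amazonaws.com",
             "s3.ap-southeast-2.amazonaws.com",
             "lambda.ap-southeast-1.amazonaws.com"):
    try:
        socket.create_connection((host, 443), timeout=5).close()
        print(host, "reachable")
    except OSError as err:
        print(host, "NOT reachable:", err)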
In the VPC Management Console, I haven't set up any network or DNS firewalls, and my network ACL allows all traffic.
AWS CLI is problematic from within an EC2 instance.
I am on an EC2 instance via PuTTY as well as Windows PowerShell (at different times). The command is: aws ec2 describe-instances
Result & error is:
Hanging and Could not connect to the endpoint URL: "https://ec2.us-east-1.amazonaws.com/"
The same AWS CLI command works and correctly returns JSON output when executed from my local Windows command prompt.
'aws configure' looks OK, with the region set to us-east-1.
The EC2 instance can ping 8.8.8.8 as well as another instance (in the same VPC, subnet, region, etc.) that has a public IP.
I think I have opened everything in the security groups (for troubleshooting purposes):
(screenshot of the security group)
The network ACLs are also left at their defaults:
(screenshot of the network ACL)
These are new, blank EC2 instances. Also worth noting: 'sudo yum update' returns an error:
Could not retrieve mirrorlist http://repo.us-east-1.amazonaws.com/latest/main/mirror.list error was
12: Timeout on http://repo.us-east-1.amazonaws.com/latest/main/mirror.list: (28, 'Resolving timed out after 5515 milliseconds')
Though I have all connections open (for the sake of troubleshooting). Possible clue: I read elsewhere that my internet connection has to be "legit." I am using free hotel Wi-Fi, but this doesn't make sense, as it's the EC2 instance doing the calls, and my Windows command prompt can return results.
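The yum error above says "Resolving timed out", which suggests DNS resolution from inside the VPC is failing rather than traffic being blocked. A quick check I can run on the instance to separate the two (a minimal sketch, using the regional EC2 endpoint):

import socket

host = "ec2.us-east-1.amazonaws.com"

# Step 1: can the instance resolve the name at all?
try:
    addr = socket.gethostbyname(host)
    print("resolved", host, "->", addr)
except OSError as err:
    raise SystemExit(f"DNS resolution failed: {err}")

# Step 2: can it open a TCP connection to the resolved address?
try:
    socket.create_connection((addr, 443), timeout=5).close()
    print("TCP 443 reachable")
except OSError as err:
    print("TCP connection failed:", err)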
Thank you very much.
-Martin
I am trying to connect to a Neptune DB in an AWS instance from my local machine in the office, much like connecting to RDS from the office. Is it possible to connect to Neptune DB from a local machine? Is Neptune DB publicly accessible? Is there any way a developer can connect to Neptune DB from the office?
Neptune does not support public endpoints (endpoints that are accessible from outside the VPC). However, there are a few architectural options with which you can access your Neptune instance from outside your VPC. All of them have the same theme: set up a proxy (an EC2 machine, or an ALB, or something similar, or a combination of these) that resides inside your VPC, and make that proxy accessible from outside your VPC.
It seems like you want to talk to your instance purely for development purposes. The easiest option for that would be to spin up an ALB, and create a target group that points to your instance's IP.
Brief steps (these are intentionally not detailed; please refer to the AWS docs for full instructions):
1. Run: dig +short <your cluster endpoint>
This gives you the current master's IP address.
2. Create an ALB (see the AWS docs on how to do this).
3. Point your ALB's target group at the IP address obtained in step 1. By the end of this step, you should have an ALB listening on PORT-A that forwards requests to IP:PORT, where IP is your database IP (from step 1) and PORT is your database port (the default is 8182).
4. Create a security group that allows inbound traffic from everywhere, i.e. an inbound TCP rule for 0.0.0.0/0 on PORT-A.
5. Attach the security group to your ALB.
Now, from your developer boxes, you can connect to your ALB endpoint on PORT-A, which will internally forward the request to your Neptune instance.
Do check out the ALB docs for details on how to create one and the concepts around it. If you need me to elaborate on any of the steps, feel free to ask.
NOTE: This is not a recommended solution for a production setup. IPs used by Neptune instances are bound to change with failovers and host replacements. Use this solution only for testing purposes. If you want a similar setup for production, feel free to ask a question and we can discuss options.
As already mentioned, you can't access Neptune directly from outside your VPC.
The following link describes another solution using a SSH tunnel: connecting-to-aws-neptune-from-local-environment.
I find it much easier for testing and development purpose.
You can create the SSH tunnel with Putty as well.
Reference: https://github.com/M-Thirumal/aws-cloud-tutorial/blob/main/neptune/connect_from_local.md
Connect to AWS Neptune from the local system
There are many ways to connect to Amazon Neptune from outside of the VPC, such as setting up a load balancer or VPC peering.
Amazon Neptune DB clusters can only be created in an Amazon Virtual Private Cloud (VPC). One way to connect to Amazon Neptune from outside of the VPC is to set up an Amazon EC2 instance as a proxy server within the same VPC. With this approach, you will also want to set up an SSH tunnel to securely forward traffic to the VPC.
Part 1: Set up an EC2 proxy server.
Launch an Amazon EC2 instance located in the same region as your Neptune cluster. In terms of configuration, Ubuntu can be used. Since this is a proxy server, you can choose the lowest resource settings.
Make sure the EC2 instance is in the same VPC as your Neptune cluster. To find the VPC for your Neptune cluster, check the console under Neptune > Subnet groups. The instance's security group needs to allow traffic in and out on port 22 for SSH and port 8182 for Neptune.
Lastly, make sure you save the key-pair file (.pem) and note the directory for use in the next step.
Part 2: Set up an SSH tunnel.
This step can vary depending on if you are running Windows or MacOS.
Modify your hosts file to map your Neptune endpoint to localhost.
Windows: Open the hosts file as an Administrator (C:\Windows\System32\drivers\etc\hosts)
MacOS: Open Terminal and type in the command: sudo nano /etc/hosts
Add the following line to the hosts file, replacing the text with your Neptune endpoint address.
127.0.0.1 localhost YourNeptuneEndpoint
Open Command Prompt as an Administrator for Windows or Terminal for MacOS and run the following command. For Windows, you may need to run SSH from C:\Users\YourUsername\
ssh -i path/to/keypairfilename.pem ec2-user@yourec2instanceendpoint -N -L 8182:YourNeptuneEndpoint:8182
The -N flag prevents an interactive shell session on the EC2 instance and only forwards ports. On the first successful connection you will be asked whether you want to continue connecting; type yes and press Enter.
To test the success of your local graph-notebook connection to Amazon Neptune, open a browser and navigate to:
https://YourNeptuneEndpoint:8182/status
You should see a report, similar to the one below, indicating the status and details of your specific cluster:
{
    "status": "healthy",
    "startTime": "Wed Nov 04 23:24:44 UTC 2020",
    "dbEngineVersion": "1.0.3.0.R1",
    "role": "writer",
    "gremlin": {
        "version": "tinkerpop-3.4.3"
    },
    "sparql": {
        "version": "sparql-1.1"
    },
    "labMode": {
        "ObjectIndex": "disabled",
        "DFEQueryEngine": "disabled",
        "ReadWriteConflictDetection": "enabled"
    }
}
Close Connection
When you're ready to close the connection, use Ctrl+D to exit.
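Once the tunnel is up, you can also query the cluster from local Python rather than the browser. A minimal sketch, assuming the gremlinpython package is installed and the hosts-file mapping above is in place (so the endpoint name resolves to 127.0.0.1):

from gremlin_python.driver import client

# The hosts-file entry makes this name resolve to localhost, so traffic goes
# through the SSH tunnel while TLS still sees the expected hostname.
gremlin_client = client.Client("wss://YourNeptuneEndpoint:8182/gremlin", "g")
print(gremlin_client.submit("g.V().limit(1).count()").all().result())
gremlin_client.close()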
Hi, you can connect to Neptune DB by using the Gremlin console on your local machine.
USE THIS LINK to set up your local Gremlin console; it works for me with Gremlin version 3.3.2.
You only have to update remote.yaml with your own URL and port.
I'm attempting to run a web server that uses an RDS database, inside a Docker container on EC2.
I've set up the security groups so the EC2 host's role is allowed to access the RDS instance, and if I try to access it from the host machine directly, everything works correctly.
However, when I run a simple container on the host and attempt to access the RDS instance, it gets blocked as if the security group weren't letting it through. After a bunch of trial and error, it seems the container's requests aren't appearing to come from the EC2 host, so the firewall says no.
I was able to work around this in the short run by setting --net=host on the Docker container; however, this breaks a lot of great Docker networking functionality, like being able to map ports (i.e., now I need to make sure each instance of the container listens on a different port by hand).
Has anyone found a way around this? It seems like a pretty big limitation to running containers in AWS if you're actually using any AWS resources.
Yes, containers do hit the public IPs of RDS. But you do not need to tune low-level Docker options to allow your containers to talk to RDS. The ECS cluster and the RDS instance have to be in the same VPC and then access can be configured through security groups. The easiest way to do this is to:
Navigate to the RDS instances page
Select the DB instance and drill in to see details
Click on the security group id
Navigate over to the Inbound tab and choose Edit
And ensure there is a rule of type MySQL/Aurora with source Custom
When entering the custom source, just start typing in the name of the ECS cluster and the security group name will be auto-completed for you
This tutorial has screenshots that illustrate where to go.
Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.
Figured out what was happening, posting here in case it helps anyone else.
Requests from within the container were hitting the public IP of the RDS instance rather than the private one (which is what the security group rules match on). It turned out the DNS inside the Docker container was using Google's 8.8.8.8 resolver, which doesn't do the AWS magic of resolving the RDS endpoint to its private IP.
So, for instance, pointing the Docker daemon at the VPC resolver fixes it:
DOCKER_OPTS="--dns 10.0.0.2 -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -g /mnt/docker"
The inbound rule for the RDS should be set to the private IP of the EC2 instance rather than the public IPv4.
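To confirm this diagnosis from inside a container, you can check what the endpoint resolves to and whether it is a private address. A minimal sketch (the endpoint name is a placeholder for your own RDS endpoint):

import ipaddress
import socket

# Placeholder endpoint; substitute your instance's endpoint.
endpoint = "mydb.abc123.us-east-1.rds.amazonaws.com"

addr = socket.gethostbyname(endpoint)
is_private = ipaddress.ip_address(addr).is_private
print(endpoint, "->", addr, "(private)" if is_private else "(public - wrong resolver?)")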
As @adamneilson mentions, setting the Docker options is your best bet. Here is how to discover your Amazon DNS server on the VPC. The section 'Enabling Docker Debug Output' in the Amazon EC2 Container Service Developer Guide's troubleshooting page also mentions where the Docker options file is.
Assuming your VPC uses the 10.0.0.0/24 block, the DNS server would be 10.0.0.2 (the VPC base address plus two).
For CentOS, Red Hat and Amazon:
sed -i -r 's/(^OPTIONS=\")/\1--dns 10.0.0.2 /g' /etc/sysconfig/docker
For Ubuntu and Debian:
sed -i -r 's/(^OPTIONS=\")/\1--dns 10.0.0.2 /g' /etc/default/docker
When I tried to connect to AWS RDS from inside a Docker container, I got an "Access denied for user 'username'@'xxx.xx.xxx.x' (using password: YES)" error.
To solve this issue, I did the following two things:
I created a new user and granted it privileges:
mysql> CREATE USER 'newuser'@'%' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
mysql> FLUSH PRIVILEGES;
I added the public DNS address 8.8.8.8 to the Docker container when running it, so that the container can resolve the IP address of the AWS RDS endpoint from its domain name:
$ docker run --name backend-app --dns=8.8.8.8 -p 8000:8000 -d backend-app
Then I successfully connected to AWS RDS from inside the Docker container.
Note: I tried the second way first, but it didn't solve the connection problem on its own. Only when I applied both did the connection succeed.
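A quick way to test the connection from inside the container is a small script like this (a sketch assuming the PyMySQL driver, with a placeholder endpoint and the credentials created above):

import pymysql

# Placeholder endpoint; substitute your own RDS endpoint and credentials.
connection = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    user="newuser",
    password="password",
    connect_timeout=5,
)
with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())
connection.close()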
I am using Chef to create Amazon EC2 instances inside a VPC. I have allotted an Elastic IP to the new instance using the --associate-eip option in knife ec2 server create. How do I bootstrap it without a gateway machine? It gets stuck at "Waiting for sshd" because it uses the private IP of the newly created server to SSH into it, even though it has an Elastic IP allocated.
Am I missing anything? Here is the command I used.
bundle exec knife ec2 server create --subnet <subnet> --security-group-ids
<security_group> --associate-eip <EIP> --no-host-key-verify --ssh-key <keypair>
--ssh-user ubuntu --run-list "<role_list>"
--image ami-59590830 --flavor m1.large --availability-zone us-east-1b
--environment staging --ebs-size 10 --ebs-no-delete-on-term --template-file
<bootstrap_file> --verbose
Is there any other work-around/patch to solve this issue?
Thanks in advance
I finally got around the issue by using the --server-connect-attribute option, which is supposed to be used along with a --ssh-gateway attribute.
Add --server-connect-attribute public_ip_address to the above knife ec2 server create command, which will make knife use the public_ip_address of your server.
Note: This hack works with knife-ec2 (0.6.4). Refer to def ssh_connect_host here.
Chef will always use the private IP while registering the EC2 nodes. You can get this working by having your chef server inside the VPC as well. Definitely not a best practice.
The other workaround is to let your Chef server be outside of the VPC. Instead of bootstrapping the instance using the knife ec2 command, follow the instructions over here.
This way you will bootstrap your node from the node itself and not from the Chef-server/workstation.