How to Troubleshoot 'Cannot Connect to Proxy' Error - AWS S3

New to AWS and the AWS CLI, I have installed and configured the AWS CLI, and I am simply trying to list the buckets in S3, but I am behind a proxy.
How do I troubleshoot and resolve the following error?
C:\Users\MyUserName\Desktop>aws s3 ls
HTTPSConnectionPool(host='s3.us-east-2.amazonaws.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', error(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')))
The only troubleshooting step I have attempted is to set the HTTP_PROXY and HTTPS_PROXY variables to my IP on port 80.

The key to using the AWS CLI behind a proxy is to configure two environment variables.
The IP address is the address of your proxy server, which is probably not your local IP. Consult with your network administrator to get the correct IP address and basic authentication parameters.
Chrome, IE, and other browsers support proxy servers, so you may already have these parameters set up in your browser. For Chrome, go to Settings and search for "Open proxy settings"; the technique is similar for other browsers.
For Windows:
set HTTP_PROXY=http://a.b.c.d:n
set HTTPS_PROXY=http://w.x.y.z:m
Or for basic authentication:
set HTTP_PROXY=http://username:password@a.b.c.d:n
set HTTPS_PROXY=http://username:password@w.x.y.z:m
For Linux, macOS, or Unix:
export HTTP_PROXY=http://a.b.c.d:n
export HTTPS_PROXY=http://w.x.y.z:m
Or for basic authentication:
export HTTP_PROXY=http://username:password@a.b.c.d:n
export HTTPS_PROXY=http://username:password@w.x.y.z:m
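These variables last only for the current shell session. To persist them (a hedged aside, not part of the original answer), on Windows, setx stores them in the user environment for future sessions:
setx HTTP_PROXY "http://a.b.c.d:n"
setx HTTPS_PROXY "http://w.x.y.z:m"
On Linux, macOS, or Unix, append the export lines to your shell profile (e.g. ~/.bashrc).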
Using an HTTP Proxy

Related

Private MWAA - Snowflake Connection Issue - Amazon Managed Workflows for Apache Airflow

I set up a private Airflow environment in AWS (v2.2.2).
The environment and plugins are up and running. I want to connect to Snowflake but I am getting the error below. (The .whl files are in plugins.zip, installed via requirements.txt.)
snowflake.connector.vendored.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='......snowflakecomputing.com', port=443): Max retries exceeded with url: /session/v1/login-request?request_id=....... (Caused by ConnectTimeoutError(<snowflake.connector.vendored.urllib3.connection.HTTPSConnection object at >, 'Connection to ........snowflakecomputing.com timed out. (connect timeout=60)'))
The same connection works in public MWAA.
I am adding the connection information in the Admin > Connections tab of the UI.
I know the private environment does not have a connection to the internet.
I am aware I need to add some kind of outbound rule or endpoint, but I couldn't figure it out.
I checked the endpoints and couldn't see anything related to Snowflake.
I will also be connecting to Postgres and MySQL DBs and a few APIs, which currently fail as well.
Is there a one-click solution, like adding some kind of outbound rule, or should I be applying everything one by one, and what would that be?
If I want to connect to a Google API, something new; for Snowflake, something new; etc.?
Also, the private MWAA environment is running on an existing VPC that has an IGW attachment, but the subnets MWAA is running in don't have any IGW or NAT attachment (as the documentation suggests).
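For reference, the usual egress path for private subnets like these is a NAT gateway in a public subnet, routed from the private MWAA subnets. A minimal sketch with the AWS CLI (all resource IDs below are hypothetical placeholders):
aws ec2 create-nat-gateway --subnet-id subnet-public1234 --allocation-id eipalloc-abcd1234
aws ec2 create-route --route-table-id rtb-private1234 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-abcd1234567890
A NAT route is the closest thing to a one-click solution: it grants general outbound access, covering Snowflake, Postgres, MySQL, and external APIs alike, whereas VPC endpoints (e.g. PrivateLink) have to be created per service.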

Connection refused error with AWS + Hashicorp Vault

I have configured a HashiCorp Vault server on an EC2 instance. When trying to use Postman to test the transit secrets engine API, I keep getting a connection refused error in Postman. I went full ape mode and opened all ports in the security group inbound rules and it didn't work; I attached an Elastic IP to the instance and that didn't work either. I'm just trying a simple GET and I keep getting the same connection refused error.
When I use curl in the SSH-connected session I have no issues, though. The specified host address is 127.0.0.1:8200; in Postman I replaced that localhost with the public address of the instance (which I obviously censored in the screencap). In the headers there's the token needed to access Vault; for simplicity I was just using the root token.
Postman screencap, if it helps
@Emilio Marchant
I have faced a similar issue (not with Postman, but with telnet). Let's try to understand the problem here.
The issue is with the 127.0.0.1 IP. This is the loopback IP: when you (or your computer) call an IP address, you are usually trying to contact another computer on the internet. However, if you call the IP address 127.0.0.1, you are communicating with the localhost, i.e., in principle, with your own computer.
Reference link: https://www.ionos.com/digitalguide/server/know-how/localhost/
What you can try is below.
Start the Vault dev server with the -dev-listen-address parameter.
Eg:
vault server -dev -dev-listen-address="123.456.789.1:8200"
In the above command, replace '123.456.789.1:8200' with '<your EC2 instance private IP>:8200'.
Next, set the VAULT_ADDR and VAULT_TOKEN parameters as below:
export VAULT_ADDR='http://123.456.789.1:8200'
export VAULT_TOKEN='*****************'
Again, replace 'http://123.456.789.1:8200' with 'http://<your EC2 instance private IP>:8200'.
For VAULT_TOKEN: when you start the Vault dev server, a root token is printed to the console; use that token.
Now try to connect from Postman or using a curl command. It should work.
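For example (a sketch, assuming the placeholder IP from above; /v1/sys/health is Vault's standard health-check endpoint):
curl --header "X-Vault-Token: $VAULT_TOKEN" http://123.456.789.1:8200/v1/sys/health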
Reference question and solution :
How to connect to remote hashicorp vault server
The notable thing here is that the response is "connection refused". This error means the request reached the host, but no process is listening on that port, so the OS actively refused the connection. It also means there is no issue with the firewall: a firewall will cause the connection to either drop (reject) or time out (ignore), but won't give ECONNREFUSED.
The most likely issue is that the Vault server process is not bound to the correct network interface. There must be a configuration option in HashiCorp Vault to set the IP on which to bind. Most servers, by default, bind only to the loopback address, which is accessible only from 127.0.0.1. You need to bind to all network interfaces by changing that to 0.0.0.0. I am not aware of the specific configuration option of HashiCorp Vault, but there has to be something to this effect.
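For reference (an aside, not part of the original answer): in Vault's HCL configuration file, this is the address field of the tcp listener stanza. A minimal sketch, with TLS disabled only for testing:
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"
}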
Possible security issue:
Note that some servers expect you to run them behind a reverse proxy so that you can set up SSL (HTTPS) and other authentication if needed. Applications like Vault servers should not be publicly accessible over HTTP without SSL.

DataPusher is unable to connect to CKAN 2.8

DataPusher is not working with my CKAN 2.8 install. I have DataPusher and CKAN on the same VPS (an Amazon EC2 instance). I cannot curl /api/3/action/resource_show from within the instance, but I can from outside it at the same IP address I can access the CKAN web gui from. I am using the default port settings/followed the official CKAN documentation for setting up CKAN and DataPusher/DataStore.
Upon checking the error logs (specifically datapusher.error.log in /var/log/apache2) the latest message is:
ConnectionError: HTTPConnectionPool(host='{ckan.site_url value, in this case the public IP of the instance}', port=80): Max retries exceeded with url: /api/3/action/resource_show (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f3bb0151490>: Failed to establish a new connection: [Errno 110] Connection timed out',))
I had a similar issue but I used a different approach to solve it.
The system looks up DNS names in the /etc/hosts file before it goes to the external DNS server. I simply pointed my hostname (from the URL) to the local IP address like so:
172.16.22.2 ckan.installation.url
This way, the server connects to itself when it needs to reach ckan.installation.url and users connect to ckan.installation.url (public facing IP) when they need to access the site.
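To verify the mapping from the instance itself (a hedged aside; ckan.installation.url is the placeholder hostname from above, and status_show is a standard CKAN API action):
getent hosts ckan.installation.url
curl http://ckan.installation.url/api/3/action/status_show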
Ultimately the issue is that with an AWS EC2 VPS, your Ubuntu instance is not aware of its public-facing IP address, which is probably what you're using to reach the CKAN web GUI hosted on said VPS.
Ideally the CKAN API could be hit internally, but I have been unable to do so with localhost/127.0.0.1 in place of the VPS's external/public-facing IP address. The issue with setting the CKAN site_url to localhost is that localhost is what you will be directed to from the CKAN web GUI when attempting to use DataPusher (e.g. manually initiating the upload of a resource to the DataStore), and your computer obviously won't know that localhost refers to the CKAN dev server. So in short, the ckan site_url value must be something accessible by both DataPusher and people/devices on the public internet (assuming you want your CKAN instance to be publicly accessible).
The solution here is to open port 80 to the public IP address of the AWS EC2 instance in the inbound rules of the instance's security group. In other words, you are letting the instance hit itself at port 80. It seems inefficient, but I don't have an alternative at the moment. It's better than nothing!
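That inbound rule can also be added from the AWS CLI (a sketch; the security group ID and public IP below are hypothetical placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 203.0.113.10/32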

What are the ports to be opened for Google cloud SDK?

I am supposed to install the Google Cloud SDK on a secured Windows server where even the ports for HTTP (80) and HTTPS (443) are not enabled.
What ports need to be opened to work with the gcloud, gsutil, and bq commands?
I tested the behaviour on my machine; I expected to need merely port 443, because the Google Cloud SDK is based on HTTPS REST API calls.
For example, you can check what is going on behind the scenes with the flag --log-http:
gcloud compute instances list --log-http
Therefore you need an egress rule allowing TCP:443 outbound traffic (a sketch follows below).
With respect to the ingress traffic:
If your firewall is stateful and recognises that, since you opened the connection, it should let the return traffic pass (the most common scenario), then you do not need any rule for the incoming traffic.
Otherwise, you will need to allow TCP:443 incoming traffic as well.
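On a locked-down Windows server such as the one described, the egress rule could look like this (a sketch using the Windows Firewall netsh syntax; the rule name is arbitrary):
netsh advfirewall firewall add rule name="Allow HTTPS egress" dir=out action=allow protocol=TCP remoteport=443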
Update
Therefore you will need to be able to open connections toward:
accounts.google.com:443
*.googleapis.com:443
*:9000 for serialport in case you need this feature
The error below shows that it is port 443:
app> gcloud storage cp C:\Test-file6.txt gs://dl-bugcket-dev/
ERROR: (gcloud.storage.cp) There was a problem refreshing your current auth tokens: HTTPSConnectionPool(host='sts.googleapis.com', port=443): Max retries exceeded with url: /v1/token (Caused by NewConnectionError...
If you run netstat -anb at the same time as you run any gcloud command that needs a remote connection, you will also see the entry below for the app you are using. In my case, PowerShell:
[PowerShell.exe]
TCP 142.174.184.157:63546 40.126.29.14:443 SYN_SENT
Do not use any proxy when checking for the above entry, or gcloud will connect to the proxy and you won't see the actual port. You can do this by creating a new config:
gcloud config configurations create no-proxy-config
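To use it (a hedged aside; proxy/type, proxy/address, and proxy/port are gcloud's standard proxy properties, and unsetting them is a no-op if they were never set in this config):
gcloud config configurations activate no-proxy-config
gcloud config unset proxy/type
gcloud config unset proxy/address
gcloud config unset proxy/port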

aws ec2 caching (always returns 304)

I created a fresh EC2 instance, installed Apache2, and pointed my domain (hamidlab.com) to the IP of this instance. When I browse to my domain it shows the default Apache/Ubuntu page. Then I stopped the apache2 service and tried to access hamidlab.com; it still shows the Apache/Ubuntu default page. Now when I try to access 1.hamidlab.com it says:
Could Not Connect
Description: Could not connect to the requested server host.
and returns a header status code of 502 Connection refused.
I tried with an nginx server; still the same caching issue.
Does AWS have any caching set up?
I am not using any other service than EC2.
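One hedged diagnostic: a 304 is only ever returned to a conditional request (If-Modified-Since/If-None-Match), which browsers send when they hold a cached copy. Fetching the headers with curl, which sends no such headers by default, separates browser caching from anything on the AWS side:
curl -sI http://hamidlab.com/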