Connection refused error when using Stripe webhooks - django

I'm constantly getting a connection refused error when trying to receive webhooks.
(venv) alexa#main:/etc/nginx/sites-available$ stripe listen --forward-to localhost:5000/webhook/
go package net: built with netgo build tag; using Go's DNS resolver
> Ready! Your webhook signing secret is whsec_************************* (^C to quit)
2021-04-05 18:13:03 --> customer.subscription.updated [evt_1Icwv5HrsuAsSZROjKy4Z5CK]
2021-04-05 18:13:03 [ERROR] Failed to POST: Post "http://localhost:5000/webhook/": dial tcp 127.0.0.1:5000: connect: connection refused)
The port is enabled through my firewall:
To Action From
-- ------ ----
5000 ALLOW Anywhere
5000/tcp ALLOW Anywhere
22/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
5555 ALLOW Anywhere
5000 (v6) ALLOW Anywhere (v6)
5000/tcp (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
5555 (v6) ALLOW Anywhere (v6)
My webapp is running on Ubuntu 20.10
After running curl -v -X POST http://localhost:5000/webhook/, as suggested by Justin Michael in the comments, I got the following:
* Trying 127.0.0.1:5000...
* TCP_NODELAY set
* connect to 127.0.0.1 port 5000 failed: Connection refused
* Failed to connect to localhost port 5000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 5000: Connection refused

Based on your latest comment it sounds like you have Stripe CLI running on your local machine and you're trying to use it to forward Stripe Events to the code running on your Linode.
Stripe CLI is designed for local testing only, and while forwarding from your local machine to your Linode can probably work, it's not recommended.
The best approach here would be to set up an actual webhook endpoint in your Stripe Dashboard or create one using the Stripe API and point it to your Linode.
Alternatively you could install Stripe CLI on the Linode itself and forward locally there, but an actual webhook endpoint is the better way to test, as you'll get real webhook endpoint behavior such as retries.
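A minimal sketch of what that endpoint could look like in the Django app on the Linode (the view name, URL path, and STRIPE_WEBHOOK_SECRET setting are placeholders, and the stripe Python package is assumed to be installed):

# views.py -- minimal sketch of a Stripe webhook receiver for Django.
# STRIPE_WEBHOOK_SECRET is a placeholder for the signing secret of the
# endpoint created in the Dashboard or via the API.
import stripe
from django.conf import settings
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def webhook(request):
    payload = request.body
    sig_header = request.META.get("HTTP_STRIPE_SIGNATURE", "")
    try:
        event = stripe.Webhook.construct_event(
            payload, sig_header, settings.STRIPE_WEBHOOK_SECRET
        )
    except (ValueError, stripe.error.SignatureVerificationError):
        return HttpResponse(status=400)  # bad payload or bad signature
    if event["type"] == "customer.subscription.updated":
        subscription = event["data"]["object"]
        # ... update your local subscription records here ...
    return HttpResponse(status=200)

The endpoint itself can be registered from the Dashboard or, as mentioned above, via the API; a sketch with a placeholder key and URL:

import stripe
stripe.api_key = "sk_live_..."  # placeholder: your secret key
stripe.WebhookEndpoint.create(
    url="https://your-linode-domain/webhook/",  # placeholder: your Linode's public URL
    enabled_events=["customer.subscription.updated"],
)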

Related

How to open port 80 on AWS EC2

I want to open port 80 to allow HTTP connections on my EC2 server. But when I enter "telnet xx.xx.xx.xx 80" in a terminal, the following is displayed:
"Trying xx.xx.xx.xx..."
telnet: Unable to connect to remote host: Connection timed out
In AWS I've opened port 80 by defining an Inbound Rule on the Security group (only one security group is defined for this EC2 server)
I'm using the Public IPv4 address to make a telnet connection
I noticed you have a fresh install; fresh installs do not have any software listening on HTTP by default.
If no application is listening on a port, incoming packets to that port are simply rejected by the operating system. Ports can also be "closed" by a firewall, but since yours is disabled, the ports are open yet unresponsive, which makes them appear closed.
If the port is allowed through the firewall from the terminal using
sudo apt-get install ufw
sudo ufw allow ssh
sudo ufw allow https
sudo ufw allow http
sudo reboot
and allowed in the AWS console as an inbound rule, the port is open but simply not responding, so it is seen as closed. Once you install nginx, or anything else that binds to port 80, external requests to that port will connect successfully and the port will be recognized as open. SSH is recognized as open because 1. it is allowed through the firewall, and 2. sshd is always listening (unlike port 80!).
Before installing nginx, a port test showed port 80 as closed even though it was allowed through the firewall. Installing and enabling nginx:
sudo apt-get install nginx
sudo ufw allow 'Nginx HTTP'
sudo systemctl status nginx
After that, the same port test showed port 80 as open. A simple online port tester tool can be used to verify this.
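If you'd rather check from the command line than from a web-based tester, a small socket check (a sketch; the host and port are placeholders) can distinguish the three cases described above:

# port_check.py -- rough sketch: is the port open, closed, or filtered?
import socket

host, port = "xx.xx.xx.xx", 80  # placeholders: your server's public IP and port

try:
    with socket.create_connection((host, port), timeout=5):
        print("open: something is listening")  # e.g. after installing nginx
except ConnectionRefusedError:
    print("closed: host reachable, but nothing is listening on the port")
except socket.timeout:
    print("filtered: a firewall or security group is dropping the packets")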

Google Cloud Firewall: Port is blocked

I am trying to set up a simple web server in Google Cloud Platform on a Debian machine.
When running a port scan (https://www.ipvoid.com/port-scan/) without any firewall rules, all ports are shown as filtered. When setting up a rule for port 80, the scan gives back that the port is blocked. Am I doing something wrong with the firewall settings?
Thanks in advance!
I reproduced your issue in my GCloud console: port 80 is blocked if the firewall options ("Allow HTTP traffic" / "Allow HTTPS traffic") are not selected when creating an instance.
The firewall rules are set up automatically if those options are selected.
You can also verify your firewall settings in Debian. View the full list of application profiles by running:
$ sudo ufw app list
The WWW profiles are used to manage ports used by web servers:
Output
Available applications:
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
If you inspect the WWW Full profile, it shows that it enables traffic to ports 80 and 443:
$ sudo ufw app info "WWW Full"
Output
Profile: WWW Full
Title: Web Server (HTTP,HTTPS)
Description: Web Server (HTTP,HTTPS)
Ports:
80,443/tcp
Allow incoming HTTP and HTTPS traffic for this profile:
$ sudo ufw allow in "WWW Full"
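As in the EC2 answer above, a port scan will only report 80 as open once something is actually listening on it. For a quick sanity check before installing a full web server, a throwaway test listener based on Python's built-in http.server (run as root, and assuming nothing else is already bound to port 80) is enough:

# serve80.py -- throwaway test listener; serves the current directory on port 80.
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to all interfaces so an external port scan can reach it.
HTTPServer(("0.0.0.0", 80), SimpleHTTPRequestHandler).serve_forever()

Run it with sudo python3 serve80.py and re-run the port scan; once the firewall rule is in place the port should show as open.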

ufw forbids docker container to connect to postgres

On Ubuntu 18.04 with ufw enabled, I run a docker container which is supposed to connect a Django app to a locally installed PostgreSQL server.
Everything runs perfectly when ufw is disabled:
docker-compose -f docker-compose.prod.yml run --rm app sh -c 'python manage.py createsuperuser'
But with ufw enabled I get the following error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Operation timed out
Is the server running on host "host.docker.internal" (172.17.0.1) and accepting
TCP/IP connections on port 5432?
I have the following ufw rules:
$ sudo ufw status
Status: active
To Action From
-- ------ ----
Nginx Full ALLOW Anywhere
OpenSSH ALLOW Anywhere
20/tcp ALLOW Anywhere
21/tcp ALLOW Anywhere
990/tcp ALLOW Anywhere
40000:50000/tcp ALLOW Anywhere
Nginx Full (v6) ALLOW Anywhere (v6)
OpenSSH (v6) ALLOW Anywhere (v6)
20/tcp (v6) ALLOW Anywhere (v6)
21/tcp (v6) ALLOW Anywhere (v6)
990/tcp (v6) ALLOW Anywhere (v6)
40000:50000/tcp (v6) ALLOW Anywhere (v6)
How to configure ufw properly and let the container connect to Postgres?
Your firewall is blocking the connection from the docker container, since it originates on another network.
To fix this you should enable access from that docker network to your Postgres instance (I assume it is on port 5432).
When you use docker-compose up, a specific docker network is created. You can look it up with the command:
docker network ls
When you locate your network, use docker inspect {network name} to get additional information about it. The information you are looking for is the network's subnet and gateway. The relevant portion should look something like this:
...
"IPAM": {
    "Driver": "default",
    "Options": {},
    "Config": [
        {
            "Subnet": "172.18.0.0/16",
            "Gateway": "172.18.0.1"
        }
    ]
},
...
The IP you are looking for is, in this example, 172.18.0.1, and the subnet is 172.18.0.0/16.
Now you know from which network your container is connecting to Postgres, and you can allow it in your firewall. Since you don't want to open your firewall to everybody, you can use something like this:
ufw allow in from 172.18.0.0/16
This allows the entire network rather than specific IPs, which is useful since containers can change IPs when they are restarted.
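If you'd rather look the subnet up programmatically than read the docker inspect output by hand, here is a rough sketch using the Docker SDK for Python (pip install docker is assumed):

# list_subnets.py -- sketch: print each docker network's subnet and gateway
# so you know which range to allow through ufw.
import docker

client = docker.from_env()
for net in client.networks.list():
    for cfg in net.attrs.get("IPAM", {}).get("Config") or []:
        print(net.name, cfg.get("Subnet"), cfg.get("Gateway"))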

I cannot connect to my AWS EC2 url and I have a feeling it has something to do with my security group settings

I am very new to coding, so trying to figure this out was very hard for me. I'm trying to deploy my code with docker and run it inside an EC2 instance, but I can't seem to get the instance's URL to work. I set my inbound rules (security group) to HTTP (80) => 0.0.0.0/0, HTTPS (443) => 0.0.0.0/0, and SSH (22) => my IP. I read that setting SSH to 0.0.0.0/0 was a bad idea, so I went with my IP (there was an option called 'My IP'). Also, I am using Ubuntu for my AMI.
While docker was running successfully (docker-compose up), I used curl http://localhost:3001 (3001 is the exposed port in my code) and it works fine. But when I used curl ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com, it outputs:
curl: (6) Could not resolve host: ssh and
curl: (7) Failed to connect to ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com port 80: Connection refused
curl ec2-xxx-xx-amazonaws.com sends the request on port 80, while your docker container is running on port 3001.
First verify that you have published some host port to docker. Something like this should appear in docker ps -a:
0.0.0.0:3001->3001 (the first 3001 can be any host port)
Next make sure that whichever host port you used is in the security group and opened for your IP.
Provided the VPC and route table settings are fine, requests to :3001 (or whatever host port you mapped, if not 3001) should then work.
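If you prefer to add the inbound rule from code rather than from the console, here is a sketch using boto3 (the security group ID is a placeholder, port 3001 is the host port assumed above, and AWS credentials are assumed to be configured):

# open_port.py -- sketch: add an inbound rule for the app's host port.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: your security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3001,
        "ToPort": 3001,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "app port"}],
    }],
)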

Could not access to my application in digital ocean through public IP?

I'm facing an issue with a ufw firewall rule on an Ubuntu droplet in DigitalOcean. I have already allowed the port in from anywhere, but I still cannot access my application on the public IP with the port I allowed. How do I allow the ufw firewall rule for my application in a secure way? Thank you.
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
80/tcp ALLOW IN Anywhere
443/tcp ALLOW IN Anywhere
Anywhere ALLOW IN 134.238.123.131
8080 ALLOW IN Anywhere
3000 ALLOW IN Anywhere
22/tcp (v6) ALLOW IN Anywhere (v6)
80/tcp (v6) ALLOW IN Anywhere (v6)
443/tcp (v6) ALLOW IN Anywhere (v6)
8080 (v6) ALLOW IN Anywhere (v6)
3000 (v6) ALLOW IN Anywhere (v6)
The problem is that I cannot access my MERN stack application via the public IP 134.238.123.131:3000, even though I was able to access it before. It shows that the site could not be reached, as below:
This site can’t be reached
104.248.153.121 took too long to respond.
Try:
Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_TIMED_OUT