Unable to access Kibana on AWS EC2 instance using URL

I have Elasticsearch and Kibana installed on an EC2 instance, where I am able to access Elasticsearch at http://public-ip:9200. But I am unable to access Kibana at http://public-ip:5601.
I have configured kibana.yml and set the following fields:
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: 0.0.0.0:9200
On running wget http://localhost:5601 I get the output below:
--2022-06-10 11:23:37-- http://localhost:5601/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5601... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83731 (82K) [text/html]
Saving to: ‘index.html’
What am I doing wrong?

server.host set to 0.0.0.0 means Kibana should be accessible from outside localhost, but double-check that the listener is actually accepting external connections on that port using netstat -nltpu. Since the server is already reachable on its public IP on port 9200, try the following (the relevant commands are collected in a sketch after this list):
The EC2 security group should allow inbound TCP traffic on port 5601 from your IP address.
Network ACLs should allow inbound/outbound TCP traffic on port 5601.
The OS firewall (e.g. ufw or firewalld) should allow traffic on that port. You can run iptables -L -nxv to check the firewall rules.
Try connecting to that port from a different EC2 instance in the same VPC. It is possible that whatever internet connection you are using has a firewall blocking connections on that port; this is common with corporate firewalls.
If these fail, check whether the packets are reaching your EC2 instance by running a packet capture on that port with tcpdump -ni any port 5601 and looking for packets coming in/out on that port.
If you don't see any packets in tcpdump, use VPC Flow Logs to see whether traffic is arriving at or leaving that port.
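A minimal sketch of those checks, run on the instance itself (assuming sudo access; these are the commands mentioned above and may need to be installed on a minimal image):
# Is anything listening on 5601, and on which address?
sudo netstat -nltpu | grep 5601    # want 0.0.0.0:5601, not 127.0.0.1:5601
# Any local firewall rules that could drop the traffic?
sudo iptables -L -nxv
# Are packets for 5601 reaching the instance at all?
sudo tcpdump -ni any port 5601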

Assuming the Kibana port (5601) is open via security groups:
I was able to resolve the issue by updating the config from server.host: localhost to server.host: 0.0.0.0
and elasticsearch.hosts: ["http://localhost:9200"] (in my case Kibana and ES are both running on the same machine) in kibana.yml.
https://discuss.elastic.co/t/kibana-url-gives-connection-refused-from-outside-machine/122067/8
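After changing kibana.yml the service has to be restarted for the new bind address to take effect. A quick way to confirm it, assuming Kibana was installed as a systemd service named kibana:
sudo systemctl restart kibana
sudo ss -ltn | grep 5601    # should now show 0.0.0.0:5601 rather than 127.0.0.1:5601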

Related

How Is Port Forwarding Working on AWS without Security Group Rules?

I am running an AWS EC2 instance with Ubuntu 22.04. I am also running a Jupyter server for Python development there and connecting to it from my local Ubuntu laptop with SSH tunneling.
#!/usr/bin/env bash
# encoding:utf-8
SERVER=98.209.63.973 # My EC2 instance
# Tunnel the jupyter service
nohup ssh -N -L localhost:8081:localhost:8888 $SERVER & # 8081: local port, 8888: remote port
However, I never opened port 8888 of the EC2 instance with a security group rule. How come the port forwarding works in that case? Shouldn't it be blocked?
When using ssh -L, ssh will listen on local port 8081 and will send that traffic across the SSH connection (port 22) to the destination computer. The ssh daemon that receives the traffic will then forward it to localhost:8888.
There is no need to permit port 8888 in the EC2 instance security group because it is receiving this traffic via port 22.
An SSH connection does more than just sending the keystrokes you type. It is a full protocol that can pass traffic across multiple logical channels.
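A small sketch of what this looks like in practice, using the ports from the script above (the ubuntu login is an assumption; use whatever user your AMI provides):
# On the laptop: all tunnelled traffic leaves over the SSH connection on port 22,
# so only port 22 needs to be open in the instance's security group.
ssh -N -L localhost:8081:localhost:8888 ubuntu@<ec2-public-ip> &
# Still on the laptop: this request travels through the tunnel, and sshd on the
# instance delivers it to localhost:8888 (the Jupyter server).
curl http://localhost:8081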

I cannot connect to my AWS EC2 url and I have a feeling it has something to do with my security group settings

I am very new to coding, so trying to figure this out was very hard for me. I'm trying to deploy my code with Docker and run it inside EC2. But I can't seem to get the instance's URL to work. I set my inbound security group rules to HTTP (80) => 0.0.0.0/0, HTTPS (443) => 0.0.0.0/0, and SSH (22) => my IP. I read that setting SSH to 0.0.0.0/0 was a bad idea, so I went with my IP (there was an option called 'My IP'). Also, I am using Ubuntu for my AMI.
While Docker is running successfully (docker-compose up), I used curl http://localhost:3001 (3001 is the port exposed inside my code) and it works fine. But when I use curl ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com, it outputs:
curl: (6) Could not resolve host: ssh and
curl: (7) Failed to connect to ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com port 80: Connection refused
curl ec2-xxx-xx-amazonaws.com sends the request on port 80, while your Docker container is listening on port 3001.
First verify that you have published a host port to Docker. Something like this should appear in docker ps -a:
0.0.0.0:3001->3001/tcp (the first 3001 can be any host port)
Next make sure that whichever host port you used is open in the security group for your IP.
If the VPC and route table settings are also fine, then curl against the instance's public DNS name on port 3001 (use whatever host port you published if it was not 3001) should work.
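A minimal sketch of the check and the two possible fixes, assuming the container serves HTTP on 3001 (my-app-image is a placeholder name):
# Confirm a host port is actually published; look for 0.0.0.0:3001->3001/tcp in the PORTS column
docker ps -a
# Option 1: open TCP 3001 in the security group, then from your laptop:
curl http://ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com:3001
# Option 2: publish host port 80 to container port 3001, so the existing HTTP (80) rule applies:
docker run -d -p 80:3001 my-app-image
curl http://ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com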

Do inbound and outbound ports need to be the same or not?

Scenario:
There are two servers running in different VPCs. Both servers are publicly available.
Server-one (e.g. public IP 13.126.233.125) is hosting a file on port 8000, and port 8000 inbound is open in every firewall installed on the server and in the security group.
Server-two wants to get that file with wget. Port 80 outbound on Server-two is open. I tried "wget http://13.126.233.125:8000/file.txt" and it shows connection refused. I had to open port 8000 outbound on Server-two to make this work.
By my logic, this should have worked without adding 8000 to the outbound list. Server-one is listening on 8000; it's not compulsory for Server-two to start the connection from port 8000. Server-two can use any ephemeral port, or port 80, since this is an HTTP connection.
Please explain why it's required to open outbound port 8000 on Server-two.
HTTP is a protocol that sits on top of TCP. Using port 80 is a convention, not a requirement. You can run HTTP (and HTTPS) on any available port you want. The way TCP works is that a process opens a TCP port (say 8000) and then "listens" on that port for connection attempts from other systems (local or remote). If you try to connect on port 80 to a system listening on port 8000, you will either connect to the wrong service or get connection refused. Only after the connection is accepted do ephemeral ports come into play, and they apply to the source side of the connection, not the destination.
If server A is running a service listening on port 8000, then server B needs to connect to server A using destination port 8000. Since outbound rules are matched against the destination port, not the ephemeral source port, server B needs port 8000 open outbound in order to connect.
In normal usage, you set (restrict) the inbound ports in a security group and allow ALL outbound ports. Only restrict outbound ports if you understand how TCP works and know exactly what you are doing and why. Otherwise leave all outbound ports open.
There are a few reasons to control outbound ports. For example, to prevent an instance from performing updates, or to prevent an instance from communicating if it was breached. If you are controlling this level of communications, then you also need to understand how NACLs work and how to use each one.
AWS has some pretty good documentation that explains how security groups and NACLs work and how to use them.
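If you do want to keep outbound rules restrictive, the fix here is an egress rule on Server-two's security group for destination port 8000. A hedged sketch with the AWS CLI (the security group ID is a placeholder; restricting the CIDR to Server-one's address is optional):
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8000 \
    --cidr 13.126.233.125/32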
Outbound firewall rules are used to limit connections to external services from within the network. That is why, by default, all outbound connections are enabled and inbound connections are disabled.
In this case, the outbound restriction on server 2 prohibits server 2 from making connections to port 8000 (and every port other than 80) of server 1, regardless of the port from which the connection is initiated.

Can't connect to Amazon AWS EC2 with Hansoft Client

I'm trying to connect to my Hansoft server on my AWS server that is running Windows Server.
I've tried opening all inbound traffic to test, but that hasn't worked. I'm able to ping the server so it's there.
Hansoft servers use default port 50256.
What else could I try?
Launch-wizard-1 security group settings below.
Inbound Security rules:
All Traffic, All protocols, All port range, Source 0.0.0.0/0
RDP, TCP Protocol, Port range 3389, Source 0.0.0.0/0
All ICMP, All protocols, Port range N/A, Source 0.0.0.0/0
Outbound Security rules:
All Traffic, All protocols, All port range, Source 0.0.0.0/0
Try the following (a couple of the checks are sketched after this list):
Are you 100% sure the service is running?
While logged into the instance, can you telnet localhost 50256 and get a connection? Have you tested it locally and confirmed it works?
Disable your local firewall and anti-virus.
Have you checked the local Windows Firewall on that server? It will block you in some configurations, so you need to check that. You may need a new inbound rule there.
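A short sketch of those local checks, run in an elevated command prompt on the Windows Server instance (the firewall rule name is arbitrary; the telnet client may need to be installed as a Windows feature first):
:: Is anything listening on the Hansoft port?
netstat -ano | findstr 50256
:: Does a local connection succeed?
telnet localhost 50256
:: If it listens locally but is not reachable remotely, add a Windows Firewall inbound rule:
netsh advfirewall firewall add rule name="Hansoft 50256" dir=in action=allow protocol=TCP localport=50256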

Closed port when tunneling HTTP over ssh

I'm developing an application which will use AWS's SNS service to receive notifications over HTTP.
As I am developing the application locally and have no control of our company firewall, I am attempting to tunnel HTTP connections from an external EC2 host to my local machine for the purposes of testing.
Everything looks fine when verifying the connection from the EC2 host itself, however the port is closed when examined externally.
My local application is on port 2222. I have executed the following command on my local machine to establish the proxy:
ssh -i myCredentials.pem ec2-user@myserver.com -R 2222:localhost:2222
Where myserver.com points to an EC2 instance. SSH'ing to the EC2 instance, I can successfully connect to my application via the tunnel, and nmap displays the following:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
2222/tcp open EtherNet/IP-1
However when I run nmap against the EC2 instance from my local machine, the port is closed:
Nmap scan report for xxxxxx
Host is up (0.24s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
2222/tcp closed EtherNet/IP-1
The security group assigned to the server allows TCP traffic on port 2222 from 0.0.0.0/0, and iptables isn't running on the server.
What do I need to do on the EC2 end to make this port open to the outside world?
The tunnelling command is correct; however, for sshd to bind the forwarded port to the wildcard address instead of only 127.0.0.1, the following setting is required in /etc/ssh/sshd_config on the remote server:
GatewayPorts yes
Once this is added, restart sshd and the tunnelling will work as desired provided no firewalls are in the way.
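A minimal sketch of the change on the EC2 host, assuming sshd is managed by systemd (by default a remote-forwarded port binds only to 127.0.0.1, which is why nmap reports it closed from outside):
# On the EC2 instance: allow remote forwards to bind to all interfaces
echo "GatewayPorts yes" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd    # the service may be named ssh on some distributions
# Re-establish the tunnel from the local machine, then verify on the instance:
sudo ss -ltn | grep 2222    # should show 0.0.0.0:2222 rather than 127.0.0.1:2222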