I have a single EC2 CentOS instance with Elasticsearch installed.
I am unable to connect externally using the public IP or hostname.
ElasticSearch starts correctly and I can access locally on the machine using:
curl <my_internal_ip>:9200
However, running the same command remotely using the public IP fails.
I have the cloud-aws plugin installed.
I have set up an AWS security group with all TCP ports open for testing.
I am guessing I need to bind the address within the elasticsearch.yml file, but I don't understand which setting to use, or with what address. Setting network.host to an external address stops ES from starting (unable to bind).
Appreciate any comments.
The Amazon EC2 instance is accessed remotely, with Elasticsearch installed and the aws-cloud plugin available.
After many hours....
iptables was enabled on the OS by default, blocking my Elasticsearch ports.
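For anyone hitting the same thing, this is roughly how to check and open the port (a sketch assuming the classic iptables service on CentOS 6; firewalld-based systems differ):
# see whether traffic to 9200 is being dropped
sudo iptables -L -n
# allow inbound TCP on 9200 and persist the rule
sudo iptables -I INPUT -p tcp --dport 9200 -j ACCEPT
sudo service iptables save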
I had the same issue and followed the steps below to get it resolved:
Elasticsearch process memory locking failed
MAX_LOCKED_MEMORY=unlimited
LimitMEMLOCK=infinity
Do these two steps also.
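For anyone wondering where those two lines typically live, this is roughly how it looks for a package install on a systemd-based distro (the paths are assumptions; adjust for your layout):
# /etc/sysconfig/elasticsearch (or /etc/default/elasticsearch on Debian/Ubuntu)
MAX_LOCKED_MEMORY=unlimited

# systemd override, created with: sudo systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity

# reload and restart so the limits take effect
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch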
I am trying to configure a Puppet server and agent, using my local laptop running Ubuntu 18.04 as the Puppet server and an AWS EC2 instance as the Puppet agent. While doing so I am facing issues with which hostnames to add to the /etc/hosts file, whether to use the public or private IP address, and how to do the final configuration to make this work.
I have specified the public IP and public DNS of both systems in the /etc/hosts file, but when I try to run puppet agent --test from the agent I get the errors "temporary failure in name resolution" and "connecting to https://puppet:8140 failed". I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, it probably has no route back to your laptop across the internet, as the IP of your laptop was probably assigned by your router at home.
Why not build your Puppet master on an EC2 instance and keep it all on the same network? Edit code on your laptop, push to GitHub/GitLab, and then deploy the code from there to your PE server using code-manager.
Alternatively you may be able to use a VPN to get your laptop onto the AWS VPC directly in which case it'll appear as just another node on the network and everything should work.
The problem here is that the Puppet server needs a public IP, or an IP in the same network as your EC2 instance, that your Puppet agent can connect to. However, there is one solution without using a VPN, though it can't be permanent: you can tunnel your local port to the EC2 instance.
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip -> this tunnels port 8140 on your EC2 instance to port 8140 on your localhost.
Then inside your ec2 instance you can modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the puppet agent on your ec2 instance and everything should work as expected. Also note that if you close the ssh connection created above then the ssh tunnel will stop working.
If you want to keep the ssh tunnel open a bit more reliably then this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
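For example, with autossh installed, a reverse tunnel kept alive could look roughly like this (a sketch; the pem path, username and host are placeholders as above):
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip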
I thought of migrating my PHP application from shared hosting to AWS. However, I'm facing some difficulty while installing packages: whenever I try to install a package it says it is connecting to some xyz server and halts; the installation does not proceed. I followed the steps below:
1.) I created an Amazon AWS free tier account
2.) I created a security group policy
3.) I generated an Elastic IP from the pool
4.) I created an AWS EC2 instance using Ubuntu x64 v16 (did all the setup and it was fine)
5.) Generated the .pem key and downloaded it, also connected via SSH using the key; I'm able to connect
6.) I then associated the Elastic IP to the instance and restarted the instance
Whenever I try to connect to the public IP address it says the server took too long to respond or is not accessible. I thought maybe I need to install Apache.
I'm trying to install Apache, but it's not only Apache: even if I run sudo update && upgrade it just shows a message about connecting to a server and hangs up!
What may be the problem? Where did I go wrong?
I am new to setting up virtual machines. I created my first Ubuntu instance using AWS EC2. Everything seemed to check out until I tried connecting to it with ssh, as per instructions.
To provide some context, my app is called "smpapp". My computer is macOS High Sierra. Naturally, my smpapp.pem file saved to ~/Downloads. First, I opened up the Terminal and set my working directory to Downloads with cd ~/Downloads. Then I entered chmod 400 smpapp.pem, which didn't return any error, so I assume it was a success. Then I entered ssh -i "smpapp.pem" ubuntu@ec2-XX-XX-XXX-XXX.us-east-2.compute.amazonaws.com (omitting public DNS numbers with Xs). It took a while to process before spitting out: ssh: connect to host ec2-XX-XX-XXX-XXX.us-east-2.compute.amazonaws.com port 22: Operation timed out.
Can someone explain the general problem to me and how I can fix it (methodically and in layman's terms)?
Could be a few things:
Does your ec2 instance have a public ip? (if not, you might have to attach an elastic ip or put it in a public subnet)
Is the security group attached to the ec2 instance allowing connections to port 22?
Is the ACL on the subnet allowing public connections to the subnet?
Is your VPC configured to route traffic through your IGW?
Amazon offers step-by-step instructions on determining the issue; it could be any of the above not being configured properly. You can find step-by-step instructions on what to do in the official Amazon docs here (see also the quick CLI checks below).
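If you have the AWS CLI configured, the first two checks can be done quickly from your own machine (a sketch; the instance ID and security group ID below are placeholders):
# does the instance have a public IP?
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].PublicIpAddress"
# does its security group allow inbound TCP on port 22?
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query "SecurityGroups[].IpPermissions"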
I didn't find any guide or articles on how to do this for Elasticsearch hosted on a Windows server.
I have an EC2 Amazon Windows instance running an Elasticsearch server on port 9200, but I can't reach it at ec2_ip_address:9200 from outside the server.
I am completely sure that all TCP ports are opened in the Amazon security group rules, and I've turned off the firewall on the server as well.
So the problem must be in the Elasticsearch configs.
Can someone help me with that?
Well, but you know that then anybody would be able to delete/create stuff in your index unless you have Shield.
If you really want to open it, also make sure that you have opened port 9200 in the Windows firewall.
So what I would do is probably restrict access to this port in the Amazon firewall to specific IPs (actually, in my project I am doing that :) ).
There is one more thing to check: which IP it is running on. As far as I remember, ES will run on the private IP. Look at network.host; the default is _local_. Try network.host: 0.0.0.0
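For example, a minimal elasticsearch.yml change could look like this (a sketch; with the cloud-aws plugin you may also be able to use an _ec2_-style value instead of 0.0.0.0, but check the plugin docs for your version):
# elasticsearch.yml
network.host: 0.0.0.0   # bind on all interfaces; without Shield/auth this exposes the cluster
http.port: 9200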
I have set up a micro EC2 instance on AWS. Currently, I am using the free tier in Oregon. There are two problems which I am facing.
When I try to SSH into the instance using the public DNS, it says the host does not exist, but when I try connecting using the public IP, it connects. What setting is needed to use the public DNS?
I have opened the SSH client using the IP address. I want to set up my application which needs Node.js and MongoDB. I installed Node.js using this
Next I installed MongoDB using this
Then I connected to my instance using Filezilla and uploaded my code to it. I then start my node application which uses socket.io.
When I try to connect to the socket.io server using a web browser, I get a message which says connection refused "error 111". I have opened TCP port 80 in the instance's security groups. In iptables, I have forwarded port 80 to 8080, but it still does not work. I have also checked that the firewall is disabled on the EC2 instance. Kindly help me to resolve this issue.
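For context, the iptables forward described above is typically a NAT redirect rule roughly like this (a sketch, not necessarily the exact rule used here):
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080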
Did you check if all of the necessary ports are open in the Amazon security group?
What you can do is allow all traffic in the Amazon security group as a test and see whether the connection goes through or not.
You might also check whether you need to access the DB from outside. In that case, you also have to open the MongoDB port and set up MongoDB correctly as well.
Other tools that might be useful for testing firewall and connection issues are tcpdump and the syslog file.
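For example, something like this run on the instance shows whether the traffic is reaching it at all (a sketch; the interface name and port are assumptions for this setup):
sudo tcpdump -ni eth0 tcp port 8080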
For the DNS issue, did you try nslookup on that name and check whether the IP shown matches your server IP?
As Amazon gives a long DNS hostname for the server, I always use my own domain name. It's much easier.
example : ec2.domainname.com, which points to the Amazon IP address
Hope that help.
My problem is resolved now.
For the DNS issue, earlier I needed a proxy to access the internet, so I guess the DNS name was not getting resolved. When I tried proxy-free internet, I was able to SSH using the public DNS.
And regarding the connection to socket.io, I used port 8080 instead of 80 and used "sudo node main.js" to run my node file. Now I am able to connect to the socket.io server and MongoDB.
Another thing I want to ask: would running the node file with sudo rights create a security issue?
Thanks for the answer! That also worked for me. I had the same problem trying to connect through sockets (http://myipaddress:3000) to a Node.js server. I tried opening ports on the actual EC2 instance and disabling the firewall through SSH, but nothing worked. I had to go to Security Groups in the EC2 console and open a new inbound TCP rule enabling that port.
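For anyone who prefers the CLI over the console, the equivalent rule can be added roughly like this (a sketch; the group ID, port and CIDR are placeholders for your own values):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3000 --cidr 0.0.0.0/0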