I've been trying to connect an EC2 instance from AWS to my localhost (WSL on Windows) Docker Swarm cluster, but I keep getting this error: "Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node." The EC2 instance is never added as a node (and if I later try to add it again, it says that it is already part of a cluster, yet on my localhost it does not appear as added).
What I've tried:
Opened ports 2377, 7946 and 4789 (required by Docker Swarm) on my WSL and EC2 machines.
Allowed all traffic to all ports in my EC2 firewall (security group).
Disabled my Windows firewall (I also tried to init a swarm on Windows and add the EC2 instance to it, but that did not work either).
Additional information:
To open the ports on my WSL/EC2 machines I mainly used ufw, and I tested them with telnet (roughly the commands sketched at the end of this question).
I was able to connect my Windows Docker to my WSL cluster.
I can ping my EC2 IPv4 address from my localhost, but not my localhost IP from the EC2 instance.
Any suggestions or solutions are welcome; I'm seriously HOURS into this, and any progress will make me happy.
Systems: I'm using Ubuntu 18.04 on WSL and EC2, and Windows 11.
Please find below the ports and protocols open on the security group my EC2 instance is using.
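For reference, these are roughly the firewall rules I applied on both machines and the join command I'm running; the token and the manager IP are placeholders, so treat this as a sketch of my setup rather than an exact transcript:

# Swarm management, node gossip and overlay network ports
$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp
$ sudo ufw reload

# Quick reachability check from the other side
$ telnet <manager-ip> 2377

# Join attempt issued on the EC2 instance
$ docker swarm join --token <worker-token> <manager-ip>:2377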
Related
I have an ubuntu 18.04 based EC2 instance using an Elastic IP Address. I am able to SSH into the instance without any problems.
apt is executing some unattended updates on the instance. If I reboot the system after the updates, I am no longer able to SSH into the system. I am getting the error ssh: connect to host XXX port 22: Connection refused
A few points:
Even after the updates, I am able to SSH in before the reboot.
The method of restart does not make a difference: sudo shutdown -r now and a reboot from the EC2 dashboard have the same result.
There are no problems with sshd_config. I've detached the volume and attached it to a new working instance; sshd -t did not report any problems either.
I am able to run sudo systemctl restart ssh.service after the updates but before the system restart.
I've tried with and without an Elastic IP; same result.
From the system logs, I see that SSH is trying to start, but failing for some reason (the commands I'm using to look at this are sketched below).
I want to find out why the ssh daemon is not starting. Any pointers?
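For what it's worth, these are roughly the checks I've been running while I still have access (after the updates, before the reboot); the unit name assumes the stock openssh-server package:

# Current state of the SSH unit and why it last failed
$ sudo systemctl status ssh.service

# Logs for the unit from the current boot, including errors from sshd itself
$ sudo journalctl -b -u ssh.service

# Syntax check of the configuration (this already reported no problems)
$ sudo sshd -t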
Update:
System Logs
Client Logs
There are no changes in the security groups before and after the reboot.
EC2 > Network & Security > Security Groups > Edit inbound rules > SSH 0.0.0.0/0
Step 1: EC2 > Instances > Actions > Image and templates > Create image
Step 2: Launch a new instance from that AMI.
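If you prefer the CLI, the same two steps look roughly like this (the instance ID, AMI ID and the remaining parameters are placeholders):

# Step 1: create an AMI from the broken instance
$ aws ec2 create-image --instance-id i-0123456789abcdef0 --name "ssh-rescue-image"

# Step 2: launch a replacement instance from that AMI
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro \
    --key-name my-key --security-group-ids sg-0123456789abcdef0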
I missed the error Failed to start Create Static Device Nodes in /dev. in the system logs. The solution given at https://askubuntu.com/questions/1301750/ubuntu-16-04-failed-to-start-create-static-device-nodes-in-dev solved my problem.
I have just set up a Cassandra cluster.
I have changed the seeds, listen_address and broadcast_address in the cassandra.yaml file (roughly as sketched at the end of this question). But when I run the command
$ nodetool flush system
The command returns this error:
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException:
'Connection refused (Connection refused)'.
In the file /etc/cassandra/cassandra-env.sh I modified JVM_OPTS as shown in the screenshot.
I'm using AWS; at the beginning I was on a t2.micro instance, but I switched to a t2.large as recommended in many articles.
Finally, my ports are open as shown in this screenshot, and I'm using Ubuntu.
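For reference, the changes I made in cassandra.yaml were roughly the following (the addresses are placeholders for the instance's IP):

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "<node-ip>"
listen_address: <node-ip>
broadcast_address: <node-ip>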
By default, remote JMX connections to Cassandra nodes are disabled. If you're running nodetool commands on the EC2 instance itself, it isn't necessary to modify the JVM options in cassandra-env.sh.
In fact, we discourage allowing remote JMX connections for security reasons. Only allow remote access if you're an expert and know what you're doing. Cheers!
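As a rough recipe (assuming the packaged install on Ubuntu, where the service is simply called cassandra): revert cassandra-env.sh to its stock contents, restart the node, and run nodetool locally:

# Restart Cassandra after reverting cassandra-env.sh
$ sudo systemctl restart cassandra

# nodetool defaults to the local JMX endpoint, so -h/-p can usually be omitted
$ nodetool -h 127.0.0.1 -p 7199 status
$ nodetool flush system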
I am trying to configure a Puppet server and agent, making my local laptop running Ubuntu 18.04 the Puppet server and an AWS EC2 instance the Puppet agent. While doing so I am running into issues with which hostnames to add to the /etc/hosts file, whether to use the public or private IP address, and how to do the final configuration to make this work.
I have used the public IP and public DNS of both systems in the /etc/hosts file, but when running puppet agent --test from the agent I get the errors "temporary failure in name resolution" and "connecting to https://puppet:8140 failed". I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, the agent probably has no route back to your laptop across the internet, as the IP of your laptop was most likely assigned by your router at home.
Why not build your Puppet master on an EC2 instance and keep it all on the same network? Edit code on your laptop, push to GitHub/GitLab, and then deploy the code from there to your PE server using code-manager.
Alternatively, you may be able to use a VPN to get your laptop onto the AWS VPC directly, in which case it will appear as just another node on the network and everything should work.
The problem here is that the Puppet server needs a public IP, or an IP in the same network as your EC2 instance, that your Puppet agent can connect to. However, there is one solution that does not require a VPN, though it can't be permanent: you can tunnel your local port to the EC2 instance.
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip
This tunnels port 8140 on your EC2 instance back to port 8140 on your localhost.
Then, inside your EC2 instance, modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the Puppet agent on your EC2 instance and everything should work as expected. Note that if you close the SSH connection created above, the tunnel will stop working.
If you want to keep the SSH tunnel open more reliably, this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
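One common approach for keeping such a tunnel up is autossh, which re-establishes the connection whenever it drops; a minimal sketch for this setup (same placeholders as above) would be something like:

# Reverse tunnel that is restarted automatically if it dies
$ autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip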
I have built a Node.js app that implements an SFTP server; when files are put to the server, the app parses the data and loads it into a Salesforce instance via API as it arrives. The app runs in a Docker container and listens on port 9001. I'd like it to run on Amazon's EC2 Container Service, listening on the standard port 22. I can run it locally, remapping 9001 to host port 22, and it works fine. But because 22 is also used by SSH, I'm not having any luck running it on ECS. Here are the steps I've taken so far:
Created an EC2 instance using the AMI amzn-ami-2016.03.j-amazon-ecs-optimized (ami-562cf236).
Assigned the instance to a Security Group that allows port 22 (was already present).
Created an ECR registry and pushed my Docker image up to it.
Created an ECS Task Definition for the image, which contains a port mapping from host port 22 to container port 9001 (see the snippet sketched after this list).
Created a service for the task and associated it with the default ECS cluster, which contains my EC2 instance.
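The relevant part of the task definition's container definition looks roughly like this (the container name and image URI are placeholders, and the rest of the definition is omitted):

"containerDefinitions": [
  {
    "name": "sfsftp",
    "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/sfsftp:latest",
    "portMappings": [
      { "hostPort": 22, "containerPort": 9001, "protocol": "tcp" }
    ]
  }
]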
At this point, when viewing the "Events" tab of the Service view, I see the following error:
service sfsftp_test_service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXXX is already using a port required by your task. For more information, see the Troubleshooting section.
I assumed that this is because my Task Definition is trying to map host port 22 which is reserved for SSH, so I tried creating a new version of the Task Definition that maps 9001 to 9001. I also updated my security group to allow port 9001 access. This task was started on my instance, and I was able to connect and upload files. So at this point I know that my Node.js app and the Docker instance are correct. It's a port mapping issue.
In trying to resolve the port mapping issue, I found this Stack Overflow question about running SSH on an alternate port on EC2, and used the answer there to change my sshd to run on port 9022. I also updated the Security Group to allow traffic on port 9022. This worked; I can now SSH to port 9022, and I can no longer SSH to port 22.
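For reference, that change boils down to roughly the following on the ECS-optimized host (the restart command depends on the AMI's init system):

# /etc/ssh/sshd_config: listen on 9022 instead of the default 22
Port 9022

# Restart the daemon so the new port takes effect
$ sudo service sshd restart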
However, I'm still getting the "closest matching container-instance XXXX is already using a port required by your task" error. I also tried editing the Security Group, changing the default port 22 grant from "SSH" to "Custom TCP Rule", but that change doesn't stick; I'm also not convinced that it's anything but a quick way to pick the right port.
When I view the Container instance from the Cluster screen, I can see that 5 ports are "registered", including port 22:
According to this resolved ECS Agent GitHub issue, those ports are "reserved by default" by the ECS Agent. I'm guessing this is why ECS refuses to start my Docker image on the EC2 instance. So is this configurable? Can I "unreserve" port 22 to allow my Docker image to run?
Edit to add: After reviewing this ECS Agent documentation, I've opened an issue on the ECS Agent GitHub as well.
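If the reserved ports do turn out to be configurable, I'd expect the change to look something like the following in /etc/ecs/ecs.config (the exact default list, and whether port 22 can safely be dropped from it, is exactly what I'm unsure about):

# /etc/ecs/ecs.config: override the agent's reserved host ports, leaving 22 out
ECS_RESERVED_PORTS=["2375","2376","51678","51679"]

# Restart the ECS agent so the instance re-registers (this AMI generation uses upstart)
$ sudo stop ecs
$ sudo start ecs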
I have set up the same version of Redis on my Amazon EC2 Ubuntu instance and also on my home computer running Ubuntu. I have set my security group in EC2 to make port 6379 publicly accessible. I have added the line
slaveof ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com 6379
where ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com is the public DNS of my EC2 instance,
to my Redis configuration file on my own computer (the slave). Now, when I run Redis from the command line on both the master (Amazon EC2) and the slave (my computer at home), if I set a new Redis key on the master, I get no update on the slave. The slave returns nil, as if no such key exists.
What's wrong? Aren't the master and the slave connected? Or is there a different way to connect to the master through the public IP/DNS?
Please note that I have also tried
slaveof ubuntu@ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com 6379
where ubuntu is the user with which I log into the Amazon EC2 instance.
But this does not work either. I have not set any authentication restrictions, so the slave does not require any password to connect to the master. I have searched online, but there is rarely anything detailed on Redis replication and the related error handling.
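If it helps with debugging, these are the kinds of checks I can run from either machine (assuming redis-cli is installed on both):

# Can the slave reach the master's Redis port at all?
$ redis-cli -h ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com -p 6379 ping

# Replication status as seen on each node (role, connected slaves, link status)
$ redis-cli info replication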