Jenkins EC2 slave SSH failure - amazon-web-services

Using the Jenkins EC2 plugin, I cannot get my Jenkins master to SSH to my Jenkins slave. The slave spins up and provisions properly, but:
INFO: Connecting to 10.99.3.6 on port 22, with timeout 10000.
Feb 24, 2016 5:13:27 PM hudson.plugins.ec2.EC2Cloud log
INFO: Failed to connect via ssh: There was a problem while connecting to 10.99.3.6:22
Although the Jenkins host claims it is failing when attempting to ssh to the slave node, I am able to ssh from a shell on the Jenkins host without error, using the same authentication keys specified in my configuration.
I have additionally attempted to add an id_rsa file containing the same key entered in the EC2 configuration to a .ssh directory in the Jenkins home dir and in the ec2-user home dir, which also did not work (not entirely unexpected).
Jenkins - v1.649
Amazon EC2 Plugin - v1.31
Using in-house CentOS 7.1 AMIs
Additional information: The slave instance ID is listed in the build executor box, but says "offline" next to it, even after I observe the instance in the EC2 console as running and available, and am able to SSH to it manually from the master.
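For reference, the manual connection test from the master was along these lines (the key path here is just an example; user and IP are from the logs above):
ssh -i /var/lib/jenkins/.ssh/ec2_slave_key.pem ec2-user@10.99.3.6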

As it turned out, this was an issue with CentOS 7 and JDK 1.8. When using the same configuration with CentOS 6.5 and JDK 1.7, the slaves spun up and connected properly.

Please add the id_rsa.pub key from the master host's .ssh folder to authorized_keys on the slave host.
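For example, run from the master (slave user and IP taken from the question above):
cat ~/.ssh/id_rsa.pub | ssh ec2-user@10.99.3.6 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'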

You can debug with the steps below (see the example commands after the list):
Check the EC2 security group to be certain that port 22 is open.
Use the .pem file to authenticate to your EC2 instance from the Jenkins server.
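For example (the slave IP is from the question; the security group ID is a placeholder):
# is port 22 reachable from the Jenkins master at all?
nc -vz 10.99.3.6 22
# or check the group rules directly with the AWS CLI
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0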

Related

Unable to SSH into EC2 server after reboot

I have an ubuntu 18.04 based EC2 instance using an Elastic IP Address. I am able to SSH into the instance without any problems.
apt is executing some unattended updates on the instance. If I reboot the system after the updates, I am no longer able to SSH into the system. I am getting the error ssh: connect to host XXX port 22: Connection refused
A few points:
Even after the updates, I am able to SSH before the reboot
Method of restart does not make a difference. sudo shutdown -r now and EC2 dashboard have the same result.
There are no problems with sshd_config. I've detached the volume and attached it to a new working instance. sshd -t did not report any problems either
I am able to do sudo systemctl restart ssh.service after the updates but before the system restart.
I've tried with and without Elastic IP. Same result
From the system logs, I see that SSH is trying to start, but failing for some reason
I want to find out why the ssh daemon is not starting. Any pointers?
Update:
System Logs
Client Logs
No changes in the security groups before and after reboot
EC2 > Network & Security > Security Groups > Edit inbound rules > SSH 0.0.0.0/0
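(If you cannot SSH in at all, the same boot log can also be pulled through the EC2 API; the instance ID below is a placeholder.)
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text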
Step 1: EC2 > Instances > Actions > Image and templates > Create image
Step 2: Launch a new instance using the AMI image.
I had missed the error "Failed to start Create Static Device Nodes in /dev." in the system logs. The solution given at https://askubuntu.com/questions/1301750/ubuntu-16-04-failed-to-start-create-static-device-nodes-in-dev helped solve my problem.
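If it helps anyone else, the unit behind that message is normally systemd-tmpfiles-setup-dev.service, so on the affected system (or on the volume attached to a rescue instance) something like this shows why it failed:
systemctl status systemd-tmpfiles-setup-dev.service
journalctl -b -u systemd-tmpfiles-setup-dev.service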

Jenkins not connecting to AWS EC2 instance via SSH

I am trying to connect to an EC2 instance from Jenkins via SSH, but it always fails in the end. I am storing the SSH key in a global credential.
This is the task and shell, using SSH agent plugin
This is how I store the key (the whole key has been pasted in)
If I use an SSH connection from my local PC, everything is fine. I am a newbie with Jenkins, so this is all very confusing to me.
You need to use the SSH plugin. Download the plugin via Manage Jenkins and configure the EC2 instance as an SSH remote host.
Follow the steps in this link:
https://www.thesunflowerlab.com/blog/jenkins-aws-ec2-instance-ssh/
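For reference, if you stay with the SSH Agent plugin mentioned in the question, the shell build step it wraps can be a plain ssh call, since the wrapper loads the key from the stored credential (the user and host below are placeholders):
ssh -o StrictHostKeyChecking=no ubuntu@ec2-xx-xx-xxx-xx.compute-1.amazonaws.com 'uname -a'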

Configuring local laptop as puppet server and aws ec2 instance as puppet agent

I am trying to set up Puppet with my local laptop (Ubuntu 18.04) as the Puppet server and an AWS EC2 instance as the Puppet agent. While doing so I am running into issues with which hostnames to add to the /etc/hosts file, whether to use the public or the private IP address, and how to do the final configuration to make this work.
I have put the public IP and public DNS of both systems in the /etc/hosts file, but when running puppet agent --test from the agent I get the errors "temporary failure in name resolution" and "connecting to https://puppet:8140 failed". I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, it probably has no route back to your laptop across the internet, as the IP of your laptop was probably assigned by your router at home.
Why not build your Puppet master on an ec2 instance and keep it all on the same network, edit code on your laptop, push to github/gitlab and then deploy the code from there to your PE server using code-manager?
Alternatively you may be able to use a VPN to get your laptop onto the AWS VPC directly in which case it'll appear as just another node on the network and everything should work.
The problem here is that the puppet server needs a public IP, or an IP on the same network as your ec2 instance, that your puppet agent can connect to. However, there is one solution without using a VPN, though it can't be permanent: you can tunnel your local port to the ec2 instance.
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip -> This tunnels port 8140 on your ec2 instance to port 8140 on your localhost.
Then inside your ec2 instance you can modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the puppet agent on your ec2 instance and everything should work as expected. Also note that if you close the ssh connection created above then the ssh tunnel will stop working.
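A quick way to confirm the tunnel is up before running the agent (run this on the ec2 instance, after adding the hosts entry above):
nc -vz puppet 8140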
If you want to keep the ssh tunnel open a bit more reliably then this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
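For example, a common approach from that thread is autossh, roughly like this (same key and host placeholders as the command above):
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip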

Locked out of ssh from server

I was logged into my AWS EC2 server via ssh. I ran iptables -P INPUT DROP to check something and I forgot to enable port 22 so that I could keep my ssh connection.
Is there something I can do to regain back the connection?
You can use AWS Systems Manager Session Manager if the AWS SSM agent is installed on the EC2 server and it has the correct IAM permissions, etc.
Or you could use AWS Systems Manager Run Command to run a single command to fix the iptables, if you have the AWS SSM agent installed on the EC2 server.
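For example, a single Run Command invocation along these lines would reset the input policy (the instance ID is a placeholder):
aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids "i-0123456789abcdef0" --parameters 'commands=["iptables -P INPUT ACCEPT"]'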
Otherwise, since you didn't save the iptables rules, they should reset to the previous settings if you reboot the server.

Cannot establish master-slave replication on redis from my computer to amazon ec2

I have set up the same version of redis on my amazon ec2 ubuntu instance and also on my home computer running ubuntu. I have set my security group in ec2 to make port 6379 publicly accessible. I have added the line
slaveof ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com 6379
where ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com is the public dns of my ec2 instance
to my redis configuration file on my own computer (the slave). Now when I run redis on both the master (amazon ec2) and the slave (my computer at home) from the command line, if I set a new redis key on the master, I get no update on the slave. The slave returns nil/null as if no key exists.
What's wrong? Aren't the master and the slave connected? Or is there a different way to connect to the master through the public IP/DNS?
Please note that I have also tried
slaveof ubuntu@ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com 6379
where ubuntu is the user I use to log in to the amazon ec2 instance
But this does not work either. I have not set any authentication restrictions, so the slave does not require any password to connect to the master. I have searched online, but there is rarely anything detailed on redis replication and related error handling.
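A couple of quick checks that usually narrow this down (hostname kept as written above): on the slave, redis-cli info replication shows whether master_link_status is up, and pinging the master's port from home shows whether it is reachable at all. Note too that many Redis packages ship with bind 127.0.0.1 in redis.conf, in which case the master only listens on localhost regardless of the security group.
# on the slave (home machine)
redis-cli info replication
# from the home machine, is the master reachable on 6379?
redis-cli -h ec2-xx-xxx-xxx-xx.us-west-2.compute.amazonaws.com -p 6379 ping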