I am trying to connect to an EC2 instance from Jenkins via SSH, but the connection always fails at the end. I am storing the SSH key in a global credential.
This is the task and shell, using the SSH Agent plugin:
This is how I store the key (the whole key has been pasted in)
If I use an SSH connection from my local PC, everything works fine. I am a newbie with Jenkins, so this is all very confusing to me.
You need to use the SSH plugin. Download the plugin via Manage Jenkins and configure the EC2 instance as an SSH remote host.
Follow the steps in this link:
https://www.thesunflowerlab.com/blog/jenkins-aws-ec2-instance-ssh/
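Either way, once the key is in place it is worth confirming basic connectivity from the job's shell step. A minimal sketch, assuming the ec2-user login; the host and the remote command are placeholders you would adjust to your setup:
# Hedged sketch: run inside the job's shell step once the stored key has been
# loaded (e.g. by the SSH Agent plugin); host and remote command are placeholders.
ssh -o StrictHostKeyChecking=no ec2-user@<your-ec2-public-dns> "echo connected from Jenkins"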
GCP has finally released managed Jupyter notebooks. I would like to be able to interact with the notebook locally by connecting to it, i.e. use PyCharm to connect to the externally configured Jupyter notebook server by passing its URL and token parameter.
The question also applies to AWS SageMaker notebooks.
AWS does not natively support SSH-ing into SageMaker notebook instances, but nothing really prevents you from setting up SSH yourself.
The only problem is that these instances do not get a public IP address, which means you have to either create a reverse proxy (with ngrok, for example) or connect to it via a bastion box.
Steps to make the ngrok solution work (a consolidated sketch follows this list):
download ngrok with curl https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip > ngrok.zip
unzip ngrok.zip
create a free ngrok account to get permission to open TCP tunnels
run ./ngrok authtoken <your_token> to authenticate
start the tunnel with ./ngrok tcp 22 > ngrok.log & (the & puts it in the background)
the logfile will contain the URL, so you know where to connect to
create a ~/.ssh/authorized_keys file (on SageMaker) and paste in your public key (likely ~/.ssh/id_rsa.pub from your computer)
ssh by calling ssh -p <port_from_ngrok_logfile> ec2-user@0.tcp.ngrok.com (or whatever host they assign to you; it's going to be in ngrok.log)
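Here are the steps above consolidated into a single sketch to run on the SageMaker instance, assuming the ngrok 2.x CLI (where the subcommand is authtoken) and an NGROK_TOKEN value you supply yourself:
# Hedged sketch; NGROK_TOKEN is a placeholder for your own token.
cd /tmp
curl https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip > ngrok.zip
unzip -o ngrok.zip
./ngrok authtoken "$NGROK_TOKEN"     # token from your free ngrok account
./ngrok tcp 22 > ngrok.log &         # the & keeps the tunnel in the background
cat ngrok.log                        # the log contains the host/port to ssh to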
If you want to automate it, I suggest using lifecycle configuration scripts.
Another good trick is wrapping downloading, unzipping, authenticating and starting ngrok into some binary in /usr/bin so you can just call it from SageMaker console if it dies.
It's a little bit too long to explain completely how to automate it with lifecycle scripts, but I've written a detailed guide on https://biasandvariance.com/sagemaker-ssh-setup/.
On AWS, you can use AWS Glue to create a development endpoint, and then you create the SageMaker notebook from there. A development endpoint gives you access to your Python or Scala Spark REPL via SSH, and it also allows you to tunnel the connection and access it from any other tool, including PyCharm.
For PyCharm Professional we have even tighter integration, allowing you to SFTP files and debug remotely.
And if you need to install any dependencies on the notebook, apart from doing it directly in the notebook, you can always choose New > Terminal and you will get a connection to that machine directly from your Jupyter environment, where you can install anything you want.
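For reference, and going from memory of the AWS Glue documentation, reaching a development endpoint over SSH looks roughly like the sketch below; the key file and endpoint address are placeholders, and the gluepyspark session command and the forwarded address/port are assumptions based on the Glue docs rather than something from this thread:
# Hedged sketch: open the PySpark REPL on a Glue development endpoint over SSH
ssh -i my-dev-endpoint-key.pem glue@<dev-endpoint-public-address> -t gluepyspark

# Or forward a local port to the endpoint so other tools (such as PyCharm) can reach it
ssh -i my-dev-endpoint-key.pem -NTL 9007:169.254.76.1:9007 glue@<dev-endpoint-public-address>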
There is a way to SSH into a SageMaker notebook instance without having to use a third-party reverse proxy like ngrok, set up an EC2 bastion, or use AWS Systems Manager. Here is how you can do it.
Prerequisites
Use your own VPC and not the VPC managed by AWS/Sagemaker for the notebook instance
Configure an ingress rule in the security group of your notebook instance to allow SSH traffic (port 22 over TCP)
How to do it
Create a lifecycle configuration script that is executed when the instance starts
Add the following snippet inside the lifecycle script:
INSTANCE_IP=$(/sbin/ifconfig eth2 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo "SSH into the instance using : ssh ec2-user#$INSTANCE_IP" > ~ec2-user/SageMaker/ssh-instructions.txt
Add your public SSH key to /home/ec2-user/.ssh/authorized_keys, either manually via the terminal in the JupyterLab UI, or inside the lifecycle script above (see the combined sketch below)
When your users open the Jupyter interface, they will find the ssh-instructions.txt file, which gives the host and command to use: ssh ec2-user@<INSTANCE_IP>
If you want to SSH from a local environment, you'll probably need to connect to your VPN that routes your traffic inside your VPC.
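A minimal sketch of what the on-start lifecycle configuration script could look like, combining the snippet above with the authorized_keys step; the public key shown is a placeholder you would replace with your own:
#!/bin/bash
# Hedged sketch of an on-start lifecycle configuration script for the notebook instance.
set -e

# Write the instance IP and connection command where users can find it in Jupyter
INSTANCE_IP=$(/sbin/ifconfig eth2 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo "SSH into the instance using : ssh ec2-user@$INSTANCE_IP" > ~ec2-user/SageMaker/ssh-instructions.txt

# Append a (placeholder) public key so your users can authenticate over SSH
mkdir -p /home/ec2-user/.ssh
echo "ssh-rsa AAAA...your-public-key... you@example.com" >> /home/ec2-user/.ssh/authorized_keys
chmod 600 /home/ec2-user/.ssh/authorized_keys
chown -R ec2-user:ec2-user /home/ec2-user/.ssh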
GCP's AI Platform Notebooks automatically creates a persistent URL which you can use to access your notebook. Is that what you were looking for?
Try using CreatePresignedNotebookInstanceUrl to access your notebook instance using a URL.
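From the AWS CLI, the equivalent call would be something like the following; the notebook instance name is a placeholder:
# Hedged example: generate a presigned URL for an existing notebook instance
aws sagemaker create-presigned-notebook-instance-url \
    --notebook-instance-name my-notebook-instance \
    --session-expiration-duration-in-seconds 1800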
I created a GCP VM a month ago and have been connecting to the VM instance through the GCP console in the browser. It worked fine for the past month until the VM restarted. I did not create an SSH key or edit metadata; everything so far uses the default settings. I cannot establish a connection after the VM restart. The browser keeps telling me it cannot establish the connection. I can ping this VM from another VM through the VPC. Any advice on this? Thanks in advance.
There are several ways to connect to a Linux instance via SSH. You can connect to an instance via the terminal. You can connect via the Cloud Console web UI, which is in general the most convenient way to connect to an instance. Also, you can use the Google Cloud SDK and run the command below to connect to an instance via SSH:
$ gcloud compute ssh [INSTANCE_NAME]
You can also use Cloud Shell to connect to your instance from the Cloud Console web UI by using the same command as above. You can connect via the serial console using the Google Cloud Platform Console, the gcloud command-line tool, or a third-party SSH client.
The serial console authenticates users with SSH keys. Specifically, you must add your public SSH key to the project or instance metadata, and store your private key on the local machine from which you want to connect. There are other advanced methods to connect to an instance which you can find at this link.
By default, the gcloud compute command-line tool uses the $USER variable to add users to the /etc/passwd file for connecting to virtual machine instances using SSH. You can specify a different user by running gcloud compute ssh [USER]@[INSTANCE_NAME], and a different key with the --ssh-key-file PRIVATE_KEY_FILE flag. Depending on your use case and convenience, you can use any method consistently.
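For example (instance name, zone and key path are placeholders):
# SSH as a specific user with a specific private key
gcloud compute ssh my-user@my-instance --zone us-central1-a --ssh-key-file ~/.ssh/my_key

# Or connect through the serial console (requires your public key in metadata)
gcloud compute connect-to-serial-port my-instance --zone us-central1-a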
If you fail to connect to your instance after following these methods, then I would suggest checking this troubleshooting page for SSH and following the instructions that match your use case.
Yesterday, I updated my Django website (on AWS EC2) to HTTPS by using Let's Encrypt. Everything works well. The website has the HTTPS green icon as expected.
Today when I try to connect to my instance using SSH, the connection keeps hanging. Finally, it gives a message like "ssh: connect to host ec2-34-202-93-189.compute-1.amazonaws.com port 22: Resource temporarily unavailable".
I thought it might be a security group problem with this instance, so I double-checked the security group settings: the SSH, HTTP and HTTPS ports are all open correctly. I created another instance to test whether there was any problem on my side; the new instance connected successfully. Then I attached the new instance to the security group I had made for the previous instance, and it connected. Then I attached the previous instance to the new security group I had made for the new instance, and the connection froze again. I also tried to connect with PuTTY and it did not work either.
Now I am really confused. My local machine is Windows Subsystem for Linux. My EC2 instance is Ubuntu 16. I am using Nginx as the web server. My SSH command is "ssh -i blog_project.pem ubuntu@ec2-34-202-93-189.compute-1.amazonaws.com".
Here is my security group setup for the instance.
This is the result of the command "ssh -vvv -i blog_project.pem ubuntu@ec2-34-202-93-189.compute-1.amazonaws.com"
BTW, is there any way I can log in to my instance without an SSH connection? Is there anything like a console or shell inside AWS that I can use to reach my instance?
Check if the instance still exists on AWS; maybe a new one was created with a different Public DNS (xxxx.compute-1.amazonaws.com) than the one you are using in your command.
I am setting up a Node.js server on AWS EC2 and connecting with PuTTY. I found instructions on how to configure PuTTY, but I am stuck filling in the hostname of the EC2 instance in PuTTY. What should the hostname be? Can anyone help?
Screenshot of PuTTY config:
Note: I have an EC2 instance launched which I want to connect to with this.
You can enter the public IP of your EC2 instance there, which you can find in the AWS Management Console. Attached is a screenshot showing this.
Let me know if you are not able to connect with this method.
After generating the .ppk key, go to SSH --> Auth --> browse for the .ppk key,
save it, load it, then open.
Log in with ec2-user.
If you want to give it a try, we developed an alternative CLI for AWS that makes this much easier: awless.
It should work on Windows too, and with awless you don't need to supply either your IP address or your username; just run awless ssh i-1234 or awless ssh my-instance-name.
Note that you may also need to add: -i path/to/your/key.pem if the key was not created with awless.
I am using the Mac terminal and I want to connect my machine to an EC2 server instance in AWS with SSH. Since I am using Mac OS X, it is not necessary to use PuTTY. The problem is that the key I downloaded has the .ppk extension, but the command I need to run in the terminal requires a .pem file. When I tried to run it that way, I got permission denied. Can someone tell me what to do in this case? Do I have to change the permissions, or convert my key from .ppk to .pem?
You need to know the folder of the .pem file you downloaded, and then follow the steps below:
download the key pair (.pem file)
cd to the key pair (.pem file) location (note that you can use an absolute path to the key pair instead)
chmod 400 [your_key_name].pem (note that for SSH to work, your key must not be publicly viewable; use this command if needed)
ssh -i "[your_key_name].pem" ec2-user@[your ec2 dns name]
You will have to convert your .ppk file to a .pem file; follow the steps here:
http://www.ramsmusings.com/2014/02/20/converting-a-putty-ppk-file-to-a-pem-file-for-accessing-aws-ec2-instances/
After you convert it, connect to the instance using the SSH command and the converted .pem file, as described here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
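Following those guides, and assuming PuTTY's puttygen tool is available on your machine (on macOS it can be installed, e.g. via Homebrew's putty package), the conversion and connection typically look like this; the key name and host are placeholders:
# Convert the PuTTY key to OpenSSH format, lock down its permissions, then connect
puttygen my-key.ppk -O private-openssh -o my-key.pem
chmod 400 my-key.pem
ssh -i my-key.pem ec2-user@<your-ec2-public-dns>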
Quick answer
Instead of working directly with SSH keys, I would consider working with AWS EC2 Instance Connect.
It saves you the management of SSH keys and is much safer than sharing SSH keys for each EC2 machine between team members.
After authenticating with your AWS credentials (by referring to a profile in the .aws/config file or using environment variables), you can connect to the instance very easily by providing the instance ID:
./bin/mssh <instance-ID>
Installation of this tool can be done via pip or directly from the GitHub repo.
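For example, installing from PyPI and connecting (the instance ID below is a placeholder):
# Hedged example: install the EC2 Instance Connect CLI and connect by instance ID
pip install ec2instanceconnectcli
mssh i-0123456789abcdef0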
Additional information
Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using Secure Shell (SSH).
With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys.
When you connect to an instance using EC2 Instance Connect, the Instance Connect API pushes a one-time-use SSH public key to the instance metadata where it remains for 60 seconds. An IAM policy attached to your IAM user authorizes your IAM user to push the public key to the instance metadata.
The SSH daemon uses AuthorizedKeysCommand and AuthorizedKeysCommandUser, which are configured when Instance Connect is installed, to look up the public key from the instance metadata for authentication, and connects you to the instance.
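If you prefer your own SSH client, you can perform that key push manually with the AWS CLI; a sketch, where the instance ID, availability zone, key paths and host are placeholders:
# Push a public key that stays valid for about 60 seconds
aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-0123456789abcdef0 \
    --availability-zone us-east-1a \
    --instance-os-user ec2-user \
    --ssh-public-key file://$HOME/.ssh/id_rsa.pub

# Connect right away with the matching private key
ssh -i ~/.ssh/id_rsa ec2-user@<instance-public-dns>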
You can use Instance Connect to connect to your Linux instances using a browser-based client, the Amazon EC2 Instance Connect CLI, or the SSH client of your choice.
(*) Amazon Linux 2 2.0.20190618 or later and Ubuntu 20.04 or later come preconfigured with EC2 Instance Connect.
For other supported Linux distributions, you must set up Instance Connect for every instance that will support using Instance Connect. This is a one-time requirement for each instance.
Links:
Connect using EC2 Instance Connect
Securing your bastion hosts with Amazon EC2 Instance Connect