I am having trouble with Git Bash: I cannot start the ssh-agent so that I can connect to my EC2 instance. I have uninstalled and freshly reinstalled Git Bash many times, but the same problem persists.
Not long ago I did not have this problem and could easily connect to my instances. I don't know what actually happened, but when I run the ssh-agent command or try to add a key pair, the terminal just stays blank and nothing happens.
I have no trouble connecting to my instances via Windows PowerShell, so I'm assuming it's a Git Bash-only problem.
Please advise; I have been stuck on this for many days now.
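For reference, the commands I mean are the usual ones for starting the agent and adding a key in Git Bash (the key path here is just a placeholder):
eval $(ssh-agent -s)
ssh-add ~/.ssh/my-key.pem
Normally the first one prints something like "Agent pid 1234", but for me the prompt just hangs with no output.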
Related
I launched an instance and downloaded my secret key. I've attempted this on two different devices and instances. I'm trying to connect to the instance so I can upload files. Whenever I attempt to connect, this permission-denied message displays.
Note: I've downloaded and used OpenSSH.
PS C:\WINDOWS\system32> ssh -i C:\Users*\Downloads*key.pem @ec2----.us-west-2.compute.amazonaws.com
The authenticity of host 'ec2----.us-west-2.compute.amazonaws.com (...)' can't be established.
ECDSA key fingerprint is SHA256:.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ec2----.us-west-2.compute.amazonaws.com,...' (ECDSA) to the list of known hosts.
*@ec2----**.us-west-2.compute.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
PS C:\WINDOWS\system32>
P.S. This is my first post, so constructive criticism on etiquette is welcome :)
I recently started my first job (an internship, really) in the IT field, about three months ago at a start-up. I'm hoping to eventually move into cloud security, OSINT, DevSecOps, or web development. I'm passionate about information security and open-source software.
I followed this tutorial from Amazon on how to connect to my instance:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/openssh.html
Looks like it's an issue with the .pem file permissions. Check this video and see if you can resolve the error.
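If the key is on Windows, tightening its permissions from PowerShell usually looks something like this (key.pem is a placeholder; run it against your actual key path):
icacls.exe key.pem /reset
icacls.exe key.pem /grant:r "$($env:USERNAME):(R)"
icacls.exe key.pem /inheritance:r
That leaves only your own user with read access, which is what OpenSSH on Windows expects for a private key.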
You can use PuTTYgen to generate a .ppk file, which is helpful for SSHing from Windows.
If you want to keep using .pem files, MobaXterm is good software to use.
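If you go the PuTTY route, the command-line build of PuTTYgen can do the conversion in one step (the GUI does the same via Load and then Save private key); the key names here are placeholders:
puttygen key.pem -o key.ppk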
Welcome to the community; it's nice to hear you have started your first job!
To get you started on connecting to an AWS Linux instance, there is a nice KB article, Connect to your Linux instance from Windows using Windows Subsystem for Linux; there is also one that uses PuTTY (look here), and another SO thread on the topic.
There are a few things you need to cover as prerequisites:
Verify that the instance is ready
Verify the general prerequisites for connecting to your instance
Install the Windows Subsystem for Linux (WSL) and a Linux distribution on your local computer
Copy the private key from Windows to WSL
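Copying the key into WSL and locking down its permissions typically looks like this (the Windows user name and key file name are placeholders):
cp /mnt/c/Users/<user_name>/Downloads/key-pair-name.pem ~/.ssh/
chmod 400 ~/.ssh/key-pair-name.pem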
Then use:
ssh -i /path/key-pair-name.pem instance-user-name@instance-public-dns-name
OR
ssh -i /path/key-pair-name.pem instance-user-name@instance-IPv6-address
Or from your Windows command line (PowerShell here):
PS C:\WINDOWS\system32> ssh -i C:\Users\<user_name>\Downloads\testkey.pem ec2-user@ec2----.us-west-2.compute.amazonaws.com
OR
PS C:\WINDOWS\system32> ssh -i C:\Users\<user_name>\Downloads\testkey.pem ec2-user@<Some_IP_Address>
Better yet, use MobaXterm and copy your key in there, and you are all set.
I'm having difficulty setting this up correctly, and I'm burning through AWS server time while I try to make it work. I have segmentation code that is heavily memory-intensive, and I'd like to temporarily spin up an AWS server with 192 GB of RAM to run it. I understand that this is possible using Docker, but the PyCharm instructions are practically non-existent with respect to the Docker setup needed to tie it together (they reference existing code rather than showing how to assemble it from scratch). What would the docker run command on the server look like to enable a connection on port 2375?
EDIT: I am using PyCharm Professional.
UPD: Checking the PyCharm options, I found that there is an option to use Docker Machine. This seems to be exactly what you need. With Docker Machine you can have Docker spin up an EC2 instance for you with proper security out of the box. Read the official documentation on how to get started here, and the AWS driver options to learn how to set the EC2 instance type, AMI, and other settings here.
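A rough sketch of what that looks like (the region, instance type, and machine name are just examples; an r5.6xlarge gives you 192 GiB of RAM, and your AWS credentials are assumed to be available in the environment):
docker-machine create --driver amazonec2 --amazonec2-region us-west-2 --amazonec2-instance-type r5.6xlarge segmentation-box
eval $(docker-machine env segmentation-box)
docker info
After the eval, your local docker client (and therefore PyCharm's Docker integration) talks to the daemon on the EC2 instance over a TLS-secured connection, so you don't need to expose port 2375 at all.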
Original post:
To enable this feature you have to run the Docker daemon with the '-H' option:
sudo dockerd -H tcp://0.0.0.0:2375
You may read more on that in the Docker docs: https://docs.docker.com/engine/reference/commandline/dockerd/
Beware though: for EC2 you may also need to open that port in the instance's security group, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
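For example, opening the port only to your own IP with the AWS CLI would look roughly like this (the security group ID and CIDR are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2375 --cidr 203.0.113.5/32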
I also want to add that what you want to achieve isn't great from a security perspective. Exposing the Docker socket like that is an invitation for bad guys to throw a party on your EC2 instance. But since you mentioned that this is temporary...
I am stuck on a technical issue in a project, and I think the forum could help me out.
I have an EC2 instance (type p2.xlarge) running on AWS. I cloned a repository onto this instance that requires PyTorch and CUDA dependencies (this part has been taken care of).
Now, the issue is that I want to work on and run this code base (which is on the AWS instance) from my local PyCharm IDE. In short, I don't have the resources on my laptop to run the repository, so I have to run it on an AWS instance, but for debugging purposes a local IDE would be a great option.
Is it possible to do that? In other words, I can SSH into the AWS instance and run code, but everything is done through the command line. Could I instead SSH through PyCharm, see the remote code on my local machine within PyCharm, and change, debug, or run it as if it were local, while it actually executes on the instance?
Please suggest a solution to it.
Thanks in advance.
EDIT-1:
After following @Cromulent's suggestion, I have arrived here:
Setting the remote:
Upload happening between the local and remote repos.
I still don't understand the requirement of syncing the local and remote folders, when I only want to open the remote folder in my PyCharm IDE and work on it.
I think that after this setup, I have to change the code in the local copy and PyCharm will sync it to the remote copy. But how do I run the remote code in PyCharm in this scenario, using the resources (the GPUs) of the remote instance rather than my local machine? I am only syncing it; to run it I would still have to SSH in through the command line and run the script, which does not serve the purpose.
EDIT-2:
After @Cromulent's suggestions:
Actually, it did work, but I am still not able to run the remote code from my local machine.
I am getting the error below when running any remote script. If I run the same script over SSH in the terminal, it runs normally. I tried to fix the problem using this post on Stack Overflow, but that didn't work either.
ssh://ubuntu@ec2-52-41-247-169.us-west-2.compute.amazonaws.com:22/home/ubuntu/anaconda3/bin/python -u <08ad9807-3477-4916-96ce-ba6155e3ff4c>/home/ubuntu/InsightProject/scripts/download_flownet2.py
/home/ubuntu/anaconda3/bin/python: can't open file '<08ad9807-3477-4916-96ce-ba6155e3ff4c>/home/ubuntu/InsightProject/scripts/download_flownet2.py': [Errno 2] No such file or directory
Below is a screenshot of the problem:
PyCharm Professional supports remote Python interpreters (either the globally installed Python interpreter or a virtualenv). It works by creating an SSH connection to the server and then running the code on the remote host. The results are then displayed locally in PyCharm Professional. You can also do remote debugging as well.
You MUST be using the professional version of PyCharm though. The free community version does not support this feature.
You can find the documentation here:
https://www.jetbrains.com/help/pycharm/configuring-remote-interpreters-via-ssh.html
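Before pointing PyCharm at the remote interpreter, it can help to confirm the interpreter path on the instance over plain SSH, for example (the key path, user, and host are placeholders):
ssh -i /path/key-pair-name.pem instance-user-name@instance-public-dns-name "which python3"
The path it prints is what you enter as the remote interpreter in PyCharm; if the code should run in a conda or virtualenv environment, point PyCharm at that environment's python binary instead.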
One more solution is to deploy a Jupyter Notebook on your remote server. Then you will be able to use it from PyCharm Professional Edition.
Don't forget to add rules for the Jupyter port (e.g. allow 8888) in your AWS console (security group) and on the instance itself.
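For reference, a common way to run the notebook on the instance and reach it from your laptop is an SSH tunnel (the key path, user, and host are placeholders; with the tunnel you technically don't even need 8888 open in the security group):
jupyter notebook --no-browser --port=8888
ssh -i /path/key-pair-name.pem -N -L 8888:localhost:8888 instance-user-name@instance-public-dns-name
Then open http://localhost:8888 locally, or give that URL (plus the token) to PyCharm as the configured Jupyter server.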
To configure a remote interpreter for your notebook do this (source):
Open the Jupyter Notebook page of the Settings/Preferences dialog.
On this page, select or clear the Markdown cells rendering enabled option, and specify the username and password. Note that for single-user notebooks these fields are optional - leave them blank.
Fill in the username (for JupyterHub) and password.
Click the link Configure remote interpreter. You'll find yourself at the Project Interpreter page.
Configure the remote interpreter, as described in the section Configuring Python Interpreter.
You will want to configure a remote interpreter.
I tried the above approach, but it didn't work for me. I have edited my post so that I can get additional input from the community, but I didn't get any after the first answer was posted.
My friend actually figured out another way to fix the issue. He uses NoMachine on the local machine to open a connection to the remote desktop. Then you can install PyCharm directly on the remote machine and work there. I hope this will help others.
The solution is in his blog post. (Thanks to Shaobo Guan)
Another solution would be to use VNC instead of NoMachine.
I recently set up a Trellis/Bedrock/Sage environment. I like it so far, but I have been running into the issue that if I step away from my computer, I can't reconnect to my local production environment and have to start up my WordPress install from scratch. Is there a way to "save" a Vagrant box so I can close my computer and not have to vagrant destroy, then vagrant up each time?
Thanks,
You could probably Save State directly in VirtualBox to accomplish what you want:
Also, I was able to find this comment that helps answer your question, which may also do the trick:
In order to keep the same VM around and not lose its state, use the vagrant suspend and vagrant resume commands. This will make your VM survive a host machine reboot with its state intact.
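In practice the workflow is just:
vagrant suspend
vagrant resume
And if you do want a full shutdown, vagrant halt followed later by vagrant up will also bring the box back without destroying it.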
https://serverfault.com/users/210885/
We have used the command below to run on boot on an AWS machine. It ran chef-client and the software was installed, but I am not able to find the server in the Chef console when I search with the VM's IP address.
chef-client -r 'role[test]'
Can someone explain how the chef-client -r option works?
The -r flag runs chef-client as normal but replaces the run-list coming from the Chef Server with the one specified on the command line. The two likely reasons for your confusion are either that chef-client is hitting an error which prevents the node data from being saved back to the server, or the way you are searching isn't correct. If you are looking for ipaddress:1.2.3.4, make sure you are using the private IP address (I think).
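For example, from a workstation with knife configured, the search would look something like this (the IP is a placeholder; use the node's private address):
knife search node "ipaddress:10.0.1.23"
Running knife node list is also a quick way to confirm that the node registered with the Chef Server at all.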