So, I have an Ubuntu EC2 instance running and wanted to establish an SSH connection (I am an Ubuntu 16.04 user). However, I do not think that I get the right response when I try:
huzeyfekiran@huzeyfekiran-ThinkPad-L450:~/Downloads$ chmod 400 mykeypair.pem
huzeyfekiran@huzeyfekiran-ThinkPad-L450:~/Downloads$ ssh -i mykeypair.pem ubuntu@ec2-18-219-42-124.us-east-2.compute.amazonaws.com
The authenticity of host 'ec2-18-219-42-124.us-east-2.compute.amazonaws.com (18.219.42.124)' can't be established.
ECDSA key fingerprint is SHA256:T9J5/BH9RmALnv/6n4rUu0tw8nIFHn8zYvM9BwwP3fA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-18-219-42-124.us-east-2.compute.amazonaws.com,18.219.42.124' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1047-aws x86_64)
* Documentation:
* Management:
* Support:
Get cloud support with Ubuntu Advantage Cloud Guest:
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
I think that I do not get the right response because when I try to connect to the EC2 instance via Jupyter Notebook, the browser can't establish a connection, and I am sure that I have the firewall turned off. So, is there a problem with the SSH?
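For what it's worth, the SSH session above looks healthy. If the goal is to reach a Jupyter notebook running on the instance, one option is to forward its port over the same SSH connection rather than hitting the instance from the browser directly. A minimal sketch, assuming Jupyter listens on its default port 8888:
# Forward local port 8888 to port 8888 on the instance (8888 is an assumption;
# use whatever port Jupyter reports when it starts)
ssh -i mykeypair.pem -L 8888:localhost:8888 ubuntu@ec2-18-219-42-124.us-east-2.compute.amazonaws.com
# Then browse to http://localhost:8888/ locally while the tunnel is open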
On Windows systems, using VMware PowerCLI, I can connect to a vCenter server using appropriate credentials:
Connect-VIServer myvcenter.example.com
Once connected, I can run Get-VM to see a list of registered VMs. Note that I do not need to know the names of the ESX hosts connected to this vCenter server.
When using vmware-cmd, for the comparable -l option, both of these options need to be provided:
-H <host>
Specifies an ESX/ESXi host or a vCenter Server system.
-h | --vihost <esx_host>
Specifies a target host if the host specified by -H <host> is a vCenter Server system.
Why is that so?
How can I list VMs if one does not know the ESX hosts on this vCenter (without using VMware PowerCLI)? I am trying to get this working over SSH on a GNU+Linux system.
Versions:
vSphere SDK for Perl version: 6.5.0
Script 'vmware-cmd' version: 6.5.0
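For reference, the invocation being asked about looks roughly like this (hostnames are placeholders; credentials come from the usual vSphere CLI connection options or are prompted for):
# Listing VMs with vmware-cmd requires naming both the vCenter (-H) and a
# specific ESX host (--vihost), which is exactly what the question is about
vmware-cmd -H myvcenter.example.com --vihost esx01.example.com -l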
I simply discarded VMware CLI and started using https://github.com/snobear/ezmomi. After setting up a config.yml file with only connection-related options, I was able to list VMs (without knowing the ESX hosts).
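A rough sketch of what that looks like (the config keys and the list subcommand are from the ezmomi README as I remember it, so treat the exact names as assumptions):
pip install ezmomi
# ~/.config/ezmomi/config.yml holds only connection details, roughly:
#   server: myvcenter.example.com
#   username: admin@vsphere.local
#   password: secret
#   port: 443
# List every VM registered in the vCenter, without naming any ESX host
ezmomi list --type VirtualMachine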
vmware-cmd is the legacy way that was provided back in the 3.x days to do CLI tasks directly from the ESX service console.
Since you're using vSphere 6.5, I would look at the new CLI called Datacenter CLI (DCLI) instead: http://pubs.vmware.com/vsphere-6-5/index.jsp?topic=%2Fcom.vmware.dcli.cmdref.doc%2Fintro.html
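I haven't verified the exact syntax, but a DCLI session for listing VMs should look roughly like this (the server URL and command namespace are my assumptions, so check them against the reference above):
# Start an interactive DCLI session against the vCenter
dcli +server https://myvcenter.example.com/api +i
# Inside the session, list VMs across the whole vCenter (no ESX host needed)
com vmware vcenter vm list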
I have installed the latest versions of VirtualBox v5.2.6 and Vagrant v2.0.1 on a Windows machine with an Intel Core i5-4210U processor @ 1.70GHz (2.40GHz). I have added the Homestead box by running the command:
vagrant box add laravel/homestead
But on running vagrant up, it runs fine until this point:
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within the
configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that Vagrant
had when attempting to connect to the machine. These errors are
usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes. Verify
that authentication configurations are also setup properly, as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
On running vagrant ssh-config I get:
Host homestead-7
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile C:/Users/MyUser/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
I have looked around and tried different things, among others uninstalling and reinstalling both Vagrant and VirtualBox, but I couldn't find a solution. How can I get this working?
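A few generic checks that may narrow down where the boot stalls (the commands below are standard Vagrant/VirtualBox tooling rather than anything Homestead-specific):
vagrant destroy -f            # throw away the half-booted VM
vagrant up --debug            # boot again with verbose logging
VBoxManage list runningvms    # confirm VirtualBox actually started the guest
# If the box simply boots slowly, raising config.vm.boot_timeout (for example
# to 600 seconds) in the Homestead Vagrantfile is another thing to try.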
I'm trying to set up an SSH tunnel to my server (currently an Ubuntu 16.04 VM on Azure) to provide safe access to the Django applications running on it.
I was able to imitate the production environment with Apache and WSGI and it works pretty well, but since I'm still developing the application I don't want to make it available to the broader public right now - only to a handful of people.
To the point: when I set up the SSH tunnel using PuTTY on Windows 10 (local port 8000 to localhost:8000) and I open http://localhost:8000/, I get the following error:
"Not Found HTTP Error 404. The requested resource is not found.".
How can I make it work? I run the server using manage.py runserver 0:8000.
I found somewhere that the error may be due to the application not having access to SSH files, but I don't know whether that's the issue here (or how to change it).
Regards,
Dominik
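For context, the PuTTY tunnel described above corresponds to this plain OpenSSH invocation (username and hostname are placeholders):
# Local port 8000 is forwarded to port 8000 on the VM, where
# manage.py runserver 0:8000 is listening
ssh -L 8000:localhost:8000 azureuser@my-vm.example.com
# Then open http://localhost:8000/ locally while the tunnel is up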
After hours of trying I was able to solve the problem.
First of all, I made sure PuTTY connects to the server and creates the desired tunnel. To do that, I right-clicked on the PuTTY window's title bar and clicked Event Log. I checked the log and found the following error:
Local port 8000 forwarding to localhost:8000 failed: Network error:
Permission denied
I was able to solve it by choosing another local port (9000 instead of 8000 in my case).
Second, I edited the sshd_config file: sudo vi /etc/ssh/sshd_config
and added these three lines:
AllowAgentForwarding yes
AllowTcpForwarding yes
GatewayPorts yes
I saved the file and restarted the ssh service:
sudo service ssh stop
sudo service ssh start
Now when I visit localhost:9000 everything works just fine.
I'm trying to install the enterprise edition of Neo4j on an existing EC2 (Amazon Linux) instance. So far I've:
run wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME (a rough shell sketch of these steps follows the config snippet below)
Then I went into the config file neo4j.properties to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
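A rough shell sketch of the steps so far (the download URL is omitted here, just as in the post; the archive name and target path are placeholders):
wget "<link to enterprise tarball>"    # download the enterprise edition
tar -xzf neo4j-enterprise-*.tar.gz     # untar the file
mv neo4j-enterprise-* $NEO4J_HOME      # rename/move the folder to NEO4J_HOME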
EDITED: Christophe Willemsen pointed out that, for my original error, I had forgotten to restart the server at that point, but I was still unable to access the web server while it was running. So, to make it clearer, I've edited the rest of the post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And started the server:
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorial or help I've found assumed I was starting from scratch with a new instance. If someone could provide some instructions for installing, or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
so that the DB listens on an external interface, not just localhost, and you have to open the port (7474) in your firewall rules.
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html
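If the EC2 security group is what is blocking 7474, one way to open it is via the AWS CLI (the group ID and source CIDR below are placeholders; restricting the source to your own IP is safer than 0.0.0.0/0):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 7474 \
    --cidr 203.0.113.10/32   # replace with your own public IP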
I cannot connect to any machine I create on EC2 that belongs to the C3 family.
I have no problem connecting with SSH to any other type of machine.
What do I need to do to further debug this?
The steps I've taken:
I create a t1.micro machine with the same image (an Ubuntu 13.10 64-bit AMI, ami-2f252646), the same key pair, and the same security group. It works fine.
I ssh to a t1.micro machine, and then ssh again from that machine to the C3 machine. That tells me the machine is up and running and my problem is connecting to the C3 machine from my PC directly (going through the office router).
I try to telnet to the t1.micro machine on port 22 - I get a connection.
I try to telnet to the C3 machine on port 22 - does not work.
I try to telnet to the C3 machine on port 22 from another PC, not from the office - I get a connection.
I tried this with several C3-type machines, all with the same result.
So:
The machine is up and running, and can accept connections.
There is obviously a problem in the coupling between my office connection and the C3 machine.
My office connection works fine with any other type of m1/c1/g1/m2 machine, so it's only the "3" family that has that problem.
I'm at a loss on how to solve this, or even debug this further. Right now I'm tunneling to my machine through a proxy t1.micro machine...
My operating system is itself Ubuntu 13.10
Here is a gist link to the output of my ssh -vvv command
It seems to get stuck at debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
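Getting stuck at expecting SSH2_MSG_KEX_ECDH_REPLY is a symptom often associated with packet-size (MTU) problems on the path, so one quick check from the office side (interface name and hostname below are placeholders):
# Show the MTU configured on the office-side interface
ip link show eth0
# Probe the largest ICMP payload that passes without fragmentation;
# 1472 = 1500 - 28 bytes of IP/ICMP headers
ping -M do -s 1472 ec2-xx-xx-xx-xx.compute.amazonaws.com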
Solved it, with the help of AWS professional (paid) services.
The problem is with some of the authentication protocols.
I have a high MTU (maximum transmission unit) of 9000 configured on my network connection; I need this to access big chunks of data.
The solution is either to lower the MTU to below 1400 (which is not good for me, because I need it), or to change the SSH config, which worked for me.
sudo vi /etc/ssh/ssh_config
and uncomment the lines starting with Ciphers and MACs
mine says:
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
This is a bug in Ubuntu from 12.10 onwards (it works in 12.04 and below).
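Two related tweaks, in case changing the system-wide client config is undesirable (the interface name and host pattern below are assumptions):
# Option 1: drop the client interface MTU below the problematic size
sudo ip link set dev eth0 mtu 1400
# Option 2: scope the cipher/MAC override to EC2 hosts only, in ~/.ssh/config
# instead of /etc/ssh/ssh_config:
#   Host *.compute.amazonaws.com
#       Ciphers aes128-ctr,aes192-ctr,aes256-ctr
#       MACs hmac-sha1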
There is another option besides using just the provided .pem with an AMI that you've created yourself.
Go back and spin up the current image on an instance size that you know works. SSH into the instance as the main user, then create a secondary user and add them to the sudo group.
sudo useradd -d /home/myuser -m myuser
sudo usermod -a -G sudo myuser
Then change to the new user, go to their home directory, and create a .ssh folder; change its permissions to 0700. Go inside the .ssh directory and use vi (or your text editor of choice) to create an authorized_keys file.
Insert your PUBLIC key contents into this file.
Change the permissions on this file to 0600.
sudo su myuser
cd ~
mkdir .ssh
chmod 0700 .ssh
cd .ssh
vi authorized_keys
chmod 0600 authorized_keys
Exit out of the user. Before you exit the box, you probably want to edit (as sudo) /etc/passwd and change the user's shell from sh to bash.
Exit out of the box and test connecting with your new user before creating your new AMI.
Now spin up the new AMI as a C3 instance and connect with your new user.
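A hedged sketch of the final steps (key path, user name, and hostname are placeholders; chsh is simply an alternative to hand-editing /etc/passwd):
# On the original instance, change the login shell without editing /etc/passwd
sudo chsh -s /bin/bash myuser
# From your workstation, connect to the new C3 instance as the secondary user,
# using the private key that matches the public key placed in authorized_keys
ssh -i ~/.ssh/myuser_key myuser@ec2-xx-xx-xx-xx.compute.amazonaws.com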