Setting up passwordless ssh failed for all the HAWQ hosts - hdfs

We have 3 nodes and are trying to set up HDFS and Pivotal HAWQ with Ambari. I have already enabled passwordless SSH on all 3 machines, but when I start the HAWQ service I get the error "Setting up passwordless ssh failed for all the HAWQ hosts". Please help me resolve this issue.

On all of your hosts, edit /etc/ssh/sshd_config and change "PasswordAuthentication no" to "PasswordAuthentication yes". This can also be done with sed:
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
Then restart sshd on all of the hosts:
sudo /etc/init.d/sshd restart
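On systemd-based hosts, the equivalent would be (note the unit is named ssh rather than sshd on Debian-family systems):
sudo systemctl restart sshd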
Now you can proceed with the installation of HAWQ. The installer uses a command called gpssh-exkeys. This process uses password authentication to communicate with the hosts so that it can create and exchange SSH keys for the gpadmin account. Once the keys have been exchanged, the gpadmin account no longer needs password authentication.
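If you want to run the key exchange by hand, gpssh-exkeys takes a host file listing every cluster host, one hostname per line (the file name hawq_hosts is illustrative):
gpssh-exkeys -f hawq_hosts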
Also, after the installation is complete, you can revert back and disable password authentication if you like.
Lastly, I've asked the PM for HDB at Pivotal to enhance Ambari to do these steps for you automatically. There is a similar process for disabling iptables during the installation of Hadoop, so this would work the same way: Ambari would enable password authentication, install HDB, and then disable password authentication again.

Related

What is aws redhat root password

I am new to AWS. I launched a Red Hat instance on AWS with the free tier and logged in with an SSH client.
My login looks like this: ec2-user@ec.....bla.com
That means I am logged in as ec2-user. When I try to start a service inside the instance, it asks me for the root password.
Can anyone tell me what the root password is? I couldn't figure this out yet.
Here is an example:
[ec2-user@ip-172--my-aws-ip---34 ~]$ systemctl start docker.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to start 'docker.service'.
Authenticating as: root
Password:
There is no usable root password on EC2 instances; instead, use sudo su - to switch to root and execute the above command, or run it directly with sudo:
sudo systemctl start docker.service
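As an aside, if you'd rather run Docker commands without sudo entirely, the usual approach is to add your user to the docker group (which the Docker package normally creates) and then log out and back in:
sudo usermod -aG docker ec2-user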

Unable to acess Keycloak via browser after configuring SSL/TLS load balancer

I currently have an AWS server set up with Docker to run the Keycloak container. For SSL/TLS, an AWS load balancer is configured to forward HTTPS/443 traffic to the container on port 8080, terminating the encrypted connection at the load balancer.
When creating the container with the following command, I am able to browse to and log into the keycloak service by browsing to the server's IP address.
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword jboss/keycloak
However, if I try to log into the server by browsing to the URL, I am redirected to http://default-host:8080/auth/admin/ and the browser shows a connection error page.
While trying to find a solution, I found how to pass Java options to the container when it is first run, and using the resources from this page I used the following command to start the container (URL replaced for privacy concerns):
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword -e JAVA_OPTS_APPEND="-Dkeycloak.frontendUrl=https://sso.IntendedURL.com" jboss/keycloak
However, this yields the same result when trying to browse to the page.
The main clue I have to go on right now is this line near the end of the output from the docker run command above:
19:23:00,039 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 67) WFLYUT0021: Registered web context: '/auth' for server 'default-server'
What I believe I need to do now is either change the configuration of the container after it has been created (I have been unable to edit files using docker exec, so this is less likely) or pass a Java option into the run command when the container is first started.
Please let me know if you have any questions or if I can provide any other information.
Thank you.
Environment information:
Operating system: Amazon Linux 2
Docker version: 19.03.13-ce, build 4484c46
Keycloak version: 12.0.1 (WildFly Core 13.0.3.Final)
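One avenue worth sketching (not from the original post, and untested here): the jboss/keycloak image documents a KEYCLOAK_FRONTEND_URL environment variable for the public URL and PROXY_ADDRESS_FORWARDING=true for running behind a TLS-terminating proxy, which would look roughly like:
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword -e KEYCLOAK_FRONTEND_URL=https://sso.IntendedURL.com/auth -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak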

Docker SSH login fails remotely

I've created a Docker container on an AWS server which runs an SSH service.
I relied on the following example: https://docs.docker.com/engine/examples/running_ssh_service/ and added my own logic to the Dockerfile.
When trying to log in to the container remotely, I get the password prompt, but the password I set for the SSH user does not work. When trying the exact same password over a local SSH connection (from within the AWS server to 127.0.0.1 -p exported_SSH_port), it works perfectly.
Any ideas?
There's a little bug in the Docker docs:
You should change
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
to
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
If the PermitRootLogin line ships commented out in the base image's sshd_config, the first pattern never matches, root login stays disabled, and the password is rejected exactly as described.
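For context, here is roughly how the corrected line sits in the example Dockerfile from the linked docs (an abridged sketch; 'screencast' is the docs' placeholder password):
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]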

How to create ftp (vsftpd) in google cloud compute engine?

How do I create an FTP server on Google Cloud Compute Engine? I can connect via SFTP without any issues, but my company uses software that connects via FTP to download an XML file from the server. Unfortunately, that software has no SFTP support.
All the examples I found on the internet connect via SFTP, not FTP.
Any ideas or tutorials?
I found a way to do this. Please advise whether there are any risks.
apt-get install vsftpd libpam-pwdfile
nano /etc/vsftpd.conf
And add the following inside the /etc/vsftpd.conf config file:
listen=YES
listen_ipv6=NO
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
nopriv_user=vsftpd
chroot_local_user=YES
allow_writeable_chroot=yes
guest_username=vsftpd
virtual_use_local_privs=YES
guest_enable=YES
user_sub_token=$USER
local_root=/var/www/$USER
hide_ids=YES
listen_address=0.0.0.0
pasv_min_port=12000
pasv_max_port=12100
pasv_address=888.888.888.888 # My server IP
listen_port=211
Next, configure PAM for vsftpd (/etc/pam.d/vsftpd): remove everything from the file and add these lines instead
auth required pam_pwdfile.so pwdfile /etc/ftpd.passwd
account required pam_permit.so
Create the main user that will be used by the virtual users to authenticate:
useradd --home /home/vsftpd --gid nogroup -m --shell /bin/false vsftpd
Once that is done we can create our users/passwords file.
htpasswd -cd /etc/ftpd.passwd helloftp
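Here -c creates the file and -d forces crypt() hashing, which pam_pwdfile expects. To add further virtual users later, drop -c so the file is not recreated (the username is illustrative):
htpasswd -d /etc/ftpd.passwd anotheruser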
Next, add the directories for the users, since vsftpd will not create them automatically. Because local_root is /var/www/$USER, the directory name must match the virtual user created above:
mkdir /var/www/helloftp
chown vsftpd:nogroup /var/www/helloftp
chmod +w /var/www/helloftp
Finally, start the vsftp daemon and set it to automatically start on system boot.
systemctl start vsftpd && systemctl enable vsftpd
Check the status to make sure the service is started:
systemctl status vsftpd
● vsftpd.service - vsftpd FTP server
Loaded: loaded (/lib/systemd/system/vsftpd.service; enabled)
Active: active (running) since Sat 2016-12-03 11:07:30 CST; 23min ago
Main PID: 5316 (vsftpd)
CGroup: /system.slice/vsftpd.service
├─5316 /usr/sbin/vsftpd /etc/vsftpd.conf
├─5455 /usr/sbin/vsftpd /etc/vsftpd.conf
└─5457 /usr/sbin/vsftpd /etc/vsftpd.conf
Finally, add firewall rules so the FTP ports are reachable from outside the cloud network.
Later, I changed the listen address from 0.0.0.0 to something more restrictive.
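A sketch of that rule with the gcloud CLI, matching the listen and passive ports from the config above (the rule name and wide-open source range are illustrative; narrow the range if you can):
gcloud compute firewall-rules create allow-vsftpd --allow=tcp:211,tcp:12000-12100 --source-ranges=0.0.0.0/0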
Yes, it is possible to host an FTP server on Google Cloud. In fact, I wrote an in-depth blog post about how to set up an FTP server on Google Cloud.
If your VM is based on Linux, you have to use an application like vsftpd to set up the FTP server.
Here are the steps:
Step 1: Deploy a Virtual Instance on Google Cloud
Step 2: Open SSH terminal
Step 3: Installing VSFTPD
Step 4: Create a User
Step 5: Configure vsftpd.conf file
Step 6: Preparing an FTP Directory
Step 7: FTP/S or FTP over SSL setup (optional)
Step 8: Opening Ports in Google Cloud Firewall
Step 9: Test and Connect

How do I connect to aws ec2 server from chromebook using the secure shell extension?

I am trying to connect to my EC2 instance from my Chromebook using the Secure Shell extension, but I keep getting the following error:
Loading NaCl plugin... done.
ssh: connect to host (public DNS) port 22: Connection refused
NaCl plugin exited with status code 255.
I have been following the steps on this site, but with no success:
http://www.mattburns.co.uk/blog/2012/11/15/connecting-to-ec2-from-chromes-secure-shell-using-only-a-pem-file/
Help please.
If you're doing this on your Chromebook, you should have developer mode enabled so that you can enter the console and execute Linux commands. Once developer mode is enabled, enter the console with Ctrl+Alt+T and then type shell.
First you'll want to change the permissions of your .pem key, since the key won't be accepted if its permissions aren't restricted enough.
sudo chmod 400 myKeyPair.pem
Next you'll want to generate your own public key with ssh-keygen like mentioned in the other links.
ssh-keygen -y -f myKeyPair.pem > myKeyPair.pub
After this, you'll want to create a file with no extension to hold the private key.
touch myKeyPair
Then copy the contents of the .pem file into the extensionless file, myKeyPair:
cat myKeyPair.pem > myKeyPair
Next you'll want to open up the Secure Shell extension, which can be found here.
Enter the connection information for your machine and don't forget to specify the port number. When it comes to importing the key pair, select both the myKeyPair.pub and myKeyPair files using Ctrl.
That's it, you should be connected!