What is the AWS Red Hat root password? - amazon-web-services

I am new to AWS. I launched a Red Hat instance on AWS with the free tier and logged in with an SSH client.
My login looks like this: ec2-user@ec.....bla.com
That means I am logged in as ec2-user. When I try to start a service inside the instance, it asks me for the root password.
Can anyone tell me what the root password is? I couldn't figure this out yet.
Here is an example:
[ec2-user@ip-172--my-aws-ip---34 ~]$ systemctl start docker.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to start 'docker.service'.
Authenticating as: root
Password:

There is no root password; on AWS-provided images the ec2-user account has passwordless sudo access instead. Use sudo su - to switch to root and then run the command above, or run it directly with sudo: sudo systemctl start docker.service
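For example, either of the following avoids the authentication prompt by running the command with root privileges:

# Option 1: run the single command with elevated privileges
sudo systemctl start docker.service

# Option 2: switch to a root shell first, then run it
sudo su -
systemctl start docker.service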

Related

Unable to view the Apache test page

I'm trying to install LAMP on my EC2 instance. I have followed all the steps in this guide
( https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html ) but I am unable to see the test page when I open my public DNS name. I checked the service by typing sudo systemctl is-enabled httpd and it comes up as enabled. My port 80 is also open. Please help!
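One thing worth checking: is-enabled only reports whether the unit starts at boot, not whether it is running right now. A quick troubleshooting sketch (assuming Amazon Linux 2 with Apache installed as httpd, per the linked guide):

# check whether httpd is actually running at the moment
sudo systemctl status httpd
# start it now if it is inactive
sudo systemctl start httpd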

reusing the salt states in the AMI image

I have several Salt states (base and pillars) already written and stored in Amazon S3, and I want to re-use those salt states instead of writing them again. I want to create an AMI image using Packer and apply the salt states downloaded from S3 to the Packer builder EC2 instance. Besides the salt-minion installed on the CentOS 7 machine, I have installed the salt-master service as well, and started both salt-minion and salt-master with the following commands.
cat > /etc/salt/minion.d/minion_id.conf <<'EOT'
id: ${host}  # salt-minion id
EOT
Generate the name of the master to connect to
cat > /etc/salt/minion.d/master_name.conf <<'EOT'
master: localhost
EOT
systemctl enable salt-minion
systemctl start salt-minion
systemctl enable salt-master
systemctl start salt-master
When running the below command it doesn't list any minions:
salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
Rejected Keys:
So salt 'localhost-*' state.highstate
fails with errors:
"No minions matched the target. No command was sent, no jid was assigned.
ERROR: No return received"
This is because no minion ID has been registered with salt-key.
Does anybody have any idea why the minion key is not showing up in salt-key, and how I can resolve this so that the existing salt states downloaded from S3 will work in the AMI image?
Regards
Pradeep
What could be happening is that your minions can't find (resolve via DNS) the master host named salt.
What you could do is add the IP of your master to your minions' /etc/salt/minion, something like this:
master: 10.0.0.1
Replace 10.0.0.1 with the IP of your master.
Then restart your minion and check the master again for key requests.
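A minimal sketch of that fix plus the key acceptance step (assuming, as in the question, that the master runs on the same host, so the minion can simply point at 127.0.0.1):

# point the minion at the local master and restart it
echo 'master: 127.0.0.1' > /etc/salt/minion.d/master_name.conf
systemctl restart salt-minion
# after a few seconds the minion's key should appear under Unaccepted Keys
salt-key -L
# accept all pending keys without prompting
salt-key -A -y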

Docker SSH login fails remotely

I've created a Docker container on an AWS server which runs an SSH service.
I relied on the following example: https://docs.docker.com/engine/examples/running_ssh_service/ and added my own logic to the Dockerfile.
When trying to log in to the container remotely, I get the password prompt, but the password I set for the SSH user does not work. When trying the exact same password with a local SSH connection (from within the AWS server to 127.0.0.1 -p exported_SSH_port), it works perfectly.
Any ideas?
There's a little bug in the Docker docs:
You should change
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
To
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
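For context, a trimmed sketch of where that line sits in the Dockerfile from the linked example (the base image and test password are taken from that example; the leading # in the search pattern is what makes the sed match the commented-out stock line):

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
# corrected line: note the # at the start of the search pattern
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]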

How to create ftp (vsftpd) in google cloud compute engine?

How do I create an FTP server in Google Cloud Compute Engine? I can connect via SFTP without any issues, but my company is using software to connect via FTP to download an XML file from the server. Unfortunately that software doesn't have SFTP connection facilities.
I saw lots of examples on the internet, but they connect via SFTP, not FTP.
Any ideas or tutorials?
I found a way to do this. Please advise whether there are any risks.
apt-get install vsftpd libpam-pwdfile
nano /etc/vsftpd.conf
And inside the vsftpd.conf config file:
# vim /etc/vsftpd.conf
listen=YES
listen_ipv6=NO
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
nopriv_user=vsftpd
chroot_local_user=YES
allow_writeable_chroot=yes
guest_username=vsftpd
virtual_use_local_privs=YES
guest_enable=YES
user_sub_token=$USER
local_root=/var/www/$USER
hide_ids=YES
listen_address=0.0.0.0
pasv_min_port=12000
pasv_max_port=12100
pasv_address=888.888.888.888 # My server IP
listen_port=211
Next, edit the PAM configuration for vsftpd (/etc/pam.d/vsftpd): remove everything from the file and add these lines instead:
auth required pam_pwdfile.so pwdfile /etc/ftpd.passwd
account required pam_permit.so
Create the main user that will be used by the virtual users to authenticate:
useradd --home /home/vsftpd --gid nogroup -m --shell /bin/false vsftpd
Once that is done we can create our users/passwords file.
htpasswd -cd /etc/ftpd.passwd helloftp
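A note on the htpasswd flags: -c creates (or overwrites) the password file and -d hashes with crypt(), which pam_pwdfile can verify. To add further virtual users later, drop the -c so the existing file is kept:

# add a second virtual user to the existing password file (the username is just an example)
htpasswd -d /etc/ftpd.passwd anotherftpuser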
Next, add the directories for the users, since vsftpd will not create them automatically. Because of local_root=/var/www/$USER above, the directory name must match the virtual user name:
mkdir /var/www/helloftp
chown vsftpd:nogroup /var/www/helloftp
chmod +w /var/www/helloftp
Finally, start the vsftp daemon and set it to automatically start on system boot.
systemctl start vsftpd && systemctl enable vsftpd
Check the status to make sure the service is started:
systemctl status vsftpd
● vsftpd.service - vsftpd FTP server
Loaded: loaded (/lib/systemd/system/vsftpd.service; enabled)
Active: active (running) since Sat 2016-12-03 11:07:30 CST; 23min ago
Main PID: 5316 (vsftpd)
CGroup: /system.slice/vsftpd.service
├─5316 /usr/sbin/vsftpd /etc/vsftpd.conf
├─5455 /usr/sbin/vsftpd /etc/vsftpd.conf
└─5457 /usr/sbin/vsftpd /etc/vsftpd.conf
Finally, add firewall rules in the cloud console to allow access.
Later I changed listen_address from 0.0.0.0 to my server IP for more restriction.
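A hedged sketch of the matching firewall rule using the gcloud CLI (the rule name is arbitrary; the ports correspond to listen_port=211 and the passive range 12000-12100 configured above):

gcloud compute firewall-rules create allow-vsftpd \
    --allow tcp:211,tcp:12000-12100 \
    --source-ranges 0.0.0.0/0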
Yes, it is possible to host an FTP server on Google Cloud. In fact, I wrote an in-depth blog post about how to set up an FTP server on Google Cloud.
If your VM is based on Linux, then you have to use an application like vsftpd to set up an FTP server.
Here are the steps:
Step 1: Deploy a Virtual Instance on Google Cloud
Step 2: Open SSH terminal
Step 3: Installing VSFTPD
Step 4: Create a User
Step 5: Configure vsftpd.conf file
Step 6: Preparing an FTP Directory
Step 7: FTP/S or FTP over SSL setup (optional)
Step 8: Opening Ports in Google Cloud Firewall
Step 9: Test and Connect
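As a quick sanity check for the last step, you can connect from a client machine with a command-line FTP client in passive mode (the IP below is a documentation placeholder; use your VM's external address):

# -p forces passive mode, which is what the passive port range in the firewall is for
ftp -p 203.0.113.10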

Setting up passwordless ssh failed for all the HAWQ hosts

We have 3 nodes and are trying to set up HDFS and Pivotal HAWQ with Ambari. I have already enabled passwordless SSH for all 3 machines, but when I start the HAWQ service I get the error "Setting up passwordless ssh failed for all the HAWQ hosts". Please help me resolve this issue.
On all of your hosts, edit your /etc/ssh/sshd_config file and change "PasswordAuthentication no" to "PasswordAuthentication yes". This can be done with sed too.
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
Then restart sshd on all of the hosts:
sudo /etc/init.d/sshd restart
Now you can proceed with the installation of HAWQ. The installation uses a command called gpssh-exkeys. This process uses password authentication to communicate with the hosts so that it can create and exchange keys for the gpadmin account. Once the keys have been exchanged, the gpadmin account no longer needs password authentication.
Also, after the installation is complete, you can disable password authentication again if you like:
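A sketch of that revert, mirroring the sed command above (run on every host once the HAWQ installation has finished):

sudo sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo /etc/init.d/sshd restart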
Lastly, I've asked the PM for HDB at Pivotal to enhance Ambari to do these steps for you automatically. There is a similar process for disabling iptables during the installation of Hadoop, so this would be like that: Ambari would enable password authentication, install HDB, and then disable password authentication.