Change hostname permanently in Google Compute Engine instance after reboot - google-cloud-platform

I've created an instance in Google Compute Engine running CentOS, then installed cPanel. My problem is with WHM/cPanel: it needs the hostname to be a FQDN, specifically for cPanel updates, or they will fail.
My problem is that after changing the hostname, the instance reverts to the old hostname whenever I reboot the operating system or reset/stop/start the instance.
I've checked most of the existing questions and tried most of the solutions with no luck; the hostname keeps changing after a reboot. I've tried all the methods below and more:
create sh script in:
/etc/dhcp/dhclient-exit-hooks.d/
change hostname in
/etc/hostname
edit file
/etc/dhclient.conf
then add inside it, for my network interface:
supersede host-name "host.domain.com"
in crontab, add to the end:
@reboot hostname="host.domain.com"; sed -i "s/.*Google.*//" /etc/hosts; hostname "$hostname"
But after reboot, the hostname changes back to the instance name.
Is there any other workaround to permanently change my hostname so that it survives a reboot?
Thanks

You could create a similar crontab entry, but instead of the line in your post, use hostnamectl to set the hostname on start-up.
I've tested this with Google's CentOS 7 and Debian 9 images and it works for both. However, I found that with CentOS I had to add a delay before the command's execution (see below).
So for example, open crontab:
sudo crontab -e
Then enter this line for CentOS:
@reboot sleep 15 && hostnamectl set-hostname YOUR_HOSTNAME
For Debian this worked:
@reboot hostnamectl set-hostname YOUR_HOSTNAME
I didn't experiment much with the CentOS timings (you may be able to use a delay shorter than 15 seconds), but in my experience, using @reboot alone didn't initiate the change on start-up.
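For reference, the complete crontab entry would look like the sketch below; host.domain.com is a placeholder, and the log redirection is my own addition so you can check whether the job actually ran:
# run once at boot; the sleep gives DHCP time to finish on CentOS
@reboot sleep 15 && /usr/bin/hostnamectl set-hostname host.domain.com >> /var/log/set-hostname.log 2>&1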

The problem of the hostname changing automatically (without a restart) can be solved by creating an executable .sh file in /etc/dhcp/dhclient-exit-hooks.d/. For example, below we create a file called set_my_hostname.sh (you can give the script any name):
cd /etc/dhcp/dhclient-exit-hooks.d/
nano set_my_hostname.sh
then inside the file put:
hostname hosting.domain.com
save the file and make it executable:
chmod +x set_my_hostname.sh
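If you want the hook to be a bit more defensive, here's a sketch that only resets the hostname when a lease is actually acquired or renewed; it assumes dhclient exposes its usual $reason variable to exit hooks:
# /etc/dhcp/dhclient-exit-hooks.d/set_my_hostname.sh (sketch)
case "$reason" in
  BOUND|RENEW|REBIND|REBOOT)
    hostname hosting.domain.com
    ;;
esac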
Then, to fix the hostname changing back after a reboot, create a cron job that runs at reboot with a delay (thanks to neilH for his help):
sudo env EDITOR=nano crontab -e
then add this line:
@reboot sleep 20 && hostnamectl set-hostname "hosting.domain.com"

This worked for me; I wanted my hostname to be a subdomain, i.e. server1.example.com:
1: Change the /etc/hosts file; add:
127.0.0.1 localhost.localdomain localhost
192.168.1.100 server1.example.com server1
2: Change the /etc/hostname file (if it doesn't exist, create it):
add just the subdomain part, i.e. server1
3: Change /etc/dhcp/dhclient.conf add:
supersede host-name "server1.example.com";
4: Create a cron job: run sudo crontab -e, then add:
@reboot hostnamectl set-hostname server1.example.com
5: sudo reboot (a sketch bundling steps 1-4 into one script follows below)
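For convenience, here's a sketch that bundles steps 1-4 into one go; it reuses the example names and IP from above (substitute your own), and note the appends are not idempotent, so running it twice duplicates the entries:
# steps 1-4 in one script (sketch)
sudo tee -a /etc/hosts >/dev/null <<'EOF'
127.0.0.1 localhost.localdomain localhost
192.168.1.100 server1.example.com server1
EOF
echo "server1" | sudo tee /etc/hostname
echo 'supersede host-name "server1.example.com";' | sudo tee -a /etc/dhcp/dhclient.conf >/dev/null
( sudo crontab -l 2>/dev/null; echo '@reboot hostnamectl set-hostname server1.example.com' ) | sudo crontab -
sudo reboot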

This worked for me in a GCE instance running Ubuntu 16.04:
1: Open /etc/hostname (sudo nano /etc/hostname) and change the hostname to the new one.
2: Open /etc/hosts (sudo nano /etc/hosts). The first line will probably be:
127.0.0.1 localhost
Add your new hostname to the end of the line, so it should look like this:
127.0.0.1 localhost <new_hostname>
3: Open /etc/rc.local (sudo nano /etc/rc.local). Before the line exit 0, add another line:
hostname <new_hostname>
4: That's it! The hostname has been changed permanently. You can either open a new bash shell by running bash or reboot the instance.
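Putting steps 1-3 together, the resulting /etc/rc.local would look something like this sketch (myhost stands in for your new hostname):
#!/bin/sh -e
# /etc/rc.local on Ubuntu 16.04 (sketch); runs at the end of boot
hostname myhost
exit 0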

Related

Why would varnish.service suddenly change its port? (From 80 to 6081)

I have a WordPress site with gunicorn and varnish running on an AWS instance.
This morning, the website gave a "502 Bad Gateway nginx" error.
Upon investigation, it looks like the varnish.service port was:
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
According to some notes, the port needs to be 80 and not 6081. Changing the port to 80 fixed the nginx error.
This issue seems to happen about once a year: the varnish.service port suddenly changes by itself and someone has to manually change it back to 80.
So my question is - why would varnish.service suddenly change its port? As far as I know, there were no updates or changes anywhere.
It depends on what file you're editing.
Make sure you're editing /etc/systemd/system/varnish.service. If that file isn't there, just run the following command:
sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/
When you're done editing the port, run the following two commands:
sudo systemctl daemon-reload
sudo systemctl restart varnish
See https://www.varnish-software.com/developers/tutorials/installing-varnish-ubuntu/#systemd-configuration for a detailed tutorial.
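As for the "why": a plausible culprit is a package upgrade restoring the stock unit file under /lib/systemd/system, which would silently undo any edit made there. A systemd drop-in override survives upgrades; here's a sketch using systemctl edit (the override path follows systemd's standard drop-in convention):
sudo systemctl edit varnish.service
# then, in the editor, add the following (the empty ExecStart= clears the stock command first):
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
followed by sudo systemctl restart varnish.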

Django redis docker: port is already allocated [duplicate]

When I run docker-compose up in my Docker project it fails with the following message:
Error starting userland proxy: listen tcp 0.0.0.0:3000: bind: address already in use
netstat -pna | grep 3000
shows this:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN -
I've already tried docker-compose down, but it doesn't help.
In your case some other process was using the port, and as indicated in the comments, sudo netstat -pna | grep 3000 helped you solve the problem.
In other cases (I've encountered this many times myself) it is mostly the same container running from some other instance. In that case docker ps was very helpful, as I often left the same containers running in other directories and then tried to run them again elsewhere with the same container names.
How docker ps helped me:
docker rm -f $(docker ps -aq) is a short command which I use to remove all containers.
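If you'd rather target only the offending container instead of removing everything, newer Docker releases support a publish filter on docker ps; a sketch, using port 3000 from the question:
docker ps --filter "publish=3000" --format '{{.ID}}\t{{.Names}}\t{{.Ports}}'
docker stop <container-id-from-the-output>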
This helped me:
docker-compose down # Stop container on current dir if there is a docker-compose.yml
docker rm -fv $(docker ps -aq) # Remove all containers
sudo lsof -i -P -n | grep <port number> # List who's using the port
and then:
kill -9 <process id> (macOS) or sudo kill <process id> (Linux).
Source: comment by user Rub21.
I had the same problem. I fixed this by stopping the Apache2 service on my host.
You can easily kill the process listening on the port with the one-liner below:
kill -9 $(lsof -t -i tcp:<port#>)
or for ubuntu:
sudo kill -9 `sudo lsof -t -i:8000`
Man page for lsof : https://man7.org/linux/man-pages/man8/lsof.8.html
-9 sends SIGKILL: a hard kill that terminates the process immediately, without giving it a chance to clean up.
(Not directly related, but possibly useful if you're chasing the port 5000 mystery): the culprit process ships with macOS Monterey.
Port 5000 is commonly used to serve local development servers. When updating to the latest macOS, I was unable to get Docker to bind to port 5000 because it was already in use. (You may see a message along the lines of "Port 5000 already in use.")
By running lsof -i :5000, I found out that the process using the port was named ControlCenter, a native macOS application. If this is happening to you, even if you brute-force kill the application, it will restart itself. On my laptop, lsof -i :5000 showed Control Center running as process id 433; I could kill it, but macOS kept restarting the process.
The process running on this port turns out to be an AirPlay server. You can deactivate it under
System Preferences › Sharing by unchecking AirPlay Receiver to release port 5000.
I had the same problem;
docker-compose down --rmi all (in the same directory where you run docker-compose up)
helps
UPD: CAUTION - this will also delete the local Docker images you've pulled (from a comment).
For Linux/Unix:
A simple way to find the offending process on Linux is the following command:
netstat -nlp | grep 8888
It'll show the process running on this port; then kill that process using its PID (look for the PID in the output row):
kill PID
In some cases it is critical to perform a more in-depth debugging to the problem before stopping a container or killing a process.
Consider following the checklist below:
1) Check your current Docker Compose environment
Run docker-compose ps. If the port is in use by another container, stop it with docker-compose stop <service-name-in-compose-file> or remove it by replacing stop with rm.
2) Check the containers running outside your current workspace
Run docker ps to see the list of all containers running on your host.
If you find the port is in use by another container, you can stop it with docker stop <container-id>.
(*) Because you're outside the scope of the originating compose environment, it is good practice to first use docker inspect to gather more information about the container you're about to stop.
3) Check if port is used by other processes running on the host
For example if the port is 6379 run:
$ sudo netstat -ltnp | grep ':6379'
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 915/redis-server 12
tcp6 0 0 ::1:6379 :::* LISTEN 915/redis-server 12
(*) You can also use the lsof command which is mainly used to retrieve information about files that are opened by various processes (I suggest running netstat before that).
So, in the case of the output above, the PID is 915. Now you can run:
$ ps j 915
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 915 915 915 ? -1 Ssl 123 0:11 /usr/bin/redis-server 127.0.0.1:6379
And see the ID of the parent process (PPID) and the execution command.
You can also run $ pstree -s <PID> to get a visual display of the process and its related processes.
In our case we can see that the process is probably a daemon (its PPID is 1). In that case consider running: A) $ cat /proc/<PID>/status to get more in-depth information about the process, like the number of threads it has spawned, its capabilities, etc.
B) $ systemctl status <PID> to see the systemd unit that caused the creation of the process. If the service is not critical, you can stop and disable it.
4) Restart Docker service
Run: sudo service docker restart.
5) You reached this point and..
Only if it's not placing your system at risk, consider restarting the server.
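As a quick triage before walking the whole checklist, a sketch like this covers steps 2 and 3 in one go; PORT is whatever the error reported, and ss is the modern replacement for netstat:
PORT=3000                                 # the port from the error message
docker ps --filter "publish=${PORT}"      # any container publishing it
sudo ss -ltnp "( sport = :${PORT} )"      # any host process listening on it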
In my case it was:
Error starting userland proxy: listen tcp 0.0.0.0:9000: bind: address already in use
and all I needed to do was turn off debug listening in PhpStorm.
Most probably this is because you are already running a web server on your host OS, so it conflicts with the web server that Docker is attempting to start.
So try this one-liner before trying anything else:
sudo service apache2 stop; sudo service nginx stop; sudo nginx -s stop;
I had Apache running on my Ubuntu machine. I used this command to stop it:
sudo /etc/init.d/apache2 stop
I was getting the error below when trying to launch a new container:
listen tcp 0.0.0.0:8080: bind: address already in use.
To check which process is running on port 8080, run below command:
netstat -tulnp | grep 8080
I got the output below:
[root@ip-112-x6x-2x-xxx.xxxxx.compute.internal (aws_main) ~]# netstat -tulnp | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 12749/java
Run:
kill -9 12749
Then try to relaunch the container; it should work.
If the Redis server was started as a service, it will restart itself after you use kill -9 <process_id> or sudo kill -9 `sudo lsof -t -i:<port_number>`. In that case you will need to stop the Redis service with the following command:
sudo service redis-server stop
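If you also want to keep it from coming back at the next boot, disabling the unit is the usual companion step (assuming a systemd-based distribution):
sudo systemctl stop redis-server      # stop it now
sudo systemctl disable redis-server   # don't start it again at boot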
I upgraded my docker this afternoon and ran into the same problem. I tried restarting docker but no luck.
Finally, I had to restart my computer and it worked. Definitely a bug.
Check docker-compose.yml; the port might be specified twice.
version: '3'
services:
  registry:
    image: mysql:5.7
    ports:
      - "3306:3306"            # <--- remove either this line or the next
      - "127.0.0.1:3306:3306"
Changing network_mode: "bridge" to "host" did it for me.
With this:
version: '2.2'
services:
  bind:
    image: sameersbn/bind:latest
    dns: 127.0.0.1
    ports:
      - 172.17.42.1:53:53/udp
      - 172.17.42.1:10000:10000
    volumes:
      - "/srv/docker/bind:/data"
    environment:
      - 'ROOT_PASSWORD=secret'
    network_mode: "host"
I ran into the same issue several times. Restarting Docker seems to do the trick.
A variation of @DmitrySandalov's answer: I had Tomcat/Java running on 8080, which needed to keep going. I looked at the docker-compose.yml file and altered the entry for 8080 to another port of my choosing.
nginx:
  build: nginx
  ports:
    #- '8080:80'   # <-- original entry
    - '8880:80'
    - '8443:443'
Worked perfectly. (The only wrinkle is the change will be wiped if I ever update the project, since it's coming from an external repo.)
First, find out which service is running on your specific port. In your case, port 3000 is already in use; on Windows:
netstat -aof | findstr :3000
Then stop the process running on that port. On Linux/macOS you can find it with:
lsof -i tcp:3000
I resolved the issue by restarting Docker.
It makes more sense to change the Docker container's port instead of shutting down other services that use port 80.
Just a side note if you have the same issue on Windows:
In my case the process in the way was grafana-server.exe, because I had first downloaded the binary version and double-clicked the executable; it now starts as a service under the SYSTEM user, which I cannot taskkill (no permission).
I had to go to the Windows Service Manager, search for the "Grafana" service, and stop it. After that, port 3000 was no longer occupied.
Hope that helps.
The one using port 8888 was Jupyter, and I had to change the Jupyter Notebook configuration file to run it on another port.
To list who is using that specific port:
sudo lsof -i -P -n | grep 8888
You can specify the port you want Jupyter to run on by uncommenting/editing the following line in ~/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.port = 9999
In case you don't have a jupyter_notebook_config.py, try running jupyter notebook --generate-config. See this for further details on Jupyter configuration.
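If you'd rather not edit the config file, the same setting can be overridden per launch on the command line:
jupyter notebook --port 9999    # one-off override of c.NotebookApp.port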
Before, it was running with: docker run -d --name oracle -p 1521:1521 -p 5500:5500 qa/oracle
I just changed the port: docker run -d --name oracle -p 1522:1522 -p 5500:5500 qa/oracle
It worked fine for me!
On my machine no PID was shown by netstat -tulpn for the in-use port (8080), so I could not kill it; killing the containers and restarting the computer did not work either. The sudo service docker restart command restarted Docker for me (Ubuntu), the port was no longer in use, and I am a happy chap and off to lunch.
Maybe it's too blunt, but it works for me: restart the Docker service itself.
sudo service docker restart
Hope it works for you too!
I ran the container on another port instead, e.g. 8082 :-)
I came across this problem. My simple solution was to remove MongoDB from the system.
Commands to remove MongoDB on Ubuntu:
sudo apt-get purge mongodb mongodb-clients mongodb-server mongodb-dev
sudo apt-get purge mongodb-10gen
sudo apt-get autoremove
Let me add one more case, because I had the same error and none of the solutions listed so far worked:
serv1:
  ...
  networks:
    privnet:
      ipv4_address: 10.10.100.2
...
serv2:
  ...
  # no IP assignment, no dependencies

networks:
  privnet:
    ipam:
      driver: default
      config:
        - subnet: 10.10.100.0/24
Depending on the init order, serv2 may be assigned the IP 10.10.100.2 before serv1 has started, so I just assign IPs manually to all containers to avoid the error (see the sketch below). Maybe there are other, more elegant ways.
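Concretely, the manual assignment is just an ipv4_address per service; 10.10.100.3 below is an arbitrary free address in the example subnet above:
serv2:
  networks:
    privnet:
      ipv4_address: 10.10.100.3   # pinned so IPAM can't hand it out twice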
I had the same problem, and it was resolved by stopping the Docker container:
sudo docker container stop <container-name>
I solved it with: sudo service redis-server stop

Where does puppet pull the hostname info to name the certs in the ssl directory?

When I spin up my AWS machine, the first thing I do is run hostnamectl set-hostname myhost.test.com, but when I then install and run Puppet, it pulls standard-1-ami.test.com as the cert name. standard-1-ami is the name of my AMI.
Where is it getting this name from on the OS?
I have this issue as well; it happens every time I make a new machine without setting the hostname in a userdata script. I have noticed that the initial hostname is cached somewhere in memory.
Here's how I fix it:
Hostname: new_host ; IP: 192.168.10.50 ; DomainName: inside.myhouse.com
hostnamectl set-hostname new_host
echo "192.168.10.50 new_host.inside.myhouse.com new_host" >> /etc/hosts
echo "new_host" > /etc/hostname
service network restart
These three places are where the hostname "lives" or can be retrieved from.
To validate my configs, I run these 3 commands:
$ hostname
new_host
$ hostname -f
new_host.inside.myhouse.com
$ hostname -i
192.168.10.50
Note that if your prompt is set to display the hostname, it may not change until you log back in. If the hostname and hostname -f commands return the right values, you can run Puppet and it should use the correct hostname.
BTW: I use Red Hat. YMMV.
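One caveat: if the agent already requested a certificate under the wrong name, fixing the hostname won't rename it. A sketch of the cleanup, assuming the pre-Puppet-6 CLI and default paths (both vary by version):
# on the puppet master: revoke and remove the wrongly named cert
sudo puppet cert clean standard-1-ami.test.com
# on the agent: drop the old SSL state and re-request under the new name
sudo rm -rf /var/lib/puppet/ssl
sudo puppet agent -t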

Unable to register AWS host to Ambari server

While registering a host to the Ambari server's cluster, I am getting the following error:
"Host checks were skipped on 1 hosts that failed to register."
I'm trying to install HDP 2.5 on an AWS instance.
I have tried to follow the Hortonworks documentation:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-installation/content/set_the_hostname.html
I have added the public IP address and public hostname to the /etc/hosts file and changed the host name in the /etc/hostname file on the server and on the host. I rebooted both and the hostname changed. Then I stopped iptables with:
sudo service iptables stop
After doing all of this, the host registration is still failing. Kindly help; I am stuck.
Background
From my experience with Ambari (Hortonworks), you have to explicitly set up your Hadoop nodes in each other's /etc/hosts files with the actual names/IPs that the Hadoop services will bind to. NOTE: hostnames should also be FQDNs, fully qualified domain names.
For example if you're setting up the hosts as:
node01.mydom.com (10.0.0.2)
node02.mydom.com (10.0.0.3)
node03.mydom.com (10.0.0.4)
These entries should be in all three servers' /etc/hosts files, and these should be the names used when referencing the nodes in Ambari's installation/setup wizards.
If you do not pay special attention to this detail, the Ambari server will fail to find/manage any of the other nodes you're telling it to manage. A sketch of the resulting /etc/hosts entries follows below.
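Concretely, every node's /etc/hosts would carry the same three entries (using the example names above):
10.0.0.2 node01.mydom.com node01
10.0.0.3 node02.mydom.com node02
10.0.0.4 node03.mydom.com node03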
hostname of ambari-agents
The other item to look at is the ambari-agents and what hostnames they think they're running as.
$ ps -eaf|grep ambari_agent
root 3282 1 0 Jul30 ? 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start --expected-hostname=node01.mydom.com
root 3290 3282 1 Jul30 ? 08:24:29 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start --expected-hostname=node01.mydom.com
Debugging further
In the screen where you're attempting to register the other nodes as agents, there's a full log of what's happening and you can typically get the commands from this area and attempt to run them directly. I've done this on a number of occasions. The commands will often be python ... commands which you can then copy/paste from the logs and run on the Ambari server where you're attempting to run the install.

Docker SSH login fails remotely

I've created a Docker container on an AWS server which runs an SSH service.
I relied on the following example: https://docs.docker.com/engine/examples/running_ssh_service/ and added my own logic to the Dockerfile.
When trying to log in to the container remotely, I get the password prompt, but the password I set for the SSH user does not work. When trying the exact same password over a local SSH connection (from within the AWS server to 127.0.0.1 -p exported_SSH_port), it works perfectly.
Any ideas?
There's a little bug in the Docker docs:
You should change
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
To
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
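After rebuilding the image, you can verify the effective setting without another login attempt; sshd -T prints the resolved server configuration (the container name below is a placeholder):
docker exec <your_container> sshd -T | grep -i permitrootlogin
# expected output: permitrootlogin yes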