I've had to change the hostname on a Google Cloud Compute Engine instance that is running WHM, but it keeps resetting every now and then and on restart.
My /etc/hosts is currently as follows:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.156.0.7 cpanel.server-location-c.c.ascendant-hub-hidden.internal cpanel # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
My system information is:
Linux cpanel.xxx.com 3.10.0-1127.10.1.el7.x86_64 #1 SMP Wed Jun 3 14:28:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
My old hostname is something like:
cpanel.xxx.com
I want my new hostname to become:
brain.xxx.com
Even when I change it from WHM using their Change Hostname feature, it keeps resetting.
Is there a cleaner method than setting a crontab?
Unfortunately, you're not able to change a custom hostname after you've created the VM instance. Have a look at the documentation Creating a VM instance with a custom hostname:
You can create a VM with a custom hostname by specifying any fully
qualified DNS name.
and at the section Limitations:
You cannot change a custom hostname after you have created the VM.
To change this behavior, you can try filing a feature request at the Google Issue Tracker under this component.
UPDATE In addition, have a look at the documentation Storing and retrieving instance metadata, section Default metadata keys:
Compute Engine defines a set of default metadata entries that provide
information about your instance or project. Default metadata is always
defined and set by the server. You can't manually edit any of these
metadata pairs.
and hostname is one of the default metadata entries, so it cannot be changed manually.
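You can confirm this from inside the VM, since the hostname is served by the metadata server:
# Prints the hostname stored in instance metadata
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/hostname"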
UPDATE 2 As a possible workaround, you can use a startup script or another solution to change the hostname every time the system restarts; otherwise it will automatically get re-synced with the metadata server on every reboot. For example, I applied this startup script via Custom metadata:
Key: startup-script
Value: #! /bin/bash
hostname changed-host-name
then restarted the VM instance, and it works for me:
changed-host-name:~$ hostname
changed-host-name
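For reference, the same metadata can be set from the command line with gcloud; the instance name below is a placeholder:
# Attach the startup script to an existing instance
gcloud compute instances add-metadata <instance_name> \
    --metadata startup-script='#! /bin/bash
hostname changed-host-name'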
These are a few ways to change your hostname:
One way is to edit /etc/hostname directly - just replace the file's content with your new hostname.
The other way is to use hostnamectl set-hostname <your new hostname>, which changes the /etc/hostname file for you.
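A minimal sketch of both, using the hostname from the question:
# Way 1: overwrite the file directly
echo "brain.xxx.com" | sudo tee /etc/hostname
# Way 2: let hostnamectl rewrite /etc/hostname for you
sudo hostnamectl set-hostname brain.xxx.com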
But I think your problem is that Google keeps overwriting this data, not only when you reboot the system but also while your VM is running, so the solutions above won't solve your issue.
Solution:
Thankfully Google Cloud Platform allows you to have a custom hostname, but you have to define it when creating the new virtual instance. Check out this GCP document.
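With gcloud, the custom hostname is passed at creation time via the --hostname flag (instance name is a placeholder, other flags omitted):
# Custom hostnames can only be set when the instance is created
gcloud compute instances create <instance_name> --hostname=brain.xxx.com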
Related
I have a Google VM instance that has no external IP address assigned. I intend to establish an SSH connection through PyCharm installed on my local machine (running macOS).
This can be done in a terminal through a gcloud IAP tunnel:
gcloud compute ssh <instance_name> --tunnel-through-iap
The entry added to ~/.ssh/config for the instance is as follows:
Host compute.<instance_id>
HostName compute.<instance_id>
IdentityFile /Users/<user_name>/.ssh/google_compute_engine
CheckHostIP no
HostKeyAlias compute.<instance_id>
IdentitiesOnly yes
StrictHostKeyChecking yes
UserKnownHostsFile /Users/<user_name>/.ssh/google_compute_known_hosts
ProxyCommand /Users/<user_name>/miniconda3/bin/python3 -S /Users/<user_name>/google-cloud-sdk/lib/gcloud.py beta compute start-iap-tunnel <instance_name> %p --listen-on-stdin --project=<project_name> --zone=us-central1-a --verbosity=warning
ProxyUseFdpass no
User <user_name>
With VS Code's Remote-SSH plugin, this setting can be used directly to establish an SSH connection with no problem (example).
However, I have difficulty setting up the connection via PyCharm. The SSH Configurations tab takes:
- Host: compute.<instance_id>
- User name: compute.<instance_id>
- Port: 22
- Authentication type: key pair
- Private key file: path to ~/.ssh/google_compute_engine
and throws an exception for Host not being in the correct format.
If I try the internal IP address of the VM instance as host, the connection times out.
Is there a plugin similar to Remote-SSH in VS Code for PyCharm that can work properly with an IAP-tunnel? Or any other way this can be set up without exposing or assigning an External IP to the VM instance?
I know it's been a while, but I was just working on the same thing. I used the same config entry in ~/.ssh/config, but PyCharm does some checks to make sure the top-level Host value is valid (even though it isn't being used). I replaced it with something that would pass their validation checks but that I knew I'd never actually use (to avoid potential conflicts).
For example, you can update to this:
Host mahmoud.local
HostName compute.<instance_id>
IdentityFile /Users/<user_name>/.ssh/google_compute_engine
CheckHostIP no
HostKeyAlias compute.<instance_id>
IdentitiesOnly yes
StrictHostKeyChecking yes
UserKnownHostsFile /Users/<user_name>/.ssh/google_compute_known_hosts
ProxyCommand /Users/<user_name>/miniconda3/bin/python3 -S /Users/<user_name>/google-cloud-sdk/lib/gcloud.py beta compute start-iap-tunnel <instance_name> %p --listen-on-stdin --project=<project_name> --zone=us-central1-a --verbosity=warning
ProxyUseFdpass no
User <user_name>
Then when you configure the SSH connection in PyCharm, you will want to use Host = mahmoud.local
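Before pointing PyCharm at it, you can sanity-check the alias from a terminal; this should open the IAP tunnel and print the remote machine's hostname:
ssh mahmoud.local hostname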
Yes, I also got it to work with the ~/.ssh/config host. At first I got a fingerprint error, but I turned off StrictHostKeyChecking and that solved it:
Host lukas-notebook-gpu
HostName compute.1234
IdentityFile /Users/lbatteau/.ssh/google_compute_engine
CheckHostIP no
HostKeyAlias compute.1234
IdentitiesOnly yes
StrictHostKeyChecking no
HashKnownHosts no
UserKnownHostsFile /Users/lbatteau/.ssh/google_compute_known_hosts
ProxyCommand /Users/lbatteau/.config/gcloud/virtenv/bin/python3 /Users/lbatteau/google-cloud-sdk/lib/gcloud.py compute start-iap-tunnel lukas-notebook-gpu %p --listen-on-stdin --project=myproject --zone=europe-west4-a --verbosity=warning
ProxyUseFdpass no
While registering a host to an Ambari server cluster, I am getting the following error:
"Host checks were skipped on 1 hosts that failed to register."
I'm trying to install HDP 2.5 on an AWS instance.
I have tried to follow the Hortonworks documentation:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-installation/content/set_the_hostname.html
I have added the public IP address and public hostname to the /etc/hosts file and changed the hostname in the /etc/hostname file on the server and on the host. I rebooted both, and the hostname got changed. Then I stopped iptables with
sudo service iptables stop
After doing everything, the host registration is still failing. Kindly help. I am stuck.
Background
From my experience with Ambari (Hortonworks) you have to explicitly set up your Hadoop nodes in each other's /etc/hosts file with the actual names/IPs that the Hadoop services will bind to. NOTE: hostnames should also be FQDNs - fully qualified domain names.
For example if you're setting up the hosts as:
node01.mydom.com (10.0.0.2)
node02.mydom.com (10.0.0.3)
node03.mydom.com (10.0.0.4)
These entries should be in all three servers' /etc/hosts files, and these should be the names used when referencing them within Ambari's installation/setup wizards.
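For instance, every node would carry the same entries in /etc/hosts (IPs and names taken from the example above):
10.0.0.2    node01.mydom.com
10.0.0.3    node02.mydom.com
10.0.0.4    node03.mydom.com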
If you do not pay special attention to this detail, Ambari's server will fail to find/manage any of the other nodes that you're telling it to manage.
Hostname of the ambari-agents
The other item to look at is the ambari-agents and which hostnames they think they're running as.
$ ps -eaf|grep ambari_agent
root 3282 1 0 Jul30 ? 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start --expected-hostname=node01.mydom.com
root 3290 3282 1 Jul30 ? 08:24:29 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start --expected-hostname=node01.mydom.com
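It can also help to confirm which Ambari server hostname each agent is configured to report to; a quick check, assuming the config path of a stock install:
# The [server] section records the Ambari server the agent registers with
grep -A2 '\[server\]' /etc/ambari-agent/conf/ambari-agent.ini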
Debugging further
In the screen where you're attempting to register the other nodes as agents, there's a full log of what's happening, and you can typically get the commands from this area and attempt to run them directly. I've done this on a number of occasions. The commands will often be python ... commands, which you can copy/paste from the logs and run on the Ambari server where you're attempting to run the install.
After installing Neo4j on my AWS EC2 instance, the following seems to indicate that the server is up:
# bin/neo4j console
Active database: graph.db
Directories in use:
home: /usr/local/share/neo4j-community-3.3.1
config: /usr/local/share/neo4j-community-3.3.1/conf
logs: /usr/local/share/neo4j-community-3.3.1/logs
plugins: /usr/local/share/neo4j-community-3.3.1/plugins
import: /usr/local/share/neo4j-community-3.3.1/import
data: /usr/local/share/neo4j-community-3.3.1/data
certificates: /usr/local/share/neo4j-community-3.3.1/certificates
run: /usr/local/share/neo4j-community-3.3.1/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended.
See the Neo4j manual.
2017-12-01 16:03:04.380+0000 INFO ======== Neo4j 3.3.1 ========
2017-12-01 16:03:04.447+0000 INFO Starting...
2017-12-01 16:03:05.986+0000 INFO Bolt enabled on 127.0.0.1:7687.
2017-12-01 16:03:11.206+0000 INFO Started.
2017-12-01 16:03:12.860+0000 INFO Remote interface available at http://localhost:7474/
At this point I am not able to connect. I have opened up ports 7474 and 7687, and I can access port 80, SSH into the instance, etc.
Is this a neo4j or aws problem?
Any help is appreciated.
Colin Goldberg
Set dbms.connectors.default_listen_address to 0.0.0.0, then only open the SSL port located on 7473 using Amazon's EC2 security groups. Don't use 7474 if you don't have to.
It looks like Neo4j is only listening on the localhost interface. If you run netstat -a | grep 7474 you want to see something like *:7474. If you see something like localhost:7474 then you won't be able to connect to the port from outside.
Take a look at Configuring Neo4j connectors. I believe you want dbms.connectors.default_listen_address set to 0.0.0.0.
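On a 3.3.x tarball install that means setting this line in conf/neo4j.conf (it ships commented out) and restarting Neo4j:
# conf/neo4j.conf - listen on all interfaces instead of just localhost
dbms.connectors.default_listen_address=0.0.0.0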
And now a warning - you are opening your Neo4j to the entire planet if you do this. That may be ok but it seems unlikely that this is what you want to do. The defaults are there for a reason - you don't want the entire planet being able to try to hack into your database. Use caution if you enable this.
I am following the instructions and am able to build and run AppRTC on my local Ubuntu machine.
I am trying to implement the same on AWS. I have added ports 8000 and 8080 to the instance security group. On AWS when I execute
/dev_appserver.py ./out/app_engine
I get the console message:
Starting API server at: http://localhost:45920
Starting module "default" running at: http://localhost:8080
Starting admin server at: http://localhost:8000
I check ec2...compute-1.amazonaws.com:8000 and ec2...compute-1.amazonaws.com:8080 and see nothing. Could you please point out what I am missing?
By default the AppRTC dev server is bound to localhost; you need to specify --host 0.0.0.0 in order to expose it outside.
So use "/home/usertest/google_appengine/dev_appserver.py ./out/app_engine --host 0.0.0.0" to make it reachable from outside the machine.
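If you also need the admin server on port 8000 reachable, dev_appserver.py takes a separate --admin_host flag as well (worth verifying against --help for your SDK version):
# Bind the default module (8080) and the admin server (8000) to all interfaces
/home/usertest/google_appengine/dev_appserver.py ./out/app_engine \
    --host 0.0.0.0 --admin_host 0.0.0.0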
I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local server, send a file download request to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step by step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.
We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically far apart as possible.
Remote (server) JMeter config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000.
server.rmi.localport=50000
Started the JMeter server. Make sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then we set up JMeter on our local client machine to run tests remotely on these instances:
Used the same version of JMeter as was running on the servers; installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file that can be found in the bin folder of the JMeter installation. The parameter remote_hosts needed to be set with the public DNS of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
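For example, a non-GUI run that fires the test plan on every host listed in remote_hosts looks like this (the .jmx name is a placeholder):
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter -n -t mytest.jmx -r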
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the parameter RMI_HOST_DEF, which the /usr/local/jmeter/bin/jmeter-server script includes when starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server’s external IP, and we could start the test.
When the server node attempted to return the result and tried to connect to the client, the server tried to connect to the external IP address of my local machine. But it threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH, and set up a reverse tunnel to port 60000 on the client.
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
The SSH connections to the servers kept dropping after staying idle for a little while. To prevent that from happening, we added a parameter to each of the SSH tunnel commands directing the client to send a null packet to the server after every 60 seconds of inactivity to keep the connection alive:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
HostName 54.x.x.x
Port 22
User ubuntu
ServerAliveInterval 60
RemoteForward 127.0.0.1:60000 127.0.0.1:60000
IdentityFile ~/.ssh/54-x-x-x.us-east.pem
IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.
)
I just went through this on OpenStack and found the same issues... no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need
All nodes to advertise their public IP - on AWS/OS this defaults to the private IP
Ingress rules for the RMI port, which defaults to 1099 - I use the default
Ingress rules for the RMI "local" port, which defaults to dynamic. Below I use 4001 for the client and 4000 for the servers. The port can be the same, but note the properties are different.
If you are using your workstation as the client you probably still need tunnels. Above, Archana Aggarwal has good tips for tunnels.
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity I've set java.rmi.server.hostname but left server.rmi.localport as dynamic:
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl
When you go for distributed testing using JMeter in AWS, I would suggest you use Docker, which will help you stand up the JMeter test infrastructure very quickly. This way you can also ensure that the same versions of Java and JMeter are installed in all the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server (they do not have to be exactly 1099 and 50000):
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
For the client:
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the containers in the AWS instances will have their own IP addresses in the Docker network, so the master and slaves cannot communicate otherwise. That is why we explicitly set this property.
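As a rough sketch, a server container could be started like this, publishing both ports and pinning the properties above (the image name is a placeholder for whatever JMeter image you build or pull, assumed to have jmeter-server on its PATH):
# Publish the RMI ports and pin the RMI hostname to the instance's address
docker run -d -p 1099:1099 -p 50000:50000 <your-jmeter-image> \
    jmeter-server -Jserver_port=1099 -Jserver.rmi.localport=50000 \
    -Djava.rmi.server.hostname=SERVER_IP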
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/