What keeps accessing Google Cloud metadata on my instance - google-cloud-platform

I have a Google Cloud compute instance running Ubuntu 18. We had Wireshark running to track down another problem, and we noticed that something accesses the metadata server every minute. Three requests every minute:
GET /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=False&timeout_sec=60&wait_for_change=True
GET /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=True&timeout_sec=60&wait_for_change=True
GET /computeMetadata/v1/?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=True&timeout_sec=77&wait_for_change=True
In all cases, Wireshark shows the source as the IP of my instance and the destination as 169.254.169.254, which is the Google metadata server.
None of the code we have written accesses this server. The first request makes me think some Google-specific software is accessing the metadata, but I haven't been able to prove that. What is worrisome is that the response to the third request contains SSH keys. Also, every minute seems excessive.
I see another post talking about scripts in /usr/share/google, but I don't have that directory. I do see that google-fluentd is installed, and I also see an installed snap for google-cloud-sdk. Could one of those be it? I don't recall installing them and, AFAIK, I am not using them, so if one of them is responsible, what is the harm in uninstalling it?

You do not have a problem to worry about. The metadata server is private to your instance. The Google VM guest environment software and Stackdriver (fluentd) are making requests to the metadata server to get credentials, detect changes (new SSH keys), set the clock, etc.
The IP address 169.254.169.254 is an IPv4 Link Local Address. Only your VM has a route to that network.
Compute Engine Guest Environment
Do not attempt to uninstall the Guest Environment. You can remove Stackdriver, but I do not recommend that. Stackdriver provides logging and monitoring features that are very useful.
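If you want to confirm for yourself which processes are behind these requests, a quick check from inside the VM looks roughly like this. This is a sketch: the google-guest-agent and google-fluentd service names are assumptions and vary between image versions (older guest environments use google-network-daemon and google-clock-skew-daemon instead).
# See which local processes currently hold connections to the metadata server
sudo ss -tnp dst 169.254.169.254
# Check the guest environment and Stackdriver logging agent services
systemctl status google-guest-agent google-fluentd --no-pager
# Query the metadata server yourself; it only answers requests that carry this
# header, and 169.254.169.254 is never routed outside the VM
curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/name"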

Related

Cannot connect to vSphere ESXi 7 with the Web Client

I am installing VMware vSphere ESXi 7.0.2, but I cannot use the web client (http://<ip_address>/ui).
When I installed it the first time, I could connect with https://<IP_address> (it redirects to https://<IP_address>/ui) and could create VMs. But I found I could not use some of the SSDs/HDDs, so I re-installed ESXi after creating the RAID partitions.
The re-install looked OK, and I could see the DCUI and set the IP, DNS, etc. After everything was set, I tried to use https://<IP_address>, but it timed out. (I checked several things and found that ping does not work either.)
I restarted the server and ping was OK again, but when I try to connect with https://<IP_address>, ping starts returning "Destination net unreachable". (I confirmed this with the "-t" option.)
I thought it was the firewall settings, so I changed "--default-action" and "--enabled", but it still does not work. Just in case, I stopped using the RAID disks and re-installed again (the same as the first installation), but got the same results.
There's likely still a networking-related misconfiguration. Use DCUI to verify IP/subnet mask/gateway/VLAN tag (if necessary) and that the appropriate NIC has been configured.
If those are set correctly, the DCUI also has some built-in testing options which allow you to do outbound ping testing. By default it will check 3 hosts, including the gateway and usually two DNS servers, but those can be changed to other targets.
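If you can enable SSH or the ESXi Shell on the host, you can also check the firewall state and basic connectivity from the command line. A rough sketch (the gateway address is a placeholder):
# Show whether the ESXi firewall is enabled and what its default action is
esxcli network firewall get
# List VMkernel interfaces and confirm the management IP and netmask
esxcli network ip interface ipv4 get
# Test connectivity from the host's VMkernel stack to the gateway
vmkping <gateway_IP>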

My SSH session into my VM Cloud is suddenly lagging

Every day I log into an SSH session on a Google Cloud VM I maintain (Debian).
Since about a week ago, I've noticed the session lagging as I type into the VM or do anything else. I mostly log into this VM to check the log files of scheduled scripts, and even "cat script.log", which used to take less than 2 seconds, now takes at least 5 to 7 seconds to load the log text.
Pinging different websites gives me a reasonable 10-15 ms, and I'm pretty sure it's not my local connection either; everything else works fine on my local computer.
A warning has started to appear in my session, saying:
"Please consider adding the IAP-secured Tunnel User IAM role to start using Cloud IAP for TCP forwarding for better performance. Learn more Dismiss"
I've already configured the IAP-secured tunnel for my account, which is the owner account of the GCP project.
Another coworker of mine is able to access the VM without any performance issues whatsoever.
In my opinion your issue is with the ISP; for some reason the SSH sessions are lagging.
That's why even other computers using your home ISP lag in SSH sessions too. If a firewall rule were interfering, you wouldn't be able to connect at all.
You may try resetting all the network hardware in your home, and if that doesn't help,
run the tracert command in a Windows shell, then contact your ISP and pass along your findings. It's possible it's something on their end (and if not, maybe their upstream ISP, etc.).
To solve the problem you need to add the "IAP-secured Tunnel User" role at the project level in IAM for that user. See the instructions in a blog I wrote about this. That should solve your problem.
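If you prefer doing this from the command line, granting the role and then tunnelling SSH through IAP looks roughly like this (a sketch; the project, user, instance and zone names are placeholders):
# Grant the IAP-secured Tunnel User role at the project level
gcloud projects add-iam-policy-binding my-project --member="user:someone@example.com" --role="roles/iap.tunnelResourceAccessor"
# SSH to the VM through the IAP tunnel instead of its external IP
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap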

How to SSH port forward into a server to access a MySQL host for local work on a Django web app and Jupyter notebook?

I'm unfamiliar with this terrain, so if anyone can guide me in a step-by-step manner it would really help. My MySQL database sits on an AWS host X, "ec2-xxx-xxx-xxx-xx.compute-1.amazonaws.com". It is blocked from access by individual local machines and is usually accessed from another working server Y, "ec2-yy-yyy-yyy-yy.compute-1.amazonaws.com", through port '3306'. It is inconvenient to access this via a terminal SSH session every time and, while scripts run, it's hard to prototype or build an elaborate app. I'd like to set up an SSH tunnel from my local machine to server Y so that I can access MySQL host X from my local machine, to run queries from my locally deployed Jupyter notebook as well as a local work-in-progress Django web app.
The reason I ask for something more step-by-step is that I also have to port forward to another server hosting a Redis database, which again is accessible through a specific server only, so I'll be able to carry the solution from here over to there too. I'm willing to go into chat as well if needed, but I need to resolve this rather quickly. Thanks!
PS: I've tried many guides off the internet, but nothing has worked; it's become clear to me that I'm missing some foundational understanding or pathway. That's why I'm here, trying to start from the ground up.
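For reference, the general shape of what the question describes is a local port forward through the jump server Y. A minimal sketch, assuming key-based SSH access to Y (the hostnames are the redacted ones from the question; the "ubuntu" user and local port 3307 are placeholders):
# Forward local port 3307 through server Y to MySQL (port 3306) on host X
ssh -N -L 3307:ec2-xxx-xxx-xxx-xx.compute-1.amazonaws.com:3306 ubuntu@ec2-yy-yyy-yyy-yy.compute-1.amazonaws.com
# While the tunnel is up, the local Django app and Jupyter notebook connect to
# MySQL at host 127.0.0.1 and port 3307 with the usual database credentials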

VMware vCenter Server 5.5 Single Sign-On install finds wrong IP address for FQDN

I am migrating my vCenter Server 5.5 to a new server (the databases have already been moved to a new SQL server and all is OK on the existing vCenter Server 5.5 implementation). When I begin the simple install process on the new vCenter Server host, the Single Sign-On component presents me with 10.10.10.117 as the IP address of the FQDN file01.xxxxxxxxx.com. This is the iSCSI interface address. I need it to use 10.1.1.17, the address of the production NIC that the ESXi 5.5 hosts will be communicating with. I have already changed the binding order of the NICs and flushed the DNS cache. I also added file01.xxxxxxxx.com with the proper IP address to the hosts file, and also file01 to the hosts file. Still, during the install, 10.10.10.117 is discovered. Thanks in advance! Babak C.
Just to get a quick clarification...are you freshly installing vCenter 5.5? Or are you migrating an existing vCenter server to a new host and using the update utility to upgrade? I am assuming you are doing a fresh install based on your details about the SQL server and SSO. Here is my suggestion, in case it is a fresh install.
We had a similar problem with 5.5 on a new install, where the IP address discovered during the actual vCenter Server install was that of the public-facing NIC, which we never use for management traffic (it's for internet access on the vCenter server, for Update Manager, etc.).
The strange thing is that there had NEVER been an entry in ANY of our DNS servers for that interface. So, after looking into it a little, I started thinking the IP returned during install was not a DNS result at all. Rather, it was most likely simply gathered from the interfaces on the server based on binding order (e.g. which NIC has the default gateway).
In order to avoid having to uninstall and clean up a major mess if the install completed wrong, we stopped and got in touch with VMware support. They suggested we clear all of the temporary files, both in the standard temporary folder on Windows and under /ApplicationData/vmware/xxx, where 'xxx' is whatever product is giving you trouble and has NOT been fully installed (e.g. you started the install and noticed the incorrect IP, so you terminated the installer, and there are metadata and cached files remaining from the partially run install).
Basically, what we had to do was clear the temporary files and then make sure the NIC binding priority was correct (check in Network Adapters | (press Alt) | Advanced Settings). Make sure the correct bindings are checked (e.g. if you don't use IPv6 on the private network, clear it) and make sure that the Windows network is at the top of the priority list on the second pane of the Advanced Settings. This helps tremendously with SSO by making sure the Windows network stack is queried first when you are signing in and SSO must submit a Kerberos ticket to the AD DC for validation.
It is possible that once you delete the partial install files and temporary files and fix the network settings (it would probably be a good idea to reboot as well), the next time you run the installer you will have success.
I will try to check this post later to see if it helped you at all... or if I just succeeded in making your life even more difficult (which I certainly hope not!) :)
One more thing: prior to launching the installer, open a PowerShell session, run ipconfig /flushdns, and then ping the hostname of your vCenter server to get it back into the DNS cache. You should also perform the following:
nslookup
> {your vcenter server IP address}
(make sure the resulting hostname is correct; this ensures your PTRs and reverse DNS are working correctly. vCenter relies HEAVILY on accurate reverse DNS. Then do the following lookup for forward DNS:)
> {your vcenter server FQDN}
Hope it helps. Best of luck my friend!
SIETEC

Cache data on Multiple Hosts in AppFabric

Let me first explain that I am very new when it comes to using AppFabric to improve the responsiveness of an application. I am trying to configure a server cluster with 2 nodes, using the XML provider over a network shared location.
My requirement is that the cached data should exist on both hosts, so that if one host is down the other host in the cluster can serve the request and provide the cached data. As I said, I have 2 hosts in my cluster and one of them is defined as the lead host. When I save data in the cache I cannot see the data on both hosts (I am not sure whether there is a specific command to see the data on a specific host). So what I want to test is stopping one of the cache hosts and checking whether I can still get the data from the second cache host.
thanks in advance
-Nitin
What you're talking about here is High Availability. To enable this, you'll need to be running Windows Server Enterprise Edition - if you're on Standard Edition then you just can't do it. You also really need a minimum of three hosts, so that if one goes down there are still two copies of your cached data to provide failover. If you can meet these requirements then the only extra step to create a highly-available cache is to set the Secondaries flag when you call new-cache e.g.
new-cache myHACache -Secondaries 1
There's no programmatic way to query what data is held on a specific host, because you only ever address the logical cache, not an individual physical host.
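To watch the failover behaviour the question describes, the AppFabric caching administration cmdlets can at least show which hosts are up before and after you stop one of them. A sketch, assuming the caching administration feature is installed on the host (the cache name reuses the myHACache example above):
# Load the AppFabric caching administration cmdlets and connect to the cluster
Import-Module DistributedCacheAdministration
Use-CacheCluster
# List all cache hosts in the cluster and whether each is UP or DOWN
Get-CacheHost
# Item counts are reported per logical cache, not per physical host
Get-CacheStatistics -CacheName myHACache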
In our experience, using SQL authentication to the database does not work; it's clearly stated that only the Integrated Security option is supported. We also faced issues with the service running under Integrated Security, since our SQL cluster was running under a domain account while AppFabric needs to run as "Network Service", and we couldn't successfully connect to the SQL cluster from the AppFabric service.
This was a painful experience for us, and I hope AppFabric caching improves the way it reports error messages and error codes, and also lets us decide how we want to connect to SQL. It's kind of stupid having to go through the pain of "has to run as Network Service" and "no SQL authentication".