What is the --trace-token option for gcloud used for? - google-cloud-platform

The help definition is not clear to me:
Token used to route traces of service requests for investigation of
issues.
Could you provide a simple example of how to use it?
I tried:
gcloud compute instances create vm3 --trace-token xyz123
I can find the string "vm3" in the logs, but not my token xyz123.
The only use of it seems to be in grep:
history | grep xyz123

The --trace-token flag is intended to be used by support agents when there is an error that is difficult to track down from the logs. The Google Cloud Platform Support agent provides a time-bound token, which expires after a specified time, and asks the user to run the command for the specific product in which the user is facing the issue. It then becomes easier for the support agent to trace the error by using that --trace-token.
For example:
A user faced an error while creating a Compute Engine instance and contacted the Google Cloud Platform Support team. The support agent inspected the logs and other resources but could not find the root cause of the issue. The agent then provided a --trace-token and asked the user to run the command below with it.
Provided token: abcdefgh
Command: gcloud compute instances create my-vm --trace-token abcdefgh
After the user runs the above command, the support agent can analyse the request in depth and find the error with the help of that --trace-token.
Please note that when the --trace-token flag is used, the content of the trace may include sensitive information such as auth tokens and the contents of any accessed files. Hence it should only be used for manual testing and not in production environments.
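Since --trace-token is a gcloud-wide flag, a minimal sketch of such a run might look like this (the token value and instance name are placeholders; adding the global --log-http flag is optional and only shown as one way to also print the outgoing requests locally):
# Token and VM name below are placeholder values for illustration only.
gcloud compute instances create my-vm \
    --trace-token abcdefgh \
    --log-http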

Related

Dataproc custom image: Cannot complete creation

For a project, I have to create a Dataproc cluster that uses one of the outdated image versions (for example, 1.3.94-debian10) that contain the vulnerabilities in the Apache Log4j 2 utility. The goal is to get the related alert (DATAPROC_IMAGE_OUTDATED), in order to check how SCC works (it is just for a test environment).
I tried to run the command:
gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10
but got the following message:
ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster
This makes sense, in order to protect the cluster.
I did some research and discovered that I will have to create a custom image with said version and generate the cluster from that. The thing is, I have tried to read the documentation and find some tutorial, but I still can't understand how to start or how to run the file generate_custom_image.py, for example, since I am not comfortable with Cloud Shell (I prefer the console).
Can someone help? Thank you
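In case it helps, here is a rough sketch of how the custom-image script is typically invoked, assuming the GoogleCloudDataproc/custom-images repository described in the documentation; every value below (image name, zone, bucket, customization script) is a placeholder, and the exact flags may differ between releases of the tooling:
# Clone the Dataproc custom-images tooling.
git clone https://github.com/GoogleCloudDataproc/custom-images.git
cd custom-images

# All values below are placeholders; customization.sh can be a near-empty
# script if no extra software needs to be baked into the image.
python generate_custom_image.py \
    --image-name=my-dataproc-1-3-94-image \
    --dataproc-version=1.3.94-debian10 \
    --customization-script=customization.sh \
    --zone=us-east1-b \
    --gcs-bucket=gs://my-staging-bucket
Once the image exists, the cluster would then be created from it with something like gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image=my-dataproc-1-3-94-image (an assumption based on the --image flag rather than a tested recipe), though it is unclear whether the tooling will accept an image version that has been blocked because of the Log4j CVE.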

`docker compose create ecs` without user input

I am looking for a way to run docker compose create ecs without having to manually select where it gets AWS credentials from (as it's being run from a build agent).
The following AWS blog shows it being used with a --from-env flag (which is exactly what I want); however, that flag doesn't seem to actually exist, either in the official docs or by trial and error. Is there something I am missing?
Apparently it's a known issue
https://github.com/docker/docker.github.io/issues/11845
You have to enable experimental support for the Docker CLI on Linux to create an ecs context :S
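A rough sketch of what that looks like, assuming a Docker release whose ECS integration supports non-interactive context creation (the context name is a placeholder, and --from-env reads the standard AWS_* environment variables):
# Enable experimental features for the Docker CLI (this overwrites any
# existing ~/.docker/config.json; merge by hand if you already have one).
mkdir -p ~/.docker
echo '{"experimental": "enabled"}' > ~/.docker/config.json

# Create the ECS context without the interactive credential prompt.
docker context create ecs myecscontext --from-env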

After being disconnected from a running job, how to show logs again in Google Cloud SDK Shell?

I'm running a job on Google's Machine Learning Engine. I issued this job from the Google Cloud SDK Shell on Windows. At some point, I closed my laptop and lost connection to Google Cloud. The job kept running on Google's servers in the meantime. Now that I have reopened my laptop and am connected to the internet again, the shell has output:
ERROR: (gcloud.ml-engine.jobs.submit.training) There was a problem refreshing your current auth tokens: Unable to find the server at www.googleapis.com
Please run:
$ gcloud auth login
to obtain new credentials.
So I ran that command. The browser opened, I clicked on my Google account and authenticated. Then I saw:
You are now logged in as [my Google e-mail address].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
I did that also, and then saw the output:
Updated property [core/project].
So everything seems to work. Online, in the Google Cloud Console, I can view the logs of my job while it is running. However, my question is, is it possible to get those logs/stdout to be printed in my shell again?
I guess you are looking for something like what is explained in this documentation page about ML Engine logging.
You can either use the logging service with your preferred filter, via gcloud beta logging read, or, in order to print ML Engine job logs in the console, you can use this ML Engine-specific command with the options and flags you need:
gcloud ml-engine jobs stream-logs
You can find the reference for that command in this other page.
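For example, a minimal sketch (the job name is a placeholder; listing the jobs first is just one way to recover it if you no longer have it in your shell history):
# Find the name of the job that is still running.
gcloud ml-engine jobs list

# Stream that job's logs back into the local shell.
gcloud ml-engine jobs stream-logs my_training_job_20180101_123456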

Command to create google cloud backend service fails - what am I doing wrong?

I am currently working through the Google Cloud "load balancing" code lab:
https://codelabs.developers.google.com/codelabs/cpo200-load-balancing
On page 4 of the lab, it requires me to run the following command in Cloud Shell, to create a backend service (for load balancing of a group of web server, i.e. HTTP, instances):
gcloud compute backend-services create \
guestbook-backend-service \
--http-health-checks guestbook-health-check
However, running this command results in the following error:
ERROR: (gcloud.compute.backend-services.create) Some requests did not succeed:
- Invalid value for field 'resource.loadBalancingScheme': 'EXTERNAL'.
Backend Service based Network Load Balancing is not yet supported.
Assuming that all the preceding steps in the code lab are correct (which I have no reason to suspect is not the case), this appears to be a bug in the code lab.
I have submitted a bug report for this; however, since I am not expecting any response to the bug report any time soon and I do want to continue with this lab, what command should I be running instead?
I presume there has been some sort of API change but the code lab has not caught up and the documentation does not appear to indicate any relevant changes.
I realize I could probably work out how to do this with the Cloud Console, but I would really like to learn the command line actions.
Does anyone have any ideas?
Thanks in advance!
And, as is the nature of these things, shortly after posting this I discovered the answer for myself...
The command should be:
gcloud compute backend-services create \
guestbook-backend-service \
--http-health-checks guestbook-health-check \
--global
It appears that what the error message is actually complaining about is that regional backend-services are not supported; they must be global.
Leaving aside the fact that the lab directions are inadequate, it would be nice if this were detailed in the documentation, but I guess we can't have everything...
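One follow-up note: once the backend service is global, later backend-services commands on it also need the --global flag. A rough sketch of attaching an instance group, where the group name and zone are placeholders rather than values from the lab:
# Attach an instance group to the global backend service.
gcloud compute backend-services add-backend guestbook-backend-service \
    --instance-group my-instance-group \
    --instance-group-zone us-central1-a \
    --global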

Pulling file from the Google Cloud server to local machine

Linux n00b here having trouble pulling a file from the server to my local Windows 7 professional 64 bit machine. I am using Wowza to stream live video and I am recording these live videos to my Google Cloud instance located here:
/usr/local/WowzaStreamingEngine/content/myStream.mp4
When I ssh:
gcutil --project="myprojectname" pull "my instance"
"/usr/local/WowzaStreamingEngine/content/myStream.mp4" "/folder1"
I receive a permission denied error. When I try saving another folder deep on my local machine, i.e. "/folder1/folder2", the error returned is file or directory not found. I've checked that I have write permissions set on my local Windows 7 machine, so I do not think it is a permissions error. Again, apologies for the n00b question, I've just been stuck here for hours.
Thx,
~Greg
Comment added 7/18:
I enter the following through ssh:
gcutil --project="Myproject" pull "instance-1" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "/content"
By entering this I'm expecting the file mystream.mp4 to be copied to my C:/content folder. The following is returned: Warning: Permanently added '107.178.218.8' (ECDSA) to the list of known hosts. Enter passphrase for key '/home/Greg/.ssh/google_compute_engine':
Here I enter the passphrase and the following error is returned: /content: Permission denied. I have write access set up on this folder. Thanks! – Greg
To answer the question about using Cygwin, I'm not familiar with Cygwin and I do not believe it was used in this instance. I ran these commands through the Google Cloud SDK shell which I installed per the directions found here: https://developers.google.com/compute/docs/gcutil/.
What I am doing:
After setting up my Google Cloud instance I open the Google Cloud SDK shell and enter the following:
gcutil --service_version="v1" --project="myproject" ssh --zone="us-central1-a" "instance-1"
I am then prompted for a passphrase, which I create, and then run the following:
curl http://metadata/computeMetadata/v1/instance/id -H "X-Google-Metadata-Request:True"
This provides the password I use to login to the Wowza live video streaming engine. All of this works beautifully, I can stream video and record the video to the following location: /usr/local/WowzaStreamingEngine/content/myStream.mp4
Next I attempt to save the .mp4 file to my local drive and that is where I'm having issues. I attempt to run:
gcutil --project="myproject" pull "instance-1" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "C:/content"
I also tried C:/content, C:\content and C:\\content.
These attempts threw the following error:
Could not resolve hostname C: Name or service not known
Thanks again for your time, I know it is valuable, I really appreciate you helping out a novice.
Update: I believe I am close, thanks to your help. I switched to the local C: drive and entered the command as you displayed in your answer update. It now returns a new, not-before-seen error:
Error: API rate limit exceeded
I did some research on S.O. and some suggestions were that billing is not enabled or the relevant API is not enabled, and that I could solve it by turning on Google Compute Engine. Billing has been enabled on my project for a few weeks now. In terms of Google Compute Engine, below are what I believe to be the relevant items turned on:
User Info: Enabled
Compute: Read Write
Storage: Full
Task Queue: Enabled
BigQuery: Enabled
Cloud SQL: Enabled
Cloud Database: Enabled
The test video I recorded was short and small in size. I also have not done anything else with this instance, so I am at a loss as to why I am getting the API rate limit exceeded error.
I also went to the Google APIs console. I see very limited usage reported so, again, not sure why I am exceeding the API limit. Perhaps I do not have something set appropriately in the APIs console?
I'm guessing you're using Cygwin here (please correct me if I'm wrong).
The root directory for your Cygwin installation is most likely C:\cygwin (see FAQ) and not C: so when you say /content on the command line, you're referring to C:\cygwin\content and not C:\content.
Secondly, since you're likely running as a regular user (and not root) you cannot write to /content so that's why you're getting the permission denied error.
Solution: specify the target directory as C:/content (or C:\\content) rather than /content.
Update: from the update to the question, you're using the Google Cloud SDK shell, not Cygwin, so the above answer does not apply. The reason you're seeing the error:
Could not resolve hostname C: Name or service not known
is because gcutil (like ssh) parses destinations which include : as having the pattern [hostname]:[path]. Thus, you should avoid : in the destination, which means we need to drop the drive spec.
In this case, the following should suffice, assuming that you're currently at a prompt that looks like C:\...>:
gcutil --project=myproject pull instance-1 /usr/local/WowzaStreamingEngine/content/myStream.mp4 \content
If not, first switch to the C: drive by issuing the command:
C:
and then run the above command.
Note: I removed the quotes from the command line because you don't need them when the parameters don't contain spaces.
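For anyone landing on this later: gcutil has since been deprecated in favour of the gcloud CLI, so a roughly equivalent transfer today would look something like the following (the zone and project are placeholders, not values taken from the question):
# Copy the recording from the instance into the current local directory.
gcloud compute scp instance-1:/usr/local/WowzaStreamingEngine/content/myStream.mp4 . \
    --zone us-central1-a --project myproject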