Enabling Google Cloud Shell "boost" mode via gcloud CLI - google-cloud-platform

I use the method mentioned in this excellent answer https://stackoverflow.com/a/49515502/10690958 to connect to Google Cloud Shell via SSH from my Ubuntu workstation. Occasionally, I need to enable "boost mode". In that case, I currently have to open Cloud Shell in Firefox (https://console.cloud.google.com/cloudshell/editor?shellonly=true), log in, and enable boost mode. After that I can close Firefox and use the gcloud method to access the Cloud Shell VM in boost mode.
I would like to do this (access boost-mode) purely through the gcloud cli, since using the browser is quite cumbersome.
The official docs don't mention any method of enabling boost mode via gcloud. There seem to be only three options, i.e. ssh/scp/sshfs via gcloud alpha cloud-shell. Is there perhaps a way to enable this via some configuration option?
Thanks

There does not seem to be any option to enable the boost mode from either the v1 or v1alpha1 versions of the Cloud Shell API (both versions undocumented).
The gcloud command actually uses the API to get the status of your Cloud Shell environment (which contains information about how to connect through SSH), updates the SSH keys if needed, and then connects using that info.
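You can check this for yourself by adding HTTP request/response logging to the normal invocation:
gcloud alpha cloud-shell ssh --log-http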
As far as I can see, when you click the "Boost mode" button, the browser makes a call to https://ssh.cloud.google.com/devshell?boost=true&forceNewVm=true (and some more parameters), but I can't make it work on the command line, so I'm guessing it's doing some other stuff that I can't identify.
If you need this for your workflow, you could raise a feature request on Google's issue tracker.

It is now possible to access the Cloud Shell in boost mode from the CLI with this command: gcloud alpha cloud-shell ssh --boosted. Other possible arguments are documented here. Just a warning: the first time I tried it, my home directory became unreadable and started returning "Input/output error"; logging out and back in fixed the issue.

Related

`docker compose create ecs` without user input

I am looking for a way to run docker compose create ecs without having to manually select where it gets AWS credentials from (as it's being run from a build agent).
In the following AWS blog post it is shown being used with a --from-env flag (which is exactly what I want); however, that flag doesn't seem to actually exist, either in the official docs or by trial and error. Is there something I am missing?
Apparently it's a known issue:
https://github.com/docker/docker.github.io/issues/11845
You have to enable experimental support for the Docker CLI on Linux to create an ecs context :S
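For reference, a minimal sketch of that workflow, assuming the intended command is docker context create ecs (the subcommand that actually creates an ECS context) and a reasonably recent Docker CLI with the ECS integration; myecscontext is a placeholder name:
export DOCKER_CLI_EXPERIMENTAL=enabled   # or put "experimental": "enabled" in ~/.docker/config.json
docker context create ecs myecscontext --from-env   # reads the standard AWS_* environment variables, no prompt
docker context use myecscontext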

Invalid arguments when creating new datalab instance

I am following the quickstart tutorial for datalab here, within the GCP console. When I try to run
datalab beta create-gpu datalab-instance-name
In step 3, I receive the following error:
write() argument must be str, not bytes
Can anyone help explain why this is the case and how to fix it?
Thanks
Referring to the official documentation: before running a Datalab instance, the corresponding APIs should be enabled, namely the Google Compute Engine and Cloud Source Repositories APIs. To do so, visit Products -> APIs and Services -> Library and search for the APIs. Additionally, make sure that billing is enabled for your Google Cloud project.
You can also enable the APIs by typing the following command, which will give you a prompt to enable them:
datalab list
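Alternatively, the two APIs can be enabled non-interactively from the CLI (a sketch assuming the gcloud services command is available in your SDK version; these are the standard service names for Compute Engine and Cloud Source Repositories):
gcloud services enable compute.googleapis.com sourcerepo.googleapis.com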
I did some research and found that the same issue has been reported on the GitHub page. If enabling the APIs doesn't work, the best option would be to contribute (add a comment) to the mentioned GitHub topic to make it more visible to the Datalab engineering team.

After being disconnected from a running job, how to show logs again in Google Cloud SDK Shell?

I'm running a job on Google's Machine Learning Engine. I issued this job from the Google Cloud SDK Shell on Windows. At some point, I closed my laptop and lost connection to Google Cloud. The job kept running on Google's servers in the meantime. Now that I have reopened my laptop and connected to the internet again, the shell has output:
ERROR: (gcloud.ml-engine.jobs.submit.training) There was a problem refreshing your current auth tokens: Unable to find the server at www.googleapis.com
Please run:
$ gcloud auth login
to obtain new credentials.
So I ran that command. The browser opened, I clicked on my Google account and authenticated. Then I saw:
You are now logged in as [my Google e-mail address].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
I did that also, and then saw the output:
Updated property [core/project].
So everything seems to work. Online, in the Google Cloud Console, I can view the logs of my job while it is running. However, my question is: is it possible to get those logs/stdout printed in my shell again?
I guess you are looking for something like what is explained in this documentation page about ML Engine logging.
You can either use the logging service, specifying your preferred filter, with gcloud beta logging read, or you can use this ML Engine-specific command, with the options and flags you need, to print the logs of your job in the console:
gcloud ml-engine jobs stream-logs
You can find the reference for that command in this other page.
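For instance, assuming your job is named my_training_job (a placeholder; you can list your jobs with gcloud ml-engine jobs list), either of the following should work, the second using the standard ml_job resource type from the logging docs:
gcloud ml-engine jobs stream-logs my_training_job
gcloud beta logging read "resource.type=ml_job AND resource.labels.job_id=my_training_job"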

Google Cloud Shell - Cannot open shell

I am really new to Google Cloud Shell, and I accidentally closed the tab for the shell...and I cannot find it now.
I know I need to click the highlighted button at the top of the console window to activate Google Cloud Shell, but it is now grey, and no shell is presented on the page (it should be at the bottom).
Can anyone help?
I observed the same issue.
I refreshed the console page (Ctrl+F5) and was able to see the 'Activate Cloud Shell' button again!
However, if the above does not resolve the issue, here are some tips.
From the Google docs on Google Cloud Shell limitations:
Weekly usage: Cloud Shell also has weekly usage limits. If you reach your usage limit, you'll need to wait until the specified time (listed under Usage Quota, found under the three dots menu icon) before you can use Cloud Shell again.
There are also usage restrictions; check your email to see whether you have violated any conditions of Cloud Shell usage:
Warning: Violating the Terms of Service will result in Cloud Shell being disabled for your account. This constitutes activity that adversely impacts Google Cloud Platform services, other customers' or their end users' use of services, or the Google network used to provide these services. Coin mining and network scanning using Cloud Shell are strictly prohibited.
I had the same issue here.
You can install the command line interface in your terminal to access Google Cloud Datalab.
Here you will find the quickstart to configure the environment.
Installing datalab component:
gcloud components install datalab
Connecting with your VM Instance:
datalab connect <instance-name>
Opening the initial page:
http://localhost:8081
See more:
Google Cloud Datalab - Quickstart
It works now. It seems that there was a service problem earlier today, so the page was not functioning properly.

Cloud Foundry - is it possible to check files on flapping app?

Is there a way to review content/files on a flapping app instance?
I had this problem today with a Go application; unfortunately, since the container didn't start, I couldn't check what files were there. So the only way to debug the problem (which was, by the way, related to a wrong filename) was the log stream.
Thank you,
Leszek
PS.
I am using HPE Stackato, but I assume the approach will be similar to the approach in CF and PCF...
With Pivotal Cloud Foundry, you can use cf ssh to SSH into the container, or to set up port forwarding so that you can use plain ssh or even scp and sftp to access the container or view its file system. You can read some more about it in:
The diego-ssh repository's README
The documentation on Accessing Apps with SSH
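For example, a minimal session (my-app is a placeholder app name; -L sets up local port forwarding):
cf ssh my-app                            # interactive shell inside the app container
cf ssh my-app -L 9000:localhost:8080     # forward local port 9000 to port 8080 in the container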
I highly doubt this functionality exists in HPE Stackato because it uses an older Cloud Foundry architecture. Both PCF and HPE are based on open source Cloud Foundry, but PCF is on the newer Diego architecture, while HPE is still on the DEA architecture, according to the nodes listed in the Stackato Cluster Setup docs.
With the DEA architecture, you should be able to use the cf files command, which has the following usage:
NAME:
   files - Print out a list of files in a directory or the contents of a specific file
USAGE:
   cf files APP_NAME [PATH] [-i INSTANCE]
ALIAS:
   f
OPTIONS:
   -i   Instance
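For instance, to list the application directory of the first instance (my-app is a placeholder; on the DEA architecture the app itself typically lives under app/):
cf files my-app app/ -i 0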
To deal with a container that is failing to start, there is currently no out-of-the-box solution with Diego, but it has been discussed. This blog post discusses some options, including:
For the app in question, explicitly specify a start command by appending ";sleep 1d". The push command would look like this: cf push <app_name> -c "<original_command> ;sleep 1d". This will keep the container around for a day after the process within the container has exited.
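As a concrete (hypothetical) illustration for a Go app like the one described above, assuming its original start command is ./myapp:
cf push my-go-app -c "./myapp; sleep 1d"   # container stays alive for a day even if ./myapp exits immediately
cf ssh my-go-app                           # then inspect the files while the container is still around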