gsutil "Updates are available" check - google-cloud-platform

On seemingly random occasions, running a gsutil command displays:
Updates are available for some Cloud SDK components. To install them,
please run:
$ gcloud components update
My issue is that I run gsutil commands programmatically on a "server", so I never see this message: it does not appear in either standard output or standard error captured by a .NET Process.
I see there is a gsutil version command, but I don't see a way to query whether I have the current version.
Is there a gsutil, or other GCP SDK, command I can run that will tell me whether my local copy needs to be updated, with the result written to standard output?
Here is the output from gsutil version -l:
H:\OUTREACH\WEBSITE\GCP>gsutil version -l
gsutil version: 4.27
checksum: 522455e2d24593ff3a2d3d237eefde57 (OK)
boto version: 2.47.0
python version: 2.7.10 (default, May 23 2015, 09:40:32) [MSC v.1500 32 bit (Intel)]
OS: Windows 7
multiprocessing available: False
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): xxxx
gsutil path: xxxx
compiled crcmod: True
installed via package manager: False
editable install: False

The gsutil tool comes with the Cloud SDK. When you run gsutil, it actually invokes a gcloud wrapper that forwards its credentials to gsutil. Among other things, this wrapper occasionally checks whether newer versions of the Cloud SDK are available.
If you do not want this check to be performed, you can disable it by setting the corresponding gcloud property:
gcloud config set component_manager/disable_update_check true
To actually check whether an update is available, you can run
gcloud components list
which will display something like:
Your current Cloud SDK version is: 163.0.0
The latest available version is: 165.0.0
To update run gcloud components update.
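If you need this check in a script on the server, here is a minimal sketch that scrapes those two summary lines. The exact wording of the lines is an assumption based on the sample output above and is not a stable interface, so treat this as a starting point only:
# Capture the component list once; the version summary may be written to stderr.
out=$(gcloud components list 2>&1)
current=$(echo "$out" | grep -i "current Cloud SDK version" | awk '{print $NF}')
latest=$(echo "$out" | grep -i "latest available version" | awk '{print $NF}')
if [ "$current" != "$latest" ]; then
  echo "Update available: $current -> $latest"
fi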

You can find out if a new copy of gsutil is available by running:
gsutil update
Also, you can avoid having updates offered when running on your server by setting the software_update_check_period variable, as described in https://cloud.google.com/storage/docs/gsutil/commands/update.
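For reference, this is roughly what the relevant entry looks like in the .boto configuration file. The section name and the meaning of the value 0 (disabling the periodic check) are assumptions based on the linked documentation, so verify them there:
[GSUtil]
software_update_check_period = 0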

Related

gcloud util installation crashed on Windows 10

I want to install the gcloud SSH component on Windows 10 Home in order to SSH into GCE instances, but it failed with the following message.
Your current Cloud SDK version is: 347.0.0
Installing components from version: 347.0.0
These components will be installed.
Name: gcloud Beta Commands
Version: 2019.05.17
Size: < 1 MiB
For the latest full release notes, please visit:
https://cloud.google.com/sdk/release_notes
Do you want to continue (Y/n)? y
Creating update staging area
10%
(snip)
100%
100%
ERROR: gcloud crashed (Error): [('C:\\Users\\tafut\\gcloud\\google-cloud-sdk\\platform\\gsutil\\third_party\\funcsigs\\docs\\index.rst', 'C:\\Users\\tafut\\gcloud\\google-cloud-sdk.staging\\platform\\gsutil\\third_party\\funcsigs\\docs\\index.rst', 'symbolic link privilege not held'), ('C:\\Users\\tafut\\gcloud\\google-cloud-sdk\\platform\\gsutil\\third_party\\mock\\docs\\changelog.txt', 'C:\\Users\\tafut\\gcloud\\google-cloud-sdk.staging\\platform\\gsutil\\third_party\\mock\\docs\\changelog.txt', 'symbolic link privilege not held')]
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
Here is the installed gcloud version:
$ gcloud version
Google Cloud SDK 347.0.0
bq 2.0.69
core 2021.06.25
gsutil 4.64
I suggest you uninstall the Cloud SDK and reinstall it; many such errors get resolved by reinstalling. Refer to the documentation for uninstalling, and to the documentation for installing it, and then use any of the documented methods to SSH.
I was using the gcloud command from Git Bash on Windows. Even though I opened Git Bash with "Run as Administrator", gcloud would crash.
Instead, I opened the Google Cloud SDK Shell (still with "Run as Administrator"), and the gcloud commands worked without crashing.

Google cloud compute engine - disable automatic updates (centos)

I wonder if there is a way to disable automatic updates (yum update) on our Linux machines on Google Cloud.
As far as I know, during the maintenance window our servers get new software packages installed (I checked yum.log). Since our installed software must stay at specific versions (not the latest), we don't want Google to run updates for us, because it usually breaks all kinds of dependencies...
I have searched on Google but didn't find any info about that.
Thanks.
The CentOS 7 image used in Compute Engine includes yum-cron, installed and enabled by default. You can verify this with either of the following commands:
sudo yum list installed yum-cron
sudo systemctl status yum-cron.service
yum-cron periodically checks for updates and applies them if any are available.
Solution
If you have yum-cron running on your instance, you can disable auto-updates by editing the configuration file /etc/yum/yum-cron.conf and changing the following variables to ‘no’:
update_messages = no
download_updates = no
apply_updates = no
This will prevent the system from updating automatically.
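As a sketch, the edit and a service restart could be scripted like this. It assumes the stock key = value layout of /etc/yum/yum-cron.conf; check your copy of the file before running it:
# Flip the three settings shown above to "no", then restart the service.
sudo sed -i -e 's/^update_messages *=.*/update_messages = no/' \
            -e 's/^download_updates *=.*/download_updates = no/' \
            -e 's/^apply_updates *=.*/apply_updates = no/' /etc/yum/yum-cron.conf
sudo systemctl restart yum-cron.service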
Alternatively, you can uninstall the package from your system using the following command:
sudo yum remove yum-cron
This part is missing from the official documentation, so it will be added soon.

change the AWS CLI from 1 to 2

When changing from AWS CLI v1 to v2, checking the AWS path with which aws returns:
/Users/username/.pyenv/shims/aws
I originally installed the CLI via pyenv. I want to remove that install and switch to v2, but even when I follow the official docs, it does not change to v2.
https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
Even after running these commands, it did not change to v2.
I've also tried uninstalling the AWS CLI, but it doesn't work.
Does anyone know a way to do this?
Thank you.
I removed my old aws cli from ~/.pyenv/versions/x.x.x/bin/aws, where x.x.x is the current Python version.
Get the current version:
$ pyenv versions
* 3.7.4
Remove aws cli from current pyenv bin:
$ rm -rf ~/.pyenv/versions/3.7.4/bin/aws*
Try again which aws:
$ which aws
/usr/local/bin/aws
I had the same issue.
This is because pyenv linked the shim to the very Python version the command was originally installed with; that is, by the way, fine, as it avoids version conflicts. pip3 and the awscli v1 uninstall don't take care of that.
What you have to do is:
First, uninstall the old awscli as noted in the AWS documentation (you probably used pip3). Note: remember to edit your bash_profile or zshrc, as you may have a $HOME/.local/bin PATH entry in your config: you want to remove that too.
The aws shim will remain in place until you get rid of that Python version (pyenv uninstall 3.7.x), BUT YOU PROBABLY DON'T WANT THAT.
Just remove the shim manually: rm /Users/username/.pyenv/shims/aws
Install the AWS CLI v2 using the recommended installer and verify everything works correctly, as in the sketch below.
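Putting those steps together, a minimal sketch could look like the following. The paths are the ones from this question and the installer commands are the ones from the official docs quoted above; adjust the pyenv version and shell config for your own machine:
pip3 uninstall -y awscli                    # remove the v1 install done with pip
rm /Users/username/.pyenv/shims/aws         # drop the stale pyenv shim
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /  # official v2 installer
hash -r                                     # make the shell forget the cached path
which aws && aws --version                  # should now resolve to /usr/local/bin/aws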
I had the same issue on my Windows system because I had both AWS CLI v1 and AWS CLI v2 installed, and the PATH environment variable had become corrupted from installing AWS CLI v2 multiple times.
I removed AWS CLI v1 from the PATH (or Path, or path) variable and pointed it to my AWS CLI v2 folder, e.g. C:\Program Files\Amazon\AWSCLIV2\aws.exe. After that, AWS CLI v2 and my Kubernetes tooling (0.xx.x) all sorted themselves out. When I check with aws --version, I now always get the v2 version.
So install AWS CLI v2 and also check your PATH system environment variable; there is no need to work around the Python version. Please refer to the AWS documentation at https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration-instructions.html#cliv2-migration-instructions-side-by-side

How to ensure software package version consistency in AWS SageMaker serverless compute?

I am learning AWS SageMaker, which is supposed to be a serverless compute environment for machine learning. In this type of serverless compute environment, who is supposed to ensure software package consistency and update the versions?
For example, I ran the demo program that came with SageMaker, deepar_synthetic. In its second cell, it executes the following: !conda install -y s3fs
However, I got the following warning message:
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.4.10
latest version: 4.5.4
Please update conda by running
$ conda update -n base conda
Since it is serverless compute, am I still supposed to update the software packages myself?
Another example is as follows. I wrote a few simple lines in a Jupyter notebook to find out the package versions:
import platform
import tensorflow as tf
print(platform.python_version())
print(tf.__version__)
However, I got the following warning messages:
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
The prints still worked and I got the results shown below:
3.6.4
1.4.0
I am wondering what I have to do to keep the packages consistent so that I don't get these warning messages. Thanks.
Today, SageMaker Notebook Instances are managed EC2 instances, but users still have full control over the Notebook Instance as root. You have full capabilities to install missing libraries through the Jupyter terminal.
To access a terminal, open your Notebook Instance to the home page and click the drop-down on the top right: “New” -> “Terminal”.
Note: By default, conda installs to the root environment.
You can follow the instructions at https://conda.io/docs/user-guide/tasks/manage-environments.html on how to install libraries in a particular conda environment.
In general, you will need the following commands:
conda env list
which lists all of your conda environments;
source activate <conda environment name>
e.g. source activate python3
conda list | grep <package>
e.g. conda list | grep numpy
which lists the currently installed versions of that package; and then
pip install numpy
or
conda install numpy
Note: Periodically the SageMaker team releases new versions of libraries onto the Notebook Instances. To get the new libraries, you can stop and start your Notebook Instance.
If you have recommendations on libraries you would like to see by default, you can create a forum post under https://forums.aws.amazon.com/forum.jspa?forumID=285 . Alternatively, you can bootstrap your Notebook Instances with Lifecycle Configurations to install custom libraries. More details here: https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateNotebookInstanceLifecycleConfig.html
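As a sketch of that last option, an on-start lifecycle configuration script could look like the following. The environment name matches the one in the warnings above; the package and version pin are only examples, not SageMaker defaults:
#!/bin/bash
set -e
# Activate one of the pre-built conda environments on the Notebook Instance.
source activate tensorflow_p36
# Pin a specific package version so every start of the instance is consistent.
pip install "numpy==1.14.5"
source deactivate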

How to upgrade version of terraform in windows

How do I upgrade the version of Terraform on Windows? I am currently using 0.9, on Windows, using Git Bash. Can someone help me with the process or commands?
Note: I did some Google searching, but to no avail.
Thanks
I know you specified using Bash, but this is the first answer that comes up in searches, so this is more of an FYI for future travelers.
To find the location of terraform.exe in powershell:
(get-command terraform.exe).Path
I had used Chocolatey to install Terraform, so to upgrade:
choco upgrade terraform
This is using Git Bash on Windows:
Download the latest version and unzip it.
Navigate to that folder through your Bash CLI.
Now type which terraform
Copy the path of the terraform binary.
Now type
cp terraform.exe <your Terraform path>
e.g. cp terraform.exe /c/WINDOWS/System32/terraform
Now check with
terraform --version
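A minimal sketch of those steps in Git Bash follows. The version number is only an example and the download URL pattern is an assumption; confirm both on the HashiCorp releases page, and note it assumes Terraform is already installed somewhere on your PATH:
VERSION=0.11.14
curl -LO "https://releases.hashicorp.com/terraform/${VERSION}/terraform_${VERSION}_windows_amd64.zip"
unzip "terraform_${VERSION}_windows_amd64.zip"
# Overwrite whatever `which terraform` currently points at.
cp terraform.exe "$(dirname "$(which terraform)")"
terraform --version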
Firstly, I would read the upgrade guides written by HashiCorp, which make upgrading versions transparent. In your case I would read both the 0.10 and 0.11 guides, as they are likely to contain changes that will affect you.
Secondly, in addition to this, test later versions of Terraform in isolation, i.e. not using a remote state file and in a sandbox environment.
Lastly, locate where the current Terraform binary lives (perhaps check your environment variables for a PATH entry that leads to the executable) and replace it with the latest version of Terraform, which you can download from the Terraform downloads page.
Use
choco install terraform --version=0.12.14 --force
to install the specific version that you like.