How to restore gsutil command? - google-cloud-platform

I have updated Google Cloud SDK to the latest version 135.0.0.
After the update, I got the following message:
WARNING: There are older versions of Google Cloud Platform tools on
your system PATH. Please remove the following to avoid accidentally
invoking these old tools:
/usr/bin/git-credential-gcloud.sh
/usr/bin/bq
/usr/bin/gcloud
/usr/bin/gsutil
So I deleted all of the above files.
After that, gsutil stopped working.
Please help me resolve this issue.

The issue is that it was installed via
sudo apt-get update && sudo apt-get install google-cloud-sdk
see
https://cloud.google.com/sdk/docs/#deb
and you should have used the same mechanism to upgrade.
gcloud is also a kind of package manager, and is able to upgrade itself and its dependent packages. Unfortunately, if you use gcloud itself to upgrade, it installs into a different location. It likely does not work because the new location needs to be added to your PATH.
You can try to reinstall the google-cloud-sdk package via apt-get.
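A minimal sketch of that reinstall, assuming the Debian/Ubuntu package name from the linked install docs (--reinstall forces apt to reinstall a package it already knows about):
sudo apt-get update
sudo apt-get install --reinstall google-cloud-sdk
# confirm the restored binary is the one on your PATH
which gsutil && gsutil version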

Related

How do I permanently install an apt package in Google Cloud Shell?

I tried to install a package with apt-get in Cloud Shell once, but the next day it was gone. I saw another Stack Overflow answer here, but it was out of date (I think). Please help.
As you can see from the link @DazWilkin has provided, the only directory where Cloud Shell persists your files is the $HOME directory. Anything installed with apt will not persist when the instance provisioned for Cloud Shell shuts down.
There's a solution for this problem. The script $HOME/.customize_environment runs every time you boot up Cloud Shell. It runs as root, so you can use apt there to install the packages you want.
Example, as per doc:
#!/bin/sh
apt-get update
apt-get -y install erlang
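A minimal sketch of setting this up from a Cloud Shell terminal (the chmod is just a precaution; erlang is the example package from the doc):
cat > $HOME/.customize_environment <<'EOF'
#!/bin/sh
# runs as root each time the Cloud Shell instance boots
apt-get update
apt-get -y install erlang
EOF
chmod +x $HOME/.customize_environment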
Update: There seems to be an issue where .customize_environment is not working. It's been confirmed by a Google Engineer and it's currently being fixed.

AWS-RunPatchBaseline doesn't install new packages through apt-get

After running a few tests with the AWS SSM document AWS-RunPatchBaseline, the code it contains for Debian-based OSes like Ubuntu doesn't actually install any required packages, even when it is executed with the "Install" operation. Is this something that needs to be fixed, or why does this SSM document work this way without actually installing packages?
From the documentation, please see below:
apt-get update downloads the package lists from the repositories and "updates" them to get information on the newest versions of packages and their dependencies. It will do this for all repositories and PPAs. From http://linux.die.net/man/8/apt-get:
Used to re-synchronize the package index files from their sources. The indexes of available packages are fetched from the location(s) specified in /etc/apt/sources.list(5). An update should always be performed before an upgrade or dist-upgrade.
apt-get upgrade will fetch new versions of packages existing on the machine if APT knows about these new versions by way of apt-get update.
From http://linux.die.net/man/8/apt-get:
Used to install the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list(5). Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, nor are packages that are not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version. [Emphasis mine] An update must be performed first so that apt-get knows that new versions of packages are available.
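In other words, the usual sequence is a refresh of the package index followed by the upgrade:
sudo apt-get update        # re-synchronize the package index files
sudo apt-get -y upgrade    # then install the newer versions APT now knows about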
Currently the code content has this:
" apt-get update >/dev/null",
" fi",
"}",
Should I add a custom line or create a custom SSM document with apt-get upgrade -y after apt-get update? This document is supposed to work by installing packages, but so far on Ubuntu, for example, it doesn't do anything besides updating the package lists from the repos (without installing any).
In my experience, on Ubuntu 20.04, it works. You can verify this by checking apt logs after you run AWS-RunPatchBaseline on such an instance. The logs are located in:
/var/log/apt/history.log
and
/var/log/apt/term.log
Since you have not provided any details on what your tests were or which Linux distribution you used, nor any log output with possible errors from the SSM agent or apt, it is difficult to speculate why it does not work for you.
The actual upgrade is performed by a Python script, not the command you listed. You can inspect its code after you run AWS-RunPatchBaseline:
/var/log/amazon/ssm/patch-baseline-operations
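A quick, hedged way to check both things mentioned above on the instance (paths taken from this answer):
# show the most recent apt transactions recorded after the patch run
tail -n 40 /var/log/apt/history.log
tail -n 40 /var/log/apt/term.log
# inspect the Python payload that actually performs the patching
ls /var/log/amazon/ssm/patch-baseline-operations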

Can't find airflow to gcp hook

After installing the gcp package to my Airflow (1.10.9) setup, I tried to call the GSheetHook
https://airflow.readthedocs.io/en/latest/_api/airflow/providers/google/suite/hooks/sheets/index.html
but I get an error: No module named 'airflow.providers'.
Looking into the installed Python packages for Airflow, I do not find the providers package.
Is the gcp Airflow package working, or am I missing a step before I am able to use it?
EDIT: I have installed the gcp package using the pip installer: pip install apache-airflow[gcp]
and here's the list of the installed packages
The "providers" package is only available in Airflow Master. We plan to release each provider as a separate packages as "backport packages", most likely in a week or two from today.
PR to do that: https://github.com/apache/airflow/pull/8807
You should be checking https://airflow.apache.org/docs/1.10.9/ for Airflow 1.10.9 docs. You are looking at the docs for "latest" which is for Master.
It looks like backport packages (providers) can now be installed on v1.10.* using the following pip command: pip install apache-airflow-backport-providers-PACKAGE-NAME
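A hedged sketch for this particular case, assuming the Google suite hook ships in the google backport package and is named GSheetsHook (check the 1.10.* backport provider docs for the exact names):
pip install apache-airflow-backport-providers-google
# verify the providers namespace is now importable
python -c "from airflow.providers.google.suite.hooks.sheets import GSheetsHook; print(GSheetsHook)"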

Unable to run yum command on IBM DSX notebook

I am unable to run the yum command in the DSX environment. I need yum access to install some packages.
Here's the error I see when I type the "!yum install sox" command in a DSX notebook:
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
ImportError: No module named site
This is a possible duplicate of:
Can I use MeCab on IBM Data Science Experience
You cannot use yum in a DSX notebook attached to the Apache Spark service on Bluemix.
The Apache Spark service on Bluemix does not allow users to install any root-level packages, which is what yum usually does.
The only alternative is to download the source using !wget or !curl and then try to compile it. If the package doesn't technically need any root permission, you should be able to compile and install it using make.
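A hedged sketch of that approach for sox, run from a notebook cell (the version number, download URL, and install prefix below are illustrative, not taken from the question):
%%bash
# build sox from source into the home directory, no root needed
wget -q https://downloads.sourceforge.net/project/sox/sox/14.4.2/sox-14.4.2.tar.gz
tar xzf sox-14.4.2.tar.gz
cd sox-14.4.2
./configure --prefix=$HOME/sox && make && make install
export PATH=$HOME/sox/bin:$PATH
sox --version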
You can also raise a feature enhancement request for getting this package installed by default:
http://ibm.biz/dsxideas
Thanks,
Charles.

Installing and Enabling PHP7.1 on AWS Elastic beanstalk

Most vital PHP libraries have been mandating PHP 7.1 in their releases lately, and I happen to have an API staged on the AWS Elastic Beanstalk PHP 7.0 platform that I'd like to make compliant with this recent change.
Seeing as Amazon has greatly delayed this shift since the December 1, 2016 release of PHP 7.1, I've tried many things to make PHP 7.1 available on this AWS Elastic Beanstalk platform originally intended for PHP 7.0.
Below is my sample upgrade script:
sudo su
yum -y remove php70
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo rpm -Uvh remi-release-6*.rpm
yum-config-manager --enable remi-php71
wget ftp://195.220.108.108/linux/epel/6/x86_64/scl-utils-20120229-1.el6.x86_64.rpm
rpm -Uvh scl-utils-20120229-1.el6.x86_64.rpm
yum -y install php71
source /opt/remi/php71/enable
yum -y install php71-php-soap php71-php-bcmath php71-php-devel php71-php-intl php71-php-mbstring php71-php-mcrypt php71-php-mysqlnd php71-php-opcache php71-php-pgsql php71-php-odbc php71-php-pecl-uuid php71-php-pecl-memcache php71-php-igbinary php71-php-oauth php71-php-xml php71-php-xmlrpc php71-php-process php71-php-apcu
But unless I run source /opt/remi/php71/enable every time, I can't seem to get PHP 7.1 as the default PHP CLI runtime.
In a bid to fix that, I did yum remove php70* to clean up the old PHP 7.0 installation, but that led to a problem with the AWS Elastic Beanstalk deployment hook scripts.
Right now I'm stuck, and it seems like I'm forced to work with PHP 7.0 and downgrade most of my PHP libraries. I just want to know if anyone can get me out of this messed-up state.
Thank you.
The Remi repository provides two ways to install PHP 7.1:
base packages (php-*): one repository per version, single version allowed, so you need the remi-php71 repository enabled
SCL packages (php71-php-*): designed for parallel installation, available in the remi-safe repository (which you have installed)
As explained in the FAQ.
Also see the Wizard instructions.
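A hedged sketch of the first option (base packages), assuming the remi-release RPM from your script is already installed; the package names are the standard php-* equivalents of the SCL php71-php-* ones you listed:
yum-config-manager --enable remi-php71
yum -y install php php-cli php-soap php-bcmath php-intl php-mbstring php-mysqlnd php-opcache
php -v   # should now report PHP 7.1 without sourcing any enable script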
Amazon released a new version of Elastic Beanstalk with PHP 7.1 support.
Upgrade your environment to use this configuration.
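A hedged sketch of switching the environment's platform with the AWS CLI; the environment name and solution stack string below are placeholders, so list the real ones first:
# find the exact PHP 7.1 platform string available in your region
aws elasticbeanstalk list-available-solution-stacks | grep "PHP 7.1"
# then point the environment at it (names below are placeholders)
aws elasticbeanstalk update-environment \
    --environment-name my-api-env \
    --solution-stack-name "64bit Amazon Linux 2017.03 v2.4.x running PHP 7.1"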