Google Cloud: ERROR: Reachability Check failed

I followed this answer already, but it didn't help. I also re-installed the gcloud CLI, but now I am not able to complete the installation because of the following error.
Here is the output of ./google-cloud-sdk/bin/gcloud init:
ERROR: Reachability Check failed.
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with httplib2 (SSLCertVerificationError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with httplib2 (SSLCertVerificationError)
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with requests (SSLError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with requests (SSLError)
Network connection problems may be due to proxy or firewall settings.
Also, I am not behind any corporate proxy.
It was working perfectly a few days ago, until today. I did not change any settings or install any new services.
Output of ./google-cloud-sdk/bin/gcloud info:
./google-cloud-sdk/bin/gcloud info
Google Cloud SDK [354.0.0]
Python Version: [3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08) [Clang 6.0 (clang-600.0.57)]]
Python Location: [/Users/myname/.config/gcloud/virtenv/bin/python3]
Site Packages: [Enabled]
Installation Root: [/Users/myname/Downloads/google-cloud-sdk]
Installed Components:
gsutil: [4.67]
core: [2021.08.20]
bq: [2.0.71]
System PATH: [/Users/myname/.config/gcloud/virtenv/bin:/Users/myname/Downloads/apache-maven-3.8.4/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/munki:/usr/local/opt/go/libexec/bin:/Users/myname/go/bin]
Python PATH: [/Users/myname/Downloads/./google-cloud-sdk/lib/third_party:/Users/myname/Downloads/google-cloud-sdk/lib:/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload:/Users/myname/.config/gcloud/virtenv/lib/python3.7/site-packages]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/myname/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/myname/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/myname/.config/gcloud/configurations/config_default]
Account: [None]
Project: [None]
Current Properties:
  [core]
    disable_usage_reporting: [True]
Logs Directory: [/Users/myname/.config/gcloud/logs]
Last Log File: [/Users/myname/.config/gcloud/logs/2022.08.10/15.35.06.807614.log]
git: [git version 2.32.0 (Apple Git-132)]
ssh: [OpenSSH_8.1p1, LibreSSL 2.7.3]

Update on this: just disable SSL validation and everything will work. Note that this turns off certificate verification entirely, so it is a workaround rather than a fix.
gcloud config set auth/disable_ssl_validation True
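Before resorting to that, it may be worth checking which certificate the endpoint actually presents; if the issuer is not a Google CA, something on the machine or network (antivirus, TLS-inspecting proxy) is intercepting the connection. A minimal check using the standard openssl tool (not part of the gcloud SDK):
openssl s_client -connect cloudresourcemanager.googleapis.com:443 -servername cloudresourcemanager.googleapis.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject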

Related

AWS Elasticbeanstalk with Django: Incorrect application version found on all instances

I'm trying to deploy a Django application on Elastic Beanstalk. It had been working fine, then it suddenly stopped, and I cannot figure out why.
When I do eb deploy, I get:
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
INFO: New application version was deployed to running EC2 instances.
INFO: Environment update completed successfully.
Alert: An update to the EB CLI is available. Run "pip install --upgrade awsebcli" to get the latest version.
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 if ! grep -q 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf ; then echo -e 'WSGIApplicationGroup %{GLOBAL}' | sudo tee -a /etc/httpd/conf.d/wsgi.conf; fi;
INFO: Attempting to open port 22.
INFO: SSH port 22 open.
INFO: Running ssh -i /home/ubuntu/.ssh/web-cdi_011017.pem ec2-user@54.188.214.227 sudo /etc/init.d/httpd reload
Reloading httpd: [ OK ]
When I then run eb health, I get
Incorrect application version found on all instances. Expected version
"app-c56a-190604_135423" (deployment 300).
If I eb ssh and look in /opt/python/current, there is nothing there, so nothing is being copied across.
I think something may be wrong with .elasticbeanstalk/config.yml. Somehow the directory was deleted and set up again. This is the config.yml:
branch-defaults:
  master:
    environment: app-prod
  scoring-dev:
    environment: app-dev
environment-defaults:
  app-prod:
    branch: null
    repository: null
global:
  application_name: my-app
  default_ec2_keyname: am-app_011017
  default_platform: arn:aws:elasticbeanstalk:us-west-2::platform/Python 2.7 running on 64bit Amazon Linux/2.3.1
  default_region: us-west-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application
Please, any ideas about how to troubleshoot?
I upgraded to the latest AWS stack for Python 2.7, and that sorted it.
I faced the same problem, and the cause was the command timeout.
The default max deployment time (Command timeout) is 600 seconds (10 minutes).
Your Environment → Configuration → Deployment preferences → Command timeout
Increase the command timeout, for example to 1800 seconds, or upgrade the instance type so deployments run faster. A CLI alternative is sketched below.
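If you prefer to change the timeout from the command line rather than the console, the same option lives in the aws:elasticbeanstalk:command namespace. A sketch with the standard AWS CLI, assuming the environment is named app-prod:
aws elasticbeanstalk update-environment --environment-name app-prod --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=Timeout,Value=1800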

Cannot create image with packer in Google Cloud due to service account error

I have created a service account in Google Cloud.
Installed the gcloud SDK on my Mac.
Whenever I run my Packer template, it complains about an account that I have no idea where it is coming from.
Packer template:
{
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "/Users/Joe/Downloads/account.json",
      "project_id": "rare-truck-123456",
      "source_image": "centos-7-v20180129",
      "zone": "us-west1-a",
      "ssh_username": "centos"
    }
  ]
}
Error:
➜ packer git:(master) ✗ packer build release_google_image.json
googlecompute output will be in this color.
==> googlecompute: Checking image does not exist...
==> googlecompute: Creating temporary SSH key for instance...
==> googlecompute: Using image: centos-7-v20180129
==> googlecompute: Creating instance...
googlecompute: Loading zone: us-west1-a
googlecompute: Loading machine type: n1-standard-1
googlecompute: Loading network: default
googlecompute: Requesting instance creation...
googlecompute: Waiting for creation operation to complete...
==> googlecompute: Error creating instance: 1 error(s) occurred:
==> googlecompute:
==> googlecompute: * The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
Build 'googlecompute' errored: Error creating instance: 1 error(s) occurred:
* The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
==> Some builds didn't complete successfully and had errors:
--> googlecompute: Error creating instance: 1 error(s) occurred:
* The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
==> Builds finished but no artifacts were created.
Why is it trying to use 123412341234-compute@developer.gserviceaccount.com?
I created a service account with Compute Admin v1 permissions under my project in Google Cloud, downloaded my JSON file, and renamed it to accounts.json. The name of that service account is different (release-builder@rare-truck-123456.iam.gserviceaccount.com), but Packer seems to ignore it and go after some strange account.
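For diagnosis, it can help to list the project's service accounts and see whether the default compute account (PROJECT_NUMBER-compute@developer.gserviceaccount.com, created automatically with the project) still exists, since instance creation fails like this if it was deleted. A quick check, assuming gcloud is authenticated against the same project:
gcloud iam service-accounts list --project rare-truck-123456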
Even my gcloud info CLI command returns the right service account:
Google Cloud SDK [188.0.1]
Platform: [Mac OS X, x86_64] ('Darwin', 'Alexs-MacBook-Pro.local', '16.7.0', 'Darwin Kernel Version 16.7.0: Thu Jan 11 22:59:40 PST 2018; root:xnu-3789.73.8~1/RELEASE_X86_64', 'x86_64', 'i386')
Python Version: [2.7.14 (default, Sep 25 2017, 09:53:22) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]]
Python Location: [/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python]
Site Packages: [Disabled]
Installation Root: [/Users/Joe/Downloads/google-cloud-sdk]
Installed Components:
core: [2018.02.08]
gsutil: [4.28]
bq: [2.0.28]
System PATH: [/Users/Joe/Downloads/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin]
Python PATH: [/Users/Joe/Downloads/google-cloud-sdk/lib/third_party:/Users/Joe/Downloads/google-cloud-sdk/lib:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [True]
Kubectl on PATH: [False]
Installation Properties: [/Users/Joe/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/Joe/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/Joe/.config/gcloud/configurations/config_default]
Account: [release-builder@rare-truck-123456.iam.gserviceaccount.com]
Project: [rare-truck-123456]
Current Properties:
  [core]
    project: [rare-truck-123456]
    account: [release-builder@rare-truck-123456.iam.gserviceaccount.com]
    disable_usage_reporting: [True]
  [compute]
    region: [us-west1]
    zone: [us-west1-a]
Logs Directory: [/Users/Joe/.config/gcloud/logs]
Last Log File: [/Users/Joe/.config/gcloud/logs/2018.02.09/15.51.18.911677.log]
git: [git version 2.14.3 (Apple Git-98)]
ssh: [OpenSSH_7.4p1, LibreSSL 2.5.0]
Google Compute Engine instances use the default service account for better integration with the Google Cloud Platform.
As the documentation [1] puts it: "(...)If the user will be managing virtual machine instances that are configured to run as a service account, you must also grant the roles/iam.serviceAccountActor role."
You need to add the "Service Account User" role to your service account in order to be able to create Google Compute Engine instances.
[1] https://cloud.google.com/iam/docs/understanding-roles#compute_name_short_roles
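A hedged sketch of granting that role from the command line, using the project and service account names from the question (roles/iam.serviceAccountUser is the current role ID; older docs refer to serviceAccountActor):
gcloud projects add-iam-policy-binding rare-truck-123456 --member='serviceAccount:release-builder@rare-truck-123456.iam.gserviceaccount.com' --role='roles/iam.serviceAccountUser'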

salt cloud error deploying to AWS

When I try to deploy to Amazon EC2 using Salt Cloud, I’m getting this error:
[root@salt salt]# salt-cloud -p ec2_private_win_r3.xlarge server00009
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '33b43015-518e-4865-88e7-b6432e61b0db'}
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '4b88b080-ad32-4388-a133-4322b1c08c04'}
[ERROR ] There was a profile error: 'NoneType' object has no attribute 'copy'
I’ve verified the AWS keys that I’m using; I’m able to list and even launch new instances from the AWS command line with the same keys that are in the cloud provider file:
## Gov Cloud Non Prod environment
company-govcloud-nonprod-us-east-1:
  # Set up the location of the salt master
  minion:
    master: 10.0.2.15
  # Set up grains information, which will be common for all nodes
  # using this driver
  grains:
    node_type: broker
  # Valid options are:
  #     private_ips - The salt-cloud command is run inside the EC2
  #     public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: private_ips
  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1
  # Set the EC2 access credentials (see below)
  id: 'AKIAIATLQ4FTDDA6BV7A'
  key: 'asdfasdsfadsadasasdafadsadfafasdasda'
  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/company-timd
  keyname: company-timd
  #securitygroup: core-sg-default
  # Optionally configure default region
  # Use salt-cloud --list-locations <driver> to obtain valid regions
  #
  location: us-east-1
  availability_zone: us-east-1c
  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #     Amazon Linux -> ec2-user
  #     RHEL         -> ec2-user
  #     CentOS       -> ec2-user
  #     Ubuntu       -> ubuntu
  #
  ssh_username: root
  # Optionally add an IAM profile
  #iam_profile: 'arn:aws:iam::xxxxxxxxxxxx:role/rl-company-admin'
  driver: ec2
And this is the profile that I’m trying to use:
## Windows Server 2012 Alteryx & Tableau
ec2_private_win_r3.xlarge:
  provider: company-govcloud-nonprod-us-east-1
  image: ami-xxxxxxx
  size: r3.xlarge
  network_interfaces:
    - DeviceIndex: 0
      SubnetId: subnet-xxxxxxx
      SecurityGroupId: sg-xxxxxx
      PrivateIpAddresses:
        - Primary: True
      AssociatePublicIpAddress: False
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 120
      Ebs.VolumeType: gp2
    - DeviceName: /dev/sdf
      Ebs.VolumeSize: 250
      Ebs.VolumeType: gp2
  tag: {'Engagement': '999999999999', 'Owner': 'Tim', 'Name': 'non-production', 'Environment': 'COMPANY-Grouper'}
I tried commenting out the IAM profile in the cloud provider definition. I’ve checked, and the AWS credentials I’m using have administrator access in IAM.
Here's my versions report:
[root@salt ~]# salt-cloud --versions-report
Salt Version:
Salt: 2016.11.5
Dependency Versions:
Apache Libcloud: 0.20.1
cffi: 1.6.0
cherrypy: 3.2.2
dateutil: 2.6.0
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: Not Installed
pycparser: 2.14
pycrypto: 2.6.1
pycryptodome: 3.4.3
pygit2: Not Installed
Python: 2.7.5 (default, Nov 6 2016, 00:28:07)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: centos 7.2.1511 Core
machine: x86_64
release: 3.10.0-327.el7.x86_64
system: Linux
version: CentOS Linux 7.2.1511 Core
How can I solve this problem?
Are you trying to launch a Windows EC2 instance with an ssh_username? That may be what's breaking it.
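A quick way to narrow down a 401 AuthFailure is to check which principal the keys actually resolve to, independently of salt-cloud (standard AWS CLI, assuming the same credentials are configured for it):
aws sts get-caller-identity
If that succeeds while salt-cloud still gets AuthFailure, the mismatch is usually in which credentials or endpoint salt-cloud picks up; note in particular that GovCloud keys only work against GovCloud regions and vice versa.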

Something wrong on deploy chaincode for hyperledger v1.0

I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this a problem with being unable to connect to the CA, or is there some other cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and enter the commands:
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers had been started.
2. Download the examples and install modules.
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repo
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode.
$node deploy.js
There were several problems, not the least of which is that the documentation was outdated and was for a preview release of Hyperledger Fabric. The docs are actually in the process of being removed, as we need to update our examples / samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when trying to communicate from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, you'll see it output the IP address of the endpoint you should use when communicating with exposed ports.
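If that IP has scrolled away, docker-machine can print it again; a minimal sketch assuming Docker Toolbox's default machine name:
docker-machine ip default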
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js from the fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is (a scripted version of this substitution is sketched after these steps)
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
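For reference, the substitution can also be done in one line; a sketch for macOS (BSD sed syntax, and it assumes the Docker Toolbox machine is named 'default'):
sed -i '' "s|http://localhost:7054|http://$(docker-machine ip default):7054|" enrollAdmin.js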

Hyperledger fabric node sdk deploy.js example failing

I'm following these instructions for setting up Hyperledger Fabric:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
but when I run deploy.js:
info: Returning a new winston logger with default configurations
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric CA service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a CryptoKeyStore to save keys, using the store: {"opts":{"path":"/home/ubuntu/.hfc-key-store"}}
I'm able to use the Docker CLI, but not the Node SDK.
Failed to load user "admin" from local key value store
How do I store the admin user?
Fixed after installing CouchDB:
docker pull couchdb
docker run -d -p 5984:5984 --name my-couchdb couchdb
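To confirm CouchDB is actually reachable on that port (it answers with a small JSON welcome banner), a quick check:
curl http://localhost:5984/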
The services in the docker-compose YAML file, including the certificate authority, have a volumes section, e.g.:
ccenv_latest:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ccenv_snapshot:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ca:
  volumes:
    - ./tmp/ca:/.fabric-ca
You need to make sure the local paths are valid, so in the above configuration you need to have ./ccenv and ./tmp/ca directories at the same level as the docker-compose YAML file.
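Creating them up front is a one-liner, run from the directory containing the compose file (a sketch matching the paths above):
mkdir -p ./ccenv ./tmp/ca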