How to make google-cloud-sdk work behind a proxy? - google-cloud-platform

When using google-cloud-sdk, I noticed the following behavior. The GOAL is to make the failing case in point 2 behave like the successful case in point 1, so that gcloud works behind the proxy. How can I achieve this?
1. The service account could be activated via PowerShell (when a manual proxy is configured in the Windows proxy settings). The successful case looks like this:
PS C:\Users\monica.bostina\Downloads\DLP1\Attended Framework> gcloud config list
[core]
account = vf-grp-dlplt-dev-dlp01@appspot.gserviceaccount.com
disable_user_reporting = True
project = vf-grp-dlplt-dev-dlp01
Your active configuration is: [default]
PS C:\Users\monica.bostina\Downloads\DLP1\Attended Framework> gcloud auth activate-service-account --key-file="temporary-credentials.json"
Activated service account credentials for: [vf-grp-dlplt-dev-dlp01@appspot.gserviceaccount.com]
2. The service account failed to activate via PowerShell (when automatic proxy setup is ON and manual proxy settings are OFF in the Windows proxy settings). The error message was the following:
PS C:\Users\monica.bostina\Downloads\DLP1\Attended Framework> gcloud config list
[core]
account = vf-grp-dlplt-dev-dlp01@appspot.gserviceaccount.com
disable_user_reporting = True
project = vf-grp-dlplt-dev-dlp01
Your active configuration is: [default]
PS C:\Users\monica.bostina\Downloads\DLP1\Attended Framework> gcloud auth activate-service-account --key-file="temporary-credentials.json"
ERROR: gcloud crashed (TransportError): HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Max retries exceeded with url: /token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x00000199D6670DF0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
I'm working in a corporate environment dealing with VPNs and proxies.
I have whitelisted the following URLs at the proxy level:
https://accounts.google.com/o/oauth2/auth
https://oauth2.googleapis.com/token
https://www.googleapis.com/oauth2/v1/certs
https://www.googleapis.com/robot/v1/metadata/x509/vf-grp-XXXXXXXXXXXappspot.gserviceaccount.com
https://www.googleapis.com/auth/cloud-platform

Perhaps the manual proxy configuration works, but with automatic detection you're being handed a PAC script that doesn't cover the gcloud URLs. Automatic detection may also not be applied to shell/terminal sessions.
gcloud has its own proxy configuration, internal to the application, that you can rely on so you don't have to set the proxy configuration in the OS.
I had to set the following values (as documented):
gcloud config set proxy/type [PROXY_TYPE]
gcloud config set proxy/address [PROXY_IP_ADDRESS]
gcloud config set proxy/port [PROXY_PORT]
In my case, we also use custom CA certificates on our proxy, so I had to extract them and combine them into a single bundle that gcloud could trust:
gcloud config set core/custom_ca_certs_file [PATH_TO_CUSTOM_CA]
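For example, on a corporate HTTP proxy the full set of commands might look like this (the address, port, and CA bundle path are placeholders, not values from my environment):
gcloud config set proxy/type http
gcloud config set proxy/address proxy.corp.example.com
gcloud config set proxy/port 8080
gcloud config set core/custom_ca_certs_file C:\certs\corp-ca-bundle.pem
Once these are set, gcloud routes its requests through the configured proxy regardless of the Windows proxy settings, which should make the failing activation in point 2 behave like point 1.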

Related

gcloud init ERROR: gcloud crashed (ConnectionError)

The Google Cloud SDK installer downloads successfully.
After installation, it runs the gcloud init command.
It asks for sign-in.
After providing sign-in details, the following error occurs:
ERROR: gcloud crashed (ConnectionError):
HTTPSConnectionPool(host='oauth2.googleapis.com', port=443):
Max retries exceeded with url: /token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000016A45426A08>:
Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
How to handle this error?
I was trying gcloud init on a remote desktop, and received this error: gcloud crashed (ConnectionError)
The following solution worked for me.
I used the command gcloud init --console-only instead of gcloud init
SSL verification often fails if you are behind a proxy. You can disable SSL validation for the gcloud client using the command below:
gcloud config set auth/disable_ssl_validation True
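If gcloud cannot reach the endpoints at all, you can also point it at the proxy explicitly; gcloud honors the standard proxy environment variables. A sketch for PowerShell with a hypothetical proxy address:
$env:HTTPS_PROXY = "http://proxy.corp.example.com:8080"
gcloud init --console-only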
Google will eventually block your IP when you exceed a certain number of requests.
You can try to create another superuser from the command line; it may resolve the issue.
When I was trying with broadband it gave the same error, but when I switched to mobile data everything worked fine for me.

Why is Google Compute Engine not running my container?

I can do this successfully:
Bundle my app into a docker image
Build this image into a container using Google Cloud Build upon push to master
(This container is stored in the registry at, for example, gcr.io/my-project/my-container)
Deploy this container to the web using Google Cloud Run
Visit the Cloud Run url and see my website
I am now trying more sophisticated builds and I think the next step is to use Google Compute Engine.
To start, I am simply trying to deploy a single instance of the same app that I deployed to Cloud Run:
Navigate to Compute Engine > VM Instances
Enter basics like instance name
Enter my container location under "Container Image": gcr.io/my-project/my-container
(As an aside, I find it suspect that the interface does not offer a selector for your existing Container Registry items here.)
Select "Allow HTTP Traffic" and "Allow HTTPS Traffic"
Click "Create"
GCE takes a minute to create it, and then it shows the green checkmark and the instance name, and "External IP: 35.238.xxx.xxx". I visit that URL in my browser and get... "35.238.xxx.xxx refused to connect."
To inspect, I go back to the GCE page and select "SSH > Open in browser window" next to my instance, which opens a type of cloud terminal to the machine.
In this terminal window, I type ps and see that no processes are running. The container's Dockerfile ends with CMD yarn start:prod, so I guess that's not happening here.
Further, I ls here and there and navigate around, and see that there is no /app directory from my Dockerfile's WORKDIR /app command. It seems that not only did my app not boot, but the container was never copied to the VM instance?
What am I doing wrong?
For anyone having this issue: I faced the same problem and couldn't figure it out.
Reading Serhii's answer gave me the clue. I believe as of today (Jan 2021) the GCP Console UI is a bit unhelpful: if you type in a container name when creating your VM WITHOUT specifying a tag on the end, it doesn't complain or assume a default such as 'latest'; it just fails silently. Hence the VM starts, but with no Docker container running.
At least it now works for me; hopefully this helps others.
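In other words, always include an explicit tag in the image reference when creating the VM. A minimal sketch with placeholder project and image names:
gcloud compute instances create-with-container my-instance --container-image=gcr.io/my-project/my-container:latest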
Check whether your VM has an external IP address.
If it doesn't, the VM might not have network access to the public repository, or even to Google Container Registry (gcr.io), and the Docker container silently fails to start.
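You can check this from the command line (instance name and zone are placeholders):
$ gcloud compute instances describe my-instance --zone=us-central1-a --format="get(networkInterfaces[0].accessConfigs[0].natIP)"
If nothing is printed, the VM has no external IP; assign one, or set up Cloud NAT / Private Google Access so the instance can still reach gcr.io.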
I've decided to follow Deploying a container on a new VM instance again.
Please find my steps and commands below:
create a new VM that runs the Docker image gcr.io/cloud-marketplace/google/nginx1:latest with network tag http-server:
$ gcloud compute instances create-with-container instance-3 --tags=http-server,https-server --container-image=gcr.io/cloud-marketplace/google/nginx1:latest
Created [https://www.googleapis.com/compute/v1/projects/test-prj/zones/europe-west3-a/instances/instance-3].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instance-3 europe-west3-a n1-standard-1 10.156.0.30 35.XXX.111.XXX RUNNING
create a new firewall rule:
$ gcloud compute firewall-rules create default-allow-http --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
Creating firewall...⠹
Created [https://www.googleapis.com/compute/v1/projects/test-prj/global/firewalls/default-allow-http].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
check the firewall rules by scanning the external IP:
$ nmap -Pn 35.XXX.111.XXX
Starting Nmap 7.70 ( https://nmap.org ) at 2020-04-02 12:04 CEST
PORT STATE SERVICE
...
80/tcp open http
check if NGINX is running in the container:
$ curl -I http://35.XXX.111.XXX
HTTP/1.1 200 OK
Server: nginx/1.16.1
...
$ curl http://35.XXX.111.XXX
...
<h1>Welcome to nginx!</h1>
...
also via web browser at http://35.XXX.111.XXX
check status of the container:
$ gcloud compute ssh instance-3
...
instance-3 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
a657c8871239 gcr.io/cloud-marketplace/google/nginx1:latest "/usr/local/bin/dock…" 14 minutes ago Up 14 minutes klt-instance-3-uwtu
attach to the container and run curl http://35.XXX.111.XXX in a separate terminal:
instance-3 ~ $ docker attach a657c8871239
YY.YY.43.203 - - [02/Apr/2020:10:18:06 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
YY.YY.43.203 - - [02/Apr/2020:10:18:07 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
I found no errors while following the documentation.
To solve your issue:
Compare your steps and commands to mine.
Run test Docker image by following documentation on your project.
Try to replicate steps from documentation with your custom image.
If you still have the issue, update your question with all your steps, commands, and outputs.
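One more diagnostic that often helps: on the Container-Optimized OS image used by create-with-container, the container is started by the konlet agent, and its logs usually reveal pull or startup failures (the unit name below may vary by image version):
$ gcloud compute ssh instance-3
instance-3 ~ $ sudo journalctl -u konlet-startup
instance-3 ~ $ docker ps -a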
I also had this problem: the instance was running, but it could not pull my container.
Error: Failed to start container: Error response from daemon:
{"message":"unautho rized: You don't have the needed permissions to
perform this operation, and you may have invalid credentials. To
authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication"
I had to add an extra scope to the YAML file: https://www.googleapis.com/auth/source.full_control
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/local-xxxxxxxxxxxxxx/apptraining', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/local-xxxxxxxxxxxxxx/apptraining']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instances', 'create-with-container', 'instanceapptraining', '--machine-type=n1-standard-1', '--scopes=https://www.googleapis.com/auth/devstorage.full_control,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/datastore,https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/source.full_control,https://www.googleapis.com/auth/source.read_only,https://www.googleapis.com/auth/compute.readonly', '--zone=us-central1-a', '--preemptible', '--container-image=gcr.io/local-xxxxxxxxxxxxxx/apptraining:latest']

Ansible deployment to windows host behind bastion

I am currently successfully using Ansible to run tasks on hosts that are in a private subnet in AWS, which the group_vars below sets up:
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ec2-user@bastionhost.example.com"'
This is working fine.
For Windows instances not in a private subnet the following group_vars works:
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Now, trying to get Ansible to deploy to a Windows server behind the bastion by just using the ProxyCommand won't work - which I understand.
I believe though that there is a new protocol/module I can use called psrp.
I imagine that my group_vars for my Windows hosts needs to change to something like this:
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: psrp
ansible_psrp_cert_validation: ignore
If I run with just the above changes against instances that are publicly available (and not trying to connect via a bastion), my task seems to work fine:
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/win_shell.ps1
<10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14
PSRP: EXEC (via pipeline wrapper)
I know more changes are needed before this can work against a Windows server behind a bastion, but I ran it anyway to see what errors I would get, to give me clues on what to do next. Here is the result when running against an instance behind a bastion server:
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/setup.ps1
<10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14
The full traceback is:
...
ConnectTimeout: HTTPSConnectionPool(host='10.100.11.14', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x110bbfbd0>, 'Connection to 10.100.11.14 timed out. (connect timeout=30)'))
It seems Ansible is ignoring the ProxyCommand in my group_vars; I'm not sure whether that's expected.
I'm also not sure on what the next steps are to enable Ansible to deploy to Windows servers behind a bastion.
What config am I missing?
The docs say the ansible_ssh_common_args setting is appended to sftp, scp, and ssh commands, so it seems normal to me that it is not taken into account when ansible_connection is winrm or psrp.
As explained in the link provided by Pouyan in the comments, the ansible_psrp_proxy variable is used to provide proxy information:
ansible_connection: psrp
ansible_psrp_proxy: socks5h://localhost:1234
More info on the creation of the socks proxy can be found on: https://www.bloggingforlogging.com/2018/10/14/windows-host-through-ssh-bastion-on-ansible/
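Putting it together, a group_vars sketch for the Windows hosts behind the bastion might look like this (values are illustrative, and the SOCKS tunnel on port 1234 must already be open):
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: psrp
ansible_psrp_cert_validation: ignore
ansible_psrp_proxy: socks5h://localhost:1234
The tunnel itself can be opened with SSH dynamic port forwarding through the bastion before running the playbook:
ssh -D 1234 -q -N ec2-user@bastionhost.example.com
The socks5h scheme makes DNS resolution happen on the far side of the tunnel, which matters when the Windows host's name is only resolvable from inside the private subnet.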

Generate and deploy certificate using: Letsencrypt + Docker + AWS

I'm trying to generate a certificate in my local (MacBook) environment which I can package in my Docker image and deploy into my AWS environment via Kubernetes.
I've scoured sources online for a solution to this but I'm unable to find the details I need.
From my MacBook:
sudo certbot certonly -a standalone -d my.domain
Gives me this error:
Failed authorization procedure. my.domain (http-01): urn:acme:error:unauthorized ::
The client lacks sufficient authorization :: Invalid response from
http://my.domain/.well-known/acme-challenge/T8jtGQswRuMgHKIhGvb-
QD73kytTZnHfH5mK5lEZUJc: "{"timestamp":"2018-04-22T22:33:40.845+0000","status":404,
"error":"Not Found","message":"No message available","path":"/.well-kno"
Clearly, I need a way to prove that I own my own domain. How can I do this locally?
In order to verify ownership of the domain from your macbook you have these two options as stated in the certbot docs:
Use a DNS plugin - https://certbot.eff.org/docs/using.html#dns-plugins
Use the manual method - https://certbot.eff.org/docs/using.html#manual
While the standalone option does not require web server software, it does require being run on the target web server (the machine your domain resolves to). It is therefore not what you need here and will result in the failure reported in your question.
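For example, the manual plugin with a DNS-01 challenge lets you prove ownership from the MacBook by publishing a TXT record (replace my.domain with your domain):
sudo certbot certonly --manual --preferred-challenges dns -d my.domain
certbot prints a TXT record to create under _acme-challenge.my.domain; once it validates, the certificate and key are written under /etc/letsencrypt/live/my.domain/ and can be copied into the Docker image.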

How do I add authentication and endpoint to Django Celery Flower Monitoring?

I've been using flower locally and it seems easy enough to setup and run, but I can't see how I would set it up in a production environment.
In particular, how can I add authentication and how would I define a url to access it?
For a custom address, use the --address flag.
For authentication, use the --basic_auth flag.
See below:
# celery flower --help
Usage: /usr/local/bin/celery [OPTIONS]
Options:
--address run on the given address
--auth regexp of emails to grant access
--basic_auth colon separated user-password to enable
basic auth
--broker_api inspect broker e.g.
http://guest:guest@localhost:15672/api/
--certfile path to SSL certificate file
--db flower database file (default flower.db)
--debug run in debug mode (default False)
--help show this help information
--inspect inspect workers (default True)
--inspect_timeout inspect timeout (in milliseconds) (default
1000)
--keyfile path to SSL key file
--max_tasks maximum number of tasks to keep in memory
(default 10000)
--persistent enable persistent mode (default False)
--port run on the given port (default 5555)
--url_prefix base url prefix
--xheaders enable support for the 'X-Real-Ip' and
'X-Scheme' headers. (default False)
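For example, to bind Flower locally behind a reverse proxy with basic auth (credentials and prefix are illustrative):
celery flower --address=127.0.0.1 --port=5555 --basic_auth=user1:password1 --url_prefix=flower
With url_prefix set, your web server can expose Flower at https://example.com/flower/ while the basic_auth credentials gate access.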
You can use https://pypi.org/project/django-revproxy/.
This way Flower is hidden behind Django auth, and you don't need a rewrite rule in your web server.
Original source of this answer: Celery Flower Security in Production