Whitelist Google Cloud Build IPs to test database connection

I'm looking to give Cloud Build access to a PostgreSQL database during the build steps because it's part of the integration tests for the Python application I'm running. Any suggestions on how to handle this authorization without exposing the database to the world?

You can do this using a Private Pool, where you define the network CIDR to be used at build time; see https://cloud.google.com/build/docs/private-pools/private-pools-overview to learn more.
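For example, creating a private pool peered with the VPC that can reach your database might look roughly like this (the pool, region, project, and network names are placeholders, not from the question):

gcloud builds worker-pools create my-private-pool \
    --region=us-central1 \
    --peered-network=projects/MY_PROJECT/global/networks/MY_VPC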
(The previous answer follows; I've left it in place for the sake of history.)
At this time, you would need to whitelist all of the GCE public IP address ranges -- which effectively exposes your database to the world. (So don't do that!)
However, at Google Next we announced and demoed a coming Alpha release that will enable you to run GCB workloads in a hybrid VPC world with access to protected (on-prem) resources. As part of that Alpha, you could whitelist internal-only addresses to achieve your goal securely.
You can watch for a public announcement in our release notes.

You can now use the IAP (Identity-Aware Proxy) TCP forwarding feature.
I don't know if this is still helpful, but I ran into a similar situation a while ago and was able to fix it like this:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    gcloud compute start-iap-tunnel sql-vm 5555 \
      --local-host-port=localhost:5555 \
      --zone=us-west1-a & sleep 5 && python echo_client.py
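Note that for the tunnel to work, a firewall rule has to allow IAP's TCP-forwarding source range (35.235.240.0/20) to reach the VM on the tunnelled port. A minimal sketch (the rule name is an assumption; the port matches the build step above):

gcloud compute firewall-rules create allow-iap-tunnel \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5555 \
    --source-ranges=35.235.240.0/20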
I also wrote a blog post about this; check it out at hodo.dev.

Related

How to check if the Virtual Machines are not accessible on port 8080

In GCP, how can we check whether Compute Engine instances are not accessible on port 8080? Is there an API we can use to check and validate this scenario?
There are many ways of doing what you want; however, there are a few factors I don't know, so this answer may sound a bit generic in places.
Scenario 1 - instances have to be accessible from the Internet
check if the firewall settings allow incoming traffic to your instances on port 8080 (you can use cloud console or gcloud).
if there isn't such a rule you have to create one - it's best to tag your instances (firewall rules target network tags) and create a proper rule.
now you can actually check if there's anything running on port 8080 - if this is a web app / API you can just use curl host.ip:port 2> errors.log.
You mentioned you have a lot of instances to check, so a script would be handy - have a look at this SO answer on how to create one that reads the addresses from a file (a sketch follows after this list).
If you want to do it like a pro, use Ansible - here's a useful answer that will help with this.
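A rough sketch of such a loop-over-a-file check (the file name and timeout are assumptions):

#!/bin/bash
# Read one host/IP per line from hosts.txt and probe port 8080.
while read -r host; do
  if curl --silent --max-time 3 "http://${host}:8080/" > /dev/null 2>> errors.log; then
    echo "${host}: port 8080 reachable"
  else
    echo "${host}: port 8080 NOT reachable"
  fi
done < hosts.txt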
Scenario 2 - instances are not available from the Internet
you need to run the mentioned instance-checking script from within the VPC your instances are in. Create a new VM for this purpose and run the script from there. If your instances are spread across many VPCs, you need to create a VM in each of them and run the script.
And you can automate this with Ansible - even create the test instances & delete the VMs afterwards. This may sound like overkill, but it all depends on how often you need to run those tests and on the number of VMs you need to test.
There's also the question of testing whether the ports are open from inside the instances.
if they are running Linux then dany L's suggestion is a good one. But since you have to repeat that many, many times, Ansible may again be a good way to do this - have a look at another answer describing how to run a command on the target host (a quick local check is sketched below).
if they are running Windows then it's more complicated, but you can use the netsh firewall command - and again, using Ansible is possible.
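For the Linux case, a minimal local check might look like this (run on the instance itself; the port is the one from the question):

# Check whether anything is listening on TCP port 8080 on this machine.
if sudo ss -tln | grep -q ':8080 '; then
  echo "something is listening on 8080"
else
  echo "nothing is listening on 8080"
fi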

What keeps accessing Google Cloud metadata on my instance

I have a Google Cloud compute instance running Ubuntu 18. We had Wireshark running to track down another problem and we noticed that something accesses the metadata server every minute. Three requests every minute:
GET /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=False&timeout_sec=60&wait_for_change=True
GET /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=True&timeout_sec=60&wait_for_change=True
GET /computeMetadata/v1/?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=True&timeout_sec=77&wait_for_change=True
In all cases, Wireshark says the source is the IP of my instance, and the destination is 169.254.169.254, which is the Google metadata server.
None of the code we have written accesses the server. The first request makes me think some Google-specific software is accessing the metadata, but I haven't been able to prove that. What is worrisome is that the response to the third one contains SSH keys. Also, every minute seems excessive.
I see another post talking about scripts in /usr/share/google, but I don't have that directory. I do see that google-fluentd is installed. I also see an installed snap for google-cloud-sdk. Could one of those be it? I don't recall installing them and, AFAIK, I am not using them, so if that is it, what is the harm in uninstalling it?
You do not have a problem to worry about. The metadata server is private to your instance. The Google VM guest environment software and Stackdriver (fluentd) are making requests to the metadata server to get credentials, detect changes (new SSH keys), set the clock, etc.
The IP address 169.254.169.254 is an IPv4 Link Local Address. Only your VM has a route to that network.
Compute Engine Guest Environment
Do not attempt to uninstall the Guest Environment. You can remove Stackdriver, but I do not recommend that. Stackdriver provides logging and monitoring features that are very useful.
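If you want to see for yourself what lives behind those requests, you can query the metadata server from inside the VM; requests are answered only when the Metadata-Flavor header is present, and only the VM itself has a route to the link-local address:

curl -s -H "Metadata-Flavor: Google" \
    "http://169.254.169.254/computeMetadata/v1/instance/hostname"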

Dynamic SSL allocation in GCP HTTP(s) Layer 7 Load balancer

I'm exploring GCP and I love the way it lets the developer play with such costly infrastructure. I have learnt a lot so far and I'm no longer a beginner, but I have this case for which I'm unable to find docs or examples, or I might be thinking in the wrong direction.
I want to build an auto-scaling hosting solution where users can :
Create Account
Create multiple websites [these websites are basically templates where the user can define certain fields and the website is rendered in a specific manner | users are not allowed to upload files, just some data entries]
Connect a domain to a website [by putting an 'A' record DNS entry in their domain]
After that an SSL certificate is provisioned automatically by the platform and the website is up and running [somewhat like Firebase].
I could easily create such a project on one server with the following configuration [simple steps like user auth etc. skipped]:
I use an Ubuntu 16.04 machine with 4GB RAM and a 10GB persistent disk
Then I install nvm [a package to manage Node.js]
after that I install a specific version of Node.js using nvm
I have written a simple JavaScript package in which I use an Express server to respond to client requests with some HTML
for managing SSL I use Let's Encrypt's certbot package
I use pm2 to run the JavaScript file as a service in the background
After accomplishing this, I could see everything works the way I want it to.
Then I started exploring GCP's load balancers. There I learnt about the layer 4 and layer 7 LBs, and I implemented some hello-world tests [using startup scripts] in all possible configurations like
layer 7 HTTP
layer 7 HTTPS
layer 4 internal TCP
layer 4 internal SSL
Here is the main problem I'm facing:
I can't find a way to dynamically allocate an SSL certificate to an incoming request at the load balancer.
In my case requests might come from any domain, so the GCP load balancer must have some sort of configuration to provision an SSL certificate for a specific domain [I have read that it can allocate SSL certificates for up to 100 domains, but how could I automate that?]. Or could there be a way that, instead of requests being proxied [the LB generating a new request to the internal servers], requests are simply redirected so that the internal servers can handle the SSL management themselves?
I might be wrong somewhere in my understanding of the concepts. Please help me solve the problem. I want to build a Firebase Hosting clone on my own. Any kind of response is welcomed 🙏🙏🙏
One way to do it would be to update your JS script to generate a Google-managed certificate for each new domain via gcloud:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
    --description=DESCRIPTION \
    --domains=DOMAIN_LIST \
    --global
and then apply it to the load balancer:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --ssl-certificates SSL_CERTIFICATE_LIST \
    --global-ssl-certificates \
    --global
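Note that --ssl-certificates replaces the proxy's entire certificate list, so an automated flow would typically read the current list and append the new certificate. A rough, untested sketch under that assumption (names as above):

# Fetch the proxy's current certificate names and append the new one.
CURRENT=$(gcloud compute target-https-proxies describe TARGET_PROXY_NAME \
    --global --format="value(sslCertificates[].basename())" | tr ';' ',')
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --ssl-certificates="${CURRENT},CERTIFICATE_NAME" \
    --global-ssl-certificates \
    --global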
Please be aware that it may take anywhere from 5 to 20 minutes for the Load Balancer to start using new certificates.
You can find more information here.

How can I setup Jmeter distributed configuration in cloud(AWS)?

I am currently running JMeter on 5 local VMs, of which one acts as master and 4 as slaves. I want to move them to Amazon servers. Can anyone suggest step-by-step configuration methods? I searched the internet and couldn't find documentation with full clarity. Or can anyone share a good documentation link on this?
JMeter version: 3.2
My requirements are:
1 master and 4 slaves.
the master should have a Linux GUI because I need the JMeter GUI to run the test, since we are analyzing real-time running data.
First of all, double-check that you looked for instructions well enough; i.e. there is the JMeter ec2 Script project, which automates the process of installation and configuration of JMeter remote engines.
In general, the process doesn't differ from configuring JMeter in distributed mode locally, Amazon EC2 instances are basically the same machines as local ones and require the same configuration steps. Just make sure to open the following ports:
1099
the port you define as server.rmi.localport
the ports you define as client.rmi.localport
It has to be done both in Linux Firewall and AWS Security Groups
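For illustration, pinning those ports might look like this (the port numbers and private IPs are assumptions; use whatever you open in your firewall and security groups):

# On each slave: start JMeter in server mode with fixed RMI ports.
jmeter -s -Jserver_port=1099 -Jserver.rmi.localport=4000

# On the master: run the plan against the slaves, binding the client RMI port.
jmeter -n -t test_plan.jmx \
    -R 10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14 \
    -Jclient.rmi.localport=4001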
Check out the following material:
Remote Testing
JMeter Distributed Testing Step-by-step
JMeter Distributed Testing with Docker
Load Testing with Jmeter and Amazon EC2

How to disable RC4 cipher in Azure VM Scaleset

I have a VM scale set with this image:
Publisher: MicrosoftWindowsServer
Offer: WindowsServer
SKU: 2016-Datacenter-with-Containers
Version: latest
These machines are running an SSL web endpoint hosted in Service Fabric. The website is built in .NET Core with a WebListener, which probably uses http.sys.
I was wondering why new VM images still support RC4 ciphers and how to disable them. I don't want to do it manually because that won't survive autoscaling.
Similar issue, but then for Worker roles: How to disable RC4 cipher on Azure Web Roles
Treating this as two separate questions:
For the Windows 2016 virtual machine images - typically backwards compatibility is prioritized to avoid breaking existing applications which rely on older protocols. Adding the windows-server-2016 tag in case anyone wants to comment further on that.
For scale sets - Write a custom script extension to apply the same changes you'd have applied manually. This will then apply to every VM, and new VMs that are subsequently created.
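For example, attaching a Custom Script Extension with the Azure CLI might look roughly like this (the resource names and script URL are assumptions; the referenced script would apply the registry changes from the linked answer):

# Attach a Custom Script Extension that runs an RC4-disabling script on each VM.
az vmss extension set \
    --resource-group MY_RG \
    --vmss-name MY_VMSS \
    --name CustomScriptExtension \
    --publisher Microsoft.Compute \
    --settings '{"fileUris": ["https://example.com/disable-rc4.ps1"], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File disable-rc4.ps1"}'

# Push the updated model to all existing instances.
az vmss update-instances --resource-group MY_RG --name MY_VMSS --instance-ids "*"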