Which python library should i use to control or launch or delete an instance on my Google cloud platform from my private PC ?
I think the simplest way is to use the Google Compute Engine Python API Client Library. You can see a sample with examples here.
You can see the complete list of functions regarding instances in REST Resource: instances
As you can see, you might do:
import googleapiclient.discovery

# Build the Compute Engine client using Application Default Credentials
compute = googleapiclient.discovery.build('compute', 'v1')

# project, zone and instance_id are placeholders for your own values
listInstance = compute.instances().list(project=project, zone=zone).execute()
stopInstance = compute.instances().stop(project=project, zone=zone, instance=instance_id).execute()
startInstance = compute.instances().start(project=project, zone=zone, instance=instance_id).execute()
deleteInstance = compute.instances().delete(project=project, zone=zone, instance=instance_id).execute()
Don't confuse the parameter name "instance" with the name the reference docs use for the path parameter, "resourceId". The examples on the right side or at the bottom of the page show the real parameter names.
You could also call the REST API directly from Python (see the example) if you prefer to use POST/PUT methods.
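For instance, here is a minimal sketch of hitting the instances.start endpoint with the requests library and Application Default Credentials (the project, zone and instance values are placeholders):
import google.auth
import google.auth.transport.requests
import requests

# Get an access token from the Application Default Credentials
credentials, _ = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())

project, zone, instance = 'my-project', 'us-central1-a', 'my-instance'  # placeholders
url = (f'https://compute.googleapis.com/compute/v1/projects/{project}'
       f'/zones/{zone}/instances/{instance}/start')

# instances.start takes no request body, so an empty POST is enough
response = requests.post(url, headers={'Authorization': f'Bearer {credentials.token}'})
print(response.json())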
You might also want to use OAuth. As you can see in the examples in the links provided, it would be something like:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
# Project ID for this request.
project = 'my-project' # TODO: Update placeholder value.
# The name of the zone for this request.
zone = 'my-zone' # TODO: Update placeholder value.
# Name of the instance resource to start.
instance = 'my-instance' # TODO: Update placeholder value.
request = service.instances().start(project=project, zone=zone, instance=instance)
response = request.execute()
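Note that oauth2client is deprecated in favor of google-auth, so a rough equivalent of the credentials setup above would be:
import google.auth
from googleapiclient import discovery

# Application Default Credentials, same as GoogleCredentials.get_application_default()
credentials, project = google.auth.default()
service = discovery.build('compute', 'v1', credentials=credentials)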
You may also want to check out libcloud.
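As a rough, untested sketch of what that looks like with libcloud's GCE driver (check the libcloud docs for the exact constructor arguments; the service account, key file, project and instance names below are placeholders):
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ComputeEngine = get_driver(Provider.GCE)
# service account email, path to its key file, project and default zone are placeholders
driver = ComputeEngine('sa-name@my-project.iam.gserviceaccount.com',
                       '/path/to/key.json',
                       project='my-project',
                       datacenter='us-central1-a')

node = driver.ex_get_node('my-instance')   # look up an existing instance
driver.ex_stop_node(node)                  # stop it
driver.ex_start_node(node)                 # start it again
driver.destroy_node(node)                  # delete it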
This is my first question ever on StackOverflow.
I am trying to write a Cloud Function on GCP to log in to Vault via hvac.
https://hvac.readthedocs.io/en/stable/usage/auth_methods/gcp.html#login
The docs there say to pass a path to a service account JSON key file, but I am writing this in a Cloud Function.
Does anyone have an example of how to do this properly? The default Cloud Identity service account associated with the function already has permission on the Vault address.
Thanks
In Cloud Functions you don't need the path to the Service Account key because the Cloud Identity SA is already loaded as the Application Default Credentials (ADC).
The code from the link you shared is fine for environments where you haven't configured ADC, or where you simply prefer to use another account.
For Functions, the code can be simpler:
import time
import json

import googleapiclient.discovery
import google.auth
import hvac

# ADC: in a Cloud Function this resolves to the function's service account
credentials, project = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])

# Build the JWT payload Vault expects for the GCP auth method
now = int(time.time())
expires = now + 900
payload = {
    'iat': now,
    'exp': expires,
    'sub': credentials.service_account_email,
    'aud': 'vault/my-role'
}
body = {'payload': json.dumps(payload)}

# Have the IAM API sign the JWT with the function's own service account
name = f'projects/{project}/serviceAccounts/{credentials.service_account_email}'
iam = googleapiclient.discovery.build('iam', 'v1', credentials=credentials)
request = iam.projects().serviceAccounts().signJwt(name=name, body=body)
resp = request.execute()
jwt = resp['signedJwt']

# Create the hvac client; the URL is a placeholder for your Vault address
client = hvac.Client(url='https://vault.example.com:8200')
client.auth.gcp.login(
    role='my-role',
    jwt=jwt,
)
I have read extensively on how to access the GCP Gmail API using a service account, and I have given it domain-wide authority following the instructions here:
https://support.google.com/a/answer/162106
Here is my service account:
Here are the scopes added to the domain-wide authority. You can see that the ID matches the service account.
One thing I notice is that my GCP project is an internal project; I haven't published it or anything, yet when I added the scope it shows the project name rather than the service account email. Does that make any difference? Do I need to set anything there? The OAuth consent screen is where the project name is defined, and I have added all the same scopes on that screen too; I am not sure whether that makes any difference.
Here is my code:
from google.oauth2 import service_account
from googleapiclient import discovery
credentials_file = get_credentials('gmail.json')
scopes = ['https://www.googleapis.com/auth/gmail.readonly', 'https://www.googleapis.com/auth/gmail.labels', 'https://www.googleapis.com/auth/gmail.modify']
credentials = service_account.Credentials.from_service_account_info(credentials_file, scopes=scopes)
delegated_credentials = credentials.with_subject("abc#mydomain.com")
GMAIL_SERVICE = discovery.build('gmail', 'v1', credentials=delegated_credentials)
labels = GMAIL_SERVICE.users().labels().list(userId='me').execute()
Error message:
google.auth.exceptions.RefreshError: ('unauthorized_client: Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.', {'error': 'unauthorized_client', 'error_description': 'Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.'})
I am not sure I can answer the original question precisely (I think not), but here is how things are done in the cloud functions I have developed. The following code snippet was written/adapted for this answer and has not been tested:
import os
import google.auth
import google.auth.iam
from google.oauth2 import service_account
from google.auth.exceptions import MutualTLSChannelError
from google.auth.transport import requests
import googleapiclient.discovery
from google.cloud import error_reporting
GMAIL_SERV_ACCOUNT = "A service account which makes the API CALL"
OAUTH_TOKEN_URI = "https://accounts.google.com/o/oauth2/token"
GMAIL_SCOPES_LIST = ["https://mail.google.com/"] # for example
GMAIL_USER = "User's email address, who's email we would like to access. abc#mydomain.com - from your question"
# inside the cloud function code:
local_credentials, project_id = google.auth.default()
local_credentials.refresh(requests.Request())
signer = google.auth.iam.Signer(requests.Request(), local_credentials, GMAIL_SERV_ACCOUNT)
delegate_credentials = service_account.Credentials(
    signer, GMAIL_SERV_ACCOUNT, OAUTH_TOKEN_URI, scopes=GMAIL_SCOPES_LIST, subject=GMAIL_USER)
delegate_credentials.refresh(requests.Request())

try:
    email_api_service = googleapiclient.discovery.build(
        'gmail', 'v1', credentials=delegate_credentials, cache_discovery=False)
except MutualTLSChannelError as err:
    # handle it somehow, for example (stupid, artificial)
    ER = error_reporting.Client(service="my-app", version=os.getenv("K_REVISION", "0"))
    ER.report_exception()
    return 0
So the idea is to use my (or 'local') cloud function's service account to create credentials for a dedicated service account (GMAIL_SERV_ACCOUNT, which is used in many different cloud functions running under many different 'local' service accounts), and then use that 'delegate' service account to get access to the API service.
I don't remember whether the GMAIL_SERV_ACCOUNT needs any specific IAM roles, but I think the 'local' cloud function's service account should be granted roles/iam.serviceAccountTokenCreator on it.
Updated:
Some clarification on the IAM role. In Terraform (I use it for my CI/CD), for a given functional component it looks like this:
# this service account is 'external' to the given functional component;
# it is managed in another repository and terraform state file,
# so we have to look it up first
data "google_service_account" "gmail_srv_account" {
  project    = "some project id"
  account_id = "actual GMAIL_SERV_ACCOUNT account"
}

# now we grant the IAM role on it, where 'google_service_account.local_cf_sa'
# is the service account under which the given cloud function is running
resource "google_service_account_iam_member" "iam_token_creator_gmail_sa" {
  service_account_id = data.google_service_account.gmail_srv_account.name
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "serviceAccount:${google_service_account.local_cf_sa.email}"

  depends_on = [
    google_service_account.local_cf_sa,
  ]
}
I am trying to configure Superset with multiple LDAP servers, but at the moment I have only been able to set it up for one server.
Is there any workaround that can be done in config.py to configure multiple servers at the same time?
I have given the following configuration in the ‘config.py’ file.
config.py - LDAP configs
AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Alpha"
AUTH_LDAP_SERVER = "ldap://ldap_example_server_one:389"
AUTH_LDAP_USE_TLS = False
AUTH_LDAP_BIND_USER = "CN=my_user,OU=my_users,DC=my,DC=domain"
AUTH_LDAP_BIND_PASSWORD = "mypassword"
AUTH_LDAP_SEARCH = "DC=my,DC=domain"
AUTH_LDAP_UID_FIELD = "sAMAccountName"
Note: it worked for the 'ldap_example_server_one:389' server, but when I tried to add another server it threw a configuration failure error.
You can't use multiple LDAP servers with the default LDAP authenticator from Flask-AppBuilder. You have to implement your own custom security manager, which will be able to work with as many LDAP servers as you want.
First, you should create a new file, e.g. my_security_manager.py, and put these lines into it:
from superset.security import SupersetSecurityManager

class MySecurityManager(SupersetSecurityManager):
    def __init__(self, appbuilder):
        super(MySecurityManager, self).__init__(appbuilder)
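On its own this subclass behaves exactly like the default manager. To actually try several LDAP servers you would also override the LDAP login method; here is a rough, untested sketch which assumes Flask-AppBuilder's auth_user_ldap() re-reads AUTH_LDAP_SERVER from the app config on every call (the server URLs are placeholders):
from superset.security import SupersetSecurityManager

class MySecurityManager(SupersetSecurityManager):
    # placeholder server list; put your real LDAP servers here
    LDAP_SERVERS = [
        "ldap://ldap_example_server_one:389",
        "ldap://ldap_example_server_two:389",
    ]

    def __init__(self, appbuilder):
        super(MySecurityManager, self).__init__(appbuilder)

    def auth_user_ldap(self, username, password):
        # try each server in turn and return the first successful login
        for server in self.LDAP_SERVERS:
            self.appbuilder.get_app.config["AUTH_LDAP_SERVER"] = server
            user = super(MySecurityManager, self).auth_user_ldap(username, password)
            if user is not None:
                return user
        return None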
Secondly, you should let Superset know that you want to use your brand new security manager. To do so, add these lines to your Superset configuration file (superset_config.py):
from my_security_manager import MySecurityManager
CUSTOM_SECURITY_MANAGER = MySecurityManager
Here is additional information on the topic.
I have created a Google Cloud Function that can be invoked through HTTP. Access to the function is limited to a service account only.
How would a Django view invoke this function and get back a response?
Here is what I have tried
1) Before starting Django I set the environment variable
export GOOGLE_APPLICATION_CREDENTIALS
2) I tried invoking the function using standalone code, but soon realised this was going nowhere because I could not figure out the next step after this:
from google.oauth2 import service_account
from apiclient.http import call
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'credentials/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES)
Google's documentation does cover the API, but there is no sample code showing how to invoke the methods, what to import in your Python code, or the ways to call those methods.
How do you send a POST request with JSON data to a Cloud Function, with authorization through a service account?
Edit:
A couple of hours later I did some more digging and partially figured this out.
from google.oauth2 import service_account
import googleapiclient.discovery
import json
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'credentials/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES)
cloudfunction = googleapiclient.discovery.build('cloudfunctions', 'v1', credentials=credentials)
#projects/{project_id}/locations/{location_id}/functions/{function_id}.
path='some project path'
data='some data in json that works when invoked through the console'
data=json.dumps(data)
a=cloudfunction.projects().locations().functions().call(name=path, body=data).execute()
I get another error now.
Details: "[{'#type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'description': 'Invalid JSON payload received. Unknown name "": Root element must be a message.'}]}]">
I can't find any documentation on this. How should the JSON be formed?
Making the JSON like {"message": {my actual payload}} doesn't work.
The requested documentation can be found here.
The request body argument should be an object with the following form:
{ # Request for the `CallFunction` method.
"data": "A String", # Input to be passed to the function.
}
The following modification on your code should work correctly:
from google.oauth2 import service_account
import googleapiclient.discovery
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'credentials/credentials.json'
credentials = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES)
cloudfunction = googleapiclient.discovery.build('cloudfunctions', 'v1', credentials=credentials)
path ="projects/your-project-name/locations/cloud-function-location/functions/name-of-cloud-function"
data = {"data": "A String"}
a=cloudfunction.projects().locations().functions().call(name=path, body=data).execute()
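Regarding the JSON payload from your edit: the "data" field is a plain string that gets passed to the function, so as far as I can tell you would serialize your JSON into it rather than sending the object itself, for example:
import json

# building on the snippet above; the payload itself is just an example
payload = {"message": "my actual payload"}
body = {"data": json.dumps(payload)}
a = cloudfunction.projects().locations().functions().call(name=path, body=body).execute()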
Note that only very limited traffic is possible this way, since there are quota limits on these API calls.
I am testing out deploying my Django application into AWS's Fargate Service.
Everything seems to run, but I am getting health check errors because the Application Load Balancer sends requests to my Django application using the local IP of the host. This gives me an ALLOWED_HOSTS error in the logs:
Invalid HTTP_HOST header: '172.31.86.159:8000'. You may need to add '172.31.86.159' to ALLOWED_HOSTS
I have tried getting the local IP at task startup time and appending it to my ALLOWED_HOSTS, but this fails under Fargate:
import requests

EC2_PRIVATE_IP = None
try:
    EC2_PRIVATE_IP = requests.get('http://169.254.169.254/latest/meta-data/local-ipv4', timeout=0.01).text
except requests.exceptions.RequestException:
    pass

if EC2_PRIVATE_IP:
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Is there a way to get the ENI IP Address so I can append it to ALLOWED_HOSTS?
Now this works, and it lines up with the documentation, but I don't know if it's the BEST way or if there is a BETTER WAY.
My containers are running under the awsvpc network mode.
https://aws.amazon.com/blogs/compute/under-the-hood-task-networking-for-amazon-ecs/
...the ECS agent creates an additional "pause" container for each task before starting the containers in the task definition. It then sets up the network namespace of the pause container by executing the previously mentioned CNI plugins. It also starts the rest of the containers in the task so that they share their network stack of the pause container. (emphasis mine)
I assume the phrase "so that they share their network stack of the pause container" means we really just need the IPv4 address of the pause container. In my non-exhaustive testing it appears this is always Containers[0] in the ECS metadata: http://169.254.170.2/v2/metadata
With those assumptions in play, this does work, though I don't know how wise it is to do:
import os

import requests

EC2_PRIVATE_IP = None
METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI', 'http://169.254.170.2/v2/metadata')
try:
    resp = requests.get(METADATA_URI)
    data = resp.json()
    # print(data)
    container_meta = data['Containers'][0]
    EC2_PRIVATE_IP = container_meta['Networks'][0]['IPv4Addresses'][0]
except:
    # silently fail as we may not be in an ECS environment
    pass

if EC2_PRIVATE_IP:
    # Be sure your ALLOWED_HOSTS is a list NOT a tuple
    # or .append() will fail
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Of course, if we pass in the container name that we must set in the ECS task definition, we could do this too:
import os

import requests

EC2_PRIVATE_IP = None
METADATA_URI = os.environ.get('ECS_CONTAINER_METADATA_URI', 'http://169.254.170.2/v2/metadata')
try:
    resp = requests.get(METADATA_URI)
    data = resp.json()
    # print(data)
    container_name = os.environ.get('DOCKER_CONTAINER_NAME', None)
    search_results = [x for x in data['Containers'] if x['Name'] == container_name]
    if len(search_results) > 0:
        container_meta = search_results[0]
    else:
        # Fall back to the pause container
        container_meta = data['Containers'][0]
    EC2_PRIVATE_IP = container_meta['Networks'][0]['IPv4Addresses'][0]
except:
    # silently fail as we may not be in an ECS environment
    pass

if EC2_PRIVATE_IP:
    # Be sure your ALLOWED_HOSTS is a list NOT a tuple
    # or .append() will fail
    ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
Either of these snippets of code would then go in the production settings for Django.
Is there a better way to do this that I am missing? Again, this is to allow the Application Load Balancer health checks to pass. When using ECS (Fargate), the ALB sends the Host header as the local IP of the container.
In Fargate, there is an environment variable injected by the AWS container agent: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
  "DockerId": "redact",
  "Name": "redact",
  "DockerName": "ecs-redact",
  "Image": "redact",
  "ImageID": "redact",
  "Labels": { },
  "DesiredStatus": "RUNNING",
  "KnownStatus": "RUNNING",
  "Limits": { },
  "CreatedAt": "2019-04-16T22:39:57.040286277Z",
  "StartedAt": "2019-04-16T22:39:57.29386087Z",
  "Type": "NORMAL",
  "Networks": [
    {
      "NetworkMode": "awsvpc",
      "IPv4Addresses": [
        "172.30.1.115"
      ]
    }
  ]
}
Under the key Networks you'll find IPv4Addresses.
Putting this into Python, you get:
import os

import requests

METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
An alternative solution to this is to create a middleware that bypasses the ALLOWED_HOSTS check just for your healthcheck endpoint, e.g.:
from django.http import HttpResponse
from django.utils.deprecation import MiddlewareMixin

class HealthEndpointMiddleware(MiddlewareMixin):
    def process_request(self, request):
        if request.META["PATH_INFO"] == "/health/":
            return HttpResponse("OK")
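For that to take effect, the middleware has to be registered in your settings, early enough that the health request is answered before anything calls request.get_host(); the module path below is just a placeholder for wherever you put the class:
# settings.py -- 'myapp.middleware' is a placeholder module path
MIDDLEWARE = [
    "myapp.middleware.HealthEndpointMiddleware",  # answer /health/ before the host check runs
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ... the rest of your middleware ...
]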
I solved this issue by doing this:
First, I installed this middleware, which can handle CIDR masks on top of ALLOWED_HOSTS: https://github.com/mozmeao/django-allow-cidr
With this middleware I can use an env var like this:
ALLOWED_CIDR_NETS = ['192.168.1.0/24']
So you need to find out which subnets you configured on your ECS service definition; for me it was 10.3.112.0/24 and 10.3.111.0/24.
You add that to your ALLOWED_CIDR_NETS and you're good to go.
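If I remember the package's README correctly, the setup looks roughly like this (double-check the middleware path against the django-allow-cidr docs):
# settings.py -- based on the django-allow-cidr README
MIDDLEWARE = [
    "allow_cidr.middleware.AllowCIDRMiddleware",  # should come near the top
    # ... the rest of your middleware ...
]

# the subnets configured on the ECS service definition
ALLOWED_CIDR_NETS = ["10.3.112.0/24", "10.3.111.0/24"]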