How do you debug Google Deployment Manager templates? - google-cloud-platform

I'm looking at this example: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloud_functions
which uses the template below. I added a print statement to it, but how do I see its output?
import base64
import hashlib
from StringIO import StringIO
import zipfile


def GenerateConfig(ctx):
    """Generate YAML resource configuration."""
    in_memory_output_file = StringIO()
    function_name = ctx.env['deployment'] + 'cf'
    zip_file = zipfile.ZipFile(
        in_memory_output_file, mode='w', compression=zipfile.ZIP_DEFLATED)

    ####################################################
    ############ HOW DO I SEE THIS????? ################
    print('heelo wworrld')
    ####################################################
    ####################################################

    for imp in ctx.imports:
        if imp.startswith(ctx.properties['codeLocation']):
            zip_file.writestr(imp[len(ctx.properties['codeLocation']):],
                              ctx.imports[imp])

    zip_file.close()
    content = base64.b64encode(in_memory_output_file.getvalue())
    m = hashlib.md5()
    m.update(content)
    source_archive_url = 'gs://%s/%s' % (ctx.properties['codeBucket'],
                                         m.hexdigest() + '.zip')
    cmd = "echo '%s' | base64 -d > /function/function.zip;" % (content)
    volumes = [{'name': 'function-code', 'path': '/function'}]
    build_step = {
        'name': 'upload-function-code',
        'action': 'gcp-types/cloudbuild-v1:cloudbuild.projects.builds.create',
        'metadata': {
            'runtimePolicy': ['UPDATE_ON_CHANGE']
        },
        'properties': {
            'steps': [{
                'name': 'ubuntu',
                'args': ['bash', '-c', cmd],
                'volumes': volumes,
            }, {
                'name': 'gcr.io/cloud-builders/gsutil',
                'args': ['cp', '/function/function.zip', source_archive_url],
                'volumes': volumes
            }],
            'timeout': '120s'
        }
    }
    cloud_function = {
        'type': 'gcp-types/cloudfunctions-v1:projects.locations.functions',
        'name': function_name,
        'properties': {
            'parent': '/'.join([
                'projects', ctx.env['project'], 'locations',
                ctx.properties['location']
            ]),
            'function': function_name,
            'labels': {
                # Add the hash of the contents to trigger an update if the
                # bucket object changes
                'content-md5': m.hexdigest()
            },
            'sourceArchiveUrl': source_archive_url,
            'environmentVariables': {
                'codeHash': m.hexdigest()
            },
            'entryPoint': ctx.properties['entryPoint'],
            'httpsTrigger': {},
            'timeout': ctx.properties['timeout'],
            'availableMemoryMb': ctx.properties['availableMemoryMb'],
            'runtime': ctx.properties['runtime']
        },
        'metadata': {
            'dependsOn': ['upload-function-code']
        }
    }
    resources = [build_step, cloud_function]

    return {
        'resources': resources,
        'outputs': [{
            'name': 'sourceArchiveUrl',
            'value': source_archive_url
        }, {
            'name': 'name',
            'value': '$(ref.' + function_name + '.name)'
        }]
    }
EDIT: This is in no way a solution to the problem, but I found that if I set a bunch of outputs for the info I'm interested in seeing, it kind of helps. So you could roll your own log of sorts by collecting info into a list in your Python template and then passing it all back as an output. Not great, but it's better than nothing.
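For example, a minimal sketch of that outputs-as-log idea (the debug_log list and the debugLog output name are my own invention, not anything Deployment Manager defines):

# Sketch: accumulate debug messages and surface them as a deployment output.
def GenerateConfig(ctx):
    debug_log = []
    debug_log.append('deployment: %s' % ctx.env['deployment'])
    debug_log.append('imports: %s' % list(ctx.imports))

    resources = []  # build resources as usual...

    return {
        'resources': resources,
        'outputs': [{
            'name': 'debugLog',  # arbitrary name; shows up in the deployment's outputs
            'value': '; '.join(debug_log)
        }]
    }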

Deployment Manager is an infrastructure deployment service that automates the creation and management of Google Cloud Platform (GCP) resources. What you are trying to do is not possible in Deployment Manager: templates are expanded in a managed environment, so anything written to stdout by print() is never surfaced to you.
As of now, the only way to troubleshoot is to rely on the expanded template, visible in the Deployment Manager dashboard or via gcloud deployment-manager manifests describe. There is already a feature request to address your use case here. I advise you to star the feature request to get updates via email and to leave a comment showing the community's interest. All official communication regarding that feature will be posted there.
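In the meantime, because the template is plain Python, you can also exercise GenerateConfig locally with a stubbed context, where print() goes straight to your terminal. A minimal sketch, assuming the template above is saved as cloud_function.py (note it imports StringIO, so run it with the same Python 2 interpreter the template targets); the FakeContext class and all sample values are made up for illustration:

# Hypothetical local harness: run the template outside Deployment Manager.
# FakeContext mimics only the attributes the template reads: env, properties, imports.
import yaml  # PyYAML, just to pretty-print the generated config
from cloud_function import GenerateConfig

class FakeContext(object):
    env = {'deployment': 'test-deployment', 'project': 'my-project'}
    properties = {
        'codeLocation': 'function/code/',
        'codeBucket': 'my-bucket',
        'location': 'us-central1',
        'entryPoint': 'handler',
        'timeout': '60s',
        'availableMemoryMb': 256,
        'runtime': 'python37',
    }
    imports = {'function/code/main.py': 'def handler(request): pass'}

if __name__ == '__main__':
    print(yaml.safe_dump(GenerateConfig(FakeContext())))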

Related

I'm not getting the expected response from client.describe_image_scan_findings() using Boto3

I'm trying to use Boto3 to get the number of vulnerabilities from the images in my repositories. I have a list of repository names and image IDs that get passed into this function. Based on the documentation,
I'm expecting a response like this when I filter for ['imageScanFindings']:
'imageScanFindings': {
    'imageScanCompletedAt': datetime(2015, 1, 1),
    'vulnerabilitySourceUpdatedAt': datetime(2015, 1, 1),
    'findingSeverityCounts': {
        'string': 123
    },
    'findings': [
        {
            'name': 'string',
            'description': 'string',
            'uri': 'string',
            'severity': 'INFORMATIONAL'|'LOW'|'MEDIUM'|'HIGH'|'CRITICAL'|'UNDEFINED',
            'attributes': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ]
        },
    ],
What I really need is the 'findingSeverityCounts' number; however, it's not showing up in my response. Here's my code and the response I get:
main.py
import boto3

repo_names = ['cftest/repo1', 'your-repo-name', 'cftest/repo2']
image_ids = ['1.1.1', 'latest', '2.2.2']

def get_vuln_count(repo_names, image_ids):
    container_inventory = []
    client = boto3.client('ecr')
    for n, i in zip(repo_names, image_ids):
        response = client.describe_image_scan_findings(
            repositoryName=n,
            imageId={'imageTag': i}
        )
        findings = response['imageScanFindings']
        print(findings)
Output
{'findings': []}
The only thing that shows up is findings. I was expecting findingSeverityCounts along with the other fields, but nothing else is showing up.
THEORY
I have 3 repositories and an image in each repository that I uploaded. One of my theories is that I'm not getting the other fields, such as findingSeverityCounts, because my images don't have vulnerabilities. I have Inspector set up to scan on push, but since the images have no vulnerabilities, nothing shows up in the Inspector dashboard. Could that be causing the issue? If so, how would I generate a vulnerability in one of my images to test this?
My theory was correct: when there are no vulnerabilities, the response completely omits certain values, including the 'findingSeverityCounts' value that I needed.
I created a Docker image based on Python 2.7 to generate vulnerabilities in my scan so I could test the script properly. My workaround was this if statement: if there are vulnerabilities, it returns them; if there aren't any, 'findingSeverityCounts' is omitted from the response, so it returns 0 instead of raising a KeyError.
Example Solution:
response = client.describe_image_scan_findings(
    repositoryName=n,
    imageId={'imageTag': i}
)

if 'findingSeverityCounts' in response['imageScanFindings']:
    print(response['imageScanFindings']['findingSeverityCounts'])
else:
    print(0)
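Equivalently, continuing from the same response object, dict.get gives you the default in one line (plain dict behavior, nothing ECR-specific):

# Returns {} when ECR omits the key because the scan found nothing.
severity_counts = response['imageScanFindings'].get('findingSeverityCounts', {})
print(sum(severity_counts.values()))  # 0 when there are no findings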

Facebook Marketing API - Creating an "advideo" in Sandbox Environment

I'm trying to create a video in Sandbox mode, but it throws an error such as:
Params: {'title': 'test1', 'description': 'test'}
Status: 400
Response:
{
    "error": {
        "message": "Unsupported post request. Object with ID 'act_x' does not exist, cannot be loaded due to missing permissions, or does not support this operation. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api",
        "type": "GraphMethodException",
        "code": 100,
        "error_subcode": 33,
        "fbtrace_id": "AL_IO0ED9eQLAYcVGH2Ae94"
    }
}
Here is the code I'm trying to run:
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from pathlib import Path

my_app_id = 'xxxx'
my_app_secret = 'xxxx'
my_access_token = 'xxxx'

FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token, api_version="v14.0")
my_account = AdAccount('act_x')
video_path = Path(__file__).parent / 'video.mp4'

fields = []
params = {
    "title": "test1",
    "description": "test",
    "source": video_path
}

video = my_account.create_ad_video(params=params, fields=fields)
I'm wondering if I'm simply not able to create adimages or advideos in Sandbox mode.
My bad... I had mistakenly removed a character from the act_id, which is why it threw that error.
Make sure you have valid credentials; you can upload media in Sandbox mode.
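One way to fail fast on a typo like that (a hedged sketch; api_get is the facebook_business SDK's generic read call, and the account ID here is still a placeholder):

# Hypothetical sanity check: read the ad account before uploading, so a
# mistyped 'act_<ID>' fails immediately with a clear Graph API error.
# Assumes FacebookAdsApi.init(...) has already run, as in the question.
from facebook_business.adobjects.adaccount import AdAccount

account = AdAccount('act_x')  # placeholder ID
account.api_get(fields=['name', 'account_status'])  # raises if the ID is invalid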

Dataproc: Deployment Manager equivalent of gcloud --optional-components ANACONDA,JUPYTER

I would like to include --optional-components ANACONDA,JUPYTER in my CI/CD deployment using Deployment Manager.
I have tried placing it in the Python template configuration under the metadata section as well as directly in properties, and I checked the schemas and existing templates, but I can't find the proper spot for it, nor anything related in the documentation (maybe I missed something, but I only found the gcloud CLI flag, not the DM equivalent).
My expected result would have --optional-components ANACONDA,JUPYTER somewhere inside:
resources = []

# cluster X
resources.append({
    'name': 'X',
    'type': 'dataproc.py',
    'subnetwork': 'default',
    'properties': {
        'zone': ZONE,
        'region': REGION,
        'serviceAccountEmail': 'X',
        'softwareConfig': {
            'imageVersion': 'X',
            'properties': {
                'dataproc:dataproc.conscrypt.provider.enable': 'False'
            }
        },
        'master': {
            'numInstances': 1,
            'machineType': 'n1-standard-1',
            'diskSizeGb': 50,
            'diskType': 'pd-standard',
            'numLocalSsds': 0
        },
        'worker': {
            'numInstances': 2,
            'machineType': 'n1-standard-1',
            'diskType': 'pd-standard',
            'diskSizeGb': 50,
            'numLocalSsds': 0
        },
        'initializationActions': [{
            'executableFile': 'X'
        }],
        'metadata': {
            'PIP_PACKAGES': 'requests_toolbelt==0.9.1 google-auth==1.6.31'
        },
        'labels': {
            'environment': 'dev',
            'data_type': 'X'
        }
    }
})
Deployment Manager only drives the underlying APIs, and it only supports certain APIs. To set --optional-components ANACONDA,JUPYTER, please follow this link, as you are almost there. For details about the Cloud Dataproc components, refer to the Dataproc API reference.
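Concretely, a hedged sketch of where it would go in the template above (assuming the dataproc.py template passes softwareConfig through to the Dataproc v1 API, whose SoftwareConfig message has an optionalComponents field):

# Sketch: the Dataproc v1 equivalent of gcloud's --optional-components is the
# 'optionalComponents' list on softwareConfig, next to 'imageVersion'.
software_config = {
    'imageVersion': 'X',  # placeholder, as in the question
    'optionalComponents': ['ANACONDA', 'JUPYTER'],
    'properties': {
        'dataproc:dataproc.conscrypt.provider.enable': 'False'
    }
}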

Flask Configuration for multiple instances. Best practices?

In Flask, the configuration module is pretty straightforward, and there are ample best practices on the topic online.
Suppose I had to develop an application that supports multiple instances; for example, say there is a database for every city the application supports, and every city DB is an independent MongoDB instance hosted on a different physical machine.
A sample API code to support my example:
from flask import Flask, request
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class CityPopulation(Resource):
    def get(self, city_name):
        '''
        CODE TO GET CITY BASED DB Config
        Currently in JSON format
        '''
        total_population = helper_city(city_name)
        return {'population': total_population}

api.add_resource(CityPopulation, '/<string:city_name>/population')

if __name__ == '__main__':
    app.run(debug=True)
Currently, what I've thought about is a JSON file with a section for the DBs, as below:
{
    "db": [{
        "bengaluru": {
            "host": "bengaluru.host.db",
            "port": 27017,
            "user_name": "some_user",
            "password": "royalchallengers"
        },
        "hyderabad": {
            "host": "hyderabad.host.db",
            "port": 27017,
            "user_name": "some_user",
            "password": "sunrisers"
        }
    }]
}
and the class to read the configuration from JSON as:
import json

class project_config:
    def __init__(self):
        with open(config_full_path, 'r') as myfile:
            configuration_raw = json.load(myfile)
In Flask, the suggested best practice for the config module is something like:
class BaseConfig(object):
    DEBUG = False
    TESTING = False

class DevelopmentConfig(BaseConfig):
    DEBUG = True
    TESTING = True

class TestingConfig(BaseConfig):
    DEBUG = False
    TESTING = True
Is there a way, in terms of best practices, to fold my scenario into the Flask configuration instead of maintaining a separate project configuration?
The object-based config in the Flask docs is described as "an interesting pattern", but it's just one approach, not necessarily a best practice.
You can update the contents of app.config in any way that makes sense for your use case. You could fetch values at run time from a service like etcd, ZooKeeper, or Consul, set them all via environment variables (a useful pattern with containerized apps), or load them from a config file like this:
import os
import json
from flask import Flask
from ConfigParser import ConfigParser

app = Flask(__name__)

def load_config():
    config = ConfigParser()
    config.read(os.environ.get('MY_APP_CONFIG_FILE'))
    for k, v in config.items('my_app'):
        app.config[k] = v

@app.route('/')
def get_config():
    return json.dumps(dict(app.config), default=str)

load_config()
And then run it like:
$ cat test.ini
[my_app]
thing = stuff
other_thing = junk
$ MY_APP_CONFIG_FILE=test.ini FLASK_APP=test.py flask run
$ curl -s localhost:5000 | jq '.'
{
    "JSON_AS_ASCII": true,
    "USE_X_SENDFILE": false,
    "SESSION_COOKIE_SECURE": false,
    "SESSION_COOKIE_PATH": null,
    "SESSION_COOKIE_DOMAIN": null,
    "SESSION_COOKIE_NAME": "session",
    "LOGGER_HANDLER_POLICY": "always",
    "LOGGER_NAME": "test",
    "DEBUG": false,
    "SECRET_KEY": null,
    "EXPLAIN_TEMPLATE_LOADING": false,
    "MAX_CONTENT_LENGTH": null,
    "APPLICATION_ROOT": null,
    "SERVER_NAME": null,
    "PREFERRED_URL_SCHEME": "http",
    "JSONIFY_PRETTYPRINT_REGULAR": true,
    "TESTING": false,
    "PERMANENT_SESSION_LIFETIME": "31 days, 0:00:00",
    "PROPAGATE_EXCEPTIONS": null,
    "TEMPLATES_AUTO_RELOAD": null,
    "TRAP_BAD_REQUEST_ERRORS": false,
    "thing": "stuff", <---
    "JSON_SORT_KEYS": true,
    "JSONIFY_MIMETYPE": "application/json",
    "SESSION_COOKIE_HTTPONLY": true,
    "SEND_FILE_MAX_AGE_DEFAULT": "12:00:00",
    "PRESERVE_CONTEXT_ON_EXCEPTION": null,
    "other_thing": "junk", <---
    "SESSION_REFRESH_EACH_REQUEST": true,
    "TRAP_HTTP_EXCEPTIONS": false
}
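Applied to the per-city MongoDB scenario from the question, the same idea could look like the sketch below; the CITY_DB_CONFIG environment variable and the CITY_DATABASES config key are names I made up for illustration:

import json
import os

from flask import Flask

app = Flask(__name__)

def load_city_config():
    # Load the per-city DB map from the JSON file shown in the question
    # into a single app.config entry.
    path = os.environ.get('CITY_DB_CONFIG', 'city_db.json')
    with open(path) as f:
        app.config['CITY_DATABASES'] = json.load(f)['db'][0]

load_city_config()

# Then, inside CityPopulation.get:
#     db_settings = app.config['CITY_DATABASES'][city_name]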

djangosaml2: cannot serialize IdpUnspecified('No IdP to send to given the premises',) (type IdpUnspecified)

I am trying to get djangosaml2 working. I have configured the settings as best as I can against https://openidp.feide.no/, but I get the following error when I navigate to /saml2/login/:
cannot serialize IdpUnspecified('No IdP to send to given the premises',) (type IdpUnspecified)
This is what I have in settings
LOGIN_URL = '/saml2/login/'
SESSION_EXPIRE_AT_BROWSER_CLOSE = True

from os import path
import saml2

BASEDIR = path.dirname(path.abspath(__file__))

SAML_CONFIG = {
    # full path to the xmlsec1 binary program
    'xmlsec_binary': '/usr/bin/xmlsec1',

    # your entity id, usually your subdomain plus the url to the metadata view
    'entityid': 'http://localhost:8000/saml2/metadata/',

    # directory with attribute mapping
    'attribute_map_dir': path.join(BASEDIR, 'attributemaps'),

    # this block states what services we provide
    'service': {
        # we are just a lonely SP
        'sp': {
            'name': 'Just a saml test SP',
            'endpoints': {
                # url and binding to the assertion consumer service view
                # do not change the binding or service name
                'assertion_consumer_service': [
                    ('http://localhost:8000/saml2/acs/',
                     saml2.BINDING_HTTP_POST),
                ],
                # url and binding to the single logout service view
                # do not change the binding or service name
                'single_logout_service': [
                    ('http://localhost:8000/saml2/ls/',
                     saml2.BINDING_HTTP_REDIRECT),
                ],
            },

            # attributes that this project needs to identify a user
            'required_attributes': ['uid'],

            # attributes that may be useful to have but not required
            'optional_attributes': ['eduPersonAffiliation'],

            # in this section the list of IdPs we talk to are defined
            'idp': {
                # we do not need a WAYF service since there is
                # only an IdP defined here. This IdP should be
                # present in our metadata
                # the keys of this dictionary are entity ids
                'https://openidp.feide.no/simplesaml/saml2/idp/metadata.php': {
                    'single_sign_on_service': {
                        saml2.BINDING_HTTP_REDIRECT: 'https://openidp.feide.no/simplesaml/saml2/idp/SSOService.php',
                    },
                    'single_logout_service': {
                        saml2.BINDING_HTTP_REDIRECT: 'https://openidp.feide.no/simplesaml/saml2/idp/SingleLogoutService.php',
                    },
                },
            },
        },
    },

    # where the remote metadata is stored
    'metadata': {
        'local': [path.join(BASEDIR, 'remote_metadata.xml')],
    },

    # set to 1 to output debugging information
    'debug': 1,

    # certificate
    'key_file': path.join(BASEDIR, 'mycert.key'),   # private part
    'cert_file': path.join(BASEDIR, 'mycert.pem'),  # public part

    # own metadata settings
    'contact_person': [
        {'given_name': 'James',
         'sur_name': 'Lin',
         'company': 'Company',
         'email_address': 'james@james.com',
         'contact_type': 'technical'},
    ],
    # you can set multilanguage information here
    'organization': {
        'name': [('Company', 'en')],
        'display_name': [('Company', 'en')],
        'url': [('http://www.company.com', 'en')],
    },
    'valid_for': 24,  # how long is our metadata valid
}
OKAY!
I got the old instructions from https://pypi.python.org/pypi/djangosaml2/0.1.0, but when I installed via pip it installed the latest version; the current instructions are at https://bitbucket.org/lgs/djangosaml2.
After digging through the code, I finally found that the idp key should have been 'idpsso'; see below:
'idpsso': {
    # we do not need a WAYF service since there is
    # only an IdP defined here. This IdP should be
    # present in our metadata
    # the keys of this dictionary are entity ids
    'https://openidp.feide.no/simplesaml/saml2/idp/metadata.php': {
        'single_sign_on_service': {
            saml2.BINDING_HTTP_REDIRECT: 'https://openidp.feide.no/simplesaml/saml2/idp/SSOService.php',
        },
        'single_logout_service': {
            saml2.BINDING_HTTP_REDIRECT: 'https://openidp.feide.no/simplesaml/saml2/idp/SingleLogoutService.php',
        },
    },
},