How to get region name in custom heat resource? - openstack-heat

I have an OpenStack deployment with 2 regions. Keystone and Horizon are common for both regions. And each region has its own Nova, Heat, Neutron, Glance etc.
I am writing a custom heat resource which should behave differently depending on the region in which the stack is created. Therefore, I am looking for a way to get the region name in the custom heat resource.
I tried experimenting with different keystoneclient methods, for example:
import keystoneclient.auth.identity.v3
import keystoneclient.session
import keystoneclient.v3.client
import keystoneclient.v3.endpoints

# Re-use the stack's existing token without re-authenticating.
auth = keystoneclient.auth.identity.v3.Token(
    auth_url=auth_url,
    token=token,
    reauthenticate=False)
session = keystoneclient.session.Session(auth=auth)
keystone_client = keystoneclient.v3.client.Client(session=session)

# List all endpoints registered in Keystone.
endpoint_manager = keystoneclient.v3.endpoints.EndpointManager(keystone_client)
endpoint_list = endpoint_manager.list()
This code returns an error:
2018-12-06 14:27:11.885 1305729 TRACE heat.engine.resource File "/usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py", line 216, in url_for
2018-12-06 14:27:11.885 1305729 TRACE heat.engine.resource raise exceptions.EmptyCatalog(_('The service catalog is empty.'))
2018-12-06 14:27:11.885 1305729 TRACE heat.engine.resource EmptyCatalog: The service catalog is empty.
Is there a "standard" way of getting the region name of your "own" region?

Related

How to update disk in GCP using terraform?

Is it possible to create a terraform module that updates a specific resource which is created by another module?
Currently, I have two modules...
linux-system: which creates a Linux VM with boot disks
disk-updater: which I'm planning to use to update the disks created by the first module
The reason behind this is that I want to create a pipeline that performs disk operation tasks, like disk resizing, via Terraform.
data "google_compute_disk" "boot_disk" {
name = "linux-boot-disk"
zone = "europe-west2-b"
}
resource "google_compute_disk" "boot_disk" {
name = data.google_compute_disk.boot_disk.name
zone = data.google_compute_disk.boot_disk.zone
size = 25
}
I tried to use a data block to retrieve the existing disk details and pass them to a resource block, hoping to update the same disk, but it seems Terraform just tries to create a new disk with the same name, which is why I'm getting this error.
Error creating Disk: googleapi: Error 409: The resource ... already exists, alreadyExists
I think I'm doing it wrong. Can someone give me advice on how to proceed without using the first module I built? By the way, I'm a newbie when it comes to Terraform.
updates a specific resource which is created by another module?
No. You have to update the resource using its original definition.
The only way to update it from another module is to import it into that other module, which is bad design: you would then have two definitions for the same resource, resulting in out-of-sync state files.

AWS CDK - Exclude stage name from logical ID of resource

I have a CDK project where initially it was deployed via CLI. I am now wrapping it in a pipelines construct.
Old:
Project
|
Stacks
|
Resources
New:
Project
|
Pipeline
|
Stage
|
Stacks
|
Resources
The issue I'm running into is that there are resources in the application I would rather not have deleted; however, adding the stage causes the logical IDs to change from Stack-Resource to Stage-Stack-Resource. I found this article that claims you can provide an id of 'Default' to a construct and have it omitted when the logical ID is built, but for some reason, when I pass an id of 'Default' to the stage, it simply uses the literal "Default" value instead of omitting it.
The end goal is to keep my existing CloudFormation resources but have them deployed via this pipeline.
You can override the logical id manually like this:
S3 example:
const cfnBucket = s3Bucket.node.defaultChild as aws_s3.CfnBucket;
cfnBucket.overrideLogicalId('CUSTOMLOGICALID');
However, if you did not specify a logical id initially and do it now, CloudFormation will delete the original resource and create a new one with the new custom logical id because CloudFormation identifies resources by their logical ID.
Stage is something you define and it is not related to CloudFormation. You are probably using it in your Stack name or in your Resource names and that's why it gets included in the logical id.
Based on your project description, the only option to not have any resources deleted is: make one of the pipeline stages use the exact same stack name and resource names (without stage) as the CLI deployed version.
I ended up doing a full redeploy of the application. Luckily, this was a development environment where trashing our data stores isn't a huge loss, but it would be much more of a concern in a production environment.

How can I get an invoking lambda to run a cloud custodian policy in multiple different accounts on one run?

I have multiple c7n-org policies to be run in all regions across a list of accounts. Locally I can do this easily with c7n-org run -c accounts.yml -s out --region all -u cost-control.yml.
The goal is to have an AWS Lambda function running this daily on all accounts, like this: currently I have a child lambda function for each policy in cost-control.yml and an invoker lambda function that loops through each child function and calls it, passing it the appropriate role ARN to assume and the region each time. Because I am calling the child functions for all accounts and all regions, the child functions are called over and over with different parameters to parse.
To get the region to change each time, I needed to remove an if statement in the SDK's handler.py (line 144) that caches the config file, so that it reads the new config with the parameters on subsequent invocations.
# one time initialization for cold starts.
global policy_config, policy_data
if policy_config is None:
    with open(file) as f:
        policy_data = json.load(f)
    policy_config = init_config(policy_data)
    load_resources(StructureParser().get_resource_types(policy_data))
I removed the "if policy_config is None:" line and modified the filename to a new config file that I wrote to tmp within the custodian_policy.py lambda code which is the config with the parameters for this invocation.
In the log streams for each invocation of the child lambdas the accounts are not assumed properly. The regions are changing properly and cloud custodian is calling the policy on the different regions but it is keeping the initial account from the first invocation. Each log stream shows the lambda assuming the role of the first called parameters from the invoker and then not changing the role in the next calls though it is receiving the correct parameters.
I've tried changing the cloud custodian SDK code in handler.py init_config() to try to force it to change the account_id each time. I know I shouldn't be changing the SDK code though and there is probably a way to do this properly using the policies.
I've thought about trying the fargate route which would be more like running it locally but I'm not sure if I would come across this issue there too.
Could anyone give me some pointers on how to get cloud custodian to assume roles on many different lambda invocations?
I found the answer in the local_session function in utils.py of the c7n SDK. It caches the session info for up to 45 minutes, which is why the old account info was being reused on each lambda invocation within a log stream.
By commenting out lines 324 and 325, I forced c7n to create a new session each time with the passed-in account parameter. The new function should look like this:
def local_session(factory, region=None):
    """Cache a session thread local for up to 45m"""
    factory_region = getattr(factory, 'region', 'global')
    if region:
        factory_region = region
    s = getattr(CONN_CACHE, factory_region, {}).get('session')
    t = getattr(CONN_CACHE, factory_region, {}).get('time')
    n = time.time()
    # if s is not None and t + (60 * 45) > n:
    #     return s
    s = factory()
    setattr(CONN_CACHE, factory_region, {'session': s, 'time': n})
    return s

Is there a way to pass credentials programmatically for using google documentAI without reading from a disk?

I am trying to run the demo code for PDF parsing with GCP Document AI. Exporting the Google credentials on the command line works fine for running the code. The problem comes when the code needs to run entirely in memory, so no credential file can be read from disk. Is there a way to pass the credentials to the Document AI parsing function?
Google's sample code:
def main(project_id='YOUR_PROJECT_ID',
         input_uri='gs://cloud-samples-data/documentai/invoice.pdf'):
    """Process a single document with the Document AI API, including
    text extraction and entity extraction."""
    client = documentai.DocumentUnderstandingServiceClient()

    gcs_source = documentai.types.GcsSource(uri=input_uri)

    # mime_type can be application/pdf, image/tiff,
    # and image/gif, or application/json
    input_config = documentai.types.InputConfig(
        gcs_source=gcs_source, mime_type='application/pdf')

    # Location can be 'us' or 'eu'
    parent = 'projects/{}/locations/us'.format(project_id)
    request = documentai.types.ProcessDocumentRequest(
        parent=parent,
        input_config=input_config)

    document = client.process_document(request=request)

    # All text extracted from the document
    print('Document Text: {}'.format(document.text))

    def _get_text(el):
        """Convert text offset indexes into text snippets."""
        response = ''
        # If a text segment spans several lines, it will
        # be stored in different text segments.
        for segment in el.text_anchor.text_segments:
            start_index = segment.start_index
            end_index = segment.end_index
            response += document.text[start_index:end_index]
        return response

    for entity in document.entities:
        print('Entity type: {}'.format(entity.type))
        print('Text: {}'.format(_get_text(entity)))
        print('Mention text: {}\n'.format(entity.mention_text))
When you run your workloads on GCP, you don't need a service account key file. In fact, you MUSTN'T use one!
Why? 2 reasons:
It's useless, because every GCP product has at least a default service account, and most of the time you can customize it. In your case, have a look at the Cloud Functions identity.
A service account key file is just a file, and that means a lot: you can copy it, send it by email, commit it to a Git repository... many people can end up with access to it, and you lose control of this secret. And because it's a secret, you have to store it securely and rotate it regularly (at least every 90 days, per Google's recommendation)... It's a nightmare! When you can, don't use a service account key file!
What do the libraries do?
They check whether the GOOGLE_APPLICATION_CREDENTIALS environment variable exists.
They look in the "well-known" location (when you run gcloud auth application-default login to allow a local application to use your credentials to access Google resources, a file is created in a standard location on your computer).
If neither is found, they check whether the metadata server exists (only on GCP). This server provides the authentication information to the libraries.
Otherwise, they raise an error.
So, simply use the correct service account in your function and grant it the correct role to achieve what you want to do.
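To make that lookup concrete, here is a minimal sketch of resolving Application Default Credentials in code and handing them to the client explicitly; the documentai import path is assumed to match the sample above, and no key file is read from disk:
import google.auth
from google.cloud import documentai_v1beta2 as documentai  # assumed import, matching the sample above

# Resolve Application Default Credentials: env var, well-known file,
# or the metadata server when running on GCP.
credentials, project_id = google.auth.default()

# The resolved credentials can be passed to the client explicitly,
# although omitting the argument has the same effect.
client = documentai.DocumentUnderstandingServiceClient(credentials=credentials)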

ObjectNotFoundException when applying autoscaling policy to DynamoDB table

I'm running a Lambda function using the boto3 SDK in order to add autoscaling policies to a number of DynamoDB tables and indexes; however, it consistently throws this error:
An error occurred (ObjectNotFoundException) when calling the PutScalingPolicy operation: No scalable target registered for service namespace: dynamodb, resource ID: table/tableName, scalable dimension: dynamodb:table:ReadCapacityUnits: ObjectNotFoundException
Relevant code here:
import boto3


def set_scaling_policy(resource_type, capacity_type, resource_id):
    dbClient = boto3.client('application-autoscaling')
    response = dbClient.put_scaling_policy(
        PolicyName='dynamoDBScaling',
        ServiceNamespace='dynamodb',
        ResourceId=resource_id,
        ScalableDimension='dynamodb:{0}:{1}CapacityUnits'.format(
            resource_type, capacity_type),
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 50.0,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'DynamoDB{0}CapacityUtilization'.format(capacity_type)
            }
        }
    )
(resource_type is either 'table' or 'index'; capacity_type is either 'Read' or 'Write')
A few solutions I've considered:
Fixing permissions: it had some permission issues before; I gave it AmazonDynamoDBFullAccess, which seems to have fixed all that. Also, presumably it would throw a different error if it didn't have access.
Formatting of parameters: according to the API here, it all seems correct. I've tried variants like using the full ARN instead of table/tableName, using just the table name, etc.
Checking that tableName actually exists: it does, and I can add and remove scaling policies via the AWS console just fine.
put_scaling_policy
http://boto3.readthedocs.io/en/latest/reference/services/application-autoscaling.html#ApplicationAutoScaling.Client.put_scaling_policy
You cannot create a scaling policy until you register the scalable target using RegisterScalableTarget.
register_scalable_target
http://boto3.readthedocs.io/en/latest/reference/services/application-autoscaling.html#ApplicationAutoScaling.Client.register_scalable_target
Registers or updates a scalable target. A scalable target is a resource that Application Auto Scaling can scale out or scale in. After you have registered a scalable target, you can use this operation to update the minimum and maximum values for its scalable dimension.
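For reference, a minimal sketch of registering the scalable target before attaching the policy; the ResourceId and capacity bounds below are illustrative:
import boto3

client = boto3.client('application-autoscaling')

# Register the table's read capacity as a scalable target first; without
# this, put_scaling_policy raises ObjectNotFoundException.
client.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/tableName',  # illustrative, same ResourceId as in put_scaling_policy
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    MinCapacity=5,    # illustrative bounds
    MaxCapacity=100,
)

# Only then can the target-tracking policy be attached.
client.put_scaling_policy(
    PolicyName='dynamoDBScaling',
    ServiceNamespace='dynamodb',
    ResourceId='table/tableName',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 50.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBReadCapacityUtilization'
        },
    },
)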