The error is below:
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1544517871651-57cbb1716c8b8-4fa66ff2-9980028f]: errors:
- code: MISSING_REQUIRED_FIELD
  location: /deployments/infrastructure/resources/projects/resources-practice/serviceAccounts/storage-buckets-backend/keys/json->$.properties->$.parent
  message: |-
    Missing required field 'parent' with schema:
    {
      "type" : "string"
    }
Below is my jinja template content:
resource:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    name: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}/keys/json
    privateKeyType: enum(TYPE_GOOGLE_CREDENTIALS_FILE)
    keyAlgorithm: enum(KEY_ALG_RSA_2048)
P.S.
My reference for the properties is based on https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys
I will post John's response as the answer for the benefit of the community.
The parent field was missing; it needs to reference an existing service account:
projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}
where ACCOUNT value can be the email or the uniqueID of the service account.
Regarding the template, please remove the enum() wrapping the privateKeyType and keyAlgorithm values.
The above deployment creates service account credentials for an existing service account. To retrieve the downloadable JSON key file, expose it in outputs using the publicKeyData property and then base64-decode that value.
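Putting that together, a minimal sketch of the corrected template could look like the following (the output name is illustrative and not from the original post):
resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    # parent must point at an existing service account
    parent: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}
    privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
    keyAlgorithm: KEY_ALG_RSA_2048

outputs:
# illustrative output; the exposed key material is still base64-encoded
- name: serviceAccountKey
  value: $(ref.{{ name }}-keys.publicKeyData)
The value exposed this way remains base64-encoded, so decode it after retrieving the output.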
Here is the Terraform template for the AWS Service Catalog product that I am building.
resource "aws_servicecatalog_product" "data-ml-pipeline-service-catalog-product" {
name = "data-ml-pipeline-service-catalog-product"
owner = "data-ml"
type = "CLOUD_FORMATION_TEMPLATE"
provisioning_artifact_parameters {
template_url = "https://s3.amazonaws.com/cf-templates-ozkq9d3hgiq2-us-east-1/temp1.json"
type = "CLOUD_FORMATION_TEMPLATE"
}
Based on this question, Terraform /AWS aws_servicecatalog_portfolio, this should work.
Exact error: Error: error creating Service Catalog Product: InvalidParametersException: Invalid templateBody. Please make sure that your template is valid
Edit: Here is the new template that I am using.
---
ModelBuildCodeCommitRepository:
  Properties:
    Code:
      BranchName: main
      S3:
        Bucket: sagemaker-servicecatalog-seedcode-us-west-2
        Key: toolchain/image-build-model-building-workflow-v1.0.zip
    RepositoryDescription:
      ? "Fn::Sub"
      : "SageMaker Model building workflow infrastructure as code for the Project ${SageMakerProjectName}"
    RepositoryName:
      ? "Fn::Sub"
      : "sagemaker-${SageMakerProjectName}-${SageMakerProjectId}-modelbuild"
  Type: "AWS::CodeCommit::Repository"
Parameters:
  SageMakerProjectId:
    Description: "Service-generated id of the project"
    NoEcho: true
    Type: String
  SageMakerProjectName:
    AllowedPattern: "^[a-zA-Z](-*[a-zA-Z0-9])*"
    Description: "Name of the project"
    MaxLength: 32
    MinLength: 1
    NoEcho: true
    Type: String
I'd like to provide a general answer to this error message.
AFAIK, InvalidParametersException: Invalid templateBody. Please make sure that your template is valid can imply that AWS cannot access the template you're trying to create a Service Catalog product version from (the one which is usually provided by key LoadTemplateFromURL).
There are 2 possible reasons for this:
The URL of the template to deploy is invalid. Make sure that the URL provided actually points to a template file. When using CloudFormation with variables inside the URL, make sure to use !Sub etc.
The IAM user/role executing the deployment may not have the required permissions, as seen in a different SO question. Make sure that the cloudformation:ValidateTemplate permission is in place.
Basically, this error message is misleading: it suggests that the template is invalid, but in fact the template cannot even be accessed in the first place.
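As an illustration of the second point, the role doing the deployment would need a policy statement roughly like this (a sketch only; where you attach it depends on your setup):
# hypothetical policy statement granting template validation
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - cloudformation:ValidateTemplate
      Resource: "*"   # ValidateTemplate is not resource-scoped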
Context
I am trying to associate serverless egress with a static IP address (GCP Docs). I have been able to set this up manually through the gcp-console, and now I am trying to implement it with Deployment Manager. However, with just the IP address and the router it deploys fine; once I add the NAT config, I get 400s ("Request contains an invalid argument."), which does not give me enough information to fix the problem.
# config.yaml
resources:
# addresses spec: https://cloud.google.com/compute/docs/reference/rest/v1/addresses
- name: serverless-egress-address
  type: compute.v1.address
  properties:
    region: europe-west3
    addressType: EXTERNAL
    networkTier: PREMIUM
# router spec: https://cloud.google.com/compute/docs/reference/rest/v1/routers
- name: serverless-egress-router
  type: compute.v1.router
  properties:
    network: projects/<project-id>/global/networks/default
    region: europe-west3
    nats:
    - name: serverless-egress-nat
      natIpAllocateOption: MANUAL_ONLY
      sourceSubnetworkIpRangesToNat: ALL_SUBNETWORKS_ALL_IP_RANGES
      natIPs:
      - $(ref.serverless-egress-address.selfLink)
# error response
code: RESOURCE_ERROR
location: /deployments/<deployment-name>/resources/serverless-egress-router
message: '{
  "ResourceType":"compute.v1.router",
  "ResourceErrorCode":"400",
  "ResourceErrorMessage":{
    "code":400,
    "message":"Request contains an invalid argument.",
    "status":"INVALID_ARGUMENT",
    "statusMessage":"Bad Request",
    "requestPath":"https://compute.googleapis.com/compute/v1/projects/<project-id>/regions/europe-west3/routers/serverless-egress-router",
    "httpMethod":"PUT"
  }
}'
Notably, if I remove the 'natIPs' array and set 'natIpAllocateOption' to 'AUTO_ONLY', it goes through without errors. While this is not the configuration I need, it does narrow the problem down to these config options.
Question
Which is the invalid argument?
Are there things outside of the YAML which I should check? In the docs it says the following, which makes me wonder if there are other caveats like it:
Note that if this field contains ALL_SUBNETWORKS_ALL_IP_RANGES or ALL_SUBNETWORKS_ALL_PRIMARY_IP_RANGES, then there should not be any other Router.Nat section in any Router for this network in this region.
I checked the API reference and passing the values that you used should work. Furthermore, if you talk directly to the API using a JSON payload with these values, it returns 200:
{
  "name": "nat",
  "network": "https://www.googleapis.com/compute/v1/projects/project/global/networks/nat1",
  "nats": [
    {
      "natIps": [
        "https://www.googleapis.com/compute/v1/projects/project/regions/us-central1/addresses/test"
      ],
      "name": "nat1",
      "natIpAllocateOption": "MANUAL_ONLY",
      "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES"
    }
  ]
}
From what I can see, the request is correctly formed when using methods other than Deployment Manager, so there might be an issue in the tool.
I have filed an issue about this on Google's Issue Tracker for them to take a look at it.
The DM team might be able to shed light on what's happening here.
Using Google Deployment Manager, has anybody found a way to first create a view in BigQuery and then authorize that view on one or more datasets used by it, where those datasets are sometimes in different projects and were not created or managed by Deployment Manager? Creating a dataset with a view wasn't too challenging. Here is the Jinja template, named inventoryServices_bigquery_territory_views.jinja:
resources:
- name: territory-{{properties["OU"]}}
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: territory_{{properties["OU"]}}
- name: files
  type: gcp-types/bigquery-v2:tables
  properties:
    datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
    tableReference:
      tableId: files
    view:
      query: >
        SELECT DATE(DAY) DAY, ou, email, name, mimeType
        FROM `{{properties["files_table_id"]}}`
        WHERE LOWER(SPLIT(ou, "/")[SAFE_OFFSET(1)]) = "{{properties["OU"]}}"
      useLegacySql: false
The deployment configuration references the above template like this:
imports:
- path: inventoryServices_bigquery_territory_views.jinja

resources:
- name: inventoryServices_bigquery_territory_views
  type: inventoryServices_bigquery_territory_views.jinja
In the example above, files_table_id is the project.dataset.table that needs the newly created view authorized.
I have seen some examples of managing IAM at the project/folder/org level, but my need is at the dataset level, not the project level. Looking at the resource representation of a dataset, it seems like I can update access.view with the newly created view, but I am a bit lost on how to do that without removing the existing access entries, and for datasets in projects other than the one the new view is created in. Any help appreciated.
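For reference, in the dataset resource representation the authorized view sits alongside the existing role-based entries in the access array, roughly like this (the role entries and names below are illustrative placeholders, not from my project):
access:
- role: OWNER
  specialGroup: projectOwners
- role: READER
  userByEmail: analyst@example.com
# a view entry carries no role; listing it here is what authorizes the view
- view:
    projectId: my-project
    datasetId: territory_emea
    tableId: files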
Edit:
I tried adding the dataset which needs the view authorized like so, then deployed in preview mode just to see how it interprets the config:
-name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
      view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.territory_files.tableReference.tableId)
But when I deploy in preview mode it throws this error:
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  location: /deployments/inventoryservices-bigquery-territory-views-us/manifests/manifest-1582283242420
  message: |-
    Manifest expansion encountered the following errors: mapping values are not allowed here
      in "<unicode string>", line 26, column 7:
          type: gcp-types/bigquery-v2:datasets
              ^ Resource: config
This is strange to me; it's hard to make much sense of that error, since the line and column it points to are formatted exactly the same as the other dataset in the config. Maybe it doesn't like that the files-source dataset already exists and was created outside of Deployment Manager.
This tutorial shows how to invoke a Lambda from CodePipeline, passing a single parameter:
http://docs.aws.amazon.com/codepipeline/latest/userguide/how-to-lambda-integration.html
I've built a slackhook lambda that needs to get 2 parameters:
webhook_url
message
Passing in JSON via the CodePipeline editor results in the JSON block being sent wrapped in single quotes, so it can't be parsed directly.
UserParameter passed in:
{
  "webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho",
  "message":"Staging build awaiting approval for production deploy"
}
User Parameter in Event payload
UserParameters: '{
  "webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho",
  "message":"Staging build awaiting approval for production deploy"
}'
When trying to apply multiple UserParameters directly in the CloudFormation template like this:
Name: SlackNotification
ActionTypeId:
  Category: Invoke
  Owner: AWS
  Version: '1'
  Provider: Lambda
OutputArtifacts: []
Configuration:
  FunctionName: aws-notify2
  UserParameters:
    - webhook: !Ref SlackHook
    - message: !Join [" ", [!Ref app, !Ref env, "build has started"]]
RunOrder: 1
This creates an error: "Configuration must only contain simple objects or strings."
Any guesses on how to pass multiple UserParameters from a CloudFormation template into a Lambda would be much appreciated.
Here is the lambda code for reference:
https://github.com/byu-oit-appdev/aws-codepipeline-lambda-slack-webhook
You should be able to pass multiple UserParameters as a single JSON-object string, then parse the JSON in your Lambda function upon receipt.
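In CloudFormation, that could look roughly like the following sketch, which reuses the SlackHook, app, and env parameters from the question and builds the JSON string with !Sub (the exact JSON keys are up to your Lambda):
Configuration:
  FunctionName: aws-notify2
  # a single JSON-object string instead of a list of mappings
  UserParameters: !Sub '{"webhook":"${SlackHook}","message":"${app} ${env} build has started"}'
RunOrder: 1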
This is exactly how the Python example in the documentation handles this case:
try:
    # Get the user parameters which contain the stack, artifact and file settings
    user_parameters = job_data['actionConfiguration']['configuration']['UserParameters']
    decoded_parameters = json.loads(user_parameters)
Similarly, using JSON.parse should work fine in Node.js to parse a JSON-object string (as shown in your event payload example) into a usable object:
> JSON.parse('{ "webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho", "message":"Staging build awaiting approval for production deploy" }')
{ webhook: 'https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho',
message: 'Staging build awaiting approval for production deploy' }
I would like to understand the best approach for modeling an action on a resource using RAML.
E.g. I have the following resource definition in RAML:
/orders:
  type: collection
  get:
    description: Gets all orders
  post:
    description: Creates a new order
  /{orderId}:
    type: element
    get:
      description: Gets an order
    put:
      description: Updates an order
    delete:
      description: Deletes an order
Now, for an order, I would like to model an "approve" action. Is there a best practice for doing this with RAML?
You could use PUT or PATCH to set some "Approval" field to true in your model.
You could think about the approval as a resource. For example:
/orders:
  type: collection
  get:
  post:
  /{orderId}:
    type: element
    get:
    put:
    delete:
    /approval:
      post:
      get:
      ...
This is not so much a RAML best practice as a question of how you represent your model in REST.
You could use a PATCH request with a "patch document" that raises the approved flag on an order.
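For the PATCH route, a minimal RAML sketch could look like this (the approved field and example body are illustrative, not part of the original model):
/orders:
  /{orderId}:
    patch:
      description: Partially updates an order, for example to approve it
      body:
        application/json:
          example: |
            { "approved": true }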