I'm trying to format my YAML to download a script from an S3 bucket and run it in SSM.
I've tried many different formats, but all the examples I can find are JSON-formatted.
- action: aws:downloadContent
  name: downloadContent
  inputs:
    sourceType: "S3"
    sourceInfo:
      path: https://bucket-name.s3.amazonaws.com/scripts/script.ps1
    destinationPath: "C:\\Windows\\Temp"
It fails with the following message:
standardError": "invalid format in plugin properties map[destinationPath:C:\\Windows\\Temp sourceInfo:map[path:https://bucket-name.s3.amazonaws.com/scripts/script.ps1] sourceType:S3]; \nerror json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string"
This is what ended up working for me:
- action: aws:downloadContent
  name: downloadContent
  inputs:
    sourceType: S3
    sourceInfo: "{\"path\":\"https://bucket-name.s3.amazonaws.com/scripts/script.ps1\"}"
    destinationPath: "C:\\Windows\\Temp"
The sourceInfo value has to be that exact JSON string embedded in the YAML.
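For reference, here is a minimal sketch of a complete Command document built around this step. The schemaVersion, description, and the follow-up aws:runPowerShellScript step are assumptions on my part, not part of the original snippet:

schemaVersion: "2.2"
description: Download a PowerShell script from S3 and run it
mainSteps:
  - action: aws:downloadContent
    name: downloadContent
    inputs:
      sourceType: S3
      sourceInfo: "{\"path\":\"https://bucket-name.s3.amazonaws.com/scripts/script.ps1\"}"
      destinationPath: "C:\\Windows\\Temp"
  - action: aws:runPowerShellScript
    name: runScript
    inputs:
      runCommand:
        - "C:\\Windows\\Temp\\script.ps1"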
Posting a JSON example as well, since we struggled to find working JSON examples. Hopefully this will help someone in the future.
Our error was related to the "sourceInfo" key:
> invalid format in plugin properties map[destinationPath:C:\PATHONTARGETSYSTEM sourceInfo:map[path:https://S3BUCKETNAME.s3.amazonaws.com/SCRIPTNAME.ps1] sourceType:S3]; error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string
The solution was ultimately a combination of the wrong S3 URL format and the wrong JSON formatting. It should look like this:
"sourceInfo": "{\"path\": \"https://s3.amazonaws.com/S3BUCKETNAME/SCRIPTNAME.ps1\"}",
I am facing issues crawling data from an S3 bucket. The files are in JSON format.
When I try crawling this data from S3 I get an "Internal Service Exception".
Can you please suggest a fix?
When I try loading the data directly from Athena, I see the following error for a field which is an array of strings:
HIVE_CURSOR_ERROR: Row is not a valid JSON Object - JSONException: Duplicate key
Thanks,
There were spaces in the key names that I was using in the JSON.
{
...
"key Name" : "value"
...
}
I reformatted my data to remove the spaces from the key names and converted all keys to lowercase.
{
...
"keyname" : "value"
...
}
This resolved the issue.
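In case it helps, here is a rough sketch of the kind of cleanup I did, assuming newline-delimited JSON records; the file names are hypothetical:

import json

# Normalize keys in newline-delimited JSON: strip spaces and lowercase every key
# so that Glue/Athena no longer sees duplicate or invalid column names.
with open("input.json") as src, open("cleaned.json", "w") as dst:
    for line in src:
        record = json.loads(line)
        cleaned = {key.replace(" ", "").lower(): value for key, value in record.items()}
        dst.write(json.dumps(cleaned) + "\n")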
Using Google Deployment Manager, has anybody found a way to first create a view in BigQuery and then authorize that view on one or more datasets it uses, where those datasets are sometimes in different projects and were not created/managed by Deployment Manager? Creating a dataset with a view wasn't too challenging. Here is the jinja template, named inventoryServices_bigquery_territory_views.jinja:
resources:
- name: territory-{{properties["OU"]}}
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: territory_{{properties["OU"]}}
- name: files
  type: gcp-types/bigquery-v2:tables
  properties:
    datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
    tableReference:
      tableId: files
    view:
      query: >
        SELECT DATE(DAY) DAY, ou, email, name, mimeType
        FROM `{{properties["files_table_id"]}}`
        WHERE LOWER(SPLIT(ou, "/")[SAFE_OFFSET(1)]) = "{{properties["OU"]}}"
      useLegacySql: false
The deployment configuration references the above template like this:
imports:
- path: inventoryServices_bigquery_territory_views.jinja

resources:
- name: inventoryServices_bigquery_territory_views
  type: inventoryServices_bigquery_territory_views.jinja
In the example above, files_table_id is the project.dataset.table that needs the newly created view authorized.
I have seen some examples of managing IAM at the project/folder/org level, but my need is at the dataset level, not the project level. Looking at the resource representation of a dataset, it seems like I could update access.view with the newly created view, but I am a bit lost on how to do that without removing existing access entries, and for datasets in projects other than the one the new view is created in. Any help appreciated.
Edit:
I tried adding the dataset that needs the view authorized like so, then deployed in preview mode just to see how it interprets the config:
-name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
      view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.territory_files.tableReference.tableId)
But when I deploy in preview mode it throws this error:
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  location: /deployments/inventoryservices-bigquery-territory-views-us/manifests/manifest-1582283242420
  message: |-
    Manifest expansion encountered the following errors: mapping values are not allowed here
      in "<unicode string>", line 26, column 7:
          type: gcp-types/bigquery-v2:datasets
          ^ Resource: config
This is strange to me; it's hard to make much sense of that error, since the line/column it points to is formatted exactly the same as the other dataset in the config. Maybe it doesn't like that the files-source dataset already exists and was created outside of Deployment Manager.
The error is below:
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1544517871651-57cbb1716c8b8-4fa66ff2-9980028f]: errors:
- code: MISSING_REQUIRED_FIELD
  location: /deployments/infrastructure/resources/projects/resources-practice/serviceAccounts/storage-buckets-backend/keys/json->$.properties->$.parent
  message: |-
    Missing required field 'parent' with schema:
    {
      "type" : "string"
    }
Below is my jinja template content:
resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    name: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}/keys/json
    privateKeyType: enum(TYPE_GOOGLE_CREDENTIALS_FILE)
    keyAlgorithm: enum(KEY_ALG_RSA_2048)
P.S.
My reference for the properties is based on https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys
I will post the response from @John as the answer, for the benefit of the community.
The parent field was missing; it needs to reference an existing service account:
projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}
where ACCOUNT can be the email or the uniqueId of the service account.
Regarding the template, please remove the enum() wrapping from privateKeyType and keyAlgorithm.
The above deployment creates service account credentials (a key) for an existing service account. To retrieve the downloadable JSON key file, expose the publicKeyData property via outputs and then base64-decode it.
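Putting those pieces together, a minimal sketch of what the corrected template might look like; the jinja variables are kept from the template above, and the output name is just an example:

resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    parent: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}
    privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
    keyAlgorithm: KEY_ALG_RSA_2048

outputs:
- name: keyData
  value: $(ref.{{ name }}-keys.publicKeyData)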
This tutorial shows how to invoke a Lambda from CodePipeline, passing a single parameter:
http://docs.aws.amazon.com/codepipeline/latest/userguide/how-to-lambda-integration.html
I've built a Slack webhook Lambda that needs to receive two parameters:
webhook_url
message
Passing in JSON via the CodePipeline editor results in the JSON block being sent wrapped in single quotes, so it can't be parsed directly.
UserParameters passed in:
{
  "webhook": "https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho",
  "message": "Staging build awaiting approval for production deploy"
}
UserParameters in the event payload:
UserParameters: '{
  "webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho",
  "message":"Staging build awaiting approval for production deploy"
  }'
When I try to set multiple UserParameters directly in the CloudFormation template like this:
Name: SlackNotification
ActionTypeId:
  Category: Invoke
  Owner: AWS
  Version: '1'
  Provider: Lambda
OutputArtifacts: []
Configuration:
  FunctionName: aws-notify2
  UserParameters:
    - webhook: !Ref SlackHook
    - message: !Join [" ", [!Ref app, !Ref env, "build has started"]]
RunOrder: 1
This creates an error: "Configuration must only contain simple objects or strings."
Any guesses on how to get multiple UserParameters passed from a CloudFormation template into a Lambda would be much appreciated.
Here is the lambda code for reference:
https://github.com/byu-oit-appdev/aws-codepipeline-lambda-slack-webhook
You should be able to pass multiple UserParameters as a single JSON-object string, then parse the JSON in your Lambda function upon receipt.
This is exactly how the Python example in the documentation handles this case:
try:
    # Get the user parameters which contain the stack, artifact and file settings
    user_parameters = job_data['actionConfiguration']['configuration']['UserParameters']
    decoded_parameters = json.loads(user_parameters)
Similarly, JSON.parse should work fine in Node.js to parse a JSON-object string (as shown in your event payload example) into a usable object:
> JSON.parse('{ "webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho", "message":"Staging build awaiting approval for production deploy" }')
{ webhook: 'https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho',
message: 'Staging build awaiting approval for production deploy' }
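On the CloudFormation side, that means passing UserParameters as one JSON string rather than a list. A sketch using !Sub, assuming SlackHook, app, and env are parameters or resources in the same template, as in the question:

Configuration:
  FunctionName: aws-notify2
  UserParameters: !Sub '{"webhook":"${SlackHook}","message":"${app} ${env} build has started"}'
RunOrder: 1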
I have hit a roadblock using the loopback-component-storage with Amazon S3.
As a test, I am trying to upload a file to S3 from my browser app, which is calling my loopback API on the backend.
My server config for datasources.json looks like:
"s3storage": {
"name": "s3storage",
"connector": "loopback-component-storage",
"provider": "amazon",
"key": “blahblah”,
"keyId": “blahblah”
},
My API endpoint is:
/api/Storage
The error response I am getting from the API is as follows:
error: {name: "MissingRequiredParameter", status: 500, message: "Missing required key 'Bucket' in params",…}
code: "MissingRequiredParameter"
message: "Missing required key 'Bucket' in params"
name: "MissingRequiredParameter"
stack: "MissingRequiredParameter: Missing required key 'Bucket' in params …"
status: 500
time: "2015-03-18T01:54:48.267Z"
How do I pass the {"params": {"Bucket": "bucket-name"}} parameter to my loopback REST API?
Please advise. Thanks much!
AFAIK Buckets are known as Containers in the loopback-component-storage or pkgcloud world.
You can specify a container in your URL path. If your target is /api/Storage, then you specify the container in that path, e.g. /api/Storage/container1/upload, since the format is PATH/:DATASOURCE/:CONTAINER/:ACTION.
Take a look at the tests here for more examples:
https://github.com/strongloop/loopback-component-storage/blob/4e4a8f44be01e4bc1c30019303997e61491141d4/test/upload-download.test.js#L157
Bummer. "Container" basically translates to "bucket" for S3. I was trying to pass the params object via POST, but the devil was in the details: the HTTP POST path for upload expects the bucket/container in the path itself, so /api/Storage/abc/upload means 'abc' is the bucket.
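For anyone else who lands here, a hypothetical browser-side upload against a bucket named abc might look like the sketch below; the host, port, and form field name are assumptions, and fileInput refers to a file <input> element on the page:

// Hypothetical sketch: upload a file to the "abc" container (S3 bucket)
// through the LoopBack storage API. Host/port and field name are assumptions.
var form = new FormData();
form.append('file', fileInput.files[0]);

fetch('http://localhost:3000/api/Storage/abc/upload', {
  method: 'POST',
  body: form
}).then(function (res) { return res.json(); })
  .then(function (result) { console.log(result); });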