What is the right AWS QuickSight Template Schema?

Questions related to this post:
"Expected 2 placeholders. Given 1" error when creating an AWS QuickSight Template
Creating a template with multiple data sets using the QuickSight API, from Python 3/boto3
I'm facing an issue with the AWS QuickSight template skeleton format: I can't work out how to write the file when my analysis has two or more datasets.
Below is the only example I could find in the AWS docs, followed by what I tried. I also attempted another way of writing the DataSetReferences, without success.
Example:
{
  "SourceAnalysis": {
    "Arn": "string",
    "DataSetReferences": [
      {
        "DataSetPlaceholder": "string",
        "DataSetArn": "string"
      }
      ...
    ]
  },
  "SourceTemplate": {
    "Arn": "string"
  }
}
I tried:
{
  "AwsAccountId": "91********43",
  "TemplateId": "my-template-analysis-id",
  "Name": "my-template-analysis-name",
  "SourceEntity": {
    "SourceAnalysis": {
      "Arn": "arn:aws:quicksight:eu-west-1:91********43:analysis/12******-***-****-****-******ef",
      "DataSetReferences": [
        {
          "DataSetPlaceholder": "datasetname1",
          "DataSetArn": "arn:aws:quicksight:eu-west-1:91********43:dataset/fd******-***-****-****-******e6"
        },
        {
          "DataSetPlaceholder": "datasetname2",
          "DataSetArn": "arn:aws:quicksight:eu-west-1:91********43:dataset/2d******-***-****-****-******cb"
        }
      ]
    }
  },
  "VersionDescription": "1"
}

The JSON format in the example is actually correct; I just had a typo in my own file.
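For reference, here is a minimal boto3-oriented sketch that builds the same request for a two-dataset analysis. The account ID and ARNs below are placeholders (the real ones are masked above), and the helper function is my own, not part of any AWS SDK; only the payload construction is exercised here.

```python
def build_template_request(account_id, template_id, name,
                           analysis_arn, dataset_arns_by_placeholder):
    """Build a QuickSight create_template request body.

    Each dataset used by the source analysis needs its own entry in
    DataSetReferences; the placeholder names are arbitrary labels that
    consumers of the template later map back to concrete datasets.
    """
    refs = [
        {"DataSetPlaceholder": placeholder, "DataSetArn": arn}
        for placeholder, arn in dataset_arns_by_placeholder.items()
    ]
    return {
        "AwsAccountId": account_id,
        "TemplateId": template_id,
        "Name": name,
        "SourceEntity": {
            "SourceAnalysis": {
                "Arn": analysis_arn,
                "DataSetReferences": refs,
            }
        },
        "VersionDescription": "1",
    }

# Placeholder identifiers, for illustration only.
request = build_template_request(
    "111111111111",
    "my-template-analysis-id",
    "my-template-analysis-name",
    "arn:aws:quicksight:eu-west-1:111111111111:analysis/example",
    {
        "datasetname1": "arn:aws:quicksight:eu-west-1:111111111111:dataset/example-1",
        "datasetname2": "arn:aws:quicksight:eu-west-1:111111111111:dataset/example-2",
    },
)

# With real credentials you would then call:
# import boto3
# boto3.client("quicksight").create_template(**request)
print(len(request["SourceEntity"]["SourceAnalysis"]["DataSetReferences"]))
```

The "Expected 2 placeholders. Given 1" error from the related question is exactly what you get when this list has fewer entries than the analysis has datasets.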


AWS Cloudwatch (EventBridge) Event Rule for AWS Batch with Environment Variables

I have created a CloudWatch Events (EventBridge) rule that triggers an AWS Batch job, and I want to specify an environment variable and parameters. I'm trying to do so with the following Configured Input (Constant [JSON text]), but when the job is submitted, the environment variables I'm trying to set are not included. The parameters are working as expected.
{
  "ContainerProperties": {
    "Environment": [
      {
        "Name": "MY_ENV_VAR",
        "Value": "MyVal"
      }
    ]
  },
  "Parameters": {
    "one": "1",
    "two": "2",
    "three": "3"
  }
}
As I was typing out the question, I thought to look at the SubmitJob API to see what I was doing wrong (I had been referencing the CloudFormation templates for the job definition, hence the structure above). In case it helps others: I found that I needed to use ContainerOverrides rather than ContainerProperties to have it work properly.
{
  "ContainerOverrides": {
    "Environment": [
      {
        "Name": "MY_ENV_VAR",
        "Value": "NorthAmerica"
      }
    ]
  },
  "Parameters": {
    "one": "1",
    "two": "2",
    "three": "3"
  }
}
The preceding solution DIDN'T work for me. The correct answer can be found here:
https://aws.amazon.com/premiumsupport/knowledge-center/batch-parameters-trigger-cloudwatch/
I was only able to pass parameters to the job like so:
{
  "Parameters": {
    "customers": "tgc,localhost"
  }
}
I wasn't able to get environment variables to work and didn't try ContainerOverrides.
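For completeness, here is a hedged sketch of how the working constant input above would be wired to the rule target with boto3. The rule, queue, role, and job-definition names are hypothetical; only the Input string construction is exercised here, since the put_targets call itself needs real AWS resources.

```python
import json

# The constant JSON input EventBridge passes to the Batch SubmitJob call.
# Note ContainerOverrides (a SubmitJob field), not ContainerProperties
# (which belongs to the job *definition*, not the submission).
target_input = {
    "ContainerOverrides": {
        "Environment": [
            {"Name": "MY_ENV_VAR", "Value": "NorthAmerica"}
        ]
    },
    "Parameters": {"one": "1", "two": "2", "three": "3"},
}

# Hypothetical target wiring; every name/ARN here is illustrative only.
target = {
    "Id": "my-batch-target",
    "Arn": "arn:aws:batch:us-east-1:111111111111:job-queue/my-queue",
    "RoleArn": "arn:aws:iam::111111111111:role/my-events-role",
    "BatchParameters": {
        "JobDefinition": "my-job-definition",
        "JobName": "my-job",
    },
    # Targets take the constant input as a JSON *string*, not an object.
    "Input": json.dumps(target_input),
}

# With real resources you would then call:
# import boto3
# boto3.client("events").put_targets(Rule="my-rule", Targets=[target])
print(json.loads(target["Input"])["ContainerOverrides"]["Environment"][0]["Name"])
```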

How to use multiple prefixes in anything-but clause in AWS eventbridge eventpattern?

I have a situation where I need to filter out certain events using event patterns in EventBridge.
I want to run the rule for all events except those where userName starts with abc or xyz.
I have tried the two syntaxes below, but neither worked:
"userIdentity": {
"sessionContext": {
"sessionIssuer": {
"userName": [
{
"anything-but": {
"prefix": [
"abc-",
"xyz-"
]
}
}
]
}
}
}
"userIdentity": {
"sessionContext": {
"sessionIssuer": {
"userName": [
{
"anything-but": [{
"prefix": "abc-",
"prefix": "xyz-"
}]
}
]
}
}
}
I get the following error when saving the rule:
"Event pattern is not valid. Reason: Inside anything but list, start|null|boolean is not supported."
Am I missing something in the syntax, or, if this is a limitation, is there an alternative?
You can use prefix matchers within an array in an event pattern. Here is an example pattern:
{
  "detail": {
    "alarmName": [
      {
        "prefix": "DemoApp1"
      },
      {
        "prefix": "DemoApp2"
      }
    ],
    "state": {
      "value": [
        "ALARM"
      ]
    },
    "previousState": {
      "value": [
        "OK"
      ]
    }
  }
}
This pattern matches alarms whose names start with either DemoApp1 or DemoApp2.
TL;DR: user #samtoddler is sort of correct.
Prefix matches only work on values, as called out in https://docs.aws.amazon.com/eventbridge/latest/userguide/content-filtering-with-event-patterns.html#filtering-prefix-matching. They do not work inside an anything-but list. You can file a feature request with AWS support, but if you'd like to unblock yourself, it's probably best to control the prefixes you use for userName (guessing this is IAM-related and within your control).
If that's not possible, consider filtering as much as you can via other properties, then sending the events on to a compute target (probably Lambda) to perform the additional filtering.
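The Lambda-side filtering could look something like the sketch below. This is a minimal illustration, not a drop-in handler: the nested field path follows the CloudTrail-style shape from the question, and the prefix list is assumed from the question's abc-/xyz- examples.

```python
# Prefixes to exclude; taken from the question's examples.
EXCLUDED_PREFIXES = ("abc-", "xyz-")

def handler(event, context=None):
    """Drop events whose sessionIssuer userName starts with an excluded prefix.

    EventBridge delivers the matched event as `event`; the pattern-level
    rule can no longer express "anything-but prefix", so we do it here.
    """
    user_name = (
        event.get("detail", {})
        .get("userIdentity", {})
        .get("sessionContext", {})
        .get("sessionIssuer", {})
        .get("userName", "")
    )
    # str.startswith accepts a tuple, so one call covers all prefixes.
    if user_name.startswith(EXCLUDED_PREFIXES):
        return None  # filtered out, nothing further to do
    # ... real processing for non-excluded users goes here ...
    return event
```

The trade-off is that every candidate event now invokes the function, so pre-filtering on other fields in the rule pattern still pays off.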

Cannot create AWS ServiceCatalogProduct using cloudformation

I am trying to create a ServiceCatalog product using a CloudFormation JSON template, like so:
> aws cloudformation create-stack --stack-name hmm --template-body file:///tmp/1.json
My template file (1.json) is shown below, and I have confirmed that it is valid. When I try to create the stack I get the generic error message "Failed to create following Provisioning Artifacts: [ pa-jas39ah3a1d ]". What am I missing?
{
  "Resources": {
    "Product": {
      "Properties": {
        "Description": "",
        "Name": "redis-DEV-shu-cluster",
        "Owner": "shubham",
        "ProvisioningArtifactParameters": [
          {
            "Description": "Time created (UTC): 2020-11-04T04:13:42.897954",
            "DisableTemplateValidation": "true",
            "Info": {
              "LoadTemplateFromURL": "https://s3:amazonaws.com/my-artifact-bucket-name/ag28ajo1-1ef1-47c9-80dc-7tuha718"
            },
            "Name": "1.0.0"
          }
        ],
        "SupportEmail": ""
      },
      "Type": "AWS::ServiceCatalog::CloudFormationProduct"
    }
  }
}
A likely reason is a spelling mistake:
https://s3:amazonaws.com/my-artifact-bucket-name/ag28ajo1-1ef1-47c9-80dc-7tuha718
It should be (note s3., not s3:):
https://s3.amazonaws.com/my-artifact-bucket-name/ag28ajo1-1ef1-47c9-80dc-7tuha718
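Typos like this are easy to catch programmatically before a stack deploy. Below is a small sketch (my own helper, not an AWS tool) showing why the typo breaks: with `s3:amazonaws.com`, URL parsing treats the colon as a host/port separator, so the hostname is just "s3" instead of an amazonaws.com host.

```python
from urllib.parse import urlparse

def looks_like_s3_url(url):
    """Cheap sanity check that catches the 's3:amazonaws.com' typo.

    In 'https://s3:amazonaws.com/...', urlparse treats the colon as a
    port separator, so the hostname becomes just 's3' and the check
    below fails, flagging the URL before CloudFormation ever sees it.
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return parsed.scheme == "https" and host.endswith(".amazonaws.com")

good = looks_like_s3_url("https://s3.amazonaws.com/my-artifact-bucket-name/key")
bad = looks_like_s3_url("https://s3:amazonaws.com/my-artifact-bucket-name/key")
print(good, bad)
```

This would not have made the "Failed to create following Provisioning Artifacts" error any less cryptic, but it would have caught the root cause locally.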

What should the mapping template look like for AWS Firehose PutRecordBatch in API Gateway?

I've successfully set up an API with Kinesis Firehose integration in AWS API Gateway using PutRecord, following these instructions (https://aws.mannem.me/?p=1152 - note: it says insecure but I still clicked through since I needed it).
I'm now trying to set up an API for PutRecordBatch (which essentially allows more than one record to be written at a time), but I keep getting
{
  "__type": "SerializationException"
}
Based on hours of research, API Gateway throws that error when the incoming API call's format doesn't match the mapping template configured in the Integration Request. I'm struggling to figure out how to fix my mapping template.
Here's my mapping template:
{
  "StreamName": "$input.path('DeliveryStreamName')",
  "Records": [
    #foreach($elem in $input.path('$.Records'))
    {
      "Data": "$util.base64Encode($elem.Data)",
    }#if($foreach.hasNext),#end
    #end
  ]
}
Here's the test data that I'm sending:
{
  "DeliveryStreamName": "test",
  "Records": [
    {
      "Data": "SampleDataStringToFirehose"
    },
    {
      "Data": "SampleDataStringToFirehose2"
    }
  ]
}
So dumb, but the mapping template has an error: there is an extra trailing comma at the end of
"Data": "$util.base64Encode($elem.Data)",
that's causing the issue. Below is the corrected version:
{
  "DeliveryStreamName": "$input.path('$.DeliveryStreamName')",
  "Records": [
    #foreach($elem in $input.path('$.Records'))
    {
      "Data": "$util.base64Encode($elem.Data)"
    }#if($foreach.hasNext),#end
    #end
  ]
}
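To make the transformation concrete, here is a Python mimic of what the corrected template produces for the test payload. This is only a simulation of the VTL mapping, not part of API Gateway itself: Firehose's PutRecordBatch expects each record's Data field base64-encoded.

```python
import base64

# The test payload from the question.
incoming = {
    "DeliveryStreamName": "test",
    "Records": [
        {"Data": "SampleDataStringToFirehose"},
        {"Data": "SampleDataStringToFirehose2"},
    ],
}

# What the corrected mapping template emits toward Firehose:
# same stream name, each Data value base64-encoded, no trailing comma.
firehose_request = {
    "DeliveryStreamName": incoming["DeliveryStreamName"],
    "Records": [
        {"Data": base64.b64encode(rec["Data"].encode()).decode()}
        for rec in incoming["Records"]
    ],
}

print(firehose_request["Records"][0]["Data"])
```

The extra comma in the original template produced JSON that is syntactically invalid once rendered, which is exactly the kind of mismatch that surfaces as a SerializationException.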
Your example helped me a lot, so I wanted to complement it in case anybody else runs into my specific scenario.
In my case, instead of a simple string I needed to send a JSON object, similar to this:
{
  "DeliveryStreamName": "test",
  "Records": [
    {
      "Data": {"foo": "bar", "count": 321}
    },
    {
      "Data": {"foo1": "bar1", "count": 10}
    }
  ]
}
In this case, what happened when I used the template from your example is that the object was stored in a non-JSON format, which is not suitable for further analysis.
With a simple adjustment to the template, you can store a correctly formatted JSON object:
{
  "DeliveryStreamName": "$input.path('$.DeliveryStreamName')",
  "Records": [
    #foreach($elem in $input.path('$.Records'))
    {
      #set($jsonPath = "$.Records[$foreach.index].Data")
      "Data": "$util.base64Encode($input.json($jsonPath))"
    }#if($foreach.hasNext),#end
    #end
  ]
}
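As with the string case, a Python mimic (again just simulating the VTL, not API Gateway) shows the difference: $input.json returns the Data subtree as a JSON string, so the base64-decoded record is itself valid JSON rather than a stringified object representation.

```python
import base64
import json

# The JSON-object payload from this answer's scenario.
incoming = {
    "DeliveryStreamName": "test",
    "Records": [
        {"Data": {"foo": "bar", "count": 321}},
        {"Data": {"foo1": "bar1", "count": 10}},
    ],
}

# $input.json($jsonPath) serializes the Data subtree to a JSON string,
# which the template then base64-encodes; json.dumps stands in for that.
firehose_request = {
    "DeliveryStreamName": incoming["DeliveryStreamName"],
    "Records": [
        {"Data": base64.b64encode(
            json.dumps(rec["Data"], separators=(",", ":")).encode()
        ).decode()}
        for rec in incoming["Records"]
    ],
}

decoded = json.loads(base64.b64decode(firehose_request["Records"][0]["Data"]))
print(decoded)
```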

List PowerBI workspace collection keys from arm template

When using ARM templates to deploy various Azure components, you can use a number of functions. One of them, listKeys, returns through the output the keys that were created during the deployment, for example when deploying a storage account.
Is there a way to get the keys when deploying a Power BI workspace collection?
According to the link you mentioned, to use the listKeys function we need to know the resource name and API version.
From the Azure Power BI workspace collection get-access-keys API, we can get the resource name
Microsoft.PowerBI/workspaceCollections/{workspaceCollectionName} and API version "2016-01-29".
So please try the following; it works correctly for me.
"outputs": {
"exampleOutput": {
"value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
"type": "object"
}
You can check the created Power BI workspace collection in the Azure portal.
Whole ARM template I used:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceCollections_tompowerBItest": {
      "defaultValue": "tomjustforbitest",
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.PowerBI/workspaceCollections",
      "sku": {
        "name": "S1",
        "tier": "Standard"
      },
      "tags": {},
      "name": "[parameters('workspaceCollections_tompowerBItest')]",
      "apiVersion": "2016-01-29",
      "location": "South Central US"
    }
  ],
  "outputs": {
    "exampleOutput": {
      "value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
      "type": "object"
    }
  }
}