We have two different accounts:
one for development
another for the client's prod account
We have CloudFormation templates to deploy resources; when developing new features we first test on dev and then deploy to prod. But with QuickSight it's not so easy, as there are no CloudFormation templates for QuickSight. We need to recreate all reports in the prod account, and doing it manually is very hard. I found the QuickSight API and the create-analysis command, but I don't understand how to create an analysis with it.
Does anyone have examples, or know how to create an analysis with the CLI?
Slavik
It's not possible to create an entirely new analysis or dashboard via the API; however, it is possible to promote these across environments via the API. I found the following AWS blog post to be of some use:
AWS QuickSight Blog
Rich
First, create an Analysis Template using:
aws quicksight create-template --aws-account-id 123456789123 --cli-input-json file://./create-template.json
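The create-template.json referenced above is not shown in the original; as a rough sketch, assuming the template is built from an existing analysis (the source analysis id and the "Report-Template" name below are placeholders, the other values reuse this example's ids), it could look like:
{
  "TemplateId": "report-template",
  "Name": "Report-Template",
  "SourceEntity": {
    "SourceAnalysis": {
      "Arn": "arn:aws:quicksight:ap-southeast-2:123456789123:analysis/source-analysis-id",
      "DataSetReferences": [
        {
          "DataSetPlaceholder": "Template-SRM-Payments Dataset",
          "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/abc"
        }
      ]
    }
  },
  "VersionDescription": "1"
}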
Then use the following JSON as the create-analysis input (create-analysis-cli-input.json):
{
  "AwsAccountId": "123456789123",
  "AnalysisId": "TestAnalysis",
  "Name": "TestAnalysis-Report",
  "Parameters": {
    "StringParameters": [
      {
        "Name": "Parameters1",
        "Values": ["All"]
      },
      {
        "Name": "Parameters2",
        "Values": ["All"]
      }
    ],
    "IntegerParameters": [
      {
        "Name": "IntParameter1",
        "Values": [0]
      },
      {
        "Name": "IntParameter2",
        "Values": [1000]
      }
    ],
    "DateTimeParameters": [
      {
        "Name": "Date1",
        "Values": [20160327]
      },
      {
        "Name": "Date2",
        "Values": [20160723]
      }
    ]
  },
  "Permissions": [
    {
      "Principal": "arn:aws:quicksight:ap-southeast-2:123456789123:user/default/user-qs",
      "Actions": [
        "quicksight:RestoreAnalysis",
        "quicksight:UpdateAnalysisPermissions",
        "quicksight:DeleteAnalysis",
        "quicksight:DescribeAnalysisPermissions",
        "quicksight:QueryAnalysis",
        "quicksight:DescribeAnalysis",
        "quicksight:UpdateAnalysis"
      ]
    }
  ],
  "SourceEntity": {
    "SourceTemplate": {
      "DataSetReferences": [
        {
          "DataSetPlaceholder": "Template-SRM-Payments Dataset",
          "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/abc"
        },
        {
          "DataSetPlaceholder": "Template-SRM-DailyPayments Dataset",
          "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/def"
        },
        {
          "DataSetPlaceholder": "Template-SRM-DateTable Dataset",
          "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/ghi"
        }
      ],
      "Arn": "arn:aws:quicksight:ap-southeast-2:123456789123:template/report-template"
    }
  },
  "ThemeArn": "arn:aws:quicksight::aws:theme/SEASIDE",
  "Tags": [
    {
      "Key": "Name",
      "Value": "TestReport"
    }
  ]
}
The CLI command to run is:
aws quicksight create-analysis --aws-account-id 123456789123 --cli-input-json file://./create-analysis-cli-input.json
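If useful, you can then confirm the analysis was created, and inspect any creation errors, with describe-analysis (same account id and analysis id as above):
aws quicksight describe-analysis --aws-account-id 123456789123 --analysis-id TestAnalysis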
Related
I would like to forward the logs of select services running on my EKS cluster to CloudWatch for cluster-independent storage and better observability.
Following the quickstart outlined at https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-quickstart.html, I've managed to get the logs forwarded via the Fluent Bit service, but that has also generated 170 Container Insights metrics channels. Not only are those metrics not required, but they also appear to cost a fair bit.
How can I disable the collection of cluster metrics such as CPU / memory / network, and only keep forwarding container logs to CloudWatch? I'm having a very hard time finding any documentation on this.
I think I figured it out: the cloudwatch-agent DaemonSet from the quickstart guide is what sends the metrics, and it is not required for log forwarding. None of the objects in the quickstart YAML whose names relate to cloudwatch-agent are needed for log forwarding.
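For illustration, assuming the default object names from the quickstart (verify with kubectl get all -n amazon-cloudwatch first, since names can differ between versions), removing the metrics-related pieces might look like:
# Names below are assumptions based on the quickstart defaults
kubectl -n amazon-cloudwatch delete daemonset cloudwatch-agent
kubectl -n amazon-cloudwatch delete configmap cwagentconfig
kubectl -n amazon-cloudwatch delete serviceaccount cloudwatch-agent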
As suggested by Toms Mikoss, you need to delete the metrics object in your configuration file. This file is the one that you pass to the agent when starting it.
This applies to "on-premises" "linux" installations. I haven't tested this on Windows or EC2, but I imagine it will be similar. The AWS Documentation here says that you can also distribute the configuration via SSM, but again, I imagine the answer here is still applicable.
Example of file with metrics:
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/nginx.log",
            "log_group_name": "nginx",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  },
  "metrics": {
    "metrics_collected": {
      "cpu": {
        "measurement": [
          "cpu_usage_idle",
          "cpu_usage_iowait"
        ],
        "metrics_collection_interval": 60,
        "totalcpu": true
      }
    }
  }
}
Example of file without metrics:
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/nginx.log",
            "log_group_name": "nginx",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}
For reference, the command to start the agent on Linux on-premises servers:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config \
-m onPremise -s -c file:configuration-file-path
More details in the AWS Documentation here.
I need to automate compound index generation with the Firebase CLI, as below:
curl -sL https://firebase.tools | bash
firebase login # looking for a way to automate this
firebase init # looking for a way to automate this
then replace firestore.indexes.json to create an index on the users and timestamp fields, as below:
{
  "indexes": [
    {
      "collectionGroup": "notifications",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "users", "order": "DESCENDING" },
        { "fieldPath": "timestamp", "order": "DESCENDING" }
      ]
    }
  ],
  "fieldOverrides": []
}
and run
firebase deploy --only firestore:indexes
to automate Firestore functionality and compound index creation. Is this feasible?
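For reference, a fully non-interactive run might look like the sketch below; the project id is a placeholder and it assumes a CI token generated once with firebase login:ci (none of this is from the original post):
# Generate a token once on a workstation, then store it as FIREBASE_TOKEN in CI
firebase login:ci
# In CI, deploy the indexes without prompts
firebase deploy --only firestore:indexes --project my-project-id --token "$FIREBASE_TOKEN" --non-interactive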
I'm trying to create a CodePipeline to deploy an application to EC2 instances using Blue/Green Deployment.
My Deployment Group looks like this:
aws deploy update-deployment-group \
--application-name MySampleAppDeploy \
--deployment-config-name CodeDeployDefault.AllAtOnce \
--service-role-arn arn:aws:iam::1111111111:role/CodeDeployRole \
--ec2-tag-filters Key=Stage,Type=KEY_AND_VALUE,Value=Blue \
--deployment-style deploymentType=BLUE_GREEN,deploymentOption=WITH_TRAFFIC_CONTROL \
--load-balancer-info targetGroupInfoList=[{name="sample-app-alb-targets"}] \
--blue-green-deployment-configuration file://configs/blue-green-deploy-config.json \
--current-deployment-group-name MySampleAppDeployGroup
blue-green-deploy-config.json
{
  "terminateBlueInstancesOnDeploymentSuccess": {
    "action": "KEEP_ALIVE",
    "terminationWaitTimeInMinutes": 1
  },
  "deploymentReadyOption": {
    "actionOnTimeout": "STOP_DEPLOYMENT",
    "waitTimeInMinutes": 1
  },
  "greenFleetProvisioningOption": {
    "action": "DISCOVER_EXISTING"
  }
}
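As a side note, you can verify that the Blue/Green settings were applied to the deployment group with get-deployment-group (same application and group names as above):
aws deploy get-deployment-group \
  --application-name MySampleAppDeploy \
  --deployment-group-name MySampleAppDeployGroup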
I'm able to create a blue/green deployment manually using this command, and it works:
# The target instances can be specified here via --target-instances
aws deploy create-deployment \
  --application-name MySampleAppDeploy \
  --deployment-config-name CodeDeployDefault.AllAtOnce \
  --deployment-group-name MySampleAppDeployGroup \
  --target-instances file://configs/blue-green-target-instances.json \
  --s3-location XXX
blue-green-target-instances.json
{
  "tagFilters": [
    {
      "Key": "Stage",
      "Value": "Green",
      "Type": "KEY_AND_VALUE"
    }
  ]
}
Now, in my CodePipeline Deploy stage, I have this:
{
  "name": "Deploy",
  "actions": [
    {
      "inputArtifacts": [
        {
          "name": "app"
        }
      ],
      "name": "Deploy",
      "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "version": "1",
        "provider": "CodeDeploy"
      },
      "outputArtifacts": [],
      "configuration": {
        "ApplicationName": "MySampleAppDeploy",
        "DeploymentGroupName": "MySampleAppDeployGroup"
        /* How can I specify Target Instances here? */
      },
      "runOrder": 1
    }
  ]
}
All EC2 instances are tagged correctly, and everything works as expected when using CodeDeploy via the command line, so I'm missing something about how AWS CodePipeline works in this case.
Thanks
You didn't mention which error you get when you invoke the pipeline. Are you getting this error:
"The deployment failed because no instances were found in your green fleet"
Assuming that is the case: since you are using manual tagging in your CodeDeploy configuration, Blue/Green deployment with manual tags is not going to work here, because CodeDeploy expects to see a tagSet to find the "Green" instances, and there is no way to provide this information via CodePipeline.
To work around this, use the 'Copy AutoScaling' option for implementing Blue/Green deployments in CodeDeploy via CodePipeline. See Step 10 here [1].
Another workaround is to create a Lambda function that is invoked as an action in your CodePipeline. This Lambda function can trigger the CodeDeploy deployment, specifying the target instances with the value of the green Auto Scaling group. It will then need to make describe calls to the CodeDeploy API at frequent intervals to get the status of the deployment, and once the deployment has completed, signal back to CodePipeline based on that status.
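For example, the status polling could be done with something like the following (the deployment id is a placeholder; a Lambda would make the equivalent SDK call rather than shelling out to the CLI):
aws deploy get-deployment --deployment-id d-XXXXXXXXX --query 'deploymentInfo.status' --output text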
Here is an example that walks through how to invoke an AWS Lambda function in a pipeline in CodePipeline [2].
Ref:
[1] https://docs.aws.amazon.com/codedeploy/latest/userguide/applications-create-blue-green.html
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
I'm trying to create a CloudFormation template to configure WAF with a geo location condition. I couldn't find the right template yet. Any pointers would be appreciated.
http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-geo-conditions.html
Unfortunately, the actual answer (as of this writing, July 2018) is that you cannot create geo match sets directly in CloudFormation. You can create them via the CLI or SDK, then reference them in the DataId field of a WAFRule's Predicates property.
Creating a GeoMatchSet with one constraint via CLI:
aws waf-regional get-change-token
aws waf-regional create-geo-match-set --name my-geo-set --change-token <token>
aws waf-regional get-change-token
aws waf-regional update-geo-match-set --change-token <new_token> --geo-match-set-id <id> --updates '[ { "Action": "INSERT", "GeoMatchConstraint": { "Type": "Country", "Value": "US" } } ]'
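If you want to avoid copying the change tokens by hand, the two steps can be combined, for example (my-geo-set is just the sample name from above):
TOKEN=$(aws waf-regional get-change-token --query ChangeToken --output text)
aws waf-regional create-geo-match-set --name my-geo-set --change-token "$TOKEN"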
Now reference that GeoMatchSet id (the id returned by create-geo-match-set) in the CloudFormation:
"WebAclGeoRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
...
"Predicates": [
{
"DataId": "00000000-1111-2222-3333-123412341234" // id from create-geo-match-set
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
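If you need to look up the id to paste into DataId, list-geo-match-sets returns the ids and names of existing sets, e.g.:
aws waf-regional list-geo-match-sets --query 'GeoMatchSets[*].[GeoMatchSetId,Name]' --output table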
There is no documentation for it, but it is possible to create the GeoMatchSet in Serverless/CloudFormation.
I used the following in Serverless:
Resources:
  Geos:
    Type: "AWS::WAFRegional::GeoMatchSet"
    Properties:
      Name: geo
      GeoMatchConstraints:
        - Type: "Country"
          Value: "IE"
Which translates to the following in CloudFormation:
"Geos": {
"Type": "AWS::WAFRegional::GeoMatchSet",
"Properties": {
"Name": "geo",
"GeoMatchConstraints": [
{
"Type": "Country",
"Value": "IE"
}
]
}
}
That can then be referenced when creating a rule:
(Serverless):
Resources:
  MyRule:
    Type: "AWS::WAFRegional::Rule"
    Properties:
      Name: waf
      Predicates:
        - DataId:
            Ref: "Geos"
          Negated: false
          Type: "GeoMatch"
(CloudFormation):
"MyRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
"Name": "waf",
"Predicates": [
{
"DataId": {
"Ref": "Geos"
},
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
I'm afraid that your question is too vague to solicit a helpful response. The CloudFormation User Guide (PDF) defines many different WAF / CloudFront / Route 53 resources that provide various forms of geo match / geo blocking capability. The link you provided seems to cover a subset of Web Access Control Lists (Web ACLs) - see AWS::WAF::WebACL on page 2540.
I suggest you have a look and, if you are still stuck, describe exactly what it is you are trying to achieve.
Note that the term you used, "geo location condition", doesn't directly relate to an AWS capability that I'm aware of.
Finally, if you are referring to https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/, then the latest CloudFormation User Guide doesn't seem to have been updated to reflect this yet.
The command I use:
aws s3api put-bucket-notification-configuration --bucket bucket-name --notification-configuration file:///Users/chris/event_config.json
It works fine if I take out the "Filter" key. As soon as I add it in, I get:
Parameter validation failed:
Unknown parameter in NotificationConfiguration.LambdaFunctionConfigurations[0]: "Filter", must be one of: Id, LambdaFunctionArn, Events
Here's my JSON file:
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:000000000:function:name",
      "Events": [
        "s3:ObjectCreated:*"
      ],
      "Filter": {
        "Key": {
          "FilterRules": [
            {
              "Name": "prefix",
              "Value": "images/"
            }
          ]
        }
      }
    }
  ]
}
When I look at the command's docs (http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-notification-configuration.html), I don't see any mistake. I've tried copy/pasting, looking it over carefully, etc. Any help would be greatly appreciated!
You need to be running at least version 1.7.46 of aws-cli, released 2015-08-20.
This release adds Amazon S3 support for event notification filters and fixes some issues.
https://aws.amazon.com/releasenotes/CLI/3585202016507998
The aws-cli utility contains a lot of built-in intelligence and validation logic. New features often require the code in aws-cli to be updated, and Filter on S3 event notifications is a relatively recent feature.
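To check your current version and upgrade (assuming a pip-based installation; other install methods have their own upgrade paths):
aws --version
pip install --upgrade awscli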
See also: https://aws.amazon.com/blogs/aws/amazon-s3-update-delete-notifications-better-filters-bucket-metrics/