Run TaskCat with a parameters file?

I want to run taskcat tests with a parameters file, but it doesn't seem to work. My setup looks like this, and I keep getting errors:
├── .taskcat.yml
├── ci
│   ├── parameters.json
│   └── taskcat.yml
├── templates
└── sqs-yaml.template
taskcat.yml is:
global:
  qsname: sample-taskcat-project
  regions:
    - us-east-1
tests:
  taskcat-yaml:
    parameter_input: parameters.json
    template_file: sqs.yml
    regions:
      - us-east-1
.taskcat.yml is:
global:
  qsname: sample-taskcat-project
  regions:
    - us-east-1
tests:
  taskcat-yaml:
    parameter_input: parameters.json
    template_file: sqs.yml
    regions:
      - us-east-1
parameters.json is:
[
  {
    "ParameterKey": "MyQueueName",
    "ParameterValue": "TestQueue"
  }
]
sqs-yaml.template is:
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates an SQS Queue.
Parameters:
  MyQueueName:
    Description: My Queue Name
    Type: String
Resources:
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Ref MyQueueName
Outputs:
  MyQueueARN:
    Value:
      Ref: MyQueue

@Guzdo - Thanks for using taskcat. GitHub is the best support mechanism for taskcat.
That said, in v0.9.x the entire configuration now lives in a single config file. It looks like one has been auto-generated for you - have a look at '.taskcat.yml'.
Here are a few examples:
project:
  name: my-cfn-project
  az_blacklist:
    - use1-az1
  build_submodules: false
  lambda_source_path: functions/source
  lambda_zip_path: functions/packages
  owner: me@example.com
  package_lambda: false
  parameters:
    KeyPairName: blah

project:
  (...)
tests:
  my-example-test:
    (...)
    parameters:
      KeyPairName: blah
More comprehensive example config:
https://raw.githubusercontent.com/taskcat/taskcat/master/tests/data/config_full_example/.taskcat.yml
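Applied to the project in the question, a minimal v0.9.x-style .taskcat.yml might look roughly like the sketch below; the key names follow the full example linked above, and the template path and inline parameters are assumptions based on the files shown in the question:
project:
  name: sample-taskcat-project
  regions:
    - us-east-1
tests:
  taskcat-yaml:
    # path relative to the project root; adjust if the template lives elsewhere
    template: templates/sqs-yaml.template
    parameters:
      MyQueueName: TestQueue
With a single config like this in place, running taskcat test run from the project root should pick everything up without a separate ci/taskcat.yml or parameters.json.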

Related

aws_ssm_document unable to validate YAML automation file from S3

I'm getting an error when trying to create an AWS Systems Manager automation document via the aws_ssm_document Terraform resource.
Error: creating SSM document: InvalidDocumentContent: YAML not well-formed. at Line: 1, Column: 1
As a sanity check, I tried creating the YAML automation document manually using the same document, and also importing it inline (which is less than ideal due to the size).
A sample of the Terraform resource and the YAML document is below.
resource "aws_ssm_document" "rhel_updates" {
name = "TEST-DW"
document_format = "YAML"
content = "YAML"
document_type = "Automation"
attachments_source {
key = "SourceUrl"
values = ["s3://rhel/templates/101/runbooks/test.yaml"]
name = "test.yaml"
}
}
schemaVersion: '0.3'
description: |-
  cloud.support@test.co.uk
parameters:
  S3ArtifactStore:
    type: String
    default: rhel01
    description: S3 Artifact Store.
  ApiInfrastructureStackName:
    type: String
    description: API InfrastructureStackName.
    default: rhel-api
mainSteps:
  - name: getApiInfrastructureStackOutputs
    action: 'aws:executeAwsApi'
    outputs:
      - Selector: '$.Stacks[0].Outputs'
        Name: Outputs
        Type: MapList
    inputs:
      Service: cloudformation
      Api: DescribeStacks
      StackName: '{{ApiInfrastructureStackName}}'

No files match include/exclude patterns when deploying API Gateway with mock integration

I want to deploy an API Gateway with a mock integration for testing via Serverless, using this example.
However, when I try to deploy it I get the following error:
No file matches include / exclude patterns
I don't understand this because I don't want to include or exclude anything. I just want to create the gateway with a mock integration, which should give me an endpoint that returns a specific response; I don't want to deploy any Lambda. Here is the full serverless.yml:
service: sls-apig
provider:
  name: aws
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          cors: true
          method: get
          integration: mock
          request:
            template:
              application/json: '{"statusCode": 200}'
          response:
            template: $input.path('$')
            statusCodes:
              201:
                pattern: ''
The error normally occurs when you are testing the Serverless Framework in a directory with only the serverless.yml file. This looks like a bug with the Serverless Framework.
Your project structure must be like the following:
project
├── .serverless/
└── serverless.yml
To stop this error we only need to add a file in this directory; in the following example I am adding the foo.txt file:
project
├── .serverless/
├── foo.txt
└── serverless.yml
And the error No file matches include / exclude patterns will stop happening.
Also, we can use this file as the template of the response using the following code:
response:
  template: ${file(foo.txt)}
  statusCodes:
    201:
      pattern: ''
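For illustration, foo.txt could itself hold the response mapping template; the body below is only a hypothetical placeholder returned by the mock endpoint:
{"statusCode": 200, "message": "mock response"}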

Serverless framework is ignoring CLI options

I'm trying to dynamically pass in CLI options to be resolved when deploying my functions with Serverless, but they always end up null or hit the fallback.
custom:
  send_grid_api: ${opt:sendgridapi, 'missing'}
  SubscribedUsersTable:
    name: !Ref UsersSubscriptionTable
    arn: !GetAtt UsersSubscriptionTable.Arn
  bundle:
    linting: false
provider:
  name: aws
  lambdaHashingVersion: 20201221
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: us-west-2
  environment:
    STAGE: ${self:provider.stage}
    SEND_GRID_API_KEY: ${self:custom.send_grid_api}
I've also tried:
environment:
  STAGE: ${self:provider.stage}
  SEND_GRID_API_KEY: ${opt:sendgridapi, 'missing'}
both yield 'missing', but why?
sls deploy --stage=prod --sendgridapi=xxx
It also fails if I use a space instead of =.
Edit: Working Solution
In my github action template, I defined the following:
- name: create env file
  run: |
    touch .env
    echo SEND_GRID_API_KEY=${{ secrets.SEND_GRID_KEY }} >> .env
    ls -la
    pwd
In addition, I explicitly set the working directory for this stage like so:
working-directory: /home/runner/work/myDir/myDir/
In my serverless.yml I added the following:
environment:
  SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}
sls will read the contents from the .env file and load them properly.
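As a side note, whether a .env file is picked up automatically depends on the Serverless Framework version; a common sketch is to opt in explicitly, though treat the flag below as an assumption to verify against your version:
useDotenv: true
provider:
  environment:
    SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}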
opt is for serverless' CLI options. These are part of serverless, not your own code.
You can instead use...
provider:
  ...
  environment:
    ...
    SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}
And pass the value as an environment variable in your deploy step.
- name: Deploy
  run: sls deploy --stage=prod
  env:
    SEND_GRID_API_KEY: "insert api key here"

aws serverless multiple yaml java

I have an AWS sample serverless app with a single YAML file and multiple handlers.
The problem is that template.yaml keeps growing. How can I split the YAML per handler or group of handlers so it is easier to manage?
For our projects we started dividing the main YAML into multiple files in the following way:
all Lambdas are still described in the serverless.yml file
in serverless.yml
resources:
  - ${file(./sls-config/cognito-user-pools-authorizer.yml)}
  - ${file(./sls-config/aurora.yml)}
  - ${file(./sls-config/bucket.yml)}
  - ${file(./sls-config/queues.yml)}
  - ${file(./sls-config/alarms.yml)}
  - ${file(./sls-config/roles.yml)}
  - ${file(./sls-config/outputs.yml)}
How you split the resources is up to you.
alarms.yml
Resources:
  SQSAlarmTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: ${self:provider.prefix}-sqs-alarm-topic
      TopicName: ${self:provider.prefix}-sqs-alarm-topic
      Subscription:
        - Endpoint: example-email@mail.com
          Protocol: email
      Tags: ${self:custom.sqsTags}
and so on.
outputs.yml
Outputs:
  CognitoUserPoolId:
    Value: ${self:custom.userPool}
  CognitoUserPoolClientId:
    Value: ${self:custom.userPoolClientId}
  DSClusterID:
    Description: "RDS Cluster"
    Value: { Ref: RDSCluster }
  DBAddress:
    Value: !GetAtt RDSCluster.Endpoint.Address
The variables from custom and provider can be used easily inside the sub-config YAML files.
Be careful with spacing/padding in your YAML files :)
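Along the same lines, the functions block can also be split into per-group files; the file names below are only illustrative:
functions:
  - ${file(./sls-config/functions-api.yml)}
  - ${file(./sls-config/functions-workers.yml)}
where each included file contains an ordinary map of function definitions, for example:
getUser:
  handler: src/api/getUser.handler
  events:
    - http:
        path: users/{id}
        method: get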

Unable to locate global value from helm subchart

This is my first time using nested Helm charts and I'm trying to access a global value from the root values.yaml file. According to the documentation I should be able to use the syntax below in my secret.yaml file; however, when I run helm template api --debug I get the following error:
Error: template: api/templates/secret.yaml:7:21: executing "api/templates/secret.yaml" at <.Values.global.sa_json>: nil pointer evaluating interface {}.sa_json
helm.go:84: [debug] template: api/templates/secret.yaml:7:21: executing "api/templates/secret.yaml" at <.Values.global.sa_json>: nil pointer evaluating interface {}.sa_json
/primaryChart/charts/api/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Chart.Name }}-service-account-secret
type: Opaque
data:
  sa_json: {{ .Values.global.sa_json }}
primaryChart/values.yaml
global:
  sa_json: _b64_sa_credentials
Folder structure is as follows:
/primaryChart
|- values.yaml
|- /charts
   |- /api
      |- /templates
         |- secret.yaml
With the following directory layout, .Values.global.sa_json will only be available if you call helm template api . from your main chart:
/mnt/c/home/primaryChart> tree
.
├── Chart.yaml      <-- your main chart
├── charts
│   └── api
│       ├── Chart.yaml      <-- your subchart
│       ├── charts
│       ├── templates
│       │   └── secrets.yaml
│       └── values.yaml
├── templates
└── values.yaml     <-- this is where your global.sa_json is defined
Your values file should be called values.yaml and not value.yaml, or pass any other file with the -f flag: helm template api . -f value.yaml
/mnt/c/home/primaryChart> helm template api .
---
# Source: primaryChart/charts/api/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-service-account-secret
type: Opaque
data:
  sa_json: _b64_sa_credentials
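As a side note, Kubernetes expects Secret data values to be base64-encoded; if global.sa_json held the plain-text credentials rather than an already-encoded string, a common sketch is to pipe the value through Helm's b64enc function in the template:
data:
  sa_json: {{ .Values.global.sa_json | b64enc }}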