Issue:
I would like to steer clear of using the traditional:
authenticationType: jwt
clientEmail: <Service Account Email>
defaultProject: <Default Project Name>
tokenUri: https://oauth2.googleapis.com/token
and instead use a service account JSON key file from GCP. Is there any way of doing this?
Environment:
OpenShift running in GCP. ServiceAccount key is mounted.
So if I understand your comments correctly, you want to create a BigQuery data source using the Grafana API.
This is the JSON body to send with your request:
{
  "orgId": YOUR_ORG_ID,
  "name": NAME_YOU_WANT_TO_GIVE,
  "type": "doitintl-bigquery-datasource",
  "access": "proxy",
  "isDefault": true,
  "version": 1,
  "readOnly": false,
  "jsonData": {
    "authenticationType": "jwt",
    "clientEmail": EMAIL_OF_YOUR_SERVICE_ACCOUNT,
    "defaultProject": YOUR_PROJECT_ID,
    "tokenUri": "https://oauth2.googleapis.com/token"
  },
  "secureJsonData": {
    "privateKey": YOUR_SERVICE_ACCOUNT_JSON_KEY_FILE
  }
}
So there is no way to avoid the fields you wanted to "steer clear of". However, there is no need to take the JSON key file apart: just provide it to privateKey. Beyond that, you only have to supply the service account email in clientEmail and the project ID in defaultProject. Otherwise it is no different from using the UI.
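For reference, a minimal sketch of sending that request with curl (the Grafana host, the GRAFANA_API_KEY variable, and the datasource.json file holding the body above are placeholders for your own values):

curl -X POST "https://your-grafana-host/api/datasources" \
  -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d @datasource.json

The privateKey value in secureJsonData is taken from your mounted service account key file, as described above.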
I was able to set up AutoScaling events as rules in EventBridge to trigger SSM Run Commands, but I've noticed that with my chosen Target Value the event is passed to all my active EC2 instances. My Target key is a tag shared by those instances, so my mistake makes sense now.
I'm pretty new to EventBridge, so I was wondering if there's a way to actually target the instance that triggered the AutoScaling event (as in extracting the "InstanceId" that's present in the event data and using that as my new Target Value). I saw the Input Transformer, but I think that just transforms the event data passed to the target.
Thanks!
EDIT - help with JS code for Lambda + SSM Run Command
I realize I can achieve this by setting EventBridge to invoke a Lambda function instead of the SSM Run Command directly. Can anyone help with the JavaScript code to call a shell command on the EC2 instance specified in the event data (event.detail.EC2InstanceId)? I can't seem to find a relevant and up-to-date base template online, and I'm not familiar enough with JS or Lambda. Any help is greatly appreciated! Thanks
Sample of event data, as per the AWS docs:
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "EC2 Instance Launch Successful",
  "source": "aws.autoscaling",
  "account": "123456789012",
  "time": "yyyy-mm-ddThh:mm:ssZ",
  "region": "us-west-2",
  "resources": [
    "auto-scaling-group-arn",
    "instance-arn"
  ],
  "detail": {
    "StatusCode": "InProgress",
    "Description": "Launching a new EC2 instance: i-12345678",
    "AutoScalingGroupName": "my-auto-scaling-group",
    "ActivityId": "87654321-4321-4321-4321-210987654321",
    "Details": {
      "Availability Zone": "us-west-2b",
      "Subnet ID": "subnet-12345678"
    },
    "RequestId": "12345678-1234-1234-1234-123456789012",
    "StatusMessage": "",
    "EndTime": "yyyy-mm-ddThh:mm:ssZ",
    "EC2InstanceId": "i-1234567890abcdef0",
    "StartTime": "yyyy-mm-ddThh:mm:ssZ",
    "Cause": "description-text"
  }
}
Edit 2 - my Lambda code so far
'use strict'
// AWS SDK v2 SSM client
const ssm = new (require('aws-sdk/clients/ssm'))()

exports.handler = async (event) => {
  // The instance ID comes from the Auto Scaling launch event
  const instanceId = event.detail.EC2InstanceId

  const params = {
    DocumentName: "AWS-RunShellScript",
    InstanceIds: [ instanceId ],
    TimeoutSeconds: 30,
    Parameters: {
      commands: ["/path/to/my/ec2/script.sh"],
      workingDirectory: [],
      executionTimeout: ["15"]
    }
  };

  // Send the command to the instance via SSM Run Command
  const data = await ssm.sendCommand(params).promise()

  const response = {
    statusCode: 200,
    body: "Run Command success",
  };
  return response;
}
Yes, but through Lambda
EventBridge -> Lambda (using SSM api) -> EC2
Thank you @Sándor Bakos for helping me out!! My JavaScript ended up not working for some reason, so I ended up just using part of the Python code linked in the comments.
1. Add ssm:SendCommand permission:
After letting Lambda create a basic execution role during function creation, I added an inline policy to allow Systems Manager's SendCommand. This needs access to your documents/*, instances/* and managed-instances/* resources.
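A sketch of such an inline policy (the resource ARNs use wildcards here as an assumption; scope them down to your account, region and documents as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": [
        "arn:aws:ssm:*:*:document/AWS-RunShellScript",
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ssm:*:*:managed-instance/*"
      ]
    }
  ]
}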
2. Code - Python 3.9

import boto3
import botocore

def lambda_handler(event=None, context=None):
    try:
        client = boto3.client('ssm')
        # The instance ID comes from the Auto Scaling launch event
        instance_id = event['detail']['EC2InstanceId']
        command = '/path/to/my/script.sh'

        client.send_command(
            InstanceIds=[instance_id],
            DocumentName='AWS-RunShellScript',
            Parameters={
                'commands': [command],
                'executionTimeout': ['60']
            }
        )
    except botocore.exceptions.ClientError as error:
        raise error
You can do this without using Lambda, as I just did, by using EventBridge's input transformers.
I specified a new automation document that calls the document I was trying to use (AWS-ApplyAnsiblePlaybooks).
My document declares InstanceId as a parameter, and the input transformer in EventBridge passes it in. I had to pass the event into Lambda just to see how to parse the JSON event object and get the desired instance ID - this ended up being
$.detail.EC2InstanceId
(it was coming from an autoscaling group).
I then passed it into a template that was used for the runbook
{"InstanceId":[<instance>]}
This template was read in my runbook as a parameter.
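In EventBridge terms, the target's input transformer would then be configured roughly as follows (a sketch; the placeholder name "instance" is arbitrary, and the path matches the sample event shown earlier):

Input path:
{"instance": "$.detail.EC2InstanceId"}
Input template:
{"InstanceId": [<instance>]}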
These are the SSM runbook inputs I used to run the AWS-ApplyAnsiblePlaybooks document; I just mapped each parameter to the corresponding parameters in the nested playbook:
"inputs": {
"InstanceIds": ["{{ InstanceId }}"],
"DocumentName": "AWS-ApplyAnsiblePlaybooks",
"Parameters": {
"SourceType": "S3",
"SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
"InstallDependencies": "True",
"PlaybookFile": "ansible-test.yml",
"ExtraVariables": "SSM=True",
"Check": "False",
"Verbose": "-v",
"TimeoutSeconds": "3600"
}
See the document below for reference; the tutorial uses a document that was already set up to receive the variable:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-eventbridge-input-transformers.html
This is the full automation document I used; most of the parameters are defaults from the nested playbook:
{
  "description": "Runs Ansible Playbook on Launch Success Instances",
  "schemaVersion": "0.3",
  "assumeRole": "<Place your automation role ARN here>",
  "parameters": {
    "InstanceId": {
      "type": "String",
      "description": "(Required) The ID of the Amazon EC2 instance."
    }
  },
  "mainSteps": [
    {
      "name": "RunAnsiblePlaybook",
      "action": "aws:runCommand",
      "inputs": {
        "InstanceIds": ["{{ InstanceId }}"],
        "DocumentName": "AWS-ApplyAnsiblePlaybooks",
        "Parameters": {
          "SourceType": "S3",
          "SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
          "InstallDependencies": "True",
          "PlaybookFile": "ansible-test.yml",
          "ExtraVariables": "SSM=True",
          "Check": "False",
          "Verbose": "-v",
          "TimeoutSeconds": "3600"
        }
      }
    }
  ]
}
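For completeness, the EventBridge event pattern that matches these launch events (based on the sample event earlier; you could add AutoScalingGroupName under "detail" to narrow it to one group) would look something like:

{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance Launch Successful"]
}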
I had quite a hard time setting up an automation with Elastic Beanstalk and CodePipeline...
I finally got it running. The main issue was the S3 CloudWatch event to trigger the start of the CodePipeline: I missed the CloudTrail part, which is necessary, and I couldn't find that in any documentation.
So the current Setup is:
S3 file gets uploaded -> a CloudWatch Event triggers the CodePipeline -> CodePipeline deploys to the Elastic Beanstalk env.
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_name}/file.zip"]
}
}
}
But this is only to create a new trail. The problem is that AWS only allows 5 trails max (per region). In the AWS console you can add multiple data events to one trail, but I couldn't manage to do this in Terraform. I tried to use the same name, but this just raises an error:
"Error creating CloudTrail: TrailAlreadyExistsException: Trail codepipeline-source-trail already exists for customer: XXXX"
I tried my best to explain my problem. Not sure if it is understandable.
In a nutshell: I want to add an S3 data event to an existing CloudTrail trail with Terraform.
Thx for help,
Daniel
"As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:"
You do not need multiple CloudTrail trails to invoke a CloudWatch Events rule. You can create service-specific rules as well.
Create a CloudWatch Events rule for an Amazon S3 source (console)
Use a CloudWatch Events rule to invoke CodePipeline as a target. Let's say you created this event rule:
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject"
    ]
  }
}
You then add CodePipeline as a target for this rule, and eventually CodePipeline deploys to the Elastic Beanstalk environment.
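Since you are already using Terraform, a rough sketch of wiring that rule to the pipeline (resource names, the pattern file, the pipeline and the role are placeholders; the role must allow codepipeline:StartPipelineExecution):

resource "aws_cloudwatch_event_rule" "s3_source_put" {
  name          = "codepipeline-s3-source-put"
  event_pattern = file("s3-putobject-pattern.json") # the event pattern shown above
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule     = aws_cloudwatch_event_rule.s3_source_put.name
  arn      = aws_codepipeline.example.arn
  role_arn = aws_iam_role.events_to_codepipeline.arn
}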
Have you tried adding multiple data_resource blocks to your current trail instead of adding a new trail with the same name:
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_A}/file.zip"]
}
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_B}/fileB.zip"]
}
}
}
You should be able to add up to 250 data resources (across all event selectors in a trail) and up to 5 event selectors to your current trail (CloudTrail quota limits).
I have an application where I want to keep track of three types of logs:
Audit
Trace
System
I am sending these logs to CloudWatch Logs using the AWS log driver (awslogs).
Below is the pseudocode:

if (AuditLogs != null)
{
    foreach (var audit in AuditLogs)
    {
        Console.WriteLine(audit);
    }
}

if (TraceLogs != null)
{
    foreach (var trace in TraceLogs)
    {
        Console.WriteLine(trace);
    }
}

if (SystemLogs != null)
{
    foreach (var system in SystemLogs)
    {
        Console.WriteLine(system);
    }
}
Below is the task definition JSON (task.json) configuration for sending logs to CloudWatch Logs:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "lt-stg-secops",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "transactions-stg-secops"
}
}
Question: With the above configuration, all the logs go to the same log group. Is there any way I can direct each of the three log types to a separate log group?
We have error logs in our log groups. The errors are not thrown by the tasks we expected. How do we trace which task/service logged these errors?
You can have a separate Task Definition for each cluster/service and define there where the logs will be sent:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "my-cluster",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "my-service"
}
as well as the name of the container:
"name": "my-container"
It will appear in CloudWatch Logs as:
my-cluster/my-service/my-container/fe788b38-4194-43ff-a128-71c57df15f1a, where the long ID is the ID of the running container (task).
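Put together, a minimal fragment of such a task definition might look like this (a sketch; the image URI and all names are placeholders):

"containerDefinitions": [
  {
    "name": "my-container",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "my-cluster",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service"
      }
    }
  }
]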
If you want to share a Task Definition across many clusters/services, you can search for the relevant logs from the ECS service itself (Logs tab).
You can also grab the container (task) ID from the log stream and search for that ID in the ECS console (Tasks tab).
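For example, a quick way to look up which task definition (and therefore which service) a task ID belongs to, assuming default credentials and region (the cluster name and task ID are placeholders):

aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks fe788b38-4194-43ff-a128-71c57df15f1a \
  --query 'tasks[0].{taskDefinition:taskDefinitionArn,group:group,startedBy:startedBy}'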