Logstash-cloudwatch-input plugin is not sending data to Elasticsearch - amazon-web-services

I have an EC2 instance set up in a private VPC network with the IAM role shown below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudWatchAccess",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "EC2",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
And my Logstash configuration file has:
input {
  stdin {}
  cloudwatch {
    namespace => "AWS/EC2"
    metrics => [ 'CPUUtilization' ]
    filters => { "tag:TAG" => "VALUE" }
    region => "us-east-1"
    proxy_uri => "http://proxy.company.com:port/"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["IP:PORT"]
  }
}
I start my logstash using:
/path/to/logstash -f /path/to/logstash.conf
The command runs and I can see data from cloudwatch in debug mode:
{:timestamp=>"2016-04-13T17:26:40.685000-0400", :message=>"DPs: {:datapoints=>[{:timestamp=>2016-04-13 21:24:00 UTC, :sample_count=>2.0, :unit=>\"Percent\", :minimum=>0.17, :maximum=>0.25, :sum=>0.42000000000000004, :average=>0.21000000000000002}, {:timestamp=>2016-04-13 21:22:00 UTC, :sample_count=>2.0, :unit=>\"Percent\", :minimum=>0.08, :maximum=>0.17, :sum=>0.25, :average=>0.125}], :label=>\"CPUUtilization\", :response_metadata=>{:request_id=
but logstash doesn't push anything to elasticsearch.
Does anyone have any idea what might be the issue? Or know how I can debug this?

To solve this issue, use version 1.1.2 of the plugin, or update your cloudwatch.rb and logstash-input-cloudwatch.gemspec as in this pull request:
https://github.com/logstash-plugins/logstash-input-cloudwatch/pull/3/files

Related

AWS SCP: Deny action "RunEC2" when you have 2 conditions

I don't understand why my SCP is not working as I expected, or how I can do this.
I want to block "Run EC2 instances" without a certain tag ONLY if the instance is not created by Data Pipeline.
I was testing different options and I tried this:
{
  "Sid": "Name",
  "Effect": "Deny",
  "Action": [
    "ec2:RunInstances",
    "ec2:CreateVolume"
  ],
  "Resource": [
    "arn:aws:ec2:*:*:instance/*",
    "arn:aws:ec2:*:*:volume/*"
  ],
  "Condition": {
    "Null": {
      "aws:RequestTag/cost_center": "true"
    },
    "StringNotEquals": {
      "aws:CalledVia": [
        "datapipeline.amazonaws.com"
      ]
    }
  }
}
Why doesn't it work?
This SCP always allows EC2 creation (by datapipeline and EC2 instances console without tags)
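One detail worth simulating: condition operators inside a single IAM statement are ANDed, so this Deny only applies when the cost_center tag is missing AND the call did not come via Data Pipeline. A minimal Python sketch of that evaluation (my own simplification, not the IAM policy engine):

```python
def scp_denies(request_tags, called_via):
    """Simulate the Deny statement above: both condition blocks must match.

    Null/aws:RequestTag/cost_center = "true" matches when the tag is ABSENT.
    StringNotEquals/aws:CalledVia matches when the caller is not Data Pipeline.
    """
    tag_absent = "cost_center" not in request_tags                 # Null operator
    not_via_pipeline = called_via != "datapipeline.amazonaws.com"  # StringNotEquals
    return tag_absent and not_via_pipeline                         # conditions are ANDed

# Tagged console launch: the Deny does not match.
print(scp_denies({"cost_center": "123"}, None))
# Untagged console launch: the Deny matches.
print(scp_denies({}, None))
# Untagged launch via Data Pipeline: the Deny does not match.
print(scp_denies({}, "datapipeline.amazonaws.com"))
```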

How do I add permissions in Amazon Keyspaces?

When I try to query AWS Keyspaces (managed Cassandra) from an AWS Lambda, I get this error:
{
  "errorType": "AggregateException",
  "errorMessage": "One or more errors occurred. (All hosts tried for query failed (tried 11.11.111.11:9142: UnauthorizedException 'User arn:aws:iam::111111111111:user/user-for-keyspaces has no permissions.'; 11.11.111.11:9142: UnauthorizedException 'User arn:aws:iam::111111111111:user/user-for-keyspaces has no permissions.'))",
  "stackTrace": [
    "at lambda_method(Closure , Stream , Stream , LambdaContextInternal )"
  ],
  "cause": {
    "errorType": "NoHostAvailableException",
    ...
But in the AWS console for Keyspaces, I don't see anywhere to add permissions.
The user policy for user-for-keyspaces already has this attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cassandra:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
How do I add permissions in AWS Keyspaces?
You should only require the cassandra actions:
{
  "Statement": [
    {
      "Sid": "keyspaces-full-access",
      "Principal": "*",
      "Action": [
        "cassandra:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Additionally, Amazon Keyspaces populates the system.peers table in your account with an entry for each availability zone where a VPC endpoint is available. To look up and store available interface VPC endpoints in the system.peers table, Amazon Keyspaces requires that you grant the IAM entity used to connect to Amazon Keyspaces access permissions to query your VPC for the endpoint and network interface information.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListVPCEndpoints",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeVpcEndpoints"
      ],
      "Resource": "*"
    }
  ]
}
Learn more about VPC endpoints in the Amazon Keyspaces documentation.
The problem was actually nothing to do with the user in the error message, but the VPC endpoint I had created for Keyspaces.
The endpoint requires cassandra:* permissions to perform queries, e.g.
{
  "Statement": [
    {
      "Sid": "keyspaces-full-access",
      "Principal": "*",
      "Action": [
        "cassandra:*",
        "keyspaces:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Elasticsearch exporter for Prometheus throwing 403 errors

I am running into the following issue while running the Elasticsearch exporter (Prometheus) against a VPC-based AWS Elasticsearch instance.
I'd appreciate it if anyone knows what the issue could be.
[elasticsearch-exporter-5558555bbf-blzhn elasticsearch-exporter] level=error ts=2020-09-02T15:56:20.134343455Z caller=clusterinfo.go:174 msg="failed to retrieve cluster info from ES" err="HTTP Request failed with code 403"
[elasticsearch-exporter-5558555bbf-blzhn elasticsearch-exporter] level=debug ts=2020-09-02T15:56:20.13437932Z caller=clusterinfo.go:120 msg="updating cluster info metrics"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::awsaccount:role/role1",
          "arn:aws:iam::awsaccount:role/role2"
        ]
      },
      "Action": [
        "es:ESHttpHead",
        "es:ListDomainNames",
        "es:DescribeElasticsearchDomain",
        "es:ESHttpPost",
        "es:ESHttpGet",
        "es:ESHttpPatch",
        "es:DescribeElasticsearchDomains",
        "es:ESHttpDelete",
        "es:ESHttpPut"
      ],
      "Resource": "arn:aws:es:us-east-1:awsaccount:domain/domain/*"
    }
  ]
}

Unable to send data to SNS from Lambda using designer view

When following Introducing AWS Lambda Destinations, I'm told to create an SNS topic as the destination, and I do that:
But it doesn't send anything. I already had an SNS topic able to send mail to my account, and I have adapted its policy to accept everything from everyone (it works with the 'Publish another message' button).
If I call SNS from code, it works:
if (event.Success) {
  console.log("Success");
  context.callbackWaitsForEmptyEventLoop = false;
  var sns = new AWS.SNS();
  sns.publish({
    Message: 'File(s) uploaded successfully',
    TopicArn: 'arn:aws:sns:XXX:YYY:ZZZ'
  }, (err, data) => {
    if (err) {
      console.log(err.stack);
      return;
    }
    callback(null);
  });
}
But I was hoping not to have to write code for that (that's what's suggested in the blog entry), so that, for example, if I change the SNS topic I don't have to change the code.
Have any of you succeeded in doing this?
Thanks,
I have reviewed and replicated the AWS Lambda Destinations blog successfully without modifying the sample code snippet from the blog.
I would suggest, you review your SNS configuration (and change us-west-2 region to your AWS region of use as need be) and check if it matches the following:
1. On your SNS topic ('arn:aws:sns:us-west-2:1234567890:YourSNSTopicOnSuccess'), navigate to the access policy and check if you have a policy similar to the following :
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__default_statement_ID",
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": [
        "SNS:GetTopicAttributes",
        "SNS:SetTopicAttributes",
        "SNS:AddPermission",
        "SNS:RemovePermission",
        "SNS:DeleteTopic",
        "SNS:Subscribe",
        "SNS:ListSubscriptionsByTopic",
        "SNS:Publish",
        "SNS:Receive"
      ],
      "Resource": "arn:aws:sns:us-west-2:1234567890:YourSNSTopicOnSuccess"
    }
  ]
}
2. On your Lambda role ('arn:aws:iam::1234567890:role/YourLambdaDestinationRole'), make sure of the following:
(i) The "Trust relationship" of your role has the following statement :
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
(ii) The Lambda role has an attached policy document similar to one given below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sns:publish"
      ],
      "Resource": "*"
    }
  ]
}
The successful published message from Amazon Lambda to SNS topic should output something similar to:
{"version":"1.0","timestamp":"2020-03-22T16:29:50.528Z","requestContext":{"requestId":"43d109d2-54be-4e2e-b8d8-2757e3f06f76","functionArn":"arn:aws:lambda:eu-west-1:1234567890:function:event-destinations:$LATEST","condition":"Success","approximateInvokeCount":1},"requestPayload":{ "Success": true },"responseContext":{"statusCode":200,"executedVersion":"$LATEST"},"responsePayload":null}
Hope this helps.

Mandatory tagging when launching EC2 instance

In AWS, is there a way to force an IAM user to tag the instance he/she is about to launch? It doesn't matter what the value is. I want to make sure it is correctly tagged so that long running instances can be properly identified and the owner notified. Currently tagging is optional.
What I do currently is to use CloudTrail and identify the instances with their IAM users. I do not like it because it is an extra work to run the script periodically and CloudTrail has only 7 days worth of data. It would be nice if AWS has an instance attribute for owner.
Using keypairs to identify the owners is not a viable solution in our case. Anyone faced this problem before and how did you tackle it?
One way: Don't give them IAM permissions to launch boxes. Instead, have a web service that allows them to do it. (Production should be fully automated anyway). When they use your service, you can enforce all the rules you want. Yes, it's quite a bit of work, so not for everybody.
Currently tagging is optional.
It's worse than that. Tagging requires a 2nd API call, so even when using the API, things can launch without tags because of a hiccup.
I resolved this by using AWS Lambda. When CloudTrail creates an object in S3, it triggers an event that cause a Lambda function to execute. The Lambda function then parses the S3 object and creates the tag. There is a lag of ~2 mins but the solution works perfectly.
As @helloV mentions, this is possible by using AWS CloudTrail logs (once properly enabled) and AWS Lambda. I was able to accomplish this with the following code running in a Python Lambda function:
import gzip
import json
import urllib
import StringIO

import boto3

s3 = boto3.client('s3')
ec2 = boto3.client('ec2')  # credentials come from the Lambda execution role

def lambda_handler(event, context):
    # Get the CloudTrail log object that triggered this event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        # CloudTrail delivers its logs as gzipped JSON
        compressed_file = StringIO.StringIO()
        compressed_file.write(response['Body'].read())
        compressed_file.seek(0)
        decompressed_file = gzip.GzipFile(fileobj=compressed_file, mode='rb')
        successful_tags = 0
        json_data = json.load(decompressed_file)
        for record in json_data['Records']:
            if record['eventName'] == 'RunInstances':
                instance_user = record['userIdentity']['userName']
                instances_set = record['responseElements']['instancesSet']
                for instance in instances_set['items']:
                    instance_id = instance['instanceId']
                    # Tag each launched instance with the IAM user who launched it
                    ec2.create_tags(Resources=[instance_id],
                                    Tags=[{'Key': 'Owner', 'Value': instance_user}])
                    successful_tags += 1
        return 'Tagged ' + str(successful_tags) + ' instances successfully'
    except Exception as e:
        print(e)
        print('Error tagging object {} from bucket {}'.format(key, bucket))
        raise e
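The core of that handler is just walking the CloudTrail record structure. A standalone sketch of that extraction (field names as in CloudTrail RunInstances events; the helper function is my own), runnable without AWS access:

```python
import json

def owner_and_instances(record):
    """Return (owner, [instance ids]) for a RunInstances CloudTrail record,
    or None for any other event type."""
    if record.get('eventName') != 'RunInstances':
        return None
    owner = record['userIdentity']['userName']
    items = record['responseElements']['instancesSet']['items']
    return owner, [i['instanceId'] for i in items]

# A trimmed-down sample record, shaped like CloudTrail output.
sample = json.loads('''{
  "eventName": "RunInstances",
  "userIdentity": {"userName": "alice"},
  "responseElements": {"instancesSet": {"items": [{"instanceId": "i-0abc"}]}}
}''')
print(owner_and_instances(sample))  # ('alice', ['i-0abc'])
```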
Check out the capitalone.io/cloud-custodian open source project -- it has the ability to enforce policies like this
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantIAMPassRoleOnlyForEC2",
      "Action": [
        "iam:PassRole"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:iam::*:role/ec2tagrestricted",
        "arn:aws:iam::*:role/ec2tagrestricted"
      ],
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "ec2.amazonaws.com"
        }
      }
    },
    {
      "Sid": "ReadOnlyEC2WithNonResource",
      "Action": [
        "ec2:Describe*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "ModifyingEC2WithNonResource",
      "Action": [
        "ec2:CreateKeyPair",
        "ec2:CreateSecurityGroup"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "RunInstancesWithTagRestrictions",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1:*:instance/*",
        "arn:aws:ec2:us-east-1:*:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/test": "${aws:userid}"
        }
      }
    },
    {
      "Sid": "RemainingRunInstancePermissionsNonResource",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1::image/*",
        "arn:aws:ec2:us-east-1::snapshot/*",
        "arn:aws:ec2:us-east-1:*:network-interface/*",
        "arn:aws:ec2:us-east-1:*:key-pair/*",
        "arn:aws:ec2:us-east-1:*:security-group/*"
      ]
    },
    {
      "Sid": "EC2RunInstancesVpcSubnet",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:us-east-1:*:subnet/*",
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "arn:aws:ec2:us-east-1:*:vpc/vpc-8311b8f9"
        }
      }
    },
    {
      "Sid": "EC2VpcNonResourceSpecificActions",
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteNetworkAcl",
        "ec2:DeleteNetworkAclEntry",
        "ec2:DeleteRoute",
        "ec2:DeleteRouteTable",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DeleteSecurityGroup",
        "ec2:CreateNetworkInterfacePermission",
        "ec2:CreateRoute",
        "ec2:UpdateSecurityGroupRuleDescriptionsEgress",
        "ec2:UpdateSecurityGroupRuleDescriptionsIngress"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "arn:aws:ec2:us-east-1:*:vpc/vpc-8311b8f9"
        }
      }
    },
    {
      "Sid": "AllowInstanceActionsTagBased",
      "Effect": "Allow",
      "Action": [
        "ec2:RebootInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:StartInstances",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:AssociateIamInstanceProfile",
        "ec2:DisassociateIamInstanceProfile",
        "ec2:GetConsoleScreenshot",
        "ec2:ReplaceIamInstanceProfileAssociation"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:347612567792:instance/*",
        "arn:aws:ec2:us-east-1:347612567792:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/test": "${aws:userid}"
        }
      }
    },
    {
      "Sid": "AllowCreateTagsOnlyLaunching",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateTags"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:347612567792:instance/*",
        "arn:aws:ec2:us-east-1:347612567792:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:CreateAction": "RunInstances"
        }
      }
    }
  ]
}
This policy restricts a user to launching an EC2 instance only if the tag key is test and the value is the variable ${aws:userid}; the different variables can be found here
Notable things
This does not restrict the number of EC2 instances a user can launch
A user can change the tag of an existing instance and gain control of it
We can use TagKeys https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys to tackle the above two situations, but I did not do it
Attach this policy to the user or group to prevent them from launching an instance without tagging it:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "*",
    "Condition": {
      "Null": {
        "aws:RequestTag/Owner": "true"
      }
    }
  }
}
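As a sanity check on the Null operator used here (a sketch of its documented semantics, not the IAM engine itself): with the value "true" it matches exactly when the tag key is absent from the request, which is what makes this Deny block only untagged launches:

```python
def deny_fires(request_tags):
    # Null operator with value "true": the condition matches (and the
    # Deny applies) when the Owner tag key is absent from the request.
    return "Owner" not in request_tags

print(deny_fires({}))                # launch with no tags: denied
print(deny_fires({"Owner": "bob"}))  # launch tagged with any Owner: allowed
```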
When the user tries to launch an instance, they'll get an error:
(If anyone knows a way to display a cleaner error message, please let us know in the comments.)
Decode the error like so:
aws sts decode-authorization-message \
--encoded-message <encoded-message> \
--query DecodedMessage --output text | jq '.'
Part of the (giant) response is as follows:
{
  "allowed": false,
  "explicitDeny": true,
  "matchedStatements": {
    "items": [
      {
        "statementId": "",
        "effect": "DENY",
        "principals": {
          "items": [
            {
              "value": "AIDATDOMLI3YFAYEBFGSO"
            }
          ]
        },
        "principalGroups": {
          "items": []
        },
        "actions": {
          "items": [
            {
              "value": "ec2:RunInstances"
            }
          ]
        },
        "resources": {
          "items": [
            {
              "value": "*"
            }
          ]
        },
        "conditions": {
          "items": [
            {
              "key": "aws:RequestTag/Owner",
              "values": {
                "items": [
                  {
                    "value": "true"
                  }
                ]
              }
            }
          ]
        }
      }
    ]
  }
}
It shows that the launch failed because the Owner tag is missing.
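The same check can be pulled out programmatically. A hedged Python sketch (the helper name is my own) that walks a decoded authorization message of the shape shown above and reports which condition keys caused the explicit Deny:

```python
def deny_condition_keys(decoded):
    """Collect condition keys from explicit-Deny matched statements
    in a decoded authorization message."""
    keys = []
    for stmt in decoded["matchedStatements"]["items"]:
        if stmt["effect"] == "DENY":
            for cond in stmt["conditions"]["items"]:
                keys.append(cond["key"])
    return keys

# A trimmed-down decoded message, shaped like the STS response.
decoded = {
    "allowed": False,
    "explicitDeny": True,
    "matchedStatements": {"items": [{
        "effect": "DENY",
        "conditions": {"items": [{
            "key": "aws:RequestTag/Owner",
            "values": {"items": [{"value": "true"}]},
        }]},
    }]},
}
print(deny_condition_keys(decoded))  # ['aws:RequestTag/Owner']
```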
Do you use/require userdata scripts at launch time? We use that script process to properly tag each instance as it is launched.
We burn a support script into the AMI; it is launched by the userdata and parses the command line for parameters. These parameters are then used to create tags for the newly launched instances.
For manual launches, the user must load the correct userdata script for this to work. But from automated launching script, or from a properly configured Launch Configuration in an Auto-scaling Group, it works perfectly.
<script>
PowerShell -ExecutionPolicy Bypass -NoProfile -File c:\tools\server_userdata.ps1 -function Admin -environment production
</script>
Using this method, an instance launched with that userdata will be automatically tagged with the Function and Environment tags.
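A minimal Python sketch of the parsing step such a support script performs (the flag names come from the example above; the helper itself is hypothetical):

```python
import shlex

def tags_from_userdata(cmdline):
    """Turn '-function Admin -environment production' into EC2-style tags."""
    tokens = shlex.split(cmdline)
    tags = []
    # Walk the tokens as (flag, value) pairs.
    for flag, value in zip(tokens[::2], tokens[1::2]):
        if flag.startswith('-'):
            tags.append({'Key': flag.lstrip('-').capitalize(), 'Value': value})
    return tags

print(tags_from_userdata('-function Admin -environment production'))
# [{'Key': 'Function', 'Value': 'Admin'}, {'Key': 'Environment', 'Value': 'production'}]
```

The resulting list is in the shape the EC2 create_tags API expects for its Tags parameter.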