I have been trying to configure Logstash to read logs that are being generated into my Amazon S3 bucket, but have not been successful. The details:
I have installed Logstash on an EC2 instance.
My logs are all .gz files in the S3 bucket.
The conf file looks like this:
input {
  s3 {
    access_key_id     => "MY_ACCESS_KEY_ID"
    bucket            => "MY_BUCKET"
    region            => "MY_REGION"
    secret_access_key => "MY_SECRET_ACCESS_KEY"
    prefix            => "/"
    type              => "s3"
    add_field         => { "source" => "gzfiles" }
  }
}
filter {
  if [type] == "s3" {
    csv {
      columns => [ "date", "time", "x-edge-location", "sc-bytes", "c-ip", "cs-method", "Host", "cs-uri-stem", "sc-status", "Referer", "User-Agent", "cs-uri-query", "Cookie", "x-edge-result-type", "x-edge-request-id" ]
    }
  }
  if [message] =~ /^#/ {
    drop {}
  }
}
output {
  elasticsearch {
    host     => "ELASTICSEARCH_URL"
    protocol => "http"
  }
}
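One thing I am not sure about: on Logstash 2.x and later the elasticsearch output no longer accepts the host/protocol pair and takes a hosts array instead, so if this matters for my version the output block would presumably look like this sketch (ELASTICSEARCH_URL again standing in for the real endpoint):
output {
  elasticsearch {
    # Logstash 2.x+ uses `hosts` (an array of URLs) instead of `host` + `protocol`
    hosts => ["http://ELASTICSEARCH_URL:9200"]
  }
}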
I am using Terraform version 1.0.5. I would like to enable AWS S3's Replication Time Control (RTC) for S3 buckets created by my Terraform script. As Terraform does not yet offer the ability to enable/disable RTC (https://github.com/hashicorp/terraform-provider-aws/issues/10974), I was thinking of using a local-exec provisioner to update the replication config of the created buckets (see below).
resource "null_resource" "s3_bucket" {
depends_on = [
# a module that creates S3 buckets
]
triggers = {
# below statement makes sure the local-exec provisioner is invoked on every run
always_run = timestamp()
encoded_replication_config = local.replication_config
}
provisioner "local-exec" {
command = "aws s3api put-bucket-replication --bucket '${primary_bucket_name}' --replication-configuration '${self.triggers.encoded_replication_config}'"
}
}
locals {
  replication_config = jsonencode({
    "Role" : role_arn,
    "Rules" : [
      {
        "ID" : "replication-id",
        "Status" : "Enabled",
        "Priority" : 1,
        "DeleteMarkerReplication" : { "Status" : "Disabled" },
        "Filter" : { "Prefix" : "" },
        "Destination" : {
          "Bucket" : replica_bucket_arn,
          "ReplicationTime" : {
            "Status" : "Enabled",
            "Time" : {
              "Minutes" : 15
            }
          },
          "Metrics" : {
            "Status" : "Enabled",
            "EventThreshold" : {
              "Minutes" : 15
            }
          }
        }
      }
    ]
  })
}
While this works alright for a few buckets, with a large number of S3 buckets (say 500) the local-exec that updates the replication config is executed for all the buckets on every terraform apply, because the replication config is not stored in the tfstate; it re-runs for everything even when only a completely new bucket is created.
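The obvious variant I can think of is to drop always_run so the null_resource is only replaced when the encoded config itself changes; a rough sketch (assuming the bucket name is exposed as var.primary_bucket_name, which is not how my real module wires it), although this then misses any drift introduced outside Terraform:
resource "null_resource" "s3_bucket_replication" {
  triggers = {
    # re-run the provisioner only when the encoded replication config changes,
    # instead of on every apply via timestamp()
    encoded_replication_config = local.replication_config
  }

  provisioner "local-exec" {
    command = "aws s3api put-bucket-replication --bucket '${var.primary_bucket_name}' --replication-configuration '${self.triggers.encoded_replication_config}'"
  }
}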
I would really appreciate it if anyone could suggest other workarounds for this problem.
I am trying to list the contents of my S3 bucket. I went through all the configuration steps and created & authenticated a user (I'm using Amplify). For the record, Auth.currentCredentials() gives:
Object {
  "accessKeyId": "ASIA6NIS4CMFGX3FWTNF",
  "authenticated": true,
  "expiration": 2020-10-14T13:30:49.000Z,
  "identityId": "eu-west-2:6890ebd2-e3f3-4e1d-9725-9f9218241f60",
  "secretAccessKey": "CGZsahSl53ulG9BJqGueM78xlMGhKcOs33UP2GUC",
  "sessionToken": "IQoJb3JpZ2luX2VjEIX//////////wEaCWV1LXdlc3QtM ...
}
My code:
import { Storage } from 'aws-amplify';

async function listObjects() {
  Storage.list('s3://amplify-mythirdapp-dev-170201-deployment/', {
    level: 'public',
    region: 'eu-west-2',
    bucket: 'arn:aws:s3:::amplify-mythirdapp-dev-170201-deployment'
  })
    .then(result => console.log(result))
    .catch(err => console.log(err));
}
Which throws an error: Access point ARN region is empty
If I instead do bucket: 'amplify-mythirdapp-dev-170201-deployment' it just returns an empty Array []
But aws s3api list-objects --bucket amplify-mythirdapp-dev-170201-deployment correctly lists objects
What am I missing?
FYI I've also asked this question at aws-amplify/amplify-js/issues/2828.
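For reference, my understanding from the Amplify Storage docs is that the first argument to Storage.list is a key prefix relative to the configured bucket (not an s3:// URI or an ARN), and the bucket option, when set, takes a plain bucket name; a sketch of what I believe the intended call looks like:
import { Storage } from 'aws-amplify';

async function listObjects() {
  try {
    // the first argument is a key prefix inside the bucket;
    // '' lists everything under the public/ prefix when level is 'public'
    const result = await Storage.list('', {
      level: 'public',
      bucket: 'amplify-mythirdapp-dev-170201-deployment' // plain name, not an ARN
    });
    console.log(result);
  } catch (err) {
    console.log(err);
  }
}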
I am trying to create an encrypted S3 bucket. After I execute terraform apply it all looks good, but when I look at the bucket in the AWS Console it is not encrypted. I am also aware of the previous question.
Here is my terraform version:
Terraform v0.11.13
+ provider.aws v2.2.0
Here is my tf file:
resource "aws_s3_bucket" "test-tf-enc" {
bucket = "test-tf-enc"
acl = "private"
tags {
Name = "test-tf-enc"
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
This is the output after I execute the command:
aws_s3_bucket.test-tf-enc: Creating...
acceleration_status: "" => "<computed>"
acl: "" => "private"
arn: "" => "<computed>"
bucket: "" => "test-tf-enc"
bucket_domain_name: "" => "<computed>"
bucket_regional_domain_name: "" => "<computed>"
force_destroy: "" => "false"
hosted_zone_id: "" => "<computed>"
region: "" => "<computed>"
request_payer: "" => "<computed>"
server_side_encryption_configuration.#: "" => "1"
server_side_encryption_configuration.0.rule.#: "" => "1"
server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.#: "" => "1"
server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.sse_algorithm: "" => "AES256"
tags.%: "" => "1"
tags.Name: "" => "test-tf-enc"
versioning.#: "" => "<computed>"
website_domain: "" => "<computed>"
website_endpoint: "" => "<computed>"
aws_s3_bucket.test-tf-enc: Still creating... (10s elapsed)
aws_s3_bucket.test-tf-enc: Creation complete after 10s (ID: test-tf-enc)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
It works as expected. The confusion came from validating the operation in the AWS Management Console with a different user that did not have sufficient permissions: the insufficient-permissions message in the UI is only visible after expanding the Encryption pane.
Use the AWS CLI for troubleshooting to reduce the problem surface.
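For example, checking the bucket directly with the CLI (assuming the caller is allowed s3:GetEncryptionConfiguration on the bucket) should print something like the configured AES256 rule:
aws s3api get-bucket-encryption --bucket test-tf-enc
{
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "AES256"
                }
            }
        ]
    }
}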
I use an AWS S3 bucket with Symfony 3.4, and when I send a file I get this error:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
I think I need to change 'signature_version' to v4 (https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_configuration.html#signature-version), but I don't know how.
config.yml:
aws:
    version: 'latest'
    region: 'eu-west-3'
    credentials: false
    Sqs:
        credentials: "@a_service"
sendFile.php
use Aws\S3\S3Client;

public function __construct(S3Client $s3Client)
{
    $this->s3Client = $s3Client;
}

public function sendFile($dataBase64)
{
    $this->s3Client->putObject([
        'Bucket' => $monbucket,
        'Key' => $key,
        'Body' => $dataBase64,
        'ACL' => 'public-read',
    ]);
}
Bundle version: "aws/aws-sdk-php-symfony": "^2.0",
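From what I can tell the bundle forwards its configuration to Aws\Sdk, and the PHP SDK accepts a signature_version client option, so I would expect a per-service section like this sketch in config.yml to work (unverified; the S3 key follows the same per-service convention the bundle already uses for Sqs):
aws:
    version: 'latest'
    region: 'eu-west-3'
    credentials: false
    S3:
        signature_version: 'v4'
    Sqs:
        credentials: "@a_service"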
According to this link it is possible to tag spot fleet instances, and the tags are automatically propagated to the launched instances. Is it possible to do the same for normal spot instances? My approach so far:
ec2 = Aws::EC2::Resource.new(region: region, credentials: creds)

launch_specification = {
  security_groups: ['ccc'],
  ebs_optimized: true,
  image_id: "image_id",
  instance_type: "type",
  key_name: "key",
  placement: { group_name: "ggg" },
  user_data: ""
}

resp = ec2.client.request_spot_instances(
  instance_count: count,
  launch_specification: launch_specification,
  spot_price: price.to_s,
  type: 'one-time',
  dry_run: false
)
resp.spot_instance_requests.each do |sir|
  ec2.create_tags(
    dry_run: false,
    resources: [sir.spot_instance_request_id],
    tags: [
      {
        key: "owner",
        value: "ooo",
      },
    ],
  )
end
Tags are created for the spot_instance_request, but are not propagated to the launched instances.
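A workaround sketch, untested: wait until each request is fulfilled, then tag the launched instance directly. describe_spot_instance_requests and create_tags are standard SDK calls; the naive polling loop is my own addition:
resp.spot_instance_requests.each do |sir|
  instance_id = nil

  # poll until the spot request is fulfilled and reports an instance_id
  until instance_id
    sleep 5
    desc = ec2.client.describe_spot_instance_requests(
      spot_instance_request_ids: [sir.spot_instance_request_id]
    )
    instance_id = desc.spot_instance_requests.first.instance_id
  end

  # tag the launched instance itself, since tags on the request do not propagate
  ec2.create_tags(
    resources: [instance_id],
    tags: [{ key: "owner", value: "ooo" }]
  )
end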