AWS elasticsearch log rotation - amazon-web-services

I want to use AWS Elasticsearch to store the logs of my application. Since there is a huge amount of data going into AWS Elasticsearch (~30 GB daily), I would only keep 3 days of data. Is there any way to schedule data removal from AWS Elasticsearch, or to do a log rotation? What happens if the AWS Elasticsearch storage is full?
Thanks for the help

A possible way is to set the index parameter in the elasticsearch output to something like logstash-%{appname}-%{date_format}. You can then use the Curator plugin to delete the old indices by number of days or so.
This SO answer pretty much explains the same thing. Hope it helps!

I assume you are using the Amazon Elasticsearch Service on AWS?
The storage type is an EBS volume with a fixed amount of disk space. If you want to keep only the last three days, I assume you then have 3 indices, like this:
my-index-2017.01.30
my-index-2017.01.31
my-index-2017.02.01
Basically you can write a simple script which deletes indices older than 3 days. With the REST API (e.g. in Sense) that is just DELETE my-index-2017.01.30.
I recommend using Elasticsearch Curator for the job. See https://www.elastic.co/guide/en/elasticsearch/client/curator/current/delete_indices.html
I'm not sure if the Service interface itself has an option for that. But Elasticsearch Curator should do the job for you.
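For illustration, a minimal sketch of the "simple script" approach (the endpoint and index prefix are placeholders; on Amazon ES with IAM-based access control the requests would additionally need to be signed, e.g. with requests_aws4auth):

from datetime import datetime, timedelta, timezone
import requests

ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint
PREFIX = "my-index-"
KEEP_DAYS = 3

cutoff = datetime.now(timezone.utc) - timedelta(days=KEEP_DAYS)

# List all indices matching the prefix, then delete those whose date suffix is too old.
rows = requests.get(f"{ES_ENDPOINT}/_cat/indices/{PREFIX}*?h=index&format=json").json()
for row in rows:
    name = row["index"]
    try:
        day = datetime.strptime(name[len(PREFIX):], "%Y.%m.%d").replace(tzinfo=timezone.utc)
    except ValueError:
        continue  # skip indices that don't follow the <prefix>YYYY.MM.DD naming scheme
    if day < cutoff:
        print("deleting", name)
        requests.delete(f"{ES_ENDPOINT}/{name}")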

Update for 2020:
AWS ES now has support for Index State Management, which lets you define custom management policies to automate routine tasks and apply them to indices and index patterns. You no longer need to set up and manage external processes to run your index operations.
For example, you can define a policy that moves your index into a read_only state after 30 days and then ultimately deletes it after 90 days.
Index State Management - https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/ism.html
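For illustration, the policy described above is a JSON document along these lines (a sketch only; the field names follow the ISM policy format in the linked documentation and should be checked against your domain's version):

{
  "policy": {
    "description": "Read-only after 30 days, delete after 90 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "read_only", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "read_only",
        "actions": [ { "read_only": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ]
  }
}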

Related

Move data from AWS Elasticsearch to S3

I have an application pumping logs to an AWS OpenSearch (formerly Elasticsearch) cluster. I want to move old logs to S3 to save cost and still be able to read the logs (occasionally).
One approach I can think of is writing a cron job that reads the old indexes, writes them (in text format) to S3, and deletes the indexes. This also requires keeping day-wise indexes. Is there a more efficient/better way?
You can use the manual snapshots approach to back up your indexes to S3: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-snapshots.html
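For reference, registering the S3 snapshot repository on the domain has to be done with a signed request; a rough sketch (domain endpoint, bucket, and IAM role are placeholders):

import boto3
import requests
from requests_aws4auth import AWS4Auth

host = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder domain endpoint
region = "us-east-1"
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, "es",
                   session_token=credentials.token)

# Register an S3 repository; the role must allow the service to write to the bucket.
payload = {
    "type": "s3",
    "settings": {
        "bucket": "my-snapshot-bucket",                             # placeholder bucket
        "region": region,
        "role_arn": "arn:aws:iam::123456789012:role/SnapshotRole",  # placeholder role
    },
}
r = requests.put(host + "/_snapshot/my-repo", auth=awsauth, json=payload)
print(r.status_code, r.text)

Once the repository is registered, PUT /_snapshot/my-repo/<snapshot-name> takes a snapshot and deleting the index frees the cluster storage.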
Another option, suggested toward the end of the first link, is to run a tool named Curator from within Lambda to handle the index rotation:
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/curator.html
Depending on your use case, UltraWarm could be the best approach if you want those logs to be searchable later on without the manual restores that would be required with the first two options I have listed:
https://aws.amazon.com/blogs/aws/general-availability-of-ultrawarm-for-amazon-elasticsearch-service/
There is also a tool called elasticdump:
# Export ES data to S3 (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input=http://production.es.com:9200/my_index \
--output "s3://${bucket_name}/${file_name}.json"

Best way to retire an index

I am retiring an old Elasticsearch index in AWS that has not received a new document since 2016. However, something is still trying to search it.
I still want to deprecate this index in a manner where I can get back to the original state quickly. I have created a manual snapshot of the index and it is sitting in S3. I was planning on deleting the domain, but, from what I understand, that deletes everything billable under AWS, including the endpoint. As I mentioned above, I want to be able to get back to the original state of the index. So this domain contains a series of indexes; the largest index is 20.5 GB. I was going to delete the large index and resize the cluster to a smaller instance size and footprint. Will this work, or will the data be unsearchable?
I've no experience using Elasticsearch on AWS, but I have an idea about your index.
You say the index has received no new documents for a long time. If this also means no deletions and no updates, you could theoretically just take this index to a new cluster, using either snapshot + restore, or a cross-cluster reindex. Continue operating your old cluster until you're sure the new one is working well.
Again - not familiar with AWS terminology, but it sounds like this approach translates to using separate "domains". First you fully ensure the new "domain" is working with the right hardware spec and data, and then delete the old "domain".
TL;DR -> yes!
The backup to S3 will work, but the documents will be unsearchable because in order to downsize the storage you have to delete the index.
But if someday you want to restore the data from S3 back to the index, you can.
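For reference, restoring from a registered snapshot repository is a single API call; a minimal sketch with made-up endpoint, repository, snapshot, and index names (on Amazon ES the request would also need to be signed):

import requests

ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint

# Restore one index from an existing snapshot in the registered S3 repository.
body = {"indices": "my-large-index", "include_global_state": False}
r = requests.post(f"{ES_ENDPOINT}/_snapshot/my-repo/my-snapshot/_restore", json=body)
print(r.status_code, r.text)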
You can resize instances and storage sizes with no downtime, however, that takes a long time and you pay extra for the machines while they are resizing.
Example:
You change your storage size from 100 GB to 99 GB.
The Elasticsearch service will spin up another instance, copy all your data from the old instance to the new one, and then delete the old one.
The same goes for instance sizes: machine up, cluster sync, machine down.
While they are syncing, you pay for them.
Your plan will work; ES is very flexible.
If you really don't trust AWS, just make a JSON export from the index and keep it on S3 too, just in case things go south.

Cheapest way to delete 2 billion objects from S3 IA

I have a bucket in S3 (Infrequent Access) containing 2 billion objects. It is too big to delete in the console or over the API without taking years.
I can create a lifecycle rule to expire and delete the objects but the calculator predicts this will cost me >$20,000. Is that correct? Is there a better way to delete a bucket?
I have a file effectively containing a list of all the objects in that bucket if that helps.
Update 2021:
An answer below from @MAP points out that there is now an "Empty" button. I haven't tested it yet, but it looks like the way to go (I'll accept that answer once tested).
If you have a list of all the objects available, then you can certainly use the Multi-Object Delete action. Apparently this API is free. I would create an AWS Step Functions state machine to loop through the file and delete 1000 objects at a time; 1000 appears to be the limit.
It will take around 2M Step Functions state transitions to delete all the objects in the bucket. As per the Step Functions pricing it will cost you around $50, plus around $1 for the Lambda invocations, so roughly $51 in total.
Update
Using Lambda or Step Functions is probably not the most cost-effective option because either way you will need to read the file (that contains the object keys) from some source such as S3. So I think running the script from a local machine or an EC2 Linux screen session appears to be the best option.
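For illustration, a rough sketch of that batch-delete loop using boto3 (bucket name and key-list file are placeholders; for 2 billion keys you would stream the file rather than load it all into memory):

import boto3

BUCKET = "my-huge-bucket"          # placeholder bucket name
KEY_LIST_FILE = "object_keys.txt"  # placeholder file with one object key per line

s3 = boto3.client("s3")

def chunks(seq, size=1000):
    # Multi-Object Delete accepts at most 1000 keys per request.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

with open(KEY_LIST_FILE) as f:
    keys = [line.strip() for line in f if line.strip()]

for batch in chunks(keys):
    resp = s3.delete_objects(
        Bucket=BUCKET,
        Delete={
            "Objects": [{"Key": k} for k in batch],
            "Quiet": True,  # only errors are reported back
        },
    )
    for err in resp.get("Errors", []):
        print("failed:", err["Key"], err["Message"])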
In 2021, anyone who comes across this question may benefit from knowing that the AWS console now provides an "Empty" button.
Select the bucket and click the "Empty" button, and all objects, versioned or not, will be emptied/deleted. Depending on the number of objects it can take minutes to days.
Expiration lifecycle rules are free. From the original feature announcement:
As with standard delete requests, Amazon S3 doesn’t charge you for using Object Expiration.
Delete operations are free. You can create a lifecycle policy to automate a bulk delete.
I would start with a small number of objects first and check the billing report to confirm 100% that the delete will not be charged, then go for the rest.
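A sketch of such a lifecycle rule created with boto3 (the bucket name is a placeholder); the rule expires every current object after one day, and S3 then performs the deletions asynchronously at no request cost:

import boto3

BUCKET = "my-huge-bucket"  # placeholder bucket name

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # empty prefix = the whole bucket
                "Expiration": {"Days": 1},
                # On a versioned bucket you would also add NoncurrentVersionExpiration
                # and AbortIncompleteMultipartUpload rules.
            }
        ]
    },
)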

deleting old indexes in amazon elasticsearch

We are using AWS Elasticsearch for logs. The logs are streamed via Logstash continuously. What is the best way to periodically remove the old indexes?
I have searched and various approaches recommended are:
Use lambda to delete old indexes - https://medium.com/@egonbraun/periodically-cleaning-elasticsearch-indexes-using-aws-lambda-f8df0ebf4d9f
Use scheduled docker containers - http://www.tothenew.com/blog/running-curator-in-docker-container-to-remove-old-elasticsearch-indexes/
These approaches seem like overkill for such a basic requirement as "delete indexes older than 15 days"
What is the best way to achieve that? Does AWS provide any setting that I can tweak?
Elasticsearch 6.6 brings a new technology called Index Lifecycle Management (ILM); see here. Each index is assigned a lifecycle policy, which governs how the index transitions through specific stages until it is deleted.
For example, if you are indexing metrics data from a fleet of ATMs into Elasticsearch, you might define a policy that says:
When the index reaches 50GB, roll over to a new index.
Move the old index into the warm stage, mark it read only, and shrink it down to a single shard.
After 7 days, move the index into the cold stage and move it to less expensive hardware.
Delete the index once the required 30 day retention period is reached.
The technology is still in beta, but it is probably the way to go from now on.
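A rough sketch of such a policy created through the ILM API, on a cluster where ILM is available (note the AWS-managed service exposes ISM instead); the thresholds mirror the example above, and the endpoint and policy name are placeholders:

import requests

ES_ENDPOINT = "http://localhost:9200"  # placeholder; a cluster with the ILM APIs enabled

policy = {
    "policy": {
        "phases": {
            "hot": {"actions": {"rollover": {"max_size": "50gb"}}},
            "warm": {"min_age": "0ms",
                     "actions": {"readonly": {}, "shrink": {"number_of_shards": 1}}},
            # A "cold" phase with an allocate action could move data to cheaper nodes;
            # omitted here because it depends on your node attributes.
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}

r = requests.put(f"{ES_ENDPOINT}/_ilm/policy/metrics-retention", json=policy)
print(r.status_code, r.text)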
Running curator is pretty light and easy.
Here you can find a Dockerfile, config and action-file.
https://github.com/zakkg3/curator
Also, Curator can help you if you need to (among others):
Add or remove indices (or both!) from an alias
Change shard routing allocation
Delete snapshots
Open closed indices
forceMerge indices
reindex indices, including from remote clusters
Change the number of replicas per shard for indices
rollover indices
Take a snapshot (backup) of indices
Restore snapshots
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html
Here is a typical action file for deleting indices older than 15 days:
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 15 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: True   # set to False to actually perform the delete
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 15
I followed the elasticsearch-curator documentation to install the package:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/pip.html
Then I used the AWS base example of how to automate the index cleanup using the signature-based authentication provided by the requests_aws4auth package:
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/curator.html
It worked like a charm.
You can decide to run this inside a Lambda function or a Docker container, or include it in your own DevOps CLI.
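As a sketch of what that can end up looking like when run as a Lambda handler (domain endpoint, region, prefix, and retention are placeholders; the calls follow Curator's Python API and the signed-auth pattern from the AWS example):

import boto3
import curator
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

HOST = "search-my-domain.us-east-1.es.amazonaws.com"  # placeholder domain endpoint (no scheme)
REGION = "us-east-1"

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, REGION, "es",
                   session_token=credentials.token)

def lambda_handler(event, context):
    es = Elasticsearch(
        hosts=[{"host": HOST, "port": 443}],
        http_auth=awsauth,
        use_ssl=True,
        verify_certs=True,
        connection_class=RequestsHttpConnection,
    )
    # Build the index list, keep only logstash-* indices older than 15 days, delete them.
    ilo = curator.IndexList(es)
    ilo.filter_by_regex(kind="prefix", value="logstash-")
    ilo.filter_by_age(source="name", direction="older",
                      timestring="%Y.%m.%d", unit="days", unit_count=15)
    curator.DeleteIndices(ilo).do_action()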

AWS Kinesis Firehose - using Index Rotation (Elasticsearch)

I have set up a new AWS Kinesis Firehose stream and I'd like to create a new index on a weekly basis.
For that, I should use Index Rotation setting when configuring the stream.
But do I have to create the new index every weekend for the upcoming week?
If not (hopefully not), how does Firehose know what mapping to use? Does it use the mapping defined in the index that I specified in the Index setting?
Moreover, let's say I have old data; can I make Firehose create an index with the relevant timestamp according to the dates specified in my old data?
Thank you !
Have you considered creating an index template in Elasticsearch? That way new indexes will pick up the mapping defined in the index template.
Refer to the following link for details:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
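A minimal sketch of such a template (endpoint, template name, index pattern, and fields are placeholders; the legacy _template API is used here, and the exact mapping format depends on your Elasticsearch version):

import requests

ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint

template = {
    # Matches every index Firehose creates for this stream, whatever rotation suffix it appends.
    "index_patterns": ["firehose-logs-*"],
    "settings": {"number_of_shards": 1},
    "mappings": {
        "properties": {
            "timestamp": {"type": "date"},
            "message": {"type": "text"},
        }
    },
}

r = requests.put(f"{ES_ENDPOINT}/_template/firehose-logs", json=template)
print(r.status_code, r.text)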
Well, apparently the answer is yes, but in a bad way.
If Firehose pushes data to a new index that is not pre-defined with a mapping, that data is ingested into Elasticsearch and a mapping is created automatically for you.
This is really bad.
You should auto-create an index 1-2 hours prior to the rotation event.
I'll post a Lambda function and its configuration for doing that automatically.
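In the meantime, a rough sketch of what such a scheduled function might look like (this is not the poster's function; the endpoint and prefix are placeholders, and the index-name suffix must be adapted to whatever names your Firehose rotation setting actually produces):

import datetime
import os

import boto3
import requests
from requests_aws4auth import AWS4Auth

ES_ENDPOINT = os.environ["ES_ENDPOINT"]                 # placeholder, e.g. https://search-my-domain...
INDEX_PREFIX = os.environ.get("INDEX_PREFIX", "logs")   # placeholder prefix

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   os.environ["AWS_REGION"], "es", session_token=credentials.token)

def lambda_handler(event, context):
    # Triggered by a scheduled EventBridge rule shortly before the weekly rotation:
    # pre-create next week's index so Firehose never has to auto-create it.
    next_week = datetime.datetime.utcnow() + datetime.timedelta(days=7)
    index_name = f"{INDEX_PREFIX}-{next_week.strftime('%Y-%W')}"  # assumed suffix format
    r = requests.put(f"{ES_ENDPOINT}/{index_name}", auth=awsauth, json={})
    # A 400 resource_already_exists_exception simply means the index is already there.
    return {"index": index_name, "status": r.status_code}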