AWS: Is there a way to delete every artifact using string matching? - amazon-web-services

I need to remove a lot of created resources in AWS: buckets, Lambdas, CloudFormation stacks, and more. I know everything I need to delete starts with "ABC". Is there a way, from the AWS CLI, to delete everything that starts with "ABC", or even just the resources of certain types whose names start with that string?

Sadly there is no single command for all of these. You would have to create a custom script or program, e.g. in Python, to list all the resources in question, filter them by name, and delete what is needed.
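As a minimal sketch of that approach for one resource type (S3 buckets), assuming the AWS CLI is configured and using the "ABC" prefix from the question — each other resource type (Lambda, CloudFormation, ...) would need its own list/delete pair:

```python
import json
import subprocess

def names_with_prefix(names, prefix):
    """Pure filter: keep only the names that start with the given prefix."""
    return [n for n in names if n.startswith(prefix)]

def delete_buckets_with_prefix(prefix):
    """List all S3 buckets, then delete those whose name starts with prefix."""
    out = subprocess.run(
        ["aws", "s3api", "list-buckets", "--query", "Buckets[].Name",
         "--output", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for name in names_with_prefix(json.loads(out), prefix):
        # "aws s3 rb --force" empties the bucket before removing it
        subprocess.run(["aws", "s3", "rb", f"s3://{name}", "--force"],
                       check=True)

# Usage (destructive -- double-check the prefix first):
# delete_buckets_with_prefix("ABC")
```

The pure filtering step is separated out so you can dry-run it (print the matches) before wiring in the delete call.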

While it won't handle everything (CloudFormation isn't on its list, unfortunately), cloud-nuke can delete resources matched by regex (both inclusive and exclusive), so it might be a good tool for most cases.
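For example, cloud-nuke reads a YAML config file in which resources can be included by name regex per resource type. A sketch, hedged since the exact keys depend on your cloud-nuke version (check the README for your release):

```yaml
# cloud-nuke config: only touch resources whose names start with "ABC"
s3:
  include:
    names_regex:
      - "^ABC.*"
```

You would then run something like cloud-nuke aws --config config.yaml, adding a block like the one above for each supported resource type you want covered.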

Related

Deletion Operation in AWS DocumentDB

I have a question about deleting data in AWS DocumentDB.
I am using PuTTY to connect to an EC2 instance, and from there I use the mongo shell to connect to my DocumentDB cluster.
I checked the AWS DocumentDB documentation but I couldn't find how to delete a single document, or all the data in one collection. For example, say:
rs0:PRIMARY> show databases
gg_events_document_db 0.000GB
rs0:PRIMARY> use gg_events_document_db
switched to db gg_events_document_db
rs0:PRIMARY> db.data_collection.find({"Type": "15"})
{"Type" : "15", "Humidity" : "14.3%"}
Now I have found the data and I want to delete it. What query do I need to run?
Or what if I want to delete all the data in the collection? How can I do that without deleting the collection itself?
Probably I am asking very basic questions but I couldn't find a query like this on my own.
I would be very happy if people experienced with AWS DocumentDB could help me or share some resources.
Thanks a lot 🙌
Amazon DocumentDB is compatible with the MongoDB 3.6 and 4.0 APIs, so the standard MongoDB methods can be used here. With respect to:
Or what if I want to delete all data in the collection? Without
deleting my collection how can I do it?
Since you want to keep the collection itself, use deleteMany() with an empty filter:
db.data_collection.deleteMany({})
To delete a single document matching a filter, use the deleteOne() method. For your case, that would be:
db.data_collection.deleteOne({"Type": "15"})
To delete all documents matching a filter, pass that same filter to deleteMany() instead. There is also a remove() method, but it is deprecated.
Note that db.data_collection.drop() deletes the entire collection, including its indexes, so only use it if you no longer need the collection.

aws delete reports group history

I'm attempting to use the AWS CLI to delete the history of a codebuild reports-group. (Context: It was muddied when we were initially setting up these reports.)
I notice that it's possible to just delete the entire reports-group, but I only want to clear the history. Is there an easy way to delete the history without destroying the entire reports-group?
The man page gives options for deleting an individual report, but there are possibly 500+ of them, and I have neither a way nor the intention to run that command that many times by hand.
My man page diving so far has landed me here:
aws codebuild delete-reports help
So far I have also found batch-delete-builds, but there's no batch-delete-reports that I can tell. Should I just delete the reports-group or is there a command that just isn't named as expected?
There is no such API; you would have to delete the reports individually, or delete the whole report group: https://docs.aws.amazon.com/cli/latest/reference/codebuild/delete-report-group.html
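If deleting the group is not an option, a small script can page through the group's reports and delete them one at a time. A hedged sketch, assuming the AWS CLI is configured (the ARN in the usage comment is a placeholder):

```python
import json
import subprocess

def aws_json(*args):
    """Run an AWS CLI command and parse its JSON output."""
    out = subprocess.run(["aws", *args, "--output", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

def delete_all_reports(report_group_arn):
    """Delete every report in a report group, re-listing until none remain."""
    while True:
        arns = aws_json("codebuild", "list-reports-for-report-group",
                        "--report-group-arn", report_group_arn).get("reports", [])
        if not arns:
            break
        for arn in arns:
            # delete-report removes a single report by ARN
            subprocess.run(["aws", "codebuild", "delete-report", "--arn", arn],
                           check=True)

# Usage (destructive):
# delete_all_reports("arn:aws:codebuild:REGION:ACCOUNT:report-group/NAME")
```

Re-listing in a loop sidesteps explicit nextToken pagination, since each pass deletes the reports it just listed.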

How to clear the Cache of multiple distributions including listing them?

First I want to say hello to all. Second, I am very nervous, since I just got a new job and one of my tasks is something I have never done before in my life.
The task I have been assigned is to find a way to clear the cache of the CloudFront distributions that sit in front of S3. I have tried to find out whether there is a way to list all of the distributions and then clear their caches with a script, but I could not find out whether that is even possible or what the script should look like.
The idea that I have is to have a cli script that will:
A) list all of the distributions in a txt file output;
B) Read the distribution IDs from that output and then use them to clear their current cache.
That way, a fresh cache can be built on the distributions after new files have been uploaded. I have read https://docs.aws.amazon.com/cli/latest/reference/cloudfront/list-distributions.html but unfortunately I could not grasp what the script should look like to list all of the distribution IDs > distribution.txt and then read from that file to clear their cache.
Any tips or reading material to help me create such a script, if it's even possible, would be very helpful, since I am really nervous and scared about my first task.
Want to say thanks to all that have read the topic even if they did not have any tips to give :).
Okay, I think I understand the requirements fully now. What I would do:
Architecturally: Make it a Lambda function, I would use Python 3.7 for this personally.
Coding steps to implement:
Read the domain you want invalidated from the Lambda request input.
Save the result of aws cloudfront list-distributions in a variable.
Since it's a JSON structure, you can loop through it as a dictionary: for each of the distributions, check whether the "Aliases" attribute includes your domain, and save the IDs of those distributions in a list.
Loop through your list and for each of the IDs execute: aws cloudfront create-invalidation --distribution-id *id_from_list* --paths "/*"
Make sure that the Lambda function has permission to list Cloudfront distributions and to create invalidations. Also make sure that everyone who might need to execute this function has rights to do so.
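The steps above can be sketched as a Lambda handler. This assumes boto3 (which the Lambda Python runtime bundles) and assumes the input event carries the domain under a "domain" key — adjust to your actual input format:

```python
def ids_for_alias(distribution_items, domain):
    """Return the IDs of distributions whose Aliases include the domain."""
    ids = []
    for dist in distribution_items:
        aliases = dist.get("Aliases", {}).get("Items", [])
        if domain in aliases:
            ids.append(dist["Id"])
    return ids

def lambda_handler(event, context):
    # boto3 ships with the Lambda Python runtime
    import time
    import boto3

    cf = boto3.client("cloudfront")

    # list_distributions is paginated, so walk every page
    items = []
    for page in cf.get_paginator("list_distributions").paginate():
        items.extend(page["DistributionList"].get("Items", []))

    # invalidate everything ("/*") on each matching distribution
    for dist_id in ids_for_alias(items, event["domain"]):
        cf.create_invalidation(
            DistributionId=dist_id,
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/*"]},
                "CallerReference": str(time.time()),  # must be unique per call
            },
        )
```

The alias-matching logic is kept in a separate pure function so it can be unit-tested without AWS credentials.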

Does using tags for object-level deletion in AWS S3 work?

I need object level auto deletion after some time in my s3 bucket, but only for some objects. I want to accomplish this by having a lifecycle rule that auto deletes objects with a certain (Tag, Value) pair. For example, I am adding the tag pair (AutoDelete, True) for objects I want to delete, and I have a lifecycle rule that deletes such objects after 1 day.
I ran some experiments to see if objects are getting deleted using this technique, but so far my object has not been deleted. (It may get deleted soon??)
If anyone has experience with this technique, please let me know if this does not work, because so far my object has not been deleted, even though it is past its expiration date.
Yes, you can use S3 object lifecycle rules to delete objects having a given tag.
Based on my experience, lifecycle rules based on time are approximate, so it's possible that you need to wait longer. I have also found that complex lifecycle rules can be tricky -- my advice is to start with a simple test in a test bucket and work your way up.
There's decent documentation on this in the AWS S3 object lifecycle management docs.
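For reference, a lifecycle rule of the shape described above, as a sketch of the JSON you would pass to aws s3api put-bucket-lifecycle-configuration (the rule ID is arbitrary):

```json
{
  "Rules": [
    {
      "ID": "AutoDeleteTagged",
      "Status": "Enabled",
      "Filter": {
        "Tag": {"Key": "AutoDelete", "Value": "True"}
      },
      "Expiration": {"Days": 1}
    }
  ]
}
```

Applied with something like: aws s3api put-bucket-lifecycle-configuration --bucket your-bucket --lifecycle-configuration file://lifecycle.json. Note the tag value comparison is exact-match, so "True" and "true" are different values.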

Is there a way to build AWS S3 bucket endpoints automatically, regardless of region?

So I have an app in Node that accesses stuff in buckets. I want it to be able to use buckets in any region, transparently. Unfortunately, the way of building the URL for the endpoint differs based on what region you're in.
If it's in US-Standard, I can say http://s3.amazonaws.com/BUCKETNAME/path/to/file. If it's anywhere else, that doesn't work (non-coincidentally, you're limited to domain-allowed characters (lowercase and numbers only) for bucket names in non-US Standard) and you use http://BUCKETNAME.s3.amazonaws.com/path/to/file.
(Note you can get more complicated and say
I'm thinking this is not a unique problem, so want to put it out there.
http://bucketname.s3.amazonaws.com/path/to/file works in US-Standard also, so you should be able to use this single construct on any bucket anywhere (unless I'm missing something in your question).
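In other words, the single construct is just a string template (shown here as a small Python sketch for brevity; the same template works in Node):

```python
def bucket_url(bucket, key):
    """Virtual-hosted-style S3 URL; works for buckets in any region."""
    return f"http://{bucket}.s3.amazonaws.com/{key}"

# bucket_url("bucketname", "path/to/file")
# -> "http://bucketname.s3.amazonaws.com/path/to/file"
```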