Is there a way to get the deletion history of an AWS S3 bucket?
Problem statement:
Some of my S3 folders got deleted. Is there a way to figure out when they were deleted?
There are at least two ways to accomplish what you want to do, but both are disabled by default.
The first one is to enable server access logging on your bucket(s), and the second one is to use AWS CloudTrail.
You might be out of luck if this already happened and you had no auditing set up, though.
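For reference, a minimal boto3 sketch of setting both of these up going forward (the bucket names and the 90-day window are placeholders, not from the question). Note that object-level calls such as DeleteObject are CloudTrail data events and are only returned if a trail was already logging S3 data events; bucket-level events such as DeleteBucket appear in the default 90-day event history:

import boto3
from datetime import datetime, timedelta

s3 = boto3.client("s3")

# Enable server access logging. The target bucket must allow the
# S3 log delivery service to write to it.
s3.put_bucket_logging(
    Bucket="my-bucket",  # placeholder
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",  # placeholder
            "TargetPrefix": "access-logs/",
        }
    },
)

# Search CloudTrail event history for delete calls.
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteObject"}],
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username"), e["EventName"])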
Related
I am new to AWS. I created an S3 bucket a few days ago and I have noticed that the number of requests made to it is already very high, over the free tier limit for PUT... I don't understand what is going on. I did connect a Django app hosted on Heroku to the bucket, but I am the only one with access to it and I only made a dozen requests in the past few days.
Can you please help me understand what is going on, is this normal behaviour?
I didn't find an answer on the Amazon forums, and to access technical support I would need to upgrade my plan...
Thank you
Check if your bucket is public. If the correct restrictions are not set, anyone can modify the bucket.
Another thing to check is the CloudTrail logs. They will show you the config changes (if any) on your S3 bucket.
Also check whether new files have been added to the bucket. If so, it may already be compromised.
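If you'd rather check from code than the console, a small boto3 sketch (the bucket name is a placeholder):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

# Is the bucket policy public?
try:
    status = s3.get_bucket_policy_status(Bucket=bucket)
    print("Public via policy:", status["PolicyStatus"]["IsPublic"])
except ClientError:
    print("No bucket policy (or no permission to read it)")

# Are the public access blocks in place?
try:
    block = s3.get_public_access_block(Bucket=bucket)
    print(block["PublicAccessBlockConfiguration"])
except ClientError:
    print("No public access block configuration set")

# Spot-check for recently added (possibly foreign) objects.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["LastModified"], obj["Key"])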
In my company we store log files in CloudWatch, and after 7 days they get sent to S3; however, I have trouble finding exactly where the log files are being stored in S3.
Since the process of moving from CloudWatch to S3 is automated, I followed https://medium.com/tensult/exporting-of-aws-cloudwatch-logs-to-s3-using-automation-2627b1d2ee37 in the hope of finding the path.
We are not using Step Functions, so I checked the Lambda service; however, there was no function that moves log files from CloudWatch to S3.
I've tried looking at CloudWatch rules in the hope of finding something like:
{
"region":"REGION",
"logGroupFilter":"prod",
"s3BucketName":"BUCKET_NAME",
"logFolderName":"backend"
}
so I can find which bucket the log files are going to and into which folder.
How can I find where my logs are stored? If moving the data is automated, why are there no functions visible?
Additional note: I am new to AWS; if there is a good resource on AWS architecture, please recommend it.
Thanks in advance!
If the rule exists or was created properly, then you should see it in the AWS console, and the same is true for the S3 bucket.
One common problem when it comes to the visibility of an asset in the AWS console is wrong region selection. So verify in which region the rule and the S3 bucket were created (if they were ever created); selecting the right region in the top right corner should show the assets in that region.
Hope it helps!
Have you tried using "View all exports to Amazon S3" in the CloudWatch -> Logs console? It is one of the items in the Actions menu.
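The export destinations are also visible through the API; here is a small boto3 sketch that lists each export task and the bucket/prefix it wrote to:

import boto3

# Use the region your log groups live in.
logs = boto3.client("logs")

resp = logs.describe_export_tasks()
for task in resp["exportTasks"]:
    print(
        task["logGroupName"],
        "->",
        task["destination"],                # S3 bucket name
        task.get("destinationPrefix", ""),  # folder/prefix within the bucket
        task["status"]["code"],
    )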
I have a few Java Lambda functions that depend on DynamoDB. The issue I am having is that my tables are being deleted from AWS for no apparent reason. Has anyone had this happen to them?
Enable CloudTrail logs and you will be able to see who deleted the tables next time this happens. There is a tutorial on how to do this here.
Btw, double check that you're in the right region because sometimes I find all my AWS resources missing and then I notice I'm in the wrong region.
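Since DeleteTable is a management event, it appears in the default 90-day CloudTrail event history even without a custom trail. A quick boto3 sketch to look it up (the region is an assumption, use your own):

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # your tables' region
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteTable"}],
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username"), e["Resources"])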
I'm trying to run a simple GroundTruth labeling job with a public workforce. I upload my images to S3, start creating the labeling job, generate the manifest using their tool automatically, and explicitly specify a role that most certainly has permissions on both S3 bucket (input and output) as well as full access to SageMaker. Then I create the job (standard rest of stuff -- I just wanted to be clear that I'm doing all of that).
At first, everything looks fine: all green lights, it says it's in progress, and the images are properly showing up at the bottom where the dataset is. However, after a few minutes, the status changes to Failure, and the reason for failure reads: ClientError: Access Denied. Cannot access manifest file: arn:aws:sagemaker:us-east-1:<account number>:labeling-job/<job name> using roleArn: null.
I also get the error underneath (where there used to be images but now there are none):
The specified key <job name>/manifests/output/output.manifest isn't present in the S3 bucket <output bucket>.
I'm very confused for a couple of reasons. First of all, this is a super simple job. I'm just trying to do the most basic bounding box example I can think of. So this should be a very well-tested path. Second, I'm explicitly specifying a role arn, so I have no idea why it's saying it's null in the error message. Is this an Amazon glitch or could I be doing something wrong?
The role must include SageMakerFullAccess and access to the S3 bucket, so it looks like you've got that covered :)
Please check that:
the user creating the labeling job has Cognito permissions: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-getting-started-step1.html
the manifest exists and is at the right S3 location (see the quick check sketched after this list).
the bucket is in the same region as SageMaker.
the bucket doesn't have any bucket policy restricting access.
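A quick way to verify the manifest and region points from that list with boto3 (the bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")
bucket, key = "my-input-bucket", "path/to/dataset.manifest"  # placeholders

# Raises a 404/403 ClientError if the manifest is missing or unreadable.
s3.head_object(Bucket=bucket, Key=key)

# The bucket must live in the region you run SageMaker in.
# get_bucket_location returns None for us-east-1.
region = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"
print("Bucket region:", region)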
If that still doesn't fix it, I'd recommend opening a support ticket with the labeling job id, etc.
Julien (AWS)
There's a bug whereby the console will sometimes say something like 401 ValidationException: The specified key s3prefix/smgt-out/yourjobname/manifests/output/output.manifest isn't present in the S3 bucket yourbucket. Request ID: a08f656a-ee9a-4c9b-b412-eb609d8ce194, but that's not the actual problem; for some reason the console is displaying the wrong error message. If you use the API (or AWS CLI) to call DescribeLabelingJob, like
aws sagemaker describe-labeling-job --labeling-job-name yourjobname
you will see the actual problem. In my case, one of the S3 files that define the UI instructions was missing.
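For reference, the equivalent check with boto3, if you prefer the SDK over the CLI:

import boto3

sm = boto3.client("sagemaker")
job = sm.describe_labeling_job(LabelingJobName="yourjobname")
# FailureReason carries the real error, unlike the console message.
print(job["LabelingJobStatus"], job.get("FailureReason"))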
I had the same issue when I tried to write to a different bucket from the one that had been used successfully before.
Apparently the IAM role can be granted permissions for a particular bucket only.
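For illustration, a role's S3 permissions scoped to one bucket look something like the statement below (the bucket name is a placeholder); if your new output bucket isn't listed under Resource, writes to it will be denied:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}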
I would suggest referring to CloudWatch logs and looking for the CloudWatch >> CloudWatch Logs >> Log groups >> /aws/sagemaker/LabelingJobs group. I had all the points from another post ticked, but my pre-processing Lambda function had the wrong ID for my region, and the error was obvious in the logs.
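To pull those log events from code instead of the console, something like this (the filter pattern is just your job name, as an assumption):

import boto3

logs = boto3.client("logs")
resp = logs.filter_log_events(
    logGroupName="/aws/sagemaker/LabelingJobs",
    filterPattern="yourjobname",  # placeholder: your labeling job name
)
for event in resp["events"]:
    print(event["message"])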
Hi, I am planning to move to AWS S3 to store files. Though I have been through the S3 FAQs, I still want to be sure about a few more things about S3, mentioned below:
1. How does S3 recover the data if a bucket is lost? Does it keep a backup of the data as well?
2. Though my application will not be using S3 heavily, what happens if S3 goes down (how does S3 handle availability issues)?
Thanks.
If you want maximum data retention, you can enable versioning and cross-region replication, so you have a higher chance of getting to your data even if a region goes down. You can refer to this blog post for more information about the feature: https://aws.amazon.com/blogs/aws/new-cross-region-replication-for-amazon-s3/
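A minimal sketch of turning versioning on with boto3 (the bucket name is a placeholder); cross-region replication additionally requires versioning on both buckets plus a replication rule and IAM role, which the linked post walks through:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-bucket",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)

# With versioning on, a delete only adds a delete marker, so earlier
# object versions remain recoverable:
for v in s3.list_object_versions(Bucket="my-bucket").get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])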
You can refer to https://aws.amazon.com/s3/sla/ for SLA considerations.