I am new to AWS. I created an S3 bucket a few days ago and have noticed that the number of requests made to it is already very high, over the free tier limit for PUT requests... I don't understand what is going on. I did connect a Django app hosted on Heroku to the bucket, but I am the only one with access to it, and I have only made a dozen or so requests in the past few days.
Can you please help me understand what is going on? Is this normal behaviour?
I didn't find an answer on the Amazon forums, and to access technical support I would need to upgrade my plan...
Thank you
Check if your bucket is public. If the correct restrictions are not set, anyone can modify the bucket.
Another thing to check is the CloudTrail logs. They will show you any configuration changes made to your S3 bucket.
Also check whether new files have been added to the bucket. If so, the bucket may already be compromised.
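If you'd rather check this programmatically than through the console, here is a minimal boto3 sketch (`my-bucket` is a placeholder; the public-access-block call fails if none was ever configured, which is itself a warning sign):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder: substitute your bucket name

# Check the bucket-level public access block settings.
try:
    pab = s3.get_public_access_block(Bucket=bucket)
    print(pab["PublicAccessBlockConfiguration"])
except s3.exceptions.ClientError:
    print("No public access block configured - check ACL and policy.")

# Inspect the bucket ACL for grants to everyone ("AllUsers").
acl = s3.get_bucket_acl(Bucket=bucket)
for grant in acl["Grants"]:
    uri = grant["Grantee"].get("URI", "")
    if uri.endswith("AllUsers") or uri.endswith("AuthenticatedUsers"):
        print("Public grant found:", grant["Permission"])
```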
Related
I have been monitoring my billing dashboard for a few days now, and I notice that my S3 requests (PUT, COPY, POST, LIST, and GET) keep adding up even though I'm not using S3. In fact, I have stopped using my AWS account for a few days to monitor any changes, and I have also deleted all my previously created S3 buckets, the Lambda functions associated with them, my DynamoDB tables, and my API Gateways. I remember hosting a website using S3, but I had deleted that bucket. Is there something I am missing that is causing this? I am on the free tier, and I am afraid I might exceed it if I don't find out what is causing this despite me not using my AWS account. I am new to AWS, hence the difficulty in understanding it. I would really appreciate some help in this matter.
Is there any direct service that can be used to write data feeds from Adobe Analytics (Omniture) to a Google Cloud Storage bucket, or any alternative solution apart from setting up an FTP server on a GCP instance?
Unfortunately, there isn't.
Data feeds can currently be delivered either directly to an AWS S3 bucket or to an FTP/SFTP account (note that I didn't list FTPS, as it's unsupported).
You'll likely need to set up a jump point somewhere, either in AWS or an FTP site on Google as you suggest. I realize this doesn't answer your question, but I hope it at least gets you moving in the right direction.
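For the jump point route, here is a rough Python sketch of relaying a feed from an S3 landing bucket into GCS (the bucket names and the use of `boto3` plus `google-cloud-storage` are my assumptions, not anything Adobe provides):

```python
import boto3
from google.cloud import storage

# Placeholders: substitute your own bucket names.
S3_BUCKET = "adobe-datafeed-landing"
GCS_BUCKET = "my-gcs-datafeed"

s3 = boto3.client("s3")
gcs_bucket = storage.Client().bucket(GCS_BUCKET)

# Relay every object Adobe delivered to S3 into the GCS bucket.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=S3_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=S3_BUCKET, Key=key)["Body"]
        # The S3 body is file-like, so it streams straight into GCS.
        gcs_bucket.blob(key).upload_from_file(body)
        print("copied", key)
```

You could run something like this on a schedule (a Cloud Function, a small VM, etc.) so the feed lands in GCS shortly after Adobe delivers it.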
A bit late to the party, but you might find some help setting up your own data feed transfer process and loading the data into BigQuery in Python at https://analyticsmayhem.com/adobe-analytics/data-feeds-google-bigquery/. Let me know if you have any questions.
Hi, I am planning to move to AWS S3 to store files. I have been through the S3 FAQs, but I still want to be sure about a few more things, specifically:
1. How does S3 recover the data if a bucket is lost? Does it keep a back-up of the data as well?
2. My application will not use S3 exhaustively, but what happens if S3 goes down (how does S3 handle availability issues)?
Thanks.
If you want the best chance of retaining your data, you can enable versioning and cross-region replication, so that you can still reach your data even if an entire region goes down. You can refer to this blog post for more information about the feature: https://aws.amazon.com/blogs/aws/new-cross-region-replication-for-amazon-s3/
For availability guarantees, you can refer to the S3 SLA: https://aws.amazon.com/s3/sla/
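As a minimal boto3 sketch of the versioning part (`my-bucket` is a placeholder; cross-region replication additionally needs versioning enabled on both buckets plus an IAM replication role, which is omitted here):

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so overwritten or deleted objects stay recoverable.
s3.put_bucket_versioning(
    Bucket="my-bucket",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# Confirm the change took effect.
print(s3.get_bucket_versioning(Bucket="my-bucket"))
```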
Is there a way to get the deletion history of an AWS S3 bucket?
Problem Statement:
Some S3 folders were deleted. Is there a way to figure out when they were deleted?
There are at least two ways to accomplish what you want to do, but both are disabled by default.
The first one is to enable server access logging on your bucket(s), and the second one is to use AWS CloudTrail.
You might be out of luck if this already happened and you had no auditing set up, though.
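If CloudTrail was already on, a sketch like this can search the 90-day management-event history (note the assumption here: object-level `DeleteObject` events are data events and are only recorded if you explicitly enabled them, while bucket-level events such as `DeleteBucket` are captured by default):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Search the last 90 days of management events for bucket deletions.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}
    ],
    MaxResults=50,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```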
How do I integrate Amazon Cloud Front and S3 in a photo sharing application?
I currently upload to S3 and return the CloudFront URL, but this has not been very successful because there appears to be a latency between S3 and CloudFront such that the returned URL is not immediately valid.
Does anyone know how I can work around this?
Facebook uses Akamai, and if I upload an image there it is immediately available.
Would appreciate some ideas on this.
You must be trying to fetch the object through CloudFront immediately after uploading it. If so, you are hitting the limits of S3's eventual consistency model.
When you upload an object, the change takes a small amount of time to propagate across the S3 service. Generally this is well under one second and is hard to detect (in a previous job, we found we could reasonably guarantee that all files arrived within 10 seconds, and 99.9% within 1 second).
Here's the official word from AWS; it's worth reading the whole page:
> A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list.
There's a much longer discussion on this Stack Overflow question; assuming you are using the us-standard region, you need to change your endpoint slightly to take advantage of the read-after-write consistency model.
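As a sketch of what that endpoint change looked like with boto3 (this reflects the dated us-standard behaviour the links below describe, and `s3-external-1.amazonaws.com` is the historical Northern Virginia endpoint):

```python
import boto3

# Pin the client to the regional endpoint instead of the global
# us-standard one; historically this gave read-after-write
# consistency for newly created objects.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    endpoint_url="https://s3-external-1.amazonaws.com",
)
```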
Further reading:
* Instrumental: Why you should stop using the us-standard Region in S3. Right Now™
* Read-After-Write Consistency in Amazon S3 (from 2009, contains dated info)
One way you can debug/prove this is by calling getObjectMetadata right before your CloudFront call. It should fail in this case.
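In boto3 terms, that debug check is a `head_object` call (the Python equivalent of the Java SDK's `getObjectMetadata`; bucket and key names are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def is_object_visible(bucket, key):
    """HEAD the object right before handing out the CloudFront URL.

    If S3 has not finished propagating the new object, this raises
    a 404 ClientError - the same reason the CloudFront fetch fails.
    """
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            return False
        raise
```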