Integrating Akamai with an S3 bucket

I want to serve the contents stored in my S3 bucket with Akamai, not with Amazon CloudFront.
Is there any way to integrate Akamai with an S3 bucket?

It's quite involved, Sathya, but I suggest you contact a solution architect if you are configuring this for production systems. It's risky if you are doing it for the first time, and things can go wrong. Anyhow, I am writing the steps here, though they won't cover everything.
Go to the Luna Control Center and choose Configure -> Tools -> Edge Hostnames -> Create Edge Hostname.
Make sure you have configured your S3 bucket as a static website; that makes it easy to access (a short S3 sketch follows these steps). The name of the S3 bucket should match the name of the domain or subdomain. Enter the bucket's endpoint or your subdomain name, and Akamai will give you an edge hostname in return. Copy the endpoint generated by Akamai.
Go to Configure -> Property -> Site.
Choose the configuration you want to modify, or create a new configuration from an existing one; you should be careful here. This is where Akamai's people can help you understand and set up the configuration.
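Here is a minimal boto3 sketch of the S3 side of those steps, assuming a hypothetical bucket named static.example.com in eu-west-1 (the Akamai portal steps above have no public API equivalent here):

    import boto3

    s3 = boto3.client("s3")
    bucket = "static.example.com"  # hypothetical; must match your (sub)domain

    # Create the bucket and enable static website hosting on it.
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # The website endpoint you hand to Akamai as the origin looks like:
    # static.example.com.s3-website-eu-west-1.amazonaws.com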

Yes, you can integrate your S3 buckets with Akamai; once you have access to Akamai's Luna Control Center you can do it. I have done it myself. It's better to contact Akamai customer support than to post here.

Related

Amazon S3 - 1 bucket with a folder per sub-domain?

I need to create a service that allows users to publish a static page in a custom subdomain.
I've never done this so excuse me if the question sounds a bit too basic.
To do so, I would like to host all those static files in something like Amazon S3 or Google Cloud Storage to separate them from my server, make them scalable, and secure everything.
While considering Amazon S3 I noticed a user account is limited to 100 buckets. So I can't just use a bucket per customer.
I guess I could use one bucket for multiple users by just creating folders in it and pointing each subdomain at a different folder?
Does this sound like a proper solution to this problem?
I was reading that you can't just point any subdomain at any bucket, and that both names have to be the same? Wouldn't that be a problem here?
You can do it: one bucket, one folder per website. But you would then use AWS CloudFront to serve the data instead of S3 directly: the custom domain would point to CloudFront, and CloudFront would have a different distribution for each website (each distribution's origin being the matching folder under the single bucket). It's not as complicated as it sounds, and it is probably the best way to do what you want.
You are correct, though: there is a 100-bucket limit (without requesting more), and the bucket name must match the domain name exactly (which can be a problem). But those restrictions don't apply if you use the CloudFront solution I mentioned above.
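To make that concrete, here is a minimal boto3 sketch of the per-customer distribution idea. The bucket, folder, and domain names are hypothetical, the config is trimmed to the required fields, and a real setup would also need an ACM certificate attached for the custom HTTPS domain:

    import time

    import boto3

    cloudfront = boto3.client("cloudfront")

    def create_customer_distribution(bucket, folder, custom_domain):
        """One distribution per customer; OriginPath scopes it to that customer's folder."""
        return cloudfront.create_distribution(DistributionConfig={
            "CallerReference": f"{folder}-{int(time.time())}",  # must be unique per request
            "Comment": f"Static site for {custom_domain}",
            "Enabled": True,
            "Aliases": {"Quantity": 1, "Items": [custom_domain]},
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "s3-origin",
                    "DomainName": f"{bucket}.s3.amazonaws.com",
                    "OriginPath": f"/{folder}",  # serve only this customer's folder
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "s3-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "TrustedSigners": {"Enabled": False, "Quantity": 0},
                "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
                "MinTTL": 0,
            },
        })

    # Hypothetical usage: one shared bucket, one folder and one distribution per customer.
    create_customer_distribution("all-customer-sites", "customer-a", "customer-a.example.com")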

Is it best practice to enable both CloudFront and S3 access logs?

We are implementing a static site hosted in S3 behind CloudFront with OAI. Both CloudFront and S3 can have logging enabled and I'm trying to determine whether it is necessary/practical to have logging enabled for both.
The only S3 access that would not come from CloudFront should be site deployments from our CI/CD pipeline, which may still be useful to log. For that exact reason, S3 logs might also help catch any unintended access that came from neither CloudFront nor deployments. The downside, of course, is that there would be two sets of access logs that mostly overlap, adding to the monthly bill.
We will need to make a decision on this, but curious what the consensus is out there.
Thanks,
John
If you are using CloudFront with an Origin Access Identity (OAI), then your bucket can be private to the world.
Only the OAI, and any other users you choose, have read access; everyone else is denied access to the S3 bucket and the files inside it that host the static website.
This means users around the world must come through CloudFront; direct access to the S3 bucket and its files is denied.
So if you have implemented it correctly, you do not need S3 access logging enabled.
However, the value of security is only known once you face a disaster, so weigh that up and make a decision.
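For reference, a minimal boto3 sketch of the kind of bucket policy such a setup relies on; the bucket name and the OAI ID (EXAMPLEID) are placeholders:

    import json

    import boto3

    s3 = boto3.client("s3")

    # Placeholder OAI principal; substitute your OAI's ID for EXAMPLEID.
    oai = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": oai},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-static-site-bucket/*",
        }],
    }
    s3.put_bucket_policy(Bucket="my-static-site-bucket", Policy=json.dumps(policy))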
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

Use S3 for website in multi-regions

I want to host my website in S3 (because it seems cheaper and I don't have any server-side scripts). I have a domain, and I want my domain to point to my S3 website. So far, what I have done is enable static website hosting in my S3 bucket and set a Route 53 record set's Alias Target to my S3 website endpoint. It's working, but it's not good enough: I want it to handle multiple regions.
I know that Transfer Acceleration can speed up access from other regions, but I don't know how to make it work with Route 53. I hear that some people use CloudFront to do this, but I don't quite understand how, and I don't want to manually create buckets in several regions and manually set each one up.
Do you have any suggestions for me?
If your goal is to reduce latency for users worldwide, then Amazon CloudFront is definitely the way to go.
Amazon CloudFront has over 100 edge locations globally, so it has more coverage than merely using AWS regions.
You simply create a CloudFront distribution, point it to your S3 bucket and then point your domain name to CloudFront.
Whenever somebody accesses your content, CloudFront will retrieve it from S3 and cache it in the edge location closest to that user. Then, other users who want the data will receive it from the local cache. Thus, your website appears fast for many users.
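To make the "point your domain name to CloudFront" step concrete, here is a minimal boto3 sketch of the Route 53 alias record; the hosted zone ID, domain, and distribution domain name are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID CloudFront uses for alias targets:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # placeholder: your domain's hosted zone
        ChangeBatch={
            "Comment": "Point the apex domain at the CloudFront distribution",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # constant for CloudFront aliases
                        "DNSName": "d111111abcdef8.cloudfront.net.",  # your distribution
                        "EvaluateTargetHealth": False,
                    },
                },
            }],
        },
    )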
See also: Amazon CloudFront pricing

Amazon web hosting URL not changing

I just hosted a website on AWS in an S3 bucket. When I move around the website, the URL doesn't change, even when a link leads to a page with a different path.
I read around that it has something to do with iframes, even though I'm not sure what they are.
Regardless, I'm just wondering whether it's possible with AWS S3 to make it so that the URL gets updated as I move around the website.
For testing purposes, this is the link to the website, and to go to another part of the website, just scroll down and click on the website image.
Thank you!
I've managed to find out how to connect the S3 web-hosting bucket to the Freenom free domain provider.
The S3 bucket needs to have the same name as your domain plus the "www". In my example my domain was paolo-caponeri.ga, so the bucket needs to be www.paolo-caponeri.ga.
Then, in the Freenom domain manager, you need to go to the name servers section, select "Use default nameservers", and press "Save".
Finally, you need to go to the Freenom DNS manager and add a new CNAME record with "www" on the left and the full S3 website endpoint from the bucket's Amazon S3 properties on the right; in my case it was "www.paolo-caponeri.ga.s3-website.eu-central-1.amazonaws.com".
And that's it; after a while you should be able to reach your website without the URL being masked.
(thank you to Frederic Henri, who got me much closer to the answer!)
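If you want to verify the CNAME once it has propagated, here is a quick hypothetical check in Python (assuming the dnspython package is installed):

    import dns.resolver  # pip install dnspython

    # Should print the S3 website endpoint once the record has propagated.
    for record in dns.resolver.resolve("www.paolo-caponeri.ga", "CNAME"):
        print(record.target)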
NB: I have no experience with Freenom, so this is more advice than a proven solution.
It seems Freenom is doing frame forwarding, whereas you instead need an "A"/"CNAME" referral.
Your site runs fine if you go to http://testpages.paolo.com.s3-website.eu-central-1.amazonaws.com/ and thereby bypass the Freenom redirection.
A quick search suggests this should be possible with Freenom: https://my.freenom.com/knowledgebase.php?action=displayarticle&id=4

S3 Static Website Hosting when Bucketname is taken?

I'm trying to host a simple static website from my AWS account with S3. I had an old, dusty account with lots of strange settings from years of testing, including S3 buckets named 'mypersonaldomain.com' and 'www.mypersonaldomain.com'. Anyway, I wanted to start fresh, so I closed the account to start anew.
Now when I go to create 'mypersonaldomain.com' and 'www.mypersonaldomain.com', it says the bucket names are taken, even though the account was deleted a while ago. I had assumed Amazon would release the bucket names back to the public; however, when I deleted the account, I didn't explicitly delete the buckets beforehand.
I'm under the impression that, to use S3 for static website hosting, the bucket names need to match the domain name for the DNS to work. If I can't create a bucket with the proper name, is there any way I can still use S3 for static hosting? It's just a simple low-traffic website that doesn't need to be on an EC2 instance.
FYI I'm using Route 53 for my DNS.
[Note: 'mypersonaldomain.com' is not the actual domain name.]
One way to solve your problem would be to store your website on S3 but serve it through CloudFront:
You create a bucket on S3 with whatever name you like (no need to match the bucket's name to your domain name).
You create a distribution on CloudFront.
You add an origin to your distribution pointing to your bucket.
You make the default behavior of your distribution to grab content from the origin created on the previous step.
You change your distribution's settings to make it respond to your domain name by adding that name as a CNAME (alternate domain name).
You create a hosted zone on Route 53 and create ALIAS entries pointing to your distribution.
That should work. By doing this, you have the added benefit of potentially improving performance for your end users.
Also note that CloudFront was added to the Free Tier a couple of months ago (http://aws.amazon.com/free).
In my opinion, S3's requirement that bucket names be universally unique across all users is a tremendous design flaw.
I could, for example, create buckets for any well-known company name (assuming they weren't taken), and the legitimate owners of those domains would then be blocked from ever using them for a static S3 website.
I've never liked this requirement (I'm sure there was a legitimate reason for it when S3 was first designed), but I can't imagine that AWS, if it could go back to the drawing board, wouldn't rethink it now.
In your case, with a deleted account, it is probably worth dropping a note to S3 tech support; they may be able to help you out quite easily.
I finally figured out a solution that worked for me. For my apex domain I am using CloudFront, so it isn't a problem that someone already has my bucket name. The issue was the www redirect: I needed a server to rewrite the URL and issue a permanent redirect. Since I am using Elastic Beanstalk for my API, I leveraged Nginx to accomplish the redirect. You can read all the details in my post here: http://brettmathe.com/aws/aws-apex-redirect-using-elastic-beanstalk/
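For reference, a minimal Nginx sketch of that kind of permanent redirect, with a hypothetical domain (the post above has the Elastic Beanstalk specifics):

    # Hypothetical server block: 301-redirect the www host to the apex domain.
    server {
        listen 80;
        server_name www.example.com;
        return 301 https://example.com$request_uri;
    }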