We are using the CloudFront CDN and have connected an S3 bucket to one of its distributions, using an alternate domain name.
We set a folder called AAA/ in the S3 bucket as the origin path, since that is where all of the website's files are hosted. (By the way, the other directories in S3 should not be reachable from this website, since they contain user data.) Everything works up to this point. :)
The problem is that we need to store the URLs of some objects from another S3 directory (called BBB/) in a DynamoDB table as a column. How can we achieve that?
As I understand it, we need to put URLs rooted at the distribution domain name (like https://d359xyz.cloudfront.net/......) into the DynamoDB column. Is that so? How can we achieve it? Can you help? Thanks a lot in advance. :)
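On the DynamoDB side this is just a string attribute; here is a minimal boto3 sketch, where the table name, key name, and object URL are all hypothetical. (One caveat: with AAA/ set as the origin path, a URL path such as /photo.jpg resolves to AAA/photo.jpg, so serving BBB/ objects through the same distribution would need its own origin or cache behavior.)

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("WebsiteAssets")  # hypothetical table name

    # Hypothetical item: one column simply stores the distribution-rooted URL.
    table.put_item(Item={
        "asset_id": "bbb-object-001",                            # hypothetical key
        "file_url": "https://d359xyz.cloudfront.net/photo.jpg",  # hypothetical object
    })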
I need to create a service that allows users to publish a static page on a custom subdomain.
I've never done this, so excuse me if the question sounds a bit too basic.
To do this, I would like to host all those static files in something like Amazon S3 or Google Cloud Storage, to separate them from my server, make the setup scalable, and keep it all secure.
While considering Amazon S3, I noticed an account is limited to 100 buckets, so I can't just use one bucket per customer.
I guess I could use one bucket for multiple users by creating folders in it and pointing each folder at a different subdomain?
Does this sound like a proper solution to this problem?
I've also read that you can't just point any subdomain at any bucket, and that the bucket name has to match the domain name. Wouldn't that be a problem here?
You can do it with one bucket and one folder per website, but you would then use AWS CloudFront to serve the data instead of S3 directly. The custom domain would point to CloudFront, and CloudFront would have a separate distribution for each website, with each distribution's origin path set to the matching folder under the single bucket. It's not as complicated as it sounds, and it is probably the best way to do what you want.
You are correct, though: there is a 100-bucket limit (unless you request more), and for S3 website hosting the bucket name must match the domain name exactly (which can be a problem). Neither restriction applies if you use the CloudFront solution above.
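To make the folder-per-website idea concrete, here is a minimal boto3 sketch of creating one distribution whose origin path is a single customer's folder. The bucket name, folder, and domain are hypothetical, and a real setup would also need a matching SSL certificate and a DNS record.

    import boto3

    cloudfront = boto3.client("cloudfront")

    response = cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": "customer1-site-v1",  # any unique string
            "Comment": "Static site for customer1",
            "Enabled": True,
            "DefaultRootObject": "index.html",
            "Aliases": {"Quantity": 1, "Items": ["customer1.example.com"]},
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "customer1-origin",
                    "DomainName": "sites-bucket.s3.amazonaws.com",  # single shared bucket
                    "OriginPath": "/customer1",  # only this folder is served
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "customer1-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
                "TrustedSigners": {"Enabled": False, "Quantity": 0},
                "MinTTL": 0,
            },
        }
    )
    print(response["Distribution"]["DomainName"])  # point the customer's CNAME here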
I am using ReactJS to develop our website, which I uploaded to an S3 bucket with both the index and error documents pointing to "index.html".
If I use the S3 bucket's website URL, say http://assets.s3-website-us-west-2.amazonaws.com, I am served my index.html. So far, so good. If I then go to a specific subpage by deliberately appending /merchant, it works without any problem, even though there is no folder called /merchant in my S3 bucket.
However, if I attach this S3 bucket to my CloudFront distribution and try to address "https://blah.cloudfront.net/merchant" directly, it responds with "access denied" because it cannot find a subfolder called /merchant in the S3 bucket.
How do people get around this issue with CloudFront? I have so many virtual subpages that don't map to physical folders.
Thank you!
I have the answer.
In CloudFront, set a custom error response that maps the error status (403, and 404 if you use the website endpoint) to /index.html with a 200 response code. Every virtual route is then served by the SPA's entry point, and React's router takes over from there.
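A minimal boto3 sketch of adding that error response to an existing distribution; the distribution ID is hypothetical.

    import boto3

    cf = boto3.client("cloudfront")
    dist_id = "E1234567890ABC"  # hypothetical distribution ID

    # Fetch the current config, add the custom error responses, and write it back.
    cfg = cf.get_distribution_config(Id=dist_id)
    config = cfg["DistributionConfig"]
    config["CustomErrorResponses"] = {
        "Quantity": 2,
        "Items": [
            # S3 returns 403 for unknown keys via the REST origin and 404 via the
            # website endpoint; map both to the SPA entry point with a 200 status.
            {"ErrorCode": 403, "ResponsePagePath": "/index.html",
             "ResponseCode": "200", "ErrorCachingMinTTL": 0},
            {"ErrorCode": 404, "ResponsePagePath": "/index.html",
             "ResponseCode": "200", "ErrorCachingMinTTL": 0},
        ],
    }
    cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=cfg["ETag"])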
I have hosted a React application in an S3 bucket.
I want to access that same application through different domain names.
For Example:
www.example.com, www.demoapp.com, www.reactapp.com
I want to access the S3-hosted application through each of these domains.
How can I do that?
Are there any other Amazon services I need to use?
If you have done something like this, please help me.
Use Amazon CloudFront and you should be able to do this: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
You won't be able to do it as a standard S3 static website, but using CloudFront makes it possible.
We can achieve this using the Amazon CloudFront distribution service (a boto3 sketch of the alternate-domain step follows the list):
1) Create a new CloudFront distribution,
2) select the bucket that you want to use as the origin,
3) add the multiple domain names you want as alternate domain names (CNAMEs),
4) CloudFront will generate a new hostname, something like
abcded.cloudfront.net,
5) add abcded.cloudfront.net as a CNAME record in each of the original domains' DNS.
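A minimal sketch of step 3 with boto3, assuming an existing distribution with a hypothetical ID. Note that serving the aliases over HTTPS would additionally require an ACM certificate covering all three names.

    import boto3

    cf = boto3.client("cloudfront")
    dist_id = "E1234567890ABC"  # hypothetical distribution ID

    # Read the current config, set the alternate domain names, and write it back.
    cfg = cf.get_distribution_config(Id=dist_id)
    config = cfg["DistributionConfig"]
    config["Aliases"] = {
        "Quantity": 3,
        "Items": ["www.example.com", "www.demoapp.com", "www.reactapp.com"],
    }
    cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=cfg["ETag"])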
If you are trying to route multiple domain names to the same S3 bucket, you are probably doing something wrong and need to rethink your strategy.
Domain names are meant to be a unique way of identifying resources on the internet (in this case, your S3 bucket), and there is usually no point in routing multiple domain names to the same bucket.
If you really want to do this, there are a couple of options:
Option one:
The easy option is to point domain1.com at the S3 bucket and then add a redirect rule that sends requests for domain2.com and domain3.com to domain1.com, as sketched below.
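A minimal boto3 sketch of the redirect, assuming the extra domains each have an empty bucket named after them that does nothing but redirect:

    import boto3

    s3 = boto3.client("s3")

    # Each redirect bucket must be named exactly after its domain for S3
    # website hosting, and holds no content of its own.
    for bucket in ["domain2.com", "domain3.com"]:
        s3.put_bucket_website(
            Bucket=bucket,
            WebsiteConfiguration={
                "RedirectAllRequestsTo": {"HostName": "domain1.com", "Protocol": "http"}
            },
        )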
Option two:
Another way is to create three S3 buckets, duplicate the content across all three, and point a different domain name at each one. This, of course, creates a maintenance nightmare: if you change the application in one bucket, you have to make the same change in all the others. If you host your application in Git, you can push to all three S3 buckets at once; a deploy script along the lines of the sketch below would do it.
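A minimal sketch of such a deploy step in boto3, with a hypothetical local build directory:

    import boto3
    from pathlib import Path

    s3 = boto3.client("s3")
    build_dir = Path("build")  # hypothetical local build output

    # Upload every file in the build directory to each of the three buckets.
    for bucket in ["domain1.com", "domain2.com", "domain3.com"]:
        for path in build_dir.rglob("*"):
            if path.is_file():
                s3.upload_file(str(path), bucket, path.relative_to(build_dir).as_posix())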
Again, all of the above is really nothing but a hack around the fact that domain names point to unique resources on the internet, and I can't really see why you would do something like this.
Hope this gives you some insight into how to resolve the issue.
This should be easy, as there is no shortage of pages about custom domains and S3, but for some reason I can't seem to get it to work as expected.
I have an S3 bucket full of videos. The bucket is called, for example, "videos.foo.com". I bought the domain "videos.foo.com" and set it up in Cloudflare with a CNAME for "videos.foo.com" pointing to "videos.foo.com.s3-website-us-east-1.amazonaws.com".
I can view files in my bucket by going to their full URL, such as "videos.foo.com.s3-website-us-east-1.amazonaws.com/myvideo.mpg".
My problem is I can't view them by going to "videos.foo.com/myvideo.mpg".
I tried enabling "Redirect all requests to another host name" and entering "videos.foo.com", but that didn't work either. Note that I will not be hosting a site at "videos.foo.com", just serving files.
All the files have permissions everyone: open/download.
If anyone sees the error in my ways, please let me know. In the meantime I'll keep searching and going through trial and error. Thanks!
An Nginx S3 proxy helps solve this problem; see this answer for more details: https://stackoverflow.com/a/44749584/290338
With a proxy in front, the bucket can be named anything; the only performance rule is to keep the EC2 instance and the bucket in the same region. A sketch of such a proxy follows.
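A minimal nginx server block for this setup, assuming the bucket's website endpoint from the question (note that S3 website endpoints are HTTP-only):

    server {
        listen 80;
        server_name videos.foo.com;

        location / {
            # Forward everything to the S3 website endpoint; the Host header
            # must match the bucket's website hostname.
            proxy_pass http://videos.foo.com.s3-website-us-east-1.amazonaws.com;
            proxy_set_header Host videos.foo.com.s3-website-us-east-1.amazonaws.com;
        }
    }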
You may need to configure the Route 53 record as an alias rather than a CNAME. In R53, edit your record set and configure it like this:
Type: A - IPv4 address
Alias: Yes
When you click in the Alias Target textbox, it will drop down a list that contains your S3 bucket (if it's properly configured). It can take a while to load that list, so be patient. Select your bucket and hit save.
Check out this document for more info: http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
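The same record can be created through the API; a minimal boto3 sketch, where the hosted zone ID for foo.com is hypothetical (the alias target zone ID shown is the fixed one for S3 website endpoints in us-east-1):

    import boto3

    r53 = boto3.client("route53")

    r53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",  # hypothetical: your foo.com hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "videos.foo.com",
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed hosted zone ID for S3 website endpoints in us-east-1
                        "HostedZoneId": "Z3AQBSTGFYJSTF",
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )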
I want to serve the contents stored in my S3 bucket with Akamai, not with Amazon CloudFront.
Is there any way to integrate Akamai with an S3 bucket?
It's quite involved, sathya, so I suggest you contact a solutions architect if you are configuring this for production systems. It's risky if you are doing it for the first time, and things can go wrong. Anyhow, I am writing the steps here, though they won't cover everything.
Go to the Luna Control Center and choose Configure -> Tools -> Edge Hostnames -> Create Edge Hostname.
Make sure you have configured your S3 bucket as a static website; it becomes much easier to access that way. The name of the S3 bucket should be the name of the domain or subdomain. Enter the bucket's endpoint (or your subdomain name), and Akamai will generate an edge hostname. Copy the edge hostname generated by Akamai.
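For the static-website prerequisite, here is a minimal boto3 sketch, with a hypothetical bucket named after the subdomain Akamai will serve:

    import boto3

    s3 = boto3.client("s3")

    # Enable static website hosting on the bucket; the index and error
    # document names are the usual defaults and are assumptions here.
    s3.put_bucket_website(
        Bucket="videos.example.com",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )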
Go to Configure -> Property -> Site.
Choose the configuration you want to modify, or create a new configuration from an existing one; you should be careful here. This is where Akamai's people can help you understand and set the configuration.
Yes, you can integrate your S3 buckets with Akamai. Once you have access to Akamai's Luna Control Center you can do it; I have done it myself. It's better to contact Akamai customer support than to post here.