I have an S3 bucket which I'm using to store images uploaded by users. I save the paths to those images in my database; each path looks like this: https://bucket-name.s3.eu-central-1...
I then added an image-resizing feature which requires the S3 bucket to be configured as a static website, so that redirection rules can be used.
Apparently, enabling static website hosting made it impossible to download those pictures over HTTPS using the old paths, and because my website uses HTTPS I can't make HTTP requests. So now none of the users' profile pictures are displayed at all.
I'm looking for a solution to this problem. I can change pictures' paths stored in the database if needed.
One possible solution I have in mind is using a subdomain with CloudFront, e.g. pictures.my-website.com/name-of-the-picture.png
Do you think it'll work and is a good solution, or is there a better way?
Use CloudFront as a front end with a custom domain and SSL certificate: https://medium.com/@tsubasakondo_36683/serve-images-with-cloudfront-s3-8691d5c387b6
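If you do move the images behind a CloudFront alias such as pictures.my-website.com, the paths already stored in the database need a one-off rewrite. A minimal sketch in Python, assuming a hypothetical Profile model with a picture_url field; the host names are placeholders for your actual values:

from urllib.parse import urlparse

OLD_HOST = "bucket-name.s3.eu-central-1.amazonaws.com"  # assumed old S3 host
NEW_HOST = "pictures.my-website.com"                    # CloudFront alias

def rewrite_url(url):
    # Swap the S3 host for the CloudFront subdomain, keeping the object key
    parsed = urlparse(url)
    if parsed.netloc == OLD_HOST:
        return "https://" + NEW_HOST + parsed.path
    return url  # leave unrecognized URLs untouched

# Hypothetical Django usage:
# for profile in Profile.objects.all():
#     profile.picture_url = rewrite_url(profile.picture_url)
#     profile.save(update_fields=["picture_url"])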
I have followed everything from the above link and hosted my website on Google Cloud. My static website contains multiple pages (5 pages). On the hosted website I can't find the images or any of the HTML pages except "index.html".
Can anyone please let me know how to host a static website with multiple pages, and how to keep the website secure? It would be very helpful.
https://codelabs.developers.google.com/codelabs/cloud-webapp-hosting-gcs#5
I think the reason the images are not loading is the file paths: examine the paths for your files, and also be sure that you spelled the image names correctly.
Refer to this link for more details.
Since you didn't provide any more details I can only give you some pointers on what to focus on.
Create a bucket named www.yourdomain.com - the www part is important, unless you want to use just yourdomain.com.
Change access permissions so everyone can read its contents.
Upload your files to the main directory. When someone accesses the site, the first file GCP will look for is index.html, so make that your home page. Make sure you uploaded all of your pages. If the images in your pages are stored in a folder (img, images, etc.), upload that folder with the files inside to your bucket; from your description it looks like either you're missing them or they are in the wrong folder. (See the sketch after these steps.)
Obtain your own SSL certificate or use GCP's managed certificate (free of charge)
Set up a load balancer
Point your domain to your LB's external IP
At that point you're ready to go and your site is up & running.
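For the bucket, permissions, and upload steps above, here is a minimal sketch using the google-cloud-storage Python client; the bucket name and file list are placeholders, and it assumes credentials that can administer the bucket:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("www.yourdomain.com")  # bucket named after your domain

# Serve index.html as the home page and 404.html for missing objects
bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
bucket.patch()

# Make every object publicly readable
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({"role": "roles/storage.objectViewer", "members": {"allUsers"}})
bucket.set_iam_policy(policy)

# Upload pages plus the images folder, preserving relative paths
for local_path in ["index.html", "about.html", "images/logo.png"]:
    bucket.blob(local_path).upload_from_filename(local_path)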
If this is the first time you're doing it, I recommend starting by reading the official documentation on how to set up a static website in GCP.
You can check that the site is running correctly without the load balancer by using a link in the format https://storage.googleapis.com/my-bucket/my-object. Have a look at the troubleshooting guide for static websites to get more insight.
Have a look at my other answer covering this topic: https://stackoverflow.com/a/64442826/12257250
Alternatively you can try hosting your site using firebase.
On a Django 1.9 project I need to redirect:
https://example.com/app/
to
https://examplebucket.s3.amazonaws.com/app/index.html
But I need https://example.com/app/ to still be visible in the browser address bar...
I know this must be possible in theory with Django, because the previous team working on this project set things up to serve the /static/ media files from an S3 bucket. If I access those static files via https://example.com/static/app/index.html, they are served from the S3 bucket but the browser address bar still shows the original URL I typed.
I'm deploying an Ionic Browser project and I want the files (including the index) to be served from S3, but the URL needs to be user friendly; that's the reason.
The old (dirty) way of doing this is frame-based forwarding.
You set up an iframe on a page in /app/ which points at the real app, letting the url stay the same.
It's not considered good practice because of security issues (users can't be sure where they are typing credentials) and bookmarking issues (the URL is always the same, so inner pages can't be bookmarked).
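A minimal sketch of that approach as a Django 1.9 view; the bucket URL comes from the question, while the view name and URL pattern are assumptions:

from django.http import HttpResponse

# The browser stays on https://example.com/app/ while the iframe loads the S3 page
IFRAME_PAGE = """<!DOCTYPE html>
<html>
<head><style>html, body, iframe { margin: 0; width: 100%; height: 100%; border: 0; }</style></head>
<body><iframe src="https://examplebucket.s3.amazonaws.com/app/index.html"></iframe></body>
</html>"""

def app_frame(request):
    return HttpResponse(IFRAME_PAGE)

# urls.py (Django 1.9 style):
# url(r'^app/$', app_frame)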
Another alternative is to set up a proxy script that takes the URL, turns it into the equivalent AWS URL, downloads the content, and returns it. This negates the benefits of your cloud hosting if it serves multiple regions: every request would pass through the bottleneck of your server.
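A minimal sketch of such a proxy view in Django using the requests library; the path prefix and bucket are assumptions, and note that every request now flows through your server:

import requests
from django.http import HttpResponse

S3_BASE = "https://examplebucket.s3.amazonaws.com/app/"

def s3_proxy(request, path):
    # Default to index.html so /app/ serves the app entry point
    upstream = requests.get(S3_BASE + (path or "index.html"))
    return HttpResponse(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/octet-stream"),
    )

# urls.py (Django 1.9 style):
# url(r'^app/(?P<path>.*)$', s3_proxy)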
I'm using github pages with a custom domain to publish my website, and I'm serving all images from my public google drive folder (because the images are licensed differently than the content served from my github repo).
Now I would like to assign a subdomain to the google drive folder: static.mydomain.com, but I don't have access to anything but the DNS settings for my own domain, mydomain.com (so no .htaccess or anything).
Is it possible to redirect the subdomain static.mydomain.com to my Google Drive folder, so that static.mydomain.com/path/to/image.jpg points to http://googledrive.com/host/folder-id/path/to/image.jpg?
P.S.: Of course I would like to do all this without affecting the redirect that github pages requires for my custom domain. Although I don't think that will be an issue. But just as an aside.
It's no longer possible to do that; see https://stackoverflow.com/questions/13636286/using-a-custom-domain-for-google-drive-public-folder-website.
I'm looking for a combination of policies to access a static website in a S3 bucket only with a certain token/sign string.
I mean, is it possible to make the static website not readable by everyone by default, but temporarily accessible with something like http://mybucket.s3-website-location.amazonaws.com/myfolder/index.html?sign=XXXXX?
With this call you should also have access to the whole tree under the "myfolder" folder.
I don't think it's possible. Think about how you would do that on a regular website: you would need to read the query string and then do some sort of lookup/logic to determine whether the token was valid, i.e. you need server-side processing to carry out that logic.
Once you need server-side logic, you no longer have a 'static' website (even though you may ultimately be serving static pages). S3 may not be the right solution for you in this case.
From the AWS docs: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
You can host a static website on Amazon S3. On a static website, individual web pages include static content. They may also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting.
You can only do this for a single URL at a time, using a signed S3 URL with an expiration time. There is no way to create a signature that can be appended to any of a group of URLs that will make them all work with the signature, but not work without it.
Sorry.
However, this is fairly easy to do with an actual website as a front end. You'd have to code the website to redirect every request to a signed URL specific to that object. To do that, you'd need an EC2 instance that runs the code you write. But as of now, S3 doesn't have a way to do this all by itself.
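For the single-URL case, generating a signed link is straightforward with boto3; a minimal sketch, with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "myfolder/index.html"},
    ExpiresIn=3600,  # link expires after one hour
)
print(url)  # the object URL with X-Amz-* signature query parameters appended

Anyone holding that URL can fetch that one object until it expires; every other object under myfolder/ still needs its own signature.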
My main domain is 'btaylorweb.com'. I have a subdomain 'static.btaylorweb.com' that uses a CNAME to point to my CloudFront URL.
TinyMCE is loading just fine from S3, however, my popups are blank. I've set the domain as such:
document.domain = 'btaylorweb.com';
in tiny_mce_popup.js and in tiny_mce.js, but that's still not working. Can anyone please point out what I'm doing wrong?
I ended up leveraging the Image plugin in DjangoCMS, which can be used in conjunction with django-storages to push files directly to the S3 bucket. It works, but the Image plugin isn't quite as nice as seeing the images inline with the rest of the content.
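For reference, the django-storages side is only a few settings; a hedged sketch of settings.py with placeholder bucket and region values:

# settings.py (sketch; values are placeholders)
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-media-bucket"
AWS_S3_REGION_NAME = "us-east-1"
# Credentials typically come from environment variables or an instance role:
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY

With that in place, image uploads from the CMS plugin land directly in the bucket.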