There are many questions regarding this topic, but I didn't find an exact answer to my problem. I am building a Django app where users can store photos, and the photos are private. When showing the thumbnails, I have to generate a signed URL from S3 for each one, but that takes quite a long time. Is there a better option that I am missing?
Sorl-Thumbnail has the same problem; I looked into it and didn't find a better option.
Is there any other option where the source URL of the image isn't visible? That might also work.
Please help me as soon as possible.
It's not entirely clear what you are asking, but it sounds like some of the slowness comes from generating thumbnails on the fly. If that's the case, you could improve performance by pre-generating the thumbnails: create them when you store the image instead of when you request it.
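Here's a minimal sketch of that idea, assuming a hypothetical Photo model with Pillow installed; the field names and thumbnail size are illustrative:

```python
import os
from io import BytesIO

from django.core.files.base import ContentFile
from django.db import models
from PIL import Image


class Photo(models.Model):
    image = models.ImageField(upload_to="photos/")
    thumbnail = models.ImageField(upload_to="thumbnails/", blank=True)

    def save(self, *args, **kwargs):
        if self.image and not self.thumbnail:
            # Build the thumbnail once, at upload time, instead of per request.
            img = Image.open(self.image).convert("RGB")
            img.thumbnail((200, 200))
            buffer = BytesIO()
            img.save(buffer, format="JPEG")
            name = f"thumb_{os.path.basename(self.image.name)}"
            self.thumbnail.save(name, ContentFile(buffer.getvalue()), save=False)
        super().save(*args, **kwargs)
```

With this in place, the request path only ever signs a URL for an already-existing thumbnail; no image processing happens per request.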
If the concern is the cost of signed requests because your images are private in S3, I don't see much room for improvement without changing your application design. In some cases, it's acceptable to use hard-to-guess URLs for private photos; I believe Flickr does that, for example.
I want to develop an app for a friend's small business that will store and serve media files. However, I'm afraid of a piece of media going viral, or of getting DDoSed. The bill could go up quite easily with a service like S3, and I really want to avoid surprise expenses like that. Ideally I'd like some kind of max-bandwidth limit.
Now, a solution for this on S3 has been posted here.
But it does require quite a few steps, so I'm wondering if there is a cloud storage solution that makes this simpler, i.e., where I don't need to create a custom microservice. I've talked to DigitalOcean's support and they don't support this either.
So, in the interest of saving time, and perhaps to help anyone else in a similar dilemma, I want to ask this question here. I hope that's okay.
Thanks!
Not an out-of-the-box solution, but you could:
Keep the content private
When rendering a web page that contains the file or links to the file, have your back-end generate an Amazon S3 pre-signed URL to grant time-limited access to the object
The back-end could keep track of the "popularity" of the file and, if it exceeds a certain rate (e.g., 1,000 requests over 15 minutes), instead point to a small file with a "please try later" message (a rough sketch of this follows below)
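A rough sketch of that rate-limiting idea, assuming boto3 and Django's cache framework; the bucket name, threshold, and fallback key are all illustrative:

```python
import boto3
from django.core.cache import cache

s3 = boto3.client("s3")
BUCKET = "my-media-bucket"   # hypothetical bucket name
LIMIT = 1000                 # max requests per window
WINDOW = 15 * 60             # window length in seconds


def media_url(key: str) -> str:
    counter = f"hits:{key}"
    hits = cache.get_or_set(counter, 0, timeout=WINDOW)
    cache.incr(counter)
    if hits >= LIMIT:
        # Too popular right now: serve a tiny "please try later" object
        # instead of the real (potentially large) file.
        key = "please-try-later.txt"
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # URL stops working after five minutes
    )
```

The counter resets when the cache key expires, so a burst of popularity only hands out the fallback object for the remainder of that window.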
There are cases in a project where I'd like to store images on a model.
For example:
Company Logos
Profile Pictures
Programming Languages
Etc.
Recently I've been using AWS S3 for file storage (primarily hosting on Heroku) via ImageField uploads.
I feel like there's a better way to store files than what I've been doing.
For some things (like the examples above) I think it would make sense to just reference an image from a publicly available URL rather than take up space in my own storage.
For the experts in the Django community who have built and deployed really professional projects, do you typically store files directly in the Django media folder via ImageField?
Or do you normally use a URLField and pull a URL from an API or an image link from the web (e.g., right-click any Google image and copy the image URL)?
Bonus: What does your image storing setup look like?
Hope this makes sense.
Thanks in advance!
The standard is what you've described: using something like AWS S3 to store the actual image and keeping the URL in your database (a minimal setup sketch is at the end of this answer). Here are a few reasons why:
It's cheap. Like, really cheap.
Instead of making your web server serve the files, you're offloading that onto the client (e.g. their browser grabbing the file from S3)
If you're using a platform with an ephemeral filesystem (like Heroku), your only option is to use something like S3.
Control. Sure, you can pull an image link from somewhere else that isn't managed by you. But this does not scale. What happens if that server goes offline? What if they take that image down? This way, you control what happens to the objects.
An example of a decently large internet company (though not large enough to run its own infrastructure like Facebook/Instagram or Google) is VSCO. They take a decent number of photo uploads every day and handle them with AWS.
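For concreteness, here's a minimal setup along those lines using django-storages; the bucket name and model are placeholders, and credentials are assumed to come from the environment or an IAM role:

```python
# settings.py: route default file storage to S3 via django-storages.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-app-media"  # hypothetical bucket
AWS_S3_REGION_NAME = "us-east-1"

# models.py: the ImageField now uploads straight to S3; the database row
# stores only the file's key.
from django.db import models


class Company(models.Model):
    name = models.CharField(max_length=100)
    logo = models.ImageField(upload_to="logos/")
```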
I recently started creating a website with Angular and Django. It is to be an online bookstore or e-library, something like Amazon Kindle. My problem is that I found out it's not advisable to store ebooks in a database, but I need a way for users to get these ebooks, and for admins to be able to upload them to some sort of file system, since the database is not an option. Please, is there any way I can accomplish this on my site?
I have checked the internet but haven't seen anything helpful; maybe I am searching wrong or something, but I will really appreciate any advice.
I would also like to know if there is any API that can help me add books to my website, at least to fill in some space until actual ebooks are uploaded.
Any advice will really help...
First, you never want to store binary data of any sort in the database. You use a storage backend, and the database refers to that storage instead. I think you need to work out how to achieve that first and then proceed with the rest.
Check out Amazon S3 and https://pypi.org/project/django-storages/. A minimal model sketch is below.
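For example, a minimal model along those lines (names are illustrative):

```python
from django.db import models


class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.CharField(max_length=200)
    # The file's bytes live in the storage backend (e.g., S3 via
    # django-storages); the database stores only a reference to them.
    ebook = models.FileField(upload_to="ebooks/")
    cover = models.ImageField(upload_to="covers/", blank=True)
```

Registering this model with the Django admin then gives your admins an upload form for free.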
We currently host online tutorials on our website, embedding the videos using YouTube.
However, I have been asked to secure the video links so users need to authenticate in order to view the videos and, once authenticated, cannot copy the video link and share it with others, as these will be paid tutorials.
We use AWS to store our other assets (website images, documents, etc.) and want to use AWS to store our videos as well.
Does anyone know the best way to secure these links so they can only be used from within our website and not be able to share the video links?
First of all, think about how much effort you want to put into solving a problem that the world has failed to solve in the last 40 years. We had VHS, and everyone could copy everything. We had CDs and DVDs with copy protection. Blu-rays can be, and are, ripped too. If you consider how easily a book can be copied, it is a problem we have failed to solve for 2000+ years.
Have you played with youtube-dl? Have you seen how easy it is to download things from YouTube once you get access? And I could always use a screen recorder tool to capture the screen if all else fails.
Given how easy it is to bypass copy protection, how much time do you want to spend solving the impossible? Do you want to make the code more complex and the architecture crappier (and the usability worse) along the way?
If history has shown anything, it is that legal measures are the only way to protect against piracy. So you have two options here: pretend to do something to protect the videos, knowing you will fail, or talk to the managers and convince them that there are better ways of spending the money.
By default, all objects in the bucket are private.
A pre-signed URL may solve your current problem.
Have a look at the links below:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
https://www.msp360.com/resources/blog/s3-pre-signed-url-guide/
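For example, a minimal boto3 sketch (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that grants temporary read access to a private object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-video-bucket", "Key": "tutorials/lesson-01.mp4"},
    ExpiresIn=3600,  # the link stops working after one hour
)
```

Your authenticated page would embed this URL in the player; once it expires, a shared link no longer works.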
I'm new to caching. I'm currently working on a small project with Django and will be implementing caching later via memcached.
I have a page with a video on it and the video has a bunch of comments. The only content on the page that is likely to change regularly is the comments and the "You are logged in as.../You are not logged in..." message.
I was thinking I could create a JSON file that serves the username and most recent comments, including it in the head with <script src="videojson.js"></script>. That way I could populate the HTML via JavaScript instead of caching the whole page on a per-user basis.
Is this a suitable approach, or is the caching system smarter than I give it credit for?
How is the JavaScript going to get the JSON object? Are you going to serve it from a Django view that the JavaScript calls? And in that view, will you just pull from memcached if available and from the DB if not?
That seems reasonable, assuming your JSON isn't very big. If your comments change a lot and you have to spend a lot of time querying the DB, building the JSON object, and saving it to memcached every time a new comment is written, it won't work well. But if you only fill the cache when your JSON expires, and you don't care about having the latest and greatest comments there instantly, it should work. A sketch of that expiry-based approach is below.
One thing to point out: if you aren't getting much traffic now, you might be adding a level of complexity that won't give you much return on the time spent. But if you are using this to learn how to do caching, it is a good exercise.
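Something like this rough sketch, assuming a hypothetical Comment model and Django's cache framework (the 60-second timeout is illustrative):

```python
from django.core.cache import cache
from django.http import JsonResponse

from myapp.models import Comment  # hypothetical app/model


def video_comments(request, video_id):
    key = f"comments:{video_id}"
    data = cache.get(key)
    if data is None:
        # Cache miss: query the DB once, then serve from memcached until
        # the entry expires.
        data = {
            "comments": list(
                Comment.objects.filter(video_id=video_id)
                .order_by("-created")
                .values("author", "body", "created")[:20]
            )
        }
        cache.set(key, data, timeout=60)  # comments may be up to 60s stale
    # The per-user bit stays out of the cache entirely.
    data["username"] = (
        request.user.username if request.user.is_authenticated else None
    )
    return JsonResponse(data)
```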
Hope that helps