I recently started building a website with Angular and Django. It is meant to be an online bookstore or e-library, something like Amazon Kindle. My problem is that I found out it's not advisable to store ebooks in a database, but I need a way for users to get these ebooks and for admins to upload them to some sort of file system, since the database is not an option. Is there any way I can accomplish this on my site?
I have checked the internet but haven't found anything helpful; maybe I am searching wrong, but I will really appreciate any advice.
I would also like to know if there is any API that can help me add books to my website, at least to fill in some space until actual ebooks are uploaded.
Any advice will really help...
First, you will never want to store binary data of any sort in the database. You will use a storage service, and the database will refer to that storage instead. I think you need to see how you can achieve that first and then proceed with the rest.
Check out Amazon S3 and https://pypi.org/project/django-storages/
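As a minimal sketch of what that could look like with django-storages backed by S3 (the bucket name, model, and upload path here are hypothetical, just for illustration):

    # settings.py -- assumes django-storages and boto3 are installed
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = "my-ebook-bucket"  # hypothetical bucket

    # models.py
    from django.db import models

    class Ebook(models.Model):
        title = models.CharField(max_length=255)
        # The file bytes live in S3; the database row only stores the key.
        file = models.FileField(upload_to="ebooks/")

With this setup, an admin upload through the Django admin goes straight to the bucket, and ebook.file.url gives you a link users can download from.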
Related
I want to develop an app for a friend's small business that will store and serve media files. However, I'm afraid of a piece of media going viral, or of getting DDoS'd. The bill could climb quite easily with a service like S3, and I really want to avoid surprise expenses like that. Ideally I'd like some kind of max-bandwidth limit.
Now, a solution for this on S3 has been posted here.
But it does require quite a few steps, so I'm wondering if there is a cloud storage solution that makes this simpler, i.e. where I don't need to create a custom microservice. I've talked to the support on Digital Ocean and they don't support this either.
So, in the interest of saving time, and perhaps for anyone else who finds themselves in a similar dilemma, I want to ask this question here. I hope that's okay.
Thanks!
Not an out-of-the-box solution, but you could:
Keep the content private
When rendering a web page that contains the file or links to it, have your back-end generate an Amazon S3 pre-signed URL to grant time-limited access to the object
The back-end could keep track of the "popularity" of the file and, if it exceeds a certain rate (e.g. 1000 requests over 15 minutes), instead point to a small file with a "please try later" message (a rough sketch of this follows below)
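A rough sketch of that idea in Python with boto3 (the bucket, object keys, and the in-memory counter are all hypothetical; a real deployment would track counts somewhere shared, like Redis):

    import time
    import boto3

    s3 = boto3.client("s3")

    hits = {}             # object key -> recent request timestamps (toy counter)
    WINDOW = 15 * 60      # 15 minutes
    LIMIT = 1000          # max requests per window

    def get_download_url(bucket, key):
        now = time.time()
        recent = [t for t in hits.get(key, []) if now - t < WINDOW]
        recent.append(now)
        hits[key] = recent
        if len(recent) > LIMIT:
            # Too popular right now: hand out a tiny "please try later" object instead.
            key = "please-try-later.txt"
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=300,  # URL stops working after 5 minutes
        )

Since every download has to go through your back-end to get a fresh URL, this also gives you a single place to enforce whatever bandwidth policy you settle on.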
We currently host online tutorials on our website, embedding the videos using YouTube.
However, I have been asked to secure the video links so that users need to authenticate in order to view the videos and, once authenticated, are not able to copy the video link and share it with others, as these will be paid tutorials.
We use AWS to store our other assets (website images, documents, etc.) and want to use AWS to store our videos now as well.
Does anyone know the best way to secure these links so they can only be used from within our website and cannot be shared?
First of all, think about how much effort you want to put into solving a problem that the world has failed to solve in the last 40 years. We had VHS, and everyone could copy everything. We had CDs and DVDs with copy protection. Blu-rays can be, and are, ripped too. If you consider how easily a book can be copied, it is a problem we have failed to solve in the last 2000+ years.
Have you played with youtube-dl? Have you seen how easy it is to download things from YouTube once you get access to them? And I could always use a screen recorder to capture the screen if all else fails.
Given how easy it is to bypass copy protection, how much time do you want to spend on solving the impossible? Do you want to make the code more complex and the architecture more crappy (and the usability worse) along the way?
If history has shown anything, it is that legal measures are the only way to protect against piracy. So you have two options here: pretend to do something to protect the content, knowing you will fail, or talk to the managers and convince them that there are better ways of spending the money.
By default, all objects in the bucket are private.
A pre-signed URL may solve your current problem.
Have a look at the links below:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
https://www.msp360.com/resources/blog/s3-pre-signed-url-guide/
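For reference, a minimal sketch of generating such a URL with boto3, after your own application has authenticated the user (the bucket name and key are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    def video_url_for_user(user_is_authenticated, key):
        # Only hand out a URL once your own auth check has passed.
        if not user_is_authenticated:
            raise PermissionError("login required")
        # A short expiry means a copied link stops working quickly.
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-video-bucket", "Key": key},
            ExpiresIn=60,
        )

The objects stay private, so the only way to watch a video is to be logged in to your site at the moment the URL is minted.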
Hello, and thanks for viewing my question!
I am running Amazon DynamoDB locally, and all databases are saved locally. With the local DynamoDB I have to inspect everything with a lot of code, but I feel the interface of the web service is much better: there I can perform operations and see the tables directly and clearly.
So may I ask how I can connect them, so that I can practice the coding and check the status easily?
Looking forward to your reply and thank you so much!
Sincerely
You cannot connect them, as they are completely separate databases. However, you can put a simple user interface on top of your local DynamoDB database.
I use the SQLite Browser: http://sqlitebrowser.org/. Once you have it installed, open the .db file located in the folder where you are running DynamoDBLocal.jar. You should be able to see all your tables and the data within them. You won't be able to see DynamoDB specific things like your provisioned capacity, but I think this will give you enough of what you're looking for.
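If you also want a quick programmatic peek, a few lines of boto3 pointed at the local endpoint will list your tables and dump their contents (this assumes DynamoDB Local is listening on its default port 8000; the table name is hypothetical):

    import boto3

    # Point the SDK at DynamoDB Local instead of the real AWS endpoint.
    dynamodb = boto3.resource(
        "dynamodb",
        endpoint_url="http://localhost:8000",
        region_name="us-west-2",        # required by the SDK, arbitrary locally
        aws_access_key_id="fake",       # DynamoDB Local accepts any credentials
        aws_secret_access_key="fake",
    )

    print([t.name for t in dynamodb.tables.all()])   # list all local tables
    print(dynamodb.Table("Books").scan()["Items"])   # dump one table's rows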
Does this help?
This is just a general question, but I'm wondering: is there an existing API that provides the current price for an item on Amazon? As in, if the price changes, my site would reflect that change as well.
If not, would building a web crawler to go through and find the Amazon items of my choice be the best way to build my own version of this? If so, what language would you recommend for starting this sort of project?
I'm not sure if I should have actually asked this in SuperUser but I appreciate the input. Thanks guys!
There are plenty of web crawling services for this task.
https://import.io/
https://www.kimonolabs.com/
http://www.diffbot.com/
If you want to make your own, I recommend node.js because of its asynchronous behavior.
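Whatever language you pick, the core loop is the same: fetch the product page, parse the price out of the HTML, repeat on a schedule. A minimal sketch (shown in Python with requests and BeautifulSoup just to illustrate the shape; the URL and CSS selector are hypothetical placeholders, and Amazon's real markup is more involved):

    import requests
    from bs4 import BeautifulSoup

    def fetch_price(url):
        # Fetch the product page and pull the price out of the HTML.
        resp = requests.get(url, headers={"User-Agent": "price-checker/0.1"})
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Hypothetical selector: inspect the real page to find the right one.
        tag = soup.select_one("span.price")
        return tag.get_text(strip=True) if tag else None

    print(fetch_price("https://example.com/some-product"))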
I need to write an API that provides access to data being served as HTML documents from a web server. My users need to be able to perform queries over the data.
Say a web site has a page which lists items and their owners. There is also a set of profile pages which, for each owner, provide information about their reputation. An example query I may need to answer is: "Give me the IDs and owners of all items submitted in 2013 whose owners have a reputation of at least 10."
Given a query to answer, I need to be able to screen-scrape only the parts of the web site needed to answer the query at hand, and ideally cache the obtained information for future use with new queries.
I have no problem writing the screen-scraping part, but I am struggling with designing the storage/query/cache part. Is there something about Clojure/Datomic that makes it an especially suitable technology choice for this kind of data processing? I have been pointed in this direction before.
It seems a nice challenge, but I am not sure about a few things: a) would you like to expose a Datalog query box to your users, and so make them learn a Datalog-like syntax? b) what exact kind of results do you wish to cache: raw DB responses, HTML-formatted text, JSON?
Anyway, I suggest you install and play a little bit with the Datomic console to get a grasp of it, if you haven't before, as it seems to me the closest thing to what you want to achieve at the moment: https://www.youtube.com/watch?v=jyuBnl0XQ6s http://blog.datomic.com/2013/10/datomic-console.html
For the API, I suggest http://clojure-liberator.github.io/liberator/ as it provides sane defaults for implementing REST services and lets you focus on your app's behaviour.