I am using AWS S3 to store private MP4 files that are rendered on my site. I also have an AWS CloudFront distribution to speed up content delivery. The S3 bucket has a policy restricting access to my site, plus an Origin Access Identity (OAI) so the content can only be reached through the distribution.
The problem I'm facing is that my videos can still be downloaded with a browser extension, even though the absolute path of the video is blocked outside of the site. Is there anything I can do to avoid this?
Any help/direction would be appreciated.
If the browser needs to play the video then it will need to download it.
As you say, it is not that hard to download or capture the file, so you have to consider what your goals are.
The usual approach is to accept that it can be downloaded and encrypt the file so that only users with access to the decryption key can play back the content.
The tricky part then becomes how to securely share the decryption key with authorised users in a way that neither they nor a third party can view or share the key. This is the essence of nearly all common DRM systems.
You can use a proprietary way to share the key securely, even something as simple as via some other communication channel, if this addresses your requirements. It will likely not be leveraging the full security capability of the devices, such as a secure media path, but it may be enough for your needs.
If not, then you will probably want to look at one or more of the common DRM systems in use today. You generally need several to cover all devices and clients: Widevine for Android, Chrome, etc.; FairPlay for Safari and iOS; and PlayReady for Xbox, Edge, etc.
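As an illustration of the basic "encrypt the file, guard the key" idea, here is a minimal sketch using Python's cryptography package; the file names are hypothetical, and real DRM systems encrypt at the media-segment level (Common Encryption) rather than the whole file:

```python
from cryptography.fernet import Fernet

# Generate a key; share it only with authorised users via your
# secure channel of choice.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the video before uploading it to S3 (hypothetical file names).
with open("video.mp4", "rb") as f:
    encrypted = fernet.encrypt(f.read())
with open("video.mp4.enc", "wb") as f:
    f.write(encrypted)

# Only a client holding the key can restore the playable file.
with open("video.mp4.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
```

Anyone can still download video.mp4.enc, but without the key it is useless, which is exactly the trade-off described above.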
This seems like something that happens often enough that there might already be some provision for doing it that I'm not aware of...
Users of our app download our installer package via a link on our site that points to the file hosted in an S3 bucket on AWS. Once installed, our app uses the same (hard-coded) URL to download and install updates when they become available.
One downside to this is that it requires that the download URL be static, and thus it can't contain any version information.
If we were hosting the download on our own (configurable) web server, I'd know how to set up a redirect from https://.../Foo_Latest to https://.../Foo_v1.0.13, and we could manage the alias in that one place.
But since we upload new releases to an S3 bucket on AWS, I'm wondering whether there's not some existing capability on AWS to alias the URL. It seems like this might be a common enough use case that there's some solution already in place for doing this?
Of course, we could just have the static URL point to a server we control and do the redirect there to the AWS URL. But that feels like it would somewhat defeat the high-availability and high-bandwidth benefits of using AWS...
Am I missing anything?
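For what it's worth, S3 can serve such a redirect itself when the bucket is configured for static website hosting: an empty placeholder object with the x-amz-website-redirect-location metadata makes the website endpoint answer with a 301. A minimal sketch with boto3, where the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# "Foo_Latest" is an empty placeholder object whose only job is to
# redirect; update WebsiteRedirectLocation on each release.
s3.put_object(
    Bucket="my-downloads-bucket",            # hypothetical bucket
    Key="Foo_Latest",
    WebsiteRedirectLocation="/Foo_v1.0.13",  # points at the current release
)
```

Note that the redirect is only honored when the object is requested through the bucket's website endpoint, not the regular REST endpoint.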
What would be the best approach to updating a static website hosted in an S3 bucket such that there is no downtime? The updates will be done by a company's marketing teams with zero knowledge of CLI commands or how to move around in the console. Are there ways to achieve this without them having to learn the console?
Edit
The web site is a collection of static HTML pages and will be updated using an HTML editor. Once edited, the marketing team will upload each individual updated file to the S3 bucket. There are no more than 10 such files, including HTML and images. The site is currently hosted on a shared server, and we now want to move it to an S3 bucket capable of hosting simple web pages. The preference is not to provision console access for these users, as they are comfortable only with a WYSIWYG HTML editor and uploading via an FTP client. The editors don't know HTML, and the site doesn't use JavaScript. I am thinking of writing a batch script to manage the uploads, keeping all the CLI complexity away so they only work on the HTML in the editor. Looking for the simplest approach to achieve this.
You can set up a CI/CD pipeline as described here: Update static website in an automated way
A pipeline like that generally uses a code commit as its trigger. It depends on what your marketing teams are doing with the content and how they are updating it; if they are updating content that is hosted on AWS, you can change the trigger to S3 updates. The solution depends on the individual use case and may require some development on your side to make it simpler for your marketing teams.
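For example, the "batch script" idea from the question's edit could be a small script the marketing team runs after editing; a minimal sketch with boto3, where the bucket name and local folder are assumptions:

```python
import mimetypes
import os

import boto3

BUCKET = "my-static-site-bucket"  # hypothetical bucket name
SITE_DIR = "site"                 # local folder holding the ~10 HTML/image files

s3 = boto3.client("s3")

for name in os.listdir(SITE_DIR):
    path = os.path.join(SITE_DIR, name)
    if not os.path.isfile(path):
        continue
    # Set a correct Content-Type so browsers render rather than download.
    content_type = mimetypes.guess_type(name)[0] or "application/octet-stream"
    # Overwriting an existing key replaces the file in place, so the old
    # copy keeps being served until the new PUT completes.
    s3.upload_file(path, BUCKET, name, ExtraArgs={"ContentType": content_type})
    print(f"uploaded {name} ({content_type})")
```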
I'm unsure what you are asking here, because to "update" a static website, surely you must have some basic technical knowledge of web development.
It's important here to define what exactly you mean by update, because again, updating a website and updating a bucket are two completely different things.
Also, S3 has eventual consistency for overwrite PUTs (updates), so there may be a brief window where stale content is served.
The easiest way to update an S3 bucket is via the console, and not the CLI. The console is pretty user friendly, and shouldn't take long to get used to.
You can disable cookies and change your IP 500 times, but can't anyone just track you through fingerprinting?
You could disable Java and Flash, though that would break pages and make you stand out anyway.
You could use Tor, but I think using Tor gets you instantly blacklisted from some sites.
What's the workaround? Using Chrome is a big no-no; maybe Internet Explorer, or perhaps Firefox…
Are there any apps that deal with this? Or do you just design a good web scraper, get an IP, and cross your fingers?
I realize the average site is not going to implement all these features, but I am wondering how one would work around a site that was extremely vigilant.
There are two types of browser fingerprinting:
1. Static fingerprinting - can identify browsers (and probably operating systems) based purely on details of their requests: the order and capitalization of HTTP headers, browser-specific headers, and so on.
One small aspect is described here: https://gwillem.gitlab.io/2017/05/02/http-header-order-is-important/
As this can be done without any JavaScript, I would guess Scrapy is identifiable this way.
How to get around this?
As mentioned in the article above, you need to emulate a particular browser's fingerprint exactly by reproducing its header order and capitalization (and it has to match the user agent, of course).
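For instance, here is a minimal sketch that sends a request over a raw TLS socket, so the header order and capitalization are entirely under your control (the target host and User-Agent string are placeholders):

```python
import socket
import ssl

HOST = "example.com"  # placeholder target

# Headers written by hand, in a Chrome-like order and capitalization.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: keep-alive\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept-Language: en-US,en;q=0.9\r\n"
    "\r\n"
)

context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        tls.sendall(request.encode("ascii"))
        print(tls.recv(4096).decode("utf-8", errors="replace"))
```

Most high-level HTTP libraries add or reorder headers on their own, which is exactly the kind of tell this fingerprinting looks for; dropping to the socket level sidesteps that.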
2. Dynamic fingerprinting - uses JavaScript to collect data on installed plugins, plugin versions, and so on. As Granitosaurus wrote, that won't be triggered by Scrapy. But sites that use fingerprinting for scraping protection will block the scraper if they don't get any data from the fingerprinting module.
As this type of fingerprinting yields many more dimensions, it can be used to identify particular users with high reliability (over 90%).
You can find a good example of how this is done here: https://github.com/Valve/fingerprintjs2
How to get around this?
- Use a lot of different real browsers for scraping (for example through Selenium; not PhantomJS, which can be detected).
- Randomize these browsers' settings and installed plugins (ideally using different versions).
- When scraping, rotate these browser instances instead of rotating IPs (each browser instance should keep its IP over its lifetime).
- If one of the instances is "burnt", replace it with a new instance that has a fresh IP and a randomized browser fingerprint.

As you'll need many browsers, this has to be done in an automated way, of course; a sketch of the rotation is below.
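A minimal sketch of that rotation, assuming Selenium with Firefox/geckodriver; the proxy addresses and user-agent strings are hypothetical:

```python
import random

from selenium import webdriver

PROXIES = ["198.51.100.10:8080", "198.51.100.11:8080"]  # hypothetical pool
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/115.0",
]

def new_instance(proxy: str) -> webdriver.Firefox:
    """One browser instance, bound to one proxy IP for its whole lifetime."""
    host, port = proxy.split(":")
    opts = webdriver.FirefoxOptions()
    opts.set_preference("network.proxy.type", 1)  # manual proxy config
    opts.set_preference("network.proxy.http", host)
    opts.set_preference("network.proxy.http_port", int(port))
    opts.set_preference("network.proxy.ssl", host)
    opts.set_preference("network.proxy.ssl_port", int(port))
    # Randomize one fingerprint dimension; a real setup would vary many more.
    opts.set_preference("general.useragent.override", random.choice(USER_AGENTS))
    return webdriver.Firefox(options=opts)

browsers = [new_instance(p) for p in PROXIES]
try:
    for url in ["https://example.com/a", "https://example.com/b"]:
        random.choice(browsers).get(url)  # rotate instances, not IPs
finally:
    for b in browsers:
        b.quit()
```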
Resetting cookies sounds like a good idea at first, but if the fingerprinting system is worth its salt it won't need cookies to identify each of these machines reliably.
I've been running a process for a client that involves grabbing their publicly available calendar file from me.com with an HTTPS GET call in a Ruby script, converting the data in the .ics file to HTML, and then copying it to their website.
They recently upgraded to Lion and iCloud, and it appears that, while the calendar I want is still publicly available, it's only usable by webcal-enabled apps; I can no longer fetch it over HTTPS.
I've poked around a bit on Google but haven't seen anything that points me in the right direction yet. Does anyone know if there's a way to access public calendars on iCloud via HTTP/HTTPS? Or is it strictly via webcal? The documentation does make it sound like iCloud is designed to share data only among Apple devices. Am I just stuck here?
I'm surprised no one has answered this yet...
If you go to iCloud.com, you should be able to get the URL of the calendar that you have syncing via a public share. It should use the webcal protocol (webcal://). However, if you change that webcal to https, it will download the .ics file instead of trying to sync using the Mac's calendar app.
I have my website linking to the https ics file, and it appears to be working just fine (for now at least).
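In case it helps, the swap is trivial to script; a minimal sketch in Python with a hypothetical public-share URL:

```python
import urllib.request

# Hypothetical public-share URL copied from iCloud.com.
webcal_url = "webcal://p01-calendars.icloud.com/published/2/SOME-TOKEN"

# Swapping the scheme returns the raw .ics file instead of
# triggering a calendar subscription.
https_url = webcal_url.replace("webcal://", "https://", 1)

with urllib.request.urlopen(https_url) as resp:
    ics_data = resp.read().decode("utf-8")

print(ics_data[:200])  # should start with BEGIN:VCALENDAR
```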
iCloud is going to supply an API for developers soon; at least, that's what they said in the last keynote.
My requirement is pretty interesting: I want to maintain one cookie across two different browsers for the same domain.
So let's say I have created a cookie named "mydata" with the value "hiscal" in IE; if I then browse the same website from Firefox and try to read the cookie "mydata", the system should give me the value "hiscal".
But this is not what happens in the general case.
So can anyone tell me how I can share a cookie between two different browsers (clients) for the same domain?
Thanks,
Hiscal
You can build a cookie-proxy by creating a Flash application and using Shared Objects (SOs, i.e. Flash cookies) to store data.
Any browser with Flash installed could retrieve the information stored in the SO.
But, it's an ugly workaround.
Just don't share cookies... and find another way to build your website/app.
Every browser maintains its own cookies, so in general, no, this is not possible.
With a lot of hard work, you could in theory write an application that sits on the client computer, looks at all the locations where the different browsers store cookies, parses the different cookie formats, synchronises them, and writes them back out.
That would be error prone and will break as soon as a browser changes how it works with cookies (not to mention that some of the browsers secure their cookies, so you won't be able to get to them in the first place).
In my opinion, this is not practical and I wouldn't even try.
Use YUI's storage utility and force it to use the SWF storage engine.
All computers and browsers would still have to have Flash installed, but you wouldn't have to write your own Flash app. You would benefit from using the one maintained by the YUI team.
As others have said, this is not very portable, but in a controlled environment, it might work for you.
Cookies can be mirrored into other data storage through browser extensions. Maybe with Flash or Google Gears you could maintain a shared DB between browsers, but it would need to be installed in both of them, of course.
Edit:
In Google Gears you can't. Maybe you should write your own extension... or build some user-login system where the data sits on the server; a sketch of that idea is below.
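A minimal sketch of that server-side idea, assuming Flask and an in-memory store (all names are hypothetical); any browser that logs in as the same user sees the same value:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
store = {}  # user_id -> {name: value}; a real app would use a database

@app.route("/prefs/<user_id>/<name>", methods=["GET", "PUT"])
def prefs(user_id, name):
    if request.method == "PUT":
        # e.g. from IE:  PUT /prefs/hiscal/mydata  with body "hiscal"
        store.setdefault(user_id, {})[name] = request.get_data(as_text=True)
        return "", 204
    # e.g. from Firefox:  GET /prefs/hiscal/mydata  ->  {"value": "hiscal"}
    value = store.get(user_id, {}).get(name)
    return (jsonify(value=value), 200) if value is not None else ("", 404)

if __name__ == "__main__":
    app.run()
```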