Can two different browsers share one cookie? - cookies

My requirement is pretty interesting: I want to maintain one cookie across two different browsers for the same domain.
Let's say I create a cookie named "mydata" with the value "hiscal" in IE. If I then browse the same website in Firefox and try to read the cookie "mydata", the system should give me the value "hiscal".
But this is not what happens in the general case, so can anyone tell me how I can share a cookie between two different browsers (clients) for the same domain?
Thanks,
Hiscal

You can build a cookie-proxy by creating a Flash application and using Shared Objects (SOs = Flash cookies) to store the data.
Any browser with Flash installed could then retrieve the information stored in the SO.
But, it's an ugly workaround.
Just don't share cookies... and find another way to build your website/app.

Every browser maintains its own cookies, so in general, no, this is not possible.
With a lot of hard work you could, in theory, write an application that sits on the client computer, looks at all the locations where the different browsers store cookies, parses the different cookie formats, synchronises them, and writes them back out.
That would be error-prone and would break as soon as a browser changes how it works with cookies (not to mention that some browsers secure their cookies, so you wouldn't be able to get at them in the first place).
In my opinion, this is not practical and I wouldn't even try.

Use YUI's storage utility and force it to use the SWF storage engine.
All computers and browsers would still have to have Flash installed, but you wouldn't have to write your own Flash app. You would benefit from using the one maintained by the YUI team.
As others have said, this is not very portable, but in a controlled environment, it might work for you.

Cookies can be shared via other data-storage mechanisms, through browser plugins. With Flash or Google Gears you might maintain a shared database between browsers, but of course it needs to be installed in both of them.
Edit:
In Google Gears you can't. Maybe you should write your own extension... or use some user-login system, where the data will sit on the server.
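If you go the server-side route, the idea is simple: persist the value against the logged-in user instead of in a browser cookie, so any browser that authenticates as that user sees the same data. A minimal Django-flavoured sketch (the model and view names here are invented for illustration, not from any existing library):

    # models.py -- illustrative only
    from django.conf import settings
    from django.db import models

    class UserSetting(models.Model):
        """Per-user key/value store that replaces a per-browser cookie."""
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        name = models.CharField(max_length=64)
        value = models.CharField(max_length=255)

    # views.py
    from django.contrib.auth.decorators import login_required
    from django.http import JsonResponse

    @login_required
    def get_setting(request, name):
        # Any browser logged in as this user gets the same value back
        setting = UserSetting.objects.filter(user=request.user, name=name).first()
        return JsonResponse({"name": name, "value": setting.value if setting else None})

With this in place, setting "mydata" to "hiscal" from IE and reading it from Firefox just works, because the value never lives in the browser at all.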

Related

What is the proper long-term solution for the safari third-party cookie problem?

I've read a gazillion posts about the safari third party cookie problem and lots of kludgy workarounds that stop working as soon as the next version is released, but what I'm missing is what is supposed to be a sensible "best practice" solution.
I have a situation where app A on one domain needs to iFrame app B on another domain and app B needs a cookie purely to maintain state specific to that user. There's no tracking or anything cross-origin involved; the two apps don't share any cookies with each other or anyone else, so I don't really understand why what I'm doing is so dangerous.
At the moment the only option I can see is to use some kind of proxying to make the two apps appear to be on the same origin.
Am I missing something?

How to reduce your browser fingerprint for privacy and for web scraping

You can disable cookies and change your IP 500 times, but can't anyone just track you through fingerprinting?
You could disable Java and Flash. Though that would break the page and make you stand out anyway.
You could use Tor but I think if you use Tor you get blacklisted from some sites instantly.
What's the workaround? Using Chrome is a big no-no. Internet Explorer, maybe, and Firefox, perhaps...
Are there any apps that deal with this? Or do you just design a good web scraper, get an IP, and cross your fingers?
I realize the average site is not going to implement all these features, but I am wondering how one would work around a site that was extremely vigilant.
There are two types of browser fingerprinting:
1. Static fingerprinting - identifies browsers (and probably operating systems) just based on details of their requests: the order and capitalization of HTTP headers, browser-specific headers, and so on.
One small aspect is described here: https://gwillem.gitlab.io/2017/05/02/http-header-order-is-important/
As this can be done without any JavaScript, I guess Scrapy is identifiable this way.
How to get around this?
As mentioned in the article above, you need to exactly emulate a particular browser's fingerprint by emulating its header order and capitalization (and it has to match the User-Agent, of course).
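As a rough illustration of header emulation in Python: requests preserves the insertion order and capitalization of headers you set yourself, so a sketch like the one below gets you most of the way. The URL and header values are placeholders, and for byte-exact control of the wire format (the underlying client still adds Host itself) you may need a lower-level HTTP client.

    import requests

    session = requests.Session()
    session.headers.clear()  # drop requests' own defaults (User-Agent, Accept-Encoding, ...)

    # Recreate one particular browser's header order and capitalization;
    # these values mimic a Firefox request and must match the User-Agent.
    session.headers.update({
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Accept-Encoding": "gzip, deflate, br",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
    })

    response = session.get("https://example.com/")  # placeholder URL
    print(response.status_code)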
2. Dynamic fingerprinting - uses JavaScript to collect data on installed plugins, plugin versions, and so on. As Granitosaurus wrote, that won't be triggered by Scrapy. But sites that use fingerprinting for scraping protection will block the scraper if it doesn't get any data back from their fingerprinting module.
As this type of fingerprinting yields many more dimensions, it can be used to identify particular users with high reliability (over 90%).
You can find a good example how this is done here: https://github.com/Valve/fingerprintjs2
How to get around this?
Use a lot of different real browsers for scraping (for example through Selenium; not PhantomJS, it can be detected).
Randomize these browsers' settings and installed plugins (ideally using different versions).
When scraping, rotate these browser instances instead of rotating IPs (each browser instance should keep its IP over its lifetime).
If one of the instances is "burnt", replace it with a new instance that has a fresh IP and a randomized browser fingerprint.
... as you'll need many browsers, this has to be done in an automated way, of course.
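A minimal Selenium sketch of such a rotation, assuming Firefox and one dedicated proxy per instance (the proxy addresses, URLs, and pool size are placeholders; a real setup would also randomize plugins, fonts, and more):

    import random
    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    # Placeholders: one dedicated proxy/IP per browser instance
    PROXIES = ["198.51.100.10:8080", "198.51.100.11:8080", "198.51.100.12:8080"]
    WINDOW_SIZES = [(1280, 800), (1366, 768), (1920, 1080)]

    def new_browser(proxy):
        """Start a Firefox instance tied to one proxy, with a randomized window size."""
        host, port = proxy.split(":")
        opts = Options()
        opts.set_preference("network.proxy.type", 1)  # manual proxy configuration
        opts.set_preference("network.proxy.http", host)
        opts.set_preference("network.proxy.http_port", int(port))
        opts.set_preference("network.proxy.ssl", host)
        opts.set_preference("network.proxy.ssl_port", int(port))
        driver = webdriver.Firefox(options=opts)
        driver.set_window_size(*random.choice(WINDOW_SIZES))
        return driver

    # Each instance keeps its IP for its whole lifetime; rotate instances, not IPs
    pool = [new_browser(p) for p in PROXIES]
    urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholders
    for i, url in enumerate(urls):
        driver = pool[i % len(pool)]
        driver.get(url)
        print(driver.title)

    for driver in pool:
        driver.quit()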
Resetting cookies sounds like a good idea at first, but if the fingerprinting system is worth its salt it won't need cookies to identify each of these machines reliably.

Programmatically get an ics file from iCloud?

I've been running a process for a client that involves grabbing their publicly available calendar file from me.com by making an HTTPS GET call in a Ruby script, converting the data in the .ics file to HTML, and then copying it to their website.
They recently upgraded to Lion and iCloud, and it appears that, while the calendar I want is still publicly available, it's only usable by webcal-enabled apps: I can no longer get it over HTTPS.
I've poked around a bit on Google, but haven't seen anything that points me in the right direction yet. Does anyone know if there's a way to access public calendars on iCloud via HTTP/HTTPS? Or is it strictly via webcal? The documentation does make it sound like iCloud is designed only to share data among Apple devices. Am I just stuck here?
I'm surprised no one has answered this yet...
If you go to iCloud.com, you should be able to get the URL of the calendar that you have syncing using a public share. It will be a webcal protocol URL (webcal://). However, if you change that webcal to https, it will download the ics file instead of trying to sync with the Mac's calendar app.
I have my website linking to the https ics file, and it appears to be working just fine (for now at least).
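The original script was Ruby, but the whole trick is just swapping the URL scheme; here is a Python sketch of the same idea (the share URL below is a placeholder, not a real one):

    import requests

    # Public share URL as copied from iCloud.com (placeholder)
    webcal_url = "webcal://p01-calendars.icloud.com/published/2/XXXXXXXX"

    # Replacing webcal:// with https:// turns the sync URL into a plain .ics download
    https_url = webcal_url.replace("webcal://", "https://", 1)

    response = requests.get(https_url, timeout=30)
    response.raise_for_status()

    with open("calendar.ics", "wb") as f:
        f.write(response.content)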
iCloud is going to supply an API for developers soon; at least that's what they said in the last keynote.

Django: open an application on the client machine

I would like to ask a relatively simple question which I haven't put much thought into yet, but I'd just like to know whether this is possible before starting a painful implementation of my app!
Here it is: is it possible (and easy...) to run an application and communicate with files on the client's computer from a web application developed in Django?
I'd really just like to know if it's feasible. Of course if you have a few hints on how to do it, they would be appreciated :)
Yes, it's perfectly possible. On the Django side you just need to add a proper URL with its corresponding view, processing the HTTP requests coming from client machines.
The key thing here is to use a proper HTTP client on the machine that actually holds the files, so it can send them to your Django app.
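To make that concrete, here is a minimal sketch of both halves under those assumptions (the URL, view name, and upload field are all invented for illustration):

    # views.py -- the Django side, receiving data from the client machine
    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt  # a real app should use authentication + CSRF protection instead
    def receive_file(request):
        """Accept a file POSTed by a small client program on the user's computer."""
        if request.method == "POST" and "payload" in request.FILES:
            uploaded = request.FILES["payload"]
            data = uploaded.read()
            # ... process the file contents server-side ...
            return JsonResponse({"received": uploaded.name, "bytes": len(data)})
        return JsonResponse({"error": "POST a file in the 'payload' field"}, status=400)

    # urls.py
    # urlpatterns = [path("upload/", views.receive_file)]

    # client.py -- runs on the client's computer, next to the local files
    # import requests
    # with open("local_data.txt", "rb") as f:
    #     requests.post("https://myapp.example.com/upload/", files={"payload": f})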

Getting a user's website browsing history data

I would like to list the website links a user has visited and get all of their history data.
Where can I get this data?
thanks
Well, since I'm new, I'll just have to post as broad an answer as I can for your vague question.
If your goal is to get a user's recent browsing history, you should just be able to look up the places where all of the mainstream browsers store their history data. I highly doubt the devs would put such insensitive information under encryption, so this shouldn't be too hard. Browsers you should take into consideration include Internet Explorer, Firefox, Opera, Chrome, Netscape Navigator, and all of the other Mozilla spinoffs, such as SeaMonkey.
If your goal is to establish a connection to a web server and then download a list of data provided by the server, there is a lot of setup involved. First, you need a server. You can use something like Apache and the HTTP protocol for all data transmission, or, if you're feeling brave, you could whip up a server of your own design. Second, you need a way to connect to this server. Since it appears you're using Visual C++, WinSock would be the way to do this. There are plenty of WinSock tutorials online; just Google away.
I hope this helps you, and best of luck to your endeavor.
As your question is tagged "C++", I assume that your program runs on the local computer.
Each browser has its own "history storage" format. You will have to handle the different formats if you are targeting the major browsers, e.g. Firefox, Chrome, IE, etc.
For example, Firefox and Chrome store their history in a SQLite database, while IE stores it in a binary file named "index.dat".
Here are some places to start:
Firefox :
http://kb.mozillazine.org/Places.sqlite
https://developer.mozilla.org/en/The_Places_database
IE :
http://www.forensicswiki.org/wiki/Internet_Explorer_History_File_Format
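Though the question is tagged C++, the SQLite half of this is easiest to show in a few lines of Python; the same query works from C++ through the SQLite C API. A quick sketch against Firefox's places.sqlite (the profile path is a placeholder that varies per installation, and the database may be locked while Firefox is running):

    import os
    import sqlite3

    # Placeholder profile path; the real folder name differs per installation
    db_path = os.path.expanduser("~/.mozilla/firefox/xxxxxxxx.default/places.sqlite")

    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """
        SELECT p.url, p.title, v.visit_date
        FROM moz_historyvisits AS v
        JOIN moz_places AS p ON p.id = v.place_id
        ORDER BY v.visit_date DESC
        LIMIT 20
        """
    )
    for url, title, visit_date in rows:
        # visit_date is stored in microseconds since the Unix epoch
        print(visit_date // 1_000_000, title, url)
    conn.close()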