Export cookies for all the pages visited on a website

I need to visit all the pages on the production website fairly quickly (I cannot look at Firebug and note down the cookie values on each page), and then at the end, at our order confirmation step, I need a tool that can tell me the cookies and their values on each page of the website I visited. Is there any free tool that can help with that?
TIA

This is a little wonky, but if you don't want to write something custom, you could use Selenium IDE. I'm assuming (I don't have enough rep to comment yet to ask you) that you mean visiting not "all the pages," but rather all the pages necessary to get to your order confirmation step.
If so, you could record yourself going through the steps with Selenium IDE, then insert a storeCookie command after each page loads and write the result out to a variable. Example:
[info] Executing: |storeCookie | cookieList | |
That would store all the cookies on the page, then you could echo them at the end.
[info] Executing: |echo | ${cookieList} | |
I'm not saying this is ideal -- I would write something in our Selenium WebDriver framework to do this -- but if you want something standalone, you could try it.
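For reference, the WebDriver version of the same idea is short. Here's a minimal sketch in Python (the page list is a placeholder for your own order flow, and Firefox is just an example browser):
# Visit each page in the flow and log the cookies visible on it.
from selenium import webdriver

pages = [
    "https://example.com/",              # placeholder URLs --
    "https://example.com/cart",          # substitute your real
    "https://example.com/checkout",      # order-flow pages
    "https://example.com/confirmation",
]

driver = webdriver.Firefox()
cookie_log = {}
for url in pages:
    driver.get(url)
    # get_cookies() returns the cookies visible on the current page
    cookie_log[url] = {c["name"]: c["value"] for c in driver.get_cookies()}
driver.quit()

# Dump everything at the end, page by page
for url, cookies in cookie_log.items():
    print(url)
    for name, value in cookies.items():
        print("  {} = {}".format(name, value))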

Side effects of cleaning up empty sessions in Django

I am already familiar with Django's management command clearsessions, but we had some legacy logic in our code base that created session objects for all visitors who did not already have a session assigned. It was for A/B testing purposes (we had to fix the test variant for each user and figured the session key would be a good value for this).
The problem is that, now that we have gotten rid of this redundant session-creation logic, we have ended up with around 180 million empty session records.
$ echo NjU3ODI5ZTg5MDY0OWE1YzYzNTczMzEzM2ExMGQ4NTI4ODNlZGFiNTp7fQ== | base64 -d
657829e890649a5c635733133a10d852883edab5:{}
Obviously this session_data is an empty session.
In [9]: Session.objects.filter(session_data="NjU3ODI5ZTg5MDY0OWE1YzYzNTczMzEzM2ExMGQ4NTI4ODNlZGFiNTp7fQ==").count()
Out[9]: 175163446
As you can see, we have about 175 million records with this exact session_data.
Question
Are there any side effects to deleting these empty sessions? I cannot think of any problems this would cause for our users.
Any insights or clarifications would be appreciated.
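For what it's worth, if you do decide to delete them, a batched delete is usually safer than one giant DELETE across 175 million rows. A rough sketch (assuming the empty-session value shown above; the batch size is arbitrary):
# Delete the empty sessions in batches so no single statement
# holds locks on the table for the whole run.
from django.contrib.sessions.models import Session

EMPTY = "NjU3ODI5ZTg5MDY0OWE1YzYzNTczMzEzM2ExMGQ4NTI4ODNlZGFiNTp7fQ=="
BATCH = 10000

while True:
    # Grab a batch of primary keys, then delete just those rows.
    pks = list(
        Session.objects.filter(session_data=EMPTY)
        .values_list("pk", flat=True)[:BATCH]
    )
    if not pks:
        break
    Session.objects.filter(pk__in=pks).delete()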

Read AICC Server response in cross domain implementation

I am currently trying to develop a web activity that a client would like to track via their Learning Management System. Their LMS uses the AICC standard (HACP binding), and they keep the actual learning objects on a separate content repository.
Right now I'm struggling with the communication between the LMS and the "course," given that they sit on two different servers. I'm able to retrieve the session ID and the aicc_url from the URL string when the course launches, and I can successfully post values to the aicc_url on the LMS.
The difficulty is that I cannot read and parse the response the LMS returns (which is formatted as plain text). AICC stipulates that the course start by posting a "GetParam" command with the session ID to the aicc_url in order to retrieve information like completion status, bookmarking information from previous sessions, and user ID information, all of which I need.
I have tried three different approaches so far:
1 - I started with jQuery (1.7) and AJAX, which is how I would typically handle a same-server implementation. This returned a "No Transport" error on the XMLHttpRequest. After some forum reading, I made sure the AJAX call's crossDomain property was set to true, and also tried a recommendation to insert $.support.cors = true above the call; neither helped.
2 & 3 - I tried an old-school frameset with a form in a bottom frame that would submit and refresh with the text returned from the LMS, which I would then read via JavaScript; and then a variation on that using an iframe as the target of an actual form, with an onload handler to read and parse the contents. Both approaches worked in a same-server environment but fail cross-domain.
I'm told that the other courses running off the content repository bookmark and track completion, so it is obviously possible to read the return values from the LMS somehow; AICC is frequently pitched as working in cross-server scenarios, so I suspect there is a standard way of doing this within the AICC structure that I am overlooking. My forum searches so far haven't turned up anything that's gotten me much further, so if anyone has experience with cross-domain AICC implementations I could certainly use recommendations!
The only idea I have left is to set up a PHP "relay" on the same server as the course: the front-end page sends values to it, the PHP submits those to the LMS, and the return text from the LMS is relayed back to the front-end iframe or AJAX call so that everything is perceived as same-domain. I'm not sure there's a way to solve this without going server-side, but it seems likely there's a common solution within AICC.
Thanks in advance!
Edits and updates:
For anyone encountering similar problems, I found a few resources that may help explain the problem as well as some alternate solutions.
The first is specific to Plateau, a big player in the LMS industry that was acquired by SuccessFactors. It's documentation they provide on setting up a proxy to handle cross-domain content:
http://content.plateausystems.com/ContentIntegration/content/support_files/Cross-domain_Proxlet_Installation.pdf
The second is a slide presentation from SuccessFactors that highlights the challenge of cross-domain content and illustrates some back-end ideas for resolving it, including the use of reverse proxies. The relevant parts start around slides 21-22 (page 11 in the PDF).
http://www.successfactors.com/static/docs/successconnect/sf/successfactors-content-integration-turley.pdf
Hope that helps anyone else out there trying to resolve the same issues!
The answer in this post may lead you in the right direction:
Best Practice: Legitimate Cross-Site Scripting
I think you are on the right track with setting up a PHP "relay." This is similar to choice #1 in the answer from the other post and seems to make the most sense given what you described in your question.
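For illustration, the relay itself is only a few lines in any server language. Here's a rough sketch of the idea in Python with Flask (the original suggestion was PHP; the endpoint name and form fields here are assumptions to adapt to your setup):
# Same-origin relay: the course posts to /aicc-relay on its own
# server, and this endpoint forwards the HACP request to the LMS
# and returns the plain-text response, so the browser never has
# to make a cross-domain call itself.
from flask import Flask, request, Response
import requests

app = Flask(__name__)

@app.route("/aicc-relay", methods=["POST"])
def aicc_relay():
    # In production you'd validate aicc_url against a whitelist
    # rather than relaying to whatever the client sends.
    aicc_url = request.form["aicc_url"]
    payload = {
        "command": request.form["command"],        # e.g. "GetParam"
        "session_id": request.form["session_id"],
        "aicc_data": request.form.get("aicc_data", ""),
    }
    lms_response = requests.post(aicc_url, data=payload)
    return Response(lms_response.text, mimetype="text/plain")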

Verifying unauthorized modifications to website

I made a script that crawls through a domain, and I want it to determine whether there have been any unauthorized modifications. For static pages I can simply compare against a pre-set hash value, but for pages of dynamic length, what's a good way to check whether any significant changes were made?
Sorry if it sounds dumb.
You could download the pages locally, then use diff to gauge the extent of the changes. E.g.,
diff old_version new_version | wc
which gives you line/word/character counts for the diff output, i.e. a rough measure of how much changed.
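If raw diff length proves too blunt for the dynamic pages, a similarity ratio lets you pick a threshold for "significant." A small sketch in Python (the threshold value is arbitrary):
# Flag a page as significantly changed when its similarity to the
# stored baseline drops below a threshold.
import difflib

def significantly_changed(old_html, new_html, threshold=0.95):
    ratio = difflib.SequenceMatcher(None, old_html, new_html).ratio()
    return ratio < threshold

with open("old_version") as f:
    old = f.read()
with open("new_version") as f:
    new = f.read()

if significantly_changed(old, new):
    print("significant change detected")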

rails 3 cookies

I have a simple app where users type stuff into a text field to get various results. I would like a feature where, if a user enters something and then closes the browser tab, the next time they come back I can show them their previous/recent searches. This should persist even if they close the whole browser and open it again.
I believe this can be done with cookies. Are there good Rails 3 gems for working with cookies, or a simple tutorial that could point me in the right direction?
http://railstutorial.org/chapters/sign-in-sign-out#sec:remember_me
This is a great book to get you started with Rails 3. (I would recommend reading it from the beginning.)
In the link above, Listing 9.12 gives a good explanation of cookies.
Store the info in the session object:
session[:user_entry] = the_user_entry
Note, though, that the default session cookie expires when the browser closes; since you want searches to survive a full restart, a permanent cookie is a better fit:
cookies.permanent[:recent_searches] = the_user_entry
http://guides.rubyonrails.org/action_controller_overview.html#session

How to track page views

What is the best way to track page views? For instance: SO tracks how many views a question has, yet hitting refresh doesn't bump the view count.
I've read that using cookies is a pretty good way to do so, but I'm at a loss on how this doesn't get out of hand.
I've searched all over and can't find a good answer for this.
EDIT:
I also see that another option (once again, I could be horribly wrong) is to use the Google Analytics API to get page views. Is this even a viable option? How do Stack Overflow, YouTube, and others track their views?
You can track them in a database if you're rolling your own. Every time a page loads, call a method that decides whether or not to bump the page views. You can add whatever criteria you like:
IF IP is unique
OR IP hasn't visited in 20 minutes based on a session variable
ETC
THEN add a page view record
| ID | IPAddress | ViewDateTime |
| 1 | 1.2.3.4 | Oct 18 ... |
However, session variables can get pretty load-intensive on sites with as many visitors as SO. You might have to get a little more creative.
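To make the criteria concrete, here is a sketch of the "no view from this IP in the last 20 minutes" rule in Python with sqlite3 (table and column names are made up to mirror the layout above):
# Record a view only if this IP has no view in the last 20 minutes.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect("pageviews.db")
conn.execute("""CREATE TABLE IF NOT EXISTS pageview (
    id INTEGER PRIMARY KEY,
    ip_address TEXT,
    view_datetime TEXT
)""")

def record_view(ip):
    # ISO-8601 strings compare correctly as text in SQLite.
    cutoff = (datetime.now() - timedelta(minutes=20)).isoformat()
    row = conn.execute(
        "SELECT 1 FROM pageview WHERE ip_address = ? AND view_datetime > ?",
        (ip, cutoff),
    ).fetchone()
    if row is None:  # no recent view from this IP, so count it
        conn.execute(
            "INSERT INTO pageview (ip_address, view_datetime) VALUES (?, ?)",
            (ip, datetime.now().isoformat()),
        )
        conn.commit()

record_view("1.2.3.4")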
Now, if you don't want to code it yourself, I would suggest looking into SmarterStats, since it reads your server logs and is more robust than Analytics.
Note: I'm not sure whether there's similar software for Apache.
I set a session variable keyed on the address I'm checking, then toggle it once that browser has hit the page. In my page template I check the variable for that page and handle it as appropriate.
A simple, hacky method:
Create a local MySQL table for tracking, and seed it with a single counter row (the UPDATE below does nothing if the table is empty):
CREATE TABLE pageviews (
    pageview_count int(9) default NULL
);
INSERT INTO pageviews VALUES (0);
Then, on the index.php page, or wherever the user is going to land, run an update query on that field:
<?php
// connect to MySQL (credentials here are placeholders)
$link = mysql_connect('localhost', 'root', '');
if (!$link) {
    die('could not connect: ' . mysql_error());
}
// $database_name holds the name of your database
mysql_select_db($database_name, $link);
// bump the counter
mysql_query('UPDATE pageviews SET pageview_count = pageview_count + 1;');
mysql_close($link);
?>