setting cookie values in load test - visual-studio-2017

I am trying to build a load test (Visual Studio 2017) against a web app that sets a value in a cookie via JavaScript (client side). For future requests to work, this value needs to be present in the cookie; however, because it was set via JavaScript, the value did not get recorded when the load test script was built.
The value changes, and I know how to get the needed value into a context parameter.
The problem I have is: how do you set a value in a cookie in a Visual Studio 2017 load test?

After looking into this, it appears the way to do it is to write a custom request plugin that inserts the desired value into the cookie before the request is sent.
From what I have working, this also needs to be done on each request before it is sent; the plugin does not actually set the cookie once so that it persists (unless someone knows how?).

Related

How do I ensure 'Automatically follow redirects' setting is always disabled?

I'm trying to verify that 301 redirects are configured correctly in my web app. When I send a request, I want to receive a 301 response with the new expected Location header.
It turns out that instead of a 301 I receive a 200, because of the Postman setting 'Automatically follow redirects', which is enabled out of the box. Disabling the setting fixes my tests.
I'm just wondering how to store this configuration somewhere in the collection. I do not want every other dev (or the CI) to have to know that there is a setting in the Postman tool that needs to be changed. And what if I work on two collections simultaneously, where one requires the setting to be disabled and the other does not?
If you're using Newman in your CI system, there is a setting for this that you can pass as a command-line argument: --ignore-redirects.
https://github.com/postmanlabs/newman#newman-options
There are changes coming to bring this down to the request level in the UI, to make it more visible. Currently, there is no programmatic way to do this in the collection.
You could include this requirement in the collection or request description, so others would know it needs to be disabled. Likewise, if this were a feature that you enabled without telling these external folks, how would they know?
If it's something that you would like to see included in the app, you could create a feature request for it on the GitHub issue tracker repo; that's the only sure-fire way to tell the Postman team that this would be a useful feature to include.
https://github.com/postmanlabs/postman-app-support/issues

How to monitor an action by user on Glass

I have a Mirror API based app in which I have added a custom menu item; clicking it should insert a new card. I am having a bit of trouble doing that, and I need to know ways I can debug it:
1. Check whether the subscription to the Glass timeline was successful.
2. Print something to the console when the menu item is clicked.
3. Any other way to detect whether the callback URL was called when the menu item was clicked.
It sounds like you have a problem but aren't sure how to approach debugging it. A few things to look at and try:
Question 1 re: checking subscriptions
The object returned from subscriptions.insert should indicate that the subscription was a success. Depending on your language, an exception or error would indicate a problem.
You can also call subscriptions.list to make sure the subscriptions are there and are set to the values you expect. If a user removes authorization for your Glassware, this list will be cleared out.
Some things to remember about the URL used for subscriptions:
It must be an HTTPS URL and cannot use a self-signed certificate.
The address must be resolvable from the public internet. "localhost" and local name aliases won't work.
The machine must be accessible from the public internet. Machines with addresses like "192.168.1.10" probably won't be good enough.
Question 2 re: printing when clicked
You need to make sure the subscription is setup correctly and that you have a webapp listening at the address you specified that will handle POST operations at that URL. The method called when that URL is hit is up to you, of course, so you can add logging to it. Language specifics may help here.
Try testing it yourself by going to the URL you specify using your own browser. You should see the log message printed out, at a minimum.
If you want it printed for only the specific menu item, you will need to make sure you can decode the JSON body that is sent as part of the POST and respond based on the operation and id of the menu item.
You should also make sure you return HTTP code 200 as quickly as possible - if you don't, Google's servers may retry for a while or eventually give up if they never get a response.
Update: From the sample code you posted, I noticed that you're either logging at INFO or sending to stdout, which should log at INFO (see https://developers.google.com/appengine/docs/java/#Java_Logging). Are you getting the logging from the doGet() method? This StackOverflow question suggests that App Engine doesn't display items logged at INFO unless you change the logging.properties file.
Question 3 re: was it clicked or not?
Depending on the configuration of your web server and app server, there should be logs about what URLs have been hit (as noted by #scarygami in the comments to your question).
You can test it yourself to make sure you can hit the URL and it is logging. Keep in mind, however, the warnings I mentioned above about what makes a valid URL for a Mirror API callback.
Update: From your comment below, it sounds like you are seeing the URL belonging to the TimelineUpdateServlet is being hit, but are not seeing any evidence that the log message in TimelineUpdateServlet.doPost() is being called. What return code is logged? Have you tried calling this URL manually via POST to make sure the URL is going to the servlet you expect?

Importing Firefox and Chrome cookies to libcurl

I'm using Code::Blocks with MinGW, under Windows 7.
I'm writing a multithreaded web crawler with libcurl, using a CURLSH object with CURL_LOCK_DATA_COOKIE enabled to share cookies among different threads.
Once a handle receives a cookie, it is successfully shared with every other handle. However, I need to copy the initial set of cookies from Firefox or Chrome. I found that they store cookies using SQLite, and I've been able to read both browsers' cookies from within my program. The problem is: how do I hand these cookies to libcurl? Ideally, there would be some way to feed them to my CURLSH object so that they get distributed to every handle, but I have found no such thing.
Following this document, I can try to save the cookies I read from my browser to a cookies.txt file, which reduces to finding a correspondence between the fields in the database used by Firefox/Chrome and the Netscape format.
Netscape uses the following format:
domain flag path secure expiration name value
The problem comes with the flag field. I don't know what to write there. Firefox uses the following fields (file cookies.sqlite, table moz_cookies), which correspond to the Netscape format as follows (is this correct?):
host ??? path isSecure expiry name value
Chrome uses the following fields (file Cookies, table cookies):
host_key ??? path secure expires_utc name value
So, to create this cookies.txt file, I'm only missing that flag field. The document linked above says:
flag - A TRUE/FALSE value indicating if all machines within a given domain can access the variable. This value is set automatically by the browser, depending on the value you set for domain.
Which doesn't really tell me what to write there.
However, writing a file and then reading it back seems like unnecessary work, given that I'll first load the cookies from Firefox/Chrome into RAM and should be able to hand them to libcurl directly without going through the hard drive. I've found the CURLOPT_COOKIE option, but it is missing some fields (namely, domain). Also, that option doesn't seem to save the cookies for later use; it looks like I would need to set it for every transfer with only the cookies of the corresponding domain (and what if those cookies get changed? I would not want to track changes manually, given that libcurl can do that).
So, given that I have all my cookies from Firefox/Chrome in memory, how do I give them to libcurl? If the only option is to use a cookies.txt file, what should I write in the flag field?
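For what it's worth, the flag field can be derived from the host value itself: in the Netscape cookies.txt format it is TRUE exactly when the cookie is domain-wide, which in practice means the stored host begins with a dot (".example.com"). A minimal sketch of writing one cookies.txt line from Firefox's moz_cookies fields (the function name is my own, not part of any library):

```cpp
#include <sstream>
#include <string>

// Build one tab-separated line of a Netscape-format cookies.txt from the
// fields of Firefox's moz_cookies table (host, path, isSecure, expiry,
// name, value). The second ("flag") column is TRUE when the cookie applies
// to all hosts under the domain, i.e. when the stored host starts with '.'.
std::string netscape_cookie_line(const std::string& host,
                                 const std::string& path,
                                 bool is_secure,
                                 long long expiry,
                                 const std::string& name,
                                 const std::string& value)
{
    const bool domain_wide = !host.empty() && host[0] == '.';
    std::ostringstream line;
    line << host << '\t'
         << (domain_wide ? "TRUE" : "FALSE") << '\t'
         << path << '\t'
         << (is_secure ? "TRUE" : "FALSE") << '\t'
         << expiry << '\t'
         << name << '\t'
         << value;
    return line.str();
}
```

The same mapping works for Chrome's columns (host_key, path, secure, expires_utc), with the caveat that Chrome's timestamps are not Unix seconds and need converting first.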
I've found the answer: CURLOPT_COOKIELIST (I was confusing it with CURLINFO_COOKIELIST, which can only be used to read cookies). Using CURLOPT_COOKIELIST, I can enter my cookies as HTTP headers, which do not need that flag field; I only need to format the date. Specifying the cookies on any one handle is enough to set them in the CURLSH object, because I can set them on one handle and read them from any other handle.
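A minimal sketch of building such a Set-Cookie style line from the browser-database fields, assuming Firefox's Unix-seconds expiry and Chrome's microseconds-since-1601 expires_utc (the helper names are my own; the resulting string would be handed to curl_easy_setopt(handle, CURLOPT_COOKIELIST, line.c_str()) on any handle that shares CURL_LOCK_DATA_COOKIE):

```cpp
#include <ctime>
#include <sstream>
#include <string>

// Chrome stores expires_utc as microseconds since 1601-01-01 (the Windows
// epoch); subtract the 11644473600-second offset to get Unix time.
// Firefox's expiry column is already Unix seconds and needs no conversion.
std::time_t chrome_expiry_to_unix(long long expires_utc)
{
    return static_cast<std::time_t>(expires_utc / 1000000LL - 11644473600LL);
}

// Format a Unix timestamp as the cookie "expires" date.
// std::gmtime is not thread-safe; a multithreaded crawler would use the
// platform's gmtime_r/gmtime_s instead.
std::string cookie_date(std::time_t t)
{
    char buf[64];
    std::strftime(buf, sizeof buf, "%a, %d %b %Y %H:%M:%S GMT",
                  std::gmtime(&t));
    return buf;
}

// Build one "Set-Cookie:" style line in the form CURLOPT_COOKIELIST accepts,
// so no Netscape flag field is needed.
std::string set_cookie_line(const std::string& host,
                            const std::string& path,
                            bool is_secure,
                            std::time_t expiry,
                            const std::string& name,
                            const std::string& value)
{
    std::ostringstream line;
    line << "Set-Cookie: " << name << '=' << value
         << "; domain=" << host
         << "; path=" << path
         << "; expires=" << cookie_date(expiry);
    if (is_secure)
        line << "; secure";
    return line.str();
}
```

With the shared cookie store, setting these lines once on a single easy handle makes them visible to every handle attached to the same CURLSH object.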

Cookie handling in subsequent requests in same page

I'm creating my own (multi threaded) ISAPI based website in C++ and I'm trying to implement correct session management.
The problem is that when a new session should be created, the session is created twice when subsequent requests are made from the generated web page.
Here's how it works:
- The client requests http://localhost and sends either no cookie or a cookie with an old session ID in it.
- The server looks at the session cookie and decides it needs to create a new one because the old session no longer exists: it prepares a header containing a cookie with a new session ID and sends the complete header to the client (I tracked this with the Live HTTP Headers plugin in Firefox, and it is correct). It also prepares some data for the page (not yet body data, as it is still processing data from the database) and sends what it has back to the client.
- The client should have this new session cookie now; it sees the stylesheet link and immediately sends the stylesheet request http://localhost/css to my server. But... for some reason it still does this with the old session ID, not with the newly received one!
- The server sees this request (again with a session ID that no longer exists), generates yet another new session and sends the new session ID in a cookie along with the stylesheet data.
So the client has received two session IDs now and will from now on keep using the second one, as the first one is overwritten; nevertheless, the first page has used the wrong session (or rather, the second request has).
You could say that this is not a problem, but once I start using personalized stylesheets, the first page will get the wrong stylesheet, and since the page uses AJAX to refresh its content (where available), the stylesheet may never be reloaded unless the client refreshes.
So, is this a problem that is always there when doing this kind of thing? Will the browser always send an old cookie even though it has already received a new one, because it is still processing the page? Is this a problem that PHP, for example, also has?
Note: before all the discussions start about "use PHP instead" or something: I am rewriting a website that I first wrote in PHP. It became popular, had thousands of (real) visitors every hour and started killing my server (the website doesn't make the kind of money that lets me throw lots of servers at it). By writing it in C++, requests take 2 ms instead of 200 ms in PHP, and I can optimize everything. By taking my time to develop this ISAPI extension correctly, it is safely multi-threaded and can be multi-process, multi-server. And most of all, I like the challenge.
Added note: It seems the problem only occurs when an old session ID exists in the cookies: when I completely clear all cookies from my browser, a new session is created and sent back to the client, and the subsequent stylesheet request immediately uses the given session ID. This seems to be some kind of proof that I'm doing something wrong when an old session ID is sent... Should the existing cookie be deleted first? How?
Added note: The cookie is written with an expire-date one year ahead.
I have found out what the problem was: I was under the assumption that setting a cookie without specifying a path would make the session work on all paths of that domain.
By using http://foo.bar/home as the main page and http://foo.bar/home/css as the stylesheet, internally translating those URLs to ?s1=home and ?s1=home&css=y, I was actually using two different paths according to the browser, which did not pass the cookie to the CSS request.
For some reason they actually got together afterwards; I don't fully understand why.
Isn't this silly? Won't people often have an http://foo.bar/index.php and an http://foo.bar/css/style.css.php, just because they use subdirectories to keep their structure clean?
If anyone knows of a way to make subpaths work with the same cookies, let me know. As I understand it from the definition of cookies, they are tied to a specific path (although it seems that if you explicitly set a path other than /, it will work on subdirectories as well?).
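A minimal sketch of the fix, assuming a cookie named session (a hypothetical name; an ISAPI extension writes raw response headers, so the header is built by hand here): explicitly sending path=/ makes the browser return the cookie on every subpath of the domain, regardless of which page set it.

```cpp
#include <string>

// Build a Set-Cookie response header pinned to the site root. With "path=/"
// the browser sends the cookie for /home, /home/css and every other subpath,
// avoiding the split-session problem described above. No "expires" attribute
// is set, so this is a session cookie.
std::string session_cookie_header(const std::string& session_id)
{
    return "Set-Cookie: session=" + session_id + "; path=/";
}
```

When the path attribute is omitted, browsers derive a default path from the URL of the request that set the cookie, which is exactly what made /home and /home/css behave as two different scopes here.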

Writing multiple cookies to Java HTTP response (SlingHttpServletResponse) not working

I am trying to write multiple cookies to a SlingHttpServletResponse; however, only the last cookie I write is visible in the browser.
Ex.
response.addCookie(new Cookie("foo", "bar"));
response.addCookie(new Cookie("lion", "bear"));
response.addCookie(new Cookie("cat", "dog"));
When I look at the cookies in my browser, the only cookie I see is the "cat"/"dog" cookie.
If I switch the order, the last cookie is always the one that shows up (so I don't think it's something with a specific cookie).
The Java API indicates that you can call response.addCookie() any number of times to add any number of cookies.
I'm not sure if this is a Sling-specific issue (I don't think it would be), but it might be?
Looking at the code, I suspect this is an issue with the underlying servlet container being used.
By default, Sling uses the Jetty 6 container contained in the Apache Felix Http bundle.