How do I set cookies in Load Impact?

We’ve come across this question fairly often at Load Impact, so I’m adding it to the Stack Overflow community to make it easier to find.
Q: When performing a Load Impact load test, I need to have the VUs send cookies with their requests. How do I set a cookie for a VU?

Load Impact VUs will automatically save and use cookies sent to them by the server (through the "Set-Cookie:" header). When the user scenario executed by the VU ends and is restarted (i.e., starts a new user scenario script iteration), cookies stored by the VU/client are cleared.
The “Cookie:” header is currently the only header that is set automatically by the client. Other headers, such as “If-Modified-Since:”, will not be set unless the user specifies them in the load script (this is why caching is not emulated automatically: client caching behaviour has to be programmed).
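For example, a script could send a conditional request explicitly, using the same request shape as the cookie example below (the URL and date here are placeholders):
http.request_batch({
    -- Emulate client-cache revalidation by setting the header by hand
    {"GET", "http://example.com/", headers={["If-Modified-Since"]="Sat, 01 Jan 2022 00:00:00 GMT"}}
})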
You can't manipulate the cookies the VU client has stored, but you can override or set a cookie used by the client by specifying the "Cookie:" header in the requests you make, like this:
http.request_batch({
    -- Set (or override) the Cookie header for this request only
    {"GET", "http://example.com/", headers={["Cookie"]="name=value"}}
})
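To send several cookies in one request, use standard Cookie header syntax and separate the name=value pairs with semicolons (the names and values here are placeholders):
http.request_batch({
    {"GET", "http://example.com/", headers={["Cookie"]="name=value; name2=value2"}}
})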

Related

Django expire cache every N'th HTTP request

I have a Django view which can be cached; however, it needs to be recycled on every 100th HTTP request that calls the view.
I cannot use interval-based caching here, since the request rate will keep changing with traffic.
How would I implement this? Are there other good approaches besides maintaining a counter (in the DB)?
Here are some ideas / feedback:
You're going to have to centralize something if you need the count to be exact. The Redis idea in the linked answer below looks OK if you can't put it in the main DB; if Redis is already in your stack, I'd use that. If the 100 requests can be per-user and you're using sessions, you could attach a counter to the session.
implementing a counter that counts requests with django
Not centralizing the counter outside of the webserver would mean your app needs to be, and stay, single-threaded to keep counts in memory, and the count would reset whenever the server restarts. Not a great idea IMO...
If you really can't make it work with anything else, you could hack something like a request counter onto your load balancer (...if the load balancer is a single machine you control, and you're comfortable doing that) and pass it as a header for Django to read.
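As a minimal sketch of the Redis idea (assuming redis-py and a local Redis instance; the key names and the "expensive_view" cache key are hypothetical, and error handling is omitted):
import redis
from django.core.cache import cache

r = redis.Redis(host="localhost", port=6379)

def count_and_maybe_recycle(key="expensive_view", every=100):
    # INCR is atomic, so the count stays exact across processes and threads.
    hits = r.incr("hits:%s" % key)
    if hits % every == 0:
        # Recycle the cached view result; the next request rebuilds it.
        cache.delete(key)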

How can I have my Filter before a javax.websocket Upgrade

I want to write my own ServletContainerInitializer that adds my local filter to the ServletContext. I also want to manage the ordering of ServletContainerInitializer invocation so that my local filter gets registered and hit by the request before the websocket upgrade filter.
How do I get my local ServletContainerInitializer initialized first?
First, ServletContainerInitializers are not ordered; that feature is not part of the Servlet spec. You can't accomplish that part of your question (maybe in a future version of the Servlet spec).
Next, filtering on WebSocket upgrade requests is highly discouraged, and a cause of a large number of problems in WebSocket. You have to be very careful not to do any of the following:
Do not access anything on the Response object
Do not wrap the Request or Response objects
Do not access the Request input streams or readers
Do not access the Response output streams or writers
Do not add headers
Do not change headers
Do not access request HttpSession
Do not access request user principal
Do not access request authentication / authorization methods
Do not access request parts (multipart/form-data)
Do not access request parameters
Do not access ServletContext
Do not access request.getScheme or isSecure
Do not remove things from the request (attributes, headers, parameters, etc)
In short, the only safe things you can do are:
request.getAttribute(String name)
request.getContextPath()
request.getCookies()
request.getHeader(String name)
request.getIntHeader(String name)
request.getLocalName()
request.getLocalPort()
request.getPathInfo()
request.getPathTranslated()
request.getQueryString()
request.getRemoteAddr()
request.getRemotePort()
request.getRequestURI()
request.getRequestURL()
request.getServerName()
request.getServerPort()
All other accesses on the request or response objects will change the state of the request and prevent an upgrade.
The fact that Jetty has a WebSocketUpgradeFilter is just our implementation choice for the JSR-356 (aka javax.websocket) spec. It is added by a server-side ServletContainerInitializer and is forced to be first, always.
In practice you should work with the expectation that upgrades occur before the Servlet processing (and this includes filters), as this is how the spec is written. There are open bugs against the spec about how interactions with filters and whatnot should be treated, but those are currently unanswered and loosely scheduled for a future version of the javax.websocket spec.
Future versions of Jetty will likely change from using a filter to using something internal that cooperates at the path mapping level, merging the logic from the Servlet spec and the WebSocket spec into a single new set of rules.
Since this question comes up often, I've ticked the community wiki flag.
The number one reason this gets asked is because there is some authentication or authorization logic built into a filter on your project.
If this is the case, you have 2 options:
1. Refactor the authentication and/or authorization logic out into a standalone class, unassociated with your Filter. Then build a new Filter and a new ServerEndpointConfig.Configurator that use this now-common logic to accomplish the end result you need. Note that you do not have access to the entire HttpServletRequest object during a potential WebSocket upgrade; you only have access to the HandshakeRequest contents (you can see the reason for the restrictions now). A rough sketch follows below.
2. Use the Servlet spec and container properly, and implement/configure security at the container level, which always executes before websockets, servlets, or filters. You can then drop your security-based Filters entirely.
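As a rough sketch of option 1 (AuthChecker is a hypothetical class holding the refactored-out logic; this is a sketch under those assumptions, not a definitive implementation):
import java.util.List;
import javax.websocket.HandshakeResponse;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpointConfig;

public class AuthConfigurator extends ServerEndpointConfig.Configurator {
    @Override
    public void modifyHandshake(ServerEndpointConfig sec,
                                HandshakeRequest request,
                                HandshakeResponse response) {
        // Only the HandshakeRequest contents are available here (no
        // HttpServletRequest), which mirrors the restrictions listed above.
        List<String> auth = request.getHeaders().get("Authorization");
        boolean ok = AuthChecker.isAllowed(auth); // hypothetical shared logic
        // JSR-356 has no direct "reject" hook here; stash the result and let
        // the endpoint close the session in onOpen if the check failed.
        sec.getUserProperties().put("authorized", ok);
    }
}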

How does a cookie match work if browser security restricts 1 domain from reading the cookie set by another domain?

How are cookies matched between DSPs like DoubleClick and DMPs like BlueKai for the purpose of ad serving if browser security prevents 1 party from reading the cookie of another party?
From what I've read, the DSP ad pixel would piggyback on the DMPs container tag so that each time the DMP's pixel is called the DSP's pixel is called. At this point, what information can be passed from the DMP to the DSP that allows the DSP to equate its ViewerId to the DMPs ViewerId?
Perhaps I'm misunderstanding how piggybacking works. Any help is greatly appreciated.
Thanks!
Usually you place a DMP container tag on the page (there are other ways as well; this is just one of the standard approaches). The first request hits the DMP, and the response contains the DMP id plus a bunch of redirects from the partners (a DSP's pixel link could be one of them; if you are using BlueKai, these seats are biddable through their data marketplace). The browser then hits all of these redirects with the DMP id attached, so each DSP learns the mapping from the DMP id to its own id. The responses of these redirects return each partner's unique id, so the DMP can store the mappings as well. A simplified explanation can be found at http://www.adopsinsider.com/online-ad-measurement-tracking/data-management-platforms/syncing-online-data-to-a-data-management-platform/
The params passed by HTTP GET or POST are usually cookie ids; the actual data sync is usually carried out through real-time or, more often, batch server-to-server communication.
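To make the piggybacking concrete, a hypothetical redirect flow (hosts, paths, and ids here are all made up) could look like:
GET http://dmp.example/pixel                    Cookie: dmp_uid=D123
  -> 302 Location: http://dsp.example/sync?dmp_uid=D123
GET http://dsp.example/sync?dmp_uid=D123        Cookie: dsp_uid=S456
  -> the DSP now stores the mapping S456 <-> D123, and may redirect back
     with its own id so the DMP can store the reverse mapping as well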

AppFabric Syncing Local Caches

We have a very simple AppFabric setup with two clients -- let's call them Server A and Server B. Server A is also the lead cache host, and both Server A and B have a local cache enabled. We'd like to be able to make an update to an item from Server B and have that change propagate to the local cache of Server A within 30 seconds (for example).
As I understand it, there appear to be two different ways of getting changes propagated to the client:
Set a timeout on the client cache to evict items every X seconds. On next request for the item it will get the item from the host cache since the local cache doesn't have the item
Enable notifications and effectively subscribe to get updates from the cache host
If my requirement is to get updates to all clients within 30 seconds, then setting a timeout of less than 30 seconds on the local cache appears to be the only choice with option #1 above. Given the size of the cache, though, evicting all of it would be inefficient (99.99% of it probably hasn't changed in the last 30 seconds).
I think what we need to implement is option #2 above, but I'm not sure I understand how this works. I've read all of the MSDN documentation (http://msdn.microsoft.com/en-us/library/ee808091.aspx) and have looked at some examples, but it is still unclear to me whether it is really necessary to write custom code, or whether that is only needed for extra handling.
So my question is: is it necessary to add code to your existing application if you want updates propagated to all local caches via notifications, or is the callback feature just a bonus way of adding extra handling when a notification is pushed down? Can I just enable notifications, set the appropriate polling interval at the client, and have things just work?
It seems like the default behavior (when Notifications are enabled) should be to pull down fresh items automatically at each polling interval.
I ran some tests and am happy to say that you do NOT need to write any code to ensure that all clients are kept in sync, if you enable notifications as a child element of the cluster config, along these lines:
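<!-- a reconstruction, since the original snippet did not survive formatting;
     the key setting is notificationsEnabled on the cache definition
     ("default" is a placeholder cache name) -->
<caches>
    <cache name="default" notificationsEnabled="true" />
</caches>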
In the client config you need to set sync="NotificationBased" on the localCache element.
The clientNotification element in the client config tells the client how often it should check for new notifications on the server. With a pollInterval of 15, every 15 seconds the client will check for notifications and pull down any items that have changed.
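For reference, a client-side config along these lines ties the two settings together (the values are examples; the host name and port are placeholders):
<dataCacheClient>
    <!-- sync="NotificationBased" switches local-cache invalidation from
         timeout-based eviction to notification-based updates -->
    <localCache isEnabled="true" sync="NotificationBased" objectCount="100000" ttlValue="300" />
    <!-- poll the cluster for change notifications every 15 seconds -->
    <clientNotification pollInterval="15" maxQueueLength="10000" />
    <hosts>
        <host name="ServerA" cachePort="22233" />
    </hosts>
</dataCacheClient>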
I'm guessing the callback logic that you can add to your app is just in case you want to add your own special logic (like emailing the president every time an item changes in the cache).

What HTTP status code should I use for a GET request that may return stale data?

The scenario is: I'm implementing a RESTful web service that will act as a cache for entities stored on a remote C system. One of the web service's requirements is that, when the remote C system is offline, it should answer GET requests with the last cached data, but flag it as "stale".
The way I was planning to flag the data as stale was returning an HTTP status code other than 200 (OK). I considered using 503 (Service Unavailable), but I believe that would make some C#/Java HTTP clients throw exceptions, which would indirectly force users to use exceptions for control flow.
Can you suggest a more appropriate status code? Or should I just return 200 and add a staleness flag to the response body? Another option would be defining a separate resource that informs the connectivity state, and let the clients handle that separately.
Simply set the Last-Modified header appropriately, and let the client decide if it's stale. Stale data will have the Last-Modified date farther back than "normal". For fresh data, keep the Last-Modified header current.
I would return 200 OK and an appropriate application-specific response. No other HTTP status code seems appropriate, because the decision of whether and how to use the response is being passed to the client. I would also advise against using standard HTTP cache control headers for this purpose; use them only to control third-party (intermediary and client) caches. Using these headers to communicate application-specific information unnecessarily ties application logic to cache control. While it might not be immediately obvious, there are real long-term benefits in the ability to evolve application logic and caching strategy independently.
If you are serving stale responses, RFC 2616 says:
If a stored response is not "fresh enough" by the most
restrictive freshness requirement of both the client and the
origin server, in carefully considered circumstances the cache
MAY still return the response with the appropriate Warning
header (see section 13.1.5 and 14.46), unless such a response
is prohibited (e.g., by a "no-store" cache-directive, or by a
"no-cache" cache-request-directive; see section 14.9).
In other words, serving 200 OK is perfectly fine.
In Mark Nottingham's caching article, he says:
Under certain circumstances — for example, when it’s disconnected from
a network — a cache can serve stale responses without checking with
the origin server.
In your case, your web service is behaving like an intermediary cache.
A representation is stale when its Expires time has passed or its max-age has been exceeded. Therefore, if you returned a representation with
Cache-Control: max-age=0
then you are effectively saying that the representation you are returning is already stale. Assuming that, when you retrieve representations from the C system, the data can be considered fresh for some non-zero amount of time, your web service can return representations with something like
Cache-Control: max-age=3600
The client can then check the Cache-Control header for max-age=0 to determine whether the representation was already stale when it was first retrieved.
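Putting that together, a minimal Flask-style sketch (fetch_or_cache is a hypothetical helper that returns the data plus whether the C system was reachable):
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/entities/<key>")
def entity(key):
    fresh, body = fetch_or_cache(key)   # hypothetical helper
    resp = make_response(body, 200)     # 200 OK whether fresh or stale
    # max-age=0 marks the representation as already stale; a non-zero
    # value says how long the client may treat it as fresh.
    resp.headers["Cache-Control"] = "max-age=3600" if fresh else "max-age=0"
    return resp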