AppFabric simulate cache failure

Is there a way to simulate a Cache failure in AppFabric?
I want to hit the AddFailureNotificationCallback delegate in my code, for testing purposes.
Thanks

Stackdriver Trace with Google Cloud Run

I have been diving into a Stackdriver Trace integration on Google Cloud Run. I can get it to work with the agent, but a few questions are bothering me.
Given that:
The Stackdriver agent aggregates traces in a small buffer and sends them periodically.
CPU access is restricted when a Cloud Run service is not handling a request.
There is no shutdown hook for Cloud Run services; you can't clear the buffer before shutdown: the container just gets a SIGKILL. This is a signal you can't catch from your application.
Running a background process that sends information outside of the request-response cycle seems to violate the Knative Container Runtime contract.
The collection of logging data is documented and does not require me to run an agent, but there is no such solution for telemetry.
I found one report of someone experiencing lost traces on Cloud Run using the agent-based approach.
How Google does it
I went into the source code of the Cloud Endpoints ESP (the Cloud Run integration is in beta) to see whether they solve it in a different way, but the same pattern is used there: there is a buffer of traces (1s) and it is cleared periodically.
Question
While my tracing integration seems to work in my test setup, I am worried about incomplete and missing traces when I run this in a production environment.
Is this a hypothetical problem or a real issue?
It looks like the right way to approach this is to write telemetry to logs, instead of using an agent process. Is that supported with Stackdriver Trace?
Is this a hypothetical problem or a real issue?
If you consider a Cloud Run service receiving a single request, then it is definitely a problem, as the library will not have time to flush the data before the CPU of the container instance gets throttled.
However, in real life use cases:
A Cloud Run service often receives requests continuously or frequently, which means that its container instances are going to either continuously have CPU or have CPU available from time to time.
It is OK to drop traces: if some traces are not collected because the instance is shut down, it is likely that you have collected a diverse enough set of samples before this happens. Also, you might just be interested in the aggregated reports, in which case collecting individual traces does not matter.
Note that trace libraries usually sample the requests to trace themselves; they rarely trace 100% of the requests.
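For instance, with the OpenCensus Go library the sampler is configurable; a minimal sketch (the 10% rate is only an illustration, not a recommendation):

    package tracing

    import "go.opencensus.io/trace"

    func init() {
        // OpenCensus samples probabilistically rather than tracing every
        // request; the fraction is configurable.
        trace.ApplyConfig(trace.Config{
            DefaultSampler: trace.ProbabilitySampler(0.1), // trace roughly 10% of requests
        })
    }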
It looks like the right way to approach this is to write telemetry to logs, instead of using an agent process. Is that supported with Stackdriver Trace?
No, Stackdriver Trace takes its data from the spans sent to its API. Note that to send data to Stackdriver Trace, you can use libraries like OpenCensus and OpenTelemetry; the proprietary Stackdriver Trace libraries are no longer the recommended way.
You're right. This is a fair concern since most tracing libraries tend to sample/upload trace spans in the background.
Since (1) your CPU is scaled nearly to zero when the container isn't handling any requests and (2) the container instance can be killed at any time due to inactivity, you cannot reliably upload the trace spans collected in your app. As you said, it may sometimes work since we don't fully stop the CPU, but it won't always work.
It appears that some of the Stackdriver (and/or OpenTelemetry, f.k.a. OpenCensus) libraries let you control the lifecycle of pushing trace spans.
For example, this Go package for the OpenCensus Stackdriver exporter has a Flush() method that you can call before completing your request, rather than relying on the runtime to periodically upload the trace spans: https://godoc.org/contrib.go.opencensus.io/exporter/stackdriver#Exporter.Flush
I assume other tracing libraries in other languages also expose similar Flush() methods; if not, please let me know in the comments, as this would be a valid feature request for those libraries.
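To make that concrete, here is a minimal Go sketch of the pattern, assuming the OpenCensus Stackdriver exporter linked above; the project ID, route, and handler body are placeholders:

    package main

    import (
        "log"
        "net/http"

        "contrib.go.opencensus.io/exporter/stackdriver"
        "go.opencensus.io/trace"
    )

    func main() {
        // "my-project" is a placeholder GCP project ID.
        sd, err := stackdriver.NewExporter(stackdriver.Options{ProjectID: "my-project"})
        if err != nil {
            log.Fatalf("failed to create Stackdriver exporter: %v", err)
        }
        trace.RegisterExporter(sd)

        // Trace every request just for this sketch; in production you would
        // normally keep a probabilistic sampler.
        trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            _, span := trace.StartSpan(r.Context(), "handle-request")

            // ... do the real work of the request here ...
            w.Write([]byte("ok"))

            // End the span and flush buffered spans before the request
            // completes, instead of relying on the background uploader,
            // which may never run once the instance's CPU is throttled.
            span.End()
            sd.Flush()
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Flushing synchronously adds the upload latency to every traced request; that is the trade-off for not depending on background CPU.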
Cloud Run now supports sending SIGTERM. If your application handles SIGTERM, it gets a 10-second grace period before shutdown.
You can use the 10 seconds to:
Flush buffers that have unsent data (as sketched below)
Close connections to other systems
Docs: Container runtime contract
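A minimal Go sketch of that shutdown path, assuming an exporter with a Flush() method like the one above; the package and function names are just for illustration:

    package shutdown

    import (
        "log"
        "os"
        "os/signal"
        "syscall"

        "contrib.go.opencensus.io/exporter/stackdriver"
    )

    // WaitForTermAndFlush blocks until Cloud Run sends SIGTERM, then uses the
    // 10-second grace period to push out any spans still in the exporter buffer.
    func WaitForTermAndFlush(sd *stackdriver.Exporter) {
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM)
        <-sigs

        log.Println("SIGTERM received, flushing buffered telemetry")
        sd.Flush()
        // A real service would also drain in-flight requests (for example via
        // http.Server.Shutdown) and close connections to other systems here.
    }

It could be started in a goroutine (go shutdown.WaitForTermAndFlush(sd)) right after the exporter is created.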

What should unit tests cover in an app that displays a list from an API?

I have an app that connects to an API endpoint and displays a list of people. I want to write a unit test for the app, but I am not sure what to test here. There is no arithmetic operation happening; it's just fetching the data from the API and displaying it.
What should the unit test cover in such a scenario?
If a test can never fail, then it's not really testing anything. In your case, though, a network API call is being made, and network calls can fail all the time. Depending on how you are making your network call, you can either:
create a fake web server that can return a variety of error codes
create a mock API service that can return a variety of error codes
not test anything at all
There are all kinds of tests you can use: behavioural, unit, functional, integration, black box, user acceptance testing.
What does testing do for you? Does it document code behaviour? Does it lock in the behavior of a function? Does it ensure that something works?
Depending on your needs, you may not need a test. Or, you may need a lot more. It's up to you.
Unit tests are designed to ensure that a behavior or set of behaviors occur(s) when you invoke a unit of code.
In this case, you have code that is fetching the data from an API and returning it. You might want to test the following:
Your code makes a network call to the API.
When the API returns a successful response, your app renders the data.
When the API returns a failed response, you gracefully handle the failure.
Of course these steps will probably vary depending on your use case. You can look into stubbing the API to understand how you can simulate API invocation failures.
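As an illustration, here is a minimal Go sketch that stubs the API with a fake HTTP server; fetchPeople is a hypothetical stand-in for whatever function your app actually uses to load the list:

    package people

    import (
        "encoding/json"
        "errors"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // fetchPeople is the unit under test: it fetches a JSON array of names
    // from the given URL. (Hypothetical; your app's fetch function will differ.)
    func fetchPeople(url string) ([]string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, errors.New("unexpected status: " + resp.Status)
        }
        var names []string
        if err := json.NewDecoder(resp.Body).Decode(&names); err != nil {
            return nil, err
        }
        return names, nil
    }

    func TestFetchPeopleSuccess(t *testing.T) {
        // Stub the API with a fake server that returns a successful response.
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(`["Alice","Bob"]`))
        }))
        defer srv.Close()

        names, err := fetchPeople(srv.URL)
        if err != nil {
            t.Fatalf("unexpected error: %v", err)
        }
        if len(names) != 2 || names[0] != "Alice" {
            t.Errorf("unexpected result: %v", names)
        }
    }

    func TestFetchPeopleServerError(t *testing.T) {
        // Stub the API with a fake server that returns a failed response.
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            http.Error(w, "boom", http.StatusInternalServerError)
        }))
        defer srv.Close()

        if _, err := fetchPeople(srv.URL); err == nil {
            t.Error("expected an error for a 500 response, got nil")
        }
    }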

ColdFusion logging vs performance

I am wondering what kind of performance hit I can expect if I enable logging for the following two logs in ColdFusion 8 on an IIS web server connected to a SQL Server 2005 database.
Log slow pages taking longer than
Enable logging for scheduled tasks
It's a very subjective question and depends on your settings, architecture and load. Generally, logging takes a very small portion of your server's processing in comparison to everything else it does; however, the amount of logging and your log retention policy can affect your server's performance if not tuned properly.
With those caveats in mind, I will attempt to address each setting:
Slow Pages logging: This depends on your threshold for slow pages. If your threshold is reasonable and all your pages are still being logged, then the performance issue likely lies in the pages themselves, not in the logging of said pages.
Scheduled Tasks: The amount logged depends on the number of scheduled tasks and how often each one runs, but logging each execution takes up very little space in the logs; the only real issue would be the size and retention policy of the logs.
You can launch the server monitor and it won't have any impact on your server at all.
That would at least let you see what's going on in real time.

Throttling HTTP API calls with a delay

I'm trying to implement some throttles on our REST API. A typical approach is to block requests after a certain threshold is exceeded (with a 403 or 429 response). However, I've seen one API that adds a delay to the response instead.
As you make calls to the API, we will be looking at your average calls per second (c/s) over the previous five-minute period. Here's what will happen:
over 3c/s and we add a 2 second delay
over 5c/s and we add a 4 second delay
over 7c/s and we add a 5 second delay
From the client's perspective, I see this being better than getting back an error. The worst that can happen is that you'll slow down.
I am wondering how this can be achieved without negatively impacting the app server. That is, to add those delays, the server needs to keep the request open, which keeps more and more request processors busy and leaves less capacity for new incoming requests.
What's the best way to accomplish this? (i.e. is this something that can be done on the web server / load balancer so that the application server is not negatively affected? Is there some kind of a throttling layer that can be added for this purpose?)
We're using Django/Tastypie, but the question is more on the architecture/conceptual level.
If you are using a synchronous application server, which is the most common setup for Django applications (for example, gunicorn with the default --worker-class sync), then adding such a delay in the application would indeed have a very bad impact on performance: a worker handling a delayed request is blocked for the entire delay period.
But you can use an asynchronous application server (for example, gunicorn with --worker-class gevent), and then the overhead should be negligible. A worker handling a delayed request is able to serve other requests while the delay is in progress.
Doing this in the reverse proxy may be a better option, because it allows you to adjust the policy easily and flexibly. There is an external nginx module for exactly this purpose.
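To illustrate why the delay is cheap when requests are handled concurrently, here is a minimal sketch in Go (not the asker's Django stack); the thresholds echo the tiers quoted in the question, and the naive per-client counter is a placeholder for a real five-minute sliding window:

    package main

    import (
        "log"
        "net/http"
        "sync"
        "time"
    )

    // delayThrottle is an illustrative middleware: it counts each client's
    // requests and sleeps before responding once the client exceeds a threshold.
    type delayThrottle struct {
        mu     sync.Mutex
        counts map[string]int
        next   http.Handler
    }

    func (d *delayThrottle) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        d.mu.Lock()
        d.counts[r.RemoteAddr]++
        n := d.counts[r.RemoteAddr]
        d.mu.Unlock()

        // Rough tiers; a real implementation would track calls per second
        // over a sliding five-minute window rather than a raw counter.
        switch {
        case n > 7:
            time.Sleep(5 * time.Second)
        case n > 5:
            time.Sleep(4 * time.Second)
        case n > 3:
            time.Sleep(2 * time.Second)
        }

        // The sleep above parks only this request; other requests keep
        // being served concurrently.
        d.next.ServeHTTP(w, r)
    }

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        log.Fatal(http.ListenAndServe(":8080", &delayThrottle{counts: map[string]int{}, next: handler}))
    }

Because Go's HTTP server runs each request in its own goroutine, the time.Sleep parks only that request, much like a gevent worker yielding during the delay; a synchronous worker would instead sit blocked for the whole delay.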

How heavy for the server to transmit data over HTTPS?

I am trying to implement web service and web client applications using Ruby on Rails 3. For that I am considering using SSL, but I would like to know: how "heavy" is it for servers to handle a lot of HTTPS connections instead of HTTP? What is the difference in response time and overall performance?
The cost of the SSL/TLS handshake (which accounts for most of the overall "slowdown" SSL/TLS adds) is nowadays much smaller than the cost of TCP connection establishment and the other actions associated with session establishment (logging, user lookup, etc.). And if you worry about speed and want to save every last nanosecond, there are hardware SSL accelerators that you can install in your server.
Going with HTTPS is several times slower; however, most of the time that's not what is actually going to slow your app down. Especially if you're running on Rails, your performance scaling is going to be bottlenecked elsewhere in the system. If you are doing anything that requires passing secrets of any kind over the wire (including a shared session cookie), SSL is the only way to go, and you probably won't notice the cost. If you happen to scale up to the point where you do start to see a performance hit from encryption, there are hardware acceleration appliances out there that help tremendously. However, Rails is likely to fall over long before that point.