Should the django-health-check endpoint /ht/ be accessible to everybody?

From the documentation linked here I read:
This project checks for various conditions and provides reports when anomalous behavior is detected. The following health checks are bundled with this project: cache, database, storage, disk and memory utilization (via psutil), AWS S3 storage, Celery task queue, Celery ping, RabbitMQ, Migrations
and from the use case section:
The primary intended use case is to monitor conditions via HTTP(S), with responses available in HTML and JSON formats. When you get back a response that includes one or more problems, you can then decide the appropriate course of action, which could include generating notifications and/or automating the replacement of a failing node with a new one.
And then
The /ht/ endpoint will respond with an HTTP 200 if all checks passed and an HTTP 500 if any of the tests failed.
From a security point of view: should this URL (https://example.com/ht) be reachable by everybody? It seems to give away quite a bit of information.
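If the answer is that it shouldn't be public, I assume something like the following middleware would be one way to lock it down (a rough sketch only; HEALTHCHECK_ALLOWED_IPS is just a name I made up, not a django-health-check setting):

    # Sketch: only let known probe/monitoring IPs reach /ht/.
    from django.conf import settings
    from django.http import HttpResponseForbidden

    class HealthCheckAccessMiddleware:
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            if request.path.startswith("/ht/"):
                client_ip = request.META.get("REMOTE_ADDR", "")
                allowed = getattr(settings, "HEALTHCHECK_ALLOWED_IPS", [])
                if client_ip not in allowed:
                    return HttpResponseForbidden("Health checks are not public")
            return self.get_response(request)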

Google Cloud Pub/Sub and multiple receivers

My use case is this:
I have X instances of an App Engine NodeJS backend.
I use Redis Cloud Memorystore as a caching layer.
To minimize Redis use and improve performance, each backend instance also has its own local cache.
Rules are like this, when a request comes in to a backend instance:
first check local cache
if found, use that.
if not found, look in Redis cache
update local cache with data from Redis
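In rough Python pseudo-code (my backend is NodeJS; this is only to illustrate the rules above, and the names and TTLs are made up):

    import json
    import time
    import redis

    local_cache = {}                       # instance-local: key -> (value, expires_at)
    r = redis.Redis(host="10.0.0.3", port=6379)
    LOCAL_TTL = 60                         # local entries time out much quicker than Redis

    def get(key):
        entry = local_cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                                    # local hit
        raw = r.get(key)                                       # fall back to Redis
        if raw is None:
            return None
        value = json.loads(raw)
        local_cache[key] = (value, time.time() + LOCAL_TTL)    # refresh local copy
        return value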
The local cache times out much quicker than the Redis cache, but I would still like to be able to send a "clear cache" message to all instances.
The entire thing works extremely well, but I'd like to be able to update the local caches by sending a pub/sub message which in turn would be received by all backend instances, clearing the local caches.
Yes, I could just opt not to use local caching and use only the Redis Memorystore, but that would mean network traffic for every Redis lookup, slower responses, and more Redis resource use.
Is this possible at all, or is there only one receiver for each pushed message?
I know the receiver can opt not to ack a message, but how do I make sure all instances have received it, and how do I know when that has happened so the last instance can send the ack? It seems impossible.
Your use case is uncommon, but IMO, you have 2 solutions:
Use Redis
Put a clear-cache datetime in Redis. On each request, the App Engine instance reads that clear-cache datetime. If the local value (the value seen on the previous check, or null if there has been no previous check, as with a new instance) is behind the Redis value, clear the local cache; otherwise continue.
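A hedged sketch of this first solution (the key name and the use of a Unix timestamp are illustrative):

    import time
    import redis

    r = redis.Redis()
    local_cache = {}
    last_seen_clear = None                 # this instance's copy of the clear-cache marker

    def maybe_clear_local_cache():
        """Run at the start of each request, before the caching rules above."""
        global last_seen_clear
        marker = r.get("cache:cleared_at")             # set by whoever triggers the clear
        marker = float(marker) if marker else None
        if marker is not None and (last_seen_clear is None or marker > last_seen_clear):
            local_cache.clear()
            last_seen_clear = marker

    def broadcast_clear():
        """Call this once; every instance notices on its next request."""
        r.set("cache:cleared_at", time.time())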
Use PubSub
(Not my preferred option.) When an instance starts, create a pull subscription on a Pub/Sub topic and pull the messages. When you receive one, clear the local cache.
With Pub/Sub, when you publish a message to a topic, it is duplicated into every subscription. With that pattern, each instance has its own subscription and therefore receives the message.
However, you could hit the quota on the number of subscriptions. To limit that, set the subscription's expiration to 1 day, so it is cleaned up automatically after a day without any subscriber.
I'm not sure App Engine gives instances a teardown notice that would leave you time to delete the subscription cleanly.
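A hedged sketch of this second solution using the Python Pub/Sub client (the project and topic names are illustrative, and the NodeJS client has equivalent calls):

    import uuid
    from google.cloud import pubsub_v1

    project = "my-project"                                             # illustrative
    topic = f"projects/{project}/topics/clear-cache"
    subscription = f"projects/{project}/subscriptions/clear-cache-{uuid.uuid4().hex}"

    local_cache = {}                       # stand-in for the instance's local cache
    subscriber = pubsub_v1.SubscriberClient()

    # One subscription per instance; let it expire after 1 day without a
    # subscriber so abandoned subscriptions don't pile up against the quota.
    subscriber.create_subscription(
        request={
            "name": subscription,
            "topic": topic,
            "expiration_policy": {"ttl": {"seconds": 86400}},
        }
    )

    def callback(message):
        local_cache.clear()                # drop the instance-local cache
        message.ack()

    future = subscriber.subscribe(subscription, callback=callback)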

Spring Data Neo4J - Unable to acquire connection from pool within configured maximum time

We have a Reactive REST API using Spring Data Neo4j (Spring Boot v2.7.5) deployed to Kubernetes. When running a stress test to determine the breaking point, once the volume of requests that the service can handle has been exceeded, the service does not auto-recover, even after the load has dropped to a level the service can handle.
After the service has fallen over the Neo4J health indicator shows:
“org.neo4j.driver.exceptions.ClientException: Unable to acquire connection from the pool within configured maximum time of 60000ms”
With respect to connection/configuration settings we are using defaults configured by SDN.
Observations:
Up until the point at which the service breaks, only a small number of connections are utilised; at the point at which it breaks, the number of connections in use jumps up to the max pool size and the above-mentioned error is observed. No matter how much time passes (even well beyond the max connection lifetime), the service is unable to acquire a connection from the pool. Upon manually shutting down and restarting the service/pod, the service returns to a healthy state.
As an interim solution we now check the Neo4J health indicator, if the mentioned error is present the liveness state is set to down which triggers Kubernetes to restart the service automatically. However, I’m wondering if there is an underlying issue with the connections in the pool not getting ‘cleaned up’?
You can take a look at this discussion https://github.com/spring-projects/spring-data-neo4j/issues/2632
I had the same issue. The problem is that either Spring Framework or the Neo4j reactive transaction manager doesn't close connections in a complex reactive flow, e.g. when there are a lot of inner calls/mappings and an exception is thrown somewhere inside.
So as a workaround you can add @Transactional in such places to avoid multiple transactions being created.

Making GCP Load-balancer HTTP logs integrate with Cloud Trace (in GCP Logs Explorer)?

Cloud Trace and Cloud Logging integrate quite nicely in most cases, described in https://cloud.google.com/trace/docs/trace-log-integration
Unfortunately, this doesn't seem to include the HTTP request logs generated by a Load Balancer when request logging is enabled.
The LB logs show the traces icon, and are correctly associated with an overall trace in the Cloud Trace system, but the context menu 'show trace details' is greyed out for those log items.
A similar problem arose with my application level logging/tracing, and was solved by setting the traceSampled attribute on the LogEntry, but this can't work for LB logs, since I'm not in control of their generation.
In this instance I'm tracing 100% of requests since the service is M2M and fairly low volume, but in the general case it makes sense that the LB can't know if something is actually generating traces without being told.
I can't find any good references in the docs, but in theory a response header indicating it was sampled could be observed by the LB and cause it to issue the appropriate log.
Any ideas if such a feature exists, in this form or any other?
(Last-ditch workaround might be to use Logs Router to feed LB logs into a pubsub queue (and exclude them from normal logging sinks), and resubmit them to the normal sink(s) with fields appropriately set by some Cloud Function or other pubsub consumer, but that seems like a lot of work and complexity for this purpose)
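If I did go that route, I imagine the resubmitting consumer would look roughly like this (a sketch only; it assumes a log sink delivers the exported LogEntry JSON to Pub/Sub, and that the logging client accepts a trace_sampled attribute, which recent google-cloud-logging versions expose):

    import base64
    import json
    from google.cloud import logging

    client = logging.Client()
    logger = client.logger("lb-requests-resubmitted")          # illustrative log name

    def resubmit(event, context):
        """Pub/Sub-triggered Cloud Function: re-emit the LB entry, marked as sampled."""
        entry = json.loads(base64.b64decode(event["data"]))
        logger.log_struct(
            entry.get("jsonPayload") or {"message": "LB request log (resubmitted)"},
            trace=entry.get("trace"),
            span_id=entry.get("spanId"),
            trace_sampled=True,                                 # the attribute the LB omits
            http_request=entry.get("httpRequest"),
            severity=entry.get("severity", "INFO"),
        )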
There is currently a Feature Request created for this; you can follow its status in the following link 1.
As a workaround, you could implement target proxies along with your Load Balancer, according to the documentation for a Global external HTTP(S) load balancer:
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only). Contains parameters for Cloud Trace.
X-Forwarded-For: [<supplied-value>,]<client-ip>,<load-balancer-ip> (see X-Forwarded-For header) (requests only)
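For reference, the X-Cloud-Trace-Context value splits apart like this (illustrative parsing; o=1 means the request is traced):

    def parse_trace_context(header):
        """Split 'TRACE_ID/SPAN_ID;o=TRACE_TRUE' into its parts."""
        trace_part, _, options = header.partition(";")
        trace_id, _, span_id = trace_part.partition("/")
        sampled = options == "o=1"
        return trace_id, span_id, sampled

    # e.g. parse_trace_context("105445aa7843bc8bf206b12000100000/1;o=1")
    # -> ("105445aa7843bc8bf206b12000100000", "1", True)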
You can find the complete documentation about external HTTP(S) load balancers and target proxies here 2.
And finally, take a look at the documentation on how to use and configure target proxies here 3.

How can I filter out errors on sentry to avoid consuming my quota?

I'm using Sentry to log my errors, but there are errors I'm not able to fix (or that cannot be fixed by me), like
OSError (write error)
or errors that come from RQ (each time I deploy my app),
or client errors (which are client.errors).
I can't just ignore them because they consume all my quota. How can I filter out these errors?
Here are some references for interested people.
uwsgi: OSError: write error during GET request
Fixing broken pipe error in uWSGI with Python
https://github.com/unbit/uwsgi/issues/1623
I created a Gist for rate-limiting the number of events that are sent to Sentry:
https://gist.github.com/jurrian/e22f8e724b8499a29c5537e956f0dc7f
It uses ratelimitingfilter, which can be configured to set a rate per minute and, additionally, a burst so that rate limiting only starts after a number of events.
I get the same errors, but I never had any problems with my quota. But if you really want to filter them, you can just do it in your SDK:
https://docs.sentry.io/error-reporting/configuration/filtering/?platform=python
But beware, this could hide other errors as mentioned here:
https://github.com/pypa/warehouse/issues/679
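For example, with the Python SDK a before_send hook can drop events before they ever leave the process (a sketch; the matched strings and the DSN are illustrative and should be adjusted to your own noisy errors):

    import sentry_sdk

    IGNORED_SUBSTRINGS = ("OSError (write error)", "Broken pipe")    # illustrative

    def before_send(event, hint):
        exc_info = hint.get("exc_info")
        if exc_info and any(s in str(exc_info[1]) for s in IGNORED_SUBSTRINGS):
            return None                    # drop the event entirely
        return event

    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",        # illustrative DSN
        before_send=before_send,
    )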
To save yourself some quota, you have two options:
Avoid forwarding events client-side, thus preventing events from being sent to Sentry at all. Have a look at the docs for available client-side filters. The drawback with this approach is of course that you need a new code deployment for any adjustment of the client-side filters, and some clients may not instantly reflect your code changes.
Avoid forwarding events on Sentry's side, via inbound filters ([Project] > Project Settings > Inbound Filters). According to the Sentry documentation on quota usage, events filtered via inbound filters do not count against your quota.
Inbound filters include:
Common browser extension errors
Events coming from localhost
Known legacy browsers errors
Known web crawlers
By their error message
From specific release versions of your code
From certain IP addresses
Business plans and above also allow filtering events by error message.

How do I add simple licensing to an API when using AWS CloudFront to cache queries?

I have an application deployed on AWS Elastic Beanstalk. I added some simple licensing to stop abuse of the API: the user has to pass a license key as a field, i.e.
search.myapi.com/?license=4ca53b04&query=fred
If this is not valid then the request is rejected.
However, until the monthly update the above query will always return the same data, so I now point search.myapi.com at an AWS CloudFront distribution; only if the query is not cached does it go to the actual server as
direct.myapi.com/?license=4ca53b04&query=fred
However, the problem is that if two users make the same query they won't be deemed the same by CloudFront, because the license parameter differs. So the CloudFront caching is only working at a per-user level, which is of no use.
What I want to do is have CloudFront ignore the license parameter for caching but not the other parameters. I don't mind too much if that means a user could access CloudFront with an invalid license, as long as they can't make a successful query to the server (since CloudFront calls are cheap but server calls are expensive, both in terms of CPU and monetary cost).
Perhaps what I need is something in front of CloudFront that does the license check and then strips out the license parameter, but I don't know what that would be?
Two possible solutions come to mind.
The first solution feels like a hack, but would prevent unlicensed users from successfully fetching uncached query responses. If the response is cached, it would leak out, but at no cost in terms of origin server resources.
If the content is not sensitive, and you're only trying to avoid petty theft/annoyance, this might be viable.
For query parameters, CloudFront allows you to forward all, cache on whitelist.
So, whitelist query (and any other necessary fields) but not license.
Results for a given query:
valid license, cache miss: request goes to origin, origin returns response, response stored in cache
valid license, cache hit: response served from cache
invalid license, cache hit: response served from cache
invalid license, cache miss: request goes to origin, origin returns error, error stored in cache.
Oops. The last condition is problematic, because authorized users will receive the cached error if they make the same query.
But we can fix this, as long as the origin returns an HTTP error for an invalid request, such as 403 Forbidden.
As I explained in Amazon CloudFront Latency, CloudFront caches error responses using different timers (not min/default/max-ttl), with a default of 5 minutes. This value can be set to 0 (or other values) for each of several individual HTTP status codes, like 403. So, for the error code your origin returns, set the Error Caching Minimum TTL to 0 seconds.
At this point, the problematic condition of caching error responses and playing them back to authorized clients has been solved.
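For concreteness, the two settings above correspond roughly to these fragments of a DistributionConfig, as boto3/the API would express them with the legacy forwarded-values settings (not a complete configuration):

    # Cache behavior fragment: forward the full query string, but key the cache
    # on "query" only, so "license" does not fragment the cache.
    cache_behavior_fragment = {
        "ForwardedValues": {
            "QueryString": True,
            "QueryStringCacheKeys": {"Quantity": 1, "Items": ["query"]},
            "Cookies": {"Forward": "none"},
        },
    }

    # Custom error response fragment: don't cache 403s from the origin.
    custom_error_responses_fragment = {
        "Quantity": 1,
        "Items": [{"ErrorCode": 403, "ErrorCachingMinTTL": 0}],
    }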
The second option seems like a better idea, overall, but would require more sophistication and probably cost slightly more.
CloudFront has a feature that connects it with AWS Lambda, called Lambda@Edge. This allows you to analyze and manipulate requests and responses using simple JavaScript functions that run at specific trigger points in the CloudFront signal flow.
Viewer Request runs for each request, before the cache is checked. It can allow the request to continue into CloudFront, or it can stop processing and generate a response directly back to the viewer. Responses generated here are not stored in the cache.
Origin Request runs after the cache is checked, only for cache misses, before the request goes to the origin. If this trigger generates a response, the response is stored in the cache and the origin is not contacted.
Origin Response runs after the origin response arrives, only for cache misses, and before the response goes into the cache. If this trigger modifies the response, the modified response is stored in the cache.
Viewer Response runs immediately before the response is returned to the viewer, for both cache misses and cache hits. If this trigger modifies the response, the modified response is not cached.
From this, you can see how this might be useful.
A Viewer Request trigger could check each request for a valid license key, and reject those without. For this, it would need access to a way to validate the license keys.
If your client base is very small or rarely changes, the list of keys could be embedded in the trigger code itself.
Otherwise, it needs to validate the key, which could be done by sending a request to the origin server from within the trigger code (the runtime environment allows your code to make outbound requests and receive responses via the Internet) or by doing a lookup in a hosted database such as DynamoDB.
Lambda@Edge triggers run in Lambda containers, and depending on traffic load, observations suggest that it is very likely that subsequent requests reaching the same edge location will be handled by the same container. Each container only handles one request at a time, but the container becomes available for the next request as soon as control is returned to CloudFront. As a consequence, you can cache the results in memory in a global data structure inside each container, significantly reducing the number of times you need to ascertain whether a license key is valid. The function either allows CloudFront to continue processing as normal, or actively rejects the invalid key by generating its own response. A single trigger will cost you a little under $1 per million requests that it handles.
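A sketch of such a Viewer Request trigger (written here in Python, which Lambda@Edge also supports; is_valid_license stands in for the embedded key list, the origin call, or the DynamoDB lookup mentioned above):

    from urllib.parse import parse_qs

    KNOWN_KEYS = {"4ca53b04"}            # illustrative; or populate from origin/DynamoDB

    def is_valid_license(key):
        return key in KNOWN_KEYS         # stand-in validation

    def handler(event, context):
        request = event["Records"][0]["cf"]["request"]
        params = parse_qs(request.get("querystring", ""))
        license_key = (params.get("license") or [None])[0]
        if not license_key or not is_valid_license(license_key):
            # Reject before the cache is checked; responses generated here are not cached.
            return {
                "status": "403",
                "statusDescription": "Forbidden",
                "headers": {
                    "content-type": [{"key": "Content-Type", "value": "text/plain"}],
                },
                "body": "Invalid or missing license key",
            }
        return request                   # continue into CloudFront as normal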
This solution prevents missing or unauthorized license keys from actually checking the cache or making query requests to the origin. As before, you would want to customize the query string whitelist in the CloudFront cache behavior settings to eliminate license from the whitelist, and change the error caching minimum TTL to ensure that errors are not cached, even though these errors should never occur.