Prometheus Pushgateway: how to increment message requests

I have a use case where we need to increment the number of requests received by a Nuclio serverless service. The pod is recreated each time the service is invoked. Following the examples from the Prometheus client library, I am not able to increment the request count using a Counter() or Gauge() object and the inc() method. Here is the code I tried:
from prometheus_client import CollectorRegistry, Counter, pushadd_to_gateway

registry = CollectorRegistry()
c = Counter('my_requests', 'HTTP Failures', ['method', 'endpoint'], registry=registry)
c.labels(method='get', endpoint='/').inc()
c.labels(method='post', endpoint='/submit').inc()
pushadd_to_gateway('localhost:8082', job='countJob', registry=registry)
I tried both push_to_gateway and pushadd_to_gateway; both resulted in the counter value for my_requests remaining 1.
Question: by creating the Counter object each time, does it reset the increment value back to 0? If so, how do we go about this for ephemeral jobs? Any code example would be helpful.
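Yes: each freshly created pod starts its Counter at 0, and pushing under the same job/grouping key replaces the previously pushed value, so the gateway always shows the last push (1). A common workaround is to give each ephemeral pod its own grouping key so pushes accumulate as separate series, then aggregate in PromQL. A minimal sketch, assuming the pod name is available in the HOSTNAME environment variable (the 'pod' label name is my choice, not something the library requires):
import os
from prometheus_client import CollectorRegistry, Counter, pushadd_to_gateway

registry = CollectorRegistry()
c = Counter('my_requests', 'Requests received', ['method', 'endpoint'], registry=registry)
c.labels(method='get', endpoint='/').inc()

# Push under a per-pod grouping key so each pod's push is kept as its own
# series on the gateway instead of overwriting the previous pod's value.
pushadd_to_gateway(
    'localhost:8082',
    job='countJob',
    registry=registry,
    grouping_key={'pod': os.environ.get('HOSTNAME', 'unknown')},
)
On the Prometheus side you would then query something like sum(my_requests) to get the total across pods. Note that series pushed by dead pods linger on the gateway until you delete them.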

Related

Is there any Callback option in GCP cloud function?

I am looking for a way to "wake up" the cloud function when a related process is done.
To understand in depth, these are the functions I have:
1. A cloud function that gets called every X time; its purpose is to call another function (function #2).
2. An external provider function that requests information (I can't edit its code; I only control the request body). The information is not received in real time; once the process ends, it sends a callback. Note that the process can take many minutes or even hours.
I want to create a process where, every X time, function 1 calls function 2, and as soon as the second finishes it returns the information to function 1, which stores it in the DB.
Example code for func1:
import requests

def entry_point():  # func1
    response = requests.get('https://outsourceapi.com/get_info')  # calls func2
    save_response_in_DB(response.json())  # this will happen only after getting the response
Because I cannot keep function 1 awake for that long, is there a way to "wake it up" again?
Or, alternatively, is there another solution?
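If the external provider can call a URL when it finishes, one common pattern is to split the flow into two HTTP-triggered functions: function 1 only starts the job, and a separate function acts as the callback endpoint that the provider "wakes up". This is a hedged sketch, not a confirmed API: the callback_url field is a hypothetical parameter (it depends entirely on what the provider's request body accepts), and save_response_in_DB stands in for the question's own helper:
import requests

def save_response_in_DB(data):
    pass  # stand-in for the question's DB helper

def start_job(request):  # func1: triggered every X time, e.g. by Cloud Scheduler
    # Hypothetical field: assumes the provider accepts a callback URL.
    requests.post(
        'https://outsourceapi.com/get_info',
        json={'callback_url': 'https://REGION-PROJECT.cloudfunctions.net/on_job_done'},
    )
    return ('Job started', 202)

def on_job_done(request):  # second HTTP function, "woken up" by the provider's callback
    save_response_in_DB(request.get_json())
    return ('OK', 200)
Neither function stays awake while the provider works. If the provider cannot call back, the usual fallback is a scheduled function that polls for the result every few minutes.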

Python3 Flask session not appearing to update

I'm using the code below to debug an issue where my session variable doesn't appear to be updated properly.
print(session)
session['review_status'] = 'Pending'
print('Session review_status is now: ' + session['review_status'])
print(session)
This is outputting the following:
<SecureCookieSession {'review_id': None, 'review_status': 'New'}>
Session review_status is now: Pending
<SecureCookieSession {'review_id': None, 'review_status': 'New'}>
I can't understand why the last print statement isn't reflecting that the review_status value should now be "Pending" and not "New".
The frontend fires off about 5 AJAX requests at once to this endpoint, but the first one should change the status to Pending, so by the time the other 4 return, it would be "Pending" for them.
It appears this was caused by the asynchronous calls from the frontend. Watching the debug output, several of the calls were "finishing" before the Flask session actually reflected the newly stored value.
When I converted the AJAX calls to "async: false", I got the expected behavior after the first call finished (the remainder were no longer "New", but "Pending").
I am going to leave this fix in place for now, but I would be interested in alternatives, and in a better understanding of how Gunicorn/Flask handles multiple concurrent requests to the same endpoint with regard to the session (i.e., does the session remain static until all calls are fulfilled?).
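For context, here is a sketch of the mechanics, assuming Flask's default cookie-based SecureCookieSession: the session lives entirely in the client's cookie, each request deserializes it from the cookie it arrived with, and a write only reaches the browser via the Set-Cookie header on that request's response. Requests already in flight therefore never see each other's updates:
from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'dev'  # placeholder secret for the sketch

@app.route('/review')
def review():
    # 'session' was built from THIS request's cookie. The write below is
    # persisted only through Set-Cookie on THIS response, so 5 concurrent
    # requests sent together all deserialize the old value ('New').
    previous = session.get('review_status', 'New')
    session['review_status'] = 'Pending'
    return {'was': previous, 'now': session['review_status']}
Moving to server-side sessions (e.g. the Flask-Session extension backed by Redis) stores the state outside the cookie, but concurrent requests can still race on the read-modify-write, so serializing the calls as you did is a reasonable fix.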

AWS S3: How to set maximum number of retries in C++?

I have a typical example of an S3 upload which works just fine. I decided to set a limit on the number of retries since sometimes, due to network issues, the delay causes problems. I looked at the AWS SDK, and apparently there is a MaxErrorRetry option I can set on the client configuration. However, that doesn't seem to be an option in C++. Instead, I found a RetryStrategy class, but I'm not sure how to use it. All I need to do is set the number of retries instead of falling back to the default. Any advice?
Thanks
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/client/DefaultRetryStrategy.h>
#include <aws/s3/S3Client.h>

long maxRetry = 2;
long scope = 2;
std::shared_ptr<Aws::Client::DefaultRetryStrategy> retryStrategy =
    std::make_shared<Aws::Client::DefaultRetryStrategy>(maxRetry, scope); // strategy with custom max retries
Aws::Client::ClientConfiguration clientConfig;
clientConfig.retryStrategy = retryStrategy; // assign it to the client configuration
Aws::S3::S3Client s3Client(clientConfig); // create the S3 client with this configuration
Found the answer:
std::shared_ptr<Aws::Client::RetryStrategy> retry; // declare the retry strategy pointer
retry.reset(new Aws::Client::DefaultRetryStrategy(num_of_retries, scope)); // override the default with a DefaultRetryStrategy instance
client_config.retryStrategy = retry; // assign it to the client config

Postman - how to loop request until I get a specific response?

I'm testing API with Postman and I have a problem:
My request goes through a sort of middleware, so either I receive a full 1000+ line JSON, or I receive a PENDING status and an empty results array:
{
    "meta": {
        "status": "PENDING",
        "missing_connectors_count": 0,
        "xxx_type": "INTERNATIONAL"
    },
    "results": []
}
The question is: how do I loop this request in Postman until I get a SUCCESS status and a results array with more than 0 items?
When I'm sending those requests manually one-by-one it's ok, but when I'm running them through Collection Runner, "PENDING" messes up everything.
I found an awesome post about retrying a failed request by Christian Baumann, which allowed me to find a suitable approach to the exact same problem of first polling the status of some operation and only running the actual tests once it is complete.
The code I'd end up with, if I were you, is:
const maxNumberOfTries = 3; // your max number of tries
const sleepBetweenTries = 5000; // your interval between attempts

if (!pm.environment.get("tries")) {
    pm.environment.set("tries", 1);
}

const jsonData = pm.response.json();
if ((jsonData.meta.status !== "SUCCESS" && jsonData.results.length === 0) && (pm.environment.get("tries") < maxNumberOfTries)) {
    const tries = parseInt(pm.environment.get("tries"), 10);
    pm.environment.set("tries", tries + 1);
    setTimeout(function() {}, sleepBetweenTries); // an empty setTimeout keeps the sandbox alive, acting as a sleep
    postman.setNextRequest(request.name);
} else {
    pm.environment.unset("tries");
    // your actual tests go here...
}
What I liked about this approach is that the call postman.setNextRequest(request.name) doesn't have any hardcoded request names. The downside I see with this approach is that if you run such request as a part of the collection, it will be repeated a number of times, which might bloat your logs with unnecessary noise.
The alternative I was considering is writing a Pre-request Script which will do the polling (by sending a request) and spin until the status indicates completion. The downside of this approach is the need for much more code for the same logic.
When waiting for services to be ready, or when polling for long-running job results, I see 4 basic options:
1. Use the Postman collection runner or newman and set a per-step delay. This delay is inserted between every step in the collection. Two challenges here: it can be fragile unless you set the delay to a value the request duration will never exceed, and, frequently, only a small number of steps need that delay, so you are increasing total test run time, creating excessive build times on a shared build server and delaying other pending builds.
2. Use https://postman-echo.com/delay/10, where the last URI element is the number of seconds to wait. This is simple and concise and can be inserted as a single step after the long-running request. The challenge is that if the request duration varies widely, you may get false failures because you didn't wait long enough.
3. Retry the same step until success with postman.setNextRequest(request.name);. The challenge here is that Postman will execute the request as fast as it can, which can DDoS your service, get you blacklisted (and cause false failures), and chew up a lot of CPU if run on a shared build server, slowing other builds.
4. Use setTimeout() in a Pre-request Script. The only downside I see in this approach is that if you have several steps needing this logic, you end up with some cut-and-paste code that you need to keep in sync.
Note: there are minor variations on these - like setting them on a collection, a collection folder, a step, etc.
I like option 4 because it provides the right level of granularity for most of my cases. Note that this appears to be the only way to "sleep" in a Postman script: standard JavaScript sleep methods, like a Promise with async and await, are not supported, and the sandbox's lodash _.delay(function() {}, delay, args[...]) does not keep script execution in the Pre-request Script.
In the Postman standalone app v6.0.10, set your step's Pre-request Script to:
console.log('Waiting for job completion in step "' + request.name + '"');

// Construct our request URL from environment variables
var url = request['url'].replace('{{host}}', postman.getEnvironmentVariable('host'));
var retryDelay = 1000;
var retryLimit = 3;

function isProcessingComplete(retryCount) {
    pm.sendRequest(url, function (err, response) {
        if (err) {
            // Hmmm. Should I keep trying or fail this run? Just log it for now.
            console.log(err);
        } else {
            // I could also check for response.json().results.length > 0, but that
            // would omit SUCCESS with empty results, which may be valid.
            if (response.json().meta.status !== 'SUCCESS') {
                if (retryCount < retryLimit) {
                    console.log('Job is still PENDING. Retrying in ' + retryDelay + 'ms');
                    setTimeout(function() {
                        isProcessingComplete(++retryCount);
                    }, retryDelay);
                } else {
                    console.log('Retry limit reached, giving up.');
                    postman.setNextRequest(null);
                }
            }
        }
    });
}

isProcessingComplete(1);
And you can do your standard tests in the same step.
Note: Standard caveats apply to making retryLimit large.
Try this:
var body = JSON.parse(responseBody);
if (body.meta.status !== "SUCCESS" && body.results.length === 0) {
    postman.setNextRequest("This_same_request_title");
} else {
    postman.setNextRequest("Next_request_title");
    /* you can also try postman.setNextRequest(null); */
}
I was searching for an answer to the same question and thought of a possible solution as I was reading your question.
Use a Postman workflow to rerun your request every time you don't get the response you're looking for. Anyway, that's what I'm going to try.
postman.setNextRequest("request_name");
https://www.getpostman.com/docs/workflows
I didn't manage to find complete guidelines for this issue, which is why I decided to invest some time and describe all the steps of the process from A to Z.
I will walk through an example where we need to iterate over a list of transaction ids, changing a query param to the next transaction id from the list on each iteration.
Step 1. Prepare your request
https://some url/{{queryParam}}
Add a {{queryParam}} variable so it can be changed from the pre-request script.
If you need a token for the request, add it here, in the Authorization tab.
Save the request to a collection (the Save button in the right corner). For demonstration purposes I will use the name "Transactions Request". We will need this name later on.
Step 2. Prepare pre-request script
In Postman, use the Pre-request Script tab to change the transactionId variable from a query param placeholder to an actual transaction id.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
const id = ids.shift();
console.log('id', id);
postman.setEnvironmentVariable("transactionId", id);
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids));
pm.collectionVariables.get - gets the array of transaction ids from the collection variables. We will set it up in Step 4.
ids.shift() - removes the id we are about to use from the list (to prevent running twice on the same id).
postman.setEnvironmentVariable("transactionId", id) - changes the transactionId query param to the actual transaction id.
pm.collectionVariables.set("TransactionIds", JSON.stringify(ids)) - writes the list back to the collection variable, now without the id that was just handled.
Step 3. Prepare Tests
In Postman, use the Tests tab to create the loop logic. Tests are executed after the request, so we can use them to trigger the next request.
let ids = pm.collectionVariables.get("TransactionIds");
ids = JSON.parse(ids);
if (ids && ids.length > 0) {
    console.log('length', ids.length);
    postman.setNextRequest("Transactions Request");
} else {
    postman.setNextRequest(null);
}
postman.setNextRequest("Transactions Request") - calls a new request, in this case it will call the "Transactions Request" request
Step 4. Run Collections
In Postman, choose Collections from the left sidebar, click your collection, and then choose the Variables tab.
These are the collection variables. In our example we used TransactionIds as the variable, so put the array of transaction ids you want to loop over into Current Value.
Now you can click Run (the button in the right corner, near the Save button) to run our looped requests.
You will be prompted to choose which request to run. Choose the request we created, "Transactions Request".
It will run our request with the pre-request script and the logic we set in Tests. At the end, Postman will open a new window with a summary of the run.

Amazon DynamoDB Atomic Writes

I have a list of Lambda worker functions (say 1000), each running simultaneously and doing its job. To be able to figure out the end result of all workers I have come up with this idea.
Before starting the job and spawning the Lambda worker functions, I save a record in DynamoDB, for example two attributes:
total_number_of_jobs
jobs_completed (set initially to 0)
When each Lambda worker function finishes, it increments the jobs_completed attribute by one, then reads the record and checks whether total_number_of_jobs equals jobs_completed; if it does, it puts a record in SQS.
My questions are:
Is this a good idea?
Would the updates be consistent and atomic? Could there be any race conditions?
Any better solution than this?
I would update the counter, jobs_completed, in an UpdateItem API call with an update expression like this:
SET jobs_completed = jobs_completed + :incr_by
where :incr_by would be equal to 1.
As long as you use DynamoDB atomic counters, like your example shows, and you check the return value of the UpdateItem call instead of running a separate query, then your proposed solution should work fine.
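For illustration, a hedged boto3 sketch of that pattern; the 'jobs' table name, 'job_id' key, and queue URL are placeholders, not anything from the question:
import boto3

dynamodb = boto3.client('dynamodb')
sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs-done'  # placeholder

def on_worker_finished(job_id):
    # Atomically increment jobs_completed and read the updated item back in
    # the same call; no separate read, so no race between update and check.
    resp = dynamodb.update_item(
        TableName='jobs',                # placeholder table name
        Key={'job_id': {'S': job_id}},   # placeholder key attribute
        UpdateExpression='SET jobs_completed = jobs_completed + :incr_by',
        ExpressionAttributeValues={':incr_by': {'N': '1'}},
        ReturnValues='ALL_NEW',
    )
    item = resp['Attributes']
    if int(item['jobs_completed']['N']) == int(item['total_number_of_jobs']['N']):
        # Exactly one worker observes the counter reaching the total,
        # so exactly one completion message is sent.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=job_id)
Because the increment and the returned attributes come from the same UpdateItem call, every worker sees a distinct counter value, which is what makes the "last one sends to SQS" check race-free.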