Call httpBackend with same arguments but return different values - unit-testing

I have a unit test where I am checking that a service polls a URL using $http. The first time round I want to return a value that will cause the service to wait 10 seconds (using $timeout) and then poll again. The second time round I want to return a value that will stop the service from polling.
When I do this
httpBackend.expectGET(url).respond(200, { status: 'busy'});
httpBackend.expectGET(url).respond(200, { status: 'complete' });
service.poll();
httpBackend.flush();
timeout.flush();
httpBackend.verifyNoOutstandingRequest();
However, it never gets to the verification part. As soon as I call timeout.flush() I get:
Unsatisfied requests: GET /url/status/check

Have you tried:
httpBackend.expectGET(url).respond(200, { status: 'busy'});
service.poll();
httpBackend.flush();
timeout.flush();
httpBackend.expectGET(url).respond(200, { status: 'complete' });
httpBackend.flush();
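Alternatively, a single backend definition can return different payloads on successive calls by passing a function to respond(). A minimal sketch, assuming the service polls the same url each time (callCount is just a local test variable):

var callCount = 0;
httpBackend.whenGET(url).respond(function () {
  callCount++;
  // first poll reports 'busy', every later poll reports 'complete'
  return [200, { status: callCount === 1 ? 'busy' : 'complete' }];
});

service.poll();
httpBackend.flush();   // serves the 'busy' response
timeout.flush();       // fires the scheduled re-poll
httpBackend.flush();   // serves the 'complete' response
httpBackend.verifyNoOutstandingRequest();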


extend apollo mutation timeout

I'm running down a rabbit hole so I'm asking my question here.
I have a useMutation mutation that uploads up to 10MB of JSON data to my DB. This obviously takes a long time, and I received a timeout error which I believe caused the mutation to run twice, which then uploaded some of the data twice.
How can I state for one mutation to have a longer timeout? Where do I specify the timeout? Can I specify it not to be retried?
I am currently using a couple of links:
import { onError } from '@apollo/client/link/error'
import { RetryLink } from '@apollo/client/link/retry'
import { setContext } from '@apollo/client/link/context'
I believe the RetryLink (https://www.apollographql.com/docs/link/links/retry/) may help. Currently I have default settings:
// default settings
const retryLink = new RetryLink({
  delay: {
    initial: 300,
    max: Infinity,
    jitter: true,
  },
  attempts: {
    max: 5,
    retryIf: (error, _operation) => !!error,
  },
})
500-level errors are server-side issues. In this case, your server is the one indicating that there is a timeout, so there is nothing to do on the client side. You'll have to check your server configuration and figure it out on that end.
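That said, if the client-side retry is what duplicated the upload, RetryLink's retryIf also receives the operation, so you can opt the heavy mutation out of retries by name. A minimal sketch, assuming the mutation is named UploadLargeJson (a hypothetical name):

// skip retries for the heavy upload mutation; retry everything else on error
const retryLink = new RetryLink({
  delay: {
    initial: 300,
    max: Infinity,
    jitter: true,
  },
  attempts: {
    max: 5,
    retryIf: (error, operation) =>
      !!error && operation.operationName !== 'UploadLargeJson',
  },
})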

Flask can handle several background tasks without using Celery

As I understand it, using Flask without Celery will block the server's availability when a user starts a long operation.
My web server is not actually exposed to the internet, and the maximum number of users connected at a time will be 3 (it's for internal use, for invoking automation scripts).
I have created a test environment in order to check how Flask is handling several calls at a time.
I have created 'task1' and 'task2', each running a loop with a print statement + sleep in order to block the main thread for several seconds.
It seems like it's not really blocking the main thread!!!
I can run 'task1', start to see the output for every loop, and then run 'task2' and see the output of 'task2' interleaved with that of 'task1'.
I checked the limit and it seems like I can run 7 tasks without blocking.
How is that possible? According to the above, I don't need to use Celery in my organization, since there will be no scenario where 2 users run more than 2 tasks at a time.
Can someone explain why 'task1' is not blocking the start of 'task2'?
# assumes the usual Flask boilerplate
from flask import Flask
import time

app = Flask(__name__)

@app.route('/task1', methods=['POST'])
def task1():
    for i in range(4):
        print('task 1 - ' + str(i))
        time.sleep(1)
    return 'message'

@app.route('/task2', methods=['POST'])
def task2():
    for i in range(5):
        print('task 2 - ' + str(i))
        time.sleep(1)
    return 'message'
<script>
  function runTask() {
    document.getElementById('task').value = "this is a value"
    let req = $.ajax({
      url: '/task1',
      type: 'POST', // post request
      data: {}
    });
    req.done(function (data) {
    });
  }

  function runLongerTask() {
    document.getElementById('longer_task').value = "this is longer value"
    let req = $.ajax({
      url: '/task2',
      type: 'POST', // post request
      data: {}
    });
    req.done(function (data) {
    });
  }
</script>
I expected 'task1' to start only after 'task2' had finished, but it seems like the two tasks are running in threads (without my actually configuring any threads).
Here are the results that I got:
task 2 - 0
task 1 - 0
task 2 - 1
task 1 - 1
task 2 - 2
task 1 - 2
task 2 - 3
task 1 - 3
task 2 - 4
As I understand it, using Flask without Celery will block the server's availability when a user starts a long operation.
This is not precisely correct, although it's a good rule of thumb to keep heavy workloads out of your web server for lots of reasons.
You haven't described how you are running Flask - with a WSGI container, or which options you pass to app.run(). I'd look there to understand how concurrency is configured; note that recent versions of Flask's built-in development server handle each request in a separate thread by default, which would explain the interleaved output you are seeing.

Starting a StepFunction and exiting doesn't trigger execution

I have a Lambda function transportKickoff which receives an input and then sends/proxies that input forward into a Step Function. The code below does run and I am getting no errors, but at the same time the Step Function is NOT executing.
Also critical to the design, I do not want the transportKickoff function to wait around for the Step Function to complete, as it can be quite long running. I was, however, expecting that any errors in starting the Step Function would be reported back synchronously. Maybe that expectation is at fault and I'm somehow missing an error that is thrown somewhere. If that's the case, however, I'd still like to find a way to have the kickoff Lambda function exit as soon as the Step Function has started executing.
Note: I can execute the Step Function independently and I know that it works correctly.
const stepFn = new StepFunctions({ apiVersion: "2016-11-23" });
const stage = process.env.AWS_STAGE;
const name = `transport-steps ${message.command} for "${stage}" environment at ${Date.now()}`;
const params: StepFunctions.StartExecutionInput = {
  stateMachineArn: `arn:aws:states:us-east-1:999999999:stateMachine:transportion-${stage}-steps`,
  input: JSON.stringify(message),
  name
};

const request = stepFn.startExecution(params);
request.send();

console.info(
  `startExecution request for step function was sent, context sent was:\n`,
  JSON.stringify(params, null, 2)
);
callback(null, {
  statusCode: 200
});
I have also checked from the console that I have what I believe to be the right permissions to start the execution of a Step Function.
I've now added more permissions (see below) but am still experiencing the same problem:
'states:ListStateMachines'
'states:CreateActivity'
'states:StartExecution'
'states:ListExecutions'
'states:DescribeExecution'
'states:DescribeStateMachineForExecution'
'states:GetExecutionHistory'
OK, I have figured this one out myself; hopefully this answer will be helpful for others.
First of all, the send() method is not a synchronous call, but it does not return a promise either. Instead you must set up listeners on the Request object before sending, so that you can respond appropriately to success/failure states.
I've done this with the following code:
const stepFn = new StepFunctions({ apiVersion: "2016-11-23" });
const stage = process.env.AWS_STAGE;
const name = `${message.command}-${message.upc}-${message.accountName}-${stage}-${Date.now()}`;
const params: StepFunctions.StartExecutionInput = {
  stateMachineArn: `arn:aws:states:us-east-1:837955377040:stateMachine:transportation-${stage}-steps`,
  input: JSON.stringify(message),
  name
};
const request = stepFn.startExecution(params);

// listen for success
request.on("extractData", req => {
  console.info(
    `startExecution request for step function was sent and validated, context sent was:\n`,
    JSON.stringify(params, null, 2)
  );
  callback(null, {
    statusCode: 200
  });
});

// listen for error
request.on("error", (err, response) => {
  console.warn(
    `There was an error -- ${err.message} [${err.code}, ${err.statusCode}] -- that blocked the kickoff of the ${message.command} ITMS command for ${message.upc} UPC, ${message.accountName} account.`
  );
  callback(err.statusCode, {
    message: err.message,
    errors: [err]
  });
});

// send request
request.send();
Now please bear in mind there is a "success" event, but I used "extractData" to capture success because I wanted to get a response as quickly as possible. It's possible that "success" would have worked equally well, but looking at the language in the TypeScript typings it wasn't entirely clear, and in my testing I'm certain that the "extractData" event does work as expected.
As for why I was not getting any execution of my step functions ... it had to do with the way I was naming the execution ... you're limited to a subset of characters in the name, and I'd stepped over that restriction but didn't realize it until I was able to capture the error with the code above.
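For reference, execution names are limited to 80 characters and cannot contain whitespace, quotes, or most other special characters - exactly what the original name above included. A minimal sketch of the kind of sanitization that avoids the problem (sanitizeExecutionName is a hypothetical helper, not part of the SDK):

// hypothetical helper: keep only characters Step Functions accepts in execution names
const sanitizeExecutionName = (raw: string) =>
  raw.replace(/[^A-Za-z0-9_-]/g, "-").slice(0, 80);

const name = sanitizeExecutionName(
  `transport-steps ${message.command} for ${stage} at ${Date.now()}`
);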
For anyone encountering issues executing state machines from Lambdas: make sure the permission 'states:StartExecution' is added to the Lambda's permissions and that the regions match up.
Promise-based version:
import { StepFunctions } from 'aws-sdk';

const clients = {
  stepFunctions: new StepFunctions(),
};

const createExecutor = ({ clients }) => async (event) => {
  console.log('Executing media pipeline job');

  const params = {
    stateMachineArn: '<state-machine-arn>',
    input: JSON.stringify({}),
    name: 'new-job',
  };

  const result = await clients.stepFunctions.startExecution(params).promise();

  // { executionArn: "string", startDate: number }
  return result;
};

const startExecution = createExecutor({ clients });

// Pass in the event from the Lambda, e.g. S3 Put, SQS Message
await startExecution(event);
The result should contain the execution ARN and start date.
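Since startExecution(params).promise() rejects on failure, a try/catch around the call lets the Lambda surface errors (such as an invalid execution name or missing permission) immediately, while still returning as soon as the execution has started rather than waiting for the state machine to finish. A minimal sketch of a handler built on the createExecutor above:

export const handler = async (event) => {
  try {
    const { executionArn, startDate } = await startExecution(event);
    console.info(`Started execution ${executionArn} at ${startDate}`);
    return { statusCode: 200, executionArn };
  } catch (err) {
    // surfaces naming or permission problems instead of failing silently
    console.error('startExecution failed', err);
    throw err;
  }
};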

Akka actor response caching

I'm using Akka on one of my projects and I need to get the state of an actor. The way I'm doing it is as follows.
A REST request comes in:
@GET
@Produces(Array(MediaType.APPLICATION_JSON))
def get() = {
  try {
    Await.result((getScanningActor ? WorkInfo), 5.second).asInstanceOf[ScanRequest]
  }
  catch {
    case ex: TimeoutException => {
      RequestTimedOut()
    }
  }
}
On the actor I respond with the current work state:
case WorkInfo => sender ! currentWork
For some reason, the first time I call this function I get the correct value; on the following requests I get the same value I received on the first call.
I'm also using DCEVM if that makes any difference.

Success callback never triggered with Ember-Data save()

I am trying to use ember-data to get a simple registration form to save on my server. The call technically works, but the success callback is never triggered on the promise, and I have no idea why.
The server receives the data from the front end and successfully saves it to the database. It then returns status code 201 for CREATED. I can see the successful response happening in the Chrome debugger. But even when the server responds with a successful status, the error callback is triggered on the save's promise. I've confirmed this happens every time by putting a debugger; statement in the error callback.
My router's model is hooked up like this:
model: function() {
  return this.store.createRecord('registerUser');
}
And I have a simple register function in my controller:
register: function() {
  var self = this;
  this.get('model').save().then(function() {
    self.transitionToRoute('index');
  }, function(resp) {
    if (resp.responseJSON) {
      self.get('model').set('errors', resp.responseJSON.errors);
    }
  });
}
Every time my server comes back with a response, success or failure, the failure callback is hit. If I have errors in the response (for invalid data or something), the errors are successfully displayed in the form. I can see the request coming in properly, and the data is stored in the database. So the save is technically successful, but Ember doesn't seem to know that, even though a successful 201 status is returned from the server (which can be verified in the Chrome debugger).
The only thing I can think of is that ember-data's adapter is doing something that I'm not aware of, but I am just using the default RESTAdapter and haven't touched it. Is there anything else I could be missing?
If it makes a difference, the server is running Play 1.2.5. I don't know if that affects the response's headers or something like that.
Any help would be greatly appreciated. Thank you for your time!
Mike
SOLUTION
So, the issue had to do with the JSON response. The two problems:
I did not include an ID in the response
I did not "wrap" the response in a "registerUser" object. This is necessary to match the model name.
Below is a valid response:
{
  "registerUser": {
    "id": 11,
    "email": "mike999@test.com",
    "password": "12345",
    "password2": "12345",
    "name": "Mike"
  }
}
Ember Data is expecting the model in the response, so sending back a success HTTP status doesn't mean it will hit the success handler. When it tries to deserialize your response (or lack of response), it's probably failing, which is why it hits the failure function. A big reason the response matters is the id of the record.
The model returned should be in the following format
{
  registerUser: {
    id: "123",
    attr: "asdf"
  }
}
https://github.com/emberjs/data/blob/master/TRANSITION.md