I need to test a service that uses Apollo Client to fetch some data. It uses watchQuery to keep an open stream of values.
In the test I use ApolloTestingModule to flush the GraphQL operation with test data. This works once, but I cannot test what happens when a new value is emitted; in other words, I cannot flush more than once.
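Roughly, the test looks like this (a simplified sketch; the service method, the query, and the result shape are placeholders for my real code):

import { TestBed } from '@angular/core/testing';
import { gql } from 'apollo-angular';
import { ApolloTestingController } from 'apollo-angular/testing';

const ITEMS_QUERY = gql`
  query items {
    items
  }
`;

const controller = TestBed.inject(ApolloTestingController);
const received: string[][] = [];

service.watchItems().subscribe(items => received.push(items));

const op = controller.expectOne(ITEMS_QUERY);
op.flush({ data: { items: ['first'] } }); // works: the subscriber gets the data

// Simulating a later emission on the same watched query is what I can't do:
op.flush({ data: { items: ['second'] } });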
Is it actually possible to do it with ApolloTestingModule or would this fall into a feature request?
I'm creating an async gRPC server in C++. One of the methods streams data from the server to clients - it's used to send data updates to clients. The frequency of the data updates isn't predictable. They could be nearly continuous or as infrequent as once per hour. The model used in the gRPC example with the "CallData" class and the CREATE/PROCESS/FINISH states doesn't seem like it would work very well for that. I've seen an example that shows how to create a 'polling' loop that sleeps for some time and then wakes up to check for new data, but that doesn't seem very efficient.
Is there another way to do this? If I use the "CallData" approach, can it block in the 'PROCESS' state until there's data (which probably wouldn't be my first choice)? Or, better, can I structure my code so I can notify a gRPC handler when data is available?
Any ideas or examples would be appreciated.
In a server-side streaming example, you probably need more states, because you need to track whether there is currently a write already in progress. I would add two states: WRITE_PENDING, used while a write is in progress, and WRITABLE, used when a new message can be sent immediately.

When a new message is produced: if you are in state WRITABLE, you can send it immediately and go into state WRITE_PENDING; if you are in state WRITE_PENDING, the newly produced message goes into a queue to be sent after the current write finishes.

When a write finishes: if the queue is non-empty, grab the next message from the queue and immediately start a write for it; otherwise, go into state WRITABLE and wait for another message to be produced.
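A minimal sketch of that state machine, assuming the async API with a grpc::ServerAsyncWriter (the Update message type, the locking scheme, and the method names here are illustrative, not from the official example):

#include <deque>
#include <mutex>
#include <grpcpp/grpcpp.h>
#include "updates.grpc.pb.h"  // assumed proto defining the Update message

enum class StreamState { WRITABLE, WRITE_PENDING };

class CallData {
 public:
  // Called by the application whenever new data is produced.
  void OnNewMessage(const Update& msg) {
    std::lock_guard<std::mutex> lock(mu_);
    if (state_ == StreamState::WRITABLE) {
      state_ = StreamState::WRITE_PENDING;
      writer_.Write(msg, this);    // completion will show up on the CQ
    } else {
      pending_.push_back(msg);     // a write is in flight; queue it
    }
  }

  // Called when the completion queue reports the previous Write finished.
  void OnWriteDone() {
    std::lock_guard<std::mutex> lock(mu_);
    if (!pending_.empty()) {
      Update next = std::move(pending_.front());
      pending_.pop_front();
      writer_.Write(next, this);   // stay in WRITE_PENDING
    } else {
      state_ = StreamState::WRITABLE;  // idle until the next message
    }
  }

 private:
  std::mutex mu_;
  StreamState state_ = StreamState::WRITABLE;
  std::deque<Update> pending_;
  grpc::ServerContext ctx_;
  grpc::ServerAsyncWriter<Update> writer_{&ctx_};
};

OnNewMessage is the notification hook you asked about: whatever produces the data calls it directly, and no thread ever blocks waiting for data.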
There should be no need to block here, and you probably don't want to do that anyway, because it would tie up a thread that should otherwise be polling the completion queue. If all of your threads wind up blocked that way, you will be blind to new events (such as new calls coming in).
An alternative here would be to use the C++ sync API, which is much easier to use. In that case, you can simply write straight-line blocking code. But the cost is that it creates one thread on the server for each in-progress call, so it may not be feasible, depending on the amount of traffic you're handling.
I hope this information is helpful!
Is it possible to create a Source that I can push data to "manually" (or can I somehow do this with a "regular" Source)?
Something like:
var source = Source.Empty<int>();
source.Push(10); //is something like this possible?
My use case would be creating a source to which I'm able to push data whenever my API endpoint is called.
Yes, it's possible. Check out Source.Queue:
Source.Queue can be used for emitting elements to a stream from an actor (or from anything running outside the stream). The elements will be buffered until the stream can process them. You can Offer elements to the queue, and they will be emitted to the stream if there is demand from downstream; otherwise they will be buffered until demand arrives.
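A minimal sketch (Akka.NET Streams; the buffer size, overflow strategy, and sink are arbitrary choices for illustration):

using System;
using Akka.Actor;
using Akka.Streams;
using Akka.Streams.Dsl;

var system = ActorSystem.Create("demo");
var materializer = system.Materializer();

// Materializing the graph yields the queue handle you push to.
var queue = Source.Queue<int>(100, OverflowStrategy.Backpressure)
    .To(Sink.ForEach<int>(Console.WriteLine))
    .Run(materializer);

// From your API endpoint handler:
await queue.OfferAsync(10);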
Another option is Source.ActorRef:
Messages sent to the actor that is materialized by Source.ActorRef will be emitted to the stream if there is demand from downstream; otherwise they will be buffered until demand arrives.
Unlike Source.Queue, Source.ActorRef does not support backpressure.
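Sketched the same way (reusing the materializer from above; note Source.ActorRef does not accept the Backpressure strategy, precisely because an actor cannot backpressure its senders):

var actorRef = Source.ActorRef<int>(100, OverflowStrategy.DropHead)
    .To(Sink.ForEach<int>(Console.WriteLine))
    .Run(materializer);

actorRef.Tell(10);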
I have a large file which I want to process using Lambda functions in AWS. Since I cannot control the size of the file, I came up with the solution of distributing the processing of the file across multiple Lambda invocations to avoid timeouts. Here's how it works:
I dedicated a bucket to accept the new input files to be processed.
I set a trigger on the bucket that fires each time a new file is uploaded (let's call the handler uploadHandler).
The uploadHandler measures the size of the file and splits it into equal chunks.
Each chunk is sent to a processor Lambda function to be processed.
Notes:
The uploadHandler does not read the file content.
The data sent to the processor is just a range: { start: #, end: # }.
Multiple instances of the processor are called in parallel.
Each processor call reads its own chunk of the file individually and generates the output for it.
So far so good. The problem is how to consolidate the output of all the processor calls into one output. Does anyone have a suggestion? Also, how can I know when all the processors have finished executing?
I recently had a similar problem. I solved it using AWS Lambda and Step Functions, following this tutorial: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-create-iterate-pattern-section.html
In that specific example the execution is sequential rather than parallel, but when the state machine finishes executing you have a guarantee that the file was processed completely and correctly. I don't know if that is exactly what you are looking for.
Option 1:
After breaking the file, make the uploadHandler function call the processor functions synchronously.
Make the calls concurrent, so that you can trigger all processors at once. Lambda functions get only one vCPU (or two vCPUs if RAM > 1,800 MB), but the requests are IO-bound, so one vCPU is enough.
The uploadHandler will wait for all processors to respond; then you can assemble all the responses (see the sketch after the pros/cons below).
Pros: simpler to implement, no storage;
Cons: no visibility on what's going on until everything is finished;
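A minimal sketch of Option 1, assuming Python with boto3; the function name "processor", the chunk payload, and the assemble step are placeholders from the question, not working code:

import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def invoke_chunk(chunk):
    # Synchronous (RequestResponse) invocation of one processor.
    response = lambda_client.invoke(
        FunctionName="processor",
        InvocationType="RequestResponse",
        Payload=json.dumps(chunk),
    )
    return json.loads(response["Payload"].read())

def upload_handler(chunks):
    # The calls are IO-bound, so threads on a single vCPU are enough.
    with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
        results = list(pool.map(invoke_chunk, chunks))
    return assemble(results)  # placeholder for your consolidation step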
Option 2:
Persist a processingJob in a DB (RDS, DynamoDB, whatever). The uploadHandler would create the job and save the number of parts into which the file was broken. Save the job ID with each file part.
Each processor gets one part (with the job ID), processes it, and then stores its results in the DB.
Make each processor check whether it is the last one to deliver its results; if so, have it trigger an assembler function to collect all the results and do whatever you need (see the sketch after the pros/cons below).
Pros: more visibility, as you can query your storage DB at any time to check which parts were processed and which are pending; you could store all sorts of metadata from the processor for detailed analysis, if needed;
Cons: requires a storage service and a slightly more complex handling of your Lambdas;
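A minimal sketch of Option 2's "am I the last one?" check, assuming DynamoDB; the jobs table, its key, and the attribute names are illustrative. An atomic counter guarantees exactly one processor sees the final count:

import boto3

dynamodb = boto3.client("dynamodb")

def mark_part_done(job_id):
    # Atomically increment the finished-parts counter and read it back.
    response = dynamodb.update_item(
        TableName="jobs",
        Key={"job_id": {"S": job_id}},
        UpdateExpression="ADD done_parts :one",
        ExpressionAttributeValues={":one": {"N": "1"}},
        ReturnValues="ALL_NEW",
    )
    attrs = response["Attributes"]
    # total_parts was written by the uploadHandler when it created the job.
    return int(attrs["done_parts"]["N"]) == int(attrs["total_parts"]["N"])

The processor that gets True back is the last one and can invoke the assembler.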
I am trying to write connection pool using hiredis.
The problem I am facing is this: if a user fires a command and doesn't read the response from the connection, I should clear the response from that connection before putting it back into the pool.
Is there any way to check:
Whether there is more data to read, so I can call redisGetReply until all data is cleared?
Or is there a way to clear all pending reads on the connection object?
The question is not clear, as it doesn't state whether you are using sync or async operations.
Since you mention redisGetReply, I assume the sync API. Sync calls are blocking: the response to a command is available in the same call. A scenario where you might want to check for unread data is when the context is shared between threads and you check before returning the connection to the pool.
Yes redisGetReply can be used to check if there is more data to read.
For async calls use redisAsyncHandleRead to check if there is data to be read.
Internally, both redisGetReply and redisAsyncHandleRead call redisBufferRead.
For sync calls, use redisFree to free the context.
For async calls, use redisAsyncFree to free the context.
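If your pool tracks how many replies were never consumed, draining them before reuse could look roughly like this (a sketch for the sync API; the pending count is something your pool would have to maintain):

#include <hiredis/hiredis.h>

/* Discard replies the user never read before reusing the connection. */
int drain_connection(redisContext *c, int pending) {
    void *reply;
    while (pending-- > 0) {
        if (redisGetReply(c, &reply) != REDIS_OK)
            return -1;           /* IO/protocol error: drop the connection */
        freeReplyObject(reply);  /* throw away the stale reply */
    }
    return 0;
}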
I am developing a WebService in Java upon the jax-ws stack and glassfish.
Now I am a bit concerned about a couple of things.
I need to pass in an unknown amount of binary data that will be processed by an MDB. It is written this way to be asynchronous (so the user does not have to wait for the calculation to take place), somewhat fault tolerant, and very scalable.
The input message can, however, be split into chunks and sent to the MDB, or split on the client and sent to the WS itself in chunks.
What I am looking for is a way to specify the maximum size of the input, so I won't blow the heap even if someone deliberately tries to send a message that is too big. I have noticed that things tend to get a bit unstable once you hit the ceiling, and I must be able to keep running.
Is it possible to be safe against big messages, or should I use another method instead of WS? Which options do I have?
Well, I am rather new to Java EE.
If you're passing binary data, take a look at enabling MTOM for the endpoint. It utilizes streaming and has a 'threshold' parameter.
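A minimal sketch, assuming plain JAX-WS annotations (the service class, method, and threshold value are illustrative):

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

// Binary data larger than the threshold (in bytes) is sent as an MTOM
// attachment instead of being inlined as base64 text in the SOAP body.
@MTOM(enabled = true, threshold = 1024)
@WebService
public class UploadService {

    @WebMethod
    public void upload(byte[] data) {
        // hand the data off to the MDB for asynchronous processing ...
    }
}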