Using async bidirectional streaming, can I create a client stream (ClientAsyncReaderWriter) and modify a metadata value in the ClientContext for each request sent on that stream? Also, on the server side, can I use the ServerContext of the incoming client stream to send a modified metadata value with each response sent back to the client on the bidirectional stream? Please let me know if there is a way to do this, since I don't want to maintain multiple streams, one per metadata value. I would like to use the same stream and send a different metadata value for a key with each request/response exchanged on that bidirectional stream.
This is intentionally not supported.
Metadata is intended to apply at the RPC layer, i.e., once per bidi stream, not per request/response; it is tied to the stream and covers every request/response on it (see the documentation: https://github.com/grpc/grpc/blob/master/CONCEPTS.md#abstract-grpc-protocol). If you wish to communicate information per request/response, you should put that inside the request/response itself.
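For example, here is a minimal sketch of that approach on the client side, reusing a single stream. It assumes rw is an already-established ClientAsyncReaderWriter and write_tag is your completion-queue tag; set_tag and set_payload are hypothetical fields you would add to your request proto:

// Sketch: carry the per-message value inside each request instead of in
// metadata, so one stream suffices. 'set_tag' is a hypothetical field
// you would add to your request message for this purpose.
Request req;
req.set_tag("value-for-this-request");  // the per-request "metadata"
req.set_payload("data");
rw->Write(req, write_tag);              // same ClientAsyncReaderWriter stream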
There is also the client interceptor API, which may be able to help:
https://github.com/grpc/grpc/blob/master/include/grpcpp/impl/codegen/client_interceptor.h
These tests show examples of how the interceptors can be used:
https://github.com/grpc/grpc/blob/master/test/cpp/end2end/client_interceptors_end2end_test.cc
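For instance, here is a minimal sketch of a client interceptor built on the experimental API from the header above, injecting a metadata entry before initial metadata is sent. Note that interceptors run once per RPC, not per message on a stream, and the key/value names here are placeholders:

#include <grpcpp/grpcpp.h>
#include <grpcpp/support/client_interceptor.h>

#include <memory>
#include <string>
#include <utility>
#include <vector>

// Adds a metadata entry to each outgoing RPC on the channel.
class MetadataInjector : public grpc::experimental::Interceptor {
 public:
  void Intercept(grpc::experimental::InterceptorBatchMethods* methods) override {
    if (methods->QueryInterceptionHookPoint(
            grpc::experimental::InterceptionHookPoints::PRE_SEND_INITIAL_METADATA)) {
      // "my-key"/"my-value" are placeholders; metadata keys must be lowercase.
      methods->GetSendInitialMetadata()->insert({"my-key", "my-value"});
    }
    methods->Proceed();  // always continue the interception chain
  }
};

class MetadataInjectorFactory
    : public grpc::experimental::ClientInterceptorFactoryInterface {
 public:
  grpc::experimental::Interceptor* CreateClientInterceptor(
      grpc::experimental::ClientRpcInfo* /*info*/) override {
    return new MetadataInjector();  // gRPC takes ownership
  }
};

std::shared_ptr<grpc::Channel> MakeChannel(const std::string& target) {
  std::vector<std::unique_ptr<grpc::experimental::ClientInterceptorFactoryInterface>>
      creators;
  creators.push_back(std::unique_ptr<grpc::experimental::ClientInterceptorFactoryInterface>(
      new MetadataInjectorFactory()));
  return grpc::experimental::CreateCustomChannelWithInterceptors(
      target, grpc::InsecureChannelCredentials(), grpc::ChannelArguments(),
      std::move(creators));
}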
Related
Let's assume we develop a custom low-level transport for gRPC. How can we "plug it" into the gRPC C++ API so that we can use it for a Channel?
I'm working on a doc that will soon appear at https://github.com/grpc/grpc/ but here's a preview:
gRPC transports plug in below the core API (one level below the C++ API). You can write your transport in C or C++; currently all the transports are nominally written in C++, though they are idiomatically C. The existing transports are:
HTTP/2
Cronet
In-process
Among these, the in-process is likely the easiest to understand, though arguably also the least similar to a "real" sockets-based transport.
In the gRPC core implementation, a fundamental struct is grpc_transport_stream_op_batch, which represents a collection of stream operations sent to a transport. The ops in a batch can include:
send_initial_metadata
Client: initiate an RPC
Server: supply response headers
recv_initial_metadata
Client: get response headers
Server: accept an RPC
send_message (zero or more): send a data buffer
recv_message (zero or more): receive a data buffer
send_trailing_metadata
Client: half-close indicating that no more messages will be coming
Server: full-close providing final status for the RPC
recv_trailing_metadata: get final status for the RPC
Server extra: This op shouldn't actually be considered complete until the server has also sent trailing metadata to provide the other side with final status
cancel_stream: Attempt to cancel an RPC
collect_stats: Get stats
One or more of these ops are grouped into a batch. Applications can start all of a call's ops in a single batch, or they can split them up into multiple batches. Results of each batch are returned asynchronously via a completion queue.
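To make that concrete from the surface side, here is a rough sketch of how these ops map onto the C++ async API for a hypothetical bidi-streaming method Chat (the stub, method, and message types are placeholders; each call below starts a batch whose completion comes back as a tagged event on the completion queue):

// Sketch only: 'Chat', 'Request', 'Response', and 'stub' stand in for your
// generated types. In real code you would wait for each tag on the
// completion queue before issuing the next dependent op.
grpc::ClientContext ctx;
grpc::CompletionQueue cq;
auto tag = [](intptr_t i) { return reinterpret_cast<void*>(i); };

std::unique_ptr<grpc::ClientAsyncReaderWriter<Request, Response>> rw =
    stub->PrepareAsyncChat(&ctx, &cq);
rw->StartCall(tag(1));            // send_initial_metadata: initiate the RPC
rw->ReadInitialMetadata(tag(2));  // recv_initial_metadata: response headers
Request req;
rw->Write(req, tag(3));           // send_message
Response res;
rw->Read(&res, tag(4));           // recv_message
rw->WritesDone(tag(5));           // send_trailing_metadata: client half-close
grpc::Status status;
rw->Finish(&status, tag(6));      // recv_trailing_metadata: final status

void* got_tag; bool ok;
while (cq.Next(&got_tag, &ok)) {
  // Each completed batch surfaces here as a (got_tag, ok) event.
}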
Internally, we use callbacks to indicate completion. The surface layer creates a callback when starting a new batch and sends it down the filter stack along with the batch. The transport must invoke this callback when the batch is complete, and then the surface layer returns an event to the application via the completion queue. Each batch can have up to 3 callbacks:
recv_initial_metadata_ready (called by the transport when the recv_initial_metadata op is complete)
recv_message_ready (called by the transport when the recv_message op is complete)
on_complete (called by the transport when the entire batch is complete)
The transport's job is to sequence and interpret various possible interleavings of the basic stream ops. For example, a sample timeline of batches would be:
Client send_initial_metadata: Initiate an RPC with a path (method) and authority
Server recv_initial_metadata: accept an RPC
Client send_message: Supply the input proto for the RPC
Server recv_message: Get the input proto from the RPC
Client send_trailing_metadata: This is a half-close indicating that the client will not be sending any more messages
Server recv_trailing_metadata: The server sees this from the client and knows that it will not get any more messages. This won't complete yet though, as described above.
Server send_initial_metadata, send_message, send_trailing_metadata: A batch can contain multiple ops, and this batch provides the RPC response headers, response content, and status. Note that sending the trailing metadata will also complete the server's receive of trailing metadata.
Client recv_initial_metadata: The number of ops in a batch on one side has no relation to the number of ops in a batch on the other side. In this case, the client is just collecting the response headers.
Client recv_message, recv_trailing_metadata: Get the data response and status
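The server's half of this timeline looks like the following sketch in the C++ async API (SayHello, HelloRequest, and HelloReply are placeholder generated types; cq is a ServerCompletionQueue* obtained from ServerBuilder::AddCompletionQueue, and tag is as in the earlier sketch):

// Sketch: accepting and answering one unary RPC with the async server API.
// RequestSayHello completes once the RPC has been accepted and the request
// message received (recv_initial_metadata + recv_message); Finish starts a
// single batch carrying send_initial_metadata, send_message, and
// send_trailing_metadata, which also completes the server's
// recv_trailing_metadata as described above.
grpc::ServerContext ctx;
HelloRequest request;
grpc::ServerAsyncResponseWriter<HelloReply> responder(&ctx);

service->RequestSayHello(&ctx, &request, &responder, cq, cq, tag(1));
// ... wait for tag(1) on the completion queue, then compute the reply ...
HelloReply reply;
responder.Finish(reply, grpc::Status::OK, tag(2));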
In addition to these basic stream ops, the transport must handle cancellation of a stream at any time and pass its effects to the other side. The transport must also perform operations such as pings and statistics collection that are used to shape transport-level characteristics like flow control (see, for example, their use in the HTTP/2 transport).
Regarding the C++ API: most of the existing custom transports are enabled by creating their own credentials C++ types and using those to turn on the new transport.
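Not every transport goes through credentials, though. The in-process transport mentioned earlier, for instance, is surfaced in the C++ API as a factory method on the server object:

// The in-process transport is exposed via a factory method on grpc::Server;
// RPCs on this channel bypass sockets and go straight to 'server'.
std::shared_ptr<grpc::Channel> channel =
    server->InProcessChannel(grpc::ChannelArguments());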
With ActorPublisher deprecated in favor of GraphStage, it looks as though I have to give up my actor-managed state for GraphStageLogic-managed state. But with the actor-managed state I was able to mutate state by sending arbitrary messages to my actor, and with GraphStageLogic I don't see how to do that.
So previously, if I wanted to create a Source to expose data that is made available via HTTP request/response, then with ActorPublisher demand was communicated to my actor by Request messages, to which I could react by kicking off an HTTP request in the background and sending the responses to my actor, so that I could send their contents downstream.
It is not obvious how to do this with a GraphStageLogic instance if I cannot send it arbitrary messages. Demand is communicated by OnPull(), to which I can react by kicking off an HTTP request in the background. But then, when the response comes in, how do I safely mutate the GraphStageLogic's state?
(aside: just in case it matters, I'm using Akka.Net, but I believe this applies to the whole Akka streams model. I assume the solution in Akka is also the solution in Akka.Net. I also assume that ActorPublisher will also be deprecated in Akka.Net eventually even though it is not at the moment.)
I believe that the question is referring to "asynchronous side-channels" and is discussed here:
http://doc.akka.io/docs/akka/2.5.3/scala/stream/stream-customize.html#using-asynchronous-side-channels.
Using asynchronous side-channels
In order to receive asynchronous events that are not arriving as stream elements (for example a completion of a future or a callback from a 3rd party API) one must acquire an AsyncCallback by calling getAsyncCallback() from the stage logic. The method getAsyncCallback takes as a parameter a callback that will be called once the asynchronous event fires.
Is there any document or step-by-step process that guides us on how we can use WSO2 DAS to pull data from Java class objects and display reports from this data using WSO2 Dashboards?
Any help would be really appreciated.
First, you can create an Event Stream by specifying its attributes and mentioning which attributes you need to persist. When events arrive at the stream, they will be stored in event tables [1].
Then you can create an Event Receiver for that Event Stream [2]. When creating an event receiver you can use a protocol such as Thrift, SOAP, HTTP, MQTT, JMS, Kafka, or WebSockets. You can write a simple Java application to publish data to the DAS receiver you created, in the message format of the protocol you selected. For instance, if you create a SOAP receiver you can send data in SOAP message format, and if you create an HTTP receiver you can use JSON.
You can create a dashboard and gadgets to visualize the event table that was created by your persisted stream [3]. Please note that this event table consists of all the events WSO2 DAS received; you can process these data using Spark SQL [4] and create several streams that could be used in the Analytics Dashboard.
[1] https://docs.wso2.com/display/DAS300/Understanding+Event+Streams+and+Event+Tables
[2] https://docs.wso2.com/display/DAS300/Configuring+Event+Receivers
[3] https://docs.wso2.com/display/DAS300/Analytics+Dashboard
[4] https://docs.wso2.com/display/DAS300/Batch+Analytics+Using+Spark+SQL
The subject of your question and the body are contradictory: the subject says push data, while the body says pull data.
If pushing data is what you want to achieve, you can refer to https://docs.wso2.com/pages/viewpage.action?pageId=45952633 This uses a Thrift client to push data to DAS.
Please refer to https://docs.wso2.com/display/DAS300/Analyzing+Data for how to analyze the raw data. You can write Spark scripts for the analysis.
Finally, see https://docs.wso2.com/display/DAS300/Communicating+Results for how to communicate the results. You may use the REST APIs exposed with DAS 3.0.0 to pull data from DAS.
I am working with WSO2 CEP 3.0. I went through the WSO2 CEP docs, but there is no explanation of how I would use it:
1. What is the use of the Input Event Adapter? As I understand it, it is for getting data from the client.
2. Event Builder: specifies the format of the incoming data.
3. Event Formatter: specifies the format of the outgoing data.
4. Output Event Adapter: the output handler.
But how can I use these? Is there an example program or event writer? Most importantly, how would I publish the results to the external world, for example as an HTTP or HTTPS endpoint, or over JMS?
I am unable to understand how and where to start.
Please advise; I already know ESB, DSS, IS, and BPS.
For a typical CEP use case, you configure the input event adaptor to connect to an event source, such as a JMS endpoint, a Thrift endpoint (the WSO2Event adaptor in CEP 3.0.0), etc. The event builder specifies how the incoming message will be mapped.
Next, the execution plans hold the actual queries (the processing part) of the execution flow. This is where the CEP engine actually processes the events.
The event formatter formats the result into an output format as needed. The output event adaptor connects to the actual endpoint to which the processed result will be published; it could publish to a JMS endpoint, email, database, etc.
To get started, you need to create an input event adaptor, then an event builder that uses the input event adaptor, then the execution plan, then the output event adaptor, and finally an event formatter that formats the result from the execution plan and sends it to the relevant output adaptor. You can find the flow in [1].
You can find example configurations in the /samples/artifacts directory. You can find an overview of the samples in [2] and how to run them in [3]. Each sample has an associated producer and consumer that simulate real-world event producers/consumers. The samples are the best place to learn more about CEP configurations.
[1] http://docs.wso2.org/display/CEP300/CEP+Configuration+Overview
[2] http://docs.wso2.org/display/CEP300/Overview+of+Samples
[3] http://docs.wso2.org/display/CEP300/Setting+up+CEP+Samples
I have a situation where I need my API to have a call for triggering a server-side event. No information (besides authentication) is needed from the client, and nothing needs to be returned by the server. Since this doesn't fit well into the standard CRUD/resource interaction, should I take this as an indicator that I'm doing something wrong, or is there a RESTful design pattern for these conditions?
Your client can just:
POST /trigger
To which the server would respond with a 202 Accepted.
That way your request can still contain the appropriate authentication headers, and the API can be extended in the future if you need the client to supply an entity, or need to return a response with information about how to query the event status.
There's nothing "non-RESTful" about what you're trying to do here; REST principles don't have to correlate to CRUD operations on resources.
The spec for 202 says:
The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
You aren't obliged to send anything in the response, given the "SHOULD" in the definition.
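If you do choose to honor that SHOULD, the exchange might look like this (the path, header values, and body fields are just placeholders):

POST /trigger HTTP/1.1
Host: api.example.com
Authorization: Bearer <token>

HTTP/1.1 202 Accepted
Content-Type: application/json

{"status": "queued", "statusMonitor": "/trigger/status/42"}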
REST defines the nature of the communication between the client and server. In this case, I think the issue is that there is no information to transfer.
Is there any reason the client needs to initiate this at all? I'd say your server-side event should be entirely self-contained within the server. Perhaps kick it off periodically with a cron call?