I have a question on how to integrate a Kafka producer with a front-end web app that produces data every minute or second. Can the web app pass the JSON object to an already-running producer each time one is created, or do we need to initiate the Kafka client each time we get a JSON object?
You would probably want to open a new Producer per session rather than opening and closing one for each and every request. And this would be done on the backend, not the frontend.
But a web server containing a Kafka client is no different underneath the HTTP layer from a regular console app: you accept an incoming request, deserialize it, optionally parse it, then serialize it again for Kafka output, then optionally render something back to the user.
If you're really asking, "is Kafka with HTTP requests possible", regardless of the language and platform, then sure; the Confluent REST Proxy operates similarly, only written in Java.
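For illustration, here is a minimal sketch of that pattern, assuming a Python backend with Flask and kafka-python (the topic and endpoint names are made up): one long-lived producer created at startup and reused for every incoming request.

    import json
    from flask import Flask, request
    from kafka import KafkaProducer

    app = Flask(__name__)

    # Created once at startup and reused; not opened per request.
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda v: json.dumps(v).encode('utf-8'),
    )

    @app.route('/events', methods=['POST'])
    def publish_event():
        payload = request.get_json()          # deserialize the incoming request
        producer.send('web-events', payload)  # serialize again for Kafka output
        return '', 204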
As far as webapp tracking goes, I would suggest looking into Divolte Collector.
I am working on an application which consists of two layers: the GUI, built in Electron, and the "backend", built in C++ and running in the background. The GUI needs to be able to (amongst other things, such as streaming data) send data to and request data from the backend for configuration purposes. Redis is being used for communication, mainly for its pub/sub capability.
What would be the preferred way to request and send data from/to the backend? I came up with the following ideas but I'm not sure if any of these are the way to go.
1. Publish a value on a configuration channel and handle the request via a switch case, e.g. configuration.set_sensor_frequency is handled by a set_sensor_frequency(value) function in the backend (see the sketch after this list).
2. Write the configuration to configuration.sensor_frequency on the Redis server, listen for the set event on the backend, and react accordingly. But this seems like method 1, only more complicated.
3. Like method 2, write the config to the Redis server, but periodically check (every few cycles or so) in the backend whether the value has been updated.
4. Something else. Please elaborate.
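A minimal sketch of method 1, written in Python for brevity even though the real backend is C++ (channel, key, and handler names are all illustrative):

    import json
    import redis

    r = redis.Redis(host='localhost', port=6379)
    pubsub = r.pubsub()
    pubsub.subscribe('configuration')

    # Dispatch table standing in for the switch case in the C++ backend.
    handlers = {
        'set_sensor_frequency': lambda value: print('sensor frequency set to', value),
    }

    for message in pubsub.listen():
        if message['type'] != 'message':
            continue
        req = json.loads(message['data'])  # e.g. {"key": "set_sensor_frequency", "value": 10}
        handler = handlers.get(req['key'])
        if handler:
            handler(req['value'])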
I'm presently working on my Ubuntu (14.04) system. In my project there are three servers apart from the broker. The core is the Flask server; the other two are Scrapyd and a Sentiment Analysis (SA) server.
Using the 'Work Queues' tutorial, I have managed to write consumer as well as producer code for the broker between Scrapyd (via pipelining) and Flask. Similarly, using the 'RPC' part of the tutorial, I have written code for the SA server and the Flask server.
The problem is that the Flask server has become a consumer at two ends: it is waiting for a response from Scrapyd as well as from the SA server. The whole idea is to take data from the scraper, transfer it to the SA server, take back the response, and pass it on to the front-end. Now, the only way I could think of to get data from the "consumer" part of the code to the code running in the 'view' function of the Flask server is via the 'callback' function in the RabbitMQ consumer.
Presently, I am trying it this way:
Once the data from the scraper arrives at the Flask end, we create an object of the other 'consumer' (the one that will interact with the SA server) and transfer data through that object. This is done in the callback function on the consumer side of the broker between the scraper and the Flask server. Up to this point, everything is fine.
The problem arises when the data from the SA server arrives. I don't know how I am supposed to get data from the callback function of the consumer part of the broker code to the 'view' function of the Flask app.
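One common way to bridge a background consumer and a Flask view is a thread-safe queue: the callback puts the SA response in, and the view blocks on get(). A minimal sketch of that idea with pika (the queue name is illustrative, and a production version would need per-request correlation IDs):

    import queue
    import threading
    import pika
    from flask import Flask

    app = Flask(__name__)
    results = queue.Queue()

    def sa_callback(ch, method, properties, body):
        # Runs on the consumer thread when the SA server replies.
        results.put(body)

    def consume_sa_responses():
        connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='sa_responses')
        channel.basic_consume(queue='sa_responses',
                              on_message_callback=sa_callback, auto_ack=True)
        channel.start_consuming()

    threading.Thread(target=consume_sa_responses, daemon=True).start()

    @app.route('/result')
    def get_result():
        # Blocks until the SA response arrives (the timeout keeps it from hanging forever).
        return results.get(timeout=30)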
Background:
I have a local application that processes the user's input for approximately 3 seconds and then returns an answer (output) to the user.
(I don't want to go into details about my application, so as not to complicate the question and to keep it purely architectural.)
My Goal:
I want to make my application a service in the cloud and expose an API
(for the upcoming website and for clients that will connect to the service without installing the software locally).
Possible Solutions:
1. Deploy WCF in the cloud and use my application there, so clients can invoke the service and use my application in the cloud (RPC style).
2. Use a Web API that inserts the request into a queue; a worker role then dequeues requests and posts the results to a DB. The client sends one request to create the entry in the queue, and another request to get the result (which the Web API fetches from the DB). See the sketch after this list.
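To make solution #2 concrete, here is a minimal sketch of its shape in Python, with an in-process queue and dict standing in for the cloud queue, worker role, and DB (all names are illustrative; the real system would use Azure components):

    import queue
    import threading
    import uuid
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    jobs = queue.Queue()  # stands in for the cloud queue
    results = {}          # stands in for the results DB

    def process(payload):
        return payload.upper()  # placeholder for the real ~3 second computation

    def worker():
        # Stands in for the worker role: dequeue, process, store the result.
        while True:
            job_id, payload = jobs.get()
            results[job_id] = process(payload)

    threading.Thread(target=worker, daemon=True).start()

    @app.route('/requests', methods=['POST'])
    def create_request():
        job_id = str(uuid.uuid4())
        jobs.put((job_id, request.get_json()['input']))
        return jsonify({'id': job_id}), 202  # accepted; result not ready yet

    @app.route('/requests/<job_id>', methods=['GET'])
    def get_result(job_id):
        if job_id not in results:
            return jsonify({'status': 'pending'}), 200
        return jsonify({'status': 'done', 'result': results[job_id]}), 200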
The Problems:
If I go with the WCF solution (#1), I can't handle great loads of requests, maybe 10-20 simultaneously.
If I go with the WebAPI-Queue-WorkerRole solution (#2), sometimes the client will need to request the results multiple times, which can be a problem.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the process isn't synchronous: the client will not get the result as soon as his request has been processed; he needs to ask for the result.
Questions:
In the WebAPI-Queue-WorkerRole solution (#2), can I somehow alert the client once his request has been processed, so I can save the client multiple requests (for the result)?
Isn't asking multiple times for the result an outdated approach? I remember that 10-15 years ago it was accepted, but now? I know that the VirusTotal API uses this kind of design.
Is there a better solution? One that will handle great loads and will be sync or async (returning the result to the client once it's done)?
Thank you.
If you're using Azure, why not simply fire up more servers and use load balancing to handle more load? In that way, as your load increases, you have more servers to handle the requests.
Microsoft recently made available the Azure Service Fabric, which gives you a lot of control over spinning up and shutting down these services.
I've got a Grails app (version 2.2.4) with a controller method that "logs" all requests to an external web service (JSON over HTTP - one way message, response is not needed). I want to decouple the controller method from calling the web service directly/synchronously and provide a simple "queue" which can store the calls if the web service is unavailable and then send them through once the service is back up again.
This sounds like a good fit for some sort of JMS solution, but I don't have any experience with using JMS (so the learning curve could be an issue). Should I be using one of the available messaging plugins, or is that overkill for my simple requirements? I don't want a separate messaging app; it has to be embedded in my webapp, and I'd prefer something small and simple vs more complicated and robust (so advice on which plugin would be welcome).
The alternative is to implement an async service myself and queue the "messages" in the database (reading them via a Quartz job), or with something like java.util.concurrent.ConcurrentLinkedQueue?
EDIT: Another approach could be to use log4j with a custom appender set up as an AsyncAppender.
The alternative is to implement an async service myself and queue the "messages" in the database (reading them via a Quartz job)
I went ahead and tried this approach. It was very straightforward and was only a "screen" length of code in the end. I tested it with a failing web service endpoint as well as an app restart (crash), and it handled both. I used a single service class both to persist the messages (as a Grails domain class) and to flush the queue (triggered by the Quartz scheduler), which reads the DB and fires off the web service calls, removing the DB entity when the web service returns a 200 status code.
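The actual implementation is a Grails service plus domain class, but the pattern itself is small enough to sketch; here it is in Python with sqlite3 and requests (the table name and endpoint URL are placeholders):

    import sqlite3
    import requests

    DB = 'queue.db'
    ENDPOINT = 'https://example.com/log'  # placeholder for the external web service

    def init():
        with sqlite3.connect(DB) as conn:
            conn.execute('CREATE TABLE IF NOT EXISTS messages '
                         '(id INTEGER PRIMARY KEY, body TEXT)')

    def enqueue(body):
        # Called from the controller instead of the synchronous web service call.
        with sqlite3.connect(DB) as conn:
            conn.execute('INSERT INTO messages (body) VALUES (?)', (body,))

    def flush():
        # Run periodically (the Grails version triggers this from a Quartz job).
        with sqlite3.connect(DB) as conn:
            for row_id, body in conn.execute('SELECT id, body FROM messages').fetchall():
                try:
                    resp = requests.post(ENDPOINT, data=body,
                                         headers={'Content-Type': 'application/json'},
                                         timeout=10)
                except requests.RequestException:
                    break  # service still down; leave the rows and retry next run
                if resp.status_code == 200:
                    conn.execute('DELETE FROM messages WHERE id = ?', (row_id,))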
We currently use LoadRunner for performance testing our web apps, but we also have some server side processes we need to test.
Background:
We call these processes our "engines". One engine receives messages by polling an IBM WebSphere MQ queue for messages. It takes a message off the queue, processes it, and puts the result on an outbound queue. We currently test this engine via a TCL script that reads a file containing the messages, puts the messages on the inbound queue, then polls the outbound queue for the results.
The other engine receives messages via a web service. The web service writes the message to a table in our database. The engine polls the database table for new messages, takes a message and processes it, and puts the result back into the database. We currently test this engine via a VBScript script that reads a file containing the messages, sends each message to the web service, then keeps querying the web service for the result until it's ready.
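For reference, the put-then-poll pattern the TCL script implements looks roughly like this in Python using pymqi, IBM's MQ client library (queue manager, channel, and queue names are all illustrative):

    import pymqi

    # Connection details are illustrative.
    qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
    inbound = pymqi.Queue(qmgr, 'ENGINE.INBOUND')
    outbound = pymqi.Queue(qmgr, 'ENGINE.OUTBOUND')

    # Put each message from the test file on the inbound queue.
    with open('messages.txt') as f:
        for line in f:
            inbound.put(line.strip().encode())

    # Poll the outbound queue for results, waiting up to 5 seconds per get.
    gmo = pymqi.GMO()
    gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
    gmo.WaitInterval = 5000  # milliseconds

    try:
        while True:
            print(outbound.get(None, pymqi.MD(), gmo))
    except pymqi.MQMIError as e:
        if e.reason != pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
            raise  # timing out with no message is expected; anything else is an error

    inbound.close()
    outbound.close()
    qmgr.disconnect()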
Question:
We'd like to do away with the TCL and VBScript scripts and standardize on LoadRunner so that we have one tool to manage all our performance tests.
I know LoadRunner supports a Web Services protocol "out of the box", but I'm not sure how to use it. Does anyone know of any examples of how to use LoadRunner to test a web service?
Does LoadRunner have a protocol for MQ? Is it possible to use a LoadRunner Vuser to drive load (put messages) into an MQ queue? Would we need to purchase something from HP or some other vendor to do this?
Thanks :)
There is an add-in for LoadRunner in the included software to interface with MQ Series and put the messages directly on the queue. Web services are fully supported too, and VBScript is supported as well; perhaps use QTPro for the script and a GUI user in LoadRunner?
Colin.
For #1, as an alternative to a Web Services script, you could try recording a Windows Sockets script. I've used LoadRunner to record winsock scripts to test some (Java) APIs. What I did was write a really simple Java API client and then execute that from a Windows batch file. The batch file would then be referenced as the executable when recording a LR script in VUGen.
I'm not sure if VUGen can load a VBScript file for recording, but you might try. Otherwise, you might try wrapping your VBScript in a batch file that can be run by VUGen.
When VUGen records a winsock script, it's basically monitoring the network communication for the process you're recording with. After you're done recording, it'll generate a dump of the network data in a "data.ws" worksheet that you can look at and edit with VUGen. You can parameterize this data worksheet for your load tests.
One can code SOA requests and parse responses within LoadRunner.
See wilsonmar.com/1lrscript.htm.
But bear in mind that TCL and VBScript developed for functional testing have a different architecture and scope than LoadRunner scripts. QTP and WinRunner take over the application.
LoadRunner scripts focus on the exchange of data across the wire. In the case of headless SOA XML, this architectural distinction doesn't matter.
However, it may be easier for you to maintain VBScript from the GUI, because creating SOA scripts in LoadRunner requires a deeper understanding of message formats than most MQ developers have.
You really have three paths for pushing and popping messages off of an MQ queue using LoadRunner:
(1) MQTester. This is a native MQ protocol add-in for use with LoadRunner.
(2) Winsock. Winsock development is best described as tediously similar to picking fly scat out of ground pepper: tedious, but in the end very rewarding. Out of the box, no additional add-ins are required except (possibly) license updates.
(3) JMS using a Java virtual user; see http://en.wikipedia.org/wiki/Java_Message_Service . You wind up with a small Java program in the Java template virtual user for LoadRunner. You will have to deal with all of the Java black-magic aspects associated with LoadRunner, but once you nail down the combination of release and installation details, you can use virtually the same code to post to just about any JMS provider (not just MQ) with some connection factory settings changed.
You should be able to do JMS with the web services virtual user as well, but I have not tested that configuration. Look at the JMS section of the run-time settings.