Metrics Monitoring Superset - apache-superset

I can't find any articles or docs describing how to monitor Superset in a production environment. Any help on which metrics we can use to monitor Superset?

Superset emits events via statsd, which is described as:
"A network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP or TCP and sends aggregates to one or more pluggable backend services (e.g., Graphite)."
More information here:
https://superset.incubator.apache.org/installation.html#statsd-logging
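A minimal sketch of that wiring in superset_config.py, assuming the StatsdStatsLogger described on the linked page; the host, port, and prefix values below are placeholders for your environment:

```python
# superset_config.py -- hypothetical StatsD wiring; host/port/prefix are
# placeholders, and a statsd daemon is assumed to be listening on UDP 8125.
from superset.stats_logger import StatsdStatsLogger

STATS_LOGGER = StatsdStatsLogger(
    host="localhost",   # address of the statsd daemon
    port=8125,          # default statsd UDP port
    prefix="superset",  # prefix applied to every emitted metric name
)
```

From there, statsd can forward the aggregated counters and timers to a backend such as Graphite for dashboards and alerting.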

Related

How to make a communication between Arduino, Web app and AWS?

I'm making a project where temperature and humidity levels are sensed by an Arduino and sent to AWS with an ESP8266-01S. At the same time, that data is also shown on a web application (it may be in Node.js/Java, etc.).
So what I'm asking is how the architecture should look. What is the best practice? Does AWS also provide a web app that I can use both as a cloud database and as a web application, or should I make a separate project as a web app that connects to AWS?
I searched on Google, but the only answers I can find cover two parts: Arduino and AWS, without the other aspect connected to it, in my case the web app.
Make use of the MQTT protocol.
Components required:
PubSubClient.h library on the ESP8266, used to publish temperature and humidity data to the MQTT broker on AWS
Mosquitto MQTT broker set up on AWS, used to accept data from the ESP8266
A Python script that subscribes to the data from the Mosquitto broker and dumps it into a database (my suggestion is InfluxDB); see the sketch after this list
A graphing platform to query the database and display time-series graphs (my suggestion: Grafana)
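A minimal sketch of that subscriber, assuming paho-mqtt 1.x, a Mosquitto broker on the AWS VM, a topic such as sensors/dht, and a local InfluxDB database named iot; all of these names are placeholders:

```python
# Hypothetical bridge: subscribe to MQTT and write points into InfluxDB.
# Broker host, topic, and database names are assumptions for illustration.
import json

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

BROKER_HOST = "your-aws-vm-ip"   # Mosquitto broker on the AWS VM
TOPIC = "sensors/dht"            # topic the ESP8266 publishes to
influx = InfluxDBClient(host="localhost", port=8086, database="iot")

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Expecting a JSON payload like {"temperature": 23.5, "humidity": 61.0}
    reading = json.loads(msg.payload.decode())
    influx.write_points([{
        "measurement": "climate",
        "fields": {
            "temperature": float(reading["temperature"]),
            "humidity": float(reading["humidity"]),
        },
    }])

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```

Grafana can then use the same InfluxDB database as a data source and chart the climate measurement over time.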
Use AWS only for purchasing a virtual machine. The rest can be taken care of using open-source platforms.
Assuming you want to display graphs of temperature and humidity, using Grafana is the best practice.
You will not find a silver bullet here. A proper architecture for your case depends on many things and there can be different approaches with their own pros and cons.
There are many aspects to cover including connectivity, security, update, availability, costs.
Usually IoT devices are not connected directly to the cloud, because they don't have a constant connection, or any network connection at all. There is a hub (or middleware) that collects data from sensors/devices and sends it to the cloud for processing.
But many cloud vendors (including AWS) provide complex out-of-the-box solutions here.
These are just examples.

ActiveMQ: persistent queue and offline system

I'm a fresh user of ActiveMQ technology, and I have some problems approaching it.
I have the following situation:
I have a piece of software, running on an embedded (offline) ARM device, that archives a set of videos on a removable hard disk at run time.
Sometimes (4-5 events a day), I have to associate an alarm event with those videos and queue the alarm on a persistent queue.
Once a month we have to extract the hard disk and connect it to another embedded, online ARM device, which should notify an ActiveMQ server about the alarms generated by the offline ARM device.
And now my question: how can I store the persistent queue on the hard disk, so that the events generated by the offline ARM device are available to the online ARM system (the only "connection" between the online and offline embedded devices is the hard disk)?
Please note that I cannot change the way I transmit messages to the online server, since it is a system not developed by my company.
Best regards
Giovanni
It sounds like you want a "store-and-forward" messaging pattern. You could configure the "offline" ActiveMQ broker to attempt to connect to the "online" ActiveMQ broker. The network connector will attempt to connect at configurable intervals and when it is "online" it will begin to send messages automatically.
The slight downside is that the broker will attempt to connect to the remote broker even when it is offline, so you'll need to manage log rotation or logging levels to accommodate this.
Look for the static:// network connector URI and the ActiveMQ "Network of Brokers" documentation.
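A minimal sketch of such a connector in the offline broker's activemq.xml, assuming the online broker listens on online-host:61616; the broker name, host, and port below are placeholders:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="offline-broker">
  <!-- Store-and-forward bridge: keeps retrying until the remote broker is
       reachable, then forwards the persisted queue messages automatically. -->
  <networkConnectors>
    <networkConnector name="to-online" uri="static:(tcp://online-host:61616)"/>
  </networkConnectors>
</broker>
```

For the removable-disk workflow in the question, one option is to point the broker's persistence store (KahaDB by default) at that disk, so the online device can start a broker against the same data directory.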

Can I use pymqi to connect to an IBM MQ multi-instance queue manager

I'm using PyMQI, which works fine with IBM MQ single-instance queue managers, but does anyone know if I can pass dual IP addresses and ports in the connection string, and whether the MQ client under the hood handles the IBM MQ multi-instance queue manager?
PyMQI sits on top of the underlying MQ libraries. If you are using it with MQ v7.0 or higher, then you can specify multiple connection names separated by commas. It will then try each one in order and loop back to the first one if it cannot connect to any of them. Some settings related to how often and for how long it will retry can be set in mqclient.ini.
The IBM Knowledge Center page "Automatic client reconnection" has good general information on the reconnect options. Everything in it related to the C/C++ clients applies to PyMQI.
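A minimal sketch with PyMQI, assuming an MQ client library at v7.0 or higher and a multi-instance queue manager named QM1; the hosts, ports, channel, and queue name below are placeholders:

```python
# Hypothetical connection to a multi-instance queue manager: the client
# tries each host(port) in the comma-separated list in order.
import pymqi

queue_manager = "QM1"
channel = "APP.SVRCONN"
conn_info = "mqhost1(1414),mqhost2(1414)"  # active and standby instances

qmgr = pymqi.connect(queue_manager, channel, conn_info)
try:
    queue = pymqi.Queue(qmgr, "TEST.QUEUE")
    queue.put("hello from pymqi")
    queue.close()
finally:
    qmgr.disconnect()
```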

Linux C++ Network Session monitor

I'm trying to monitor the network sessions on a server with event-driven programming (and not by polling /proc/net/tcp or /proc/net/udp).
I was able to find this article, but it only provides a one-time look at the current state and not an event on each change (LISTEN, ESTABLISHED, ...).
Is it possible to use this like in the article that monitors process changes, but for network connections?
If not, is there any other API that I can use to achieve this without polling /proc/net/* at an interval?

Using LoadRunner to Test Server Processes

We currently use LoadRunner for performance testing our web apps, but we also have some server side processes we need to test.
Background:
We call these processes our "engines". One engine receives messages by polling an IBM WebSphere MQ queue for messages. It takes a message off the queue, processes it, and puts the result on an outbound queue. We currently test this engine via a TCL script that reads a file that contains the messages, puts the messages on the inbound queue, then polls the outbound queue for the results.
The other engine receives messages via a web service. The web service writes the message to a table in our database. The engine polls the database table for new messages, takes a message and processes it, and puts the result back into the database. We currently test this engine via a VBScript script that reads a file that contains the messages, sends the messages to the web service, then keeps querying the web service for the result until it's ready.
Question:
We'd like to do away with the TCL and VBScript scripts and standardize on LoadRunner so that we have one tool to manage all our performance tests.
I know LoadRunner supports a Web Services protocol "out of the box", but I'm not sure how to use it. Does anyone know of any examples of how to use LoadRunner to test a web service?
Does LoadRunner have a protocol for MQ? Is it possible to use a LoadRunner Vuser to drive load (put messages) into an MQ queue? Would we need to purchase something from HP or some other vendor to do this?
Thanks :)
There is an add-in for LoadRunner, included in the shipped software, to interface with MQ Series and put the messages directly on the queue. Web services are fully supported as well, and VBScript is supported too; perhaps use QTPro for the script and a GUI Vuser in LoadRunner?
Colin.
For #1, as an alternative to a Web Services script, you could try recording a Windows Sockets script. I've used LoadRunner to record winsock scripts to test some (Java) APIs. What I did was write a really simple Java API client and then execute that from a Windows batch file. The batch file would then be referenced as the executable when recording a LR script in VUGen.
I'm not sure if VUGen can load a VBScript file for recording, but you might try. Otherwise, you might try wrapping your VBScript in a batch file that can be run by VUGen.
When VUGen records a winsock script, it's basically monitoring the network communication for the process you're recording with. After you're done recording, it'll generate a dump of the network data in a "data.ws" worksheet that you can look at and edit with VUGen. You can parameterize this data worksheet for your load tests.
One can code SOA requests and parse responses within LoadRunner.
See wilsonmar.com/1lrscript.htm.
But bear in mind that TCL and VBScript developed for functional testing have a different architecture and scope than LoadRunner scripts. QTP and WinRunner take over the application.
LoadRunner scripts focus on the exchange of data across the wire. In the case of headless SOA XML, this architectural distinction doesn't matter.
However, it may be easier for you to maintain VBScript from the GUI, because creating SOA scripts in LoadRunner requires a deeper understanding of message formats than most MQ developers have.
You really have three paths for pushing and popping messages off of an MQ queue using LoadRunner:
(1) MQTester. This is a native MQ protocol add-in for use with LoadRunner.
(2) Winsock. Winsock development is best described as tediously similar to picking fly scat out of ground pepper. Tedious, but in the end very rewarding. Out of the box, no additional add-ins are required except license updates (possibly).
(3) JMS using a Java virtual user, see http://en.wikipedia.org/wiki/Java_Message_Service. You wind up with a small Java program in the Java template virtual user for LoadRunner. You will have to deal with all of the Java black-magic aspects associated with LoadRunner, but once you nail down the combination of release and installation details you can use virtually the same code to post to just about any JMS provider (not just MQ) with some connection factory settings changed.
You should be able to do JMS with the web services virtual user as well, but I have not tested that configuration. Look at the JMS section of the run time settings.