How to save the Collect View tool node info in a .pcap file - Cooja

I need help with a Cooja simulation. I want to save the Collect View tool's node info in a .pcap file; can anyone help? I want to generate a large amount of data for scientific research. The node info includes the AVG, which I want to remove.

Related

How can I make Power Query read ".dss" files?

I'm trying to build a dashboard in Power BI with the .dss files from HEC-HMS simulations to show time series results, but the data is inside a ".dss" file and Power Query says: "we don't recognize the format of the first file".
How can I open those ".dss" files inside Power Query?
Thanks! Waiting for help.
This looks like what you might be looking for:
HEC-DSS File and HEC-DSSVue – Gridded Data:
Quote:
HEC-DSS, USACE Hydrologic Engineering Center Data Storage System, is a type of database system to store data primarily for hydrologic and hydraulic modeling (*.dss file). HEC-DSSVue is a tool to view, edit, and visualize a HEC-DSS file. Unlike other commercial or open source databases, HEC-DSS is not a relational database: HEC-DSS uses blocks (records) to store data within a HEC-DSS file, and each HEC-DSS file can have numerous blocks (records). In addition to time series data and paired data in HEC-DSS, gridded data can also be stored in a HEC-DSS file.
HEC-DSSVue can be downloaded from here:
https://www.hec.usace.army.mil/software/hec-dssvue/

Analysis of Log with Spark Streaming

I recently did an analysis of a static log file with Spark SQL (finding things like the IP addresses that appear more than ten times). The problem was from this site. But I used my own implementation for it. I read the log into an RDD, turned that RDD into a DataFrame (with the help of a POJO) and used DataFrame operations.
Now I'm supposed to do a similar analysis using Spark Streaming on a streaming log file, for a window of 30 mins as well as aggregated results for a day. The solution can again be found here, but I want to do it another way. So what I've done is this:
Use Flume to write data from the log file to an HDFS directory
Use JavaDStream to read the .txt files from HDFS
Then I can't figure out how to proceed. Here's the code I use
Long slide = 10000L; //new batch every 10 seconds
Long window = 1800000L; //30 mins
SparkConf conf = new SparkConf().setAppName("StreamLogAnalyzer");
JavaStreamingContext streamingContext = new JavaStreamingContext(conf, new Duration(slide));
JavaDStream<String> dStream = streamingContext.textFileStream(hdfsPath).window(new Duration(window), new Duration(slide));
Now I can't decide whether I should turn each batch into a DataFrame and do what I previously did with the static log file, or whether that approach is time-consuming and overkill.
I'm an absolute noob to Streaming as well as Flume. Could someone please guide me on this?
Using DataFrames (and Datasets) is the recommended approach in the latest versions of Spark, so going with them is the right choice. I think some of the confusion comes from the non-explicit nature of the stream, since you move files into HDFS rather than reading from an event log.
The main point here is to choose the right batch interval (the slide size in your snippet), so the application can process the data loaded within that time slot and batches don't queue up.
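To make that concrete, here is a rough sketch of one common way to run the same DataFrame query per window, using foreachRDD on the windowed stream. LogRecord and LogRecord.parse stand in for the POJO and line parser you already used for the static log, and the "ip" column name is an assumption:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Runs once per batch over the current 30-minute window.
dStream.foreachRDD((rdd, time) -> {
    if (rdd.isEmpty()) {
        return; // nothing arrived in this window
    }
    // Reuse (or lazily create) a SparkSession tied to the streaming context's conf.
    SparkSession spark = SparkSession.builder()
            .config(rdd.context().getConf())
            .getOrCreate();

    JavaRDD<LogRecord> records = rdd.map(LogRecord::parse); // placeholder for your POJO parser
    Dataset<Row> logs = spark.createDataFrame(records, LogRecord.class);

    // Same query as the static case: IPs seen more than ten times in this window.
    logs.groupBy("ip").count().filter("count > 10").show();
});
// then streamingContext.start() and streamingContext.awaitTermination() as usual

For the daily aggregate, one option is a second stream with a 24-hour window, or simply appending each batch's results somewhere and querying them with Spark SQL.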

Time estimate for downloading and uploading a file (in Python)

I want code to estimate the time to download/upload a file, and the transfer speed. I want to integrate this part on the back end.
What would be the way to get the network speed and time estimation using Python?
You need to determine the throughput of the client connection (download/upload) and divide the file size by that throughput; that's the logic. There are many plugins to measure the speed,
but please don't wait for someone to spoon-feed you the code.
you're a programmer after all
cheers..
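For reference, the estimate itself is a single division once you have measured the throughput. Here's a minimal sketch (in Java, with made-up numbers; the same arithmetic translates directly to Python). In practice you would time a real probe transfer, or re-measure as the transfer progresses:

public class TransferEta {
    public static void main(String[] args) {
        long fileSizeBytes = 250L * 1024 * 1024;  // assumed: a 250 MB file to transfer
        long probeBytes    = 1024 * 1024;         // bytes moved during a short measurement probe (1 MB)
        long probeMillis   = 800;                 // measured wall-clock time for that probe

        double bytesPerSecond = probeBytes / (probeMillis / 1000.0);  // throughput
        double etaSeconds     = fileSizeBytes / bytesPerSecond;       // estimated time = size / throughput

        System.out.printf("Speed: %.1f KB/s, estimated time: %.1f s%n",
                bytesPerSecond / 1024, etaSeconds);
    }
}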

Can rrdtool store data for metrics whose list changes over time, for example the top 10 processes consuming CPU?

We need to create a graph of the top 10 items, which will change from time to time - for example, the top 10 processes consuming CPU, or any other top 10 items we can generate values for on the monitored server - with the possibility of showing the item names on the graph.
Please tell me, is there any way to store this information using rrdtool?
Thanks
If you want to store this kind of information with rrdtool, you will have to create a separate rrd database for each item, update them accordingly and finally generate charts picking the 10 'top' rrd files ...
In other words, quite a lot of the magic has to happen in the script you write around rrdtool ... rrdtool will take care of storing the time series data ...
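To sketch the idea: the wrapper script creates one RRD per item the first time it sees it, feeds each sample into that item's file, and at graph time sorts the files by their latest value and passes the ten highest to rrdtool graph. Below is a rough illustration in Java shelling out to the rrdtool CLI; the 60-second step, the DS name "cpu" and the item-name-as-filename scheme are all assumptions to adapt:

import java.io.File;
import java.io.IOException;

public class TopNRrd {

    static void run(String... cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    // Create one RRD per item the first time it shows up
    // (item names are assumed to be safe to use as file names).
    static void ensureRrd(String item) throws IOException, InterruptedException {
        File rrd = new File(item + ".rrd");
        if (!rrd.exists()) {
            run("rrdtool", "create", rrd.getPath(),
                "--step", "60",                 // one sample per minute
                "DS:cpu:GAUGE:120:0:U",         // a single gauge data source, 120 s heartbeat
                "RRA:AVERAGE:0.5:1:1440");      // keep one day of per-minute averages
        }
    }

    // Store one sample for an item (e.g. the CPU % of a process).
    static void update(String item, double value) throws IOException, InterruptedException {
        ensureRrd(item);
        run("rrdtool", "update", item + ".rrd", "N:" + value);
    }
}

The graphing step then picks the 10 files with the highest recent values and builds one rrdtool graph command with a DEF:/LINE1: pair per file, using the file name (i.e. the item name) as the legend.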

access data from files on disc in *real time*

I have the following problem to solve. I have to build a graph viewer to view a massive data set.
We have some files in a particular format that has millions of records representing the result of an experiment. Each record represents a sample point on a large graph plot. The biggest file I have seen has 43.7 Million records.
An average file contains 10 million records. Each record is small (76 bytes + an optional 12 bytes each). The complete data cannot be loaded into main memory as it is too large. I have built a new file format that compresses the data to 48 bytes per record and organises the data into chunks that are associated with each other. I want to "view" the data by displaying the records in a 2D/3D plot. As the data is very dense, I would like to progressively increase the level of detail by loading more data and removing data that is not shown in the view from main memory.
I would also like to access groups of associated records in real time and pre-load similar records in order to keep the loading time to a bare minimum. This will give the user smooth control to view the data, instead of an experience similar to viewing a video on YouTube over a very slow internet connection. The user cannot jump around randomly and has to use the controls to navigate, and I would like to use this information to load the relevant records into main memory.
The data has to be loaded progressively from the disc based on what is currently in main memory. Records in main memory that are not required in the current context can be removed and, if required, reloaded.
How do I access data from the disc at high speed based on some hash number?
How do I manage main memory if the data to be viewed in the current context is too large? If your answer is level of detail, then how do I build it for a large data set, and should this data be part of the file?
I have been working on this for the last two weeks and I seem to be stuck due to I/O speed.
I am working in native C++ and I cannot use anything under the GPL. If you need any more info, let me know.
Ram
Under most modern operating systems (Linux, other Unixes, Windows) you can map a file into memory.
This means you can access the content of the file as if it were entirely in memory (e.g. you can use data[i++], strchr(data,..), etc.), and it's the operating system that does the mapping between memory and the file. When you want to read some data that is not already in memory, the OS will fetch it from the file.
You should read this question's answer: Mmap() an entire large file
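In native C++ you would call mmap() directly (or MapViewOfFile on Windows). Just to illustrate the access pattern on fixed-size records, here is a rough sketch of the same OS facility as exposed through the JVM; the file name is hypothetical, and the 48-byte record size is taken from the question:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedRecords {
    static final int RECORD_SIZE = 48;   // packed record size from the question

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("records.bin", "r"); // hypothetical data file
             FileChannel channel = raf.getChannel()) {

            // Map (part of) the file; the OS pages data in on demand.
            // A single mapping is limited to 2 GB here, so a 43.7-million-record file
            // (~2.1 GB at 48 bytes/record) would need to be covered by several mappings.
            long chunkBytes = Math.min(channel.size(), Integer.MAX_VALUE);
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, chunkBytes);

            // Random access to record i is just pointer arithmetic; no explicit read() call.
            long i = 1_000_000;                      // e.g. an index derived from your hash number
            map.position((int) (i * RECORD_SIZE));
            byte[] record = new byte[RECORD_SIZE];
            map.get(record);                         // the OS fetches the page if it isn't resident yet
            System.out.println("First byte of record " + i + ": " + record[0]);
        }
    }
}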
I think you are looking for an organization similar to what's used to store level geometry in games, except that you may (depending on how your program works and what data you need to show) need just one dimension. See Quadtree and similar methods (bottom of that article).
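As a rough sketch of that idea (names and structure are illustrative only): each node keeps a small downsampled set of representative samples, and the viewer descends only into children that intersect the current viewport, stopping once it has enough points for the zoom level.

import java.util.ArrayList;
import java.util.List;

// Minimal quadtree-of-samples sketch: shallow traversal gives a coarse preview,
// deeper traversal refines only the region that is actually on screen.
public class SampleQuadtree {

    static class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static class Rect {
        final double x, y, w, h;                                  // origin and size
        Rect(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        boolean intersects(Rect o) {
            return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
        }
    }

    static class Node {
        final Rect bounds;
        final List<Point> representatives = new ArrayList<>();   // coarse preview of this region
        Node[] children;                                          // null for leaf nodes
        Node(Rect bounds) { this.bounds = bounds; }
    }

    // Collect points for the current view: take each visited node's representatives
    // and recurse only into children that overlap the viewport, until the budget is spent.
    static void collect(Node node, Rect viewport, int budget, List<Point> out) {
        if (node == null || out.size() >= budget || !node.bounds.intersects(viewport)) return;
        out.addAll(node.representatives);
        if (node.children != null) {
            for (Node child : node.children) collect(child, viewport, budget, out);
        }
    }
}

If the records on disc are laid out in the same node order, loading a deeper level of detail for the visible region can be done with a few contiguous reads.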