How to add more fields in the Iris RSSiDemo packet - nesC

Can anyone tell me how to increase the packet size on an IRIS mote?
I am pasting my header file; I have added nodeid and counter fields, but they are not appearing in the packet. Any help would be greatly appreciated.
RssiDemoMessages.h

You can add the following line to your Makefile:
CFLAGS += -DTOSH_DATA_LENGTH=50
where 50 is the payload size in bytes. Keep in mind that packet loss increases as the number of bytes increases.
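For the new fields to actually travel in the packet, they must also be members of the message struct, and the send call must use sizeof that struct. Purely as an illustration (the struct name, field types, and AM type below are assumptions, since the original header was not posted), the header might look like:

#ifndef RSSI_DEMO_MESSAGES_H
#define RSSI_DEMO_MESSAGES_H

enum {
  AM_RSSIMSG = 10   // illustrative AM type id
};

typedef nx_struct RssiMsg {
  nx_uint16_t nodeid;    // id of the sending node
  nx_uint16_t counter;   // sequence counter
  nx_int16_t  rssi;      // received signal strength
} RssiMsg;

#endif

The nx_ types keep the byte order platform-independent; if the struct grows beyond the default payload size, the TOSH_DATA_LENGTH flag above is what makes room for it.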

Related

What is the best way to read a large file (>2GB) (a text file containing ethernet data) and randomly access the data by different parameters?

I have a text file which looks like below:
0.001 ETH Rx 1 1 0 B45678810000000000000000AF0000 555
0.002 ETH Rx 1 1 0 B45678810000000000000000AF 23
0.003 ETH Rx 1 1 0 B45678810000000000000000AF156500
0.004 ETH Rx 1 1 0 B45678810000000000000000AF00000000635254
I need a way to read this file, form a structure from each line, and send it to a client application.
Currently I can do this with the help of a circular queue from Boost.
The need here is to access different data at different times.
For example: if I am currently at 100 s and want to access the data at 0.03 s, what is the best way to do this without manually tracking the file pointer, and without loading the whole file into memory, which causes a performance bottleneck? (Consider a file of about 2 GB with the above kind of data.)
Usually the best practice for handling large files depends on the platform architecture (x86/x64) and the OS (Windows/Linux etc.).
Since you mentioned Boost, have you considered using a Boost memory-mapped file?
Boost Memory Mapped File
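As a rough sketch of what that looks like with Boost.Iostreams (the file name is hypothetical, and this assumes a 64-bit process so the whole 2 GB file fits in the address space):

#include <boost/iostreams/device/mapped_file.hpp>
#include <cstring>
#include <iostream>

int main() {
    boost::iostreams::mapped_file_source file("ethernet.log");  // hypothetical name
    const char* data = file.data();     // whole file visible as one contiguous buffer
    std::size_t size = file.size();
    // Scan for lines starting with a given timestamp, e.g. "0.003 ".
    // The OS pages data in lazily, so only the touched pages are actually read.
    for (std::size_t i = 0; i < size; ) {
        const char* line = data + i;
        const char* nl = static_cast<const char*>(std::memchr(line, '\n', size - i));
        std::size_t len = nl ? std::size_t(nl - line) : size - i;
        if (len >= 6 && std::strncmp(line, "0.003 ", 6) == 0) {
            std::cout.write(line, len);
            std::cout << '\n';
        }
        i += len + 1;
    }
}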
It all depends on:
a. how frequent the data access is
b. what the data access pattern is
Splitting the file
If you only need to access the data once in a while, then this 2 GB log design is fine. If not, the logger can be tuned to generate logs at periodic intervals, or a later step can split the 2 GB file into smaller files as needed. Fetching the right ranged log file, reading it, and sorting out the needed lines then becomes easier, since far fewer bytes have to be read from the file.
Cache
For very frequent data access, maintaining a cache is a nice solution for faster responses, though as you said it has its own bottlenecks. The size and layout of the cache depend on (b), the data access pattern. Note that a larger cache also means slower responses, so the size should be kept optimal.
Database
If the search pattern is unordered or grows dynamically with usage, then a database will work. Again, it will not respond as fast as a small cache.
A mix of a database with a table organization tailored to the query types, plus a smaller cache layer, will give the optimum result.
Here is the solution I found:
Used circular buffers (Boost lock-free buffers) for parsing the file and saving the structured form of each line.
Used separate threads:
One continuously parses the file and pushes lines into a lock-free queue.
One continuously reads from that buffer, processes each line, forms a structure, and pushes it to another queue.
Whenever the user needs random data based on time, I move the file pointer to the particular line and read only that line.
Both threads have mutex/wait mechanisms to stop parsing once a predefined buffer limit is reached.
The user gets data at any time, and there is no need to store the complete file contents. As soon as a frame is read, I delete it from the queue, so the file size doesn't matter. The parallel threads that fill the buffers mean no time is spent reading the file on every request.
If I want to move to another line, I move the file pointer, wipe the existing data, and start the threads again. A minimal sketch of the two-thread setup follows.
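This is a hedged sketch of that producer/consumer arrangement with boost::lockfree::spsc_queue, assuming the timestamp is the first column; the Frame struct and the file name are placeholders:

#include <boost/lockfree/spsc_queue.hpp>
#include <atomic>
#include <fstream>
#include <string>
#include <thread>

struct Frame {                 // placeholder for the parsed structure
    double time = 0.0;
    std::string raw;
};

boost::lockfree::spsc_queue<Frame, boost::lockfree::capacity<1024>> frames;
std::atomic<bool> parsingDone{false};

void parserThread(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        Frame f{std::stod(line), line};      // stod reads the leading timestamp
        while (!frames.push(f))              // buffer full: wait for the consumer
            std::this_thread::yield();
    }
    parsingDone = true;
}

void consumerThread() {
    Frame f;
    for (;;) {
        while (frames.pop(f)) {
            // process the line, form the structure, push to the next queue ...
        }
        if (parsingDone.load()) {
            while (frames.pop(f)) { /* drain whatever is left */ }
            break;
        }
        std::this_thread::yield();
    }
}

int main() {
    std::thread p(parserThread, "ethernet.log");   // hypothetical file name
    std::thread c(consumerThread);
    p.join();
    c.join();
}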
Note:
The only remaining issue is moving the file pointer to a particular line; currently I have to parse line by line until I reach the point.
If there is a way to move the file pointer directly to the required line, that would be helpful. Binary search or any efficient search algorithm could be used to get what I want.
I would appreciate it if anybody could give a solution to this new issue!
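For that remaining issue: since the timestamps in the first column only ever increase, a binary search over byte offsets can land on the right line without scanning. This is my own hedged sketch of that idea, not something from the thread:

#include <fstream>
#include <string>

// Timestamp of the first complete line at or after byte offset 'pos'.
double tsAtOrAfter(std::ifstream& in, std::streamoff pos) {
    in.clear();
    in.seekg(pos);
    std::string line;
    if (pos > 0) std::getline(in, line);        // drop the partial line
    if (!std::getline(in, line)) return 1e300;  // past EOF: acts as +infinity
    return std::stod(line);                     // stod stops at the first space
}

// Smallest offset whose next complete line has timestamp >= target.
std::streamoff findOffset(std::ifstream& in, double target, std::streamoff fileSize) {
    std::streamoff lo = 0, hi = fileSize;
    while (lo < hi) {
        std::streamoff mid = lo + (hi - lo) / 2;
        if (tsAtOrAfter(in, mid) < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;  // seek here, drop one partial line (if lo > 0), then read the target line
}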

Understanding Mp3 file structure

I'm working on an MP3 steganography project and I want to encode text inside an MP3 file by manipulating least significant bits (LSBs) at regular intervals. I want to encode the text without making any significant change to the audio. According to this link, http://www.datavoyage.com/mpgscript/mpeghdr.htm, there are MP3 headers which carry information about the MP3 chunk that follows. I would like some guidance on how to make this possible.
An MP3 file is made of a sequence of frames (about 11,000 frames for an MP3 with 4 minutes of playing time). At the front and end of each MP3 file there are two fields of information (ID3 tags v2 and v1) containing metadata about the file; these two fields are optional and can be present or absent without any impact on the audio quality. You should not hide the stego message there, because it can easily be found. A frame consists of a frame header (32 bits) and a frame body (the compressed sound). Since your question is about affecting the frame header (32 bits), I'll focus on the frame header.
Within the 32 bits of the frame header there are still some "unimportant" bits, given their functions (read more detail on what each bit does). In short, you can use the bits at indices 24, 27, 28, 29, 30, 31, and 32 (bits 27 and 28 will have a small impact on the sound quality), with the indices as in the picture at this link: https://en.wikipedia.org/wiki/MP3#/media/File:Mp3filestructure.svg.
So it depends on whether you want just 5 bits per frame or 7 bits per frame. 7 bits is the maximum number of bits you can use per frame according to my work (both theory and tests with source code), but someone else may find more!
To access the byte array of each frame you can write your own class, but there are many free classes available on the Internet; NAudio.dll by Mark Heath (I cannot post the link due to forum rules, you can search Google) is a useful one.
Having accessed the byte array of each frame, you can embed data in (or extract it from) the MP3 file. Note that the first 32 bits of each frame's byte array are the frame header, so you can easily identify the precise index of the unimportant bits!
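A hedged C++ sketch of the embedding step (bit positions as listed above; locating frames is reduced to scanning for the 11-bit frame sync, and ID3 tags are assumed to have been skipped already):

#include <cstddef>
#include <cstdint>
#include <vector>

// Header bits treated as usable above, 1-based from the first header bit:
// 24 = private bit, 27-28 = mode extension, 29-32 = copyright/original/emphasis.
const int kUsableBits[7] = {24, 27, 28, 29, 30, 31, 32};

// True if a frame header starts at byte i (11 sync bits set).
bool isFrameSync(const std::vector<std::uint8_t>& d, std::size_t i) {
    return d[i] == 0xFF && (d[i + 1] & 0xE0) == 0xE0;
}

// Write one message bit into the header that starts at byte offset 'hdr'.
void embedBit(std::vector<std::uint8_t>& d, std::size_t hdr, int bitIndex, bool bit) {
    int b = bitIndex - 1;                               // 0-based position in the header
    std::size_t byte = hdr + b / 8;
    std::uint8_t mask = std::uint8_t(0x80 >> (b % 8));  // bit 1 is the MSB of byte 0
    if (bit) d[byte] |= mask;
    else     d[byte] &= std::uint8_t(~mask);
}

Extraction reads the same bit positions back in the same frame order, so up to 7 bits per frame are available, matching the count above.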
I recently completed my final-year thesis on this topic (steganography on images using LSB and parity coding, and on MP3 using unused header bits). The source code from my thesis (written in C#) is a runnable steganography program. I hope it can help: http://www.mediafire.com/download/aggg33i5ydvgrpg/ThesisSteganography%2850900483%29.rar
PS: I'm Vietnamese, so there may be some errors in my sentences!

Efficiently read data from a structured file in C/C++

I have a file as follows:
The file consists of 2 parts: header and data.
The data part is separated into equally sized pages. Each page holds data for a specific metric, and multiple pages (which need not be consecutive) might be needed to hold the data for a single metric. Each page consists of a page header and a page body. The page header has a field called "next page" that is the index of the next page holding data for the same metric. The page body holds the real data. All pages have the same fixed size (20 bytes for the header and 800 bytes for the body; if the amount of data is less than 800 bytes, the rest is zero-filled).
The header part consists of 20,000 elements, each holding information about a specific metric (point 1 -> point 20000). An element has a field called "first page" that is the index of the first page holding data for that metric.
The file can be up to 10 GB.
Requirement: reorder the data of the file in the shortest time possible, that is, pages holding data for a single metric must be consecutive, ordered from metric 1 to metric 20000 alphabetically (and the header part must be updated accordingly).
An apparent approach: for each metric, read all of its data (page by page) and write it to a new file. But this takes a lot of time, especially the reading from the file.
Is there a more efficient way?
One possible solution is to create an index of the file, containing the page number and the page's metric, which is what you need to sort on. Create this index as an array, so that the first entry (index 0) corresponds to the first page, the second entry (index 1) to the second page, and so on.
Then you sort the index by the metric.
Once sorted, you end up with a new array whose entries give the new first, second, etc. pages, and you read the input file and write the output file in the order of the sorted index.
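A hedged sketch of that approach, using the sizes from the question (20-byte page header + 800-byte body); how the index is built by walking the header part's "first page"/"next page" chains is assumed and left to the caller:

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

const std::size_t kPageSize = 20 + 800;   // page header + body, from the question

struct PageRef {
    std::uint32_t pageNo;   // page's position in the input file
    std::string   metric;   // metric the page belongs to
};

// 'index' holds one entry per page, in chain order per metric;
// 'dataStart' is the byte offset where the data part begins.
void reorder(const std::string& inPath, const std::string& outPath,
             std::vector<PageRef> index, std::streamoff dataStart) {
    // Sort by metric; stable_sort keeps each metric's pages in chain order.
    std::stable_sort(index.begin(), index.end(),
                     [](const PageRef& a, const PageRef& b) { return a.metric < b.metric; });
    std::ifstream in(inPath, std::ios::binary);
    std::ofstream out(outPath, std::ios::binary);
    std::vector<char> page(kPageSize);
    for (const PageRef& p : index) {
        in.seekg(dataStart + std::streamoff(p.pageNo) * std::streamoff(kPageSize));
        in.read(page.data(), kPageSize);
        // (the "next page" field and the header part's "first page" entries
        //  would be rewritten here, since pages are now consecutive)
        out.write(page.data(), kPageSize);
    }
}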
An apparent approach: for each metric, read all of its data (page by page) and write it to a new file. But this takes a lot of time, especially the reading from the file.
Is there a more efficient way?
Yes. After you get a working solution, measure its efficiency, then decide which parts you wish to optimize. What and how you optimize will depend greatly on the results you get there (i.e. where your bottlenecks are).
A few generic things to consider:
if you have one set of steps that reads the data for a single metric and moves it to the output, you should be able to parallelize that (have 20 sets of steps running instead of one).
a 10 GB file will take a while to process regardless of what hardware you run the code on (conceivably you could run it on a supercomputer, but I am ignoring that case). You / your client may accept a slower solution if it displays its progress / shows a progress bar.
do not use string comparisons for sorting.
Edit (addressing comment)
Consider performing the read as follows (a sketch of such a worker pool follows the list):
create a list of block offsets for the blocks you want to read
create a pool of worker threads of fixed size (for example, 10 workers)
each idle worker receives the file name and a block offset, creates its own std::ifstream instance on the file, reads the block, and returns it to a receiving object (and then requests another block offset, if any are left)
read pages should be passed to a central structure that manages/stores the pages
Also consider managing the memory for the blocks separately (for example, preallocating chunks of multiple blocks when you know the number of blocks to be read).
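A hedged sketch of such a pool (the central structure is reduced to a mutex-guarded vector, and each worker opens its own std::ifstream so no stream state is shared):

#include <fstream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

const std::size_t kBlockSize = 820;   // page header + body, as in the question

struct Block {
    std::streamoff offset;
    std::vector<char> bytes;
};

void readBlocks(const std::string& path,
                const std::vector<std::streamoff>& offsets,
                std::vector<Block>& out, unsigned workers = 10) {
    std::mutex m;            // guards 'next' and 'out'
    std::size_t next = 0;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            std::ifstream in(path, std::ios::binary);  // one stream per worker
            for (;;) {
                std::streamoff off;
                {
                    std::lock_guard<std::mutex> lk(m);
                    if (next == offsets.size()) return;   // nothing left to read
                    off = offsets[next++];
                }
                Block b{off, std::vector<char>(kBlockSize)};
                in.seekg(off);
                in.read(b.bytes.data(), kBlockSize);
                std::lock_guard<std::mutex> lk(m);
                out.push_back(std::move(b));   // hand over to the central store
            }
        });
    }
    for (std::thread& t : pool) t.join();
}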
I first read the header part, then sort the metrics in alphabetical order. For each metric in the sorted list I read all of its data from the input file and write it to the output file. To remove the bottleneck in the reading step, I used memory mapping. The results showed that with memory mapping the execution time for a 5 GB input file was reduced 5-6 times compared with not using it. This temporarily solves my problem; however, I will also consider the suggestions of #utnapistim.

How to determine .mp3 bit rate without downloading it?

I have a list of .mp3 files over the web and I would like to get the highest quality file.
In multimedia files, quality corresponds to bit rate.
The bit rate itself should be found in the file's headers. If not, the length of the audio track could be used instead (bit rate = file size / track length).
These things would be easy if I had the files locally, but I would like to fetch this information over HTTP and determine which file has the highest quality.
Can I get an audio track's length out of the HTTP headers? If not, is it possible to fetch only the bytes that describe the length/bit rate instead of downloading the whole file?
I'm writing the code in Python, but the question is quite general, so I'm not tagging it as a Python question.
Assuming the remote server behaves nicely, you could issue a HEAD request for the file and check the contents of the Content-Length header field. It doesn't give you the track length or bit rate, but you do get the size of the file.
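For example, with libcurl (any HTTP client will do; the URL is hypothetical, and CURLOPT_NOBODY is what turns the transfer into a HEAD request):

#include <curl/curl.h>
#include <iostream>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/song.mp3"); // hypothetical
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   // HEAD: headers only, no body
    if (curl_easy_perform(curl) == CURLE_OK) {
        curl_off_t length = -1;                   // filled from Content-Length
        curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &length);
        std::cout << "Content-Length: " << length << " bytes\n";
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
}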
EDIT: MP3s consist of multiple frames, each of which can have a different bit rate (VBR). The track length is calculated from the bit rates of the individual frames, rather than the length itself being stored. If you want the bit rate reliably, you would need to get the whole file and read the bit rate of every frame. It may be possible to grab the first few KB of the file and read the bit rate from the first frame header, but that frame is not always at the same position in the file (e.g. due to the position of an ID3 tag etc.).
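If you want to try the partial fetch anyway, a hedged sketch: request the first 64 KB with a Range header, skip a leading ID3v2 tag if present, find the frame sync, and look up the 4-bit bit-rate index in the MPEG-1 Layer III table (other MPEG versions/layers use different tables, which this sketch ignores; the URL is hypothetical):

#include <curl/curl.h>
#include <cstdint>
#include <iostream>
#include <string>

static size_t collect(char* p, size_t s, size_t n, void* userdata) {
    static_cast<std::string*>(userdata)->append(p, s * n);
    return s * n;
}

// MPEG-1 Layer III bit rates in kbit/s, indexed by the 4-bit bitrate field.
const int kBitrate[16] = {0,32,40,48,56,64,80,96,112,128,160,192,224,256,320,-1};

int firstFrameBitrate(const std::string& d) {
    std::size_t i = 0;
    if (d.size() > 10 && d.compare(0, 3, "ID3") == 0) {  // skip ID3v2 tag
        std::size_t tag = ((d[6] & 0x7F) << 21) | ((d[7] & 0x7F) << 14)
                        | ((d[8] & 0x7F) << 7)  |  (d[9] & 0x7F);
        i = 10 + tag;
    }
    for (; i + 2 < d.size(); ++i)    // find the 11-bit frame sync
        if ((unsigned char)d[i] == 0xFF && ((unsigned char)d[i + 1] & 0xE0) == 0xE0)
            return kBitrate[((unsigned char)d[i + 2] >> 4) & 0x0F];
    return -1;
}

int main() {
    std::string head;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/song.mp3"); // hypothetical
    curl_easy_setopt(curl, CURLOPT_RANGE, "0-65535");      // first 64 KB only
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &head);
    if (curl_easy_perform(curl) == CURLE_OK)
        std::cout << "first-frame bit rate: " << firstFrameBitrate(head) << " kbit/s\n";
    curl_easy_cleanup(curl);
}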

c++: Writing to a file

I have a couple of questions.
First, I am doing HTTP byte-range requests and then writing the received data to a file.
Sometimes I have to fetch a 1K block over HTTP and write it to the file. The problem is that the next request after the 1K request may start at byte 100, and in that case I want to write into the 1K file, overwriting it from byte 100 onward. How can I overwrite from a specific offset in the file?
Secondly, how do I create a file with some data already in it? For example, I want to put data into the file starting from, let's say, the 500th byte. I do not care about the first 500 bytes (they could be any garbage data), but it is important that the file has the correct size for the code to work.
Thanks
There's some reference material and sample code for ofstream's seekp at http://www.cplusplus.com/reference/iostream/ostream/seekp/
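To sketch both operations (file names are made up; note that std::fstream opened with in|out keeps the existing contents, whereas a plain ofstream open would truncate the file):

#include <fstream>

int main() {
    // 1. Overwrite from a specific offset in an existing file.
    std::fstream f("chunk.bin", std::ios::in | std::ios::out | std::ios::binary);
    f.seekp(100);                      // position the put pointer at byte 100
    const char patch[] = "new bytes";
    f.write(patch, sizeof patch - 1);  // overwrites 9 bytes starting at offset 100
    f.close();

    // 2. Create a file of a given size without caring about the leading content.
    std::ofstream g("presized.bin", std::ios::binary);
    g.seekp(499);    // jump past the end...
    g.put('\0');     // ...and write one byte; the file is now 500 bytes long
    // Bytes 0..498 are filler (zeros on common platforms); real data can then
    // be written from byte 500 onward with further seekp/write calls.
}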