HDR stacking of images in Halcon

I have multiple images that I have to merge to a single HDR image in Halcon.
I've been looking online and in the manual, but I was not able to find any HDR command.
How can this be done in Halcon?

Check out the example program "create_high_dynamic_range_image.dev".
I can't remember which version this was released in. If it is not in your version then get a trial license of the newest version of Halcon and test it out.


C++: Writing images to video file independent of installed codecs

I'm trying to save a series of images (16-bit grayscale PGM) as video. The video has to be compressed. My program has to be independent of the codecs installed on the system.
My initial idea was to use OpenCV for this; unfortunately, it depends on the codecs installed on the system (unless I'm missing something).
I feel like there should be a way to compile an encoder (H.264 or similar would be perfect) into the program, or to redistribute it as a DLL with my program. I just can't find any good, up-to-date guidance/examples.
I've been swimming in the deep vast ocean of AV encoding for a couple of days and would really appreciate it if someone could point me to a right direction.
Thanks.
As Ben suggests, it would be a good idea to use an established library in your code.
FFmpeg is probably the most widely used at the moment - it can be used on the command line, through a 'wrapper' program, or the libraries it is built from can be linked directly.
I think the last case sounds like the one you want - you can find documentation here:
https://trac.ffmpeg.org/wiki/Using%20libav*
Note the comment about disambiguation at the start - this is important to understand, as the Libav project and the libav* libraries (which are what you want) are different things.
There are also some notes in this answer on how to build it into a program:
FFMpeg sample program
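To give a feel for the libav* route, here is a minimal sketch of encoding frames to H.264 with libavcodec, using the current send/receive API (modeled on FFmpeg's own encode_video example). The frame size, frame count, and the step that fills the pixel data are placeholder assumptions you would replace with your own; it also assumes an FFmpeg build with libx264 enabled.

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main() {
    // Find the H.264 encoder and configure a context for it.
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width = 640;                       // assumed frame size
    ctx->height = 480;
    ctx->time_base = AVRational{1, 25};     // 25 fps
    ctx->framerate = AVRational{25, 1};
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;      // 16-bit gray must be converted first
    avcodec_open2(ctx, codec, nullptr);

    AVFrame* frame = av_frame_alloc();
    frame->format = ctx->pix_fmt;
    frame->width = ctx->width;
    frame->height = ctx->height;
    av_frame_get_buffer(frame, 0);

    AVPacket* pkt = av_packet_alloc();
    std::FILE* out = std::fopen("out.h264", "wb"); // raw stream; mux with libavformat for .mp4

    for (int i = 0; i < 100; ++i) {         // placeholder frame count
        av_frame_make_writable(frame);
        // ... fill frame->data[0..2] with your converted pixels here ...
        frame->pts = i;
        avcodec_send_frame(ctx, frame);
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            std::fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);
        }
    }
    avcodec_send_frame(ctx, nullptr);       // flush delayed packets
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        std::fwrite(pkt->data, 1, pkt->size, out);
        av_packet_unref(pkt);
    }

    std::fclose(out);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
}

Since the encoder is statically configured and linked from your own FFmpeg build, this sidesteps whatever codecs the target system happens to have installed.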

Pvr flipped in cocos2d version 3.2 : dilemma when porting from 2.1

I am currently porting a game from version 2.1 to 3.2 of cocos2d. We have over 3600 pvr.gz files that are NOT flipped, i.e., they were produced and working under 2.1. We tested the flipY option for PVR images in TexturePacker, and sure enough the code and animations work fine.
Now, we could go and do this manually for all our files, but ... is there a way I could convince version 3.2 to use the 'old' PVR rules? This is a port; all our assets have been produced already.
Alternatively, is there any command-line utility/tool I could use to script the PVR conversion recursively over my Resources folder? :)
Any help greatly appreciated. TIA.
OK, many thanks to Scott Lembcke of cocos2d for pointing me in the right direction. For posterity, I hope this helps someone out there. PVRTexToolCLI did the job for me (from Imagination Technologies, imgtec.com). It is free to download and free to use; you must register on their site.
Here is the syntax (for this game's asset strategy):
PVRTexToolCLI -i old_magie_cleanse-hd.pvr -o magie_cleanse-hd.pvr -flip y,flag -f r8g8b8a8 -legacypvr
Notes:
I am still using the PVR v2 format, just because I like to go one change at a time. After I run my tests, I will switch all textures to PVR v3. Cocos2d 3.2 supports PVR v2 and PVR v3, but who knows for how long.
The format is rgba8888 in our case; you will have to figure out the format of your own files. You can use the PVRTexToolGUI and drag one of your existing textures into it to figure out the current encoding/compression.
If you have compressed textures, use "-q pvrtcbest" to prevent artefacts. It is slow and hogs tons of CPU, but worth it.
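For the recursive part, an untested sketch: a plain find over the Resources folder should do it. Note that this writes each file in place, so run it on a copy of your assets first, and adjust the format flags as above.

find Resources -name '*.pvr' -exec PVRTexToolCLI -i {} -o {} -flip y,flag -f r8g8b8a8 -legacypvr \;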

How to debug image processing projects on eclipse?

Here at work we have plenty of experience developing image processing applications for the TI DSP platform using an old version of Code Composer (CC 3.3). We are transitioning to the ARM platform using Eclipse (flavored and distributed by Xilinx).
In the old Code Composer, a feature we used a lot was an IDE widget that could display a certain area of memory as a bitmap image. It had a properties grid where you would define things like size, pixel format and stride orientation to properly interpret the blob of memory as a picture. The tool also had some nice features like zooming, a grayscale counter, line profiles, histograms, etc.
Is there something similar for Eclipse? If not, how difficult would it be to create one? I mean, how difficult is it to create a barebones plugin for Eclipse that draws information from a location of memory over a JTAG interface?
GDB can call Python scripts. If GDB is what you use for debugging, this approach works well: read the image data from the inferior in Python, then use OpenCV or PIL or any image library to show the image.
Updated on 2 Apr 2014:
Let 'data' be the pointer to the image.
Inside GDB, run "python data = gdb.parse_and_eval("data")". This gives Python access to the inferior's memory pointed to by data.
E.g., "python print(data[35])" will show the 35th element of 'data'.
Since the image data can be read in Python, it can be displayed or analysed.
The following links will help in getting things done:
http://www.cinsk.org/wiki/Debugging_with_GDB:_How_to_create_GDB_Commands_in_Python
https://sourceware.org/gdb/onlinedocs/gdb/Python.html#Python
Hope this helps.
Eclipse has no such feature; personally, I work in a similar environment (image processing on a DaVinci architecture plus the Eclipse IDE). Writing an Eclipse plugin is not an overly complicated task - there are loads of tutorials (like this one) - but maintaining it may become one. We have reference code in Qt, so we don't need such features. But if you really want something similar, I guess you can always do a memory dump to a binary file and interpret that as an image. As long as the format isn't anything special (e.g., you are dumping JPEG data rather than raw pixels), it should be fine.
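As a rough illustration of that memdump route, here is a minimal sketch that wraps a raw dump in an OpenCV Mat; the file name and frame geometry are assumptions you would adapt to your dump.

#include <opencv2/opencv.hpp>
#include <fstream>
#include <vector>

int main() {
    const int w = 640, h = 480;             // assumed width/height of the dumped frame
    std::ifstream f("memdump.bin", std::ios::binary);
    std::vector<unsigned char> buf(w * h);  // 8-bit single-channel pixels assumed
    f.read(reinterpret_cast<char*>(buf.data()), buf.size());
    cv::Mat img(h, w, CV_8U, buf.data());   // wrap the buffer, no copy
    cv::imshow("memdump", img);
    cv::waitKey(0);
}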
That is a great idea for static objects, but what if your memory is created temporarily on the heap or on the stack? Isn't it easier in dynamic environments (using OpenCV) just to put three lines of code wherever you need to see some buffer's content as an image, and use Qt to do image scaling and histogramming?
Mat I(h, w, CV_8U, buf);   // wrap the raw buffer as an 8-bit image, no copy
imshow("winname", I);      // show the Mat, not the raw pointer
waitKey(-1);               // block until a key is pressed
I heard that the latest version of the OpenCV highgui module has the options you talked about, but I personally see only very limited use for it in a dynamic programming environment. And yet I visualize data all the time. Moreover, I like to interact with my images, for example rotate them in 3D, click and get values, or mark a certain segment. I guess it will be hard to do this with specific plug-ins.
I just found this; haven't tried it yet:
https://github.com/cuekoo/GDB-ImageWatch
If Eclipse is calling GDB, maybe there is a way...

Convert Movie to OpenNI *.oni video

The Kinect OpenNI library uses a custom video file format to store videos that contain RGB+D information. These videos have the extension *.oni. I am unable to find any information or documentation whatsoever on the ONI video format.
I'm looking for a way to convert a conventional RGB video to a *.oni video. The depth channel can be left blank (i.e., zeroed out). For example purposes, I have an MPEG-4 encoded .mov file with audio and video channels.
There are no restrictions on how this conversion must be made, I just need to convert it somehow! ImageMagick, FFmpeg, and MEncoder are all OK, as is custom conversion code in C/C++ etc.
So far, all I can find is one C++ conversion utility in the OpenNI sources. From the looks of it, though, this converts from one *.oni file to another. I've also managed to find a C++ script by a PhD student that converts images from an academic database into a *.oni file. Unfortunately the code is in Spanish, which is not one of my native languages.
Any help or pointers much appreciated!
EDIT: As my use case is a little odd, some explanation may be in order. The OpenNI drivers (in my case I'm using the excellent Kinect for Matlab library) allow you to specify a *.oni file when creating the Kinect context. This lets you emulate having a real Kinect attached that is receiving video data - useful when you're testing/developing code (you don't need to have the Kinect attached to do this). In my particular case, we will be using a Kinect in the production environment (process control in a factory environment), but during development all I have is a video file :) Hence wanting to convert to a *.oni file. We aren't using the depth channel at the moment, hence not caring about it.
I don't have a complete answer for you, but take a look at the NiRecordRaw and NiRecordSynthetic examples in OpenNI/Samples. They demonstrate how to create an ONI with arbitrary or modified data. See how MockDepthGenerator is used in NiRecordSynthetic -- in your case you will need MockImageGenerator.
For more details you may want to ask in the openni-dev google group.
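For what it's worth, here is a very rough sketch of the shape this takes with the OpenNI 1.x C++ wrapper, reconstructed from memory of those samples - treat every call as an assumption and verify it against NiRecordSynthetic before relying on it.

#include <XnCppWrapper.h>

int main() {
    xn::Context context;
    context.Init();

    // Create a mock image node that we can feed frames into by hand.
    xn::MockImageGenerator mockImage;
    context.CreateMockNode(XN_NODE_TYPE_IMAGE, "image", mockImage);

    // Attach a recorder that writes the mock node's frames to an ONI file.
    xn::Recorder recorder;
    recorder.Create(context);
    recorder.SetDestination(XN_RECORD_MEDIUM_FILE, "out.oni");
    recorder.AddNodeToRecording(mockImage, XN_CODEC_JPEG);

    // For each decoded movie frame: populate an xn::ImageMetaData with
    // RGB24 pixels, push it with mockImage.SetData(imageMD), then call
    // recorder.Record() to commit the frame.

    context.Release();
}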
Did you look into this command and its associated documentation?
NiConvertXToONI --
NiConvertXToONI opens any recording, takes every node within it, and records it to a new ONI recording. It receives both the input file and the output file from the command line.

Analysing audio data for attributes at time intervals

I've been wanting to play around with audio parsing for a while now but I haven't really been able to find the correct library for what I want to do.
I basically just want to parse through a sound file and get amplitudes/frequencies and other relevant information at certain intervals during the song (every 10 ms or so), so I can graph the data - for example, where the song speeds up a lot and where it gets really loud.
I've looked at OpenAL quite a bit, but it doesn't look like it provides this ability; beyond that, I have not had much luck finding out where to start. If anyone has done this or used a library which can do this, a point in the right direction would be greatly appreciated. Thanks!
For parsing and decoding audio files I had good results with libsndfile, which runs on Windows/OSX/Linux and is open source (LGPL license). This library does not support mp3 (the author wants to avoid licensing issues), but it does support FLAC and Ogg/Vorbis.
If working with closed source libraries is not a problem for you, then an interesting option could be the QuickTime SDK from Apple. This SDK is available for OSX and Windows and is free for registered developers (you can register as an Apple developer for free as well). With the QuickTime SDK you can parse all the file formats that the QuickTime Player supports, and that includes .mp3. The SDK gives you access to all the codecs installed by QuickTime, so you can read .mp3 files and have them decoded to PCM on the fly. Note that to use this SDK you have to have the free QuickTime Player installed.
As far as signal processing libraries go, I honestly can't recommend any, as I have written my own functions (for speech recognition, in case you are curious). There are a few open source projects that seem interesting listed on this page.
I recommend that you start simple, for example working on analyzing amplitude data, which is readily available from the PCM samples without having to do any processing. Being able to visualize the data is very useful, I have found Audacity to be an excellent visualization tool, and since it is open source you can build your own tests inside it.
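For example, here is a minimal sketch of per-window amplitude analysis with libsndfile, using the 10 ms windows you mentioned. The file name is an assumption, mono-vs-stereo handling is kept crude, and a real analysis would also FFT each window to get frequency content.

#include <sndfile.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    SF_INFO info{};                                 // must be zeroed before sf_open
    SNDFILE* sf = sf_open("song.wav", SFM_READ, &info);
    if (!sf) return 1;

    const sf_count_t win = info.samplerate / 100;   // 10 ms worth of frames
    std::vector<float> buf(win * info.channels);
    sf_count_t n;
    double t = 0.0;
    while ((n = sf_readf_float(sf, buf.data(), win)) > 0) {
        double sum = 0.0;
        for (sf_count_t i = 0; i < n * info.channels; ++i)
            sum += buf[i] * buf[i];                 // accumulate energy over the window
        std::printf("%8.3f s  RMS %.4f\n", t, std::sqrt(sum / (n * info.channels)));
        t += static_cast<double>(n) / info.samplerate;
    }
    sf_close(sf);
}

Plotting the per-window RMS over time gives you exactly the "where does it get loud" graph you described.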
Good luck!