I am loading CT scan NIfTI (.nii) images using XTK. I want to modify the default orientation of the images and flip them, for example up/down or left/right, before rendering (onShowtime). How can I get the default orientation, and how can I change it?
Currently, I am using xtk.js, since there is a bug in xtk_edge.js, possibly in the .nii parser (Uncaught RangeError: Invalid array length).
The latest version of XTK should now work fine.
https://github.com/xtk/X/commit/3a28de0294a94b41c44007545ab3a37df556d315
How did you generate this data? If you look at it in Slicer4 or Freeview, it looks the same as it does in XTK. We want to display the data in the radiological convention. (http://www.grahamwideman.com/gw/brain/orientation/orientterms.htm)
The rotate method does not exist anymore, but we might have to add it back at some point...
We also want to add the possibility of switching easily between the radiological and neurological conventions.
Thanks
If I have an af::array A already in GPU memory, what is the procedure to pass it through to OpenGL? My intention is to plot it as a line graph using OpenGL, but I'm not sure how to deal with the fact that the ArrayFire backend could be OpenCL, CUDA, or even the CPU. How does OpenGL take ownership of the array? I would prefer to avoid copying if possible.
On a separate note, if I use the built-in Forge library to plot graphs in ArrayFire, I find that if I follow the tutorials to plot a graph, then press and hold on the data in the plot and drag it somewhere else, the data plot moves from its original location and is no longer correctly aligned with the axis. Is there a way to correct this?
@HamzaAB
What you are asking about is known as GL-CUDA or GL-OpenCL interoperability, in case you don't already know about it, and it is the area Forge tries to address. You can look at the ComputeCopy.h header inside the Forge repository to understand how to do OpenGL interop.
Having said that, if you want to reset the transformation applied to the line plot, there is a way: while holding the left Ctrl key, hit the middle mouse button. That will reset the pan/zoom applied to the line plot. If you are facing some other issue with Forge's line plot, you may raise an issue on the project's issue tracker and we will try to look into it.
PS. I am one of the core developers of ArrayFire.
I wish to display an image through my projector via MATLAB. The projected image should be full sized, without any figure decorations (menu bar, the grey border that surrounds a figure, etc.).
Similar to a normal presentation, where the projector projects the complete slide or image, I want to do the same using MATLAB as my platform. Any thoughts or ideas? Can we access the projector from MATLAB? My first thought was to send data to the corresponding printer IP, but that doesn't seem to work :/
If you know the relevant C++ command or method to do this, please suggest a link or a library, so that I may try to import it into my MATLAB platform.
Reason for doing this: projector-camera calibration for photometric correction of my projector's display output.
Assuming your projector is set up as a second display, you can do something very simple: get the monitor position information and set the figure frame to the monitor size.
% plot the figure however you want
monitorFrames = get(0,'MonitorPositions');   % one row per monitor
secondMonitor = monitorFrames(2,:);
secondMonitor(3) = secondMonitor(3) - monitorFrames(1,3);  % correct the width for the primary monitor's extent
set(gcf,'Position',secondMonitor);
This will put the figure window onto the second monitor and have it take up the whole screen.
You can then use this to do whatever calibration you need, and shift this window around as necessary.
NOTE:
In no way am I saying this is the ideal solution. It is quick and dirty, and does not use any outside libraries.
UPDATE
If the above solution does not suit your specific needs, you could always save the plot as an image, then have your MATLAB script call a C++ program that opens the image and displays it full screen.
This is non-trivial. For Windows you can use the WindowAPI submission on the MATLAB File Exchange. With the WindowAPI function installed you can do
WindowAPI(FigH, 'Position', 'full');
For Mac and Linux you can use wrappers around OpenGL to do low-level plotting, but you cannot use standard MATLAB figure windows. One nice implementation is Psychtoolbox.
I'm using C++ with DirectShow to capture multiple images from the camera, much like OpenCV's camera capture does.
The camera I'm using has an AoI (area of interest) control, where I can move the x and y offsets to move the camera view. I tried searching the web, but I couldn't find anything.
Is there a way to control those values using DirectShow? There seems to be a way to change values like gain, but there's no mention of the AoI control.
This totally depends on the source DirectShow filter that came with your camera; you should check its documentation or view its property page with GraphEdit.
If you are using a standard filter, try switching to the one you got with your hardware, since the lack of results probably means there is no support for this option in the standard source filter.
I'm developing application that uses DirectShow combined with C++.
Its main goal is to capture users' faces.
I have reached the phase where I capture an image from my webcam.
The problem is that I need an intelligent renderer. In fact, I need the renderer to be able to detect a face and enclose it in a rectangle.
I'm wondering if there is a filter that I can use for this purpose,
or if I need to create my own customized filter.
If so, please enlighten me.
I need to understand how I can draw a rectangle in my renderer in the first place, because otherwise, even if I know the algorithm, I will not be able to apply it. This is my main goal now.
I have some ideas, but I don't know if they are correct. I think I need to grab each frame separately and modify some of its pixels, so that the rectangle appears drawn in the live render.
Have a look at OpenCV
A quick look inside and I found that it has built-in face detection (Haar cascade classifiers).
Making your own "filter" that works well is no easy job.
Are you talking about automatic detection of where there is something like a human face in the shot you have taken with the webcam? In that case, object detection algorithms like Viola-Jones might be interesting for you.
If a commercial package is an option, you can use the Montivision Filter SDK which includes filters that should do the job out of the box. They offer a free eval which is perfect for experimentation.
I'm doing an implementation of a path planning algorithm. I'd really like to be able to load a 2D "environment" in vector graphics (SVG) format, so that complex obstacles can be used. This would also make it fairly easy to overlay the path onto the environment and export another file with the result of the algorithm.
What I'm hoping to be able to do is use some kind of library in my collision test method so that I can simply ask, "is there an obstacle at x, y?" and get back true or false. And then of course I'd like to be able to add the path itself to the file.
A brief search and a couple of downloads left me with libraries that either create SVGs or render them, but none really gave me what I need. Am I better off just parsing the XML and hacking through everything manually? That seems like a lot of wasted effort.
1. This may be a bit heavy-handed, but Qt has a really great set of tools called the Graphics View Framework. Using these tools, you can create a bunch of QGraphicsItems (polygons, paths, etc.) in a QGraphicsScene, and then query the scene by giving it a position. Using this, you'll never actually have to render the scene out to a raster bitmap.
http://doc.trolltech.com/4.2/graphicsview.html, http://doc.trolltech.com/4.2/qgraphicsscene.html#itemAt
2. Cairo has tools to draw all sorts of shapes as well, but I believe you'll have to render the whole image and then check the pixel values. http://cairographics.org/
The SVG specification includes DOM interfaces for intersection testing:
http://www.w3.org/TR/SVG/struct.html#_svg_SVGSVGElement__getEnclosureList
http://www.w3.org/TR/SVG/struct.html#_svg_SVGSVGElement__getIntersectionList
For these methods to work, all "obstacles" (which can be groups of primitive elements, using the <g> element) must be labelled as targets of pointer events.
These methods are bounding-box based, however, so they may not be sophisticated enough for your requirements.
Thanks for the responses. I didn't manage to get anything working in Qt (it's massive!) or Cairo, and I ended up going with PNGwriter, which does pretty much what I wanted except, of course, that it reads and writes PNGs instead of vector graphics. The downside here is that my coordinates must be rounded off to whole pixels. Maybe I'll continue to look into vector graphics, but this solution is satisfactory for this project.