Does anyone here know how I can 'pixel-perfect' format certain rows in a Matrix (not only the value, but the full row)? The example uses some fake data, but the real-life scenario is a Chart of Accounts nine levels deep. I want to add custom formatting for every level, from a light background color down to a darker one, for a better user experience.
Thanks!
I'm trying to build my report in Power BI; on one tab there are 3 matrices. When I filter by one of the fields, all my matrices shrink and leave some big white holes.
Is there a feature or tool that lets me place one matrix under another whenever they are resized, to avoid those huge white holes?
I searched the Power BI documentation but did not find anything that solves my problem.
[Figure: (a) what I have, (b) what I get, (c) what I want]
I have a simple vector graphic in Inkscape consisting of a rectangle, filled points, and stars. The axis ranges are not really suitable for a publication (the height is approximately 3 times the width of the picture), so I want to rescale the picture. However, I do not have the raw data, so I cannot simply plot it again. How can I rescale my graphic (see figure (a)) so that the x-range is wider (see figure (c)) without introducing distortions (see figure (b))? In the end I want to create a PDF file from it.
Any ideas on that?
Thanks for your help.
You can try to do it in 2 steps, using the Object -> Transform tool (Shift-Ctrl-M).
First, select everything, and with the transform tool select the Scale tab, and scale horizontally by, say, 300%. All figures will be distorted.
Now, deselect the rectangle, and scale horizontally again by 33.3%, but first check the "Apply to each object separately" option. This will undo the distortion (but not the translation) of each object.
Note that 300% followed by 33.3% should leave the individual objects with the same size.
Documentation here.
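The geometry behind the two-step trick can be sketched with plain coordinates: a group scale stretches every point about a common origin, while a per-object scale shrinks each object about its own centre, restoring its shape but keeping the stretched spacing. The 1-D "objects" below are made-up values, not anything from Inkscape:

```python
def scale_group(points, factor):
    """Scale every x-coordinate about the group origin (x = 0)."""
    return [x * factor for x in points]

def scale_about_center(points, factor):
    """Scale an object's points about its own centre, like Inkscape's
    'Apply to each object separately' option."""
    c = sum(points) / len(points)
    return [c + (x - c) * factor for x in points]

# Two unit-wide "objects" centred at x = 1 and x = 5.
obj_a = [0.5, 1.5]
obj_b = [4.5, 5.5]

# Step 1: everything stretched to 300%; shapes distort.
obj_a = scale_group(obj_a, 3.0)   # [1.5, 4.5]
obj_b = scale_group(obj_b, 3.0)   # [13.5, 16.5]

# Step 2: each object shrunk to 1/3 about its own centre;
# widths return to 1 while the centres keep their stretched positions.
obj_a = scale_about_center(obj_a, 1 / 3)
obj_b = scale_about_center(obj_b, 1 / 3)

print(obj_a, obj_b)
```

As the answer notes, 300% followed by 33.3% is only approximately identity; scaling by exactly 1/3, as above, restores the original widths.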
I was unable to find literature on this.
The question is: given a photograph containing a well-known object (say, something printed for this purpose), how well does it work to use that object to infer the lighting conditions, as a method of color-profile calibration?
For instance, say we print out the rainbow peace flag and then photograph it under various lighting conditions with a consumer-grade flagship smartphone camera (say, an iPhone 6 or Nexus 6). The underlying question is whether using known references within the image is a potentially good technique for calibrating the colors throughout the image.
There are of course a number of issues, such as the variance of lighting conditions across different regions of the photograph, and which wavelengths the device is capable of differentiating even in the best circumstances, but let's set those aside.
Has anyone worked with this technique or seen literature regarding it? If so, can you point me in the direction of some findings?
Thanks.
I am not sure if this is a standard technique, but one simple way to calibrate your color channels would be to learn a regression model between the colors observed in the image and their actual colors. If you have some shots of known images, you should have sufficient data to learn the transformation with a neural network (or a simpler model such as linear regression if you like, though a NN would be able to capture multi-modal mappings). You can even do patch-based regression with a NN on small patches (say 8x8 or 16x16) if you need to learn some spatial dependencies between intensities.
This should be possible, but you should pay attention to the way your known object reacts to light. Ideally it should be non-glossy, have identical colours when pictured from an angle, be totally non-transparent, and reflect all wavelengths outside the visible spectrum to which your sensor is sensitive (IR, UV, no filter is perfect) uniformly across all different coloured regions. Emphasis added because this last one is very important and very hard to get right.
However, the main issue you have with a coloured known object is: what are the actual colours of the different regions in RGB(*)? So in this way you can determine the effect of different lighting conditions relative to each other, but never relative to some ground truth.
The solution: use a uniformly white, non-reflective, non-transparent surface: a sufficiently thick sheet of white paper should do just fine. Take a non-overexposed photograph of the sheet in your scene, and you know:
R, G and B should be close to equal
R, G and B should be nearly 255.
From those two facts and the R, G and B values you actually get from the sheet, you can determine any shift in colour and brightness in your scene. Assume that black is still black (usually a reasonable assumption) and use linear interpolation to determine the shift experienced by pixels coloured somewhere between 0 and 255 on any of the axes.
(*) or other colourspace of your choice.
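The black-stays-black interpolation above amounts to a per-channel linear stretch mapping the sheet's measured colour to pure white. A minimal sketch, with a made-up measurement for the sheet:

```python
# Measured colour of the white sheet in the shot (hypothetical values).
sheet_rgb = (230, 204, 182)

def correct(pixel, white=sheet_rgb):
    """Linearly stretch each channel so the sheet's colour maps to
    (255, 255, 255), with black (0, 0, 0) held fixed."""
    return tuple(min(255, round(c * 255 / w)) for c, w in zip(pixel, white))

print(correct(sheet_rgb))        # (255, 255, 255): the sheet itself
print(correct((115, 102, 91)))   # a mid-tone with the same colour cast
```

Any colour with the same cast as the sheet comes out neutral; the `min(255, ...)` clamp just guards against pixels brighter than the sheet.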
Any idea how I can get the smaller blobs belonging to the same vehicle count as 1 vehicle? Due to background subtraction, in the foreground mask, some of the blobs belonging to a vehicle are quite small, and hence filtering the blobs based on their size won't work.
Try filtering things based on colorDistance(), comparing the mean color of the blobs in the image containing the vehicle against a control image of the background without the car in it. The SimpleCV docs have a tutorial specifically on this topic. That said, it may not always work as expected. Another possibility (it just occurred to me) is to sum up the areas of the blobs of interest and check whether that sum is over a given threshold, rather than testing any one blob by itself.
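The area-summing idea can be sketched in plain Python without depending on SimpleCV: group nearby blob fragments, then compare each group's total area against the threshold. The blob coordinates, the distance cutoff, the area threshold, and the greedy grouping rule are all assumptions for illustration:

```python
MIN_VEHICLE_AREA = 500   # assumed minimum total area, in pixels
MAX_GAP = 40             # assumed max centre distance between fragments

def group_blobs(blobs, max_gap=MAX_GAP):
    """Greedily merge blobs whose centres lie within max_gap of a
    blob already in the group. Each blob is an (x, y, area) tuple."""
    groups = []
    for x, y, area in blobs:
        for g in groups:
            if any(abs(x - gx) <= max_gap and abs(y - gy) <= max_gap
                   for gx, gy, _ in g):
                g.append((x, y, area))
                break
        else:
            groups.append([(x, y, area)])
    return groups

def count_vehicles(blobs):
    """Count groups whose summed area clears the threshold."""
    return sum(1 for g in group_blobs(blobs)
               if sum(a for _, _, a in g) >= MIN_VEHICLE_AREA)

# Three fragments of one vehicle plus an isolated noise blob:
blobs = [(100, 50, 300), (120, 60, 150), (135, 55, 120), (400, 200, 80)]
print(count_vehicles(blobs))  # 1
```

No single fragment passes the 500-pixel threshold, but their sum (570) does, while the lone noise blob is rejected. The greedy grouping is deliberately simple; a real pipeline might use morphological dilation or connected components on the mask instead.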
I mean, is there a way to use the current slicer to show a sectional slice (not aligned with the canonical axes) of the 3D image data? Or better, a sectional slice of a predefined size.
If this doesn't exist, how can I contribute it?
Regards,
P
Currently this is not possible, but you are very welcome to contribute.
The parsers for the NRRD and MGH/MGZ formats store a 3D array of the image. All X.slices are configured using a front and an up vector. Now we only have to link the X.slice to grab the right values from the 3D array, and then create a widget so it is possible to reslice/reformat on the fly. Sounds good?
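As a rough illustration of what "grab the right values from the 3D array" could look like, here is a Python/NumPy sketch of sampling an oblique slice from a volume given a centre plus up and front vectors. The function name, the fixed slice size, and the nearest-neighbour lookup are my assumptions for the sketch, not XTK's actual (JavaScript) API:

```python
import numpy as np

def oblique_slice(volume, center, up, front, size=32):
    """Sample a size x size oblique slice through `volume`, centred at
    `center`, with `front` as the plane normal and `up` in the plane."""
    up = np.asarray(up, float)
    front = np.asarray(front, float)
    up /= np.linalg.norm(up)
    front /= np.linalg.norm(front)
    right = np.cross(up, front)            # second in-plane axis
    right /= np.linalg.norm(right)
    up_in_plane = np.cross(front, right)   # re-orthogonalised "up"

    out = np.zeros((size, size), volume.dtype)
    half = size // 2
    for i in range(size):
        for j in range(size):
            # Walk the in-plane basis and round to the nearest voxel.
            p = (np.asarray(center, float)
                 + (i - half) * up_in_plane + (j - half) * right)
            idx = np.round(p).astype(int)
            if np.all(idx >= 0) and np.all(idx < volume.shape):
                out[i, j] = volume[tuple(idx)]
    return out

vol = np.arange(64 ** 3, dtype=float).reshape(64, 64, 64)
slc = oblique_slice(vol, center=(32, 32, 32), up=(0, 1, 1), front=(1, 0, 0))
print(slc.shape)  # (32, 32)
```

A production version would use trilinear interpolation instead of rounding and map the sampled grid back into texture coordinates for the widget, but the core is just this change of basis over the stored 3D array.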