I can get 2 levels of PivotViewerItemTemplate to work just fine, but not three.
If I set one template at MaxWidth=130, the next at MaxWidth=400, and a third with no MaxWidth, the second level starts transitioning into the third at about 170 pixels and is no longer visible at all at only 280 pixels. I expected to see the second level until it was 400 pixels wide.
Any tips on what I'm doing wrong here?
TIA
Your MaxWidth values need to be powers of two: 32, 64, 128, etc. You can then have as many levels as there are powers :)
I've written a post explaining it in more detail here http://www.rogernoble.com/2012/04/02/picking-maxwidth-for-pivotviewer-semantic-zoom
Usually the anchor sizes are set to {32, 64, 128, 256, 512}. However, in my dataset, I don't have boxes as large as 512 x 512, so I would like to use only 4 anchor scales, i.e. {32, 64, 128, 256}. How would this be possible, since the FPN has 5 levels?
To elaborate, consider the following image. (It's from an article about detectron2)
Decreasing the number of anchors isn't really straightforward, since removing a scale involves removing a stage of the ResNet (a ResNet block) from being used. Both the BoxHead and the RPN expect P2 to P5 (the RPN expects res5/P6 as well). So my question is: if I were to remove an anchor scale (in my case 512 x 512, since my images are only 300 x 300 and objects won't exceed that size), which ResNet block should be ignored? Should the high-resolution block (res2) be ignored, or should the low-resolution one (res5) be removed?
Or is it that the structure does not allow removal of an anchor scale and 5 scales must be used?
You can remove the anchor scale, but be aware that you'll also need to modify your RPN and your BoxHead. P2 will have the largest dimension (512 in your case).
But maybe think about keeping all of the levels and changing only the anchor sizes, running from 16 up to 256. I suspect this can save you a lot of reorganization of your model, and it will also improve detection of smaller objects.
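If it helps, here's a minimal sketch of that second suggestion using detectron2's config API (assuming the standard Faster R-CNN/FPN setup, with one anchor size per feature level):
from detectron2.config import get_cfg

cfg = get_cfg()
# Keep all five FPN levels (P2-P6), but shrink the anchor sizes so the
# largest anchor fits 300 x 300 images: 16 up to 256 instead of the
# default 32 up to 512 (one size per level).
cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[16], [32], [64], [128], [256]]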
I'm attempting to get the width and height of the primary monitor via GetSystemMetrics. However, calling:
GetSystemMetrics(SM_CYVIRTUALSCREEN)
returns a value of 1018 rather than the actual vertical resolution, which is 1080.
Now, I thought maybe I misunderstood the docs, so I tried calling
SystemParametersInfo(SPI_GETWORKAREA)
to see if maybe that was actually the one that gave the full screen. But it does what it describes and returns the working area of the screen (total height minus taskbar height), which in my case is 1040 pixels (1080 - 40 for the taskbar).
So, I'm a bit stumped. Where is 1018 coming from? What's causing it to be off by 62 pixels?
GetSystemMetrics(SM_CYSCREEN) should do the job.
As per MSDN, this is equal to GetDeviceCaps(hdcPrimaryMonitor, VERTRES), which might be what you really want.
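For a quick check of both metrics, here's a small sketch using Python's ctypes (Windows only; the constants come from WinUser.h):
import ctypes

user32 = ctypes.windll.user32
SM_CXSCREEN, SM_CYSCREEN = 0, 1                   # primary monitor resolution
SM_CXVIRTUALSCREEN, SM_CYVIRTUALSCREEN = 78, 79   # bounding box of all monitors

print(user32.GetSystemMetrics(SM_CXSCREEN), user32.GetSystemMetrics(SM_CYSCREEN))
print(user32.GetSystemMetrics(SM_CXVIRTUALSCREEN),
      user32.GetSystemMetrics(SM_CYVIRTUALSCREEN))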
I am working on a project to losslessly compress a specific style of BMP images that look like this
I have thought about doing pattern recognition to find repetitive blocks of N x N pixels, but I feel the execution time won't be fast enough.
Any suggestions?
EDIT: I also have access to the dataset these images were created from; I just use the images to visualize my data.
Optical illusions make it hard to tell for sure, but are the colors only black/blue/red/green? If so, the most straightforward compression would be to make more efficient use of each pixel. A pixel takes a fixed amount of space regardless of its color, and it can represent far more colors than just those four, so chances are you are using 12x as many pixels as you really need (four colors fit in 2 bits, while a 24-bit RGB pixel holds 24 bits).
A simple way to do that would be to label the pixels with the following base-4 digits:
Black = 0
Red = 1
Green = 2
Blue = 3
Example:
The first four colors of the image seem to be Blue-Red-Blue-Blue. This is 3133 in base 4, which is simply DF in base 16 or 223 in base 10. That is enough to define the red channel of the new pixel. The next four source pixels would define the green channel, and the final four the blue channel, turning 12 pixels into a single pixel.
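As an illustration, here is a minimal Python sketch of that packing (the exact color tuples and block layout are assumptions; adapt them to your pixel format):
# Map each of the four colors to a 2-bit code (assumed RGB tuples).
CODES = {(0, 0, 0): 0, (255, 0, 0): 1, (0, 255, 0): 2, (0, 0, 255): 3}

def pack_channel(pixels):
    """Pack four 2-bit color codes into one 8-bit channel value."""
    value = 0
    for p in pixels:
        value = (value << 2) | CODES[p]
    return value

def pack_block(block):
    """Turn 12 source pixels into a single (R, G, B) pixel."""
    return (pack_channel(block[0:4]),
            pack_channel(block[4:8]),
            pack_channel(block[8:12]))

# Blue-Red-Blue-Blue -> 3133 in base 4 -> 0xDF -> 223
print(pack_channel([(0, 0, 255), (255, 0, 0), (0, 0, 255), (0, 0, 255)]))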
Beyond that you'll probably want to look into more conventional compression software.
I want to merge 2 images. How can I remove the area the two images have in common?
Can you suggest an algorithm to solve this problem? Thanks.
The two images are screenshots. They have the same width, and image 1 is always above image 2.
When two images have the same width and there is no X-offset at the left side, this shouldn't be too difficult.
You should create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, you look up the CRC of the first row of the lower image in the first vector; its position is the offset in the upper picture. Then you check that all following CRCs from both pictures are identical. If they are not, you look up the next occurrence of the initial CRC in the upper image and try again.
Once you've confirmed that the CRCs of both pictures are identical when the offset is applied, you can use the bitblt function of your graphics framework to build the composite picture.
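A rough Python sketch of the row-CRC idea (assuming each image is available as a list of raw row byte strings; extracting rows depends on your graphics library):
import zlib

def row_crcs(rows):
    """CRC-32 of every pixel row."""
    return [zlib.crc32(row) for row in rows]

def find_overlap(upper, lower):
    """Return the row index in `upper` where `lower` starts repeating it."""
    up, lo = row_crcs(upper), row_crcs(lower)
    for offset in range(len(up)):
        if up[offset] == lo[0]:
            n = min(len(up) - offset, len(lo))
            if up[offset:offset + n] == lo[:n]:
                return offset
    return len(up)  # no overlap: image 2 starts right after image 1

# composite = upper[:find_overlap(upper, lower)] + lower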
I haven't come across something similar before, but I think the following might work:
Convert both to grey-scale.
Enhance the contrast; the grey box might become white, for example, and the text would become blacker. (This is just to increase the confidence in the next step.)
Apply some threshold, converting the pictures to black and white.
Afterwards, you could find the similar areas (and thus the offset of the overlap) with a good degree of confidence. To find the similar parts, you could use Harper's method (which is good, but I don't know how reliable it would be without the filtering above), or you could apply some DSP operation(s) like convolution.
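For the first three steps, a Pillow sketch might look like this (the threshold of 128 is an assumption you would tune):
from PIL import Image, ImageOps

def preprocess(path):
    """Grey-scale, boost contrast, then binarise, as described above."""
    img = Image.open(path).convert('L')                 # 1. grey-scale
    img = ImageOps.autocontrast(img)                    # 2. enhance contrast
    return img.point(lambda p: 255 if p > 128 else 0)   # 3. threshold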
Hope that helps.
If your images are the same width and image 1 is always on top, I don't see how hard it could be.
Just store the bytes of the last line of image 1.
From the first line to the last line of image 2, make this test:
If the current line of image 2 is not equal to the last line of image 1 -> continue
else -> break the loop
Then define a new byte container for your new image:
Just store all the lines of image 1, plus all the lines of image 2 starting at (the found line + 1).
What might make you sweat here is finding the libraries to manipulate all these data structures, but after a bit of linking and documentation digging you should be able to implement this easily.
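To make the loop above concrete, a minimal Python sketch (again assuming each image is a list of raw row byte strings):
def merge(img1_rows, img2_rows):
    """Stack image 2 below image 1, dropping the region they share."""
    last = img1_rows[-1]
    for i, row in enumerate(img2_rows):
        if row == last:
            # found the last line of image 1 inside image 2
            return img1_rows + img2_rows[i + 1:]
    return img1_rows + img2_rows  # no shared line, just stack them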
Update: This only seems to be a problem on some computers. The normal, intuitive code seems to work fine on my home computer, but the computer at work has trouble.
Home computer: (no problems)
Windows XP Professional SP3
AMD Athlon 64 X2 3800+ Dual Core 2.0 GHz
NVIDIA GeForce 7800 GT
2 GB RAM
Work computer: (this question applies to this computer)
Windows XP Professional SP3
Intel Pentium 4 2.8 GHz (dual core, I think)
Intel 82945G Express Chipset Family
1 GB RAM
Original post:
I'm trying to apply a very simple texture to a part of the screen using Psychtoolbox in Matlab with the following code:
win = Screen('OpenWindow', 0, 127); % open window and obtain window pointer
tex = Screen('MakeTexture', win, [255 0;0 255]); % get texture pointer
% draw texture. Args: command, window pointer, texture pointer, source
% (i.e. the entire 2x2 matrix), destination (a 100x100 square), rotation
% (none) and filtering (nearest neighbour)
Screen('DrawTexture', win, tex, [0 0 2 2], [100 100 200 200], 0, 0);
Screen('Flip', win); % flip the buffer so the texture is drawn
KbWait; % wait for keystroke
Screen('Close', win); % close screen
Now I would expect to see this (four equally sized squares):
But instead I get this (right and bottom sides are cut off and top left square is too large):
Obviously the destination rectangle is a lot bigger than the source rectangle, so the texture needs to be magnified. I would expect this to happen symmetrically like in the first picture and this is also what I need. Why is this not happening and what can I do about it?
I have also tried using [128 0 1152 1024] as a destination rectangle (as it's the square in the center of my screen). In this case, all sides are 1024, which makes each involved rectangle a power of 2. This does not help.
Increasing the size of the checkerboard results in a similar situation where the right- and bottommost sides are not showed correctly.
Like I said, I use Psychtoolbox, but I know that it uses OpenGL under the hood. I don't know much about OpenGL either, but maybe someone who does can help even without knowing Matlab.
Thanks for your time!
While I don't know much (read: any) Matlab, I do know that textures are very picky in OpenGL. Last I checked, OpenGL required texture dimensions to be powers of two (e.g. 128 x 128, 256 x 256, 512 x 512).
If they aren't, OpenGL is supposed to pad the texture with white pixels where needed to meet this condition, although it can be a crapshoot depending on which system you are running it on.
I suggest making sure that your checkerboard texture fits these requirements.
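If you do need to pad a texture yourself, a small NumPy sketch along these lines would do it (padding with white, as mentioned above; the 2-D single-channel layout is an assumption):
import numpy as np

def pad_to_pow2(tex):
    """Pad a 2-D texture up to the next power-of-two size in each dimension."""
    h, w = tex.shape
    H, W = 1 << (h - 1).bit_length(), 1 << (w - 1).bit_length()
    out = np.full((H, W), 255, dtype=tex.dtype)  # white background
    out[:h, :w] = tex
    return out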
Also, I can't quite tell from the code you posted, but OpenGL expects you to map the corners of your texture to the corners of the object you intend to texture.
Another bit of advice, maybe try a linear filter instead of nearest neighbor. It's heavier computationally, but results in a better image. This probably won't matter in the end.
While this help is not Matlab-specific, I hope it is useful.
Without knowing a lot about the Psychtoolbox, but having dealt with graphics and user interfaces a lot in MATLAB, the first thing I would try would be to fiddle with the fourth input to Screen (the "source" input). Try shifting each corner by half-pixel and whole-pixel values. For example, the first thing I would try would be:
Screen('DrawTexture', win, tex, [0 0 2.5 2.5], [100 100 200 200], 0, 0);
And if that didn't seem to do anything, I would next try:
Screen('DrawTexture', win, tex, [0 0 3 3], [100 100 200 200], 0, 0);
My reasoning for this advice: I've noticed sometimes that images or GUI controls in my figures can appear to be off by a pixel, which I can only speculate is some kind of round-off error when scaling or positioning them.
That's the best advice I can give. Hope it helps!