What's the shortest solution in C/C++?
You didn't give much information, so I'll go with StretchBlt.
For an example, see Scaling an Image.
I won't give you a demo, but try to do the following:
create a destination bitmap of your desired size
select that bitmap into a device context
StretchBlt the original bitmap onto the device context mentioned above
unselect the bitmap from that device context
The recipe above needs no library other than GDI, which is already present in Windows. And if you plan to draw anything in C++, you should get familiar with that library anyway.
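If it helps, here is a rough sketch of that recipe in plain Win32 GDI. It is not a full demo: the source device context hSrcDC (with the original bitmap already selected into it), the sizes srcW/srcH and dstW/dstH, and the choice of HALFTONE are all assumptions, and error checking is omitted.

    HDC     hScreenDC = GetDC(NULL);
    HDC     hDstDC    = CreateCompatibleDC(hScreenDC);
    HBITMAP hDstBmp   = CreateCompatibleBitmap(hScreenDC, dstW, dstH); // destination bitmap of the desired size
    HGDIOBJ hOldBmp   = SelectObject(hDstDC, hDstBmp);                 // select it into a device context

    SetStretchBltMode(hDstDC, HALFTONE);                               // better quality than the default mode
    StretchBlt(hDstDC, 0, 0, dstW, dstH,                               // stretch the original onto it
               hSrcDC, 0, 0, srcW, srcH, SRCCOPY);

    SelectObject(hDstDC, hOldBmp);                                     // unselect the bitmap again
    DeleteDC(hDstDC);
    ReleaseDC(NULL, hScreenDC);
    // hDstBmp now holds the rescaled image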
Look here:
http://www.ucancode.net/Free-VC-Draw-Print-gdi-example-tutorial/GDI-Object-VC-MFC-Tutorial.htm
or here:
http://www.olivierlanglois.net/clover.html
if you don't plan to use MFC for the task.
One of the easiest rescale algorithms is nearest-neighbour. Suppose you are rescaling an image stored in an array of size x1 by y1 to another of size x2 by y2. The idea is to find, for each target array position, the nearest integer offset in the original array. So your rescale algorithm ends up looking something like this:
const int x1 = 512;
const int y1 = 512;
const int x2 = 64;
const int y2 = 64;
unsigned char orig[x1*y1];   /* Original byte array */
unsigned char target[x2*y2]; /* Target byte array */
for (int i = 0; i < x2; i++)
{
    for (int j = 0; j < y2; j++)
    {
        /* nearest source offsets for this target pixel */
        int xoff = (i * x1) / x2;
        int yoff = (j * y1) / y2;
        target[i + j*x2] = orig[xoff + yoff*x1];
    }
}
This will give a blocky resized image. For better results you can use averaging or fancier polynomial-based interpolators.
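For instance, a bilinear version of the same loop could look like the sketch below. This is just an illustration of the "fancier" interpolation idea, reusing the orig/target buffers and the x1/y1/x2/y2 sizes from the snippet above; it is not part of the original answer.

    for (int j = 0; j < y2; j++)
    {
        for (int i = 0; i < x2; i++)
        {
            /* fractional source position for this target pixel */
            float fx = (x2 > 1) ? (float)i * (x1 - 1) / (x2 - 1) : 0.0f;
            float fy = (y2 > 1) ? (float)j * (y1 - 1) / (y2 - 1) : 0.0f;
            int x0 = (int)fx, y0 = (int)fy;
            int xp = (x0 + 1 < x1) ? x0 + 1 : x0;   /* clamp at the right/bottom edge */
            int yp = (y0 + 1 < y1) ? y0 + 1 : y0;
            float tx = fx - x0, ty = fy - y0;

            /* weighted average of the four surrounding source pixels */
            float top    = orig[x0 + y0*x1] * (1 - tx) + orig[xp + y0*x1] * tx;
            float bottom = orig[x0 + yp*x1] * (1 - tx) + orig[xp + yp*x1] * tx;
            target[i + j*x2] = (unsigned char)(top * (1 - ty) + bottom * ty + 0.5f);
        }
    }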
What libraries are you using? How do you represent images? Most image libraries should already be able to do that, e.g. Qt has QPixmap with scaled() and GDI has StretchBlt.
Or you could code it yourself with bicubic interpolation.
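For example, the QPixmap route mentioned above might look like this minimal sketch; the file names and the 64x64 target size are made up, and a QApplication/QGuiApplication is assumed to exist since QPixmap depends on one.

    #include <QPixmap>

    // assumes a QApplication/QGuiApplication has already been created
    QPixmap original("input.png");                    // made-up file name
    QPixmap resized = original.scaled(64, 64,         // made-up target size
                                      Qt::KeepAspectRatio,
                                      Qt::SmoothTransformation);
    resized.save("output.png");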
I am creating a program that allows you to view fractals like the Mandelbrot or Julia set. I would like to render them as quickly as possible. I would love a way to put an array of uint8_t pixel values onto the screen. The array is formatted like this...
{r0,g0,b0,r1,g1,b1,...}
(a one-dimensional array of RGB color values)
I know I have the proper data because before I just set individual points and it worked...
for (int i = 0; i < height * width; ++i) {
    //setStroke and point are functions that I made that together just draw a colored point
    r.setStroke(data[i*3], data[i*3+1], data[i*3+2]);
    r.point(i % r.window.w, i / r.window.w);
}
This is a pretty slow operation, especially if the screen is big (which I would like it to be).
Is there any faster way to just put all the data onto the screen?
I tried doing something like this:
void* pixels;
int pitch;
SDL_Texture* img = SDL_CreateTexture(ren,
SDL_GetWindowPixelFormat(win),SDL_TEXTUREACCESS_STREAMING,window.w,window.h);
SDL_LockTexture(img, NULL, &pixels, &pitch);
memcpy(pixels, data, window.w * 3 * window.h);
SDL_UnlockTexture(img);
SDL_RenderCopy(ren,img,NULL,NULL);
SDL_DestroyTexture(img);
I have no idea what I'm doing, so please have mercy.
Edit (thank you for comments :))
So here is what I do now
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB888,SDL_TEXTUREACCESS_STREAMING,window.w,window.h);
SDL_UpdateTexture(img,NULL,&data[0],window.w * 3);
SDL_RenderCopy(ren,img,NULL,NULL);
SDL_DestroyTexture(img);
But I get this image... which is not what it should look like.
I am thinking that my data is just formatted wrong; right now it is an array of uint8_t in RGB order. Is there another way I should be formatting it? (Note: I do not need an alpha channel.)
I'm currently working on a printing plugin in C++, and started working with TextOut to print the text I want. It works great, but apparently the positions that TextOut takes as parameters are in pixels. Is there a way to set them to cm or mm, or any other unit?
Well, it's pretty simple. The coordinates are not in pixels; they are in the coordinates of your mapping mode. It just so happens that the default mapping mode of a DC is MM_TEXT, in which each coordinate unit is one pixel on the device.
Change your mapping mode using SetMapMode() to the coordinate system you prefer to use. You can also play with window extents, viewport extents, and origins to customize it however you want. You might want to look at the documentation for SetMapMode() and the MM_LOMETRIC (or MM_HIMETRIC) mapping mode.
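A minimal plain-GDI sketch of that idea (hdc is assumed to be your printer or screen DC; note that in MM_LOMETRIC each unit is 0.1 mm and the y axis points up, so you draw with negative y values):

    // switch the DC from pixels to 0.1 mm units
    int oldMode = SetMapMode(hdc, MM_LOMETRIC);

    // draw text 20 mm from the left and 10 mm from the top;
    // y is negative because MM_LOMETRIC's y axis increases upward
    TextOut(hdc, 200, -100, TEXT("Hello, printer"), 14);

    SetMapMode(hdc, oldMode);   // restore the previous mapping mode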
There should be special handling implemented for printing. Basically, you need to perform the conversion based on HIMETRIC units, since the paper size is reported in HIMETRIC units.
Here is the code that will help you get started (MFC-based):
if (pDC->IsPrinting())
{
    // printable area in millimeters
    int nWidth  = pDC->GetDeviceCaps(HORZSIZE);
    int nHeight = pDC->GetDeviceCaps(VERTSIZE);

    CDC ScreenDC;
    ScreenDC.CreateIC(_T("DISPLAY"), NULL, NULL, NULL);
    int nPixelsPerInchX = ScreenDC.GetDeviceCaps(LOGPIXELSX);
    int nPixelsPerInchY = ScreenDC.GetDeviceCaps(LOGPIXELSY);

    // paper size is in HIMETRIC units. we need to convert
    CSize PaperSize(MulDiv(nWidth,  nPixelsPerInchX*100, HIMETRIC_PER_INCH),
                    MulDiv(nHeight, nPixelsPerInchY*100, HIMETRIC_PER_INCH));

    // now we need to calculate zoom ratio so the layer content fits on page
    double fZoomX = (double)PaperSize.cx/(double)m_DocSize.cx;
    double fZoomY = (double)PaperSize.cy/(double)m_DocSize.cy;
    m_PrintZoom = min(fZoomX, fZoomY);
    ResetViewSize(TRUE);

    if (pDC->IsKindOf(RUNTIME_CLASS(CPreviewDC)))
    {
        pDC->SetMapMode(MM_ANISOTROPIC);
        pDC->SetWindowExt(nPixelsPerInchX, nPixelsPerInchY);
        pDC->SetViewportExt(pDC->GetDeviceCaps(LOGPIXELSX), pDC->GetDeviceCaps(LOGPIXELSY));
        pDC->SetViewportOrg(0,0);
        pDC->SetWindowOrg(0,0);
    }
}
I need to center a piece of text to a rectangle.
I found this example, but I'm struggling to understand what it does.
It should not be too hard to achieve; I just need to know how to find the width and height of the text after it is drawn, but I cannot find this anywhere.
To draw the text, I do it char by char:
static void drawText(std::string str, float x, float y, float z) {
glRasterPos3f(x, y, z);
for (unsigned int i = 0; i < str.size(); i++) {
glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, str[i]);
}
}
Not sure if this is the best way, but it is my first program using OpenGL.
Raster fonts are awful, and just so you know, this will not work in modern OpenGL: nowadays you need texture-mapped triangles to implement bitmap fonts. If you are just starting out, legacy OpenGL may work for you, but you will find that things like the raster position are not supported in OpenGL ES or core OpenGL 3+.
That said, you can sum up glutBitmapWidth (...) across all of the characters in your string, like this:
unsigned int str_pel_width = 0;
const unsigned int str_len = str.size ();
// Finding the string length can be expensive depending on implementation (e.g. in
// a C-string it requires looping through the entire string storage until the
// first null byte is found, each and every time you call this).
//
// The string has a constant-length, so move this out of the loop for better
// performance! You are using std::string, so this is not as big an issue, but
// you did ask for the "best way" of doing something.
for (unsigned int i = 0; i < str_len; i++)
str_pel_width += glutBitmapWidth (GLUT_BITMAP_HELVETICA_18, str [i]);
Now, to finish up this discussion, you should be aware that the height of each character is identical in a GLUT bitmap font. If I recall, 18 pt. Helvetica is probably 22 or 24 pixels high. The distinction between pt. size and pixel size is supposed to be for DPI scaling, but GLUT does not actually implement this.
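As a rough sketch of the actual centering (not from the original answer): assume a target rectangle rect_x, rect_y, rect_w, rect_h in the same 2D space you pass to glRasterPos, and assume a nominal line height of about 18 pixels for GLUT_BITMAP_HELVETICA_18, since GLUT does not expose the real font height.

    // str_pel_width is the sum computed above; drawText is the function from the question
    float font_height = 18.0f;   // assumed nominal height, see note above
    float cx = rect_x + (rect_w - (float)str_pel_width) / 2.0f;
    float cy = rect_y + (rect_h - font_height) / 2.0f;
    drawText(str, cx, cy, 0.0f);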
I'm using OpenCV for object detection and one of the operations I would like to be able to perform is a per-pixel square root. I imagine the loop would be something like:
IplImage* img_;
...
for (int y = 0; y < img_->height; y++) {
    for (int x = 0; x < img_->width; x++) {
        // Take pixel square root here
    }
}
My question is how can I access the pixel value at coordinates (x, y) in an IplImage object?
Assuming img_ is of type IplImage*, and assuming 16-bit unsigned integer data, I would say
unsigned short pixel_value = ((unsigned short *)&(img_->imageData[img_->widthStep * y]))[x];
See also here for IplImage definition.
An OpenCV IplImage stores its pixels in a one-dimensional array, so you must compute a single index to get at the image data. The position of your pixel depends on the color depth and the number of channels in your image.
// width step (number of bytes per image row)
int ws = img_->widthStep;
// the number of channels (colors)
int nc = img_->nChannels;
// the depth in bytes of one color
int d = (img_->depth & 0x0000ffff) >> 3;
// assuming the depth is the size of a short
unsigned short* pixel_value = (unsigned short*)(img_->imageData + (y*ws) + (x*nc*d));
// this gives you a pointer to the first color in a pixel
// if you are rolling grayscale, just dereference the pointer
You can pick a channel (color) by advancing the pixel pointer with pixel_value++. I would suggest using a look-up table for the square roots if this is going to be any sort of real-time application.
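A sketch of that look-up table idea for 16-bit pixels: precompute the square root of every possible value once, then index into the table per pixel. The sqrt_lut and init_sqrt_lut names are made up for illustration.

    #include <cmath>

    // one entry per possible 16-bit pixel value, filled once at start-up
    static unsigned short sqrt_lut[65536];

    static void init_sqrt_lut()
    {
        for (int v = 0; v < 65536; v++)
            sqrt_lut[v] = (unsigned short)(std::sqrt((double)v) + 0.5);
    }

    // then, inside the per-pixel loop, replace the sqrt call with a lookup:
    // *pixel_value = sqrt_lut[*pixel_value];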
Please use the CV_IMAGE_ELEM macro.
Also, consider using cvPow with power=0.5 instead of working on pixels yourself, which should be avoided anyway.
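A small sketch combining both suggestions, assuming the legacy OpenCV C headers and a single-channel 32-bit float image (the image size and the element coordinates are made up):

    #include <opencv/cv.h>

    IplImage* src = cvCreateImage(cvSize(640, 480), IPL_DEPTH_32F, 1);
    IplImage* dst = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);

    cvPow(src, dst, 0.5);                          // per-pixel square root in one call

    float v = CV_IMAGE_ELEM(dst, float, 10, 20);   // element at row 10, column 20

    cvReleaseImage(&dst);
    cvReleaseImage(&src);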
You may find several ways of reaching image elements in Gady Agam's nice OpenCV tutorial here.
I'm using JNI to obtain raw image data in the following format:
The image data is returned in the format of a DATA32 (32 bits) per pixel in a linear array ordered from the top left of the image to the bottom right going from left to right each line. Each pixel has the upper 8 bits as the alpha channel and the lower 8 bits are the blue channel - so a pixel's bits are ARGB (from most to least significant, 8 bits per channel). You must put the data back at some point.
The DATA32 format is essentially an unsigned int in C.
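For reference, unpacking one such DATA32 value on the C side is plain bit shifting; in this rough sketch, data is the pointer returned by imlib_image_get_data() (shown further down) and i is an arbitrary pixel index.

    unsigned int pixel = data[i];
    unsigned char a = (pixel >> 24) & 0xFF;   /* alpha: most significant byte */
    unsigned char r = (pixel >> 16) & 0xFF;
    unsigned char g = (pixel >>  8) & 0xFF;
    unsigned char b =  pixel        & 0xFF;   /* blue: least significant byte */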
So I obtain an int[] array and then try to create a BufferedImage out of it:
int w = 1920;
int h = 1200;
BufferedImage b = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
int[] f = (new Capture()).capture();
for (int i = 0; i < f.length; i++) {
    b.setRGB(x, y, f[i]);
}
f is the array with the pixel data.
According to the Java documentation this should work since BufferedImage.TYPE_INT_ARGB is:
Represents an image with 8-bit RGBA color components packed into integer pixels. The image has a DirectColorModel with alpha. The color data in this image is considered not to be premultiplied with alpha. When this type is used as the imageType argument to a BufferedImage constructor, the created image is consistent with images created in the JDK1.1 and earlier releases.
Unless by 8-bit RGBA they mean that all components added together are encoded in 8 bits? But that is impossible.
This code does work, but the image that is produced is not at all like the image that it should produce. There are tonnes of artifacts. Can anyone see something obviously wrong in here?
Note I obtain my pixel data with
imlib_context_set_image(im);
data = imlib_image_get_data();
in my C code, using the imlib2 library with the API at http://docs.enlightenment.org/api/imlib2/html/imlib2_8c.html#17817446139a645cc017e9f79124e5a2
I'm an idiot.
This is merely a bug.
I forgot to include how I calculate x,y above.
Basically I was using
int x = i%w;
int y = i/h;
in the for loop, which is wrong. It should be
int x = i%w;
int y = i/w;
Can't believe I made this stupid mistake.