I want to draw a line with SDL; I know there is a function called SDL_RenderDrawLine. But my question probably cannot be solved with SDL alone. Say the line I want to draw is 1 m long, and I decide that 100 pixels represent 1 meter. What happens if, after that, I want to draw a line 0.001 m long?
How do I tackle this? SDL only accepts ints, because it works in pixels; in other words, there is no such thing as 0.1 pixels or half a pixel.
If this cannot be solved using SDL, how can I do it?
You cannot go below 1 pixel; that is how computers represent points. If you want to make a shape look smaller, you can create a camera: moving it closer or further away changes the apparent size of the shape you are drawing.
In your case, moving the camera further away from your line will make the line look smaller. That said, you cannot have anything smaller than one pixel: it's either one pixel or no pixel.
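Here is a minimal sketch of that camera idea, assuming SDL2 (pixelsPerMeter and zoom are made-up names for this illustration): keep all positions in meters as floats and convert to pixels only at draw time. Note that SDL_RenderDrawLineF (SDL >= 2.0.10) accepts floats, but the result is still rasterized to whole pixels.

    #include <SDL.h>

    struct Camera {
        float pixelsPerMeter = 100.0f; // 100 px == 1 m
        float zoom           = 1.0f;   // >1 zooms in, <1 zooms out
    };

    void drawLineMeters(SDL_Renderer* r, const Camera& cam,
                        float x1m, float y1m, float x2m, float y2m)
    {
        const float s = cam.pixelsPerMeter * cam.zoom;
        // A 0.001 m line at 100 px/m is 0.1 px and will show up as at most
        // one pixel (or none) until the zoom is increased.
        SDL_RenderDrawLineF(r, x1m * s, y1m * s, x2m * s, y2m * s);
    }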
I am trying to display a 3D model of a human in OpenGL. The human is represented by a 3D array[n][n][n] (height, width and depth), where n = 300. Each element of the array has the value 1 or 0; if an element is 0 it should be ignored, otherwise it is drawn.
Problem: because I have to iterate through the 3D array using 3 nested for loops and then create vertices for each individual voxel, it takes a lot of time.
My idea for solving the problem: write another program that iterates through the array, creates the vertices and writes them to a file. Then, whenever I need to render, I read the vertices back from the file.
Question: What is the best way to render such an object? It would be great if you could suggest an algorithm or technique.
Many years ago I made a school project where I did something similar.
I had a 3D volume of 0s and 1s representing a room: 0 means the cube is empty, 1 means the cube is full. It's the same problem you are facing, just with the normals of the quads flipped.
So I made an algorithm that turns the cube of bits into the minimum number of quads.
I've been digging in my old code repository and found the function that does that. I feel a bit ashamed of the code I wrote back then, but hopefully you can grab some inspiration from it.
I'm going to try to give a brief explanation of what the algorithm does.
We slice the volume with planes in each direction X, Y and Z.
For each plane we check all the cells it touches on each side.
If both sides have the same number (both 0, or both 1), we do nothing.
Otherwise (we have a different number on each side), we generate a quad in that position, and the normal of that quad will depend on the order of the numbers (01 or 10).
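Here is a rough sketch of that plane test for one axis (not my original function, and it leaves out the merging step that reduces the quad count; the flat voxel layout and the Quad struct are just illustrative):

    #include <vector>

    struct Quad { int x, y, z, axis, sign; }; // unit quad position + facing

    std::vector<Quad> sliceX(const std::vector<int>& v, int n)
    {
        auto at = [&](int x, int y, int z) {
            // Treat everything outside the volume as empty (0).
            if (x < 0 || x >= n || y < 0 || y >= n || z < 0 || z >= n) return 0;
            return v[(x * n + y) * n + z];
        };
        std::vector<Quad> quads;
        for (int x = 0; x <= n; ++x)          // n+1 slicing planes along X
            for (int y = 0; y < n; ++y)
                for (int z = 0; z < n; ++z) {
                    int a = at(x - 1, y, z);  // cell on one side of the plane
                    int b = at(x, y, z);      // cell on the other side
                    if (a == b) continue;     // 00 or 11: no surface here
                    // 01 vs 10 decides which way the quad faces.
                    quads.push_back({x, y, z, 0, a == 1 ? +1 : -1});
                }
        return quads;
    }

The Y and Z directions work the same way, with the roles of the coordinates rotated.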
In the original function, ignore the colorCount variable; I think it's just a color ID I used for debugging.
My program called this algorithm when it first loaded, and whenever you made a change in the scene (it was a live editor). I didn't notice any slowdowns when editing the scene, and the computer I was using back then was not very fast.
I need an algorithm that, given a 1-bit 2D image (a 2D matrix of mixed 1s and 0s), returns rectangles (with the x,y coordinates of each corner) covering all the pixels that are equal to zero, using the fewest boxes.
So for an image like
0000000
1111111
1111111
1111110
1111100
0000000
It would return something like
Rectangle 1 ((0,0),(0,1),(7,0),(7,1))
Rectangle 2 ((6,3),(7,3),(7,4),(6,4))
Rectangle 3 ((5,4),(7,4),(7,6),(5,6))
Rectangle 4 ((0,5),(0,6),(7,6),(7,5))
I feel this algorithm exists, but I am unable to Google it or name it.
I'm guessing you're looking to make a compression algorithm for your images. There isn't an algorithm that guarantees the minimum number of rectangles, as far as I'm aware.
The first thing that comes to mind is taking your pixel data as a 1D array and using run-length encoding to compress it. Images tend to have rather large groupings of similarly-colored pixels, so this should give you some data savings.
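A minimal sketch of such an encoder (the names are illustrative, not from any particular library):

    #include <cstdint>
    #include <vector>

    // Encodes a sequence of 0/1 pixels as run lengths. Store the color of
    // the first run separately in a single bit.
    std::vector<uint32_t> rleEncode(const std::vector<uint8_t>& pixels)
    {
        std::vector<uint32_t> runs;
        if (pixels.empty()) return runs;
        uint32_t len = 1;
        for (size_t i = 1; i < pixels.size(); ++i) {
            if (pixels[i] == pixels[i - 1]) { ++len; }
            else { runs.push_back(len); len = 1; }
        }
        runs.push_back(len);
        return runs;
    }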
There are some things you can do on top of that to further increase the information density:
Like you suggested, start off with an image that is completely white and only store the black pixels.
If encoding time isn't an issue, run your encoding on both white and black pixels, then store whichever requires less data and use one bit to store whether the image should start with a black or a white background.
There are some algorithms that try to do this in two dimensions, but this seems to be quite a bit more complex. Here's one attempt I found on the topic:
https://pdfs.semanticscholar.org/d09a/62ea3472352bf7bbe873677cd81f348206cc.pdf
I found some more interesting SO answers:
What algorithm can be used for packing rectangles of different sizes into the smallest rectangle possible in a fairly optimal way?
Minimum exact cover of grid with squares; extra cuts
Algorithm for finding the fewest rectangles to cover a set of rectangles without overlapping
https://mathoverflow.net/questions/244718/algo-for-covering-maximum-surface-of-a-polygon-with-rectangles
https://mathoverflow.net/questions/105837/get-largest-inscribed-rectangle-of-a-concave-polygon
https://mathoverflow.net/questions/80665/how-to-cover-a-set-in-a-grid-with-as-few-rectangles-as-possible
Converting monochrome image to minimum number of 2d shapes
I also read up on "Covering rectilinear polygons with axis-parallel rectangles".
I even found a code here: https://github.com/codecombat/codecombat/blob/6009df26de7c7938c0af2122ffba72c07123d172/app/lib/world/world_utils.coffee#L94-L148
I tested multiple approaches, but in the end none of them were as fast as I needed, or they generated an unreasonable number of rectangles. So for now I have gone with a different approach.
I wrote code to perform a 2D transformation: scaling.
(value = a variable from a slider, range 1-10)
    // Scale the most recent point by value/6 and plot a single white pixel.
    int x = punktx.back();
    int y = punkty.back();
    DrawPixel(x * value / 6.0, y * value / 6.0, bits, 255, 255, 255);
And this is the output I received:
As you can see, there are small breaks in the square. Is that expected, or is my code wrong?
That's not how you scale things in Qt; use the QImage::scaled() or QPixmap::scaled() method instead.
As for the breaks: they are the result of using the same number of pixels to draw the large square as the small one. You would have to fill the gaps between the pixels yourself, but scaling that way doesn't make sense anyway, as mentioned above.
The problem is that if you iterate over an input image that is, say, 10x10 pixels and output the same number of pixels, you are only drawing 100 pixels no matter how much you "scale" it. If you scale it to fill 20x20 pixels but still only draw your original 100 pixels, of course it will have holes in it.
If you want to implement a simple scaling function as a learning exercise, some approaches are:
Instead of drawing 1 pixel per original pixel, draw a rectangle of the scaled size in that pixel's color. This has the advantage of being only a tiny change to your existing code, so it is worth trying as a first attempt.
Loop over the output pixels, then work out where each one falls on the input image (reverse the scaling) and draw one pixel of the right color; see the sketch after this list. This avoids some of the overhead of drawing lots of rectangles (which can paint the same output pixels more than once).
As above, but write the data into a bitmap stored in memory, then draw it all at once with a bitmap drawing command.
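Here is a rough sketch of the second approach (nearest-neighbor via reverse mapping); the Image type and its accessors are placeholders for whatever your framebuffer access looks like, and dst is assumed to be pre-sized to the scaled dimensions:

    #include <cstdint>
    #include <vector>

    struct Image {
        int width = 0, height = 0;
        std::vector<uint32_t> px;               // row-major ARGB
        uint32_t getPixel(int x, int y) const   { return px[y * width + x]; }
        void setPixel(int x, int y, uint32_t c) { px[y * width + x] = c; }
    };

    void scaleNearest(const Image& src, Image& dst, double scale)
    {
        for (int oy = 0; oy < dst.height; ++oy)
            for (int ox = 0; ox < dst.width; ++ox) {
                // Reverse the scaling: which input pixel lands here?
                int ix = static_cast<int>(ox / scale);
                int iy = static_cast<int>(oy / scale);
                if (ix < src.width && iy < src.height)
                    dst.setPixel(ox, oy, src.getPixel(ix, iy));
            }
    }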
Also, if you really want better results, you can work out whether an output pixel crosses several input pixels and average their colors, and so on. This gives smoother-looking results, but it can blur images made of solid colors.
I've been trying to do something that seems surprisingly challenging: printing an equilateral triangle to the command line (Terminal on Mac OS X). I have a program that can compute the nth row of Pascal's triangle up to some user-specified constant. As is well known, taking the values of Pascal's triangle modulo two produces a pattern that correlates with Sierpinski's triangle.
I have been setting odd values to 1 and even values to 0, and when I print the results in the Terminal and zoom out, it looks nice, apart from the fact that it's clearly not equilateral. Here is an example of my program's output after zooming way out (so the zeroes and ones look quite different):
But I'm wondering: is there a way to get this triangle to look equilateral? Or do I have to print the output somewhere else? I've been experimenting with different fonts and line widths, but I can't get anything close to equilateral, and even if it looks close, I don't have a reliable way of checking. Part of the problem is that zooming in and out in the Terminal changes the width and height scales.
My code takes the number of rows to generate as input, and takes that number into account when printing each row. The first row (just a single "1") is preceded by n-1 spaces; the second row prints n-2 spaces before its contents ("1 1", with a space between the numbers), and so on. It's in C++, but I don't think that should matter.
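For reference, here is a compact program roughly equivalent to what I describe; it uses the shortcut that C(n, k) is odd exactly when (n & k) == k, rather than computing the rows directly:

    #include <iostream>

    int main()
    {
        const int rows = 16;
        for (int n = 0; n < rows; ++n) {
            // Leading spaces center the row, as described above.
            for (int s = 0; s < rows - 1 - n; ++s) std::cout << ' ';
            for (int k = 0; k <= n; ++k)
                std::cout << (((n & k) == k) ? "1 " : "0 ");
            std::cout << '\n';
        }
    }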
I suspect that I'll need to find some other way of getting the image out, so any advice about libraries to use would be great.
A good option is to render the triangle to a raster format of your choice and use aalib or libcaca to render that image to the terminal.
I would (and I think you already have) figure out the actual width and height of the final image, and generate a 2D matrix with those dimensions. The matrix can hold integers (no fewer than 24 bits wide, giving room for three 8-bit color components), or you can use 3 separate 2D matrices, one for each color component. Set all of those values to whatever you want the background color to be.
Then move through your algorithm, setting the appropriate pixels to whatever OTHER color you want your triangle to show up as.
Look here for writing such a matrix out to a .bmp (or bitmap) file.
Writing BMP image in pure c/c++ without other libraries
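For completeness, here is a minimal sketch of such a writer (24-bit uncompressed BMP; my own illustration, not the code from the linked answer). BMP rows are stored bottom-up, padded to a multiple of 4 bytes, with pixels in BGR order; widths up to 65535 are assumed for brevity:

    #include <cstdint>
    #include <fstream>
    #include <vector>

    // rgb holds one 0x00RRGGBB value per pixel, row-major, top-down.
    void writeBmp(const char* path, int w, int h,
                  const std::vector<uint32_t>& rgb)
    {
        const int pad = (4 - (w * 3) % 4) % 4;
        const uint32_t fileSize = 54 + (w * 3 + pad) * h;

        uint8_t hdr[54] = {0};
        hdr[0] = 'B'; hdr[1] = 'M';
        hdr[2] = fileSize & 0xFF;         hdr[3] = (fileSize >> 8) & 0xFF;
        hdr[4] = (fileSize >> 16) & 0xFF; hdr[5] = (fileSize >> 24) & 0xFF;
        hdr[10] = 54;                     // offset of pixel data
        hdr[14] = 40;                     // BITMAPINFOHEADER size
        hdr[18] = w & 0xFF; hdr[19] = (w >> 8) & 0xFF;
        hdr[22] = h & 0xFF; hdr[23] = (h >> 8) & 0xFF;
        hdr[26] = 1;                      // color planes
        hdr[28] = 24;                     // bits per pixel

        std::ofstream f(path, std::ios::binary);
        f.write(reinterpret_cast<char*>(hdr), 54);
        const char zero[3] = {0, 0, 0};
        for (int y = h - 1; y >= 0; --y) {       // bottom-up row order
            for (int x = 0; x < w; ++x) {
                const uint32_t c = rgb[y * w + x];
                const char bgr[3] = { char(c & 0xFF),          // B
                                      char((c >> 8) & 0xFF),   // G
                                      char((c >> 16) & 0xFF) };// R
                f.write(bgr, 3);
            }
            f.write(zero, pad);
        }
    }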
I am writing an application in C++ that requires a little bit of image processing. Since I am completely new to this field I don't quite know where to begin.
Basically I have an image that contains a rectangle with several boxes. What I want is to be able to isolate that rectangle (x, y, width, height) as well as get the center coordinates of each of the boxes inside (18 total).
I was thinking of using a simple for-loop over the pixels of the image until I find a pattern, but I was wondering if there is a more efficient approach. I would also like to see whether I can do it efficiently without using big libraries like OpenCV.
Here are a couple example images, any help would be appreciated:
Also, what are some good resources where I could learn more about image processing like this?
The detection algorithm here can be fairly simple. Your box-of-squares (BOS) is always aligned with the edge of the image and has a simple structure. Here's how I'd approach it:
1. Choose a colorspace. Assume RGB is OK for now, but it may work better in something else.
2. For each line:
2.1. For each pixel, calculate the magnitude of the difference between the pixel and the pixel immediately below it. The magnitude difference is simply sqrt((X-x)^2 + (Y-y)^2 + (Z-z)^2), where X,Y,Z are the color coordinates of the first pixel and x,y,z are the color coordinates of the pixel below it. For RGB, XYZ = RGB, of course.
2.2. Calculate the maximum run length of consecutive difference magnitudes that are below a certain threshold magThresh. You may also choose a forgiving version of this: the maximum run length, but allowing intrusions up to intrLen pixels long, each followed by a run at least contLen pixels long. This takes care of possible line-to-line differences at the edges of the squares.
3. Find the largest set of consecutive lines whose maximum run lengths are above minWidth and below maxWidth.
Thus you've found the lines which contain the box, and by recalculating the data from step 2.1 above you'll know where the boxes are in horizontal coordinates.
Detecting box edges is done by repeating the same thing but scanning left-to-right within the box. At that point you'll have approximate box centroids that take no notice of bleeding between pixels.
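A sketch of steps 2.1 and 2.2 for a single line, assuming a simple RGB image type (the Image and Pixel types are placeholders; magThresh is the threshold named above):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Pixel { int r, g, b; };
    struct Image {
        int width, height;
        std::vector<Pixel> px;
        Pixel at(int x, int y) const { return px[y * width + x]; }
    };

    // Magnitude of the difference between pixel (x, y) and the pixel below it.
    double diffMag(const Image& img, int x, int y)
    {
        const Pixel a = img.at(x, y), b = img.at(x, y + 1);
        const double dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
        return std::sqrt(dr * dr + dg * dg + db * db);
    }

    // Longest run of below-threshold differences along line y (step 2.2,
    // without the forgiving intrusion handling).
    int maxRunBelowThreshold(const Image& img, int y, double magThresh)
    {
        int best = 0, run = 0;
        for (int x = 0; x < img.width; ++x) {
            if (diffMag(img, x, y) < magThresh) best = std::max(best, ++run);
            else run = 0;
        }
        return best;
    }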
This can all be accomplished by repeatedly running the image through various convolution kernels and then thresholding, I'd think. The good thing is that both of those operations have very fast library implementations; you do not want to reimplement them by hand, as that would likely be significantly slower.
If you insist on doing it yourself (personally I'd use OpenCV; it's industrial-strength and free), you're going to need an edge detection algorithm first. There are a good few out there on the internet, but be prepared for some frightening mathematics...
Many involve iterating over each pixel, lifting it and its neighbours' values into a matrix, and then convolving with a kernel matrix. Be aware that this has to be done for every pixel (though in principle, in your case, you can stop at the first discovered rectangle) and for each colour channel, so it would be highly advisable to push the work onto the GPU.
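As an illustration of that inner loop, here is a minimal single-channel 3x3 convolution on the CPU (edge pixels are skipped for brevity; a GPU version would map the two outer loops onto threads):

    #include <vector>

    std::vector<float> convolve3x3(const std::vector<float>& img,
                                   int w, int h, const float k[3][3])
    {
        std::vector<float> out(img.size(), 0.0f);
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                float acc = 0.0f;
                // Lift the pixel's 3x3 neighbourhood and multiply by the kernel.
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        acc += img[(y + dy) * w + (x + dx)] * k[dy + 1][dx + 1];
                out[y * w + x] = acc;
            }
        return out;
    }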