Changing a texture in real time with SDL (C++)

Sorry, I'm a bit new to SDL and C++ development. Right now I've created a tile mapper that reads from my map.txt file. So far it works, but now I want to add the ability to edit the map.
SDL_Texture *texture;
texture = IMG_LoadTexture(G_Renderer, "assets/tile_1.png"); // load the tile image
SDL_RenderCopy(G_Renderer, texture, NULL, &destination);    // draw it into the destination rect
SDL_RenderPresent(G_Renderer);                              // present the frame
The above is the basic way I'm showing my tiles, but when I try to go in and change a tile's texture in real time it's kind of buggy and doesn't work well. Is there a method that is best for editing a texture? Thanks for the help, I appreciate everything.

The most basic way is to set up a storage container with some textures which you will use repeatedly, for example a vector or a dictionary/map. Using the map approach, for example, you could do something like:
// remember to #include <map> and <string>
std::map<std::string, SDL_Texture*> myTextures;
// assign using array-like notation (IMG_LoadTexture returns an SDL_Texture*):
myTextures["texture1"] = IMG_LoadTexture(G_Renderer, "assets/tile_1.png");
myTextures["texture2"] = IMG_LoadTexture(G_Renderer, "assets/tile_2.png");
myTextures["texture3"] = IMG_LoadTexture(G_Renderer, "assets/tile_3.png");
myTextures["texture4"] = IMG_LoadTexture(G_Renderer, "assets/tile_4.png");
then to utilise a different texture, all you have to do is use something along the lines of:
SDL_RenderCopy(G_Renderer, myTextures["texture1"], NULL, &destination);
SDL_RenderPresent(G_Renderer);
which can be further controlled by changing the first line to
SDL_RenderCopy(G_Renderer, myTextures[textureName], NULL, &destination);
where textureName is a std::string variable which you can change at runtime.
This approach means you can load all the textures you will need beforehand and simply use them as needed later, so there's no loading from the file system while rendering :)
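To tie this back to your map-editing use case, here's a minimal sketch under some assumptions (the tileNames list, the tileGrid array and the renderMap helper are hypothetical, not from your code) of how a tile's texture can be swapped at runtime simply by changing which key gets looked up:
// remember to #include <vector> as well; myTextures is the map filled above
std::vector<std::string> tileNames = { "texture1", "texture2", "texture3", "texture4" };
std::vector<std::vector<int>> tileGrid(10, std::vector<int>(10, 0)); // 10x10 map of tile indices
void renderMap(SDL_Renderer* renderer, int tileSize)
{
    for (size_t y = 0; y < tileGrid.size(); ++y)
    {
        for (size_t x = 0; x < tileGrid[y].size(); ++x)
        {
            SDL_Rect destination = { int(x) * tileSize, int(y) * tileSize, tileSize, tileSize };
            // editing the map is now just writing a new index into tileGrid[y][x];
            // no textures are reloaded while rendering
            SDL_RenderCopy(renderer, myTextures[tileNames[tileGrid[y][x]]], NULL, &destination);
        }
    }
    SDL_RenderPresent(renderer);
}
Handling a click then just means converting the mouse position to an (x, y) cell and storing a different index in tileGrid.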
There is a nice explanation of map here.
Hopefully this gives you a nudge in the right direction. Let me know if you need more info:)

Related

How to use stock-objects with GDI+

I was using the following pattern to record an enhanced meta-file for later playback:
POINT pts[] = {
//.....
};
::SelectObject(hEnhDC, ::GetStockObject(LTGRAY_BRUSH));
::Polygon(hEnhDC, pts, _countof(pts));
Now I'm forced to use GDI+ to provide anti-aliasing, so I'm trying to convert that code sample:
Gdiplus::Point pts[] = {
//...
};
Gdiplus::Graphics grx(hEnhDC);
Gdiplus::Pen pen(Gdiplus::Color(255, GetRValue(clrPen), GetGValue(clrPen), GetBValue(clrPen)), PEN_THICKNESS);
// 'brush' is the part I don't know how to create from the stock LTGRAY_BRUSH:
grx.FillPolygon(&brush, pts, _countof(pts));
grx.DrawPolygon(&pen, pts, _countof(pts));
The issue is: how do I convert a stock-object HBRUSH from ::GetStockObject(LTGRAY_BRUSH) into a GDI+ Brush object?
EDIT: Guys, thank you for all your suggestions, and I apologize for not providing more details. This question is not about getting the RGB color triplet from the stock brush. I can do all that with the GetSysColor function, or with the LOGBRUSH as you showed below.
The trick lies in the first sentence above. I am recording an enhanced metafile that may be played on a separate computer, so I cannot hard-code colors into it.
Let me explain. Say, the first GDI example (let's simplify it down to a triangle with a gray fill):
POINT pts[] = {
{100, 100,},
{100, 120,},
{120, 100,},
};
::SelectObject(hEnhDC, ::GetStockObject(LTGRAY_BRUSH));
::Polygon(hEnhDC, pts, _countof(pts));
If I then call GetEnhMetaFileBits on that meta-file and look at the recorded data, the EMR_SELECTOBJECT record in the meta-file specifies LTGRAY_BRUSH = 0x80000001, which will be properly substituted with the actual color when that meta-file is played back on the target system.
And that's what I'm trying to achieve here with GDI+. For some reason it only seems to support hard-coded color triplets in its Brush class. That's why I asked.
Otherwise, one solution is to parse the enhanced meta-file's raw data (for GDI+ it is a much more complex structure, though, which also involves parsing EMR_GDICOMMENT records), and then substitute the needed color on the target system before the GDI+ meta-file is played. But that involves writing a lot of code, which I was trying to avoid at this stage ...
I'm afraid you can't easily convert it.
A simple workaround is to create a GDI+ solid brush with the same color.
See this spec for the color values of the GDI stock objects; that particular brush has color #C0C0C0.
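A minimal sketch of that workaround, assuming you accept baking the recording machine's color into the metafile (which is exactly the limitation described in your edit above), is to read the stock brush's LOGBRUSH and build a SolidBrush from its color:
// query the stock brush's color, then make an equivalent GDI+ solid brush
LOGBRUSH lb = {};
::GetObject(::GetStockObject(LTGRAY_BRUSH), sizeof(lb), &lb);
const COLORREF c = lb.lbColor; // 0x00C0C0C0 for LTGRAY_BRUSH
Gdiplus::SolidBrush brush(Gdiplus::Color(255, GetRValue(c), GetGValue(c), GetBValue(c)));
grx.FillPolygon(&brush, pts, _countof(pts));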

Where to alter reference code to extract motion vectors from HEVC encoded video

So this question has been asked a few times, but I think my C++ skills are too deficient to really appreciate the answers. What I need is a way to start with an HEVC encoded video and end with CSV that has all the motion vectors. So far, I've compiled and run the reference decoder, everything seems to be working fine. I'm not sure if this matters, but I'm interested in the motion vectors as a convenient way to analyze motion in a video. My plan at first is to average the MVs in each frame to just get a value expressing something about the average amount of movement in that frame.
The discussion here tells me about the TComDataCU class methods I need to interact with to get the MVs and talks about how to iterate over CTUs. But I still don't really understand the following:
1) What information is returned by these MV methods, and in what format? With my limited knowledge, I assume there will be something like 7 values associated with each MV: the frame number, an index identifying a macroblock in that frame, the size of the macroblock, the x coordinate of the macroblock (probably the top left corner?), the y coordinate of the macroblock, the x coordinate of the vector, and the y coordinate of the vector.
2) where in the code do I need to put new statements that save the data? I thought there must be some spot in TComDataCU.cpp where I can put lines in that print the data I want to a file, but I'm confused when the values are actually determined and what they are. The variable declarations look like this:
// create motion vector fields
m_pCtuAboveLeft = NULL;
m_pCtuAboveRight = NULL;
m_pCtuAbove = NULL;
m_pCtuLeft = NULL;
But I can't make much sense of those names. AboveLeft, AboveRight, Above, and Left seem like an asymmetric mix of directions?
Any help would be great! I think I would most benefit from seeing some example code. An explanation of the variables I need to pay attention to would also be very helpful.
In TEncSlice.cpp, you can access every CTU in a loop:
for( UInt ctuTsAddr = startCtuTsAddr; ctuTsAddr < boundingCtuTsAddr; ++ctuTsAddr )
then you can pick out a specific CTU by its address via
pCtu->getCtuRsAddr() (pCtu is a TComDataCU*).
After that,
pCtu->getCUMvField()
will return the CTU's motion vector field, and you can extract the CTU's MVs from that object.
For example,
pCtu->getCUMvField(REF_PIC_LIST_0)->getMv(g_auiRasterToZscan[y * 16 + x]).getHor()
returns the horizontal component of a specific 4x4 block's MV for reference list 0 (getMv returns a TComMv, so its members are accessed with '.').
You can save this data right after m_pcCuEncoder->compressCtu( pCtu ), because compressCtu determines all of the CTU's data, such as the CU partitioning, motion estimation, etc.
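As a rough sketch of what that could look like (the method names below follow the HM reference software as I recall it; treat the exact signatures as assumptions and adapt them to your HM version), you could append one CSV row per 4x4 block right after the compressCtu call:
// #include <fstream> at the top of TEncSlice.cpp
// inside the CTU loop, after m_pcCuEncoder->compressCtu( pCtu ):
static std::ofstream mvCsv("motion_vectors.csv"); // columns: poc, ctuAddr, blkX, blkY, mvX, mvY
const TComCUMvField* mvField = pCtu->getCUMvField(REF_PIC_LIST_0); // list-0 MVs only
for (UInt y = 0; y < 16; ++y)      // a 64x64 CTU holds 16x16 blocks of 4x4 samples
{
  for (UInt x = 0; x < 16; ++x)
  {
    const TComMv& mv = mvField->getMv(g_auiRasterToZscan[y * 16 + x]);
    mvCsv << pCtu->getPic()->getPOC() << ','  // frame number (picture order count)
          << pCtu->getCtuRsAddr()     << ','  // CTU address in raster-scan order
          << x << ',' << y << ','             // 4x4 block position within the CTU
          << mv.getHor() << ',' << mv.getVer() << '\n';
  }
}
Averaging per frame is then just a matter of grouping the rows by the first column.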
I hope this information helps you and other people!

GDI+ Creating filled CustomLineCap

I'm trying to build custom line caps in GDI+ (C++), and I can't get it to draw a filled cap, while unfilled caps draw fine.
I set up a closed polygon path, create a CustomLineCap with the path as the first parameter (fillPath parameter), and call SetCustomStartCap on the pen:
std::vector<Gdiplus::Point> pathPoints =
{
Gdiplus::Point(20,0),
Gdiplus::Point(0,20),
Gdiplus::Point(-20,0),
Gdiplus::Point(20,0)
};
Gdiplus::GraphicsPath path;
path.AddPolygon(&pathPoints[0], 4);
Gdiplus::CustomLineCap startCap(&path, nullptr);
Gdiplus::CustomLineCap endCap(nullptr, path.Clone());
m_Pen.SetCustomStartCap(&startCap);
m_Pen.SetCustomEndCap(&endCap);
I've read comments that it might have to do with the point order, or with whether the path is definitely closed. I've tried having the points both clockwise and counter-clockwise, but it didn't seem to help.
Can anyone spot if I'm doing something obviously wrong, or maybe I'm missing something?
Aside from the order of the points, I think there's also a requirement that the shape must intersect the negative y-axis, that is, the line itself. So if you create your polygon like this:
std::vector<Gdiplus::Point> pathPoints =
{
Gdiplus::Point(20, -1),
Gdiplus::Point(0, 19),
Gdiplus::Point(-20, -1)
};
then you'll have a line cap with the desired shape and size, but it will overlap the line a bit.
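Putting that together, a sketch of the corrected setup (using the same names as in your snippet) might be:
// closed triangle that dips below y = 0 so it intersects the line itself
std::vector<Gdiplus::Point> pathPoints =
{
Gdiplus::Point(20, -1),
Gdiplus::Point(0, 19),
Gdiplus::Point(-20, -1)
};
Gdiplus::GraphicsPath path;
path.AddPolygon(&pathPoints[0], 3); // AddPolygon closes the figure for you
Gdiplus::CustomLineCap startCap(&path, nullptr); // passing it as fillPath gives a filled cap
m_Pen.SetCustomStartCap(&startCap);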
Alternatively, if you want arrowheads specifically, take a look at AdjustableArrowCap. Just set its isFilled flag to TRUE (via the constructor or SetFillState) and you're good to go. Or you can just use Pen's SetStartCap/SetEndCap methods with the LineCapArrowAnchor parameter, but then you probably can't customize your arrow.
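For completeness, the AdjustableArrowCap route could be as short as this (the 20x20 size is only an illustrative guess at your dimensions):
Gdiplus::AdjustableArrowCap arrow(20.0f, 20.0f, TRUE /* isFilled */);
m_Pen.SetCustomStartCap(&arrow);
m_Pen.SetCustomEndCap(&arrow);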

changing textureRect of a CCSprite created by CCRenderTexture

I have a CCSprite which gradually needs to be exhausted linearly from one end, let's say from left to right. For this purpose, I am trying to change the textureRect property of the sprite so that the part that got exhausted from one end is 'outside' the displayed frame of the sprite.
I did this sort of thing before with a sprite loaded from a spritesheet, and it worked perfectly. But I created this CCSprite using CCRenderTexture, and on changing the textureRect property the entire sprite disappears.
The first image is the original CCSprite which I get from CCRenderTexture. The second image shows what I want to achieve: the black dotted rectangular portion of the sprite needs to be omitted, and only the blue dotted portion needs to be displayed. Essentially, this blue dotted rectangle is my textureRect.
Is there any way I could make my sprite shrink from one end?
Also, is there any difference between a sprite created normally and one created using CCRenderTexture?
I have done a similar thing before using a low-level hack.
There is a workaround solution if you use CCProgressTimer; that's very easy, and I think it should be enough for your example.
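A minimal sketch of the CCProgressTimer route (cocos2d-x 2.x-style API, with mySprite standing in for the sprite you got from CCRenderTexture) could look like this:
// wrap the sprite in a bar-type progress timer and drive its percentage
CCProgressTimer* bar = CCProgressTimer::create(mySprite);
bar->setType(kCCProgressTimerTypeBar);
bar->setMidpoint(ccp(1.0f, 0.5f));      // keep the right edge fixed
bar->setBarChangeRate(ccp(1.0f, 0.0f)); // only the width changes
bar->setPercentage(100.0f);             // fully visible to start
// later, as the sprite gets "exhausted" from the left:
bar->setPercentage(60.0f);              // 40% removed from the left side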
But you said in a comment that you have some special requirements, like "exhaust it from both ends at once", in which case some low-level hacking is needed. My solution from my last project was:
1) Get the texture image's raw data. In cocos2d you can use CCRenderTexture and in cocos2d-x you can use CCImage.
2) CCRenderTexture has a method - (BOOL) saveToFile:(NSString *)name format:(tCCImageFormat)format. You can read its source code and then save the data into a 2D array instead, like byte raw[1024][768]. Each element in this array represents one pixel of your picture (the element type may not be byte; I don't remember the details exactly). The format MUST BE PNG, since transparency will be needed.
3) Modify the raw data directly: set the transparency of the pixels you want to disappear to 0x0.
4) Re-initialize a CCRenderTexture using the picture data you modified.
I can't provide the code directly since it is a trade secret and a core part of one of my projects, but I can share my solution. You will also need some knowledge about how the PNG format works. Read:
https://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header
Turns out I was making a silly mistake. While supplying values to the textureRect (CGRect), I was actually setting textureRect.origin.y to the height of the texture, which made my textureRect go beyond (above) the texture area. This explains why the sprite was disappearing.

Converting an image to a pixmap using the ImageMagick libraries

My assignment is to get "images read into pixmaps which you will then convert to texture maps". So for the pixmap part only, hear me out and tell me if I have the right idea and if there's an easier way. Library docs I'm using: http://www.imagemagick.org/Magick++/Documentation.html
Read in image:
Image myimage;
myimage.read( "myimage.gif" );
I think this is the pixmap I need to read 'image' into:
GLubyte pixmap[TextureSize][TextureSize][3];
So I think I need a loop that, for every 'pixmap' pixel index, assigns R,G,B values from the corresponding 'image' pixel indices. I'm thinking the loop body is like this:
pixmap[i][j][0] = myimage.pixelColor(i,j).redQuantum();
pixmap[i][j][1] = myimage.pixelColor(i,j).greenQuantum();
pixmap[i][j][2] = myimage.pixelColor(i,j).blueQuantum();
But I think the above functions return Quantums where I need GLubytes, so can anyone offer help here?
-- OR --
Perhaps I can take care of both the pixmap and texture map by using OpenIL (docs here: http://openil.sourceforge.net/tuts/tut_10/index.htm). Think I could simply call these in sequence?
ilutOglLoadImage(char *FileName);
ilutOglBindTexImage(ILvoid);
You can copy the quantum values returned by pixelColor(x,y) into a ColorRGB and you will get normalized (0.0 to 1.0) color values.
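A sketch of what your conversion loop could look like with that approach (TextureSize and pixmap are taken from your question; the 255 scaling assumes you want 8-bit channels):
#include <Magick++.h>
Magick::Image myimage;
myimage.read("myimage.gif");
GLubyte pixmap[TextureSize][TextureSize][3];
for (int i = 0; i < TextureSize; ++i)
{
    for (int j = 0; j < TextureSize; ++j)
    {
        // ColorRGB normalizes each channel to [0.0, 1.0] regardless of quantum depth
        Magick::ColorRGB c = myimage.pixelColor(i, j);
        pixmap[i][j][0] = static_cast<GLubyte>(c.red()   * 255.0);
        pixmap[i][j][1] = static_cast<GLubyte>(c.green() * 255.0);
        pixmap[i][j][2] = static_cast<GLubyte>(c.blue()  * 255.0);
    }
}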
If you don't have to stick with Magick++ maybe you can try OpenIL, which can load and convert your image to OpenGL texture maps without too much hassle.