This is a follow-up to this post, in which I asked about checking a condition on the border of a shape drawn with Cairomm in a Gtk::DrawingArea-derived widget. In my case, I have a virtual void drawBorder(const Cairo::RefPtr<Cairo::Context>& p_context) method which is overridden to specify the shape's border. For example, if I wanted a circle, I could provide the following implementation:
void drawBorder(const Cairo::RefPtr<Cairo::Context>& p_context)
{
    const Gtk::Allocation allocation{get_allocation()};
    const int width{allocation.get_width()};
    const int height{allocation.get_height()};
    const int smallestDimension{std::min(width, height)};
    const int xCenter{width / 2};
    const int yCenter{height / 2};

    p_context->arc(xCenter,
                   yCenter,
                   smallestDimension / 2.5,
                   0.0,
                   2.0 * M_PI);
}
I would like to use this method to check my condition on the border curve, as suggested in the answer:
So, you would somehow get a cairo context (cairo_t in C), create your shape there (with line_to, curve_to, arc etc). Then you do not call fill or stroke, but instead cairo_copy_path_flat.
So far, I have been unable to get a usable Cairo::Context mock to perform the check. I don't need to draw anything; I only need to get the underlying path and work on it.
So far, I have tried:
passing nullptr as the Cairo::Surface (which of course failed);
getting a surface equivalent to my widget's, but that failed too. gdk_window_create_similar_surface looked promising, but I have not found an equivalent for widgets.
How could one go about getting a minimal mock context to perform such checks? This would also help me very much with my unit testing later on.
So far, I have this code:
bool isTheBorderASimpleAndClosedCurve()
{
    const Gtk::Allocation allocation{get_allocation()};
    Glib::RefPtr<Gdk::Window> widgetWindow{get_window()};
    Cairo::RefPtr<Cairo::Surface> widgetSurface{widgetWindow->create_similar_surface(Cairo::Content::CONTENT_COLOR_ALPHA,
                                                                                     allocation.get_width(),
                                                                                     allocation.get_height())};

    Cairo::Context nakedContext{cairo_create(widgetSurface->cobj())};
    const Cairo::RefPtr<Cairo::Context> context{&nakedContext};

    drawBorder(context);

    // Would like to get the path and test my condition here...!
}
It compiles and links, but at runtime I get a segfault with this message and a bunch of garbage:
double free or corruption (out): 0x00007ffc0401c740
Just create a cairo image surface with size 0x0 and create a context for that.
Cairo::RefPtr<Cairo::Surface> surface = Cairo::ImageSurface::create(
Cairo::Format::FORMAT_ARGB32, 0, 0);
Cairo::RefPtr<Cairo::Context> context = Cairo::Context::create(surface);
Since the surface is not used for anything, it does not matter which size it has.
(Side note: According to the API docs that Google gave me, the constructor of Context wants a cairo_t* as argument, not a Cairo::Context*; this might explain the crash that you are seeing)
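To make the copy_path_flat suggestion concrete, here is a minimal, untested sketch of how the throwaway context could be combined with drawBorder() and a walk over the flattened path data; the C-level iteration over cairo_path_data_t and the closed-curve check are my own illustration, not part of the original answer:

#include <cairomm/context.h>
#include <cairomm/surface.h>

bool isTheBorderASimpleAndClosedCurve()
{
    // The surface is never drawn to; it only backs the context.
    Cairo::RefPtr<Cairo::ImageSurface> surface =
        Cairo::ImageSurface::create(Cairo::Format::FORMAT_ARGB32, 0, 0);
    Cairo::RefPtr<Cairo::Context> context = Cairo::Context::create(surface);

    drawBorder(context);

    // copy_path_flat() flattens curves into line segments, so the path
    // only contains MOVE_TO, LINE_TO and CLOSE_PATH elements.
    Cairo::Path* path = context->copy_path_flat();
    cairo_path_t* cPath = path->cobj();

    bool isClosed = false;
    for (int i = 0; i < cPath->num_data; i += cPath->data[i].header.length)
    {
        const cairo_path_data_t& element = cPath->data[i];
        if (element.header.type == CAIRO_PATH_CLOSE_PATH)
            isClosed = true;
        // ... inspect the MOVE_TO/LINE_TO points here for the "simple" part ...
    }

    delete path; // copy_path_flat() transfers ownership to the caller
    return isClosed;
}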
Related
I'm entirely new to SDL 2, and I'm hoping to find some help with making my very first proper program for a class in it. We've been provided with some code already for use in this project, which is why I'm not simply using a BlitSurface function for this. If that is indeed the better solution, I'll switch over to it. This is part of a State to be used when the program runs, showing a title image.
I am getting a break error due to a pointer issue in the following code:
void MenuState::Enter()
{
    // Loads the title image used for the State
    Sprite* extBackgroundSprite = met_extSystem.met_pointextSpriteManager->CreateSprite("../assets/Testimage1.bmp", 0, 0, 768, 1024);
}
This refers to a Sprite made by the SpriteManager class's CreateSprite function, seen here:
Sprite* SpriteManager::CreateSprite(const std::string& point_stringFilePath, int point_intX, int point_intY, int point_intWidth, int point_intHeight)
{
    auto iter = met_arraypointextTextures.find(point_stringFilePath); // breaks here

    // If the iterator cannot locate the sprite we need in our already loaded memory,
    // it needs to be loaded into our map to create pointers
    if (iter == met_arraypointextTextures.end())
    {
        SDL_Surface* extSurface = SDL_LoadBMP(point_stringFilePath.c_str());
        SDL_Texture* extTexture = SDL_CreateTextureFromSurface(met_pointextRenderer, extSurface);
        SDL_FreeSurface(extSurface);
        met_arraypointextTextures.insert(std::pair<std::string, SDL_Texture*>(point_stringFilePath, extTexture));
        iter = met_arraypointextTextures.find(point_stringFilePath);
    }

    // Creates the sprite, adds a new index point via push_back
    Sprite* extSprite = new Sprite(iter->second, point_intX, point_intY, point_intWidth, point_intHeight);
    met_arraypointextSprites.push_back(extSprite);
    return extSprite;
}
I hope this is enough information and code to present my problem. If not, let me know! And thank you in advance.
It turns out the issue was impossible to solve with the information I provided. The pointer did indeed need to be initialized, but with arguments found in a constructor that I had not included here.
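For anyone else hitting the same symptom: a crash on the very first member access (met_arraypointextTextures.find) through met_pointextSpriteManager is the classic sign of calling through a pointer that was never initialized. A hypothetical sketch of the kind of fix, with made-up surrounding names (System, Game::Init) and an assumed constructor argument, since the real constructor was not shown in the question:

struct System
{
    SpriteManager* met_pointextSpriteManager = nullptr; // null by default so misuse is detectable
};

void Game::Init(SDL_Renderer* extRenderer)
{
    // The manager must be constructed (with whatever arguments its real
    // constructor requires; the renderer here is only an assumption) before
    // any state calls CreateSprite().
    met_extSystem.met_pointextSpriteManager = new SpriteManager(extRenderer);
}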
I have followed a DirectX 9 tutorial on YouTube and have tried to modify the program to display multiple triangles based on a set of points. I am using it as a sort of plotter. In my testing, I generate a list of points within my Plotter class; the Plotter class then generates 3 vertices to create a small triangle around each point. The points are then passed to the DirectX device.
I have moved the code that generates the polygons into my Update method, as I need to update the polygon list with fresh polygons.
The code works, but every now and then it crashes with the following error message:
Unhandled exception at 0x010F6AF1 in DX3DPlotTest.exe: 0xC0000005: Access violation reading location 0x00000000.
I'm sure that the problem has to do with the memcpy call being made over and over. I've tried deleting pVerts, but that creates its own error, since pVerts is never initialized.
Here is my Update method:
void TestApp::Update(float dt)
{
    void* pVerts;
    plotter = new Plotter(MaxPoints, 0.01f);

    float x, y;
    for (ULONG i = 0; i < MaxPoints; i++)
    {
        x = (float)(distribution(generator) - 2.0f);
        y = (float)(distribution(generator) - 2.0f);
        plotter->Plot(x, y);
    }

    m_pDevice3D->CreateVertexBuffer(
        plotter->listContentCount * sizeof(VertexPositionColor),
        0,
        VertexPositionColor::FVF,
        D3DPOOL_MANAGED,
        &VB,
        NULL);

    // d3d vertex buffer VB
    VB->Lock(0, sizeof(VertexPositionColor) * plotter->listContentCount, (void**)&pVerts, 0);
    memcpy(pVerts, plotter->m_pVertexList, sizeof(VertexPositionColor) * plotter->listContentCount);
    VB->Unlock();
}
Can someone please help me understand how to fix this problem? I've been fiddling around with it for hours. It does work, but only for a limited amount of time.
Thanks all.
EDIT:
OK, now I'm sure it has to do with recreating my Plotter instance:
Plotter::Plotter(UINT PointCount, float pointsize)
{
    listSize = PointCount * 3;
    listContentCount = 0;
    bufferContentCount = 0;
    Polycount = 0;

    m_pStdtri = new VertexPositionColor[3];
    m_pVertexList = new VertexPositionColor[listSize];

    m_pStdtri[0] = VertexPositionColor(0.0f, 1.0f * pointsize, d3dColors::Red);
    m_pStdtri[1] = VertexPositionColor(1.0f * pointsize, -1.0f * pointsize, d3dColors::Lime);
    m_pStdtri[2] = VertexPositionColor(-1.0f * pointsize, -1.0f * pointsize, d3dColors::Red);
}

Plotter::~Plotter()
{
    // arrays allocated with new[] must be released with delete[]
    delete[] m_pStdtri;
    delete[] m_pVertexList;
}

void Plotter::Plot(float x, float y)
{
    Polycount++;
    m_pVertexList[listContentCount] = VertexPositionColor(x + m_pStdtri[0].x, y + m_pStdtri[0].y, d3dColors::Red);
    listContentCount++;
    m_pVertexList[listContentCount] = VertexPositionColor(x + m_pStdtri[1].x, y + m_pStdtri[1].y, d3dColors::Lime);
    listContentCount++;
    m_pVertexList[listContentCount] = VertexPositionColor(x + m_pStdtri[2].x, y + m_pStdtri[2].y, d3dColors::Blue);
    listContentCount++;
}
There are a couple of things that could be wrong here. The plotter object never seems to be disposed of, though it is possible that happens elsewhere. What bothers me more is that you call CreateVertexBuffer over and over again, apparently without ever releasing the resource you created. So, in my opinion, this is what happens: in every frame you create a new vertex buffer. As the memory on your GPU runs low, the call eventually fails; you don't detect the failure and try to use the "created" buffer, which was never actually created. Keep in mind that the buffer is not destroyed even if you delete the object holding the VB variable: CreateVertexBuffer occupies resources on the GPU, and they need to be explicitly released when no longer needed. But back to the point: the call fails at some point, and that results in a NULL pointer access. My suggestion would be to create the buffer just once and then only update it each frame. But first, verify that this is indeed what is happening.
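To make that suggestion concrete, here is a rough, untested sketch of the create-once-update-per-frame approach, assuming MaxPoints is fixed; the CreateBuffers() helper name is made up, and the HRESULT checks show where the failure described above would be caught:

// Called once, e.g. during initialization (hypothetical helper).
bool TestApp::CreateBuffers()
{
    HRESULT hr = m_pDevice3D->CreateVertexBuffer(
        MaxPoints * 3 * sizeof(VertexPositionColor), // worst-case size
        0,
        VertexPositionColor::FVF,
        D3DPOOL_MANAGED,
        &VB,
        NULL);
    return SUCCEEDED(hr); // never use VB if this failed
}

void TestApp::Update(float dt)
{
    // ... refill plotter->m_pVertexList as before ...

    void* pVerts = nullptr;
    if (SUCCEEDED(VB->Lock(0, sizeof(VertexPositionColor) * plotter->listContentCount,
                           &pVerts, 0)))
    {
        memcpy(pVerts, plotter->m_pVertexList,
               sizeof(VertexPositionColor) * plotter->listContentCount);
        VB->Unlock();
    }
}

// On shutdown, release the GPU resource explicitly:
// if (VB) { VB->Release(); VB = NULL; }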
I'm building a simple generic engine as my real start in making games, and I am trying to be somewhat organized and decent about it, meaning I don't want it to be something I throw aside once I've made what I'm planning to.
I add objects to be displayed (drawObjects), and these can either move or not, and have an animation or not.
In case they DO have an animation, I want to initialize a single animationSet, and this animationSet will have xxx animationComp inside of it. As I'm trying to be neat and have worked a bit on "optimizations" for memory and CPU usage (such as sharing already-loaded image pointers, and whatever else came to mind), I wanted to avoid asking for possibly unused memory in arrays.
So I initially had animationSetS* animationSet = NULL;, planning to do animationSet = new animationSetS[spacesINEED]; afterwards, only on the added objects that need animation, so that the ones without animation keep a NULL and therefore don't use memory (correct?).
And then this question popped up! (title)
struct animationComp {
    SDL_Rect* clip;
    int clipsize;
};

struct animationSetS {
    animationComp* animation;
    int currentFrame;
    int currentAnimation;
    int animationNumber;
};

struct drawObject { // An object.
    char* name;
    SDL_Surface* surface;
    bool draw = true;
    float xPos;
    float yPos;
    bool willMove = false; // 0 - won't move, 10 - moves a lot, TO IMPLEMENT
    bool isSprite = false;
    animationSetS* animationSet;
};
I ramble a lot in my questions, sorry about that. For any clarifications, reply here; I'll answer within 10 minutes for the next hour or so, perhaps more.
Thanks!
Setting the pointer to NULL means that you'll be able to add ASSERT(ptr != NULL); and KNOW that your pointer does not accidentally contain some rubbish value from whatever happens to be in the memory it was using.
So, if for some reason, you end up using the object before it's been properly set up, you can detect it.
It also helps if you sometimes don't use a field: you can still call delete stuff; [assuming it was allocated in the first place].
Note that leaving a variable uninitialized means that it can have ANY value within its valid range [and for some types, outside the valid range - e.g. pointers and floating point values can be "values that are not allowed by the processor"]. This means that it's impossible to "tell" within the code whether it has been initialized or not - but things will go horribly wrong if you don't initialize things!
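As a small illustration of that point, a minimal sketch assuming the struct definitions from the question are in scope (the destroyObject helper is hypothetical):

#include <cassert>

void destroyObject(drawObject* obj)
{
    assert(obj != NULL); // catches use of an object that was never set up

    // delete[] on a null pointer is a no-op, so objects whose animationSet
    // stayed NULL need no special case; this matches an earlier
    // new animationSetS[spacesINEED].
    delete[] obj->animationSet;
    delete obj;
}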
If this is really supposed to be implemented in C++ (as you write), why don't you use the C++ Standard Library? For example:
struct animationSetS {
    std::vector< std::shared_ptr<animationComp> > animation;
    // ...
};
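Usage then becomes quite compact; a hypothetical sketch (make_shared default-constructs the component, and no manual delete is needed):

#include <memory>
#include <vector>

void example()
{
    animationSetS set;
    set.animation.push_back(std::make_shared<animationComp>());
    // The shared_ptrs release their animationComp objects automatically
    // when 'set' goes out of scope.
}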
I realized that writing to gil::color_converted_view doesn't affect the underlying view's data. I wonder if that's correct?
For example, let's say that I want to write a program that will take the value of the red channel and set the blue channel's value to half of that. Here's my failed attempt:
template <typename SrcView>
void half_red_to_blue(SrcView& view)
{
    // Since SrcView might be RGB or BGR or some other type,
    // I decided to use a color_converted_view to ensure that I'm
    // accessing the correct channels.
    typedef typename gil::color_converted_view_type<SrcView, gil::rgb8_pixel_t>::type MyView;
    MyView my_view = gil::color_converted_view<gil::rgb8_pixel_t>(view);

    struct my_lambda
    {
        void operator()(gil::rgb8_pixel_t& p)
        {
            p[2] = p[0] / 2;
        }
    };

    gil::for_each_pixel(my_view, my_lambda());
}
However, it only works when SrcView is actually gil::rgb8_view_t. If I call, e.g., half_red_to_blue<gil::bgr8_view_t>(view), the view is not changed at all! I poked around a little in the debugger, and it seems the write operation writes to some kind of proxy location instead of the original pixels.
Any ideas? Thanks in advance!
This is valid behaviour in Boost.GIL, because the colour components of a pixel are touched only upon access of the pixel. You can modify my_lambda::operator() to use get_color to trigger colour component access.
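For concreteness, here is a minimal, untested sketch of using get_color. It addresses the channels by colour name directly on the original view and drops the colour-converted view entirely, which is my own reading of the suggestion rather than something stated in the answer; get_color picks the right channel whether the layout is RGB or BGR, so the writes go straight to the underlying pixels:

// Assumes the same GIL headers and 'gil' namespace alias as in the question.
template <typename SrcView>
void half_red_to_blue(SrcView& view)
{
    gil::for_each_pixel(view, [](auto&& p)
    {
        // Select channels by colour name instead of positional index.
        gil::get_color(p, gil::blue_t()) = gil::get_color(p, gil::red_t()) / 2;
    });
}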
I have a class Message that has a std::string as a data member, defined like this:
class Message
{
// Member Variables
private:
std::string text;
(...)
// Member Functions
public:
Message(const std::string& t)
: text(t) {}
std::string getText() const {return text;}
(...)
};
This class is used in a vector in another class, like this:
class Console
{
// Member Variables
private:
std::vector<Message> messageLog;
(...)
// Member Functions
public:
Console()
{
messageLog.push_back(Message("Hello World!"));
}
void draw() const;
};
In draw(), there's an iterator that calls getText(). When it does, the program segfaults. I've determined that text is valid inside the Message constructor. However, I can't tell if it's valid from inside Console. I'm assuming it is, but if I try to inspect indices of Console's messageLog, gdb tells me this:
(gdb) p messageLog[0]
One of the arguments you tried to pass to operator[] could not be converted to what
the function wants.
Anyone know what's going on?
EDIT: here's draw(). TCODConsole is part of a curses library I'm using, and so this function prints each message in Console to a part of the curses screen. TL and BR are Point member objects (two ints) that tell where on the screen to draw Console. I left out parts of Message and Console in the original question to hopefully make things clearer, but if you need me to post the entire classes then I can. They aren't too long.
void Console::draw() const
{
    int x = TL.getX(), y = TL.getY();
    int width = BR.getX() - TL.getX();
    int height = BR.getY() - TL.getY();

    // draw the Console frame
    TCODConsole::root->printFrame(x, y, width, height, true);

    // print the Console's messages
    vector<Message>::const_iterator it;
    for (it = messageLog.begin(); it < messageLog.begin() + height - 1; ++it)
    {
        string message = "%c" + it->getText();
        TCODConsole::setColorControl(TCOD_COLCTRL_1,
                                     it->getForeColor(),
                                     it->getBackColor());
        y += TCODConsole::root->printRectEx(x, y, width, height,
                                            TCOD_BKGND_NONE,
                                            TCOD_LEFT,
                                            message.c_str(),
                                            TCOD_COLCTRL_1);
    }
}
My guess is that by the time you use it->getText(), the iterator is past the end. Add a check for it != messageLog.end() when you walk the vector, before calling it->getText().
Is it definitely std::vector messageLog and not std::vector<Message> messageLog? That seems a bit odd.
What does the height have to do with the vector's index? You have:
messageLog.begin()+height-1;
Why are you adding the screen coordinate to the iterator? That seems to be your problem: you're most likely indexing past the end, and that's why you're getting a SIGSEGV.
What you probably want is to simply iterate over all the messages in the vector and display them at a particular location on the screen. I see what you're trying to do, but if you're trying to calculate the screen boundary using the iterator you're definitely going about it the wrong way. Try running a counter or get messageLog.size() and then recalculate the height with each iteration. As for the loop just do:
for(it=messageLog.begin(); it!=messageLog.end(); ++it)
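If the number of printed messages really does need to be capped by the console height as well, one way to combine both bounds is sketched below (my illustration, not part of the original answer):

// Stop at whichever comes first: the end of the vector or the last row
// that fits inside the frame.
int row = 0;
for (vector<Message>::const_iterator it = messageLog.begin();
     it != messageLog.end() && row < height - 1;
     ++it, ++row)
{
    // print *it as before
}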
It's probably because the scope of the Message object created in the Console constructor is just that constructor. So, if your program tries to access this object in another method, like draw, you will get this segmentation fault, since the object is deleted after execution.
Try this (just insert the new keyword):
Console()
{
messageLog.push_back(new Message("Hello World!"));
}
In this case, the object is not deleted when the constructor ends.
Just remember to delete the created objects when your program no longer needs them.