I have a weird issue and I'm not sure where it is coming from. I pass two strings from Lua back to C++. The first string is a file name, which has to be converted to a wchar_t* because that is what the built-in DirectX texture loading functions require. The second string stays a normal char*. Using breakpoints I found that after "TextureList.count(filepath)" is run, the two strings passed to the function seem to get destroyed and become random garbage. Normally this function works fine if I type the strings in myself, but for my script engine I need to be able to load textures externally using Lua.
short ID = lua_tonumber(env, 1);
string Texture = lua_tostring(env, 2);
const char* Archive = lua_tostring(env, 3);
wstring_convert<codecvt_utf8_utf16<wchar_t>> convert;
wstring final_ = convert.from_bytes(Texture);
for (auto& Sprites : StageBackground->BackgroundSprites)
{
    if (Sprites->Object_ID == ID)
    {
        LoadTextureFromMemory(final_.c_str(), Archive, Sprites->Texture);
        break;
    }
}
This is the function that is called from Lua to load textures.
EDIT: I noticed that the problem can be narrowed down to the "Archive" variable being the one that is destroyed. I still cannot find out why. If I switch from Release to Debug mode in Visual Studio I get debug assertion errors.
Likely the strings were already destroyed but just happened to still contain valid data because that portion of the stack hadn't been used yet. If you need to keep the strings around, make your own copies of them whose lifetime you control.
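For example (a minimal sketch reusing the names from the snippet above), keep Archive as a std::string copy instead of holding on to the raw pointer returned by lua_tostring():

short ID = (short)lua_tonumber(env, 1);
string Texture = lua_tostring(env, 2);   // std::string already copies the bytes
string Archive = lua_tostring(env, 3);   // copy this one too; the raw pointer is owned by Lua
// (for robustness, check lua_isstring() first; omitted here)
// ...
LoadTextureFromMemory(final_.c_str(), Archive.c_str(), Sprites->Texture);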
Hi there. I am trying to pass a text field's contents into a database call (MySQL). When I look at the value returned from the GetValue() call on the text control, I simply do not get the text that was entered into the field, via any of the UTF functions in wxWidgets. I have tried the following:
wxChar buffer = ((wxTextCtrl*)FindWindow(wxID_TITLE))->GetValue().mb_str(wxConvUTF8);
// gobbledygook for the whole thing
wxChar buffer = ((wxTextCtrl*)FindWindow(wxID_TITLE))->GetValue().mb_str();
// NULL, it fails completely
wxChar buffer = ((wxTextCtrl*)FindWindow(wxID_TITLE))->GetValue().ToUTF8();
// more gobbledygook
wxChar buffer = ((wxTextCtrl*)FindWindow(wxID_TITLE))->GetValue().utf8_str();
// more gobbledygook
message.Printf(_T("The title saved as wxCharBuffer =%s"),buffer.data());
wxMessageBox(message,_("Rivendell"), wxICON_ERROR|wxOK);
The message box is how I am trying to display what is in the wxChar buffer, but I am also running in debug so I can simply look at it during the run and confirm that it is incorrect. Please note that I tried these wxChar buffer lines one at a time, separately (not as they are listed here); I just wanted to show the things I had tried.
What is the correct way to do this? The type of characters I am attempting to save in the db looks like:"check Todd 1 乞: 乞丐 qǐgài, 乞求 qǐqiú, 乞讨 qǐtǎo."
The gobbledygook output looks like Chinese characters etc., even in the English part of the field ("check Todd")...
Anyone who has an idea of how to do this please let me know. Thanks...
Tb
I appreciate the help provided, and after trying some things I found an answer.
The correct way to do this seems to be the following: put the TextCtrl field into a wxString using wx_str(). Then put the wxString into a wxCharBuffer via the ToUTF8() function. Then use the data() function of the wxCharBuffer to pass a char pointer.
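Roughly, those steps look like this (a sketch based on the snippet in the question; the key point is that the buffer object must stay alive while the char pointer is in use):

wxString title = ((wxTextCtrl*)FindWindow(wxID_TITLE))->GetValue();
const wxCharBuffer buf = title.ToUTF8();   // owns the UTF-8 bytes
const char* utf8Title = buf.data();        // valid only while buf is alive
// pass utf8Title (after escaping) to the MySQL call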
Part of my problem stemmed from trying to display what was in those fields via the Visual Studio debugger and/or wxMessage boxes, so sometimes my conversions were wrong (as noted by the previous poster).
I was able to set the wxString(s) to Unicode characters and have them handled correctly by the MySQL call (the call would crash before). The chk_title variable returned seems to be correctly encoded into UTF-8 and escaped.
Thanks.
In my C++ application, a fork()ed child process calls execv() on the same executable to process some work with different arguments, communicating with the parent process through pipes. To get the pathname to self, I execute the following code on the Linux port (I have different code on the Macintosh):
const size_t bufSize = PATH_MAX + 1;
char dirNameBuffer[bufSize];
// Read the symbolic link '/proc/self/exe'.
const char *linkName = "/proc/self/exe";
const int ret = int(readlink(linkName, dirNameBuffer, bufSize - 1));
However, if I replace the executable with an updated version of the binary on disk while it is running, the readlink() string result is: "/usr/local/bin/myExecutable (deleted)"
I understand that my executable has been replaced by a newer version and the original target of /proc/self/exe is gone; however, when I then call execv() it fails with errno 2 (No such file or directory) because of the extra trailing " (deleted)" in the result.
I would like execv() to use either the old executable for self or the updated one. I could detect a string ending with " (deleted)", strip that suffix, and resolve to the updated executable, but that seems clumsy to me.
How can I execv() the current executable (or its replacement if that is easier) with a new set of arguments when the original executable has been replaced by an updated one during execution?
Instead of using readlink to discover the path to your own executable, you can directly call open on /proc/self/exe. Since the kernel already keeps an open reference to the executable of a running process, this will give you an fd regardless of whether the path has been replaced with a new executable or not.
Next, you can use fexecve instead of execv which accepts an fd parameter instead of a filename parameter for the executable.
#include <fcntl.h>
#include <unistd.h>

int fd = open("/proc/self/exe", O_RDONLY);
fexecve(fd, argv, envp);
The above code omits error handling for brevity.
One solution is at executable startup (e.g. near the beginning of main()) to read the value of the link /proc/self/exe once and store it statically for future use:
static string savedBinary;
static bool initialized = false;
// To deal with the issue of a long-running executable having its binary replaced
// with a newer one on disk, we compute the resolved binary once at startup.
if (!initialized) {
    const size_t bufSize = PATH_MAX + 1;
    char dirNameBuffer[bufSize];
    // Read the symbolic link '/proc/self/exe'.
    const char *linkName = "/proc/self/exe";
    const int ret = int(readlink(linkName, dirNameBuffer, bufSize - 1));
    // readlink() does not null-terminate, so terminate the buffer explicitly.
    dirNameBuffer[ret > 0 ? ret : 0] = '\0';
    savedBinary = dirNameBuffer;
    // On at least Linux, if the executable is replaced, readlink() of
    // "/proc/self/exe" gives "/usr/local/bin/flume (deleted)".
    // Therefore, we just compute the binary location statically once at
    // startup, before it can possibly be replaced, but we leave this code
    // here as an extra precaution.
    const string deleted(" (deleted)");
    const size_t deletedSize = deleted.size();
    const size_t pathSize = savedBinary.size();
    if (pathSize > deletedSize) {
        const size_t matchPos = pathSize - deletedSize;
        if (0 == savedBinary.compare(matchPos, deletedSize, deleted)) {
            // Deleted original binary. Issue a warning, throw an exception, or exit.
            // Or kludge the original path with: savedBinary.erase(matchPos);
        }
    }
    initialized = true;
}
// Use savedBinary value.
In this way, it is very unlikely that the original executable would be replaced within the microseconds it takes main() to cache the path to its binary. Thus, a long-running application (e.g. one that runs for hours or days) can have its binary replaced on disk and, per the original question, can still fork() and execv() the updated binary that perhaps has a bug fix. This has the added benefit of working across platforms, so the differing Macintosh code for reading the binary path can likewise be protected from binary replacement after startup.
Editor's note: readlink() does not null-terminate the string it writes, which is why the code above terminates the buffer explicitly after the call; without that, the original version only worked by accident when the buffer happened to be zero-filled.
The reason you get the (deleted) part in the symbolic link is that you have substituted the file holding the correct program binary with a different file, so the symbolic link to the executable is never valid again. Suppose you used this symbolic link to get the symbol table of the program, or to load some data embedded in it, and you then changed the program: the table would be incorrect and you could even crash the program. The executable file for the program you were executing is no longer available (you have deleted it), and the program you have put in its place doesn't correspond to the binary you are executing.
When you unlink(2) a program that is being executed, the kernel marks that symlink in /proc so that the program can:
- detect that the binary has been deleted and is no longer accessible;
- still gather some information about the last name it had (instead of the symlink simply disappearing from the /proc tree).
You cannot write to a file that is being executed by the kernel, but nobody prevents you from erasing it. The file will continue to be present in the filesystem as long as you execute it, but no name points to it any more (its space will be deallocated once the process calls exit(2)). The kernel doesn't erase its contents until the inode's reference count in kernel memory drops to zero, which happens once all uses (references) of that file are gone.
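The same behavior can be seen with an ordinary file (a small sketch; data.tmp is just an example name): the data stays reachable through an open descriptor even after the last name is removed.

#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("data.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    write(fd, "hello", 5);
    unlink("data.tmp");      // the last directory entry is gone, but the inode survives
    lseek(fd, 0, SEEK_SET);
    char buf[6] = {0};
    read(fd, buf, 5);        // still reads "hello"
    close(fd);               // only now can the inode's space be reclaimed
    return 0;
}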
In this code:

std::string filename;
osg::Image* image = osgDB::readImageFile(filename + ".dicom");

the osg::Image variable image gets a wrong return value from the file read. Debugging at the line above, the watch window shows the following:
The _fileName (std::string) values shown on the first and second lines are both "digest", but on the fourth line the value of _fileName turns out to be "iiiiii\x*6" with a capacity of 0.
To my understanding, the _fileName on the fourth line of the watch window refers to the same osg::Image member variable as the _fileName on the first and second lines, so all of them should show the same value. I am not sure why they differ.
The MSVC++ implementation of std::string uses different storage strategies for short strings and for long ones. Short strings (16 bytes or less) are stored in a buffer embedded directly inside the std::string object (you will see it as _Bx._Buf in Raw View). Long strings are stored in an independently allocated block of memory located elsewhere (pointed to by _Bx._Ptr).
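For illustration (the 16-byte threshold is specific to this MSVC implementation, not something the standard guarantees):

std::string shortStr = "digest";   // 6 characters: kept in the embedded buffer (_Bx._Buf)
std::string longStr(32, 'x');      // 32 characters: kept in a heap block pointed to by _Bx._Ptr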
If you violate the integrity of the std::string object, you can easily end up in a situation like the one you describe: the object thinks the data is stored in the external buffer while in reality it is stored in the internal one, or vice versa. That can easily confuse the debugger as well.
I suggest you open the Raw View of your std::string object and check what it shows in _Bx._Buf and _Bx._Ptr. If the current _Myres value is less than or equal to the internal buffer size, then the data is [supposed to be] stored in the internal buffer; otherwise it is stored in the external memory block. See whether this invariant really holds. If it doesn't, then there's your problem, and you'll just have to find the point at which it got broken.
For some reason your filename argument isn't getting .dicom attached to it when it becomes _fileName ("digest" should become "digest.dicom"). OSG decides which plugin to use for file loading by extension, so it will have no idea how to load the current one, and that is why the second reference to _fileName never gets initialized by any plugin.
By the way, I don't think the dicom plugin is part of the standard OSG prebuilt package - you might have to gather the dependencies yourself and build the plugin.
I have a debug function that exports some images (held inside an unmanaged C++ DLL) to hard disk. The function has two parameters that are marshaled into a single string for the complete file name. This function is called a lot of times. While executing the application I noticed that it becomes slower over time, and I traced the slowdown back to this function (thanks to SlimTune!). If I leave the marshaling code but comment out this line:
std::string completeFileName = m_csTempFolder + stdSubFolderName + stdFileName + ".bmp";
replacing it by:
std::string completeFileName = "C:\\Users\\pedro\\AppData\\Local\\Temp\\BatchProc64\\test.bmp";
I don't have the problem any more. It looks like the issue is with mixing the just-marshaled strings with a string constant into a new string. Can anyone explain this?
The app's memory use remains stable, and there is no crash.
I'm using Visual Studio 2008 with .NET 3.5
Thanks in advance!
The start of the function code is below:
// Export the 32 bit buffer into an 8 bit buffer, then write it into a file; used for trace image generation (app debug)
void CMtxSurface::ExportTraceImage32bit(System::String^ subFolderName, System::String^ fileName, MIL_ID img32bit)
{
    // The [Conditional("_TRACE_")] attribute doesn't work in the C++ compiler, so we need to use #ifdef...
#ifdef _TRACE_
    marshal_context context;
    std::string stdSubFolderName = context.marshal_as<std::string>(subFolderName);
    std::string stdFileName = context.marshal_as<std::string>(fileName);
    std::string completeFileName = m_csTempFolder + stdSubFolderName + stdFileName + ".bmp";
    // std::string completeFileName = "C:\\Users\\pedro\\AppData\\Local\\Temp\\BatchProc64\\test.bmp";
I am attempting to create an edit box that allows users to input text. I've been working on this for some time now and have tossed around different ideas. Ultimately, the one I think would offer the best performance is to load all the characters from the .ttf (I'm using SDL to manage events, windows, text, and images for OpenGL) onto their own surfaces, and then render those surfaces onto textures one time. Then each frame I can just bind the appropriate texture in the appropriate location.
However, now I'm thinking about how to access these glyphs. My limited background suggests something like this:
struct CharTextures {
    char glyph;
    GLuint TextureID;
    int Width;
    int Height;
    CharTextures* Next;
};

// Code
CharTextures* FindGlyph(char Foo) {
    CharTextures* Poo = _FirstOne;
    while (Poo != NULL) {
        if (Foo == Poo->glyph) {
            return Poo;
        }
        Poo = Poo->Next;
    }
    return NULL;
}
I know that will work. However, it seems very wasteful to iterate the entire list each time. My scripting experience has taught me some Lua, and Lua tables allow unordered indices of all sorts of types. How could I mimic that in C++, so that instead of this iteration I could do something like:
CharTextures* FindGlyph(char Foo) {
    return PooPointers[Foo]; // somehow use the character as a key to get the glyph pointer without iteration
}
I was thinking I could try converting to the numerical value, but I don't know how to convert a char to UTF-8 values or whether I could use those as keys. I could convert to ASCII, but would that handle all the characters I would want to be able to type? I am trying to get this application to run on Mac and Windows and am not sure about the machine specifics. I've read about the differences between the formats (ASCII vs Unicode vs UTF-8 vs UTF-16, etc.); I understand it has to do with bit width and endianness, but I understand relatively little about the interface differences between platforms and the implications of said endianness for my code.
Thank you
What you probably want is
std::map<char, CharTextures*> PooPointers;
Using the array access operator will also do a search in the map behind the scenes, but an optimized one.
What g-makulik has said is probably right; the map may be what you're after. To expand on that reply: a map is automatically sorted based on the key (char in this case), so a lookup based on the character is extremely quick using
CharTextures* pCharTexture = PooPointers[Foo];
This works well if you want a sparse data structure where you don't predefine a texture for every character.
Note that indexing the map as above with a key that doesn't exist yet will create a default entry in the map.
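If that default insertion is undesirable, a lookup with find() avoids it (a sketch reusing the names from the question):

CharTextures* FindGlyph(char Foo) {
    std::map<char, CharTextures*>::iterator it = PooPointers.find(Foo);
    return (it != PooPointers.end()) ? it->second : NULL;
}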
Depending on your general needs you could also use a simple vector, if generalized sorting isn't important or if you know you'll always have a fixed number of characters; you could fill the vector with predefined data for each possible character. It all depends on your memory requirements.
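If you go the fixed-size route, a sketch of that approach (glyphTable and AddGlyph are made-up names for illustration) uses the unsigned character value as a direct index, so there is no search at all:

std::vector<CharTextures*> glyphTable(256, NULL);   // one slot per possible byte value

void AddGlyph(CharTextures* tex) {
    glyphTable[static_cast<unsigned char>(tex->glyph)] = tex;
}

CharTextures* FindGlyph(char Foo) {
    return glyphTable[static_cast<unsigned char>(Foo)];
}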