Problem reading back a value from a table in Lua - C++

I've "inherited" an application that uses Lua with c++.
To move data between the two, the data is pushed to a table and read back.
At some point a pointer-value to some data is also stored this way. But apparently the value that is read back is always zero.
I don't know much about Lua, so could someone please explain why this suddenly fails (it has been working, but something has changed)
Lua version is 5.2.1 (I've made sure it's the same version in both working/non-working environment)
Data is pushed like this:
lua_pushstring(L, "data"); lua_pushinteger(L, (u32)pData->pucImageData); lua_settable(L, -3);
I get this in the log
PUSHEntry (key=data, v=-1451470840)
and popped like this
u32 uValue = 0;
lua_pushstring(L, key);
lua_gettable(L, iTableIndex); /* get table[key] */
uValue = (u32)lua_tonumber(L, -1); // Retrieve value
printf("ReadEntry (key=%s, v=%d) \n", key, uValue);
lua_pop(L, 1); /* remove number from stack */
where I only get this back
ReadEntry (key=data, v=0)

Thank you for pointing me in the right direction!
The problem turned out to be that the big number was pushed with lua_pushinteger. Changing this to lua_pushnumber fixed the issue.
I agree that this needs some cleaning up for 64-bit platforms.
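For reference, a minimal sketch of the corrected push, assuming pucImageData is the pointer being round-tripped; on 64-bit platforms, passing the raw pointer as light userdata would avoid the integer cast entirely:
lua_pushstring(L, "data");
lua_pushnumber(L, (lua_Number)(uintptr_t)pData->pucImageData); /* lua_Number (a double) preserves the full 32-bit value */
lua_settable(L, -3);
/* Alternative without any numeric cast: */
/* lua_pushlightuserdata(L, pData->pucImageData); ... read back with lua_touserdata(L, -1); */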


How to buffer efficiently when writing to 1000s of files in C++

I am quite inexperienced when it comes to C++ I/O operations especially when dealing with buffers etc. so please bear with me.
I have a programme that has a vector of objects (1000s - 10,000s). At each time-step the state of the objects is updated. I want to have the functionality to log a complete state time history for each of these objects.
Currently I have a function that loops through my vector of objects, updates the state, and then calls a logging function which opens the file (ASCII) for that object, writes the state to the file, and closes the file (using std::ofstream). The problem is this significantly slows down my run time.
I've been recommended a couple of things to help speed this up:
1. Buffer my output to prevent excessive I/O calls to the disk
2. Write to binary, not ASCII, files
My question mainly concerns 1. Specifically, how would I actually implement this? Would each object effectively require its own buffer? Or would this be a single buffer that somehow knows which file to send each bit of data to? If the latter, what is the best way to achieve this?
Thanks!
Maybe the simplest idea first: instead of logging to separate files, why not log everything to an SQLite database?
Given the following table structure:
create table iterations (
    id integer not null,
    iteration integer not null,
    value text not null
);
At the start of the program, prepare a statement once:
sqlite3_stmt *stmt;
sqlite3_prepare_v3(db, "insert into iterations values(?,?,?)", -1, SQLITE_PREPARE_PERSISTENT, &stmt, NULL);
The question marks here are placeholders for future values.
After every iteration of your simulation, you could walk your state vector and execute the stmt a number of times to actually insert rows into the database, like so:
for (std::size_t i = 0; i < objects.size(); i++) {
    sqlite3_reset(stmt);
    // Fill in the three placeholders and execute the query.
    sqlite3_bind_int(stmt, 1, (int)i);
    sqlite3_bind_int(stmt, 2, current_iteration); // Could be bound once, but shown here for illustration.
    std::string state = objects[i].get_state();
    // SQLITE_STATIC: the buffer stays valid until sqlite3_step() returns, so SQLite need not copy it.
    sqlite3_bind_text(stmt, 3, state.c_str(), (int)state.size(), SQLITE_STATIC);
    sqlite3_step(stmt); // Execute the query.
}
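If the inserts themselves become the bottleneck, a common SQLite technique (sketched here, assuming the same db handle) is to wrap each time-step's inserts in a single transaction so they are committed to disk together:
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);
// ... the bind/step loop shown above ...
sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);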
You can then easily query the history of each individual object using the SQLite command-line tool or any database manager that understands SQLite.
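For example, the full history of a single object (the id 42 below is just an example) could be pulled out with:
select iteration, value from iterations where id = 42 order by iteration;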

Cryptography - Bizarre behaviour in Release mode

I've got a project using Crypto++.
Crypto++ is its own project, which builds into a static lib.
Aside from that, I have another large project that uses some of the Crypto++ classes and processes various algorithms, which also builds into a static lib.
Two of the functions are these:
long long MyClass::EncryptMemory(std::vector<byte> &inOut, char *cPadding, int rounds)
{
    typedef std::numeric_limits<char> CharNumLimit;
    char sPadding = 0;
    //Calculates padding and returns value as provided type
    sPadding = CalcPad<decltype(sPadding)>(reinterpret_cast<MyClassBase*>(m_P)->BLOCKSIZE, static_cast<int>(inOut.size()));
    //Push random chars as padding, we never care about padding's content so it doesn't matter what the padding is
    for (auto i = 0; i < sPadding; ++i)
        inOut.push_back(sRandom(CharNumLimit::min(), CharNumLimit::max()));
    std::size_t nSize = inOut.size();
    EncryptAdvanced(inOut.data(), nSize, rounds);
    if (cPadding)
        *cPadding = sPadding;
    return nSize;
}
//Removing the padding is the responsibility of the caller.
//Nevertheless the string is encrypted with padding
//and should here be the right string with a little padding
long long MyClass::DecryptMemory(std::vector<byte> &inOut, int rounds)
{
    DecryptAdvanced(inOut.data(), inOut.size(), rounds);
    return inOut.size();
}
Where EncryptAdvanced and DecryptAdvanced pass the arguments to the Crypto++ object.
//...
AdvancedProcessBlocks(bytePtr, nullptr, bytePtr, length, 0);
//...
These functions have so far worked flawlessly; no modifications have been applied to them for months.
The logic around them has evolved, though the calls and data passed to them did not change.
The data being encrypted / decrypted is rather small but has a dynamic size, which is being padded if (datasize % BLOCKSIZE) has a remainder.
Example: AES Blocksize is 16. Data is 31. Padding is 1. Data is now 32.
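(CalcPad itself isn't shown here, but the amount described is presumably computed along these lines; size stands in for the data size.)
// Hypothetical equivalent of CalcPad: bytes needed to reach the next BLOCKSIZE boundary.
// With BLOCKSIZE = 16: size 31 -> 1, size 32 -> 0.
int pad = (BLOCKSIZE - static_cast<int>(size) % BLOCKSIZE) % BLOCKSIZE;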
After encrypting and before decrypting, the string is the same - as in the picture.
Running all this in debug mode apparently works as intended. Even when running this program on another computer (with VS installed for DLLs) it shows no difference. The data is correctly encrypted and decrypted.
Trying to run the same code in release mode results in a totally different encrypted string, plus it does not decrypt correctly - "trash data" is decrypted. The wrongly encrypted or decrypted data is consistent - always the same trash is decrypted. The key/password and the rounds/iterations are the same all the time.
Additional info: The data is saved in a file (ios_base::binary) and correctly processed in debug mode, from two different programs in the same solution using the same static librar(y/ies).
What could be the cause of this Debug / Release problem?
I re-checked the git history a couple of times and debugged through the code for days, yet I cannot find any possible cause for this problem. If any information - aside from a (here rather impossible) MCVE - is needed, please leave a comment.
Apparently this is a bug in CryptoPP. The minimum key length of Rijndael / AES is set to 8 instead of 16. Using an invalid key length of 8 bytes will cause an out-of-bounds access to the array of Rcon values. This key length of 8 bytes is currently reported as valid and has to be fixed in CryptoPP.
See this issue on GitHub for more information. (Ongoing conversation.)
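Until the library fix lands, a defensive check on the caller's side keeps an 8-byte key from ever reaching the key schedule (a sketch; keyLength stands in for whatever variable holds the key size):
// keyLength: hypothetical variable holding the key size in bytes.
// AES only accepts 16-, 24- or 32-byte keys, so reject anything else
// regardless of what the (currently buggy) validity check in the library reports.
if (keyLength != 16 && keyLength != 24 && keyLength != 32)
    throw std::invalid_argument("AES key must be 16, 24 or 32 bytes");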

Using new to allocate memory for unsigned char array fails

I'm trying to load a tga file in C++ using code that I got from a Google search, but the part that allocates memory fails. The beginning of my "LoadTarga" method includes these variables:
int imageSize;
unsigned char* targaImage;
Later on in the method the imageSize variable gets set to 262144 and I use that number to set the size of the array:
// Calculate the size of the 32 bit image data.
imageSize = width * height * 4;
// Allocate memory for the targa image data.
targaImage = new unsigned char[imageSize];
if (!targaImage)
{
    MessageBox(hwnd, L"LoadTarga - failed to allocate memory for the targa image data", L"Error", MB_OK);
    return false;
}
The problem is that the body of the if statement executes and I have no idea why the memory allocation failed. As far as I know it should work - the code compiles and runs up to this point, and I haven't seen anything yet on Google that would show a proper alternative.
What should I change in my code to make it allocate memory correctly?
Important Update:
Rob L's comments and suggestions were very useful (though I didn't try _heapchk, since I solved the issue before I got around to it).
Trying each of fritzone's ideas meant the program ran past the "if (!targaImage)" point without trouble. The code that sets targaImage, and the if statement that checks whether memory was allocated correctly, has been replaced with this:
try
{
    targaImage = new unsigned char[imageSize];
}
catch (std::bad_alloc& ba)
{
    std::cerr << "bad_alloc caught: " << ba.what() << '\n';
    return false;
}
However I got a new problem with the very next bit of code:
count = (unsigned int)fread(targaImage, 1, imageSize, filePtr);
if (count != imageSize)
{
    MessageBox(hwnd, L"LoadTarga - failed to read in the targa image data", L"Error", MB_OK);
    return false;
}
Count was giving me a value of "250394", which is different from imageSize's value of "262144". I couldn't figure out why this was, and a bit of searching (though I must admit, not much searching) on how "fread" works didn't yield much info.
I decided to cancel my search and try the answer code on the tutorial site here http://www.rastertek.com/dx11s2tut05.html (scroll to the bottom of the page where it says "Source Code and Data Files" and download the zip). However, creating a new project and putting in the source files and image file didn't work, as I got a new error. At this point I thought maybe the way I converted the image file to tga might have been incorrect.
So rather than spend a whole lot of time debugging the answer code, I put the image file from the answer into my own project. I noted that the size of mine was MUCH smaller than the answer's (245 KB compared to 1025 KB), so maybe if I used the answer code's image my code would run fine. Turns out I was right! Now the image is stretched sideways for some reason, but my original query appears to have been solved.
Thanks Rob L and fritzone for your help!
You are NOT using the form of new which returns a null pointer in case of error, so it makes no sense to check the return value. Instead you should catch a std::bad_alloc. The null-pointer-returning new has the syntax: new (std::nothrow) unsigned char[imageSize];
Please see: http://www.cplusplus.com/reference/new/operator%20new[]/
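A minimal sketch of that form, reusing the same targaImage / imageSize variables (requires <new>):
targaImage = new (std::nothrow) unsigned char[imageSize];
if (!targaImage) // with std::nothrow a failed allocation yields nullptr instead of throwing
{
    MessageBox(hwnd, L"LoadTarga - failed to allocate memory for the targa image data", L"Error", MB_OK);
    return false;
}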
Nothing in your sample looks wrong. It is pretty unlikely that a modern Windows system will run out of memory allocating 256 KB just once. Perhaps your allocator is being called in a loop and allocating more than you think, or the value of imageSize is wrong. Look in the debugger.
Another possibility is that your heap is corrupt. Calling _heapchk() can help diagnose that.
Check the "memory peak working set" in windows tasks manager and ensure how much memory you are really trying to allocate.

Accessing buffer using C++-AMP

Could somebody please help me understand exactly the step that is not working here?
I am trying to use C++-AMP to do parallel-for loops; however, despite having no trouble or errors going through my process, I can't get my final data.
I want to pull out my data by means of mapping it
m_pDeviceContext->Map(pBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource);
{
blah
}
But I've worked on this for days on end without even a single inch of progress.
Here is everything I do with C++-AMP:
Constructor: I initialise my variables because I have to
: m_AcceleratorView(concurrency::direct3d::create_accelerator_view(reinterpret_cast<IUnknown *>(_pDevice)))
, m_myArray(_uiNumElement, m_AcceleratorView)
I copy my initial data into the C++-AMP array
concurrency::copy(Data.begin(), m_myArray);
I do stuff to the data
concurrency::parallel_for_each(...) restrict(amp)
{
blah
}
All of this seems fine, I run into no errors.
However the next step I want to do is pull the data from the buffer, which doesn't seem to work:
ID3D11Buffer* pBuffer = reinterpret_cast<ID3D11Buffer *>(concurrency::direct3d::get_buffer(m_myArray));
When I map this data (deviceContext->Map) the data inside is 0x00000000
What step am I forgetting that will allow me to read this data? Even when I try to set the CPU read/write access type I get an error, and I didn't even see any of my references do it that way either:
m_Accelerator.set_default_cpu_access_type(concurrency::access_type::access_type_read_write);
This creates an error to say "accelerator does not support zero copy"
Can anyone please help me and tell me why I can't read my buffer, and how to fix it?
The following code should work for this. You should also check that the DX device you are using and the C++ AMP accelerator are associated with the same hardware.
HRESULT hr = S_OK;
array<int, 1> arr(1024);
CComPtr<ID3D11Buffer> buffer;
IUnknown* unkBuf = get_buffer(arr);
hr = unkBuf->QueryInterface(__uuidof(ID3D11Buffer), reinterpret_cast<LPVOID*>(&buffer));
This question has an answer that shows you how to do the opposite.
Reading Buffer Data using C++ AMP
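If the goal is only to get the results back to the CPU, a simpler route (a sketch, assuming m_myArray is a concurrency::array<float, 1>) is to copy the array out instead of mapping the underlying D3D buffer:
std::vector<float> results(m_myArray.extent.size());
concurrency::copy(m_myArray, results.begin()); // blocks until the accelerator data is on the host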

How to handle long strings with ODBC?

I'm using ODBC SQLGetData to retrieve string data, using a 256 byte buffer as default. If the buffer is too short, I'm allocating a new buffer large enough for the string and calling SQLGetData() again.
It seems that calling this a second time only returns what was left after the last call, and not the whole field.
Is there any way to 'reset' this behaviour so SQLGetData returns the whole field into the second buffer?
char buffer[256];
SQLLEN sizeNeeded = 0;
SQLRETURN ret = SQLGetData(_statement, _columnIndex, SQL_C_CHAR, (SQLCHAR*)buffer, sizeof(buffer), &sizeNeeded);
if(ret == SQL_SUCCESS)
{
    return std::string(buffer);
}
else if(ret == SQL_SUCCESS_WITH_INFO)
{
    // std::vector instead of std::auto_ptr: auto_ptr would call delete (not delete[]) on the array.
    std::vector<char> largeBuffer(sizeNeeded + 1);
    // Doesn't return the whole field, only what was left...
    SQLGetData(_statement, _columnIndex, SQL_C_CHAR, (SQLCHAR*)largeBuffer.data(), (SQLLEN)largeBuffer.size(), &sizeNeeded);
}
Thanks for any help!
It is the caller's responsibility to put the data together; the limitation on returning the data in chunks could be due to the database provider and not your code, so you need to be able to handle the case either way.
Also your code has a logic flaw -- you might have to call SQLGetData multiple times; each time could return additional chunks of data with SQL_SUCCESS_WITH_INFO/01004 that need to be appended in a loop.
If you are interested in "resetting" the fetch buffer, I believe that the position in the column is only preserved if the column name/index is the same for two consecutive calls. In other words, calling SQLGetData with a different column name should reset the position in the original column. Here's a snippet from MSDN:
Successive calls to SQLGetData will retrieve data from the last column requested; prior offsets become invalid. For example, when the following sequence is performed:
SQLGetData(icol=n), SQLGetData(icol=m), SQLGetData(icol=n)
the second call to SQLGetData(icol=n) retrieves data from the start of the n column. Any offset in the data due to earlier calls to SQLGetData for the column is no longer valid.
I don't have the ODBC spec handy, but MSDN seems to indicate that this is the expected behavior. Personally, I have always accumulated the result of multiple calls directly into a string using a fixed size buffer.
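A sketch of that accumulation loop, using the same _statement / _columnIndex as above; it keeps calling SQLGetData until the driver reports the column is exhausted:
std::string result;
char chunk[256];
SQLLEN indicator = 0;
SQLRETURN rc;
while ((rc = SQLGetData(_statement, _columnIndex, SQL_C_CHAR,
                        (SQLCHAR*)chunk, sizeof(chunk), &indicator)) == SQL_SUCCESS
       || rc == SQL_SUCCESS_WITH_INFO)
{
    if (indicator == SQL_NULL_DATA)
        break; // NULL column value
    // SQL_SUCCESS_WITH_INFO: the buffer is full (255 chars + NUL) and more data remains.
    // SQL_SUCCESS: this chunk holds the final piece, indicator bytes long.
    std::size_t copied = (indicator == SQL_NO_TOTAL || indicator >= (SQLLEN)sizeof(chunk))
                             ? sizeof(chunk) - 1
                             : (std::size_t)indicator;
    result.append(chunk, copied);
    if (rc == SQL_SUCCESS)
        break; // nothing left to fetch
}
return result;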