Problem with C++ class instance not being recognized

This is for "homework," but it is not an algorithm question; rather, it's a programming issue. As part of a project for my Data Structures class, I have to write a class to act as a database. That part is done. I am not asking about the algorithm, but trying to isolate what is clearly a stupid bug on my part.
PeopleDB has two constructors, the default one and one that takes as a parameter an input file and reads it into the database to initialize it.
Here is the code snippet, the problem is described below it:
#include "People.h" // People class definition
#include "PeopleDB.h" // People database class
#include "PrecondViolatedExcep.h"
using namespace std;
int main(int argc, char *argv[])
{
    // Define variables
    string infilename;
    PeopleDB mydb;

    // Get the filename of the text file to process
    infilename = argv[1];

    // Try to import the data into a database instance
    try
    {
        cout << "Attempting to import DB entries from " << infilename << endl;
        PeopleDB mydb(infilename);
        cout << "# A total of " << mydb.countEntries() << " DB entries loaded." << endl;
    }
    catch(PrecondViolatedExcep e)
    {
        cout << e.what() << endl;
        cout << "Exiting program.";
        exit(1);
    }

    // Display database contents
    cout << endl;
    cout << "# A total of " << mydb.countEntries() << " DB entries found before display." << endl;
    return 0;
} // end main
The problem is that if I don't include the PeopleDB mydb; declaration at the top of main(), the compiler barfs, saying it doesn't recognize the mydb.countEntries() in the second-to-last line of main(). But if I do include it, it is clear that the mydb within the try block doesn't survive, because the output of the program is:
Attempting to import DB entries from testinput.txt
# A total of 7 DB entries loaded.
# A total of 0 DB entries found before display.
I didn't want to use the same variable (mydb) twice (I actually assumed this would error out during compiling), but for some reason the mydb instance of PeopleDB created inside the try block doesn't seem to survive outside the block. I am sure this is something stupid on my part, but I am not seeing it. It has been a long day, so any suggestions would be appreciated.

You declare two independent mydb objects.
Either perform all the actions inside the try-catch block, or move the connection into a separate function:
PeopleDB connect(const std::string& infilename)
{
    try
    {
        cout << "Attempting to import DB entries from " << infilename << endl;
        PeopleDB mydb(infilename);
        cout << "# A total of " << mydb.countEntries() << " DB entries loaded." << endl;
        return mydb;
    }
    catch (const PrecondViolatedExcep& e)
    {
        cout << e.what() << endl;
        cout << "Exiting program.";
        exit(1);
    }
    return PeopleDB{}; // never reached; satisfies compilers that warn about a missing return
}
int main(int argc, char *argv[])
{
    // Get the filename of the text file to process
    string infilename = argv[1];
    PeopleDB mydb = connect(infilename);

    // Display database contents
    cout << endl;
    cout << "# A total of " << mydb.countEntries() << " DB entries found before display." << endl;
    return 0;
} // end main

You are creating two objects named mydb of type PeopleDB: one at the beginning of main(), the other in the try block. The latter loads the data, but gets destroyed at the end of the try block's scope.
The one printed afterwards is the one created at the beginning of main(), and that one never loaded the data.
There are multiple ways to fix that, e.g. provide a method to load the data and call it inside the try block, as sketched below. Another option is to copy/move/swap the "inside" object into the "outside" one before the try block ends (but I'd give them different names in that case). Your call, but the bottom line is: at that point you have two different objects, one the data is loaded into and another it is printed from (with empty results).
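For illustration, a minimal sketch of the first option; load() here is a hypothetical member that PeopleDB would need to gain (it is not in the original class):

PeopleDB mydb; // one object, declared once, outside the try block

try
{
    cout << "Attempting to import DB entries from " << infilename << endl;
    mydb.load(infilename); // hypothetical member that fills the existing object
    cout << "# A total of " << mydb.countEntries() << " DB entries loaded." << endl;
}
catch (const PrecondViolatedExcep& e)
{
    cout << e.what() << endl;
    exit(1);
}

// mydb is still in scope here and keeps the loaded data
cout << "# A total of " << mydb.countEntries() << " DB entries found before display." << endl;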

Try using move-assignment inside the try block:
mydb = std::move(PeopleDB(infilename));
The reason I suggested this is that it avoids creating a new named object inside the try block, since such an object would disappear when the scope of the try block ends.
The reason for using move is to prevent creating the object twice:
once with the constructor call
another with the copy-constructor call
However, now I realize that the std::move is redundant, because PeopleDB(infilename) is already an rvalue, and the compiler will be smart enough to move from it by itself.
So my new suggestion is to just do:
mydb = PeopleDB(infilename);
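Put back into the question's main(), and assuming PeopleDB is movable (or at least copy-assignable), that looks like:

int main(int argc, char *argv[])
{
    string infilename = argv[1];
    PeopleDB mydb; // declared once, before the try block

    try
    {
        cout << "Attempting to import DB entries from " << infilename << endl;
        mydb = PeopleDB(infilename); // the temporary is moved into mydb
        cout << "# A total of " << mydb.countEntries() << " DB entries loaded." << endl;
    }
    catch (const PrecondViolatedExcep& e)
    {
        cout << e.what() << endl;
        exit(1);
    }

    cout << "# A total of " << mydb.countEntries() << " DB entries found before display." << endl;
    return 0;
}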

When looking at the address of a FILE stream, why is the address so much different than that of the original pointer?

Pardon my absolute lack of understanding here; I'm just diving into C++. Essentially I just wanted to see if I could figure out how to use putc to properly write characters to a file. I want to make sure I'm understanding every step of the way.
Now, when comparing the address printed where I originally declared the file pointer with the one printed after passing the pointer to a different function that writes to the stream, I noticed the addresses are completely different, even in length. I'm still trying to wrap my head around pointers, but it's hard without someone to tell you where you are misinterpreting things, and I know I must be. Here is the code; don't mind the fact that I'm doing it in Qt Creator. Links help, but please don't just copy-paste some C++ info page on pointers. I've read it.
#include <QCoreApplication>
#include <stdio.h>
#include <iostream>
#include <fstream>
using namespace std;

void stream_writer(FILE & stream)
{
    int c1 = 'A',
        c2 = 'B',
        c3 = 'C',
        nl = '\n';

    cout << &stream << endl;
    putc(c1, &stream);
    putc(nl, &stream);
    cout << "written to testfile" << endl;
    fclose(&stream);
    putc(c2, stdout);
    putc(c3, stdout);
    putc(nl, stdout);
}
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    FILE* testfile;
    testfile = fopen("testfile.txt", "wt");
    if (testfile != NULL)
    {
        cout << &testfile << endl;
        cout << testfile << endl;
        stream_writer(*testfile);
    }
    else
    {
        cout << "Unable to open file\n";
    }
    return a.exec();
}
An example of my console output after running the code:
0x7ffff6aed478
0x138a200
0x138a200
written to testfile
BC
void stream_writer(FILE & stream)
Here you are receiving a reference to a FILE object.
cout << &stream << endl;
Here you are printing the address of a FILE object, via a reference.
FILE* testfile;
Here you are declaring a pointer to FILE.
cout << &testfile << endl;
Here you are printing the value of the pointer.
stream_writer(*testfile);
Here you are passing the dereferenced pointer as an object reference to the called function.
It would be surprising if all of these had the same value.
Your expectations are misplaced.
cout << &testfile << endl; is printing the address of the FILE pointer itself: 0x7ffff6aed478
cout << testfile << endl; is printing the address that the pointer points to: 0x138a200
Memory at address 0x7ffff6aed478 is where the FILE pointer is stored, and it has the value of 0x138a200.
Memory at address 0x138a200 is where the actual FILE object is allocated, and the values here correspond to data in struct FILE{...}
stream_writer(*testfile); dereferences the pointer to get the FILE object and passes it by reference to stream_writer(). cout << &stream << endl; then prints the address of that same FILE object again, hence the third line of output: 0x138a200.
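A stripped-down sketch of the same effect without the Qt scaffolding (the actual addresses will differ from run to run, but the pattern matches the explanation above):

#include <cstdio>
#include <iostream>

void show(std::FILE& stream)
{
    std::cout << &stream << std::endl; // address of the FILE object, via the reference
}

int main()
{
    std::FILE* fp = std::fopen("testfile.txt", "w");
    if (fp == NULL) return 1;

    std::cout << &fp << std::endl;     // address of the pointer variable itself
    std::cout << fp << std::endl;      // address of the FILE object it points to
    show(*fp);                         // prints the same address as the line above

    std::fclose(fp);
    return 0;
}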

how to attach to an existing shared memory segment

I am having trouble with shared memory. I have one process that creates and writes to a shared memory segment just fine, but I cannot get a second process to attach to that same existing segment. My second process can create a new shared segment if I use the IPC_CREAT flag, but I need to attach to the existing shared segment that was created by the 1st process.
This is my code in the 2nd process:
int nSharedMemoryID = 10;

key_t tKey = ftok("/dev/null", nSharedMemoryID);
if (tKey == -1) {
    std::cerr << "ERROR: ftok(id: " << nSharedMemoryID << ") failed, " << strerror(errno) << std::endl;
    exit(3);
}
std::cout << "ftok() successful " << std::endl;

size_t nSharedMemorySize = 10000;
int id = shmget(tKey, nSharedMemorySize, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
if (id == -1) {
    std::cerr << "ERROR: shmget() failed, " << strerror(errno) << std::endl << std::endl;
    exit(4);
}
std::cout << "shmget() successful, id: " << id << std::endl;

unsigned char *pBaseSM = (unsigned char *)shmat(id, (const void *)NULL, SHM_RDONLY);
if (pBaseSM == (unsigned char *)-1) {
    std::cerr << "ERROR: shmat() failed, " << strerror(errno) << std::endl << std::endl;
    exit(5);
}
std::cout << "shmat() successful " << std::endl;
The problem is that the 2nd process always errors out on the call to shmget() with a "No such file or directory" error, yet this is the exact same code I used in the 1st process, and it works just fine there. In the 1st process that created the shared segment, I can write to the memory segment and I can see it with "ipcs -m". Also, if I take the shmid of the segment from the "ipcs -m" output and hard-code it in my 2nd process, the 2nd process can attach to it just fine. So the problem seems to be the generation of the common key that both processes use to identify a single shared segment.
I have several questions:
(1) Is there an easier way to get the shmid of an existing shared memory segment? It seems crazy to me that I have to pass three separate parameters from the 1st process (which created the segment) to the 2nd process just so the 2nd process can get the same shared segment. I can live with passing 2 parameters: the file name (like "/dev/null") and the shared id (nSharedMemoryID in my code). But having to pass the size of the segment to shmget() just to get the shmid seems senseless, because I have no idea how much memory was actually allocated (because of page-size rounding), so I cannot be certain it is the same.
(2) Does the segment size that I use in the 2nd process have to be the same as the size used to initially create the segment in the 1st process? I have tried specifying 0, but I still get errors.
(3) Likewise, do the permissions have to be the same? That is, if the shared segment was created with read/write for user/group/world, can the 2nd process use just read for user? (Same user for both processes.)
(4) And why does shmget() fail with the "No such file or directory" error when the file "/dev/null" obviously exists for both processes? I am assuming that the 1st process does not put some kind of lock on that node, because that would be senseless.
Thanks for any help anyone can give. I have been struggling with this for hours--which means I am probably doing something really stupid and will ultimately embarrass myself when someone points out my error :-)
thanks,
-Andres
(1) As a different way: the attaching process scans the user's existing segments, tries to attach with the needed size, and checks for a "magic byte sequence" at the beginning of the segment (to exclude other programs of the same user). Alternatively you can check whether the attached process is the one you expect. If any of these steps fails, this process is the first one and creates the segment... cumbersome, yes; I saw it in code from the '70s.
You can also consider the POSIX-compliant shm_open() alternative; it should be simpler, or at least more modern (a sketch follows the example program below)...
(2) Regarding the size, what matters is that the size specified be less than or equal to the size of the existing segment, so there is no issue if the original was rounded up to the next memory-page size. You get the EINVAL error only if it's larger.
(3) The mode flags are only relevant when you create the segment the first time (I'm mostly sure).
(4) The fact that shmget() fails with "No such file or directory" means only that it hasn't found a segment with that key (being pedantic now: not id; by id we usually refer to the value returned by shmget() and used subsequently). Have you checked that tKey is the same in both processes? Your code works fine on my system; I just added a main() around it.
EDIT: attached the working program
#include <iostream>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
int main(int argc, char **argv) {
    int nSharedMemoryID = 10;
    if (argc > 1) {
        nSharedMemoryID = atoi(argv[1]);
    }

    key_t tKey = ftok("/dev/null", nSharedMemoryID);
    if (tKey == -1) {
        std::cerr << "ERROR: ftok(id: " << nSharedMemoryID << ") failed, " << strerror(errno) << std::endl;
        exit(3);
    }
    std::cout << "ftok() successful. key = " << tKey << std::endl;

    size_t nSharedMemorySize = 10000;
    int id = shmget(tKey, nSharedMemorySize, 0);
    if (id == -1) {
        std::cerr << "ERROR: shmget() failed (WILL TRY TO CREATE IT NEW), " << strerror(errno) << std::endl << std::endl;
        id = shmget(tKey, nSharedMemorySize, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH | IPC_CREAT);
        if (id == -1) {
            std::cerr << "ERROR: shmget() failed, " << strerror(errno) << std::endl << std::endl;
            exit(4);
        }
    }
    std::cout << "shmget() successful, id: " << id << std::endl;

    unsigned char *pBaseSM = (unsigned char *)shmat(id, (const void *)NULL, SHM_RDONLY);
    if (pBaseSM == (unsigned char *)-1) {
        std::cerr << "ERROR: shmat() failed, " << strerror(errno) << std::endl << std::endl;
        exit(5);
    }
    std::cout << "shmat() successful " << std::endl;
}
EDIT: output
$ ./a.out 33
ftok() successful. key = 553976853
ERROR: shmget() failed (WILL TRY TO CREATE IT NEW), No such file or directory
shmget() successful, id: 20381699
shmat() successful
$ ./a.out 33
ftok() successful. key = 553976853
shmget() successful, id: 20381699
shmat() successful
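As mentioned in (1), here is a minimal sketch of the shm_open() route. The segment name "/my_segment" is made up for illustration, a real pair of programs would typically split the create and attach paths, and on older Linux systems you need to link with -lrt:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cerrno>
#include <cstring>
#include <iostream>

int main()
{
    const char *name = "/my_segment"; // both processes only need to agree on this name
    const size_t nSize = 10000;

    // O_CREAT without O_EXCL: creates the segment if needed, opens it otherwise.
    int fd = shm_open(name, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
    if (fd == -1) {
        std::cerr << "ERROR: shm_open() failed, " << strerror(errno) << std::endl;
        return 1;
    }
    // Size the segment; harmless if a previous process already set the same size.
    if (ftruncate(fd, nSize) == -1) {
        std::cerr << "ERROR: ftruncate() failed, " << strerror(errno) << std::endl;
        return 2;
    }
    void *pBase = mmap(NULL, nSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pBase == MAP_FAILED) {
        std::cerr << "ERROR: mmap() failed, " << strerror(errno) << std::endl;
        return 3;
    }
    std::cout << "mmap() successful, address: " << pBase << std::endl;

    munmap(pBase, nSize);
    close(fd);
    // shm_unlink(name); // only once no process needs the segment anymore
    return 0;
}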
SOLUTION - after in-chat (wow SO has a chat!) discussion:
In the end the problem was that the original code called shmctl() with IPC_RMID, to have the segment removed once the last process detached, before the other process had attached.
The problem is that this in fact makes the segment private. Its key is shown as 0x00000000 by ipcs -m and it cannot be attached by other processes anymore; it is in fact marked for lazy deletion.
I just want to follow up on all the help Sigismondo gave me and post the solution to this issue, in case anyone else has the same problem.
The clue was running "ipcs -m" and noticing that the key value was 0, which means the shared segment is private, so the 2nd process could not attach to it.
An additional quirk was this: I was calling the following:
int nReturnCode = shmctl(id, IPC_RMID, &m_stCtrlStruct);
My intent was to set the mode for the segment so that it would be deleted once all processes using it had exited. However, this call has the side effect of making the segment private, even though it was created without the IPC_EXCL flag.
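For anyone hitting the same thing, a sketch of the ordering that avoids it. This fragment belongs in the creating process and assumes it runs only after every other process has called shmat():

// Only after all consumers have attached:
if (shmctl(id, IPC_RMID, NULL) == -1) {
    std::cerr << "ERROR: shmctl(IPC_RMID) failed, " << strerror(errno) << std::endl;
}
// From here on, ipcs -m shows the key as 0x00000000 and no new process can
// shmget() the segment, but processes already attached keep working; the
// segment is actually freed once the last one detaches.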
Hopefully this will help anyone else who trips across this issue.
And, many, many thanks to Sigismondo for taking the time to help me--I learned a lot from our chat!
-Andres

boost removing managed_shared_memory when process is attached

I have 2 processes: process 1 creates a boost managed_shared_memory segment and process 2 opens this segment. Process 1 is then restarted, and the start of process 1 has the following:
struct vshm_remove
{
    vshm_remove()
    {
        boost::interprocess::shared_memory_object::remove("VMySharedMemory");
    }
    ~vshm_remove()
    {
        boost::interprocess::shared_memory_object::remove("VMySharedMemory");
    }
} vremover;
I understand that when process 1 starts or ends, the remove method will be called on my shared memory, but shouldn't it only remove it if Process 2 is not attached to it? I am attaching to the shared memory in process 2 using the following:
boost::interprocess::managed_shared_memory *vfsegment;
vfsegment = new boost::interprocess::managed_shared_memory(boost::interprocess::open_only, "VMySharedMemory");
I am noticing that the shared memory is removed regardless of whether Process 2 is connected.
I don't believe that there is any mention in the documentation that shared_memory_object::remove will fail if a process is attached.
Please see this section for reference: Removing shared memory. Particularly:
This function can fail if the shared memory objects does not exist or it's opened by another process.
This means that a call to shared_memory_object::remove("foo") will attempt to remove shared memory named "foo" no matter what.
The implementation of that function (source here) reflects that behavior:
inline bool shared_memory_object::remove(const char *filename)
{
    try {
        //Make sure a temporary path is created for shared memory
        std::string shmfile;
        ipcdetail::tmp_filename(filename, shmfile);
        return ipcdetail::delete_file(shmfile.c_str());
    }
    catch(...) {
        return false;
    }
}
In my experience with released production code, I've had success not calling shared_memory_object::remove until I no longer need access to the shared memory.
I wrote a very simple example main program that you might find helpful. It will attach to, create, or remove shared memory depending on how you run it. After compiling, try the following steps:
Run with c to create the shared memory (1.0K by default) and insert dummy data
Run with o to open ("attach to") the shared memory and read dummy data (reading will happen in a loop every 10 seconds by default)
In a separate session, run with r to remove the shared memory
Run again with o to try to open. Notice that this will (almost certainly) fail because the shared memory was (again, almost certainly) removed during the previous step
Feel free to kill the process from the second step
As to why step 2 above continues to be able to access the data after a call to shared_memory_object::remove, please see Constructing Managed Shared Memory. Specifically:
When we open a managed shared memory
A shared memory object is opened.
The whole shared memory object is mapped in the process' address space.
Most likely, because the shared memory object is mapped into the process' address space, the shared memory file itself is no longer directly needed.
I realize that this is a rather contrived example, but I thought something more concrete might be helpful.
#include <cctype>   // tolower()
#include <iostream>
#include <string>
#include <unistd.h> // sleep()
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/managed_shared_memory.hpp>

int main(int argc, char *argv[])
{
    using std::cerr; using std::cout; using std::endl;
    using namespace boost::interprocess;

    if (argc == 1) {
        cout << "usage: " << argv[0] << " <command>\n 'c' create\n 'r' remove\n 'a' attach" << endl;
        return 0;
    }

    const char * shm_name = "shared_memory_segment";
    const char * data_name = "the_answer_to_everything";

    switch (tolower(argv[1][0])) {
        case 'c':
            if (shared_memory_object::remove(shm_name)) { cout << "removed: " << shm_name << endl; }
            managed_shared_memory(create_only, shm_name, 1024).construct<int>(data_name)(42);
            cout << "created: " << shm_name << "\nadded int \"" << data_name << "\": " << 42 << endl;
            break;
        case 'r':
            cout << (shared_memory_object::remove(shm_name) ? "removed: " : "failed to remove: ") << shm_name << endl;
            break;
        case 'a':
        {
            managed_shared_memory segment(open_only, shm_name);
            while (true) {
                std::pair<int *, std::size_t> data = segment.find<int>( data_name );
                if (!data.first || data.second == 0) {
                    cerr << "Allocation " << data_name << " either not found or empty" << endl;
                    break;
                }
                cout << "opened: " << shm_name << " (" << segment.get_segment_manager()->get_size()
                     << " bytes)\nretrieved int \"" << data_name << "\": " << *data.first << endl;
                sleep(10);
            }
        }
        break;
        default:
            cerr << "unknown command" << endl;
            break;
    }
    return 0;
}
One additional interesting thing: add one more case:
case 'w':
{
    managed_shared_memory segment(open_only, shm_name);
    std::pair<int *, std::size_t> data = segment.find<int>( data_name );
    if (!data.first || data.second == 0) {
        cerr << "Allocation " << data_name << " either not found or empty" << endl;
        break;
    }
    *data.first = 17;
    cout << "opened: " << shm_name << " (" << segment.get_segment_manager()->get_size()
         << " bytes)\nretrieved int \"" << data_name << "\": " << *data.first << endl;
}
break;
The additional option 'w' attaches the memory and writes 17 ("the most random random number") into it instead. With this you can do the following:
Console 1: Do 'c', then 'a'. Reports the memory created with value 42.
Console 2: Do 'w'. On Console1 you'll see that the number is changed.
Console 2: Do 'r'. The memory is successfully removed, Console 1 still prints 17.
Console 2: Do 'c'. It will report memory as created with value 42.
Console 2: Do 'a'. You'll see 42, Console 1 still prints 17.
This confirms (as long as it works the same way on all platforms, which Boost declares it does) that you can use this approach to hand memory blocks from one process to another: the "producer" only needs confirmation that the "consumer" has attached the block, and can then remove it. The consumer also doesn't have to detach the previous block before attaching the next one.
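A rough sketch of the producer side of that hand-off. The confirmation is assumed to arrive out-of-band (a keypress stands in for it here), and the segment name "block_0" is made up:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <iostream>

int main()
{
    using namespace boost::interprocess;

    // Create the block and put the payload in it.
    managed_shared_memory segment(create_only, "block_0", 1024);
    segment.construct<int>("payload")(42);

    // Wait for out-of-band confirmation that the consumer has attached.
    std::cout << "press enter once the consumer has attached" << std::endl;
    std::cin.get();

    // Safe to remove now: the consumer's existing mapping stays valid;
    // only the name is gone, so no new process can open it.
    shared_memory_object::remove("block_0");
    return 0;
}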

SEGFAULT Getting Results Using MySQL/C++ Connector

I'm trying to display a small MySQL table via C++ using the MySQL/C++ Connector, but when I execute the following function, my program either quits with the message "Aborted" or I get a segfault. Can anyone tell me what I'm doing wrong here? I thought I followed the documentation pretty well. :/
void
addressBook::display(sql::Connection* con)
{
    sql::Statement *stmt;
    sql::ResultSet *res;

    // Create the statement object
    stmt = con->createStatement();

    // Execute a query and store the result in res
    res = stmt->executeQuery("SELECT * FROM address_book "
                             "ORDER BY last_name, first_name");

    // Loop through the results and display them
    if(res)
    {
        while(res->next())
        {
            std::cout << "Name: " << res->getString("first_name")
                      << " " << res->getString("last_name") << std::endl
                      << "Phone: " << res->getString("phone") << std::endl
                      << "eMail: " << res->getString("email") << std::endl
                      << "City: " << res->getString("city") << std::endl
                      << "Comments: " << res->getString("comments")
                      << std::endl << std::endl;
        }
    }

    delete stmt;
    delete res;
}
The full (as of yet, unfinished) program may be found here, for reference. http://pastebin.com/kWnknHi4
Also, each field in the table being called contains a valid string.
Edit The debugger message can be found here: http://pastebin.com/NnSqV8hv
It looks like you're calling delete in the wrong order. The example deletes res first.
The ResultSet destructor may reference the associated Statement.
Generally, you should free/delete objects in the opposite order you created/allocated them.
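A sketch of the fix: either swap the two deletes, or, assuming a C++11 compiler is available, let smart pointers unwind in the right order automatically:

#include <memory> // std::unique_ptr

void
addressBook::display(sql::Connection* con)
{
    // Locals are destroyed in reverse order of construction, so res is
    // released before stmt -- the order the example code uses.
    std::unique_ptr<sql::Statement> stmt(con->createStatement());
    std::unique_ptr<sql::ResultSet> res(
        stmt->executeQuery("SELECT * FROM address_book "
                           "ORDER BY last_name, first_name"));

    while (res->next()) {
        std::cout << "Name: " << res->getString("first_name") << " "
                  << res->getString("last_name") << std::endl;
    }
} // res deleted here, then stmt -- no manual delete to get wrong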
The problem was that the libraries were installed incorrectly on my system: according to the docs you run make clean as an intermediate step, when it should just be make.
Thanks to vinleod from ##c++-basic (Vincent Damewood of http://damewood.us/) for the help in figuring that out.

Why does ofstream sometimes create files but can't write to them?

I'm trying to use the ofstream class to write some stuff to a file, but all that happens is that the file gets created, and then nothing. I have some simple code here:
#include <iostream>
#include <fstream>
#include <cstring>
#include <cerrno>
#include <time.h>
using namespace std;

int main(int argc, char* argv[])
{
    ofstream file;
    file.open("test.txt");
    if (!file) {
        cout << strerror(errno) << endl;
    } else {
        cout << "All is well!" << endl;
    }
    for (int i = 0; i < 10; i++) {
        file << i << "\t" << time(NULL) << endl;
    }
    file.flush();
    file.close();
    return 0;
}
When I create a console application, everything works fine, so I'm afraid this code is not completely representative. However, I am using code like this in a much larger project that, to be honest, I don't fully understand (Neurostim). I'm supposed to write some class that is compiled to a DLL which can be loaded by Neurostim.
When the code is run, "test.txt" is created and then "No error!" is printed, as that is apparently the output from strerror. Obviously this is wrong, however. The application otherwise runs perfectly and is not fazed by the fact that I'm trying to write to a corrupted stream; it just doesn't do it. It seems to me there is no problem with permissions, because the file is in fact created.
Does anyone have any ideas what kind of things might cause this odd behavior? (I'm on WinXP Pro SP3 and use Visual C++ 2008 Express Edition)
Thanks!
Just a thought: in your real code, are you re-using your stream object?
If so, you need to call clear() on the stream before re-using the object; otherwise, if there was a previous error state, it won't work. As I recall, not calling clear() on such a stream results in exactly what you describe: an empty file that can't be written to.
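A minimal sketch of that failure mode (this reflects pre-C++11 behavior, which matches the VC++ 2008 setup in the question; since C++11, a successful open() calls clear() itself):

#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    ofstream file;

    file.open("no_such_dir/first.txt"); // suppose this open fails...
    file << "lost";                     // ...the stream is now in an error state
    file.close();

    file.clear();                       // without this line, the next open()
                                        // appears to work but every write is ignored
    file.open("second.txt");
    file << "written" << endl;          // succeeds only because of the clear()
    return 0;
}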
ofstream file;
file.open("test.txt");
Just a nit: you can combine that into a single line. ofstream file("test.txt");
if (file) {
    cout << strerror(errno) << endl;
} else {
    cout << "All is well!" << endl;
}
Your test is backwards. If file is true, it's open and ready for writing.
Also, I wouldn't count on strerror() working correctly on Windows. Most Windows APIs don't use errno to signal errors. If your failure is happening outside the C/C++ run-time library, this may not tell you anything interesting.
UPDATE Thinking more about this, failing to open a file via fstreams is not guaranteed to set errno. It's possible that errno ends up set on some platforms (especially if those platforms implement fstream operations with FILE* or file descriptors, or some other library that sets errno), but that is not guaranteed. The official way to check for failure is via exceptions, std::ios_base::iostate, or the helper methods on std::fstream (like fail() or bad()). Unfortunately you can't get as much information out of standard streams as you can from errno.
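For example, a minimal sketch of the exception route (the text returned by what() is implementation-defined, but at least the failure is no longer silent):

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream file;
    // Ask the stream to throw instead of silently setting its error flags.
    file.exceptions(std::ofstream::failbit | std::ofstream::badbit);
    try {
        file.open("test.txt");
        file << "All is well!" << std::endl;
    } catch (const std::ios_base::failure& e) {
        std::cerr << "stream error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}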
You've got the if statement wrong. operator void* returns NULL (i.e. false) if the file is not writable, and non-null (i.e. true) if it is writable. So you want:
if (!file) {
    cout << strerror(errno) << endl;
} else {
    cout << "All is well!" << endl;
}
Or:
if (!file.good()) {
    cout << strerror(errno) << endl;
} else {
    cout << "All is well!" << endl;
}