Custom Open File Dialog in C++

We have created a class to customize the open file dialog in C++.
A character array 'm_fileNameBuf' holds the selected file names. Since the buffer size is set to 5000, it can hold a maximum of 5000 characters.
Later on, a large number of files were added, so the total number of file-name characters exceeded the buffer and caused a problem. We increased the size to 100K, but again there was a case where even more files were added, causing the same problem.
So the question is: how do we avoid this problem? Instead of hard-coding the array size, is there any way to size the buffer according to the selection?
class DLLEXPORT CustomOpenFileDialog
{
......
......
private:
OPENFILENAME m_OpenFileName;
static const long m_fileNameBufSize = 5000;
TCHAR m_fileNameBuf[m_fileNameBufSize];
......
};
CustomOpenFileDialog::CustomOpenFileDialog()
{
.....
.....
m_OpenFileName.lpstrFile = m_fileNameBuf;
m_OpenFileName.nMaxFile = m_fileNameBufSize;
}
void CustomOpenFileDialog::SetFileName(TCHAR* name)
{
_tcsncpy_s(m_fileNameBuf, m_fileNameBufSize, name, _TRUNCATE);
.....
}

Related

putting a list (.txt) file into a 2D array

I'm trying to separate a text file (which has a list of 200 strings) and store alternating strings (the even- and odd-numbered entries in the list) in a 2D array.
The text file is ordered in this way:
Alabama
Brighton
Arkansas
Bermuda
Averton
Burmingham
I would like to store it in a 2-dimensional array called strLine[101][2], iterating through so that the first string in the list lands in location [0][0], the second string in location [0][1], and so on until the file finishes reading and the list becomes organized like this:
Alabama | Brighton
Arkansas | Bermuda
Averton | Burmingham
My code outputs the original unsorted list at the moment. I would like to know how to implement the 2D array (with correct syntax) and how to use an i, j for-loop with getline() so it can iterate through each element of the 2D array.
Any help would be greatly appreciated.
My code:
bool LoadListBox()
{
// Declarations
ifstream fInput; // file handle
string strLine[201]; // array of string to hold file data
int index = 0; // index of StrLine array
TCHAR szOutput[50]; // output to listbox, 50-char TCHAR buffer
// File Open Process
fInput.open("data.txt"); // opens the file for read only
if (fInput.is_open())
{
getline( // read a line from the file
fInput, // handle of file to read
strLine[index]); // storage destination and index iterator
while (fInput.good()) // while loop for open file
{
getline( // read line from data file
fInput, // file handle to read
strLine[index++]); // storage destination
}
fInput.close(); // close the file
index = 0; // resets back to start of string
while (strLine[index] != "") // while loop for string not void
{
size_t pReturnValue; // return code for mbstowcs_s
mbstowcs_s( // converts string to TCHAR
&pReturnValue, // return value
szOutput, // destination of the TCHAR
50, // size of the destination TCHAR
strLine[index].c_str(), // source of string as char
50); // max # of chars to copy
SendMessage( // message to a control
hWnd_ListBox, // handle to listbox
LB_ADDSTRING, // append string to listbox
NULL, // window parameter not used
LPARAM(szOutput)); // TCHAR to add
index++; // next element of string array
}
return true; // file loaded okay
}
return false; // file did not load okay
}
Step 1
Transform string strLine[201]; to string place[100][2];. Also consider making a
struct place
{
std::string state;
std::string city;
};
because it is a bit more explicit what exactly is being stored. More expressive code is easier to read, generally prevents mistakes (harder to accidentally use strLine[x][2] or something like that), and requires less commenting. Code that comments itself should be a personal goal. The compiler doesn't care, of course, but few people are compilers.
Step 2
Use two separate index variables. Name the first something like num_entries because what it's really doing is counting the number of items in the array.
Step 3
Read two lines into the inner array and test the result of the reads. If they read successfully, increment the index.
while (getline(fInput, place[num_entries][0]) && getline(fInput, place[num_entries][1]))
{
num_entries++;
}
Step 4 (optional clean-up)
Step 2 turns while (strLine[index] != "") into while (index < num_entries)
Replace all of the 50s with a constant. That way you can't change the value and miss a few 50s AND it's easier to infer meaning from a good, descriptive identifier than a raw number.

Embed font larger than char size in ImGui

I'm developing a translation for a little script that uses ImGui as its frontend. I need an extended set of Unicode characters to be available in the font that will be used. Since this script is injected via DLL, there's no way (I think so; I have no experience with C++ at all) to use:
io.Fonts->AddFontFromFileTTF("myfontfile.ttf", size_in_pixels);
Adding a font from a ttf file resulted in an error that data == NULL:
void* data = ImFileLoadToMemory(filename, "rb", &data_size, 0);
if (!data)
{
IM_ASSERT(0); // Could not load file.
return NULL;
}
I've also tried to use io.Fonts->AddFontFromMemoryCompressedBase85TTF, compressing the font with the included binary_to_compressed_c, but the output is so big that I'm getting:
fatal error C1091: compiler limit: string exceeds 65535 bytes in length
But the function does not accept any type except char*. I tried joining the chars into a string and then reassembling it with str() and c_str(), but the app crashed after injection. Here is the function handling the Base85 conversion in ImGui:
ImFont* ImFontAtlas::AddFontFromMemoryCompressedBase85TTF(const char* compressed_ttf_data_base85, float size_pixels, const ImFontConfig* font_cfg, const ImWchar* glyph_ranges)
{
int compressed_ttf_size = (((int)strlen(compressed_ttf_data_base85) + 4) / 5) * 4;
void* compressed_ttf = ImGui::MemAlloc((size_t)compressed_ttf_size);
Decode85((const unsigned char*)compressed_ttf_data_base85, (unsigned char*)compressed_ttf);
ImFont* font = AddFontFromMemoryCompressedTTF(compressed_ttf, compressed_ttf_size, size_pixels, font_cfg, glyph_ranges);
ImGui::MemFree(compressed_ttf);
return font;
}
How can I fix this problem? I've tried everything and nothing works. Only passing smaller chunks into the compile function works (tried with the bundled Cousine_Regular.ttf).
I've found a workaround for this problem. If you really need to use Base85 there's still no answer, but you can raise the size limit by converting to the int type (don't pass -base85 to binary_to_compressed_c.exe), then insert the resulting table into a header file and use the instructions provided by ImGui like so:
Header file:
// File: 'DroidSans.ttf' (190044 bytes)
// Exported using binary_to_compressed_c.cpp
static const unsigned int droid_compressed_size = 134345;
static const unsigned int droid_compressed_data[134348 / 4] =
Your import / render file:
static const ImWchar ranges[] = { 0x0020, 0x00FF, 0x0100, 0x017F, 0 };
// Because I need extended characters, I'm passing my range array to the function.
io.Fonts->AddFontFromMemoryCompressedTTF(droid_compressed_data, droid_compressed_size, 16.0f, NULL, ranges);
That gets rid of the problem of converting from string to char and the other issues related to Base85 importing.

What is the best solution for writing numbers into a file and then reading them?

I have 640*480 numbers between 0 and 255. I need to write them into a file and read them back later. What is the best solution?
For me the best solution is to write them in binary (8 bits each). I wrote the numbers into a txt file and now it looks like 1011111010111110....., so there is no question where each number starts and ends.
How am I supposed to read them from the file?
Using C++.
It's not a good idea to write bit values like 1 and 0 to a text file: the file will be 8 times bigger, since every character in a text file takes at least 1 byte and 1 byte = 8 bits. You should store bytes instead; a value 0-255 is exactly one byte, so your file will be 640*480 bytes instead of 640*480*8. If you need the individual bits, use the bitwise operators of your programming language; reading whole bytes is much easier. Use a binary file for saving your data.
Presumably you have some sort of data structure representing your image, which somewhere inside holds the actual data:
class pixmap
{
public:
// stuff...
private:
std::unique_ptr<std::uint8_t[]> data;
};
So you can add a new constructor which takes a filename and reads bytes from that file:
pixmap(const std::string& filename)
{
constexpr int SIZE = 640 * 480;
// Open an input file stream and set it to throw exceptions:
std::ifstream file;
file.exceptions(std::ios_base::badbit | std::ios_base::failbit);
file.open(filename.c_str(), std::ios_base::binary);
// Create a unique ptr to hold the data: this will be cleaned up
// automatically if file reading throws
std::unique_ptr<std::uint8_t[]> temp(new std::uint8_t[SIZE]);
// Read SIZE bytes from the file
file.read(reinterpret_cast<char*>(temp.get()), SIZE);
// If we get to here, the read worked, so we move the temp data we've just read
// into where we'd like it
data = std::move(temp); // or std::swap(data, temp) if you prefer
}
I realise I've assumed some implementation details here (you might not be using a std::unique_ptr to store the underlying image data, though you probably should be) but hopefully this is enough to get you started.
You can print a number between 0 and 255 as a char value in the file.
See the code below. In this example I am printing the integer 70 as a char, so it prints 'F' on the console.
Similarly, you can read it back as a char and then convert the char to an integer.
#include <stdio.h>
int main()
{
int i = 70;
char dig = (char)i;
printf("%c", dig);
return 0;
}
This way you can restrict the file size.

Function to determine whether or not a downloaded file is identical to an existing one

I'm developing a linux-program, that is supposed to parse a file downloaded from another computer or the internet, and collect information from that file. The program also has to re-download the file by routine, every n days/hours/minutes/whatever, and parse it again to keep updated in case the file has changed.
However, the process of parsing the file could require a lot of resources. Thus, I would like a function to check if the file has changed since last time it was downloaded. I imagine something like this example:
int get_checksum(char *filename) {
// New prototype, if no such function already exists in standard C-libraries
int result; // Or char/float/whatever
// ...
return result;
}
int main(void) {
char filename[] = { "foo.dat" };
char file_url[] = { "http://example.com/foo.dat" };
int old_checksum; // Or char/float/whatever
int new_checksum; // Or char/float/whatever
// ...
// Now assume that old_checksum has a value from before:
dl_file(filename, file_url); // Some prototype for downloading the file
if ((new_checksum = get_checksum(filename)) == -1) {
// Badness
}
else {
if (new_checksum != old_checksum) {
old_checksum = new_checksum;
// Parse the file
}
else {
// Do nothing
}
}
// ...
}
Q1: Is there such a function as get_checksum (from the example above) available in standard C/C++ libraries?
Q2: If not: What is the best way to achieve this purpose?
There is no need for:
- a very advanced function
- encrypted or secured checksums
- the ability to compare a new file against files older than the last one, since the new downloaded file will always overwrite the older one
You can use the stat() function. It can give you access to the file parameters like last access time, time of last modification, file size etc:
struct stat {
dev_t st_dev; /* ID of device containing file */
ino_t st_ino; /* inode number */
mode_t st_mode; /* protection */
nlink_t st_nlink; /* number of hard links */
uid_t st_uid; /* user ID of owner */
gid_t st_gid; /* group ID of owner */
dev_t st_rdev; /* device ID (if special file) */
off_t st_size; /* total size, in bytes */
blksize_t st_blksize; /* blocksize for file system I/O */
blkcnt_t st_blocks; /* number of 512B blocks allocated */
time_t st_atime; /* time of last access */
time_t st_mtime; /* time of last modification */
time_t st_ctime; /* time of last status change */
};
But you need search (execute) permission on every directory in the path to the file you use it on.
man page
There was nothing built into the C++ language until std::hash<> in C++11, which is very simple but may be appropriate for your needs.
Last I checked there is nothing at all in Boost (the most common C++ extension library). The reasoning is discussed here, but may be dated:
http://www.gamedev.net/topic/528553-why-doesnt-boost-have-a-cryptographic-hash-library/
So, your best bet is:
std::hash with the file contents.
Or something like the following could be of use saved into a simple header and linked:
http://www.zedwood.com/article/cpp-md5-function
Or you could get a library such as OpenSSL or Crypto++.
You could do an XOR hash, in which you just xor successive blocks of unsigned ints/longs, but this has problems with collisions. For example, if the file is mostly chars, then the majority of the bytes will be in the ranges of normal ASCII/Unicode chars, so there will be a lot of unused key space.
For a standard implementation, you could read the file into a string and use std::hash from C++11. http://en.cppreference.com/w/cpp/utility/hash
The following is an example of the first method:
unsigned int hash(const vector<char>& file) {
    unsigned int result = 0; // must be initialized before XOR-ing into it
    const unsigned int* arr = (const unsigned int*)file.data();
    for (size_t i = 0; i < file.size() / sizeof(unsigned int); i++)
        result ^= arr[i];
    return result;
}
You just have to read the file into the vector.
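For the std::hash route mentioned above, a minimal sketch might look like this (file_hash is a name invented here):

```cpp
#include <fstream>
#include <functional>
#include <sstream>
#include <string>

// Slurp the whole file into a string and hash it with std::hash.
// Note: std::hash gives no cross-platform or cross-version stability
// guarantee, so only compare hashes computed by the same build of the
// program -- which is fine here, since the old checksum lives in memory
// between downloads.
std::size_t file_hash(const std::string& filename)
{
    std::ifstream file(filename, std::ios_base::binary);
    std::ostringstream contents;
    contents << file.rdbuf();   // read the entire file
    return std::hash<std::string>{}(contents.str());
}
```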

Best way to store strings of known maximum length in a file for fast loading into vector<string> in C++

I've got a big amount of text data which I need to save to a file for later reprocessing. The data are stored in a table-like vector<vector<string>> - every record (outer vector) has the same number of attributes (inner vector). So, going through the vector I can find the maximum length of every attribute in the table and the count of records. Now I have to write these data to a file (it can be binary) in such a way that I will be able to load them back into vector<vector<string>> very fast. It doesn't matter how much time the writing takes, but reading into the vector must be as fast as possible.
Since the data will be processed record by record, the whole file need not be loaded into memory. But for fast reading I want to use a 256 MB or 512 MB buffer.
So for now I implemented this in this way:
The data are stored in two files - a description file and a data file. The description file contains the count of records, the count of attributes, and the maximum length of every attribute. The data file is a binary file of chars. There are no value or record separators, just values. Every value of a given attribute has the same length, so if some value is shorter than the maximum length, the remaining chars are null characters '\0'.
Then I read a chunk of the file into a char array buffer (256 MB or 512 MB) with std::fread. When the application calls the function vector getNext(), I read the chunk of chars from the buffer (because I know the length of every attribute) and append every char to the corresponding string to create the vector.
But this way seems not so fast for my purpose when I need to parse a big count of records in a loop from the buffer to the vector. Is there a better way to do all this?
This part of code is parsing chars from buffer to values:
string value;
vector<string> record;
int pos = bfrIndex(); // returns current position in buffer. position of values of next record
for(unsigned int i = 0; i < d.colSize.size(); i++) { // d.colSize holds each attribute's length
value.clear();
value.reserve(d.colSize[i]);
for(unsigned int j = pos; j < pos + d.colSize[i]; j++) {
if (buffer[j] == '\0') break;
value += buffer[j];
}
record.push_back(value);
pos += d.colSize[i]; // set position in buffer to next value
}
return record;
I'd consider a binary approach that uses the method employed in Doom's .wad files, i.e. a directory with the length and file offset of each resource, followed by the resources themselves. With a small amount of overhead for the directory, you get instant knowledge of both where to find each string and how long each one is.
vector<vector<string> > is a 3D character "cube" where every dimension varies in size along the others. Unless you are able to predict each size, you risk reading one by one and reallocating every time.
Fast reading happens when you can load up the data all at once and then define how to split it. The data structure would probably be a single string, plus a vector<vector<range> > where range is a std::pair of std::string::const_iterator.
The problem here is that you cannot manipulate the strings, since they are packed tightly together.
A second option is to keep the dynamic nature of vector<vector<string> >, but store the data so that each size can be read before the data themselves, so that you can resize the vectors and then read the content into their components.
In pseudocode:
template<class Stream, class Container>
void save(const Container& c, Stream& s)
{ s.write(c.size()); for(auto& e: c) save(e, s); }
template<class Stream, class Container>
void load(Container& c, Stream& s)
{
int sz = 0; s.read(sz); c.resize(sz);
for(auto& i: c) load(i, s);
}
Of course, specialized for string-s so that saving/loading a string actually writes/reads its own chars.