How to get rid of an if-else if-else chain - C++

For example, I have some arrays of filenames:
const char *arr[N] = {"FILENAME0", "FILENAME1", "FILENAME2", "FILENAME3", "FILENAME4", ...};
How can I write a function that, given N, will fopen and then fclose N files?
switch-case and if-else chains are straightforward, but they require a lot of conditions, and N would have to be known in advance (N is passed at runtime from stdin).
A plain for loop is not suitable here because it would open and close the files one at a time. I want the function to fopen all N files at the beginning, keep all N file pointers available in memory, and only then close the N files.
I expect that if N == 1 function will behave like:
int func()
{
    FILE *fp = fopen(arr[0], "r");
    fclose(fp);
    return 0;
}
or if N == 3:
int func()
{
    FILE *fp = fopen(arr[0], "r");
    FILE *fp1 = fopen(arr[1], "r");
    FILE *fp2 = fopen(arr[2], "r");
    fclose(fp);
    fclose(fp1);
    fclose(fp2);
    return 0;
}

Just store your FILE*s into a std::vector and close them with a second loop:
void func(const std::vector<std::string>& filenames) {
    std::vector<FILE*> fds;
    for (const std::string& filename : filenames) {
        fds.push_back(std::fopen(filename.c_str(), "w"));
    }
    // Work with the file pointers however you want
    for (FILE* fd : fds) {
        std::fclose(fd);
    }
}
If you do anything that could throw an exception between when you open the files and when you close them then you may want to use an exception-safe wrapper, rather than closing the FILE*s manually:
void func(const std::vector<std::string>& filenames) {
    std::vector<std::unique_ptr<FILE, int(*)(FILE*)>> fds;
    for (const std::string& filename : filenames) {
        fds.emplace_back(std::fopen(filename.c_str(), "w"), std::fclose);
    }
    // Work with the files however you want. To get
    // the raw FILE* use fds[i].get()
    // std::unique_ptr will call its deleter (std::fclose in this case)
    // on its managed pointer in its destructor, so there's no need to
    // manually close them
}
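As a usage sketch (the file list below is a made-up example): std::fopen returns a null pointer on failure, so it is worth checking each entry; std::unique_ptr skips its deleter for a null pointer, so failed opens are still safe to hold:

#include <cstdio>
#include <memory>
#include <string>
#include <vector>

int main() {
    // Hypothetical file list; N is simply the vector's size at runtime.
    std::vector<std::string> filenames = {"FILENAME0", "FILENAME1", "FILENAME2"};

    std::vector<std::unique_ptr<FILE, int(*)(FILE*)>> fds;
    for (const std::string& filename : filenames) {
        fds.emplace_back(std::fopen(filename.c_str(), "w"), std::fclose);
        if (!fds.back()) {
            std::perror(filename.c_str()); // fopen failed; this entry stays null
        }
    }
    // All N files are open at the same time here; each unique_ptr
    // calls std::fclose on its FILE* when fds goes out of scope
    // (the deleter is skipped for entries holding a null pointer).
    return 0;
}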

AFAIK, there isn't a way in C++ to shorten FILE *fp = fopen(arr[0], "r"); and so on. You are stuck using the if-else chain for both opening the files and closing them.

Related

Caching images in C++. Using buffer_body or other things instead of file_body?

I have a slightly modified version of this example: https://www.boost.org/doc/libs/develop/libs/beast/example/http/server/async/http_server_async.cpp.
What it does:
Depending on the validity of the request, it returns the requested image or an error.
What I'm going to do:
I want to keep frequently requested images in a local cache, such as an LRU cache, to decrease response time.
What I've tried:
I wanted to use buffer_body instead of file_body, but some difficulties occurred with the response part, so I discarded this idea.
I tried to decode a PNG image into a std::string, thinking that this way I could keep it in a std::unordered_map more easily, but again problems arose with the response part of the code.
Here is the response part:
http::response<http::file_body> res{
    std::piecewise_construct,
    std::make_tuple(std::move(body)),
    std::make_tuple(http::status::ok, req.version())};
res.set(http::field::content_type, "image/png");
res.content_length(size);
res.keep_alive(req.keep_alive());
return send(std::move(res));
If encoding and decoding the image as a string is OK, here is the code where I read it into a string:
std::unordered_map<std::string, std::string> cache;

std::string load_file_contents(const std::string& filepath)
{
    static const size_t MAX_LOAD_DATA_SIZE = 1024 * 1024 * 8; // 8 MBytes
    std::string result;
    static const size_t BUFF_SIZE = 8192; // 8 KBytes
    char buf[BUFF_SIZE];
    FILE* file = fopen(filepath.c_str(), "rb");
    if (file != NULL)
    {
        size_t n;
        while (result.size() < MAX_LOAD_DATA_SIZE)
        {
            n = fread(buf, sizeof(char), BUFF_SIZE, file);
            if (n == 0)
                break;
            result.append(buf, n);
        }
        fclose(file);
    }
    return result;
}
template<class Body, class Allocator, class Send>
void handle_request(
    beast::string_view doc_root,
    http::request<Body, http::basic_fields<Allocator>>&& req,
    Send&& send)
{
    .... // skipping this part not to paste all the code
    if (cache.find(path) == cache.end())
    {
        // if not in cache
        std::ifstream image(path.c_str(), std::ios::binary);
        // not in the cache and it could be opened, so read it in as a binary file
        cache.emplace(path, load_file_contents(path));
    }
    .... // response part (provided above); the response should be taken from the cache
}
ANY HELP WILL BE APPRECIATED! THANK YOU!
Sometimes there is no need to cache these files; for example, in my case changing file_body to vector_body or string_body was enough to cut the response time almost in half.
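A minimal sketch of that string_body idea, reusing the cache and path names from the question (the surrounding setup is assumed); it mirrors the file_body construction shown above but builds the response from the cached bytes:

std::string data = cache[path]; // bytes loaded earlier by load_file_contents()

http::response<http::string_body> res{
    std::piecewise_construct,
    std::make_tuple(std::move(data)),
    std::make_tuple(http::status::ok, req.version())};
res.set(http::field::content_type, "image/png");
res.content_length(res.body().size());
res.keep_alive(req.keep_alive());
return send(std::move(res));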

Creating a text file in a C++ addon of Node.js

I want to know how I can create a file and append data to it in a C++ addon (.cc) file of Node.js.
I have used the code below to do this, but I am not able to find the file "data.txt" on my Ubuntu machine (the reason may be that the code below is not the correct way to create a file, but strangely I haven't received any error or warning at compile time).
FILE *pFileTXT;
pFileTXT = fopen("data.txt", "a+");
const char *c = localReq->strResponse.c_str();
fprintf(pFileTXT, "%s", c); // pass the data as an argument, never as the format string
fclose(pFileTXT);
Node.js relies on libuv, a C library to handle the I/O (asynchronous or not). This allows you to use the event loop.
You'd be interested in this free online book/introduction to libuv: http://nikhilm.github.com/uvbook/index.html
Specifically, there is a chapter dedicated to reading/writing files.
int main(int argc, char **argv) {
    // Open the file in write-only mode and execute the "on_open" callback when it's ready
    uv_fs_open(uv_default_loop(), &open_req, argv[1], O_WRONLY, 0, on_open);
    // Run the event loop.
    uv_run(uv_default_loop());
    return 0;
}

// on_open callback, called when the file is opened
void on_open(uv_fs_t *req) {
    if (req->result != -1) {
        // Specify the "on_write" callback as the last argument
        uv_fs_write(uv_default_loop(), &write_req, 1, buffer, req->result, -1, on_write);
    }
    else {
        fprintf(stderr, "error opening file: %d\n", req->errorno);
    }
    // Don't forget to clean up
    uv_fs_req_cleanup(req);
}

void on_write(uv_fs_t *req) {
    uv_fs_req_cleanup(req);
    if (req->result < 0) {
        fprintf(stderr, "Write error: %s\n", uv_strerror(uv_last_error(uv_default_loop())));
    }
    else {
        // Close the handle once you're done with it
        uv_fs_close(uv_default_loop(), &close_req, open_req.result, NULL);
    }
}
Spend some time reading the book if you want to write C++ for node.js. It's worth it.
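As an aside, if blocking I/O is acceptable in your addon, plain C++ streams also work; one likely reason data.txt seemed to be missing is that a relative path is resolved against the current working directory of the node process, not the addon's source directory. A minimal sketch with an explicit path (the path and function name here are made-up examples):

#include <fstream>
#include <string>

void append_line(const std::string& text) {
    // An absolute path avoids depending on where `node` was started from.
    std::ofstream out("/tmp/data.txt", std::ios::app);
    if (out) {
        out << text << '\n';
    }
}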

Writing (logging) into the same file from different threads, different functions?

In C++, is there any way to make writing to a file thread-safe in the following scenario?
void foo_one() {
    lock(mutex1);
    // open file abc.txt
    // write into file
    // close file
    unlock(mutex1);
}

void foo_two() {
    lock(mutex2);
    // open file abc.txt
    // write into file
    // close file
    unlock(mutex2);
}
In my application (which is multi-threaded), it is likely that foo_one() and foo_two() are executed by two different threads at the same time.
Is there any way to make the above thread-safe?
I have considered using file locks (fcntl and/or lockf), but I am not sure how to use them, because fopen() is used in the application (for performance reasons), and it was stated somewhere that those file locks should not be used with fopen (because it is buffered).
PS: The functions foo_one() and foo_two() are in two different classes, there is no way to share data between them :(, and sadly the design is such that one function cannot call the other.
Add a function for logging.
Both functions call the logging function (which does the appropriate locking).
std::mutex logMutex;

void log(std::string const& msg)
{
    std::lock_guard<std::mutex> lock(logMutex);
    // open("abc.txt");
    // write msg
    // close
}
If you really need a logger, do not roll one yourself by writing into files; use a dedicated logger instead, thus separating that concern from the code you're writing. There are a number of thread-safe loggers: the first one that comes to mind is g2log, and googling further you'll find log4cplus and various minimalist alternatives.
If the essence of functions foo_one() and foo_two() is only to open the file, write something to it, and close it, then use the same mutex to keep them from messing each other up:
void foo_one() {
    lock(foo_mutex);
    // open file abc.txt
    // write into file
    // close file
    unlock(foo_mutex);
}

void foo_two() {
    lock(foo_mutex);
    // open file abc.txt
    // write into file
    // close file
    unlock(foo_mutex);
}
Of course, this assumes these are the only writers. If other threads or processes write to the file, a lock file might be a good idea.
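A concrete C++11 rendering of this, as a minimal sketch (the filename comes from the question; the key point is that every writer locks the same mutex object):

#include <fstream>
#include <mutex>
#include <string>

std::mutex file_mutex; // the one mutex shared by all writers

void write_line(const std::string& line) {
    std::lock_guard<std::mutex> guard(file_mutex);
    std::ofstream out("abc.txt", std::ios::app); // append so writers don't clobber each other
    out << line << '\n';
} // stream flushed and closed, then the mutex is released

Both foo_one() and foo_two() would then call write_line() instead of touching the file directly.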
You could do this: have a struct with a mutex and an ofstream:
struct parser {
    std::ofstream myfile;
    std::mutex lock;
};
Then you can pass this struct (a) to foo1 and foo2 as a void*:
parser *a = new parser();
Initialise the struct, then you can pass it to both functions.
void foo_one(void *a) {
    parser *b = reinterpret_cast<parser *>(a);
    b->lock.lock();
    b->myfile.open("abc.txt");
    // write into file
    b->myfile.close();
    b->lock.unlock();
}
You can do the same for the foo_two function. This will provide a thread-safe means of writing to the same file.
Try this code. I've done this with an MFC console application.
#include "stdafx.h"
#include <mutex>
CWinApp theApp;
using namespace std;
const int size_ = 100; //thread array size
std::mutex mymutex;
void printRailLock(int id) {
printf("#ID :%", id);
lock_guard<std::mutex> lk(mymutex); // <- this is the lock
CStdioFile lastLog;
CString logfiledb{ "_FILE_2.txt" };
CString str;
str.Format(L"%d\n", id);
bool opend = lastLog.Open(logfiledb, CFile::modeCreate | CFile::modeReadWrite | CFile::modeNoTruncate);
if (opend) {
lastLog.SeekToEnd();
lastLog.WriteString(str);
lastLog.Flush();
lastLog.Close();
}
}
int main()
{
    int nRetCode = 0;
    HMODULE hModule = ::GetModuleHandle(nullptr);
    if (hModule != nullptr)
    {
        if (!AfxWinInit(hModule, nullptr, ::GetCommandLine(), 0))
        {
            wprintf(L"Fatal Error: MFC initialization failed\n");
            nRetCode = 1;
        }
        else
        {
            std::thread threads[size_];
            for (int i = 0; i < size_; ++i) {
                threads[i] = std::thread(printRailLock, i + 1);
                Sleep(1000);
            }
            for (auto& th : threads) { th.join(); }
        }
    }
    else
    {
        wprintf(L"Fatal Error: GetModuleHandle failed\n");
        nRetCode = 1;
    }
    return nRetCode;
}
References:
http://www.cplusplus.com/reference/mutex/lock_guard/
http://www.cplusplus.com/reference/mutex/mutex/
http://devoptions.blogspot.com/2016/07/multi-threaded-file-writer-in-c_14.html

Why does this write() operation just write a single line

I have a small problem with my code below. I call this->write_file(this->d_filename); in my class from within a state machine. The case in the loop gets hit a couple of times; however, I only get one line of entries in the CSV file I want to produce.
I'm not sure why this is. I open the file with this->open(filename) in my write function, which returns the file descriptor. The file is opened with O_TRUNC, and then via if ((d_new_fp = fdopen(fd, d_is_binary ? "wba" : "w")) == NULL), where the "wba" refers to write, binary and append. Therefore I expect more than one line.
The fprintf statement writes my data. It also has a \n.
fprintf(d_new_fp, "%s, %d %d\n", this->d_packet, this->d_lqi, this->d_lqi_sample_count);
I simply can't figure out why my file doesn't grow.
Best,
Marius
inline bool
cogra_ieee_802_15_4_sink::open(const char *filename)
{
    gruel::scoped_lock guard(d_mutex); // hold mutex for duration of this function
    // we use the open system call to get access to the O_LARGEFILE flag.
    int fd;
    if ((fd = ::open(filename, O_WRONLY | O_CREAT | O_TRUNC | OUR_O_LARGEFILE,
                     0664)) < 0)
    {
        perror(filename);
        return false;
    }
    if (d_new_fp)
    { // if we've already got a new one open, close it
        fclose(d_new_fp);
        d_new_fp = 0;
    }
    if ((d_new_fp = fdopen(fd, d_is_binary ? "wba" : "w")) == NULL)
    {
        perror(filename);
        ::close(fd);
    }
    d_updated = true;
    return d_new_fp != 0;
}

inline void
cogra_ieee_802_15_4_sink::close()
{
    gruel::scoped_lock guard(d_mutex); // hold mutex for duration of this function
    if (d_new_fp)
    {
        fclose(d_new_fp);
        d_new_fp = 0;
    }
    d_updated = true;
}

inline void
cogra_ieee_802_15_4_sink::write_file(const char* filename)
{
    if (this->open(filename))
    {
        fprintf(d_new_fp, "%s, %d %d\n", this->d_packet, this->d_lqi,
                this->d_lqi_sample_count);
        if (true)
        {
            fprintf(stderr, "Writing file %x\n", this->d_packet);
        }
    }
}
Description for O_TRUNC from man open:
If the file already exists and is a regular file and the open mode allows writing (i.e., is O_RDWR or O_WRONLY) it will be truncated to length 0. If the file is a FIFO or terminal device file, the O_TRUNC flag is ignored. Otherwise the effect of O_TRUNC is unspecified.
The file is opened in each call to write_file(), removing anything that was previously written. Replace O_TRUNC with O_APPEND.
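A minimal sketch of that one-flag change in the asker's open() call (everything else stays the same):

// Open for writing, create if needed, and append instead of truncating,
// so each write_file() call adds a line rather than wiping the file.
if ((fd = ::open(filename, O_WRONLY | O_CREAT | O_APPEND | OUR_O_LARGEFILE,
                 0664)) < 0)
{
    perror(filename);
    return false;
}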

Libzip - read file contents from zip

I am using libzip to work with zip files, and everything goes fine until I need to read a file from the zip.
I just need to read whole text files, so it would be great to achieve something like PHP's "file_get_contents" function.
To read a file from the zip there is the function int zip_fread(struct zip_file *file, void *buf, zip_uint64_t nbytes).
The main problem is that I don't know what size buf must be and how many nbytes I must read (I need to read the whole file, but files have different sizes). I could just allocate a buffer big enough to fit them all and read that much, or loop until zip_fread returns -1, but I don't think either is a sensible option.
You can try using zip_stat to get file size.
http://linux.die.net/man/3/zip_stat
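A minimal sketch of that approach, assuming the archive has already been opened as a zip* handle (the helper name is made up; error handling is kept short):

#include <zip.h>
#include <string>

// Read a whole file out of an already-opened archive, much like
// PHP's file_get_contents. Returns an empty string on failure.
std::string zip_get_contents(zip* archive, const char* name)
{
    struct zip_stat st;
    zip_stat_init(&st);
    if (zip_stat(archive, name, 0, &st) != 0)
        return "";

    struct zip_file* zf = zip_fopen(archive, name, 0);
    if (!zf)
        return "";

    std::string contents(st.size, '\0'); // st.size is the uncompressed size
    if (zip_fread(zf, &contents[0], st.size) < 0)
        contents.clear();
    zip_fclose(zf);
    return contents;
}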
I haven't used the libzip interface, but from what you write it seems to look very similar to a file interface: once you have a handle to the stream, you keep calling zip_fread() until the function returns an error (or, possibly, fewer bytes than requested). The buffer you pass in is just a reasonably sized temporary buffer through which the data is communicated.
Personally I would probably create a stream buffer for this so once the file in the zip archive is set up it can be read using the conventional I/O stream methods. This would look something like this:
struct zipbuf: std::streambuf {
    zipbuf(???): file_(???) {}
private:
    zip_file* file_;
    enum { s_size = 8196 };
    char buffer_[s_size];
    int underflow() {
        int rc(zip_fread(this->file_, this->buffer_, s_size));
        this->setg(this->buffer_, this->buffer_,
                   this->buffer_ + std::max(0, rc));
        return this->gptr() == this->egptr()
            ? traits_type::eof()
            : traits_type::to_int_type(*this->gptr());
    }
};
With this stream buffer you should be able to create an std::istream and read the file into whatever structure you need:
zipbuf buf(???);
std::istream in(&buf);
...
Obviously, this code isn't tested or compiled. However, when you replace the ??? with whatever is needed to open the zip file, I'd think this should pretty much work.
Here is a routine I wrote that extracts data from a zip-stream and prints out a line at a time. This uses zlib, not libzip, but if this code is useful to you, feel free to use it:
/* compile with the -lz option in order to link in the zlib library */
#include <zlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define Z_CHUNK 2097152

int unzipFile(const char *fName)
{
    z_stream zStream;
    char *zRemainderBuf = malloc(1);
    unsigned char zInBuf[Z_CHUNK];
    unsigned char zOutBuf[Z_CHUNK];
    char zLineBuf[Z_CHUNK];
    unsigned int zHave, zBufIdx, zBufOffset, zOutBufIdx;
    int zError;
    FILE *inFp = fopen(fName, "rbR");
    if (!inFp) { fprintf(stderr, "could not open file: %s\n", fName); return EXIT_FAILURE; }
    zStream.zalloc = Z_NULL;
    zStream.zfree = Z_NULL;
    zStream.opaque = Z_NULL;
    zStream.avail_in = 0;
    zStream.next_in = Z_NULL;
    zError = inflateInit2(&zStream, (15+32)); /* cf. http://www.zlib.net/manual.html */
    if (zError != Z_OK) { fprintf(stderr, "could not initialize z-stream\n"); return EXIT_FAILURE; }
    *zRemainderBuf = '\0';
    do {
        zStream.avail_in = fread(zInBuf, 1, Z_CHUNK, inFp);
        if (zStream.avail_in == 0)
            break;
        zStream.next_in = zInBuf;
        do {
            zStream.avail_out = Z_CHUNK;
            zStream.next_out = zOutBuf;
            zError = inflate(&zStream, Z_NO_FLUSH);
            switch (zError) {
                case Z_NEED_DICT: { fprintf(stderr, "Z-stream needs dictionary!\n"); return EXIT_FAILURE; }
                case Z_DATA_ERROR: { fprintf(stderr, "Z-stream suffered data error!\n"); return EXIT_FAILURE; }
                case Z_MEM_ERROR: { fprintf(stderr, "Z-stream suffered memory error!\n"); return EXIT_FAILURE; }
            }
            zHave = Z_CHUNK - zStream.avail_out;
            zOutBuf[zHave] = '\0';
            /* copy remainder buffer onto line buffer, if not NULL */
            if (zRemainderBuf) {
                strncpy(zLineBuf, zRemainderBuf, strlen(zRemainderBuf));
                zBufOffset = strlen(zRemainderBuf);
            }
            else
                zBufOffset = 0;
            /* read through zOutBuf for newlines */
            for (zBufIdx = zBufOffset, zOutBufIdx = 0; zOutBufIdx < zHave; zBufIdx++, zOutBufIdx++) {
                zLineBuf[zBufIdx] = zOutBuf[zOutBufIdx];
                if (zLineBuf[zBufIdx] == '\n') {
                    zLineBuf[zBufIdx] = '\0';
                    zBufIdx = -1;
                    fprintf(stdout, "%s\n", zLineBuf);
                }
            }
            /* copy some of line buffer onto the remainder buffer, if there are remnants from the z-stream */
            if (strlen(zLineBuf) > 0) {
                if (strlen(zLineBuf) > strlen(zRemainderBuf)) {
                    /* to minimize the chance of doing another (expensive) malloc, we double the length of zRemainderBuf */
                    free(zRemainderBuf);
                    zRemainderBuf = malloc(strlen(zLineBuf) * 2);
                }
                strncpy(zRemainderBuf, zLineBuf, zBufIdx);
                zRemainderBuf[zBufIdx] = '\0';
            }
        } while (zStream.avail_out == 0);
    } while (zError != Z_STREAM_END);
    /* close gzip stream */
    zError = inflateEnd(&zStream);
    if (zError != Z_OK) {
        fprintf(stderr, "could not close z-stream!\n");
        return EXIT_FAILURE;
    }
    if (zRemainderBuf)
        free(zRemainderBuf);
    fclose(inFp);
    return EXIT_SUCCESS;
}
With any streaming you should consider the memory requirements of your app.
A large buffer is good, but you do not want to tie up too much memory, depending on your RAM usage requirements. A small buffer size will require you to call your read and write operations more times, which is expensive in terms of time performance. So you need to find a buffer size between those two extremes.
Typically I use a size of 4096 bytes (4 KB), which is sufficiently large for many purposes. If you want, you can go larger. But at the worst-case size of 1 byte, you will be waiting a long time for your reads to complete.
So to answer your question, there is no "right" size to pick. It is a choice you should make so that the speed of your app and the memory it requires are what you need.
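To illustrate the tradeoff, here is a minimal sketch of a read loop with a tunable buffer size (the 4096-byte default follows the suggestion above; the function is a made-up example):

#include <cstddef>
#include <cstdio>
#include <vector>

// Count the bytes in a file, reading buf_size bytes per fread call.
// A larger buf_size means fewer calls; a smaller one means less memory.
long count_bytes(const char *path, std::size_t buf_size = 4096)
{
    std::vector<char> buf(buf_size);
    FILE *fp = std::fopen(path, "rb");
    if (!fp) return -1;

    long total = 0;
    std::size_t n;
    while ((n = std::fread(buf.data(), 1, buf.size(), fp)) > 0)
        total += (long)n;

    std::fclose(fp);
    return total;
}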