asl logger change filter setting during program runtime (or any other configuration) - c++

I'd like to change the ASL logger filter setting from an external utility.
However, I want to do it with as little performance degradation as possible.
Currently I do so by creating a special file to hold the logger level parameter
and changing it with the following command:
echo LOGGER_LEVEL > /etc/syslog_MODULE.conf
In the application that uses the logger, a dedicated thread mmaps the file into the virtual memory space and checks at a fixed interval whether a new logger level should be committed.
This is the mapping section:
#include <fcntl.h>     /* open */
#include <stdio.h>     /* perror */
#include <stdlib.h>    /* exit */
#include <sys/mman.h>  /* mmap */
#include <sys/stat.h>  /* stat */

const char * mmap_config()
{
    int fd;
    char *data;
    struct stat sbuf;

    if ((fd = open("/tmp/conf", O_RDONLY)) == -1) {
        perror("open");
        exit(1);
    }
    if (stat("/tmp/conf", &sbuf) == -1) {
        perror("stat");
        exit(1);
    }
    /* map the file read-only and shared, so later writes to it are visible here */
    data = (char *)mmap(NULL, sbuf.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    return data;
}
And this is the loop that looks for changes in the logger level:
int main() {
    const char* x = mmap_config();
    while (1) {
        int y = atoi(x);                                          /* parse the level written by the external utility */
        asl_set_filter(log_asl_client, ASL_FILTER_MASK_UPTO(y));  /* apply the new filter mask */
        sleep(5);
    }
}
Perhaps there's a simpler way to do this with a smaller performance footprint?
Thanks

Related

What build environment differences are causing the mesa GBM library to behave differently

I am working on a basic demo application (kmscube) to do rendering via the DRM and GBM APIs. My application uses the TI variation of libgbm (mesa generic buffer management). TI provides the source, GNU autotools files, and a build environment (Yocto) to compile it. I compiled GBM in their environment and it works perfectly.
I did not want to use Yocto, so I moved the source into my own build system (buildroot) and compiled it. Everything compiles correctly (using the same autotools files), but when it runs, I get a segmentation fault when I call gbm_surface_create. I debugged as far as I could and found that the program steps into the gbm library but fails on return gbm->surface_create(gbm, width, height, format, flags);
What would cause a library to run differently when compiled in different environments? Are there some really important compiler or linker flags that I could be missing?
This is the code from the graphics application (in kmscube.c)
gbm.dev = gbm_create_device(drm.fd);
gbm.surface = gbm_surface_create(gbm.dev,
        drm.mode[DISP_ID]->hdisplay, drm.mode[DISP_ID]->vdisplay,
        drm_fmt_to_gbm_fmt(drm.format[DISP_ID]),
        GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
if (!gbm.surface) {
    printf("failed to create gbm surface\n");
    return -1;
}
return 0;
This is the call stack that creates the device (in gbm.c)
GBM_EXPORT struct gbm_device *
gbm_create_device(int fd)
{
    struct gbm_device *gbm = NULL;
    struct stat buf;

    if (fd < 0 || fstat(fd, &buf) < 0 || !S_ISCHR(buf.st_mode)) {
        fprintf(stderr, "gbm_create_device: invalid fd: %d\n", fd);
        return NULL;
    }

    if (device_num == 0)
        memset(devices, 0, sizeof devices);

    gbm = _gbm_create_device(fd);
    if (gbm == NULL)
        return NULL;

    gbm->dummy = gbm_create_device;
    gbm->stat = buf;
    gbm->refcount = 1;

    if (device_num < ARRAY_SIZE(devices)-1)
        devices[device_num++] = gbm;

    return gbm;
}
(continued in backend.c)
struct gbm_device *
_gbm_create_device(int fd)
{
    const struct gbm_backend *backend = NULL;
    struct gbm_device *dev = NULL;
    int i;
    const char *b;

    b = getenv("GBM_BACKEND");
    if (b)
        backend = load_backend(b);

    if (backend)
        dev = backend->create_device(fd);

    for (i = 0; i < ARRAY_SIZE(backends) && dev == NULL; ++i) {
        backend = load_backend(backends[i]);
        if (backend == NULL)
            continue;
        fprintf(stderr, "found valid GBM backend : %s\n", backends[i]);
        dev = backend->create_device(fd);
    }

    return dev;
}
static const void *
load_backend(const char *name)
{
    char path[PATH_MAX];
    void *module;
    const char *entrypoint = "gbm_backend";

    if (name[0] != '/')
        snprintf(path, sizeof path, MODULEDIR "/%s", name);
    else
        snprintf(path, sizeof path, "%s", name);

    module = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
    if (!module) {
        fprintf(stderr, "failed to load module: %s\n", dlerror());
        return NULL;
    }
    else {
        fprintf(stderr, "loaded module : %s\n", name);
    }

    return dlsym(module, entrypoint);
}
And here is the code that throws a segmentation fault (in gbm.c)
GBM_EXPORT struct gbm_surface *
gbm_surface_create(struct gbm_device *gbm,
                   uint32_t width, uint32_t height,
                   uint32_t format, uint32_t flags)
{
    return gbm->surface_create(gbm, width, height, format, flags);
}
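If the crash really is on that last line, one hedged way to narrow it down (purely a debugging sketch, not the library's actual code) is to guard the vtable entry inside gbm_surface_create, so that a surface_create pointer the backend never filled in is reported instead of dereferenced. If the guard never fires, the fault is inside the backend's own surface_create:
GBM_EXPORT struct gbm_surface *
gbm_surface_create(struct gbm_device *gbm,
                   uint32_t width, uint32_t height,
                   uint32_t format, uint32_t flags)
{
    /* hypothetical debugging guard: distinguishes "the backend never set
     * surface_create" from "the backend's surface_create itself crashes" */
    if (!gbm || !gbm->surface_create) {
        fprintf(stderr, "gbm_surface_create: backend did not provide surface_create\n");
        return NULL;
    }
    return gbm->surface_create(gbm, width, height, format, flags);
}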

Reading all of a struct or nothing from a pipe/socket in Linux?

I've got a subprocess that I've popened that outputs fixed-sized structs containing some status information. My plan is to have a separate thread that reads from the stdout of that process to pull in the data as it comes.
I've got to check a flag periodically to make sure the program should still be running so I can shut down cleanly, which means I have to set the pipe to non-blocking and run a loop piecing the status message together.
Is there a canonical way I can tell Linux "either read this entire amount or nothing before a timeout", that way I'll be able to check my flag, but I don't have to handle the boilerplate of reading the structure piece meal?
Alternatively, is there a way to push data back into a pipe? I could try to read the whole thing, and if it times out before it's all ready, push what I have back in and try again in a bit.
I've also written my own popen (so I can grab stdin and stdout), so I'm totally OK using a socket rather than a pipe if that helps.
Here's what I ended up doing for anyone that's curious. I just wrote a class that wraps up the file descriptor and message size and gives me the "all-or-none" behavior I want.
#include <cerrno>
#include <unistd.h>

struct aonreader {
    aonreader(int fd, ssize_t size) {
        fd_ = fd;
        size_ = size;      // was "size_ = size_;", which left the size uninitialized
        nread_ = 0;
        nremain_ = size_;
    }

    ssize_t read(void *dst) {
        // note ::read (the system call) and fd_ (the member), not fd
        ssize_t ngot = ::read(fd_, (char*)dst + nread_, nremain_);
        if (ngot < 0) {
            if (errno != EAGAIN && errno != EWOULDBLOCK) {
                return -1; // error
            }
        } else {
            nread_ += ngot;
            nremain_ -= ngot;
            // if we read a whole struct
            if (nremain_ == 0) {
                nread_ = 0;
                nremain_ = size_;
                return size_;
            }
        }
        return 0;
    }

private:
    int fd_;
    ssize_t size_;
    ssize_t nread_;
    ssize_t nremain_;
};
Which can then be used something like this:
thing_you_want thing;
aonreader buffer(fd, sizeof(thing_you_want));

while (running) {
    ssize_t ngot = buffer.read(&thing);   // ssize_t, so the error case below can actually be negative
    if (ngot == sizeof(thing_you_want)) {
        <handle thing>
    } else if (ngot < 0) {
        <error, handle errno>
    }
    <otherwise loop and check running flag>
}
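If a socket ends up being used instead of a pipe, a hedged alternative sketch (assuming a non-blocking stream socket and a hypothetical thing_you_want struct) is to peek with MSG_PEEK first, so partial reads never have to be tracked at all: the bytes are only consumed once a whole struct is queued.
#include <sys/socket.h>
#include <cerrno>

// Returns sizeof(T) if a full record was consumed, 0 if a full record has not
// arrived yet, and -1 on error. fd is assumed to be a stream socket.
template <typename T>
ssize_t read_whole_or_nothing(int fd, T* out) {
    // MSG_PEEK looks at the queued bytes without removing them from the socket
    ssize_t avail = recv(fd, out, sizeof(T), MSG_PEEK | MSG_DONTWAIT);
    if (avail < 0)
        return (errno == EAGAIN || errno == EWOULDBLOCK) ? 0 : -1;
    if (static_cast<size_t>(avail) < sizeof(T))
        return 0;                       // not everything is there yet; check the flag and retry later
    return recv(fd, out, sizeof(T), 0); // full struct is queued, now actually consume it
}
The calling loop stays the same as above: anything other than sizeof(thing_you_want) or -1 just means "loop again and check the running flag".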

Why does Thrift's TBinaryProtocol read the received data as something more complex than just size + content?

The Thrift version is 0.8. I'm implementing my own Thrift transport layer in the client with C++; the protocol is Binary. My server uses the framed transport and binary protocol, and that side works for sure. I get a "No more data to read" exception in the readAll function in TTransport.h. Tracing the call chain, I found this in TBinaryProtocol.tcc:
template <class Transport_>
uint32_t TBinaryProtocolT<Transport_>::readMessageBegin(std::string& name,
                                                        TMessageType& messageType,
                                                        int32_t& seqid) {
    uint32_t result = 0;
    int32_t sz;
    result += readI32(sz); // <-- sz should be the whole return buf len without the first 4 bytes?
    if (sz < 0) {
        // Check for correct version number
        int32_t version = sz & VERSION_MASK;
        if (version != VERSION_1) {
            throw TProtocolException(TProtocolException::BAD_VERSION, "Bad version identifier");
        }
        messageType = (TMessageType)(sz & 0x000000ff);
        result += readString(name);
        result += readI32(seqid);
    } else {
        if (this->strict_read_) {
            throw TProtocolException(TProtocolException::BAD_VERSION, "No version identifier... old protocol client in strict mode?");
        } else {
            // Handle pre-versioned input
            int8_t type;
            result += readStringBody(name, sz);
            result += readByte(type); // <-- No more data to read in buf, so exception here
            messageType = (TMessageType)type;
            result += readI32(seqid);
        }
    }
    return result;
}
So my question is: with framed transport, the data should ONLY be size + content (result, seqid, function name, ...), and that's exactly what my server packs. My client then reads the first 4 bytes as the length and uses it to fetch the whole content, so what else is left to read now?
Here is my client code, which I believe is quite simple. The most important parts are the ones I have emphasized.
class CthriftCli
{
    ......
    TMemoryBuffer write_buf_;
    TMemoryBuffer read_buf_;
    enum CthriftConn::State state_;
    uint32_t frameSize_;
};
void CthriftCli::OnConn4SgAgent(const TcpConnectionPtr& conn)
{
    if (conn->connected()) {
        conn->setTcpNoDelay(true);
        wp_tcp_conn_ = boost::weak_ptr<muduo::net::TcpConnection>(conn);
        if (unlikely(!(sp_countdown_latch_4_conn_.get()))) {
            return;   // the function is void, so a bare return (the original "return 0;" would not compile)
        }
        sp_countdown_latch_4_conn_->countDown();
    }
}
void CthriftCli::OnMsg4SgAgent(const muduo::net::TcpConnectionPtr& conn,
                               muduo::net::Buffer* buffer,
                               muduo::Timestamp receiveTime)
{
    bool more = true;
    while (more)
    {
        if (state_ == CthriftConn::kExpectFrameSize)
        {
            if (buffer->readableBytes() >= 4)
            {
                frameSize_ = static_cast<uint32_t>(buffer->peekInt32());
                state_ = CthriftConn::kExpectFrame;
            }
            else
            {
                more = false;
            }
        }
        else if (state_ == CthriftConn::kExpectFrame)
        {
            if (buffer->readableBytes() >= frameSize_)
            {
                uint8_t* buf = reinterpret_cast<uint8_t*>((const_cast<char*>(buffer->peek())));
                // <-- all the return buf, including the first size bytes
                read_buf_.resetBuffer(buf, sizeof(int32_t) + frameSize_, TMemoryBuffer::COPY);
                if (unlikely(!(sp_countdown_latch_.get()))) {
                    return;
                }
                sp_countdown_latch_->countDown();
                buffer->retrieve(sizeof(int32_t) + frameSize_);
                state_ = CthriftConn::kExpectFrameSize;
            }
            else
            {
                more = false;
            }
        }
    }
}
uint32_t CthriftCli::read(uint8_t* buf, uint32_t len) {
    if (read_buf_.available_read() == 0) {
        if (unlikely(!(sp_countdown_latch_.get()))) {
            return 0;
        }
        sp_countdown_latch_->wait();
    }
    return read_buf_.read(buf, len);
}

void CthriftCli::readEnd(void) {
    read_buf_.resetBuffer();
}

void CthriftCli::write(const uint8_t* buf, uint32_t len) {
    return write_buf_.write(buf, len);
}

uint32_t CthriftCli::writeEnd(void)
{
    uint8_t* buf;
    uint32_t size;
    write_buf_.getBuffer(&buf, &size);
    if (unlikely(!(sp_countdown_latch_4_conn_.get()))) {
        return 0;
    }
    sp_countdown_latch_4_conn_->wait();
    TcpConnectionPtr sp_tcp_conn(wp_tcp_conn_.lock());
    if (sp_tcp_conn && sp_tcp_conn->connected()) {
        muduo::net::Buffer send_buf;
        send_buf.appendInt32(size);
        send_buf.append(buf, size);
        sp_tcp_conn->send(&send_buf);
        write_buf_.resetBuffer(true);
    } else {
#ifdef MUDUO_LOG
        MUDUO_LOG_ERROR << "conn error, NOT send";
#endif
    }
    return size;
}
So could you please give me some hints about this?
You seem to have mixed up the concepts of 'transport' and 'protocol'.
The Binary Protocol describes how data should be encoded (the protocol layer).
The Framed Transport describes how encoded data should be delivered (prefixed by the message length) - the transport layer.
The important part: the Binary Protocol is not (and should not be) aware of any transport issues. So if you add the frame size while encoding at the transport level, you should also interpret the incoming size before passing the read bytes to the Binary Protocol for decoding. You can (for example) use it to read all the required bytes at once, etc.
After a quick look through your code: try reading the 4 bytes of frame size instead of peeking at them. Those bytes should not be visible outside the transport layer.
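A hedged sketch of that change against the OnMsg4SgAgent code above (it assumes muduo's Buffer provides readInt32()/retrieve(), and that frameSize_ counts only the payload) consumes the 4-byte header so that only the Thrift payload ever reaches read_buf_:
// Sketch only: consume the frame header instead of copying it into read_buf_.
if (state_ == CthriftConn::kExpectFrameSize)
{
    if (buffer->readableBytes() >= sizeof(int32_t))
    {
        // readInt32() removes the 4 length bytes from the buffer
        frameSize_ = static_cast<uint32_t>(buffer->readInt32());
        state_ = CthriftConn::kExpectFrame;
    }
    else more = false;
}
else if (state_ == CthriftConn::kExpectFrame)
{
    if (buffer->readableBytes() >= frameSize_)
    {
        uint8_t* buf = reinterpret_cast<uint8_t*>(const_cast<char*>(buffer->peek()));
        // hand the Binary Protocol only the payload, without the length prefix
        read_buf_.resetBuffer(buf, frameSize_, TMemoryBuffer::COPY);
        buffer->retrieve(frameSize_);
        state_ = CthriftConn::kExpectFrameSize;
        // ... countdown latch handling as in the original code ...
    }
    else more = false;
}
With this, readMessageBegin sees the payload starting at the version word, which is what the strict binary protocol expects.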

Libzip - read file contents from zip

I am using libzip to work with zip files and everything goes fine until I need to read a file from the zip.
I just need to read whole text files, so it would be great to achieve something like PHP's file_get_contents function.
To read a file from the zip there is the function int zip_fread(struct zip_file *file, void *buf, zip_uint64_t nbytes).
The main problem is that I don't know what size buf must be and how many nbytes I must read (well, I need to read the whole file, but files have different sizes). I could just allocate a buffer big enough to fit them all and read its full size, or do a while loop until zip_fread returns -1, but I don't think that's a sensible option.
You can try using zip_stat to get file size.
http://linux.die.net/man/3/zip_stat
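A hedged sketch of the file_get_contents-style read this enables (archive handling and error checking kept minimal; the entry name is a placeholder):
#include <zip.h>
#include <string>

// Reads one entry of an already-opened zip archive into a std::string,
// using zip_stat to learn the uncompressed size up front.
std::string read_zip_entry(struct zip* archive, const char* name)
{
    struct zip_stat st;
    zip_stat_init(&st);
    if (zip_stat(archive, name, 0, &st) != 0)
        return std::string();                // entry not found

    std::string contents(st.size, '\0');     // st.size is the uncompressed size

    struct zip_file* f = zip_fopen(archive, name, 0);
    if (!f)
        return std::string();
    zip_fread(f, &contents[0], st.size);     // read the whole file in one call
    zip_fclose(f);
    return contents;
}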
I haven't used the libzip interface, but from what you write it seems to look very similar to a file interface: once you have a handle to the stream, you keep calling zip_fread() until the function returns an error (or, possibly, fewer than the requested bytes). The buffer you pass in is just a reasonably sized temporary buffer through which the data is communicated.
Personally, I would probably create a stream buffer for this so that once the file in the zip archive is set up it can be read using the conventional I/O stream methods. This would look something like this:
struct zipbuf: std::streambuf {
    zipbuf(???): file_(???) {}
private:
    zip_file* file_;
    enum { s_size = 8196 };
    char buffer_[s_size];

    int underflow() {
        int rc(zip_fread(this->file_, this->buffer_, s_size));
        this->setg(this->buffer_, this->buffer_,
                   this->buffer_ + std::max(0, rc));
        return this->gptr() == this->egptr()
            ? traits_type::eof()
            : traits_type::to_int_type(*this->gptr());
    }
};
With this stream buffer you should be able to create an std::istream and read the file into whatever structure you need:
zipbuf buf(???);
std::istream in(&buf);
...
Obviously, this code isn't tested or compiled. However, when you replace the ??? with whatever is needed to open the zip file, I'd think this should pretty much work.
Here is a routine I wrote that extracts data from a zip-stream and prints out a line at a time. This uses zlib, not libzip, but if this code is useful to you, feel free to use it:
/* compile with the -lz option in order to link in the zlib library */
#include <zlib.h>
#define Z_CHUNK 2097152
int unzipFile(const char *fName)
{
    z_stream zStream;
    char *zRemainderBuf = malloc(1);
    unsigned char zInBuf[Z_CHUNK];
    unsigned char zOutBuf[Z_CHUNK];
    char zLineBuf[Z_CHUNK];
    unsigned int zHave, zBufIdx, zBufOffset, zOutBufIdx;
    int zError;

    FILE *inFp = fopen(fName, "rbR");
    if (!inFp) { fprintf(stderr, "could not open file: %s\n", fName); return EXIT_FAILURE; }

    zStream.zalloc = Z_NULL;
    zStream.zfree = Z_NULL;
    zStream.opaque = Z_NULL;
    zStream.avail_in = 0;
    zStream.next_in = Z_NULL;

    zError = inflateInit2(&zStream, (15+32)); /* cf. http://www.zlib.net/manual.html */
    if (zError != Z_OK) { fprintf(stderr, "could not initialize z-stream\n"); return EXIT_FAILURE; }

    *zRemainderBuf = '\0';

    do {
        zStream.avail_in = fread(zInBuf, 1, Z_CHUNK, inFp);
        if (zStream.avail_in == 0)
            break;
        zStream.next_in = zInBuf;
        do {
            zStream.avail_out = Z_CHUNK;
            zStream.next_out = zOutBuf;
            zError = inflate(&zStream, Z_NO_FLUSH);
            switch (zError) {
                case Z_NEED_DICT:  { fprintf(stderr, "Z-stream needs dictionary!\n"); return EXIT_FAILURE; }
                case Z_DATA_ERROR: { fprintf(stderr, "Z-stream suffered data error!\n"); return EXIT_FAILURE; }
                case Z_MEM_ERROR:  { fprintf(stderr, "Z-stream suffered memory error!\n"); return EXIT_FAILURE; }
            }
            zHave = Z_CHUNK - zStream.avail_out;
            zOutBuf[zHave] = '\0';

            /* copy remainder buffer onto line buffer, if not NULL */
            if (zRemainderBuf) {
                strncpy(zLineBuf, zRemainderBuf, strlen(zRemainderBuf));
                zBufOffset = strlen(zRemainderBuf);
            }
            else
                zBufOffset = 0;

            /* read through zOutBuf for newlines */
            for (zBufIdx = zBufOffset, zOutBufIdx = 0; zOutBufIdx < zHave; zBufIdx++, zOutBufIdx++) {
                zLineBuf[zBufIdx] = zOutBuf[zOutBufIdx];
                if (zLineBuf[zBufIdx] == '\n') {
                    zLineBuf[zBufIdx] = '\0';
                    zBufIdx = -1;
                    fprintf(stdout, "%s\n", zLineBuf);
                }
            }

            /* copy some of line buffer onto the remainder buffer, if there are remnants from the z-stream */
            if (strlen(zLineBuf) > 0) {
                if (strlen(zLineBuf) > strlen(zRemainderBuf)) {
                    /* to minimize the chance of doing another (expensive) malloc, we double the length of zRemainderBuf */
                    free(zRemainderBuf);
                    zRemainderBuf = malloc(strlen(zLineBuf) * 2);
                }
                strncpy(zRemainderBuf, zLineBuf, zBufIdx);
                zRemainderBuf[zBufIdx] = '\0';
            }
        } while (zStream.avail_out == 0);
    } while (zError != Z_STREAM_END);

    /* close gzip stream */
    zError = inflateEnd(&zStream);
    if (zError != Z_OK) {
        fprintf(stderr, "could not close z-stream!\n");
        return EXIT_FAILURE;
    }
    if (zRemainderBuf)
        free(zRemainderBuf);
    fclose(inFp);

    return EXIT_SUCCESS;
}
With any streaming you should consider the memory requirements of your app.
A good buffer size is large, but you do not want to tie up too much memory, depending on your RAM requirements. A small buffer size will require you to call your read and write operations more times, which is expensive in terms of time. So you need to find a buffer size somewhere between those two extremes.
Typically I use a size of 4096 (4 KB), which is sufficiently large for many purposes. If you want, you can go larger. But at the worst-case size of 1 byte, you will be waiting a long time for your read to complete.
So to answer your question, there is no "right" size to pick. It is a choice you should make so that the speed of your app and the memory it requires are what you need.
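To make that concrete, here is a hedged sketch of the fixed-size-buffer loop with zip_fread (4096 bytes per call, appending into a std::string; opening the entry with zip_fopen is assumed to have happened already):
#include <zip.h>
#include <string>

// Reads an already-opened zip entry in 4 KB chunks until zip_fread reports
// end of data (0) or an error (-1).
std::string read_all_chunked(struct zip_file* f)
{
    std::string contents;
    char buf[4096];
    zip_int64_t n;
    while ((n = zip_fread(f, buf, sizeof buf)) > 0)
        contents.append(buf, static_cast<size_t>(n));
    // n == 0 means end of data, n < 0 means a read error occurred
    return contents;
}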

C++ code to find BSSID of associated network [duplicate]

Possible Duplicate:
Want to know the ESSID of wireless network via C++ in UBUNTU
Hello, I have written the following code, which is part of a project. It is used to find the ESSID of the currently associated network. But it has a flaw: it also displays the ESSID of a network with which I am not associated, i.e. if I try to associate with a wireless network and the attempt is unsuccessful (no DHCP offers are received), it will still display the ESSID with which I made the attempt.
Is it possible to find the BSSID of the currently associated wireless network, since that is the only way I can distinguish between associated and non-associated networks, e.g. with an ioctl call?
int main (void)
{
    int errno;
    struct iwreq wreq;
    CStdString result = "None";
    int sockfd;
    char * id;
    char ESSID[20];

    memset(&wreq, 0, sizeof(struct iwreq));

    if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
        fprintf(stderr, "Cannot open socket \n");
        fprintf(stderr, "errno = %d \n", errno);
        fprintf(stderr, "Error description is : %s\n", strerror(errno));
        return result;
    }
    CLog::Log(LOGINFO, "Socket opened successfully");

    FILE* fp = fopen("/proc/net/dev", "r");
    if (!fp)
    {
        // TBD: Error
        return result;
    }

    char* line = NULL;
    size_t linel = 0;
    int n;
    char* p;
    int linenum = 0;

    while (getdelim(&line, &linel, '\n', fp) > 0)
    {
        // skip first two lines
        if (linenum++ < 2)
            continue;

        p = line;
        while (isspace(*p))
            ++p;
        n = strcspn(p, ": \t");
        p[n] = 0;

        strcpy(wreq.ifr_name, p);
        id = new char[IW_ESSID_MAX_SIZE+100];
        wreq.u.essid.pointer = id;
        wreq.u.essid.length = 100;
        if (ioctl(sockfd, SIOCGIWESSID, &wreq) == -1) {
            continue;
        }
        else
        {
            strcpy(ESSID, id);
            return ESSID;
        }
        free(id);
    }
    free(line);
    fclose(fp);
    return result;
}
Note: Since this question seems to be duplicated in two places, I'm repeating my answer here as well.
You didn't mention whether you were using an independent basic service set or not (i.e., an ad-hoc network with no controlling access point), so if you're not trying to create an ad-hoc network, then the BSSID should be the MAC address of the local access point. The ioctl() constant you can use to access that information is SIOCGIWAP. The ioctl payload information will be stored inside of your iwreq structure at u.ap_addr.sa_data.
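A minimal sketch of that ioctl (assuming an AF_INET datagram socket like the one opened in the code above; the interface name is whatever you pulled from /proc/net/dev) would look like this, where an all-zero address typically indicates no association:
#include <linux/wireless.h>
#include <sys/ioctl.h>
#include <cstring>

// Queries the BSSID (access point MAC) of the given interface via SIOCGIWAP.
// Fills mac with 6 bytes and returns true only if the ioctl succeeds.
bool get_bssid(int sockfd, const char* ifname, unsigned char mac[6])
{
    struct iwreq wreq;
    memset(&wreq, 0, sizeof wreq);
    strncpy(wreq.ifr_name, ifname, IFNAMSIZ - 1);

    if (ioctl(sockfd, SIOCGIWAP, &wreq) == -1)
        return false;                          // not a wireless interface, or the query failed

    memcpy(mac, wreq.u.ap_addr.sa_data, 6);    // sa_data holds the BSSID
    return true;
}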