I'm trying to use statvfs to find the free space on a file system.
Here's the code:
const char* Connection::getDiskInfo()
{
    struct statvfs vfs;
    int nRet = statvfs( "/u0", &vfs );
    if( nRet ) return NULL;

    char* pOut = (char*)malloc( 256 );
    memset( pOut, 0, 256 );
    sprintf( pOut, "<disk letter='%s' total='%lu' free='%lu' totalfree='%lu'/>",
        "/", ( vfs.f_bsize * vfs.f_blocks ) / ( 1024 * 1024 ),
        ( vfs.f_bsize * vfs.f_bavail ) / ( 1024 * 1024 ),
        ( vfs.f_bsize * vfs.f_bfree ) / ( 1024 * 1024 ));
    return pOut;
}
In the debugger (NetBeans 6.9) I see the appropriate values for the statvfs struct:
f_bavail = 105811542
f_bfree = 111586082
f_blocks = 111873644
f_bsize = 4096
This should give me total=437006, but my output insists that total=2830. Clearly I'm doing something ignorant in my formatting or math.
If I add the line:
unsigned long x = ( vfs.f_bsize * vfs.f_blocks );
x evaluates to 2967912448 while the debugger shows me the appropriate values (see above).
system: Linux version 2.6.18-194.17.1.el5PAE
i386
I've read the other entries here referring to this function and they make it seem pretty straightforward. So where did I go astray?
What is the size of fsblkcnt_t? If it's 32-bit then it's an overflow problem and you simply need to temporarily use a 64-bit size during the calculation.
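For example, a minimal sketch of the fix (assuming fsblkcnt_t really is 32-bit, which matches your i386 system): promote one operand to a 64-bit type so the multiplication can't wrap, and use a matching format specifier:

unsigned long long total = ( (unsigned long long)vfs.f_bsize * vfs.f_blocks ) / ( 1024 * 1024 );
unsigned long long avail = ( (unsigned long long)vfs.f_bsize * vfs.f_bavail ) / ( 1024 * 1024 );
sprintf( pOut, "<disk letter='%s' total='%llu' free='%llu'/>", "/", total, avail );

The numbers bear this out: 4096 * 111873644 = 458234445824, which wraps to 458234445824 mod 2^32 = 2967912448 in 32-bit arithmetic, and 2967912448 / (1024 * 1024) is exactly the 2830 you're seeing.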
I would like to compress a bunch of data in a buffer and write it to a file such that it is gzip compatible. The reason for doing this is that I have multiple threads that can compress their own data in parallel and require a lock only when writing to the common output file.
I have some dummy code below, based on the zlib.h docs, for writing a gzip-compatible file, but I get gzip: test.gz: unexpected end of file when I try to decompress the output. Can anyone tell me what might be going wrong?
Thank you
#include <cassert>
#include <fstream>
#include <string.h>
#include <zlib.h>

int main()
{
    char compress_in[50] = "I waaaaaaaant tooooooo beeeee compressssssed";
    char compress_out[100];

    z_stream bufstream;
    bufstream.zalloc = Z_NULL;
    bufstream.zfree = Z_NULL;
    bufstream.opaque = Z_NULL;

    bufstream.avail_in = ( uInt )strlen(compress_in) + 1;
    bufstream.next_in = ( Bytef * ) compress_in;
    bufstream.avail_out = ( uInt )sizeof( compress_out );
    bufstream.next_out = ( Bytef * ) compress_out;

    int res = deflateInit2( &bufstream, Z_DEFAULT_COMPRESSION, Z_DEFLATED, 15 + 16, 8, Z_DEFAULT_STRATEGY );
    assert ( res == Z_OK );
    res = deflate( &bufstream, Z_FINISH );
    assert( res == Z_STREAM_END );
    deflateEnd( &bufstream );

    std::ofstream outfile( "test.gz", std::ios::binary | std::ios::out );
    outfile.write( compress_out, strlen( compress_out ) + 1 );
    outfile.close();

    return 0;
}
The length of the compressed data written to the output buffer is the space you provided for the output buffer minus the space remaining in the output buffer. So, sizeof( compress_out ) - bufstream.avail_out. Do:
outfile.write( compress_out, sizeof( compress_out ) - bufstream.avail_out );
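Note that strlen( compress_out ) is doubly wrong here: deflate output is binary data, so it may contain embedded zero bytes (truncating the write early) and is not guaranteed to be NUL-terminated at all. A corrected tail of the program, under the same assumptions as the original, would be:

size_t compressed_len = sizeof( compress_out ) - bufstream.avail_out; // bytes deflate produced

std::ofstream outfile( "test.gz", std::ios::binary | std::ios::out );
outfile.write( compress_out, compressed_len );
outfile.close();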
I'm trying to write a DNS resolver with user-supplied resolvers (just a text file with several IP addresses that can be used for querying) using the standalone ASIO/C++ library, and I have failed on every attempt to make the receiver work. None of the resolvers seem to be responding (udp::receive_from) to the query I'm sending. However, when I use the same resolver file with an external library like dnslib, it works like a charm, so the problem lies in my code. Here's the code I'm using to send data to the DNS servers.
struct DNSPktHeader
{
    uint16_t id{};
    uint16_t bitfields{};
    uint16_t qdcount{};
    uint16_t ancount{};
    uint16_t nscount{};
    uint16_t arcount{};
};
// dnsname, for example, is -> google.com
// dns_resolvers is a list of udp::endpoints of IPv4 addresses on port 53.
// ip is the final result
// returns 0 on success and a negative value on failure
int get_host_by_name( char const *dnsname, std::vector<udp::endpoint> const & dns_resolvers, OUT uint16_t* ip )
{
    uint8_t netbuf[128]{};
    char const *funcname = "get_host_by_name";
    uint16_t const dns_id = rand() % 2345; // warning!!! Simply for testing purposes

    DNSPktHeader dns_qry{};
    dns_qry.id = dns_id;
    dns_qry.qdcount = 1;
    dns_qry.bitfields = 0x8; // set the RD field of the header to 1

    // custom_htons sets the buffer pointed to by the second argument netbuf
    // to the htons of the first argument
    custom_htons( dns_qry.id, netbuf + 0 );
    custom_htons( dns_qry.bitfields, netbuf + 2 );
    custom_htons( dns_qry.qdcount, netbuf + 4 );
    custom_htons( dns_qry.ancount, netbuf + 6 );
    custom_htons( dns_qry.nscount, netbuf + 8 );
    custom_htons( dns_qry.arcount, netbuf + 10 );

    unsigned char* question_start = netbuf + sizeof( DNSPktHeader ) + 1;
    // creates the DNS question segment at netbuf's specified starting index
    int len = create_question_section( dnsname, (char**) &question_start, thisdns::dns_record_type::DNS_REC_A,
                                       thisdns::dns_class::DNS_CLS_IN );
    if( len < 0 ){
        fmt::print( stderr, "{}: {} ({})\n", funcname, dnslib_errno_strings[DNSLIB_ERRNO_BADNAME - 1], dnsname );
        return -EFAULT;
    }
    len += sizeof( DNSPktHeader );
    fmt::print( stdout, "{}: Submitting DNS A-record query for domain name ({})\n", funcname, dnsname );

    asio::error_code resolver_ec{};
    udp::socket udp_socket{ DNSResolver::GetIOService() };
    udp_socket.open( udp::v4() );

    // set a 5-second timeout on receive and reuse the address
    udp_socket.set_option( asio::ip::udp::socket::reuse_address( true ) );
    udp_socket.set_option( asio::detail::socket_option::integer<SOL_SOCKET, SO_RCVTIMEO>{ 5'000 } );
    udp_socket.bind( udp::endpoint{ asio::ip::make_address( "127.0.0.1" ), 53 } );

    std::size_t bytes_read = 0, retries = 1;
    int const max_retries = 10;
    asio::error_code receiver_err{};
    uint8_t receive_buf[0x200]{};
    udp::endpoint default_receiver{};

    do{
        udp::endpoint const & resolver_endpoint{ dns_resolvers[retries] };
        int bytes_sent = udp_socket.send_to( asio::buffer( netbuf, len ), resolver_endpoint, 0, resolver_ec );
        if( bytes_sent < len || resolver_ec ){
            fmt::print( stderr, "{}: (found {}, expected {})\n", funcname, bytes_sent, len );
            return -EFAULT;
        }
        // ======== the problem ==============
        bytes_read = udp_socket.receive_from( asio::buffer( receive_buf, sizeof( receive_buf ) ), default_receiver, 0,
                                              receiver_err );
        // bytes_read always returns 0
        if( receiver_err ){
            fmt::print( stderr, "{}\n\n", receiver_err.message() );
        }
    } while( bytes_read == 0 && retries++ < max_retries );
    //...
}
I have tried my best but it clearly isn't enough. Could you please take a look at this and help me figure out where the problem lies? It's my very first time using ASIO on any real-life project.
I don't know if this is relevant, but here's create_question_section:
int create_question_section( const char *dnsname, char** buf, thisdns::dns_record_type type, thisdns::dns_class class_ )
{
    char const *funcname = "create_question_section";
    if( dnsname[0] == '\0' ){ // Blank DNS name?
        fmt::print( stderr, "{}: Blank DNS name?\n", funcname );
        return -EBADF;
    }

    uint8_t len{};
    int index{};
    int j{};
    bool found = false;

    do{
        if( dnsname[index] != '.' ){
            j = 1;
            found = false;
            do{
                if( dnsname[index + j] == '.' || dnsname[index + j] == '\0' ){
                    len = j;
                    strncpy( *buf, (char*) &len, 1 );
                    ++( *buf );
                    strncpy( *buf, (char*) dnsname + index, j );
                    ( *buf ) += j;
                    found = true;
                    if( dnsname[index + j] != '\0' )
                        index += j + 1;
                    else
                        index += j;
                } else{
                    j++;
                }
            } while( !found && j < 64 );
        } else{
            fmt::print( stderr, "{}: DNS addresses can't start with a dot!\n", funcname );
            return -EBADF; // DNS addresses can't start with a dot!
        }
    } while( dnsname[index] );

    uint8_t metadata_buf[5]{};
    custom_htons( (uint16_t)type, metadata_buf + 1 );
    custom_htons( (uint16_t)class_, metadata_buf + 3 );
    strncpy( *buf, (char*) metadata_buf, sizeof(metadata_buf) );

    return sizeof( metadata_buf ) + index + 1;
}
There are at least two reasons why it's not working for you, and they all boil down to the fact that the DNS packet you send out is malformed.
This line
unsigned char* question_start = netbuf + sizeof( DNSPktHeader ) + 1;
sets the pointer into the buffer one position farther than you want. Instead of starting the encoded FQDN at position 12 (as indexed from 0), it starts at position 13. What that means is that the DNS server sees a zero-length domain name followed by some garbage record type and class, ignores the rest, and so decides not to respond to your query at all. Just get rid of the +1.
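That is:

unsigned char* question_start = netbuf + sizeof( DNSPktHeader );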
Another possible issue could be the encoding of the header fields with custom_htons(). I have no clue how it's implemented, so I cannot tell you whether it works correctly.
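For reference, a big-endian serializer matching the comment in your code would look something like this (just a sketch of what I'd expect, since I can't see your implementation):

void custom_htons( uint16_t value, uint8_t* out )
{
    out[0] = (uint8_t)( value >> 8 );    // most significant byte first (network order)
    out[1] = (uint8_t)( value & 0xFF );  // least significant byte second
}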
Furthermore, although not directly responsible for your observed behaviour, the following call to bind() will fail unless you run the binary as root (or with the appropriate capabilities on Linux), because you are trying to bind to a privileged port:
udp_socket.bind( udp::endpoint{ asio::ip::make_address( "127.0.0.1" ), 53 } );
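For a UDP client you don't need to bind at all; the OS assigns an ephemeral source port on the first send_to(). So the simplest fix is probably to drop the bind() line entirely:

udp_socket.open( udp::v4() );
// no bind(): the OS picks a free source port when send_to() is called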
Also, this dns_qry.bitfields = 0x8; doesn't do what you want. RD is bit 8 of the 16-bit flags field, so it should be dns_qry.bitfields = 0x100;.
Check RFC 1035 (and the RFCs that update it) for reference on how to form a valid DNS request.
Important note: I would strongly recommend not mixing C++ with C. Pick one; since you tagged C++ and are using ASIO and libfmt, stick with C++. Replace all your C-style casts with the appropriate C++ versions (static_cast, reinterpret_cast, etc.). Instead of C-style arrays, use std::array, don't use strncpy, and so on.
I have a small C++ program where the main process "creates data" and sends it to the child (fork) process, which should read that data. My problem is that at school my code works well, but on my own laptop both processes get stuck right after the program starts. Specifically, both of them are in the waiting channel do_msgrcv.
Here is my code:
#define VYROBA 1 // Manufacturer
#define PREPRAVA 2 // Transport

void manufacturer ( ) {
    static int count = 0;
    int rcv [ 2 ];
    while ( 1 ) {
        int snd [ 2 ] = { VYROBA, count };
        int ret = msgsnd ( glb_msg_id, &snd, sizeof ( int ), 0 );
        ret = msgrcv ( glb_msg_id, &rcv, sizeof ( int ), PREPRAVA, 0 );
        printf ( "Got crate\n" );
    }
}

void consumer ( ) {
    static int count = 0;
    int rcv [ 2 ];
    while ( 1 ) {
        int ret = msgrcv ( glb_msg_id, &rcv, sizeof ( int ), VYROBA, 0 );
        usleep ( 500000 );
        if ( ret < 0 ) {
            printf ( "Can't read message.\n" );
        }
        printf ( "Got product: %d\r\n", rcv [ 1 ] );
        fflush ( stdout );
        rcv [ 1 ]++;
        if ( rcv [ 1 ] == 10 ) {
            int snd [ 2 ] = { PREPRAVA, rcv [ 1 ] };
            ret = msgsnd ( glb_msg_id, &snd, sizeof ( int ), 0 );
        } else {
            ret = msgsnd ( glb_msg_id, &rcv, sizeof ( int ), 0 );
        }
    }
}
If it helps: at school we have Ubuntu 12.04, and I'm using Ubuntu 16.04.
Thanks for any help.
You are using a System V message queue, which has been supported by the Linux kernel since version 2.6, so by both Ubuntu versions you mention.
Now if you look at the manual, msgrcv() and msgsnd() use a message buffer, which should have this structure:
struct msgbuf {
long mtype; /* message type, must be > 0 */
char mtext[1]; /* message data */
};
As soon as you stop using this structure and instead use some manually managed buffer, you risk writing non-portable code (e.g. you have to get the sizes and padding right, take endianness into consideration, etc.). And this is certainly what happens here.
The message structure starts with a message type coded as a long, which you assume to be the same as an int (the first element of your array). But the C++ standard doesn't fix the size of int and long (only their minimum sizes): they may vary between platforms, compilers and CPU architectures. For instance:
If at university you run 32-bit Ubuntu, int and long are both 32 bits and have the same size. Your code works.
If you run 64-bit Ubuntu at home, your long is 64 bits, whereas your int is still 32 bits. So your buffers are too short, causing msgsnd() and msgrcv() to overflow them. This is undefined behaviour, not to mention that the message type gets corrupted.
Changing your code to use the right structure for the buffer should solve the problem.
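A minimal sketch of what that could look like (the struct and field names are hypothetical; adapt them to your data):

struct product_msg {
    long mtype;   /* message type: VYROBA or PREPRAVA, must be > 0 */
    int  count;   /* message data */
};

struct product_msg snd = { VYROBA, count };
/* the size argument counts only the payload, not the mtype field */
int ret = msgsnd ( glb_msg_id, &snd, sizeof ( snd.count ), 0 );

struct product_msg rcv;
ret = msgrcv ( glb_msg_id, &rcv, sizeof ( rcv.count ), VYROBA, 0 );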
Additional remark
By the way, your forking logic makes manufacturer() execute twice: in the original process, but also in the forked one once consumer() has finished!
if ( !fork ( ) ) {
    consumer ( );
}
manufacturer ( );
Better to use an else, so each function runs only in the process where it's supposed to, as shown below.
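For example (the same fork logic, with the else added):

if ( !fork ( ) ) {
    consumer ( );      // child process
} else {
    manufacturer ( );  // parent process
}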
I am using OpenSSL and trying to decrypt data which was encrypted using RSA_SSLV23_PADDING. The code is as follows:
BIO *pBPK=NULL;
RSA *pPrivKey;
pBPK = BIO_new_mem_buf ( ( void* ) strKey, -1 );
pPrivKey = PEM_read_bio_RSAPrivateKey ( pBPK, NULL, NULL, NULL );
int flen = RSA_size ( pPrivKey );
unsigned char* from = (unsigned char*)strData;
int maxSize = RSA_size ( pPrivKey );
unsigned char* to = new unsigned char[maxSize];
int res = RSA_private_decrypt ( flen, from, to, pPrivKey, RSA_SSLV23_PADDING );
But I am always getting res as -1. When I use RSA_PKCS1_PADDING or RSA_PKCS1_OAEP_PADDING then it works fine.
Decryption with RSA_SSLV23_PADDING not working
That's a padding scheme used for rollback-attack detection in SSLv3 and above. It's meant to be used in the context of SSL/TLS.
When I use RSA_PKCS1_PADDING or RSA_PKCS1_OAEP_PADDING then it works fine.
Right, these are modern RSA padding schemes.
int res = RSA_private_decrypt ( flen, from, to, pPrivKey, RSA_SSLV23_PADDING );
To use RSA_SSLV23_PADDING, I believe you have to call EVP_PKEY_CTX_set_rsa_padding on an EVP_PKEY_CTX*. See the EVP_PKEY_CTX_ctrl man pages for details.
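A rough sketch of that approach (untested; it assumes pPrivKey, from and flen are as in your code, and error checking is omitted):

EVP_PKEY *pkey = EVP_PKEY_new();
EVP_PKEY_assign_RSA( pkey, pPrivKey );  // pkey takes ownership of pPrivKey

EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new( pkey, NULL );
EVP_PKEY_decrypt_init( ctx );
EVP_PKEY_CTX_set_rsa_padding( ctx, RSA_SSLV23_PADDING );

size_t outlen = 0;
EVP_PKEY_decrypt( ctx, NULL, &outlen, from, flen );  // query required output size
unsigned char *out = (unsigned char*)OPENSSL_malloc( outlen );
int res = EVP_PKEY_decrypt( ctx, out, &outlen, from, flen );

EVP_PKEY_CTX_free( ctx );
EVP_PKEY_free( pkey );  // also frees pPrivKey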
You should probably avoid RSA_PKCS1_PADDING, and use RSA_PKCS1_OAEP_PADDING. For reading, see A bad couple of years for the cryptographic token industry.
I have found this function which uses libjpeg to write to a file:
int write_jpeg_file( char *filename )
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    /* this is a pointer to one row of image data */
    JSAMPROW row_pointer[1];
    FILE *outfile = fopen( filename, "wb" );

    if ( !outfile )
    {
        printf("Error opening output jpeg file %s!\n", filename );
        return -1;
    }
    cinfo.err = jpeg_std_error( &jerr );
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, outfile);

    /* Setting the parameters of the output file here */
    /* width, height, bytes_per_pixel, color_space and raw_image are globals set elsewhere */
    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = bytes_per_pixel;
    cinfo.in_color_space = color_space;
    /* default compression parameters, we shouldn't be worried about these */
    jpeg_set_defaults( &cinfo );
    /* Now do the compression .. */
    jpeg_start_compress( &cinfo, TRUE );
    /* like reading a file, this time write one row at a time */
    while( cinfo.next_scanline < cinfo.image_height )
    {
        row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
        jpeg_write_scanlines( &cinfo, row_pointer, 1 );
    }
    /* similar to read file, clean up after we're done compressing */
    jpeg_finish_compress( &cinfo );
    jpeg_destroy_compress( &cinfo );
    fclose( outfile );
    /* success code is 1! */
    return 1;
}
I actually need to write the JPEG-compressed image to a memory buffer, without saving it to a file, to save time. Could somebody give me an example of how to do that?
I have been searching the web for a while, but documentation is scarce, if it exists at all, and examples are also difficult to come by.
You can define your own destination manager quite easily. The jpeg_compress_struct contains a pointer to a jpeg_destination_mgr, which contains a pointer to a buffer, a count of space left in the buffer, and 3 pointers to functions:
init_destination (j_compress_ptr cinfo)
empty_output_buffer (j_compress_ptr cinfo)
term_destination (j_compress_ptr cinfo)
You need to fill in the function pointers before you make the first call into the jpeg library, and let those functions handle the buffer. If you create a buffer that is larger than the largest possible output that you expect, this becomes trivial; init_destination just fills in the buffer pointer and count, and empty_output_buffer and term_destination do nothing.
Here's some sample code:
std::vector<JOCTET> my_buffer;
#define BLOCK_SIZE 16384

void my_init_destination(j_compress_ptr cinfo)
{
    my_buffer.resize(BLOCK_SIZE);
    cinfo->dest->next_output_byte = &my_buffer[0];
    cinfo->dest->free_in_buffer = my_buffer.size();
}

boolean my_empty_output_buffer(j_compress_ptr cinfo)
{
    size_t oldsize = my_buffer.size();
    my_buffer.resize(oldsize + BLOCK_SIZE);
    cinfo->dest->next_output_byte = &my_buffer[oldsize];
    cinfo->dest->free_in_buffer = my_buffer.size() - oldsize;
    return true;
}

void my_term_destination(j_compress_ptr cinfo)
{
    // shrink the buffer to the number of bytes actually written
    my_buffer.resize(my_buffer.size() - cinfo->dest->free_in_buffer);
}

// Note: cinfo->dest must already point to a valid jpeg_destination_mgr
// (e.g. one you allocated yourself) before making these assignments.
cinfo->dest->init_destination = &my_init_destination;
cinfo->dest->empty_output_buffer = &my_empty_output_buffer;
cinfo->dest->term_destination = &my_term_destination;
There is a predefined function jpeg_mem_dest, defined in jdatadst.c. The simplest usage example:
unsigned char *mem = NULL;
unsigned long mem_size = 0;
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
jpeg_mem_dest(&cinfo, &mem, &mem_size);
// do compression
// use mem buffer
Do not forget to deallocate your buffer.
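For instance (a short sketch), after the compression calls:

jpeg_finish_compress(&cinfo);
jpeg_destroy_compress(&cinfo);
// mem now points to mem_size bytes of JPEG data
// ... use the buffer ...
free(mem);  // jpeg_mem_dest allocated it with malloc since mem was NULL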
I have tried Mark's solution, but on my platform it always gives a SEGMENTATION FAULT error when it executes
cinfo->dest->term_destination = &my_term_destination;
So I turned to the jpeglib source code (jdatadst.c) and found this:
jpeg_mem_dest (j_compress_ptr cinfo, unsigned char ** outbuffer, unsigned long * outsize)
just below the method jpeg_stdio_dest(). I tried it by simply filling in the address of the buffer (unsigned char*) and the address of the buffer size (unsigned long). The destination manager automatically allocates memory for the buffer, and the program needs to free that memory after use.
It runs successfully on my platform, a BeagleBone Black with the pre-installed Angstrom Linux. My libjpeg version is 8d.
All you need to do is pass a FILE-like object to jpeg_stdio_dest(). On POSIX systems, open_memstream() will give you a FILE* that writes to a growing memory buffer.
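A sketch of that approach (POSIX only, untested):

char *buf = NULL;
size_t buf_size = 0;
FILE *memfile = open_memstream(&buf, &buf_size);  // FILE* backed by memory

jpeg_stdio_dest(&cinfo, memfile);
// ... compress as usual ...

fclose(memfile);  // flushes; buf and buf_size now describe the JPEG data
// ... use buf ...
free(buf);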
unsigned char ***image_ptr;   // out-parameter: receives the row array
unsigned char* ptr;
unsigned char** image_buf = new unsigned char*[h];   // one pointer per row
for(int i=0;i<h;i++){
    image_buf[i] = new unsigned char[w*o];   // w pixels of o components each
}
int c = 0;
while (info.output_scanline < info.image_height) {
    ptr = image_buf[c];
    jpeg_read_scanlines(&info,&ptr,1);
    c++;
}
*image_ptr = image_buf;
This is all you need to read.
JSAMPROW row_pointer;
while (info.next_scanline < info.image_height) {
row_pointer = &image_buf[info.next_scanline][0];
(void) jpeg_write_scanlines(&info, &row_pointer, 1);
}
And this is all you need to write.