Write binary data to stdout in Crystal

I'm trying to output binary data to stdout (to serve some dynamic binary data using Kemal).
Here is a test:
size = File.size("./img.png")
slice = Slice(UInt8).new(size)
File.open("./img.png") do |file|
  file.read_fully(slice)
end
I tried without success:
slice
slice.hexdump
slice.hexstring
slice.to_a
slice.to_s
slice.to_unsafe.value

You can just use IO#write(Slice):
STDOUT.write(slice)
Since #write is defined on IO, the same call works on any IO, including an HTTP server response.

Related

Streaming decompression of data in a vector

I need to do the following procedure:
Compress an input text into an array.
Split the compressed output into multiple pieces of approximately the same length and store them in a vector.
Decompress the pieces using streaming decompression.
Is it possible to do that? Consider that in my case the size of each block is fixed and independent of the compression scheme.
In this example, the decompression function returns the size of the next block; I wonder whether that is tied to the compression scheme, i.e. whether it means you cannot take an arbitrary sub-array of the full compressed buffer and decompress it on its own.
I need to use zstd, no other compression algorithms.
Here is what I tried so far.
// std::vector<std::string_view> _content_compressed passed as parameter
ZSTD_DStream* const dstream = ZSTD_createDStream();
ZSTD_initDStream(dstream);
std::vector<char*> vec;
for (auto el : _content_compressed)
{
    char* decompressed = new char[1000];
    ZSTD_inBuffer input = { el.data(), el.size(), 0 };
    ZSTD_outBuffer output = { decompressed, _decompressed_size, 0 };
    std::size_t toRead = ZSTD_decompressStream(dstream, &output, &input);
    vec.push_back(decompressed);
}
The problem is that decompressed doesn't contain the decompressed value at the end.
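Streaming decompression does support this, as long as the pieces are consecutive slices of a single compressed stream and are fed to the same ZSTD_DStream in order; what you cannot do is pick an arbitrary sub-array and decompress it in isolation. The likely issue in the attempt above is that a single ZSTD_decompressStream call may return before consuming the whole input chunk (for instance when the output buffer fills up), so it has to be called in a loop. A minimal sketch of that loop, assuming exactly this chunk layout (decompress_chunks is a hypothetical helper, not part of the zstd API):
#include <zstd.h>
#include <string>
#include <string_view>
#include <vector>

// Feed ordered chunks of one compressed stream to a single ZSTD_DStream,
// appending the decompressed bytes to `out`. Returns false on error.
bool decompress_chunks(const std::vector<std::string_view>& chunks, std::string& out)
{
    ZSTD_DStream* const dstream = ZSTD_createDStream();
    if (dstream == nullptr) return false;
    ZSTD_initDStream(dstream);

    std::vector<char> buffer(ZSTD_DStreamOutSize()); // recommended output buffer size

    for (const auto& chunk : chunks) {
        ZSTD_inBuffer input = { chunk.data(), chunk.size(), 0 };
        // Keep calling until this chunk's input is fully consumed; one call
        // may stop early whenever the output buffer fills up.
        while (input.pos < input.size) {
            ZSTD_outBuffer output = { buffer.data(), buffer.size(), 0 };
            const std::size_t ret = ZSTD_decompressStream(dstream, &output, &input);
            if (ZSTD_isError(ret)) {
                ZSTD_freeDStream(dstream);
                return false;
            }
            out.append(buffer.data(), output.pos);
        }
    }
    ZSTD_freeDStream(dstream);
    return true;
}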

Read/write large object from postgres using pqxx

The main pqxx API works with columns as text. So how do you access binary data in large objects (LOBs) using the pqxx library?
There are a couple of ways. The first is to translate the data to/from bytea and work through the common pqxx API. If you know how to work with bytea, this is probably your way. Here is an example of how to insert a string as a LOB, in plain SQL, no C++ code:
select lo_from_bytea(0, 'this is a test'::bytea);
...
select encode(lo_get(190850), 'escape'); -- here 190850 is the OID of the LOB created by the first line.
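To pull such data back through the common API in C++, something like the following should work (a sketch against the libpqxx 6-era API, where pqxx::binarystring decodes a bytea column; read_lob_as_bytea is a hypothetical helper):
#include <pqxx/pqxx>
#include <string>

// Read a large object back as bytea through the regular query API.
std::string read_lob_as_bytea(pqxx::connection &conn, pqxx::oid lob_oid)
{
    pqxx::work tran(conn);
    pqxx::result r = tran.exec("select lo_get(" + std::to_string(lob_oid) + ")");
    pqxx::binarystring blob(r[0][0]); // decodes the bytea escaping
    return std::string(blob.str());
}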
The other option is to use the iostream API provided by the pqxx library. There are not many examples of how to use it, so here we go:
// write lob
auto conn = std::make_shared<pqxx::connection>(url);
auto tran = std::make_shared<pqxx::work>(*conn);
auto stream = std::make_shared<pqxx::olostream>(*tran, oid);
stream->write(data, size);
stream->flush();
stream.reset();
tran->commit();
// read lob
stream = std::make_shared<pqxx::ilostream>(*tran, oid);
...
ssize_t get_chunk(std::shared_ptr<pqxx::ilostream> stream, char *buf, size_t max_len)
{
    size_t len = 0;
    while (!stream->eof() && len < max_len && stream->get(buf[len])) {
        len++;
    }
    return (len > 0 || !stream->eof()) ? len : -1;
}
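For completeness, a hypothetical way to drain the whole object with that helper (read_lob and the 4 KiB buffer are illustrative choices, not part of the pqxx API):
std::string read_lob(pqxx::work &tran, pqxx::oid lob_oid)
{
    auto stream = std::make_shared<pqxx::ilostream>(tran, lob_oid);
    std::string result;
    char buf[4096];
    ssize_t n;
    while ((n = get_chunk(stream, buf, sizeof buf)) > 0)
        result.append(buf, static_cast<size_t>(n));
    return result;
}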
Note: there is a bug in pqxx::ilostream: you can get truncated data if a 0xff byte in the data lands on the internal buffer boundary, where it is mistakenly treated as the EOF character. The bug was fixed in February 2020, but the fix has not yet reached all distributions.

Print full data of char* pointer?

I'm new to C++ and I want to log the data behind a char* pointer before writing it to a file, but the output shows only the first 5 bytes. I don't know what is wrong with it. I'm using __android_log_print to write the log.
This is the code that writes the data to the file:
Logcat shows:
WriteUnbuffered filename = /data/user/0/com.abc.helloleveldb/databases/leveldb/000003.log
WriteUnbuffered size = 37, data before encrypt = ��� -> only 5 bytes.
But the data written to the file is complete: two pairs of values (test: okok and test1: hello).
If you have any suggestions, please help me. Thank you.
The content of the buffer at data looks to be binary data, which is unsuitable to pass to a %s format: %s stops at the first NUL (0x00) byte, which is why only the first few bytes appear. I would suggest iterating through every char (byte) individually and logging the hex value instead, something like this:
// assuming LOGD wraps __android_log_print with a fixed tag
for (size_t i = 0; i < size; ++i) {
    LOGD("WriteUnbuffered data[%zu]=0x%02x", i, data[i] & 0xff);
}

How to delete data from a file based on an offset?

I am currently writing a file-page-manager program that writes, appends, and reads pages in binary files. For the write function, I have to delete the whole content of the specified page and write new content, which means deleting data from the file within a specific range, e.g. from position 30 to position 4096.
If there is no more data after position 4096, you can use truncate(2) to shrink the file to 30 bytes.
If there is more data after byte 4096, you can first overwrite the data starting at position 30 with the data that follows byte 4096, and then truncate the file to [original_filesize - (4096-30)] bytes.
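A minimal POSIX sketch of that shift-and-truncate approach (remove_range is a hypothetical helper, and error handling is kept to a minimum):
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <vector>

// Remove the byte range [from, to) from the file at `path` by copying the
// tail down over the hole and truncating the remainder.
bool remove_range(const char *path, off_t from, off_t to)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) return false;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return false; }

    std::vector<char> buf(4096);
    off_t read_pos = to, write_pos = from;
    ssize_t n;
    while ((n = pread(fd, buf.data(), buf.size(), read_pos)) > 0) {
        pwrite(fd, buf.data(), n, write_pos);
        read_pos += n;
        write_pos += n;
    }
    bool ok = (n == 0) && ftruncate(fd, st.st_size - (to - from)) == 0;
    close(fd);
    return ok;
}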
The only way to delete data in the middle of a file is to mark it as deleted: use some value to indicate that the section(s) are no longer valid. Otherwise, copy the sections you want to keep to a new file.
So easy with std::string
Follow the steps:
read the file and extract it into a std::string:
std::ifstream input_file_stream( "file", std::ios_base::binary );
const unsigned size_of_file = input_file_stream.seekg( 0, std::ios_base::end ).tellg();
input_file_stream.seekg( 0, std::ios_base::beg ); // rewind
std::string whole_file( size_of_file, ' ' ); // reserve space for the whole file
input_file_stream.read( &*whole_file.begin(), size_of_file );
then erase what you want:
// delete with offset
whole_file.erase( 10, // size_type index
200 ); // size_type count
and finally write to a new file:
// write it to the new file:
std::ofstream output_file_stream( "new_file", std::ios_base::binary );
output_file_stream.write( &*whole_file.begin(), whole_file.size() );
output_file_stream.close();
input_file_stream.close();

Store array in C permanently

Suppose I write a program in C/C++ and create an array of a certain size. I want to keep it permanently, even after the computer is switched off, and access it again later. Is there a way to do that? If so, let me know, and also how to access the array after saving it.
Save the data to a file and load it on program start.
Say you allocate a buffer of MAX bytes for a string:
char * str = (char *) malloc( MAX );
At some point, you fill it with some data:
strcpy( str, "Useful data in the form of a string" );
Finally, at the program's end, you save it to a file:
FILE * f = fopen( "data.bin", "wb" );
fwrite( str, 1, MAX, f );
fclose( f );
At the beginning of the next execution, you'd like to load it:
char * str = (char *) malloc( MAX );
FILE * f = fopen( "data.bin", "rb" );
fread( str, 1, MAX, f );
fclose( f );
This solution has a few shortcomings: for example, the data will only be usable on machines with the same layout (endianness, type sizes) as the computer on which you saved it. If you want portability, you should use a text format such as XML: http://www.jclark.com/xml/expat.html
Hope this helps.
You could use a memory-mapped file and use offsets into the mapping in place of pointers. You would have to implement your own dynamic block allocation management inside the memory-mapped file.
Using offsets is less efficient than raw pointers, but you can load and save the data structure in a snap.
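A minimal sketch of the offset idea (POSIX mmap; the Node layout and the fixed 4 KiB arena are illustrative assumptions, and error checks are omitted):
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

struct Node {
    int32_t  value;
    uint32_t next_off; // offset of the next Node from the mapping base; 0 = none
};

int main()
{
    int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, 4096); // fixed-size arena, enough for this sketch
    char *base = static_cast<char *>(
        mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    // Resolve an offset to a Node inside the mapping.
    auto at = [base](uint32_t off) { return reinterpret_cast<Node *>(base + off); };

    at(0)->value = 1;
    at(0)->next_off = sizeof(Node);  // node 0 points at node 1
    at(sizeof(Node))->value = 2;
    at(sizeof(Node))->next_off = 0;  // end of list

    munmap(base, 4096); // with MAP_SHARED the stores persist in data.bin
    close(fd);
}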
It is possible to avoid offsets and use real pointers instead. To do this, you record the mapping's base address in the file when you close it. When you load the file again, you adjust every pointer in the data structure by the difference between the old base address and the new one.
If the data structure is small, you can do this in one pass when the file is mapped into memory. If it is big, you can do it lazily and only fix the pointers of a struct when you access it for the first time.
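A sketch of that pointer fix-up, under the same assumptions as the previous example (Header and adjust are hypothetical names):
#include <cstddef>
#include <cstdint>

struct Node   { int value; Node *next; };
struct Header { std::uintptr_t saved_base; }; // mapping address recorded at save time

// Recompute one saved pointer after the file is remapped at new_base:
// the offset from the old base stays valid, the absolute address does not.
Node *adjust(Node *saved, std::uintptr_t saved_base, char *new_base)
{
    if (saved == nullptr) return nullptr;
    std::ptrdiff_t offset = static_cast<std::ptrdiff_t>(
        reinterpret_cast<std::uintptr_t>(saved) - saved_base);
    return reinterpret_cast<Node *>(new_base + offset);
}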