I have a weird problem.
I added one field to my protobuf like this:
{
optional uint32 avg_body_length = 90;
optional uint32 pub_body_length = 95; //new field
}
My program (a service that runs continuously) updates pub_body_length and periodically writes the message to storage.
After several hours, I found that the size of the serialized protobuf had grown rapidly.
By analyzing the serialized bytes manually, I found that the field pub_body_length appears thousands of times, always with the same value.
I use C++ and protoc version 2.4.1.
Does anyone have any clue?
Related question:
Let's say I have compiled an application (Receiver) with the following proto file:
syntax = "proto3";
message Control {
bytes version = 1;
uint32 id = 2;
bytes color = 3;
}
and I have another application (Transmitter) which initially had the same proto file, but after an update a new field was added:
syntax = "proto3";
message Control {
bytes name = 1;
uint32 id = 2;
bytes color = 3;
uint32 color_id = 4;
}
I have seen that if the Receiver app parses the message, changes some data and then serializes it back, the added fields coming from the Transmitter app are dropped.
I need a way to change the id field by directly accessing the raw bytes, without having to parse/serialize the whole message. Is that possible?
This is needed because the Control message has some "header" fields that I know will never change, while others can be added/changed in the Transmitter app's proto due to an app update.
I have seen: https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.io.coded_stream
but I was not able to modify an existing byte stream, and ReadString could not work out the string length on its own.
Thanks in advance
I don't think there is an official way to do it. You could do it by hand, following protobuf's encoding documentation (https://developers.google.com/protocol-buffers/docs/encoding#structure).
Basically you would do this:
start decoding at the very first byte
keep decoding field by field until you reach the field number of the id
identify the bytes holding the id value and replace them with your new (encoded!) id
This is bad for several reasons. Most importantly, your code has to know details about the message structure and content (field number and data type of your id), and this is exactly what you want to avoid when using protocol buffers (you always need some info from the .proto files).
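For illustration only, here is a minimal sketch of that hand-patching approach. It walks the wire format by hand; the helper names are invented for this example, error handling is minimal, and it assumes the id is a top-level varint field (note that proto3 omits zero-valued scalars entirely, so the field may simply be absent):
#include <cstdint>
#include <stdexcept>
#include <string>

// Read a base-128 varint starting at pos and advance pos past it.
uint64_t ReadVarint(const std::string& buf, size_t& pos)
{
    uint64_t result = 0;
    int shift = 0;
    while (pos < buf.size())
    {
        uint8_t byte = static_cast<uint8_t>(buf[pos++]);
        result |= static_cast<uint64_t>(byte & 0x7F) << shift;
        if ((byte & 0x80) == 0)
            return result;
        shift += 7;
    }
    throw std::runtime_error("truncated varint");
}

// Encode a value as a base-128 varint.
std::string WriteVarint(uint64_t value)
{
    std::string out;
    do
    {
        uint8_t byte = value & 0x7F;
        value >>= 7;
        if (value != 0)
            byte |= 0x80;
        out.push_back(static_cast<char>(byte));
    } while (value != 0);
    return out;
}

// Find the first varint field with the given number and splice in a new value.
bool PatchVarintField(std::string& buf, uint32_t field_number, uint64_t new_value)
{
    size_t pos = 0;
    while (pos < buf.size())
    {
        uint64_t key = ReadVarint(buf, pos);   // key = (field_number << 3) | wire_type
        uint32_t field = static_cast<uint32_t>(key >> 3);
        uint32_t wire_type = static_cast<uint32_t>(key & 7);
        size_t value_start = pos;
        if (wire_type == 0)                    // varint
            ReadVarint(buf, pos);
        else if (wire_type == 1)               // fixed64
            pos += 8;
        else if (wire_type == 2)               // length-delimited
        {
            uint64_t len = ReadVarint(buf, pos);
            pos += static_cast<size_t>(len);
        }
        else if (wire_type == 5)               // fixed32
            pos += 4;
        else
            return false;                      // groups etc. are not handled here
        if (field == field_number && wire_type == 0)
        {
            // The new varint may be shorter or longer, so splice rather than overwrite in place.
            buf.replace(value_start, pos - value_start, WriteVarint(new_value));
            return true;
        }
    }
    return false;
}
Calling PatchVarintField(wire, 2, new_id) would patch the id above, but it hard-codes exactly the knowledge (field number, wire type) that the paragraph above warns about.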
With proto2 syntax, the protobuf C++ library preserves unknown fields, so that when you re-encode the message they remain. Unfortunately this feature (like many others) was removed in the proto3 syntax.
One workaround could be to do it this way:
Set only the new id value in the Receiver message and encode it.
Append this data after the original binary data.
This relies on the protobuf rule that concatenating encoded messages is equivalent to merging them, so the value appended last wins for scalar fields.
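For example, a minimal sketch of that workaround using the generated Control class from the Receiver's .proto (new_id and original_bytes are placeholders):
// Patch `id` without touching the unknown Transmitter-only fields.
Control patch;
patch.set_id(new_id);                  // the value you want to write
std::string wire = original_bytes;     // raw bytes received from the Transmitter
wire += patch.SerializeAsString();     // concatenation merges; the later id wins
// `wire` can now be forwarded; the fields unknown to the Receiver are still in the front part.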
Hmm, actually reading the issue report linked above, it seems that you can turn on unknown field preservation in protobuf version 3.5 and newer.
Just deserialize the entire message and map it onto the new message. That is the cleanest way. You do not have a lot of data and probably no real-time requirements. Create a mapper and do not overthink the problem.
I've got a project using Crypto++.
Crypto++ is its own project which builds into a static lib.
Aside from that, I have another large project using some of the Crypto++ classes and processing various algorithms, which also builds into a static lib.
Two of the functions are these:
long long MyClass::EncryptMemory(std::vector<byte> &inOut, char *cPadding, int rounds)
{
    typedef std::numeric_limits<char> CharNumLimit;
    char sPadding = 0;
    //Calculates padding and returns value as provided type
    sPadding = CalcPad<decltype(sPadding)>(reinterpret_cast<MyClassBase*>(m_P)->BLOCKSIZE, static_cast<int>(inOut.size()));
    //Push random chars as padding, we never care about padding's content so it doesn't matter what the padding is
    for (auto i = 0; i < sPadding; ++i)
        inOut.push_back(sRandom(CharNumLimit::min(), CharNumLimit::max()));
    std::size_t nSize = inOut.size();
    EncryptAdvanced(inOut.data(), nSize, rounds);
    if (cPadding)
        *cPadding = sPadding;
    return nSize;
}
//Removing the padding is the responsibility of the caller.
//Nevertheless the string is encrypted with padding
//and should here be the right string with a little padding
long long MyClass::DecryptMemory(std::vector<byte> &inOut, int rounds)
{
    DecryptAdvanced(inOut.data(), inOut.size(), rounds);
    return inOut.size();
}
Where EncryptAdvanced and DecryptAdvanced pass the arguments to the Crypto++ object.
//...
AdvancedProcessBlocks(bytePtr, nullptr, bytePtr, length, 0);
//...
These functions have worked flawlessly so far; no modifications have been applied to them for months.
The logic around them has evolved, though the calls and data passed to them did not change.
The data being encrypted / decrypted is rather small but has a dynamic size, which is being padded if (datasize % BLOCKSIZE) has a remainder.
Example: AES Blocksize is 16. Data is 31. Padding is 1. Data is now 32.
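For clarity, the padding rule works out like this (a minimal sketch; the CalcPad helper in the code above presumably does something equivalent, and the name below is made up):
// Pad dataSize up to the next multiple of blockSize, e.g. 31 with a block size of 16 -> pad 1 -> 32.
std::size_t CalcPadding(std::size_t dataSize, std::size_t blockSize)
{
    std::size_t remainder = dataSize % blockSize;
    return remainder == 0 ? 0 : blockSize - remainder;
}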
After encrypting and before decrypting, the string is the same - as in the picture.
Running all this in debug mode apparently works as intended. Even when running this program on another computer (with VS installed for DLLs) it shows no difference. The data is correctly encrypted and decrypted.
Trying to run the same code in release mode results in a totally different encrypted string, plus it does not decrypt correctly - "trash data" is decrypted. The wrongly encrypted or decrypted data is consistent - always the same trash is decrypted. The key/password and the rounds/iterations are the same all the time.
Additional info: the data is saved in a file (ios_base::binary) and correctly processed in debug mode, from two different programs in the same solution using the same static library (or libraries).
What could be the cause of this Debug/Release problem?
I re-checked the git history a couple of times, debugged for days through the code, yet I cannot find any possible cause for this problem. If any information - aside from a (here rather impossible) MCVE is needed, please leave a comment.
Apparently this is a bug in Crypto++. The minimum key length of Rijndael/AES is set to 8 instead of 16. Using an invalid key length of 8 bytes causes an out-of-bounds access to the in-place array of Rcon values. This 8-byte key length is currently reported as valid and has to be fixed in Crypto++.
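Until that fix lands, a defensive check like this sketch can reject the bad key length before it ever reaches the cipher (the function is made up; 16, 24 and 32 bytes are the key lengths AES actually supports):
// Do not rely on the affected Crypto++ build reporting the key length as valid.
bool IsSafeAesKeyLength(std::size_t lengthInBytes)
{
    return lengthInBytes == 16 || lengthInBytes == 24 || lengthInBytes == 32;
}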
See this issue on GitHub for more information (ongoing conversation).
I sincerely apologize if this has been asked before, but I was unable to find a suitable answer similar to my current situation. I am developing an app with MoSync, using MAUI because it gives the same appearance across all platforms. I am running into issues with understanding MAHandles, as well as how to go about sending the SQLite information to a web address. The SQLite commands will then be converted to MySQL commands using a RedBean PHP script and sent to the permanent database.
My biggest concerns are 2 items:
1. Declaring connections that are usable through MAHandles (I have already gotten the SQLite commands working without using MAHandles; however, declaring the database address in resources.lstx still evades me)
2. Declaring MAHandles in general.
Also, I understand that strings are much more effective, but I disregard that because of the age of MAUI; its capabilities appear much smoother when using char arrays.
I can provide additional clarification if needed to help speed this process up.
Thank you ahead of time, and hopefully this will help others trying their hand at MoSync's immaculate product.
I have no experience with SQLite whatsoever, but I'm assuming handling SQLite commands is the job of your server-side application. To be clear, you are sending SQLite commands from your mobile app to a server-side app via a URL, correct? If you need help on this you should search "CGI". CGI is essentially a way to execute a server-side application with arguments via an http:// request.
This means your app should have a manager that constructs a URL with the right SQLite commands based on the input events sent to your mobile app (buttons, text fields, etc).
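Purely as an illustration of that idea (the endpoint URL and parameter name below are invented, and a real implementation must URL-encode the query before sending it):
#include <sstream>
#include <string>

// Build a GET-style URL that carries an SQL statement as a query parameter.
std::string BuildCommandUrl(const std::string& table, const std::string& value)
{
    std::ostringstream url;
    url << "http://example.com/receive.php"   // hypothetical RedBean PHP endpoint
        << "?query=INSERT INTO " << table
        << " VALUES ('" << value << "')";     // must be URL-encoded in real code
    return url.str();
}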
As far as Mosync goes, MAHandles can be used for many things including downloading.
Take a look at the MAUtil::DownloadListener class on Mosync's doxygen pages.
You will see that there are full descriptions of 5 pure virtual functions that you will need to implement.
The bulk of your code will probably be in finishedDownloading( Downloader* dl, MAHandle data ). It is here that the MAHandle "data" will point to the beginning of the data segment that you downloaded.
I read my data into a char* since I am downloading text.
Here's a snippet:
void MainScreen::finishedDownloading( Downloader* dl, MAHandle data )
{
    char* mData = new char[ maGetDataSize( data ) + 1 ];
    memset( mData, 0, maGetDataSize( data ) + 1 );
    maReadData( data, mData, 0, maGetDataSize( data ) );
    // Destroy the store
    maDestroyObject( data );
    // Do something with mData;
    delete[] mData; // free the buffer once you are done with it
}
Here's one example of setting the font of NativeUI::Widget text using an MAHandle:
MAHandle font = maFontLoadDefault( FONT_TYPE_SERIF |
                                   FONT_TYPE_MONOSPACE |
                                   FONT_STYLE_NORMAL, 0, Dimensions::DIM_LIST_ELEM_FONT_SIZE );
ListViewItem* items = new ListViewItem();
items->setFont( font );
This is my first post here, so please bear with me.
I have searched high and low on the internet for an answer, but I've not been able to resolve my issue, so I have decided to write a post here.
I am trying to write (append) to a JSON array in a file using C++ and Jzon, at a rate of one write per second. The JSON file is initially written by a "Prepare" function. Another function is then called every second to add an array to the JSON file and append a new object to that array.
I have tried many things, most of which resulted in all sorts of issues. My latest attempt gave me the best results and this is the code that I have included below. However, the approach I took is very inefficient as I am writing an entire array every second. This is having a massive hit on CPU utilisation as the array grows, but not so much on memory as I had first anticipated.
What I really would like to be able to do is to append to an existing array contained in a JSON file on disk, line by line, rather than having to clear the entire array from the JSON object and rewriting the entire file, each and every second.
I am hoping that some of the geniuses on this website will be able to point me in the right direction.
Thank you very much in advance.
Here is my code:
//Create some object somewhere at the top of the cpp file
Jzon::Object jsonFlight;
Jzon::Array jsonFlightPath;
Jzon::Object jsonCoordinates;
int PrepareFlight(const char* jsonfilename) {
    //...SOME PREPARE FUNCTION STUFF GOES HERE...

    //Add the Flight Information to the jsonFlight root JSON Object
    jsonFlight.Add("Flight Number", flightnum);
    jsonFlight.Add("Origin", originicao);
    jsonFlight.Add("Destination", desticao);
    jsonFlight.Add("Pilot in Command", pic);

    //Write the jsonFlight object to a .json file on disk. Filename is passed in as a param of the function.
    Jzon::FileWriter::WriteFile(jsonfilename, jsonFlight, Jzon::NoFormat);
    return 0;
}
int UpdateJSON_FlightPath(ACFT_PARAM* pS, const char* jsonfilename) {
    //Add the current returned coordinates to the jsonCoordinates Jzon object
    jsonCoordinates.Add("altitude", pS->altitude);
    jsonCoordinates.Add("latitude", pS->latitude);
    jsonCoordinates.Add("longitude", pS->longitude);

    //Add the coordinates to the flight path, then clear the coordinates.
    jsonFlightPath.Add(jsonCoordinates);
    jsonCoordinates.Clear();

    //Now add the entire flight path array to the jsonFlight object.
    jsonFlight.Add("Flightpath", jsonFlightPath);

    //Write the jsonFlight object to a JSON file on disk.
    Jzon::FileWriter::WriteFile(jsonfilename, jsonFlight, Jzon::NoFormat);

    //Remove the entire jsonFlightPath array from the jsonFlight object to avoid duplication next time the function executes.
    jsonFlight.Remove("Flightpath");
    return 0;
}
You can certainly do "flat file" storage yourself, but this is a symptom of needing a database: something very light like SQLite, or mid-weight and open-source like MySQL, Firebird, or PostgreSQL.
But as to your question:
1) Leave the closing ] bracket off, and just keep the file open and appending (see the sketch below) -- but if you don't close the file correctly, it will be damaged and need repair to be readable.
2) Your current option -- writing a complete file each time -- isn't safe from data loss either, as the moment you "open to overwrite" you lose all data previously stored in the file. The workaround here, is to rename the old file as a backup before you start writing.
You should also make backup copies of your file with the first option (say at daily intervals); otherwise data loss is likely to occur eventually -- on Ctrl-C, power loss, program error or system crash.
Of course, if you use any of SQLite, MySQL, Firebird or PostgreSQL, all the data-integrity problems will be handled for you.
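A rough sketch of option 1, deliberately using plain ofstream rather than Jzon (the function names are made up, and the Prepare step would have to leave the file ending with "Flightpath":[ and no closing brackets):
#include <fstream>

// Append one coordinate object to the still-open JSON array on disk.
void AppendCoordinate(const char* jsonfilename, double alt, double lat, double lon, bool first)
{
    std::ofstream out(jsonfilename, std::ios::app);
    if (!first)
        out << ",";                        // separator between array elements
    out << "{\"altitude\":" << alt
        << ",\"latitude\":" << lat
        << ",\"longitude\":" << lon << "}";
}

// Called once, when the flight ends, to make the file valid JSON again.
void CloseFlightPath(const char* jsonfilename)
{
    std::ofstream out(jsonfilename, std::ios::app);
    out << "]}";                           // terminate the array and the root object
}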
I would like to open a small video file and map every frame in memory (to apply some custom filter). I don't want to handle the video codec myself; I would rather let a library do that for me.
I've tried to use DirectShow with the SampleGrabber filter (using this sample http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong.
I've pasted part of my code below (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the first 25 frames as expected...
[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);

pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);

for( int i = 0 ; i < 25 ; ++i )
{
    pControl->Run(); // Run the graph.
    pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

    char *pBuffer = new char[cbBuffer];
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);

    AM_MEDIA_TYPE mt;
    hr = pGrabber->GetConnectedMediaType(&mt);
    VIDEOINFOHEADER *pVih;
    pVih = (VIDEOINFOHEADER*)mt.pbFormat;

    [...]
}
[...]
Is there somebody with video software experience who can advise me on the code, or suggest a simpler library?
Thanks
Edit:
The MSDN links seem not to work (see the bug).
Currently these are the most popular video frameworks available on Win32 platforms:
Video for Windows: old Windows framework dating back to the Win95 era, but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VfW codec has been installed.
DirectShow: the standard WinXP framework; it can basically load every format you can play with Windows Media Player. Rather difficult to use.
Ffmpeg: more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with VLC), even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay, which ships with it, or by other implementations in open-source software. Anyway, I think it's still much easier to use than DirectShow (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well here (at the moment the link is down, hopefully not dead).
QuickTime: the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed as well as the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually available only for QuickTime). It shouldn't be too difficult to implement.
Gstreamer: a more recent open-source framework. I don't know much about it; I guess it wraps some of the other systems (but I'm not sure).
All of these frameworks have been implemented as backends in OpenCV's highgui, except for DirectShow. The default framework for Win32 OpenCV is VfW (and it is thus only able to open some AVI files); if you want to use the others you must download the CVS version instead of the official release and still do some hacking on the code, and it is not too complete anyway -- for example, the FFmpeg backend doesn't allow seeking in the stream.
If you want to use QuickTime with OpenCV this can help you.
I have used OpenCV to load video files and process them. It's also handy for many types of video processing including those useful for computer vision.
Using the "Callback" model of SampleGrabber may give you better results. See the example in Samples\C++\DirectShow\Editing\GrabBitmaps.
There's also a lot of info in Samples\C++\DirectShow\Filters\Grabber2\grabber_text.txt and readme.txt.
I know it is very tempting in C++ to get a proper breakdown of the video files and just do it yourself. But although the information is out there, building classes to handle each file format, and making them easily alterable to take future structure changes into account, is such a long-winded process that frankly it is just not worth the effort.
Instead I recommend ffmpeg. It got a mention above that calls it difficult, but it isn't. It has far more options than most people need, which makes it look harder than it is. For the majority of operations you can just let ffmpeg work things out for itself.
For example a file conversion
ffmpeg -i inputFile.mp4 outputFile.avi
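Closer to the original question, the same pattern can dump every frame to a numbered image sequence (the file names here are just examples):
ffmpeg -i inputFile.mp4 frame_%04d.png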
Decide right from the start that you will run ffmpeg operations in a thread, or more precisely through a thread library. But wrap it in your own thread class so that you can have your own EventArgs and ways of checking that the thread has finished. Something like:
ThreadLibManager()
{
    List<MyThreads> listOfActiveThreads;
    public AddThread(MyThreads);
}
Your thread class is something like:
class MyThread
{
    public Thread threadForThisInstance { get; set; }
    public MyFFMpegTools mpegTools { get; set; }
}
MyFFMpegTools performs many different video operations, so you want your own event args to tell your parent code precisely what type of operation has just raised an event.
class MyFmpegArgs
{
    public int thisThreadID { get; set; } //Set as a new MyThread is added to the List<>
    public MyFfmpegType operationType { get; set; }
    //output paths etc. that the parent handler will need to find output files
}
enum MyFfmpegType
{
    FF_CONVERTFILE = 0, FF_CREATETHUMBNAIL, FF_EXTRACTFRAMES ...
}
Here is a small snippet of my ffmpeg tool class, this part collecting information about a video.
I put FFmpeg in a particular location, and at startup the software makes sure that it is there. For this version I have moved it to the Desktop; I am fairly sure I have written the path correctly for you (I really hate MS's special folders system, so I ignore it as much as I can).
Anyway, it is an example of using windowless ffmpeg.
public string GetVideoInfo(FileInfo fi)
{
    outputBuilder.Clear();
    string strCommand = string.Concat(" -i \"", fi.FullName, "\"");
    string ffPath =
        System.Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "\\ffmpeg.exe";
    string oStr = "";
    try
    {
        Process build = new Process();
        //build.StartInfo.WorkingDirectory = @"dir";
        build.StartInfo.Arguments = strCommand;
        build.StartInfo.FileName = ffPath;
        build.StartInfo.UseShellExecute = false;
        build.StartInfo.RedirectStandardOutput = true;
        build.StartInfo.RedirectStandardError = true;
        build.StartInfo.CreateNoWindow = true;
        build.ErrorDataReceived += build_ErrorDataReceived;
        build.OutputDataReceived += build_ErrorDataReceived;
        build.EnableRaisingEvents = true;
        build.Start();
        build.BeginOutputReadLine();
        build.BeginErrorReadLine();
        build.WaitForExit();

        string findThis = "start";
        int offset = 0;
        foreach (string str in outputBuilder)
        {
            if (str.Contains("Duration"))
            {
                offset = str.IndexOf(findThis);
                oStr = str.Substring(0, offset);
            }
        }
    }
    catch
    {
        oStr = "Error collecting file information";
    }
    return oStr;
}
private void build_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
    string strMessage = e.Data;
    if (outputBuilder != null && strMessage != null)
    {
        outputBuilder.Add(string.Concat(strMessage, "\n"));
    }
}
Try using the OpenCV library. It definitely has the capabilities you require.
This guide has a section about accessing frames from a video file.
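For reference, frame-by-frame access with OpenCV's C++ API looks roughly like this (the file name is a placeholder, and OpenCV must have been built with a backend that can decode your format):
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.avi");   // any container/codec the configured backend can decode
    if (!cap.isOpened())
        return 1;
    cv::Mat frame;
    while (cap.read(frame))              // grabs and decodes the next frame
    {
        // ... apply the custom filter to `frame` here ...
    }
    return 0;
}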
If it's for AVI files, I'd read the data from the AVI file myself and extract the frames, then use the Video Compression Manager to decompress them.
The AVI file format is very simple, see: http://msdn.microsoft.com/en-us/library/dd318187(VS.85).aspx (and use google).
Once you have the file open you just extract each frame and pass it to ICDecompress() to decompress it.
It seems like a lot of work but it's the most reliable way.
If that's too much work, or if you want more than AVI files then use ffmpeg.
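If you do go the VfW route, a rough sketch using the AVIFile helpers is below. Note that it leans on AVIStreamGetFrame, which drives the installed VCM codec for you, instead of calling ICDecompress() directly, and all error handling is omitted:
#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

void DumpFrames(const char* path)
{
    AVIFileInit();
    PAVIFILE file = nullptr;
    if (AVIFileOpenA(&file, path, OF_READ, nullptr) == 0)
    {
        PAVISTREAM stream = nullptr;
        if (AVIFileGetStream(file, &stream, streamtypeVIDEO, 0) == 0)
        {
            // GETFRAME decompresses each frame through the installed codec.
            PGETFRAME frame = AVIStreamGetFrameOpen(stream, nullptr);
            long first = AVIStreamStart(stream);
            long count = AVIStreamLength(stream);
            for (long i = first; i < first + count; ++i)
            {
                // Returns a packed DIB: a BITMAPINFOHEADER followed by the pixel data.
                void* dib = AVIStreamGetFrame(frame, i);
                // ... apply the custom filter to the DIB here ...
            }
            AVIStreamGetFrameRelease(frame);
            AVIStreamRelease(stream);
        }
        AVIFileRelease(file);
    }
    AVIFileExit();
}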
OpenCV is the best solution if, in your case, the video only needs to become a sequence of pictures. If you want to do real video processing (video as "visual audio"), you need to stick with the frameworks listed by martjno. Newer Windows solutions (also for Win7) add three more possibilities:
Windows Media Foundation: Successor of DirectShow; cleaned-up interface
Windows Media Encoder 9: it does not only include the program, it also ships libraries for encoding
Windows Expression 4: successor of number 2
The last two are commercial-only solutions, but the first one is free. To code against WMF, you need to install the Windows SDK.
I would recommend FFmpeg or GStreamer. Try to stay away from OpenCV unless you plan to use functionality beyond just streaming video; the library is a beefy build and a pain to install from source when configuring the FFmpeg/GStreamer options.