How to parse a JSON array from inside an object with rapidjson - C++

The following is a JSON file exported from Tiled Map Editor.
{
    "compressionlevel":-1,
    "height":32,
    "infinite":false,
    "layers":[
        {
            "data":[ A whole bunch of integers in here ],
            "height":32,
            "id":1,
            "name":"Tile Layer 1",
            "opacity":1,
            "type":"tilelayer",
            "visible":true,
            "width":32,
            "x":0,
            "y":0
        }],
    "nextlayerid":2,
    "nextobjectid":1,
    "orientation":"orthogonal",
    "renderorder":"right-down",
    "tiledversion":"1.7.2",
    "tileheight":32,
    "tilesets":[
        {
            "firstgid":1,
            "source":"..\/..\/..\/..\/Desktop\/tileset001.tsx"
        }],
    "tilewidth":32,
    "type":"map",
    "version":"1.6",
    "width":32
}
And in this C++ block I am trying to parse out the data I actually need.
std::ifstream inFStream(filePath, std::ios::in);
if (!inFStream.is_open())
{
    printf("Failed to open map file: %s", filePath);
}

rapidjson::IStreamWrapper inFStreamWrapper{inFStream};
rapidjson::Document doc{};
doc.ParseStream(inFStreamWrapper);

_WIDTH = doc["width"].GetInt();   //get width of map in tiles
_HEIGHT = doc["height"].GetInt(); //get height of map in tiles

const rapidjson::Value& data = doc["layers"]["data"]; //FAILURE POINT
assert(data.IsArray());
When I compile and run, I am able to extract the right values for width and height, which sit outside of "layers":[{}].
But when that const rapidjson::Value& data = doc["layers"]["data"]; line runs, I get a runtime error claiming that document.h line 1344 IsObject() Assertion Failed.
I've been up and down the rapidjson website and other resources and can't find anything quite like this.
The next step would be to get the int values stored in "data" and push them into a std::vector, but that's not going to happen until I figure out how to get access to "data".

doc["layers"] is an array, so you have to index into it before you can reach "data":
const rapidjson::Value& layers = doc["layers"];
assert(layers.IsArray());
for (rapidjson::SizeType i = 0; i < layers.Size(); i++) {
    const rapidjson::Value& data = layers[i]["data"];
    assert(data.IsArray());
}
UPDATE:
Direct access to the data of the first layer:
const rapidjson::Value& data = doc["layers"][0]["data"];
This only gives you the data for the first item in the layers array. If layers has at least one item and you only need the first one, this will always work.
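To then pull the integers out of "data" and into a std::vector, something along these lines should work (a minimal sketch; the tileIds name is just illustrative):
// Minimal sketch: copy the tile IDs of the first layer's "data" array
// into a std::vector<int>.
const rapidjson::Value& data = doc["layers"][0]["data"];
assert(data.IsArray());

std::vector<int> tileIds;
tileIds.reserve(data.Size());
for (rapidjson::SizeType i = 0; i < data.Size(); ++i)
    tileIds.push_back(data[i].GetInt());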

Related

Read a list of parameters from a LuaRef using LuaBridge

[RESOLVED]
I'm building a game engine that uses LuaBridge to read components for entities. In my engine, an entity file looks like this, where "Components" is a list of the components that my entity has and the remaining tables set up the values for each individual component:
-- myEntity.lua
Components = {"MeshRenderer", "Transform", "Rigidbody"}

MeshRenderer = {
    Type = "Sphere",
    Position = {0, 300, 0}
}

Transform = {
    Position = {0, 150, 0},
    Scale = {1, 1, 1},
    Rotation = {0, 0, 0}
}

Rigidbody = {
    Type = "Sphere",
    Mass = 1
}
I'm currently using this function (in C++) in order to read the value from a parameter (given its name) inside a LuaRef.
template<class T>
T readParameter(LuaRef& table, const std::string& parameterName)
{
    try {
        return table.rawget(parameterName).cast<T>();
    }
    catch (const std::exception& e) {   // catch by const reference, not by value
        // std::cout ...
        return T();                     // a default-constructed T is safer than NULL for non-pointer types
    }
}
For example, when calling readParameter<std::string>(myRigidbodyTable, "Type"), with myRigidbodyTable being a LuaRef holding the values of Rigidbody, this function should return a std::string with the value "Sphere".
My problem is that once I finish reading and storing the values of my Transform component and move on to read the values for "Rigidbody", reading the value "Type" throws an unhandled exception at Stack::push(lua_State* L, const std::string& str, std::error_code&).
I am pretty sure this has to do with the fact that my Transform component stores a list of values for parameters like "Position", because I've had no problems reading components that only have a single value per parameter. What's the right way to do this, in case I am doing something wrong?
I'd also like to point out that I am new to LuaBridge, so this might be a beginner problem with a solution that I've been unable to find. Any help is appreciated :)
Found the problem: I wasn't reading the table properly. Instead of
LuaRef myTable = getGlobal(state, tableName.c_str());
I was using
LuaRef myTable = getGlobal(state, tableName.c_str()).getMetatable();
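For anyone reading a nested, table-valued parameter such as Position rather than a scalar, a rough sketch of a dedicated reader might look like this (readVector3 is a hypothetical helper I'm introducing, not part of the original code; it assumes LuaBridge 2.x-style LuaRef indexing and cast<T>()):
#include <array>
#include <string>
#include <LuaBridge/LuaBridge.h>

// Hypothetical helper: reads a three-element Lua table such as Transform.Position.
std::array<float, 3> readVector3(luabridge::LuaRef& table, const std::string& parameterName)
{
    std::array<float, 3> result{ 0.0f, 0.0f, 0.0f };
    luabridge::LuaRef value = table.rawget(parameterName);
    if (value.isTable())
    {
        // Lua arrays are 1-based.
        result[0] = value[1].cast<float>();
        result[1] = value[2].cast<float>();
        result[2] = value[3].cast<float>();
    }
    return result;
}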

C++ REST (Casablanca) - Failure while reading JSON

I was trying my hand at the C++ REST SDK.
I wrote the JSON like this:
json::value resp;
std::vector<Portfolio> portfolio;

// Populate portfolio
this->PortfolioList(usrStr, pwdStr, portfolio);

std::vector<Portfolio>::iterator it;
for (it = portfolio.begin(); it != portfolio.end(); it++)
{
    char costBuff[40]; _itoa_s(it->GetTotalCost(), costBuff, 10);
    char qtyBuff[40];  _itoa_s(it->GetQuantity(), qtyBuff, 10);

    json::value portfolioEntry;
    portfolioEntry[U("username")]  = json::value::string(utility::conversions::to_string_t(it->GetUserName()));
    portfolioEntry[U("stockCode")] = json::value::string(utility::conversions::to_string_t(it->GetStockCode()));
    portfolioEntry[U("quantity")]  = json::value::string(utility::conversions::to_string_t(qtyBuff));
    portfolioEntry[U("totalcost")] = json::value::string(utility::conversions::to_string_t(costBuff));

    resp[utility::conversions::to_string_t(it->GetStockCode())] = portfolioEntry;
}
This produced the following output:
{
"11002":{"quantity":11002,"totalcost":"272","username":"arunavk"},
"11003":{"quantity":11003,"totalcost":"18700","username":"arunavk"},
"11004":{"quantity":11004,"totalcost":"760","username":"arunavk"},
"11005":{"quantity":11005,"totalcost":"32","username":"arunavk"}
}
Now, on the receiving end, I tried to read it as below
for (int i = 0; i < size; i++)
{
    table->elementAt(i, 0)->addWidget(new Wt::WText(this->response[i][0].as_string()));
    table->elementAt(i, 1)->addWidget(new Wt::WText(this->response[i][1].as_string()));
    table->elementAt(i, 2)->addWidget(new Wt::WText(this->response[i][2].as_string()));
    table->elementAt(i, 3)->addWidget(new Wt::WText(this->response[i][3].as_string()));
}
But it fails. What am I missing?
Pardon me, I am new to REST, Casablanca, and JSON.
From a JSON point of view, the following
{
    "11002":{"quantity":11002,"totalcost":"272","username":"arunavk"},
    "11003":{"quantity":11003,"totalcost":"18700","username":"arunavk"},
    "11004":{"quantity":11004,"totalcost":"760","username":"arunavk"},
    "11005":{"quantity":11005,"totalcost":"32","username":"arunavk"}
}
is a JavaScript object with properties "11002", ..., "11005"; it is not an array. So if you want to get the value of a property, you have to use the property name:
this->response["11002"]["quantity"]
because when you use an integer index, json::value::operator[] assumes that you want to access an array element. Details are here: https://microsoft.github.io/cpprestsdk/classweb_1_1json_1_1value.html#a56c751a1c22d14b85b7f41a724100e22
UPDATED
If you do not know the properties of the received object, you can call the value::as_object method (https://microsoft.github.io/cpprestsdk/classweb_1_1json_1_1value.html#a732030bdee11c2f054299a0fb148df0e) to get the JSON object, and then use its specialized interface to iterate through the fields with begin and end iterators: https://microsoft.github.io/cpprestsdk/classweb_1_1json_1_1object.html#details
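A hedged sketch of that field iteration (variable names are illustrative; it assumes this->response holds the object shown in the question and that the keys written by the sender's code are present):
// Sketch: iterate every property of the received object instead of using integer indices.
int row = 0;
for (const auto& field : this->response.as_object())
{
    const utility::string_t& stockCode = field.first;      // e.g. "11002"
    const web::json::value& entry      = field.second;

    table->elementAt(row, 0)->addWidget(new Wt::WText(stockCode));
    table->elementAt(row, 1)->addWidget(new Wt::WText(entry.at(U("quantity")).serialize()));  // serialize() works whether it is a number or a string
    table->elementAt(row, 2)->addWidget(new Wt::WText(entry.at(U("totalcost")).as_string()));
    table->elementAt(row, 3)->addWidget(new Wt::WText(entry.at(U("username")).as_string()));
    ++row;
}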

Setting input layer in CAFFE with C++

I'm writing C++ code using CAFFE to predict a single (for now) image. The image has already been preprocessed and is in .png format. I have created a Net object and read in the trained model. Now, I need to use the .png image as an input layer and call net.Forward() - but can someone help me figure out how to set the input layer?
I found a few examples on the web, but none of them work, and almost all of them use deprecated functionality. According to Berkeley's Net API, "ForwardPrefilled" is deprecated and "Forward(vector, float*)" is deprecated; the API indicates that one should "set input blobs, then use Forward()" instead. That makes sense, but the "set input blobs" part is not expanded on, and I can't find a good C++ example of how to do that.
I'm not sure if using a caffe::Datum is the right way to go or not, but I've been playing with this:
float lossVal = 0.0;
caffe::Datum datum;
caffe::ReadImageToDatum("myImg.png", 1, imgDims[0], imgDims[1], &datum);
caffe::Blob< float > *imgBlob = new caffe::Blob< float >(1, datum.channels(), datum.height(), datum.width());
//How to get the image data into the blob, and the blob into the net as input layer???
const vector< caffe::Blob< float >* > &result = caffeNet.Forward(&lossVal);
Again, I'd like to follow the API's direction of setting the input blobs and then using the (non-deprecated) caffeNet.Forward(&lossVal) to get the result as opposed to making use of the deprecated stuff.
EDIT:
Based on an answer below, I updated to include this:
caffe::MemoryDataLayer<unsigned char> *memory_data_layer = (caffe::MemoryDataLayer<unsigned char> *)caffeNet.layer_by_name("input").get();
vector< caffe::Datum > datumVec;
datumVec.push_back(datum);
memory_data_layer->AddDatumVector(datumVec);
but now the call to AddDatumVector is segfaulting. I wonder if this is related to my prototxt format? Here's the top of my prototxt:
name: "deploy"
input: "data"
input_shape {
    dim: 1
    dim: 3
    dim: 100
    dim: 100
}
layer {
    name: "conv1"
    type: "Convolution"
    bottom: "data"
    top: "conv1"
I base this part of the question on this discussion about a "source" field being important in the prototxt...
caffe::Datum datum;
caffe::ReadImageToDatum("myImg.png", 1, imgDims[0], imgDims[1], &datum);

MemoryDataLayer<float> *memory_data_layer = (MemoryDataLayer<float> *)caffeNet.layer_by_name("data").get();
std::vector<caffe::Datum> datumVec;
datumVec.push_back(datum);
memory_data_layer->AddDatumVector(datumVec);

const vector< caffe::Blob< float >* > &result = caffeNet.Forward(&lossVal);
Something like this could be useful. Here you will have to use a MemoryData layer as the input layer; I am assuming the layer is named "data".
Note that AddDatumVector takes a vector of Datum objects, which is why the single datum is wrapped in a std::vector above.
I think this should get you started.
Happy brewing. :D
Here is an excerpt from my code, located here, where I use Caffe from C++. I hope this helps.
Net<float> caffe_test_net("models/sudoku/deploy.prototxt", caffe::TEST);
caffe_test_net.CopyTrainedLayersFrom("models/sudoku/sudoku_iter_10000.caffemodel");

// Get datum
Datum datum;
if (!ReadImageToDatum("examples/sudoku/cell.jpg", 1, 28, 28, false, &datum)) {
    LOG(ERROR) << "Error during file reading";
}

// Get the blob
Blob<float>* blob = new Blob<float>(1, datum.channels(), datum.height(), datum.width());

// Get the blobproto
BlobProto blob_proto;
blob_proto.set_num(1);
blob_proto.set_channels(datum.channels());
blob_proto.set_height(datum.height());
blob_proto.set_width(datum.width());
int size_in_datum = std::max<int>(datum.data().size(), datum.float_data_size());
for (int ii = 0; ii < size_in_datum; ++ii) {
    blob_proto.add_data(0.);
}
const string& data = datum.data();
if (data.size() != 0) {
    for (int ii = 0; ii < size_in_datum; ++ii) {
        blob_proto.set_data(ii, blob_proto.data(ii) + (uint8_t)data[ii]);
    }
}

// Set data into blob
blob->FromProto(blob_proto);

// Fill the vector
vector<Blob<float>*> bottom;
bottom.push_back(blob);
float type = 0.0;

const vector<Blob<float>*>& result = caffe_test_net.Forward(bottom, &type);
What about:
Caffe::set_mode(Caffe::CPU);
caffe_net.reset(new caffe::Net<float>("your_arch.prototxt", caffe::TEST));
caffe_net->CopyTrainedLayersFrom("your_model.caffemodel");
Blob<float> *your_blob = caffe_net->input_blobs()[0];
your_blob->set_cpu_data(your_image_data_as_pointer_to_float);
caffe_net->Forward();
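If you go the input_blobs() route from the last snippet, one way to hand the pixels over is to copy them into the blob's CPU buffer. This is only a sketch: the 1x3x100x100 shape is taken from the deploy prototxt above, and imageData is assumed to be filled elsewhere with the preprocessed pixels in NCHW (channel-planar) order.
#include <cstring>
#include <vector>

// Sketch only: shapes mirror the deploy prototxt above; imageData is assumed
// to already contain the preprocessed .png pixels in NCHW order.
caffe::Blob<float>* input = caffe_net->input_blobs()[0];
input->Reshape(1, 3, 100, 100);   // match the shape declared in the prototxt
caffe_net->Reshape();             // propagate the shape through the net

std::vector<float> imageData(input->count());
// ... fill imageData from the decoded .png (mean subtraction, scaling, ...) ...

std::memcpy(input->mutable_cpu_data(), imageData.data(),
            imageData.size() * sizeof(float));

caffe_net->Forward();             // non-deprecated overload
const caffe::Blob<float>* output = caffe_net->output_blobs()[0];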

How to create a DICOM image from bytes (DCMTK)

I want to use the DCMTK 3.6.1 library in an existing project that creates DICOM images, because I want to compress those images. In a new solution (Visual Studio 2013 / C++), following the example in the official DCMTK documentation, I have this code, which works properly.
using namespace std;

int main()
{
    DJEncoderRegistration::registerCodecs();

    DcmFileFormat fileformat;

    /**** MONO FILE ******/
    if (fileformat.loadFile("Files/test.dcm").good())
    {
        DcmDataset *dataset = fileformat.getDataset();
        DcmItem *metaInfo = fileformat.getMetaInfo();
        DJ_RPLossless params; // codec parameters, we use the defaults

        // this causes the lossless JPEG version of the dataset
        // to be created (EXS_JPEGProcess14SV1)
        dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);

        // check if everything went well
        if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
        {
            // force the meta-header UIDs to be re-generated when storing the file
            // since the UIDs in the data set may have changed
            delete metaInfo->remove(DCM_MediaStorageSOPClassUID);
            delete metaInfo->remove(DCM_MediaStorageSOPInstanceUID);
            metaInfo->putAndInsertString(DCM_ImplementationVersionName, "New Implementation Version Name");
            //delete metaInfo->remove(DCM_ImplementationVersionName);
            //dataset->remove(DCM_ImplementationVersionName);

            // store in lossless JPEG format
            fileformat.saveFile("Files/carrellata_esami_compresso.dcm", EXS_JPEGProcess14SV1);
        }
    }
    DJEncoderRegistration::cleanup();
    return 0;
}
Now I want to use the same code in an existing C++ application where
if (infoDicom.arrayImgDicom.GetSize() != 0) //Things of existing previous code
{
    //I have added the registration here
    DJEncoderRegistration::registerCodecs(); // register JPEG codecs

    DcmFileFormat fileformat;
    DcmDataset *dataset = fileformat.getDataset();
    DJ_RPLossless params;

    dataset->putAndInsertUint16(DCM_Rows, infoDicom.rows);
    dataset->putAndInsertUint16(DCM_Columns, infoDicom.columns);
    dataset->putAndInsertUint16(DCM_BitsStored, infoDicom.m_bitstor);
    dataset->putAndInsertUint16(DCM_HighBit, infoDicom.highbit);
    dataset->putAndInsertUint16(DCM_PixelRepresentation, infoDicom.pixelrapresentation);
    dataset->putAndInsertUint16(DCM_RescaleIntercept, infoDicom.rescaleintercept);
    dataset->putAndInsertString(DCM_PhotometricInterpretation, "MONOCHROME2");
    dataset->putAndInsertString(DCM_PixelSpacing, "0.086\\0.086");
    dataset->putAndInsertString(DCM_ImagerPixelSpacing, "0.096\\0.096");

    BYTE* pData = new BYTE[sizeBuffer];
    LPBYTE pSorg;

    for (int nf = 0; nf < iNumberFrames; nf++)
    {
        //this contains all the PixelData, and I put it into the dataset
        pSorg = (BYTE*)infoDicom.arrayImgDicom.GetAt(nf);
        dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);

        dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);

        //but this IF returns false, so canWriteXfer fails...
        if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
        {
            dataset->remove(DCM_MediaStorageSOPClassUID);
            dataset->remove(DCM_MediaStorageSOPInstanceUID);
        }

        //the saveFile fails too, and the error is "Pixel
        //representation not found", but I have set the pixel representation with
        //dataset->putAndInsertUint16(DCM_PixelRepresentation, infoDicom.pixelrapresentation);
        OFCondition status = fileformat.saveFile("test1.dcm", EXS_JPEGProcess14SV1);
        DJEncoderRegistration::cleanup();
        if (status.bad())
        {
            int error = 0; //only for test
        }

        thefile.Write(pSorg, sizeBuffer); //previous code
    }
At the moment I am testing with an image that has only one frame, so the for loop runs only once. I don't understand why dataset->chooseRepresentation(EXS_LittleEndianImplicit, &params); or dataset->chooseRepresentation(EXS_LittleEndianExplicit, &params); work perfectly, but dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params); does not.
If I use the same image in the first application, I can compress the image without problems...
EDIT: I think the main problem to solve is that status = dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &rp_lossless) returns "Tag not found". How can I find out which tag is missing?
EDIT 2: As suggested on the DCMTK forum, I have added the Bits Allocated tag and now it works for a few images, but not for all of them. For some images I get "Tag not found" again: how can I find out which tag is missing? As a rule, is it better to insert all the tags?
I solved the problem by adding the tags DCM_BitsAllocated and DCM_PlanarConfiguration; those were the missing tags. I hope this is useful for someone.
In any case, you should call chooseRepresentation only after you have inserted the pixel data:
dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);
dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
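Putting the two answers together, here is a minimal sketch of the tag set and call order that made the JPEG encoder happy in this thread. The concrete bit-depth values (16/12/11) are assumptions for a 16-bit monochrome frame, not values from the original post; adjust them to your data.
// Sketch only: the key points are DCM_BitsAllocated / DCM_PlanarConfiguration
// (the tags found missing above) and calling chooseRepresentation() only after
// the pixel data has been inserted.
void writeCompressedFrame(const Uint8* pixels, unsigned long pixelBufferSize,
                          Uint16 rows, Uint16 columns)
{
    DJEncoderRegistration::registerCodecs();

    DcmFileFormat fileformat;
    DcmDataset* dataset = fileformat.getDataset();
    DJ_RPLossless params;

    dataset->putAndInsertUint16(DCM_Rows, rows);
    dataset->putAndInsertUint16(DCM_Columns, columns);
    dataset->putAndInsertUint16(DCM_SamplesPerPixel, 1);
    dataset->putAndInsertUint16(DCM_BitsAllocated, 16);   // assumed 16-bit container
    dataset->putAndInsertUint16(DCM_BitsStored, 12);      // assumed
    dataset->putAndInsertUint16(DCM_HighBit, 11);         // assumed
    dataset->putAndInsertUint16(DCM_PixelRepresentation, 0);
    dataset->putAndInsertUint16(DCM_PlanarConfiguration, 0);
    dataset->putAndInsertString(DCM_PhotometricInterpretation, "MONOCHROME2");

    // pixel data first, then the compressed representation
    dataset->putAndInsertUint8Array(DCM_PixelData, pixels, pixelBufferSize);

    OFCondition status = dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
    if (status.good() && dataset->canWriteXfer(EXS_JPEGProcess14SV1))
        fileformat.saveFile("compressed.dcm", EXS_JPEGProcess14SV1);

    DJEncoderRegistration::cleanup();
}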

Implementing bulk record fetching

At the start of my program, I need to read data from an MS Access database (.mdb) into a drop-down control. This is done so that whenever the user types in that control, the application can auto-complete.
Anyway, reading from the database took forever, so I thought I'd implement bulk row fetching.
This is the code I have:
CString sDsn;
CString sField;
sDsn.Format("ODBC;DRIVER={%s};DSN='';DBQ=%s", sDriver, sFile);

TRY
{
    // Open the database
    database.Open(NULL, false, false, sDsn);

    // Allocate the rowset
    CMultiRowset recset(&database);

    // Build the SQL statement
    SqlString = "SELECT NAME "
                "FROM INFOTABLE";

    // Set the rowset size. This many rows will be fetched in one bulk operation
    recset.SetRowsetSize(25);

    // Open the rowset
    recset.Open(CRecordset::forwardOnly, SqlString, CRecordset::readOnly | CRecordset::useMultiRowFetch);

    // Loop through each rowset
    while (!recset.IsEOF())
    {
        int rowsFetched = (int)recset.GetRowsFetched(); // This value is always 1 somehow
        for (int rowCount = 1; rowCount <= rowsFetched; rowCount++)
        {
            recset.SetRowsetCursorPosition(rowCount);
            recset.GetFieldValue("NAME", sField);
            m_nameDropDown.AddString(sField);
        }

        // Go to next rowset
        recset.MoveNext();
    }

    // Close the database
    database.Close();
}
CATCH(CDBException, e)
{
    // If a database exception occurred, show error msg
    AfxMessageBox("Database error: " + e->m_strError);
}
END_CATCH;
MultiRowset.cpp looks like:
#include "stdafx.h"
#include "afxdb.h"
#include "MultiRowset.h"
// Constructor
CMultiRowset::CMultiRowset(CDatabase *pDB)
: CRecordset(pDB)
{
m_NameData = NULL;
m_NameDataLengths = NULL;
m_nFields = 1;
CRecordset::CRecordset(pDB);
}
void CMultiRowset::DoBulkFieldExchange(CFieldExchange *pFX)
{
pFX->SetFieldType(CFieldExchange::outputColumn);
RFX_Text_Bulk(pFX, _T("[NAME]"), &m_NameData, &m_NameDataLengths, 30);
}
MultiRowset.h looks like:
#if !defined(__MULTIROWSET_H_AD12FD1F_0566_4cb2_AE11_057227A594B8__)
#define __MULTIROWSET_H_AD12FD1F_0566_4cb2_AE11_057227A594B8__
class CMultiRowset : public CRecordset
{
public:
// Field data members
LPSTR m_NameData;
// Pointers for the lengths of the field data
long* m_NameDataLengths;
// Constructor
CMultiRowset(CDatabase *);
// Methods
void DoBulkFieldExchange(CFieldExchange *);
};
#endif
And in my database, the INFOTABLE looks like this:
NAME    AGE
-----   -----
Name1   Age1
Name2   Age2
...     ...
All I need to do is read the data from the database. Can someone please tell me what I'm doing wrong? My code right now behaves exactly like a normal fetch; there's no bulk fetching happening.
EDIT:
I just poked around in DBRFX.cpp and found out that RFX_Text_Bulk() initializes the m_NameData I pass in as new char[nRowsetSize * nMaxLength]!
This means m_NameData is only a character array! I need to fetch multiple names, so wouldn't I need a 2D character array? The strangest thing is, the same RFX_Text_Bulk() initializes the m_NameDataLengths I pass in as new long[nRowsetSize]. Why in the world would a character array need an array of lengths?!
According to http://msdn.microsoft.com/en-us/library/77dcbckz.aspx#_core_how_crecordset_supports_bulk_row_fetching, you have to open the CRecordset with the CRecordset::useMultiRowFetch flag before calling SetRowsetSize:
To implement bulk row fetching, you must specify the CRecordset::useMultiRowFetch option in the dwOptions parameter of the Open member function. To change the setting for the rowset size, call SetRowsetSize.
You almost got it right. To fetch the values, I would replace your
for( int rowCount = 1; rowCount <= rowsFetched; rowCount++ )
{
    recset.SetRowsetCursorPosition(rowCount);
    recset.GetFieldValue("NAME",sField);
    m_nameDropDown.AddString(sField);
}
with something like this:
for( int nPosInRowset = 0; nPosInRowset < rowsFetched; nPosInRowset++ )
{
    // Check if the value is null
    if (*(recset.m_NameDataLengths + nPosInRowset) == SQL_NULL_DATA)
        continue;

    CString csComboString;
    csComboString = (recset.m_NameData + (nPosInRowset * 30)); // where 30 is the size specified in RFX_Text_Bulk
    m_nameDropDown.AddString(csComboString);
}
EDIT: To fetch more than one row, remove the CRecordset::forwardOnly option.
EDIT 2: You can also keep CRecordset::forwardOnly, but add the CRecordset::useExtendedFetch option.
Just faced the same problem.
In the recset.Open() call you should pass only CRecordset::useMultiRowFetch for the dwOptions parameter, not CRecordset::readOnly | CRecordset::useMultiRowFetch.
Hope this helps someone...
EDIT: After re-checking, here is the situation: when using a bulk recordset and opening with CRecordset::forwardOnly and CRecordset::readOnly, you must also specify CRecordset::useExtendedFetch in dwOptions. For other types of scrolling, CRecordset::readOnly | CRecordset::useMultiRowFetch is just fine.
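Pulling the thread together, here is a minimal sketch of an Open call that keeps the forward-only, read-only cursor from the question while actually enabling bulk fetching. The rowset size of 25 and the SQL string are carried over from the question; treat the flag combination as a suggestion based on the answers above, not a guaranteed fix.
// Sketch: same recordset and SQL as in the question; the only change is the
// dwOptions combination, which adds useExtendedFetch alongside readOnly/useMultiRowFetch.
CMultiRowset recset(&database);
recset.SetRowsetSize(25);   // rows fetched per bulk operation
recset.Open(CRecordset::forwardOnly, SqlString,
            CRecordset::readOnly | CRecordset::useMultiRowFetch | CRecordset::useExtendedFetch);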