Parsing a YAML file? - c++

How can I parse the following YAML file using yaml-cpp?
scene:
  - camera:
      film:
        width: 800
        height: 600
        filename: "out.svg"
  - shape:
      name: "muh"
I tried:
#include <yaml-cpp/yaml.h>

int main() {
    YAML::Node root_node = YAML::LoadFile("Scenes/StanfordBunny.flatland.yaml");
    // throws an exception
    int value = root_node["scene"]["camera"]["film"]["width"].as<int>();
}
How can I get the value of the width attribute?
How can I get the name of the shape attribute?

The "-" in front of camera means it is an array of objects. So my guess would be:
root_node["scene"][0]["camera"]["film"]["width"].as<int>();

Create a RegExp for a TextField in QML

I need a RegExp for a TextField that will receive an IP address and mask, something like "000.000.000.000/00". If part of the field is left empty, it should return 0, for example: 192.168.0.12
I have this:
Custom.VisTextField {
    implicitWidth: 200; implicitHeight: 30
    id: ipValue
    validator: RegExpValidator {
        regExp: /^(([01]?[0-9]?[0-9]|2([0-4][0-9]|5[0-5]))\.){3}([01]?[0-9]?[0-9]|2([0-4][0-9]|5[0-5]))\/(([01]?[0-3]|3([0])|2([0-9]))\.)$/
    }
    inputMask: "000.000.000.000/00;0"
}
But when a field is left empty I receive nothing, for example: 192.168..12
I searched many sources for a solution, and I realized there is a conflict between inputMask and the RegExp. I solved it using something like this:
Custom.VisTextField {
    implicitWidth: 200; implicitHeight: 30
    id: ipValue
    validator: RegExpValidator {
        regExp: /^(([01 ]?[ 0-9]?[0-9 ]|2([ 0-4][0-9 ]|5[ 0-5]))\.){3}([01 ]?[ 0-9]?[0-9 ]|2([ 0-4][0-9 ]|5[ 0-5]))\/(([0-2 ]?[ 0-9]|3([0 ]))\.)$/
    }
    inputMask: "000.000.000.000/00"
}

Deploying caffe RNNs

I'm trying to deploy a Caffe model containing an RNN layer. The issue I'm having is how to compute the output from the network. My assumption was that I could call
net->Forward();
to update the network and then
net->output_blobs()[0]->mutable_cpu_data()[x];
once every timestep to read the output. However, using a constant input and then running net->Forward() multiple times does not change the output the way one would expect. I've tried different weights/biases, which changes the output, but no matter what configuration I use, the output stays static. Does anyone know what the proper procedure for deploying Caffe RNNs with C++ is?
Edit:
This was tested with a single-neuron RNN layer like the one below.
model.prototxt:
layer {
  name: "input"
  type: "Input"
  top: "states"
  input_param {
    shape: {
      dim: 1
      dim: 1
    }
  }
}
input: "clip"
input_shape { dim: 1 dim: 1 dim: 1 }
layer {
  name: "rnn"
  type: "RNN"
  top: "rnn"
  bottom: "clip"
  bottom: "states"
  recurrent_param {
    num_output: 1
  }
}
And the .cpp:
caffe::Blob<float>* input_layer = test_net->input_blobs()[0];
float* input_data = input_layer->mutable_cpu_data();
input_data[0] = 1.0;
for (int i = 0; i < 5; i++)
{
    test_net->Forward();
    cout << "Output: " << test_net->output_blobs().back()->mutable_cpu_data()[0] << endl;
}

How can I get a layer's top label in C++?

1) Is it possible to get each layer's top labels (e.g. ip1, ip2, conv1, conv2) in C++?
If my layer is
layer {
  name: "relu1_1"
  type: "Input"
  top: "pool1"
  input_param {
    shape: {
      dim: 1
      dim: 1
      dim: 28
      dim: 28
    }
  }
}
I want to get the top label, which in my case is "pool1".
I searched the examples provided, but I couldn't find anything. Currently I'm able to get only the layer names and layer types with the following commands:
cout << "Layer name:" << "'" << net_->layer_names()[layer_index]<<endl;
cout << "Layer type: " << net_->layers()[layer_index]->type()<<endl;
2) Where can I find tutorials or examples that explain the most used APIs for working with the Caffe framework from C++?
Thank you in advance.
Look at the Net class in doxygen:
const vector< vector< Blob< Dtype >* > >& all_tops = net_->top_vecs(); // get the "top" blobs of all layers
Blob<Dtype>* ptop = all_tops[layer_index][0]; // pointer to the top blob of a layer
If you want the layer's name, you can
const string layer_name = net_->layer_names()[layer_index];
You can access all sorts of names and data through the net_ interface; just read the doc!
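If what you actually need is the top blob's name string (e.g. "pool1") rather than the blob itself, one possible approach, assuming net_ is the usual caffe::Net instance (top_ids() and blob_names() belong to the same Net interface), is to map the layer's top blob indices back to the net's blob names:
const vector<int>& ids = net_->top_ids(layer_index);  // blob indices of this layer's top blobs
const vector<string>& names = net_->blob_names();     // names of all blobs in the net
for (size_t i = 0; i < ids.size(); ++i)
    cout << "Top label: " << names[ids[i]] << endl;   // prints "pool1" for the example layer above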

Need support for Apple Pencil/finger drawing in Swift 3

I am using Swift 2.3 with TouchCanvas (the Apple Pencil sample app), in which, while drawing, I am able to switch between pen/pencil/brush/eraser thickness and colors, and each is applied correspondingly.
Now I have upgraded to Swift 3.0, and while drawing, when I switch between pen/pencil/brush/eraser thickness and colors, the last picked one is applied to ALL strokes.
I also tried Apple's latest Pencil API; the results were the same.
Can anyone please tell me the exact solution for this?
Ahhhh... after trying for long hours, I found a solution.
Just one line in CanvasView.swift:
override func draw(_ rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()!

    context.setLineCap(.round)

    needsFullRedraw = false // Added this line

    if (needsFullRedraw) {
        setFrozenImageNeedsUpdate()
        frozenContext.clear(bounds)
        for array in [finishedLines, lines] {
            for line in array {
                line.drawCommitedPointsInContext(frozenContext, isDebuggingEnabled: isDebuggingEnabled, usePreciseLocation: usePreciseLocations)
            }
        }
        needsFullRedraw = false
    }

    frozenImage = frozenImage ?? frozenContext.makeImage()

    if let frozenImage = frozenImage {
        context.draw(frozenImage, in: bounds)
    }

    for line in lines {
        line.drawInContext(context, isDebuggingEnabled: isDebuggingEnabled, usePreciseLocation: usePreciseLocations)
    }
}
Or just comment out the following block:
/*
if (needsFullRedraw) {
    setFrozenImageNeedsUpdate()
    frozenContext.clear(bounds)
    for array in [finishedLines, lines] {
        for line in array {
            line.drawCommitedPointsInContext(frozenContext, isDebuggingEnabled: isDebuggingEnabled, usePreciseLocation: usePreciseLocations)
        }
    }
    needsFullRedraw = false
}
*/

Setting input layer in CAFFE with C++

I'm writing C++ code using CAFFE to predict a single (for now) image. The image has already been preprocessed and is in .png format. I have created a Net object and read in the trained model. Now, I need to use the .png image as an input layer and call net.Forward() - but can someone help me figure out how to set the input layer?
I found a few examples on the web, but none of them work, and almost all of them use deprecated functionality. According to Berkeley's Net API, using "ForwardPrefilled" is deprecated, and so is "Forward(vector, float*)". The API indicates that one should "set input blobs, then use Forward() instead". That makes sense, but the "set input blobs" part is not expanded on, and I can't find a good C++ example of how to do that.
I'm not sure if using a caffe::Datum is the right way to go or not, but I've been playing with this:
float lossVal = 0.0;
caffe::Datum datum;
caffe::ReadImageToDatum("myImg.png", 1, imgDims[0], imgDims[1], &datum);
caffe::Blob< float > *imgBlob = new caffe::Blob< float >(1, datum.channels(), datum.height(), datum.width());
//How to get the image data into the blob, and the blob into the net as input layer???
const vector< caffe::Blob< float >* > &result = caffeNet.Forward(&lossVal);
Again, I'd like to follow the API's direction of setting the input blobs and then using the (non-deprecated) caffeNet.Forward(&lossVal) to get the result as opposed to making use of the deprecated stuff.
EDIT:
Based on an answer below, I updated to include this:
caffe::MemoryDataLayer<unsigned char> *memory_data_layer = (caffe::MemoryDataLayer<unsigned char> *)caffeNet.layer_by_name("input").get();
vector< caffe::Datum > datumVec;
datumVec.push_back(datum);
memory_data_layer->AddDatumVector(datumVec);
but now the call to AddDatumVector is segfaulting. I wonder if this is related to my prototxt format? Here's the top of my prototxt:
name: "deploy"
input: "data"
input_shape {
dim: 1
dim: 3
dim: 100
dim: 100
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
I base this part of the question on this discussion about a "source" field being important in the prototxt...
caffe::Datum datum;
caffe::ReadImageToDatum("myImg.png", 1, imgDims[0], imgDims[1], &datum);
vector<caffe::Datum> datumVec;
datumVec.push_back(datum);
MemoryDataLayer<float> *memory_data_layer = (MemoryDataLayer<float> *)caffeNet.layer_by_name("data").get();
memory_data_layer->AddDatumVector(datumVec);
const vector< caffe::Blob< float >* > &result = caffeNet.Forward(&lossVal);
Something like this could be useful. Here you will have to use a MemoryData layer as the input layer, and I am expecting that layer to be named data. Note that AddDatumVector expects a vector of Datum rather than a single Datum, which is why the datum is pushed into datumVec first.
I think this should get you started.
Happy brewing. :D
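For reference, a MemoryData input layer in the deploy prototxt would look roughly like this. Treat it as a sketch only: the 1x3x100x100 shape is taken from the question's input_shape, and a MemoryData layer also produces a second "label" top.
layer {
  name: "data"
  type: "MemoryData"
  top: "data"
  top: "label"
  memory_data_param {
    batch_size: 1
    channels: 3
    height: 100
    width: 100
  }
}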
Here is an excerpt from my code located here where I used Caffe in my C++ code. I hope this helps.
Net<float> caffe_test_net("models/sudoku/deploy.prototxt", caffe::TEST);
caffe_test_net.CopyTrainedLayersFrom("models/sudoku/sudoku_iter_10000.caffemodel");

// Get datum
Datum datum;
if (!ReadImageToDatum("examples/sudoku/cell.jpg", 1, 28, 28, false, &datum)) {
    LOG(ERROR) << "Error during file reading";
}

// Get the blob
Blob<float>* blob = new Blob<float>(1, datum.channels(), datum.height(), datum.width());

// Get the blobproto
BlobProto blob_proto;
blob_proto.set_num(1);
blob_proto.set_channels(datum.channels());
blob_proto.set_height(datum.height());
blob_proto.set_width(datum.width());
int size_in_datum = std::max<int>(datum.data().size(), datum.float_data_size());
for (int ii = 0; ii < size_in_datum; ++ii) {
    blob_proto.add_data(0.);
}
const string& data = datum.data();
if (data.size() != 0) {
    for (int ii = 0; ii < size_in_datum; ++ii) {
        blob_proto.set_data(ii, blob_proto.data(ii) + (uint8_t)data[ii]);
    }
}

// Set data into blob
blob->FromProto(blob_proto);

// Fill the vector
vector<Blob<float>*> bottom;
bottom.push_back(blob);
float type = 0.0;

const vector<Blob<float>*>& result = caffe_test_net.Forward(bottom, &type);
What about:
Caffe::set_mode(Caffe::CPU);
caffe_net.reset(new caffe::Net<float>("your_arch.prototxt", caffe::TEST));
caffe_net->CopyTrainedLayersFrom("your_model.caffemodel");
Blob<float> *your_blob = caffe_net->input_blobs()[0];
your_blob->set_cpu_data(your_image_data_as_pointer_to_float);
caffe_net->Forward();
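If you do not already have the pixel data in a contiguous float buffer, a small variation on the same idea is to write directly into the input blob's own memory. This is only a sketch: fill_pixel() is a hypothetical stand-in for your own preprocessing, and the 1x3x100x100 shape comes from the question's prototxt.
caffe::Blob<float>* input = caffe_net->input_blobs()[0];  // shaped 1x3x100x100 by the prototxt
float* input_data = input->mutable_cpu_data();

// Copy the preprocessed image into the blob in Caffe's NCHW memory order.
for (int c = 0; c < 3; ++c)
    for (int h = 0; h < 100; ++h)
        for (int w = 0; w < 100; ++w)
            input_data[(c * 100 + h) * 100 + w] = fill_pixel(c, h, w);  // hypothetical helper

caffe_net->Forward();
const float* output = caffe_net->output_blobs()[0]->cpu_data();  // read the prediction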