Error control coding for a practical application - error-correction

I'm doing a project where a device is built to measure the girth of rubber trees in a rubber plantation.
I need to give each tree an identity so that its measurements can be stored against it.
The ID of each tree is 33 bits (in binary). For error detection and correction I'm hoping to encode this 33-bit word into a codeword (using an error control coding technique) and generate a 2D matrix (a color matrix, with red and cyan squares representing 1s and 0s). The 2D matrix will represent the codeword and will be pasted on the trunk of the tree. A camera (the device) will then take an image of the 2D matrix, the code will be decoded, and the ID of the tree recovered.
I'm looking for the best scheme to implement this. I thought of cyclic codes, but since the data word is 33 bits, cyclic codes seem a bit complicated.
Can someone please suggest the best way (or at least a good way) to implement this?
Additional info: the image is taken in a forest environment (low-light conditions). A color matrix is used because of that environment: the bark of the tree is dark, so a black-and-white matrix would not be appropriate.

One way to do it is to use a 2D parity-check code. The resulting codeword is a matrix, and it has single error correction (SEC) capability.
Since your information part (the tree ID) has 33 bits, you may need to add a few dummy bits to make the information part a 2D rectangle (say, 6x6). If a tree's ID is 1010_1010_1010_1010_1010_1010_1010_1010_1, then by appending three more 0s we have:
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 0 0 | 0
—————————————
0 0 0 0 1 0 1
Then you get an (n, k, d) = (49, 36, 3) code, which corrects single-bit errors.
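If it helps, here is a minimal sketch of that scheme in C++ (the layout and the names are my own, not taken from any library): a 36-bit block (the 33-bit ID plus three dummy zeros) is packed into a 6x6 grid, an even-parity bit is appended to every row and column, and a single flipped bit is corrected by intersecting the failing row and column checks.

#include <bitset>
#include <cstdint>
#include <iostream>

// 7x7 codeword stored row-major: rows/columns 0..5 carry data,
// index 6 holds the even-parity bit of that row or column.
using Code = std::bitset<49>;

// Pack a 36-bit block (33-bit ID plus 3 dummy zeros) and append the parity bits.
Code encode(uint64_t info36) {
    Code m;
    for (int r = 0; r < 6; ++r)                 // 6x6 information part
        for (int c = 0; c < 6; ++c)
            m[r * 7 + c] = (info36 >> (35 - (r * 6 + c))) & 1;
    for (int r = 0; r < 6; ++r)                 // row parity column
        for (int c = 0; c < 6; ++c)
            m[r * 7 + 6] = m[r * 7 + 6] ^ m[r * 7 + c];
    for (int c = 0; c < 7; ++c)                 // column parity row (corner = overall parity)
        for (int r = 0; r < 6; ++r)
            m[6 * 7 + c] = m[6 * 7 + c] ^ m[r * 7 + c];
    return m;
}

// Fix at most one flipped bit in place. Returns false if the parity
// pattern indicates more than a single error.
bool correctSingleError(Code& m) {
    int badRow = -1, badCol = -1, nRows = 0, nCols = 0;
    for (int r = 0; r < 7; ++r) {
        int p = 0;
        for (int c = 0; c < 7; ++c) p ^= m[r * 7 + c];
        if (p) { badRow = r; ++nRows; }
    }
    for (int c = 0; c < 7; ++c) {
        int p = 0;
        for (int r = 0; r < 7; ++r) p ^= m[r * 7 + c];
        if (p) { badCol = c; ++nCols; }
    }
    if (nRows == 0 && nCols == 0) return true;      // no error
    if (nRows == 1 && nCols == 1) {                 // single error: flip that square
        m[badRow * 7 + badCol] = !m[badRow * 7 + badCol];
        return true;
    }
    return false;                                   // more than one error detected
}

int main() {
    uint64_t id33 = 0x155555555ULL;             // an example 33-bit ID
    Code m = encode(id33 << 3);                 // three trailing dummy zeros
    m[10] = !m[10];                             // simulate one misread square
    std::cout << (correctSingleError(m) ? "corrected\n" : "too many errors\n");
}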

Related

Unexpected transformationMatrix and offset Matrix with Skeletal Animation

What I use:
Assimp to import .fbx files from blender
OpenGL for rendering
glm lib for handling matrices and vectors
I am trying to make skeletal animation work. I don't read the .fbx file directly into the program; I convert it to a binary format first. I tried to understand how the transformationMatrix and offsetMatrix are supposed to work. This is what I understood: in order to later run an animation, we need a way to make a moving bone affect its vertices, but also the bones connected to it. So the idea is to use transformation matrices, which describe the coordinate system of a bone or a node; we multiply along the node path down to a bone and then multiply by its offsetMatrix to get back into object space. I think by now I have tried every possible combination of multiplying these, but I always get something wrong.
Then I looked at the values a couple of times, and to me it is not obvious at all how this should work. Correct me if I am wrong, but my expectation is that in bind pose, using the offsetMatrices and transformationMatrices from the Assimp import, I should end up with an identity matrix, because in bind pose I want my model to look just as it does without those matrices. These are the transformation matrices of all nodes up to the first bone:
The Root Node is identity
Armature mTransformationMatrix:
100 0 0 0
0 -1.6e-05 100 0
0 -100 -1.629e-05 0
0 0 0 1
Bone mTransformationMatrix:
1 0 0 0
0 1.6e-07 -1 0
0 1 -1.6e-07 0
0 0 0 1
I would expect those two, when multiplied, to give something like identity * 100.
mOffsetMatrix of the corresponding Bone:
0.38 0 0 0
0 0 -1 0
0 0.28 0 0
0 0 1.67 1
In my opinion this doesn't help me at all. So either my expectation of ending up with an identity matrix is wrong, or the offsetMatrix is.
In case you consider it important what my model looks like:
Edit: I forgot to mention: to read in the aiMatrix4x4 I use
static inline glm::mat4 mat4_cast(const aiMatrix4x4& m) { return glm::transpose(glm::make_mat4(&m.a1)); }
But the transformationMatrices I wrote in this report are taken directly from the Scene, so only the offsetMatrix is transposed. This doesn't change a thing, though.
I found the problem: I also have to transform from mesh space back to object space. The offsetMatrix is the inverse of the multiplied transformation matrices, additionally multiplied by the inverse of the transformation matrix of the mesh, which sits on a different node.
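For reference, a common way to assemble the final per-bone matrix with Assimp and glm is sketched below (globalInverse, nodeChain and boneMatrix are my own names, and this is the usual composition rather than necessarily the poster's exact code): the node transforms are accumulated from the root down to the bone, and the offsetMatrix brings the result back from bone space.

#include <glm/glm.hpp>
#include <vector>

// Typical skinning composition with Assimp:
// final = globalInverse * (root-to-bone node transforms) * offsetMatrix.
// In bind pose the accumulated node transforms cancel the offsetMatrix
// (up to the mesh node's own transform), which is why the product should
// come back to (roughly) the identity in object space.
glm::mat4 boneMatrix(const glm::mat4& globalInverse,          // inverse of the scene root transform
                     const std::vector<glm::mat4>& nodeChain, // mTransformation of each node, root -> bone
                     const glm::mat4& offsetMatrix)           // aiBone::mOffsetMatrix converted to glm
{
    glm::mat4 global(1.0f);
    for (const glm::mat4& node : nodeChain)
        global = global * node;                               // accumulate parent-to-child transforms
    return globalInverse * global * offsetMatrix;
}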

Enumerating and indexing all possible trees of n vertices [closed]

We have been trying to find a way to enumerate and index all possible trees with n labelled vertices. By Cayley's formula there are n^(n−2) trees on n labelled vertices. Is there a way, in C/C++, to index all the possible trees so that when a user inputs an integer a unique tree is generated in real time?
A quick glance at the Wikipedia article on Cayley's formula (the n^(n−2) formula you mention) pointed me to Prüfer sequences. A Prüfer sequence is a sequence of length n−2 consisting of (possibly repeated) node labels. It's obvious that there are n^(n−2) such sequences, and each sequence can be represented as an (n−2)-digit base-n number. It's less obvious that every Prüfer sequence corresponds to a unique tree with n labelled nodes, but that fact is sufficient to demonstrate Cayley's formula.
The Wikipedia article on Prüfer sequences explains how to turn a sequence into its corresponding tree, which is equivalent to turning an integer into a tree.
I haven't tried any of this, but it looks convincing.
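Since the question asks for C/C++, here is a small sketch of that integer-to-tree mapping (my own code, following the decoding procedure the Wikipedia article describes; vertex labels run from 0 to n−1 and the index k runs from 0 to n^(n−2) − 1):

#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Decode index k into the edge list of the corresponding labelled tree on
// vertices 0..n-1: write k in base n to get a Prüfer sequence of length n-2,
// then repeatedly attach the smallest remaining leaf to the next label.
std::vector<std::pair<int, int>> indexToTree(uint64_t k, int n) {
    std::vector<int> pruefer(n - 2);
    for (int i = n - 3; i >= 0; --i) {          // k as an (n-2)-digit base-n number
        pruefer[i] = static_cast<int>(k % n);
        k /= n;
    }
    std::vector<int> degree(n, 1);              // 1 + number of appearances in the sequence
    for (int label : pruefer) ++degree[label];

    std::vector<std::pair<int, int>> edges;
    for (int label : pruefer) {
        for (int leaf = 0; leaf < n; ++leaf) {  // smallest vertex that is currently a leaf
            if (degree[leaf] == 1) {
                edges.push_back(std::make_pair(leaf, label));
                --degree[leaf];
                --degree[label];
                break;
            }
        }
    }
    int u = -1, v = -1;                         // exactly two vertices of degree 1 remain
    for (int i = 0; i < n; ++i)
        if (degree[i] == 1) { if (u < 0) u = i; else v = i; }
    edges.push_back(std::make_pair(u, v));
    return edges;
}

int main() {
    for (const auto& e : indexToTree(42, 5))    // tree number 42 on 5 labelled vertices
        std::cout << e.first << " - " << e.second << "\n";
}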
I'm not well-versed in C or C++, but I think I can provide the theory such that enumerating every tree shouldn't be too hard. Comment if I need to clarify anything.
Think Binary.
Take an adjacency matrix. To describe whether one vertex is connected to another, we use 1 or 0. So to find all the graphs using adjacency matrices, we would fill up a matrix with all the combos of 1 and 0. The only constraint is that for trees, a node can't be its own parent, and can't have multiple parents. Example with three vertices:
0 1 1    0 1 1    0 0 1
1 0 0    0 0 0    1 0 0
0 0 0    1 0 0    0 1 0    etc.
What we can do is to lay the rows out side-by-side such that a binary sequence describes every matrix. Example:
0 1 1 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 etc.
So given nine bits, we can describe all graphs with three vertices. This translates to one tree for every number from 1 to 2^9, minus the numbers which are rotations of each other.
To turn a number into a tree, you just convert the number to binary, and turn the binary into a matrix. To fix the self-connections, for every "1" that is on or past the diagonal, move it further by one. So then:
1 0 0      1 0 1
0 1 0  ->  0 0 0
0 0 1      0 1 0
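As a rough sketch of the number-to-binary-to-matrix step described above (my own code; note it only performs the conversion and does not by itself guarantee that the resulting graph is a tree):

#include <cstdint>
#include <iostream>
#include <vector>

// Interpret the low n*n bits of 'number' as an n x n adjacency matrix,
// most significant bit first, exactly as in the row-by-row layout above.
std::vector<std::vector<int>> numberToMatrix(uint64_t number, int n) {
    std::vector<std::vector<int>> m(n, std::vector<int>(n, 0));
    for (int r = 0; r < n; ++r)
        for (int c = 0; c < n; ++c)
            m[r][c] = (number >> (n * n - 1 - (r * n + c))) & 1;
    return m;
}

int main() {
    // 0xE0 is 011100000 in binary, the first 3x3 example above.
    for (const auto& row : numberToMatrix(0xE0, 3)) {
        for (int bit : row) std::cout << bit << ' ';
        std::cout << '\n';
    }
}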

Given 2d array of 0s and 1s, find all the squares in it using backtracking

In this 2D array, 1 represents a point and 0 represents blank area.
For example, this array:
1 0 0 0 1
0 0 1 0 0
0 0 0 0 0
0 0 0 0 1
My answer should be 2, because there are 2 squares (or rectangles) in this array, like this:
All the points should be used, and you can't make another square or rectangle if all of its points are already used (for example, we can't make another square from the point in the middle to the point in the top right, because they are both already used in other squares). You can use a point multiple times as long as at least one corner of the new square is an unused point.
I could solve it as an implementation problem, but I don't understand how backtracking is related to this problem.
Thanks in advance.
Backtracking: let's take a look at another possible answer to your problem. You listed
{0,0} to {2,1}
{0,0} to {4,0}
as one solution. Another solution is (using the rule that a point can be reused as long as one corner is unused):
{4,0} to {2,1} (first time {4,0} and {2,1} are used)
{0,0} to {2,1} (first time {0,0} is used)
{0,0} to {4,3} (first time {4,3} is used)
which is 3 moves. Backtracking is designed to show you such alternative results using recursion. In this problem, if you start the search for squares at different positions in the array, you can reach different results.
For example, iterating from {0,0} and going right across each row, trying to find all possible rectangles starting with {0,0}, will give the solution you provided; iterating from {4,0} and going left across each row, trying to find all possible solutions, will give my result.
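A minimal backtracking sketch of that idea could look like the following (my own interpretation of the rules: each "square" is defined by two marked points taken as opposite corners, and a new pair may only be formed while at least one of its two points is still unused; trying the pairs in different orders is exactly what the recursion explores):

#include <cstddef>
#include <iostream>
#include <set>
#include <utility>
#include <vector>

using Point = std::pair<int, int>;                     // (column, row)

// Try every way of pairing points into "squares" under the rule that a new
// pair needs at least one currently unused point; record the sizes of the
// maximal solutions reached. Undoing the choice after the recursive call is
// the backtracking step.
void solve(const std::vector<Point>& pts, std::vector<bool>& used,
           int pairsSoFar, std::set<int>& solutionSizes) {
    bool extended = false;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        for (std::size_t j = i + 1; j < pts.size(); ++j) {
            if (used[i] && used[j]) continue;          // both corners already used
            bool wasI = used[i], wasJ = used[j];
            used[i] = used[j] = true;                  // choose this pair
            extended = true;
            solve(pts, used, pairsSoFar + 1, solutionSizes);
            used[i] = wasI;                            // backtrack
            used[j] = wasJ;
        }
    }
    if (!extended) solutionSizes.insert(pairsSoFar);   // no further pair possible
}

int main() {
    std::vector<Point> pts = {{0, 0}, {4, 0}, {2, 1}, {4, 3}};   // the points from the question
    std::vector<bool> used(pts.size(), false);
    std::set<int> sizes;
    solve(pts, used, 0, sizes);
    for (int s : sizes)
        std::cout << "reachable solution size: " << s << "\n";   // prints 2 and 3 for this input
}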

How does a projection Matrix work?

I have to write a paper for my A-levels about 3D programming. But I have a serious problem understanding the perspective projection matrix, and I need to explain the matrix fully, in detail. I've searched a lot of websites and YouTube videos on this topic, but very few even try to answer the question of why the matrix has those particular values in those particular places. Based on http://www.songho.ca/opengl/gl_projectionmatrix.html I was able to work out how the w row works, but I don't understand the other three.
I decided to use the "simpler" version for symmetric viewports only (right-handed coordinates):
I am very thankful for every attempt to explain the first three rows to me!
The core reason for the matrix is to map 3D coordinates to a 2D plane and make more distant objects appear smaller.
For just this, a much simpler matrix suffices (assuming your camera is at the origin and looking along the Z axis):
1 0 0 0
0 1 0 0
0 0 0 0
0 0 1 0
After multiplying by this matrix and then renormalizing by the w coordinate you have exactly that: each (x, y, z, 1) point becomes (x/z, y/z, 0, 1).
However, there is no depth information left (Z is 0 for all points), so a depth buffer/test won't work. For that we can add a parameter to the matrix so the depth information remains available:
1 0 0 0
0 1 0 0
0 0 0 1
0 0 1 0
Now the resulting point contains the inverse depth in the Z coordinate: each (x, y, z, 1) point becomes (x/z, y/z, 1/z, 1).
The extra parameters in the full projection matrix are the result of mapping the coordinates into the (-1,-1,-1) to (1,1,1) device box (the bounding box outside of which points are not drawn) using a scale and a translate.
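To make the effect of that second matrix concrete, here is a small self-contained sketch (plain C++, column-vector convention, matching the rows written above) that multiplies a point by it and performs the division by w:

#include <iostream>

// Multiply a 4x4 matrix (row-major, column-vector convention) by a vector.
void mul(const float m[4][4], const float v[4], float out[4]) {
    for (int i = 0; i < 4; ++i) {
        out[i] = 0.0f;
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * v[j];
    }
}

int main() {
    // The second matrix from the answer: x and y pass through, z is copied
    // into w (for the perspective divide) and z itself becomes the constant 1,
    // so that 1/z remains after the divide.
    const float proj[4][4] = {
        {1, 0, 0, 0},
        {0, 1, 0, 0},
        {0, 0, 0, 1},
        {0, 0, 1, 0},
    };

    const float point[4] = {2.0f, 4.0f, 8.0f, 1.0f};   // a point at depth z = 8
    float clip[4];
    mul(proj, point, clip);

    const float w = clip[3];                            // w now holds the original z
    std::cout << "projected: (" << clip[0] / w << ", "
              << clip[1] / w << ", " << clip[2] / w << ")\n";
    // Prints: projected: (0.25, 0.5, 0.125), i.e. x/z, y/z and the inverse depth 1/z.
}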

Searching jpeg/bmp/pdf image for straight lines, circles and text

I want to create an image parser that can read an image containing the following:
1. Straight Lines
2. Circles
3. Arcs
4. Text
I am open to solutions for any image format: JPEG, BMP, or PDF.
I have looked at the QImage documentation. It provides me with pixel data that I can store in the form of a 2D matrix. For the moment I assume there are only two colours, black and white: white represents an empty pixel and black represents a drawn pixel.
So I will have a sparse matrix like
0 1 1 1 0 0 0
0 0 0 0 0 0 1
0 1 1 0 0 0 1
1 0 0 1 0 0 1
1 0 0 1 0 0 0
0 1 1 0 0 0 0
Now I want to decode this matrix and search for these elements. Searching for horizontal and vertical lines is easy, because for each element I can just scan its neighbouring row and column elements.
How can I search for the other elements (angled lines, circles, arcs and possibly text)?
For text I read that QImage has a text() function, but I don't know for what type of input file it works.
Is there any other library that I can consider?
Please note that I just want to be able to read the image; no further processing needs to be done.
Is there any other way I can accomplish this? Or am I being too ambitious?
Thanks
Take a look at the OpenCV library.
It provides most of the standard algorithms used in image detection and computer vision, and the code quality of its implementations is generally quite high.
Note, though, that this is a very difficult problem in general, so you will probably need to do a fair amount of research before getting satisfactory results.
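For the line and circle parts specifically, a minimal OpenCV sketch could look like this (the parameter values are rough guesses and would need tuning for your drawings; text is not covered by these calls and would need a separate OCR step):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: detect <image>\n"; return 1; }

    cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (gray.empty()) { std::cerr << "could not read image\n"; return 1; }

    // Edge map first, then a probabilistic Hough transform for straight lines.
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    std::vector<cv::Vec4i> lines;                 // each entry: x1, y1, x2, y2
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180,
                    50 /*votes*/, 30 /*min length*/, 10 /*max gap*/);

    // Hough gradient method for circles (arcs need extra handling afterwards).
    std::vector<cv::Vec3f> circles;               // each entry: cx, cy, radius
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1,
                     gray.rows / 8.0 /*min centre distance*/,
                     100 /*Canny high threshold*/, 30 /*accumulator threshold*/);

    std::cout << lines.size() << " line segments, "
              << circles.size() << " circles found\n";
}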
One interesting way of tackling this would be with machine learning systems, such as neural networks and genetic algorithms. Neural nets in particular are very good at pattern matching and are often seen being used for tasks such as handwriting recognition.
There's a lot of information on this if you search for it. Here's one such article that is an introduction to NNs.
If your input images are always black and white, I don't think it would be too difficult to adapt a code example to get it working.
I suggest the Viola-Jones object detection algorithm.
Although the approach is usually applied to face detection, the original article discusses general object detection, which could cover your text, circles and lines.