I wrote the code below in Qt, and I am getting bad results. Can anyone help me and tell me what is wrong? If anyone can help me it would be great.
My matrix is just some random integers, 0 or 1.
while (x < obraz.width())
{
    while (y < obraz.height())
    {
        piksel2 = obraz.pixel(x, y);
        kolor2 = QColor::fromRgb(piksel2);
        minR = kolor2.red();
        minG = kolor2.green();
        minB = kolor2.blue();
        for (i = 0; i < w; i++)
        {
            for (j = 0; j < h; j++)
            {
                if (matrix[i][j] == 1 && x - o + i >= 0 && y - u + j >= 0 && x - o + i < obraz.width() && y - u + j < obraz.height())
                {
                    piksel = obraz.pixel(x - o + i, y - u + j);
                    kolor = QColor::fromRgb(piksel);
                    if (kolor.blue() < minB)
                    {
                        minB = kolor.blue();
                    }
                    if (kolor.green() < minG)
                    {
                        minG = kolor.green();
                    }
                    if (kolor.red() < minR)
                    {
                        minR = kolor.red();
                    }
                }
            }
        }
        obraz.setPixel(x, y, qRgb(minR, minG, minB));
        y++;
    }
    y = 1;
    x++;
}
Input file:
Output file:
The main problem with the code is that it writes the result for each pixel into the input image. This result will be used when computing the min value for the next pixel. Thus, the dark patch at the top-left of the image gets propagated across the whole image.
It is important for this type of algorithm to write into a separate output buffer, leaving the input unchanged until the whole image has been processed.
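A minimal sketch of the separate-buffer idea, using a plain vector-of-rows grayscale image instead of QImage (all names here are illustrative, not the asker's actual code): the erosion reads only from the input and writes only to the output, so the dark patch cannot propagate.

```cpp
#include <vector>
#include <algorithm>

// A grayscale image as a vector of rows; a stand-in for QImage.
using Image = std::vector<std::vector<int>>;

// Erode `in` with structuring element `se` anchored at (anchorY, anchorX).
// The key point: `out` is a separate buffer, so every min is computed
// over the *original* pixel values, never over already-eroded ones.
Image erode(const Image& in, const std::vector<std::vector<int>>& se,
            int anchorY, int anchorX) {
    const int h = (int)in.size(), w = (int)in[0].size();
    const int sh = (int)se.size(), sw = (int)se[0].size();
    Image out = in;  // output buffer; the input stays untouched
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int m = in[y][x];
            for (int i = 0; i < sh; ++i)
                for (int j = 0; j < sw; ++j) {
                    int yy = y - anchorY + i, xx = x - anchorX + j;
                    if (se[i][j] == 1 && yy >= 0 && xx >= 0 && yy < h && xx < w)
                        m = std::min(m, in[yy][xx]);  // read from input only
                }
            out[y][x] = m;
        }
    }
    return out;
}
```

With QImage the same pattern applies: make a copy (e.g. `QImage output = obraz.copy();`), write all results into the copy, and only assign it back to `obraz` after the loop finishes.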
Do note also that the erosion is well defined for gray-value images, but not for color images. You seem to want to apply marginal ordering, which is equivalent to computing the erosion for each channel independently. Be advised that this method will introduce new colors to the image. There are better approaches, but they all have some sort of downside. I wrote a small overview about this some years ago on my blog.
You might remember me; I'm working on a kind of 'Lightroom' panel, using C++ and Qt for the GUI.
Today I was reading about implementing unit tests for my main classes, but my question is: how can I test a function that does not return anything?
For example, I have this function:
void ImgProcessing::processMaster(cv::Mat& img, cv::Mat& tmp, int brightness, int red, int green, int blue, double contrast) {
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            for (int k = 0; k < 3; k++) {
                if (k == 0) //_R
                    tmp.at<cv::Vec3b>(i,j)[k] = cv::saturate_cast<uchar>((img.at<cv::Vec3b>(i,j)[k] + brightness + red)*(259 * (contrast + 255) / (255 * (259 - contrast))));
                if (k == 1) //_G
                    tmp.at<cv::Vec3b>(i,j)[k] = cv::saturate_cast<uchar>((img.at<cv::Vec3b>(i,j)[k] + brightness + green)*(259 * (contrast + 255) / (255 * (259 - contrast))));
                if (k == 2) //_B
                    tmp.at<cv::Vec3b>(i,j)[k] = cv::saturate_cast<uchar>((img.at<cv::Vec3b>(i,j)[k] + brightness + blue)*(259 * (contrast + 255) / (255 * (259 - contrast))));
            }
}
This function just takes the Mat img and modifies the Mat tmp.
Then I update the UI to display the modified image, using another dedicated function in my GUI class.
Has anyone already encountered something like this?
It does not make a difference if it returns a value the regular way or via an output parameter. The procedure is the same anyway. Run the function and check that the output parameter has the expected value.
This is C code, but it does not make a difference for understanding the concept. Consider these functions:
int addOne1(int x) { return x+1; }
void addOne2(int x, int* ret) { *ret = x+1; }
These can now be tested in this way:
const int x = 3;
int ret1, ret2;
ret1 = addOne1(x);
addOne2(x, &ret2);
assert(ret1 == 4);
assert(ret2 == 4);
If the output parameter also is an input parameter, then you of course need to make sure that you know the initial value.
void inc(int *x) { (*x)++; }
int x=3;
inc(&x);
assert(x == 4);
Technically, modifying a parameter IS considered a side effect. But as long as you are careful it's not a big issue. The difference compared to using a member variable is huge. And if you start modifying globals you will soon make it REALLY hard to test the code.
I am trying to develop a C++ program with the OpenCV library in Xcode 9.3 on macOS 10.14, using clang. For weeks I have been trying to solve, or at least understand, an undefined-behaviour error that sometimes makes my program crash and sometimes not.
I am reading a set of images from different cameras and storing them in a multidimensional array: silC[camera][image]. (images are well stored)
I get the error THREAD 1: EXC_BAD_ACCESS (code=1, address=0x1177c1530) when I do currentImage.at(x,y), even though neither the values of currentImage nor the image itself seem to be the problem.
I post the code below in case someone can help me.
vector< vector<Mat> > silC(8,vector<Mat>()); // Store the pbm images separating from different cameras
// I read the images and store them in silC.
for (int z = 0; z < nz; z++) {
    for (int y = 0; y < ny; y++) {
        for (int x = 0; x < nx; x++) {
            // Current voxel coordinates in the 3D space
            float xcoord = x*voxelsize + Ox + voxelsize/2;
            float ycoord = y*voxelsize + Oy + voxelsize/2;
            float zcoord = z*voxelsize + Oz + voxelsize/2;
            for (int camId = 0; camId < matricesP.size(); camId++) {
                imgId = 0;
                currentImage = silC[camId][imgId];
                int w = silC[camId][imgId].cols;
                int h = silC[camId][imgId].rows;
                // Project the voxel from the 3D space to the images
                Mat P = matricesP[camId];
                Mat projection = P*(Mat_<float>(4,1) << xcoord, ycoord, zcoord, 1.0);
                // We get the point in homogeneous coordinates
                float xp = projection.at<float>(0);
                float yp = projection.at<float>(1);
                float zp = projection.at<float>(2);
                // Get the Cartesian coordinates
                int xp2d = cvRound(xp/zp);
                int yp2d = cvRound(yp/zp);
                if (xp2d >= 0 && xp2d < w && yp2d >= 0 && yp2d < h) {
                    // all values are correct! :/
                    // int value = silC[camId][imgId].at<float>(xp2d, yp2d); // undefined behaviour: crashes sometimes..
                    int value = currentImage.at<float>(xp2d, yp2d); // undefined behaviour, also crashes sometimes..
                    if (value == 255) {
                        cout << "Voxel okey \n";
                    }
                }
            }
        }
    }
}
EDIT:
The solution posted in the comments below: instead of currentImage.at(xp2d, yp2d) it must be currentImage.at(yp2d, xp2d), since cv::Mat access requires (row, col) order.
BUT, I have tried several times to parallelize the loop with OpenMP (#pragma omp parallel for) and it kept crashing. If someone is familiar with parallelization, I would appreciate any help.
the solution is what #rafix07 posted. Thank you very much guys, next time I'll try to focus more.
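A small sketch of why the argument order matters, using a plain row-major buffer to model cv::Mat's storage (the names and the 2x3 example are illustrative): cv::Mat::at takes (row, col), i.e. (y, x), so swapping the arguments indexes a row that may not exist on a non-square image.

```cpp
// A 2x3 "image" stored row-major, like the data buffer behind a cv::Mat.
const int rows = 2, cols = 3;
int data[rows * cols] = {0, 1, 2,
                         3, 4, 5};

// Models mat.at<T>(r, c): row index first, then column index.
int at(int r, int c) { return data[r * cols + c]; }
```

With x = 2 (column) and y = 1 (row), at(y, x) correctly reads the last element, while the swapped call at(x, y) would compute data[2*3 + 1], which is past the end of the buffer. That is exactly the kind of out-of-bounds read that crashes only sometimes.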
So, I try to create my own neural network. Something really simple.
My input is the MNIST database of handwritten digits.
Input: 28*28 neurons (Images).
Output: 10 neurons (0/1/2/3/4/5/6/7/8/9).
So my network is as follow: 28*28 -> 15 -> 10.
The problem remains in my estimated output. Indeed, it seems I have a gradient explosion.
The output given by my network is here: https://pastebin.com/EFpBGAZd
As you can see, the first estimated output is wrong, so the network adjusts the weights via backpropagation. But it doesn't seem to update the weights correctly; the estimated output stays too high compared to the second-highest value.
So the first estimated output keeps being the best estimated output for the following training samples (13 in my example).
My backpropagation code:
VOID BP(NETWORK &Network, double Target[OUTPUT_NEURONS]) {
    double DeltaETotalOut = 0;
    double DeltaOutNet = 0;
    double DeltaErrorNet = 0;
    double DeltaETotalWeight = 0;
    double Error = 0;
    double ErrorTotal = 0;
    double OutputUpdatedWeights[OUTPUT_NEURONS*HIDDEN_NEURONS] = { 0 };
    unsigned int _indexOutput = 0;
    double fNetworkError = 0;

    // Calculate error
    for (int i = 0; i < OUTPUT_NEURONS; i++) {
        fNetworkError += 0.5*pow(Target[i] - Network.OLayer.Cell[i].Output, 2);
    }
    Network.Error = fNetworkError;

    // Output neurons
    for (int i = 0; i < OUTPUT_NEURONS; i++) {
        DeltaETotalOut = -(Target[i] - Network.OLayer.Cell[i].Output);
        DeltaOutNet = ActivateSigmoidPrime(Network.OLayer.Cell[i].Output);
        for (int j = 0; j < HIDDEN_NEURONS; j++) {
            OutputUpdatedWeights[_indexOutput] = Network.OLayer.Cell[i].Weight[j] - 0.5 * DeltaOutNet*DeltaETotalOut*Network.HLayer.Cell[j].Output;
            _indexOutput++;
        }
    }

    // Hidden neurons
    for (int i = 0; i < HIDDEN_NEURONS; i++) {
        ErrorTotal = 0;
        for (int k = 0; k < OUTPUT_NEURONS; k++) {
            DeltaETotalOut = -(Target[k] - Network.OLayer.Cell[k].Output);
            DeltaOutNet = ActivateSigmoidPrime(Network.OLayer.Cell[k].Output);
            DeltaErrorNet = DeltaETotalOut * DeltaOutNet;
            Error = DeltaErrorNet * Network.OLayer.Cell[k].Weight[i];
            ErrorTotal += Error;
        }
        DeltaOutNet = ActivateSigmoidPrime(Network.HLayer.Cell[i].Output);
        for (int j = 0; j < INPUT_NEURONS; j++) {
            DeltaETotalWeight = ErrorTotal * DeltaOutNet*Network.ILayer.Image[j];
            Network.HLayer.Cell[i].Weight[j] -= 0.5 * DeltaETotalWeight;
        }
    }

    // Update weights
    _indexOutput = 0;
    for (int i = 0; i < OUTPUT_NEURONS; i++) {
        for (int j = 0; j < HIDDEN_NEURONS; j++) {
            Network.OLayer.Cell[i].Weight[j] = OutputUpdatedWeights[_indexOutput];
            _indexOutput++;
        }
    }
}
How can I solve this issue?
I haven't worked on the hidden layer nor on biases yet; is it due to that?
Thanks
Well, since backpropagation is notoriously hard to implement, and especially to debug (I guess everyone who has done it can relate), it's much harder still to debug code written by others.
After a quick look over your code, I'm quite surprised that you calculate a negative delta term. Are you using ReLU or a sigmoid function? I'm quite sure there is more. But I'd suggest you stay away from MNIST until you get your network to solve XOR.
I've written a summary in pseudo code on how to implement backpropagation. I'm sure you'll be able to translate it into C++ quite easily:
Strange convergence in simple Neural Network
In my experience neural networks should really be implemented with matrix operations. This will make your code faster and easier to debug.
The way to debug backpropagation is to use finite difference. For a loss function J(theta) we can approximate the gradient in each dimension with (J(theta + epsilon*d) - J(theta))/epsilon with d a one-hot vector representing one dimension (note the similarity to a derivative).
https://en.wikipedia.org/wiki/Finite_difference_method
So, I have a pretty good idea of how to implement the majority of the program. However, I am having a hard time coming up with an algorithm to add the hints for array locations adjacent to mines. The real trouble is that the edge cases almost force you to use two functions to deal with it (I have a 20-line max on all functions). I know that from the position of the mine we want a loop that checks row-1 to row+1 and col-1 to col+1, but is it possible to do this in one function with the code I have for the game? If so, some advice would be great!
EDIT!
So I think I have come up with an algorithm that works for all cases, but it is outputting bad info. I am pretty sure it is due to improper casting, but I am unable to see what's wrong.
Here are the two functions I wrote to add the hints:
void add_hints_chk(char **game_board, int cur_row, int cur_col, int rows, int cols)
{
    int row_start = 0, row_end = 0, col_start = 0, col_end = 0;
    if (cur_row - 1 < 0)
    {
        // Top edge case
        row_start = 0;
    }
    else
    {
        row_start = cur_row - 1;
    }
    if (cur_row + 1 > rows - 1)
    {
        // Bottom edge case
        row_end = rows - 1;
    }
    else
    {
        row_end = cur_row + 1;
    }
    if (cur_col - 1 < 0)
    {
        // Left edge case
        col_start = 0;
    }
    else
    {
        col_start = cur_col - 1;
    }
    if (cur_col - 1 > cols - 1)
    {
        // Right edge case
        col_end = cols - 1;
    }
    else
    {
        col_end = cur_col + 1;
    }
    add_hints(game_board, row_start, row_end, col_start, col_end);
}
void add_hints(char **board, int row_start, int row_end, int col_start, int col_end)
{
    int tmp_int = 0;
    for (int i = row_start; i <= row_end; i++)
    {
        for (int j = col_start; j <= col_end; j++)
        {
            if (board[i][j] != '*')
            {
                if (board[i][j] == ' ')
                {
                    tmp_int = 1;
                    board[i][j] = (char)tmp_int;
                }
                else
                {
                    tmp_int = (int)board[i][j];
                    tmp_int++;
                    board[i][j] += (char)tmp_int;
                }
            }
        }
    }
}
So, when I print the array, I get a little box with a question mark in it. Am I converting tmp_int back to a char incorrectly?
There are different strategies to handle this. One simple strategy is creating a larger grid (add one line on each side) that is initialized with no bombs; make the board a view that hides the borders. With this strategy you know that you can step out of the game board without causing issues (since the data structure has an additional row).
Alternatively you can test whether the coordinates are within the valid range before calling the function that tests, or as the first step within that function.
Also you can consider precalculating the values for all of the map, whenever you add a bomb to the board during the pre-game phase, increment the counter of bombs in the vicinity for all of the surrounding positions. You can use either of the above approaches to handle the border conditions.
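A minimal sketch of the border approach described above (illustrative names, assuming ' ' marks an empty cell and '*' a mine): the board allocates one extra ring of permanently empty cells, and an accessor hides the border, so neighbor loops never need bounds checks.

```cpp
#include <vector>

// Board with a hidden one-cell border: storage is (rows+2) x (cols+2),
// the outer ring stays ' ' forever, and at(r, c) shifts indices by one
// so callers can safely ask about r-1..r+1 and c-1..c+1 at any edge.
struct Board {
    int rows, cols;
    std::vector<std::vector<char>> cells;

    Board(int r, int c)
        : rows(r), cols(c),
          cells(r + 2, std::vector<char>(c + 2, ' ')) {}

    char& at(int r, int c) { return cells[r + 1][c + 1]; }

    // Count mines in the 8 neighbors of (r, c); no boundary checks needed
    // because stepping outside the playable area just reads the border.
    int minesAround(int r, int c) {
        int n = 0;
        for (int i = r - 1; i <= r + 1; ++i)
            for (int j = c - 1; j <= c + 1; ++j)
                if (!(i == r && j == c) && at(i, j) == '*') ++n;
        return n;
    }
};
```

This keeps each function well under the 20-line limit, since all the edge-case if/else branches disappear.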
For any cell, C, there are 8 possible locations to check:
# # #
# C #
# # #
Before extracting data from the array, each outer location must be boundary checked.
You may be able to generalize, for example, if the value (column - 1) is out of bounds, you don't need to check 3 locations.
In your case, I would go with the brute-force method and check each outer cell for boundaries before accessing it. If profiling identifies this as the primary bottleneck, then come back and optimize it. Otherwise, move on.
Edit 1: Being blunt
int C_left = C_column - 1;
int C_right = C_column + 1;
if (C_left >= 0)
{
// The left column can be accessed.
}
if (C_right < MAXIMUM_COLUMNS)
{
// The right columns can be accessed.
}
// Similarly for the rows.
So, I am struggling to understand why I am getting this assertion error from OpenCV when accessing a pointer in the next col/row of an image. Let me tell you what is happening and provide some code.
I am taking a ROI from the image, which is a cv::Mat, or rather a header to a section of a bigger cv::Mat.
I constructed some pointers to access the values of my ROI. Let's say my ROI is a 3x3 Mat filled with pixel values, with the following layout (index starting at 0,0):
| 1 | 2 | 3 |
| 4 | 5 | 6 |
| 7 | 8 | 9 |
First of all I need to initialize my pointers to point to their respective positions. I used the ptr function of cv::Mat and their location in the grid via cv::Point.
Problem faced:
When I try to access the pixel of the next neighbor, I get an assertion error.
My diagnostics:
I thought it might be a range problem, but I made sure that wouldn't be the case by defining the for-loop conditions according to my dimensions.
Maybe the item I am trying to access doesn't exist, but as I understand it, when I go through the ROI I already have the values in a new matrix and should be able to access all values around my desired pixel.
PART OF THE CODE:
cv::Mat ROI = disTrafo(cv::Rect(cv::Point(x,y), cv::Size(3,3)));
cv::minMaxLoc(ROI, &minVal, &maxVal, &minCoord, &maxCoord);
auto* maxPtr_x = &maxCoord.x;
auto* maxPtr_y = &maxCoord.y;
auto* maxPtr_value = &maxVal;
uchar diff1 = 0;
uchar diff2 = 0;
uchar diff3 = 0;
uchar diff4 = 0;
uchar max_diff = 0;
for (int j = 1; j < ROI.rows; j++) {
    auto current = ROI.ptr<uchar>(maxCoord.y);
    auto neighbor_down = ROI.ptr<uchar>(maxCoord.y+1); // THE PROB IS HERE according to debugging
    auto neighbor_up = ROI.ptr<uchar>(maxCoord.y-1);
    cv::Point poi; // point of interest
    for (int i = 0; i < ROI.cols; i++) {
        switch (maxCoord.x) { // PROOF FOR LOGIC
        case 0:
            if (maxCoord.y == 0) { // another switch statement maybe ??
                diff1 = std::abs(current[maxCoord.x+1] - current[maxCoord.x]);
                diff2 = std::abs(neighbor_down[maxCoord.x] - current[maxCoord.x]);
                if (diff2 > diff1) {
                    cv::Point(maxCoord.x, maxCoord.y+1) = poi;
                } else {
                    cv::Point(maxCoord.x+1, maxCoord.y) = poi;
                }
            };
ASSERTION FAILED when running it: OpenCV Error: Assertion failed (y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0])) in cv::Mat::ptr, file mat.hpp (path to the header file), line 428
I can't put my finger on the problem; could you please be of assistance? And please share some knowledge about working with pointers and pixels, in case I have misunderstood something.
Thank you
Try this:
for (int j = 1; j < ROI.rows; j++) {
    auto current = ROI.ptr<uchar>(maxCoord.y);
    auto neighbor_down = ROI.ptr<uchar>(maxCoord.y+1); // THE PROB IS HERE according to debugging
    auto neighbor_up = ROI.ptr<uchar>(maxCoord.y-1);
    cv::Point poi; // point of interest
    cv::Point bordess(Point(0,0));
    for (int i = 0; i < ROI.cols; i++) {
        switch (maxCoord.x) { // PROOF FOR LOGIC
        case 0:
            if (maxCoord.y == 0) { // another switch statement maybe ??
                diff1 = std::abs(current[maxCoord.x+1] - current[maxCoord.x]);
                diff2 = std::abs(neighbor_down[maxCoord.x] - current[maxCoord.x]);
                if (diff2 > diff1) {
                    cv::Point(maxCoord.x, maxCoord.y+1) = poi & bordess;
                } else {
                    cv::Point(maxCoord.x+1, maxCoord.y) = poi & bordess;
                }
            };
Ok, so basically I figured out that my pointer definition was wrong due to the nature of the input image. I had done some preprocessing on the image, and the range of the values inside changed from uchar to some other type. When I changed, for example, auto neighbor_down = ROI.ptr<uchar>(maxCoord.y+1); to auto neighbor_down = ROI.ptr<float>(maxCoord.y+1);, everything ran normally.