For example, in Visual Studio there is the Chart control (tChart) with its Series property, which is responsible for drawing the lines of the graph. Here is an example of that code:
for (int j = 1; j < Nt - 1; j++)
{
    for (int i = 1; i < Nt - 1; i++)
    {
        chart2->Series["" + (j + 1).ToString()]->Points->AddXY(i, wht[j][i]);
    }
}
This draws the graph with many lines.
But my task is to port this to Qt Creator (because Qt Creator offers many more possibilities).
This code:
void MainWindow::drawdifnet(int Nt)
{
    int N=Nt;
    int N1=pow(N,2);
    QVector<double> x22(N), y22(N1);
    int ii=0,jj=0;
    for (int j = 0; j < Nt ; j++)
    {
        for (int i = 0; i < Nt ; i++)
        {
            x22[jj]=i;
            y22[ii]=wht[j][i];
            ui->widget_2->addGraph();
            ui->widget_2->graph(0)->setData(x22,y22);
            ii++;
        }
        jj++;
    }
    ui->widget_2->xAxis->setLabel("OsX");
    ui->widget_2->yAxis->setLabel("OsY");
    ui->widget_2->xAxis->setRange(30,30);
    ui->widget_2->replot();
}
It doesn't work correctly.
The result is an empty widget.
First I checked the QVector data with the help of the debugger.
In these pictures you can see that my dynamic array wht[j][i] is filled correctly and loaded into the QVector yy[ii].
I think the problem is in the loop.
In the QCustomPlot tutorial this problem is solved with this code:
ui->widget_2->graph(0)->setData(x,y);
ui->widget_2->graph(1)->setData(x11,y11);
ui->widget_2->graph(2)->setData(x22,y22);
But in my situation the number of lines is only known while the program is running.
This is how I create and assign my array:
void created(int Nt, int Nx) /// This function creates my dynamic array
{
    wht = new double *[Nt];
    for (int i = 0; i < Nt; i++)
        wht[i] = new double[Nx];
}
inline double fn(int T, double x) /// these 4 functions are for the mathematical part (they work correctly)
{
    if (x >= 0)
        return T;
    return 0;
}
inline double u0(int T, double x)
{
    return fn(T, x);
}
inline double u1(int T, double a, int xmin, double t)
{
    return fn(T, xmin - a * t);
}
inline double u2(int T, double a, int xmax, double t)
{
    return fn(T, xmax - a * t);
}
void calculatedifnet(int xmin, double hx, double ht, double a, int Nx, int Nt, int T)
// This is the main function. We take the empty array and fill it here. Then we solve in the main loop and fill wht[j] row by row.
{
    for (int i = 0; i < Nt; i++)
    {
        wht[0][i] = u0(T, xmin + i*hx); // fill the first row wht[0][i]
    }
    for (int j = 0; j < Nt - 1; j++) // the calculation code (works correctly); the result is written into wht[j + 1]
    {
        wht[j + 1][0] = u1(T, a, xmin, j*ht);
        for (int i = 1; i < Nt; i++)
        {
            double dudx = (wht[j][i] - wht[j][i - 1]) / hx;
            wht[j + 1][i] = -a * dudx * ht + wht[j][i];
        }
    }
}
In your code there are the following errors:
If we observe that x is a constant vector from 0 to Nt-1, we only have to create it once:
QVector<double> x(Nt);
for (int i = 0; i < Nt ; i++)
    x[i]=i; // 0 to Nt-1
addGraph() adds a graph and places it at the last position; if you want to set its data you must access it by the last index, not by index 0:
ui->widget_2->addGraph()->setData(xx, yy);
Assuming that wht is of type QVector<QVector<double>> and of size Nt x Nt, it is not necessary to access each element; we can pass each QVector<double> directly, since setData() accepts this type as input. setData() requires two vectors of the same size, but you were passing vectors of size Nt and Nt*Nt, which generates a warning:
ui->widget_2->addGraph()->setData(x, wht[j]);
setRange() sets the range from a to b, but if both values are the same QCustomPlot can never fit the range; for my test I set it as follows:
ui->widget_2->xAxis->setRange(0,Nt);
ui->widget_2->yAxis->setRange(0,Nt*Nt);
In short the code would be as follows:
void MainWindow::drawdifnet(int Nt){
    QVector<double> x(Nt);
    for (int i = 0; i < Nt ; i++)
        x[i]=i; // 0 to Nt-1
    for (int j = 0; j < Nt ; j++)
        ui->widget_2->addGraph()->setData(x, wht[j]);
    /* if c++11
    for (auto& row: wht)
        ui->widget_2->addGraph()->setData(x, row);
    */
    ui->widget_2->xAxis->setLabel("OsX");
    ui->widget_2->yAxis->setLabel("OsY");
    ui->widget_2->xAxis->setRange(0,Nt);
    ui->widget_2->yAxis->setRange(0,Nt*Nt);
    ui->widget_2->replot();
}
Output:
Note: For the test wht[i][j] = i*j
In your case wht is a variable of type double**; also, assuming that Nx >= Nt, you must use the following code:
void MainWindow::drawdifnet(int Nt)
{
    QVector<double> x(Nt);
    for (int i = 0; i < Nt ; i++){
        x[i]=i; // 0 to Nt-1
    }
    QVector<double> y(Nt);
    for(int i=0; i<Nt; i++){
        for(int j=0; j<Nt; j++){
            y[j] = wht[i][j];
        }
        ui->widget_2->addGraph()->setData(x, y);
    }
    ui->widget_2->xAxis->setLabel("OsX");
    ui->widget_2->yAxis->setLabel("OsY");
    ui->widget_2->xAxis->setRange(0,12);
    ui->widget_2->yAxis->setRange(0,3.5);
    ui->widget_2->replot();
}
Input:
created(12, 12);
calculatedifnet(1, .5, .5, 0.9, 12, 12, 3);
Output:
Related
I am attempting to write a naive implementation of the Short-Time Fourier Transform using consecutive FFT frames in time, calculated using the FFTW library, but I am getting a Segmentation fault and cannot work out why.
My code is as below:
// load in audio
AudioFile<double> audioFile;
audioFile.load ("assets/example-audio/file_example_WAV_1MG.wav");
int N = audioFile.getNumSamplesPerChannel();
// make stereo audio mono
double fileDataMono[N];
if (audioFile.isStereo())
for (int i = 0; i < N; i++)
fileDataMono[i] = ( audioFile.samples[0][i] + audioFile.samples[1][i] ) / 2;
// setup stft
// (test transform, presently unoptimized)
int stepSize = 512;
int M = 2048; // fft size
int noOfFrames = (N-(M-stepSize))/stepSize;
// create Hamming window vector
double w[M];
for (int m = 0; m < M; m++) {
w[m] = 0.53836 - 0.46164 * cos( 2*M_PI*m / M );
}
double* input;
// (pads input array if necessary)
if ( (N-(M-stepSize))%stepSize != 0) {
noOfFrames += 1;
int amountOfZeroPadding = stepSize - (N-(M-stepSize))%stepSize;
double ipt[N + amountOfZeroPadding];
for (int i = 0; i < N; i++) // copy values from fileDataMono into input
ipt[i] = fileDataMono[i];
for (int i = 0; i < amountOfZeroPadding; i++)
ipt[N + i] = 0;
input = ipt;
} else {
input = fileDataMono;
}
// compute stft
fftw_complex* stft[noOfFrames];
double frames[noOfFrames][M];
fftw_plan fftPlan;
for (int i = 0; i < noOfFrames; i++) {
stft[i] = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * M);
for (int m = 0; m < M; m++)
frames[i][m] = input[i*stepSize + m] * w[m];
fftPlan = fftw_plan_dft_r2c_1d(M, frames[i], stft[i], FFTW_ESTIMATE);
fftw_execute(fftPlan);
}
// compute istft
double* outputFrames[noOfFrames];
double output[N];
for (int i = 0; i < noOfFrames; i++) {
outputFrames[i] = (double*)fftw_malloc(sizeof(double) * M);
fftPlan = fftw_plan_dft_c2r_1d(M, stft[i], outputFrames[i], FFTW_ESTIMATE);
fftw_execute(fftPlan);
for (int m = 0; i < M; m++) {
output[i*stepSize + m] += outputFrames[i][m];
}
}
fftw_destroy_plan(fftPlan);
for (int i = 0; i < noOfFrames; i++) {
fftw_free(stft[i]);
fftw_free(outputFrames[i]);
}
// output audio
AudioFile<double>::AudioBuffer outputBuffer;
outputBuffer.resize (1);
outputBuffer[0].resize(N);
outputBuffer[0].assign(output, output+N);
bool ok = audioFile.setAudioBuffer(outputBuffer);
audioFile.setAudioBufferSize (1, N);
audioFile.setBitDepth (16);
audioFile.setSampleRate (8000);
audioFile.save ("out/audioOutput.wav");
The segfault seems to be raised by the first fftw_malloc call when computing the forward STFT.
Thanks in advance!
The relevant bit of code is:
double* input;
if ( (N-(M-stepSize))%stepSize != 0) {
    double ipt[N + amountOfZeroPadding];
    //...
    input = ipt;
}
//...
input[i*stepSize + m];
Your input pointer points at memory that exists only inside the if statement. The closing brace denotes the end of the lifetime of the ipt array. When dereferencing the pointer later, you are addressing memory that no longer exists.
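One way around this, for example, is to give the (possibly zero-padded) input its own storage that outlives the branch, such as a std::vector<double>. A minimal sketch, not the asker's exact code, assuming the same N, M and stepSize values and a pointer to the mono samples:
#include <vector>

// Copy the mono samples into a vector and zero-pad it to a whole number of
// frames; the returned vector owns its storage, so indexing it later is safe.
std::vector<double> makePaddedInput(const double* mono, int N, int M, int stepSize)
{
    std::vector<double> input(mono, mono + N);
    int remainder = (N - (M - stepSize)) % stepSize;
    if (remainder != 0)
        input.resize(N + (stepSize - remainder), 0.0);  // pad with zeros
    return input;
}
The returned vector can then be indexed as input[i*stepSize + m] throughout the STFT loop without the lifetime problem.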
I'm trying to normalize the data for a polynomial interpolation with a perceptron, and I'm using the following formula:
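Z = (xi - x̄) / s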
Where:
xi is a data point (x1, x2…xn).
x̄ is the sample mean.
s is the sample standard deviation.
and Z is my new value of input for the perceptron.
I'm programming in C++ and plotting the graph with freeglut.
My function for normalization:
vector<double> Perceptron::normalizar(double x) {
    vector<double> aux;
    aux.push_back(1.0);
    for (unsigned i = 1; i < weights.size(); i++) {
        double t = (pow(x, i) - means[i]) / deviations[i];
        aux.push_back(t);
    }
    return aux;
}
The problem is: before I did the normalization, the polynomial was converging to the points.
But after normalization, the polynomial is converging to other points, and I do not know where it is converging.
The formula for the polynomial would be as follows (with W being the weights of the perceptron):
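y(x) = w0 + w1*x + w2*x^2 + ... + wn*x^n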
So I used a mean formula for each value of x.
See the code:
void Perceptron::mean(Points P) { //P is a struct with all x and y values of the points.
means.clear(); //vector that stores the means
for (unsigned i = 0; i < weights.size(); i++) {
double m = 0;
for (unsigned j = 0; j < P.size(); j++) {
m += pow(P[i].x, i);
}
means.push_back(m / P.size());
}
}
void Perceptron::deviation(Points P) {
deviations.clear(); //vector that stores the deviations
for (unsigned i = 0; i < weights.size(); i++) {
double sd = 0;
for (unsigned j = 0; j < P.size(); j++) {
sd += pow(pow(P[j].x, i) - means[i], 2);
}
deviations.push_back(sqrt(sd / P.size()));
}
}
My code seems to have a bug somewhere but I just can't catch it. I'm passing a 2d array to three sequential functions. First function populates it, second function modifies the values to 1's and 0's, the third function counts the 1's and 0's. I can access the array easily inside the first two functions, but I get an access violation at the first iteration of the third one.
Main
text_image_data = new int*[img_height];
for (i = 0; i < img_height; i++) {
text_image_data[i] = new int[img_width];
}
cav_length = new int[numb_of_files];
// Start processing - load each image and find max cavity length
for (proc = 0; proc < numb_of_files; proc++)
{
readImage(filles[proc], text_image_data, img_height, img_width);
threshold = makeBinary(text_image_data, img_height, img_width);
cav_length[proc] = measureCavity(bullet[0], img_width, bullet[1], img_height, text_image_data);
}
Functions
int makeBinary(int** img, int height, int width)
{
int threshold = 0;
unsigned long int sum = 0;
for (int k = 0; k < width; k++)
{
sum = sum + img[1][k] + img[2][k] + img[3][k] + img[4][k] + img[5][k];
}
threshold = sum / (width * 5);
for (int i = 0; i < height; i++)
{
for (int j = 0; j < width; j++)
{
img[i][j] = img[i][j] > threshold ? 1 : 0;
}
}
return threshold;
}
// Count pixels - find length of cavity here
int measureCavity(int &x, int& width, int &y, int &height, int **img)
{
double mean = 1.;
int maxcount = 0;
int pxcount = 0;
int i = x - 1;
int j;
int pxsum = 0;
for (j = 0; j < height - 2; j++)
{
while (mean > 0.0)
{
for (int ii = i; ii > i - 4; ii--)
{
pxsum = pxsum + img[ii][j] + img[ii][j + 1];
}
mean = pxsum / 4.;
pxcount += 2;
i += 2;
pxsum = 0;
}
maxcount = std::max(maxcount, pxcount);
pxcount = 0;
j++;
}
return maxcount;
}
I keep getting an access violation in the measureCavity() function. I'm passing and accessing the array text_image_data the same way as in makeBinary() and readImage(), and it works just fine for those functions. The size is [550][70]; I'm getting the error when trying to access [327][0].
Is there a better, more reliable way to pass this array between the functions?
I am new to C++ and I am using the Eigen library. I was wondering if there was a way to sum certain elements in a vector. For example, say I have a vector that is a 100 by 1 and I just want to sum the first 10 elements. Is there a way of doing that using the Eigen library?
What I am trying to do is this: say I have a vector that is 1000 by 1 and I want to take the mean of the first 10 elements, then the next 10 elements, and so on and store that in some vector. Hence I will have a vector of size 100 of the averages. Any thoughts or suggestions are greatly appreciated.
Here are the beginning steps I have in my code. I have an S_temp4 vector that is 1000 by 1. Now I initialize a new vector S_A that I want to hold the means. Here is my messy, sloppy code so far (note that my question resides in the crudeMonteCarlo function):
#include <iostream>
#include <cmath>
#include <math.h>
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <random>
#include <time.h>
using namespace Eigen;
using namespace std;
void crudeMonteCarlo(int N,double K, double r, double S0, double sigma, double T, int n);
VectorXd time_vector(double min, double max, int n);
VectorXd call_payoff(VectorXd S, double K);
int main(){
int N = 100;
double K = 100;
double r = 0.2;
double S0 = 100;
double sigma = 0.4;
double T = 0.1;
int n = 10;
crudeMonteCarlo(N,K,r,S0,sigma,T,n);
return 0;
}
VectorXd time_vector(double min, double max, int n){
VectorXd m(n + 1);
double delta = (max-min)/n;
for(int i = 0; i <= n; i++){
m(i) = min + i*delta;
}
return m;
}
MatrixXd generateGaussianNoise(int M, int N){
MatrixXd Z(M,N);
static random_device rd;
static mt19937 e2(time(0));
normal_distribution<double> dist(0.0, 1.0);
for(int i = 0; i < M; i++){
for(int j = 0; j < N; j++){
Z(i,j) = dist(e2);
}
}
return Z;
}
VectorXd call_payoff(VectorXd S, double K){
VectorXd C(S.size());
for(int i = 0; i < S.size(); i++){
if(S(i) - K > 0){
C(i) = S(i) - K;
}else{
C(i) = 0.0;
}
}
return C;
}
void crudeMonteCarlo(int N,double K, double r, double S0, double sigma, double T, int n){
// Create time vector
VectorXd tt = time_vector(0.0,T,n);
VectorXd t(n);
double dt = T/n;
for(int i = 0; i < n; i++){
t(i) = tt(i+1);
}
// Generate standard normal Z matrix
//MatrixXd Z = generateGaussianNoise(N,n);
// Generate the log normal stock process N times to get S_A for crude Monte Carlo
MatrixXd SS(N,n+1);
MatrixXd Z = generateGaussianNoise(N,n);
for(int i = 0; i < N; i++){
SS(i,0) = S0;
for(int j = 1; j <= n; j++){
SS(i,j) = SS(i,j-1)*exp((double) (r - pow(sigma,2.0))*dt + sigma*sqrt(dt)*(double)Z(i,j-1));
}
}
// This long bit of code gives me my S_A.....
Map<RowVectorXd> S_temp1(SS.data(), SS.size());
VectorXd S_temp2(S_temp1.size());
for(int i = 0; i < S_temp2.size(); i++){
S_temp2(i) = S_temp1(i);
}
VectorXd S_temp3(S_temp2.size() - N);
int count = 0;
for(int i = N; i < S_temp2.size(); i++){
S_temp3(count) = S_temp2(i);
count++;
}
VectorXd S_temp4(S_temp3.size());
for(int i = 0; i < S_temp4.size(); i++){
S_temp4(i) = S_temp3(i);
}
VectorXd S_A(N);
S_A(0) = (S_temp4(0) + S_temp4(1) + S_temp4(2) + S_temp4(3) + S_temp4(4) + S_temp4(5) + S_temp4(6) + S_temp4(7) + S_temp4(8) + S_temp4(9))/(n);
S_A(1) = (S_temp4(10) + S_temp4(11) + S_temp4(12) + S_temp4(13) + S_temp4(14) + S_temp4(15) + S_temp4(16) + S_temp4(17) + S_temp4(18) + S_temp4(19))/(n);
int count1 = 0;
for(int i = 0; i < S_temp4.size(); i++){
S_A(count1) =
}
// Calculate payoff of Asian option
//VectorXd call_fun = call_payoff(S_A,K);
}
This question includes a lot of code, which makes it hard to understand the question you're trying to ask. Consider including only the code specific to your question.
In any case, you can use Eigen directly to do all of these things quite simply. In Eigen, Vectors are just matrices with 1 column, so all of the reasoning here is directly applicable to what you've written.
const Eigen::Matrix<double, 100, 1> v = Eigen::Matrix<double, 100, 1>::Random();
const int num_rows = 10;
const int num_cols = 1;
const int starting_row = 0;
const int starting_col = 0;
const double sum_of_first_ten = v.block(starting_row, starting_col, num_rows, num_cols).sum();
const double mean_of_first_ten = sum_of_first_ten / num_rows;
In summary: You can use .block to get a block object, .sum() to sum that block, and then conventional division to get the mean.
You can reshape the input using Map and then do all sub-summations at once without any loop:
VectorXd A(1000); // input
Map<MatrixXd> B(A.data(), 10, A.size()/10); // reshaped version, no copy
VectorXd res = B.colwise().mean(); // partial reduction, you can also use .sum(), .minCoeff(), etc.
The Eigen documentation at https://eigen.tuxfamily.org/dox/group__TutorialBlockOperations.html says an Eigen block is a rectangular part of a matrix or array accessed by matrix.block(i,j,p,q), where i and j are the starting values (e.g. 0 and 0) and p and q are the block size (e.g. 10 and 1). Presumably you would then iterate i in steps of 10 and use std::accumulate or perhaps an explicit summation to find the mean of matrix.block(i,0,10,1), as sketched below.
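A minimal sketch of that loop (illustrative names only; it assumes a 1000-element input grouped into blocks of 10, as in the question):
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int groupSize = 10;
    Eigen::VectorXd v = Eigen::VectorXd::Random(1000); // stands in for S_temp4
    Eigen::VectorXd means(v.size() / groupSize);       // stands in for S_A
    for (int i = 0; i < means.size(); ++i) {
        // each block is 10 rows by 1 column, starting at row i*groupSize
        means(i) = v.block(i * groupSize, 0, groupSize, 1).mean();
    }
    std::cout << means.head(5) << std::endl; // show the first few averages
    return 0;
}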
I had a previous question about a stack overflow error and switched to vectors for my arrays of objects. That question can be referenced here if needed: How to get rid of stack overflow error
My current question, however, is how do I speed up the initialization of the vectors? My current method takes ~15 seconds. Using arrays instead of vectors, it took about a second with array sizes small enough not to throw the stack overflow error.
Here is how I am initializing it:
in main.cpp I initialize my dungeon object:
dungeon = Dungeon(0, &textureHandler, MIN_X, MAX_Y);
in my dungeon(...) constructor, I initialize my 5x5 vector of rooms and call loadDungeon:
Dungeon::Dungeon(int dungeonID, TextureHandler* textureHandler, int topLeftX, int topLeftY)
{
currentRoomRow = 0;
currentRoomCol = 0;
for (int r = 0; r < MAX_RM_ROWS; ++r)
{
rooms.push_back(vector<Room>());
for (int c = 0; c < MAX_RM_COLS; ++c)
{
rooms[r].push_back(Room());
}
}
loadDungeon(dungeonID, textureHandler, topLeftX, topLeftY);
}
my Room constructor populates my 30x50 vector of cells (so I can set them up in the loadDungeon function):
Room::Room()
{
for (int r = 0; r < MAX_ROWS; ++r)
{
cells.push_back(vector<Cell>());
for (int c = 0; c < MAX_COLS; ++c)
{
cells[r].push_back(Cell());
}
}
}
My default cell constructor is simple and isn't doing much but I'll post it anyway:
Cell::Cell()
{
x = 0;
y = 0;
width = 16;
height = 16;
solid = false;
texCoords.push_back(0);
texCoords.push_back(0);
texCoords.push_back(1);
texCoords.push_back(0);
texCoords.push_back(1);
texCoords.push_back(1);
texCoords.push_back(0);
texCoords.push_back(1);
}
And lastly my loadDungeon() function will set up the cells. Eventually this will read from a file and load the cells up but for now I would like to optimize this a bit if possible.
void Dungeon::loadDungeon(int dungeonID, TextureHandler* textureHandler, int topLeftX, int topLeftY)
{
int startX = topLeftX + (textureHandler->getSpriteWidth()/2);
int startY = topLeftY - (textureHandler->getSpriteHeight()/2);
int xOffset = 0;
int yOffset = 0;
for (int r = 0; r < MAX_RM_ROWS; ++r)
{
for (int c = 0; c < MAX_RM_COLS; ++c)
{
for (int cellRow = 0; cellRow < rooms[r][c].getMaxRows(); ++cellRow)
{
xOffset = 0;
for (int cellCol = 0; cellCol < rooms[r][c].getMaxCols(); ++cellCol)
{
rooms[r][c].setupCell(cellRow, cellCol, startX + xOffset, startY - yOffset, textureHandler->getSpriteWidth(), textureHandler->getSpriteHeight(), false, textureHandler->getSpriteTexCoords("grass"));
xOffset += textureHandler->getSpriteWidth();
}
yOffset += textureHandler->getSpriteHeight();
}
}
}
currentDungeon = dungeonID;
currentRoomRow = 0;
currentRoomCol = 0;
}
So how can I speed this up so it doesn't take ~15 seconds to load every time? I feel like it shouldn't take 15 seconds to load a simple 2D game.
SOLUTION
Well, my solution was to use a std::vector::reserve call (rooms.reserve in my code) and it ended up working well. I changed my function Dungeon::loadDungeon to Dungeon::loadDefaultDungeon because it now loads from a save file.
Anyway here is the code (I got it down to about 4-5 seconds from ~15+ seconds in debug mode):
Dungeon::Dungeon()
{
rooms.reserve(MAX_RM_ROWS * MAX_RM_COLS);
currentDungeon = 0;
currentRoomRow = 0;
currentRoomCol = 0;
}
void Dungeon::loadDefaultDungeon(TextureHandler* textureHandler, int topLeftX, int topLeftY)
{
int startX = topLeftX + (textureHandler->getSpriteWidth()/2);
int startY = topLeftY - (textureHandler->getSpriteHeight()/2);
int xOffset = 0;
int yOffset = 0;
cerr << "Loading default dungeon..." << endl;
for (int roomRow = 0; roomRow < MAX_RM_ROWS; ++roomRow)
{
for (int roomCol = 0; roomCol < MAX_RM_COLS; ++roomCol)
{
rooms.push_back(Room());
int curRoom = roomRow * MAX_RM_COLS + roomCol;
for (int cellRow = 0; cellRow < rooms[curRoom].getMaxRows(); ++cellRow)
{
for (int cellCol = 0; cellCol < rooms[curRoom].getMaxCols(); ++cellCol)
{
rooms[curRoom].setupCell(cellRow, cellCol, startX + xOffset, startY - yOffset, textureHandler->getSpriteWidth(), textureHandler->getSpriteHeight(), false, textureHandler->getSpriteTexCoords("default"), "default");
xOffset += textureHandler->getSpriteWidth();
}
yOffset += textureHandler->getSpriteHeight();
xOffset = 0;
}
cerr << " room " << curRoom << " complete" << endl;
}
}
cerr << "default dungeon loaded" << endl;
}
Room::Room()
{
cells.reserve(MAX_ROWS * MAX_COLS);
for (int r = 0; r < MAX_ROWS; ++r)
{
for (int c = 0; c < MAX_COLS; ++c)
{
cells.push_back(Cell());
}
}
}
void Room::setupCell(int row, int col, float x, float y, float width, float height, bool solid, /*std::array<float, 8>*/ vector<float> texCoords, string texName)
{
cells[row * MAX_COLS + col].setup(x, y, width, height, solid, texCoords, texName);
}
void Cell::setup(float x, float y, float width, float height, bool solid, /*std::array<float,8>*/ vector<float> t, string texName)
{
this->x = x;
this->y = y;
this->width = width;
this->height = height;
this->solid = solid;
for (int i = 0; i < t.size(); ++i)
this->texCoords.push_back(t[i]);
this->texName = texName;
}
It seems wasteful to have so many dynamic allocations. You can get away with one single allocation by flattening out your vector and accessing it in strides:
std::vector<Room> rooms;
rooms.resize(MAX_RM_ROWS * MAX_RM_COLS);
for (unsigned int i = 0; i != MAX_RM_ROWS; ++i)
{
for (unsigned int j = 0; j != MAX_RM_COLS; ++j)
{
Room & r = rooms[i * MAX_RM_COLS + j];
// use `r` ^^^^^^^^^^^^^^^^^^^-----<< strides!
}
}
Note how resize is performed exactly once, incurring only one single allocation, as well as default-constructing each element. If you'd rather construct each element specifically, use rooms.reserve(MAX_RM_ROWS * MAX_RM_COLS); instead and populate the vector in the loop.
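For example, a minimal sketch of that reserve-based variant (assuming Room is default-constructible, as in the question):
std::vector<Room> rooms;
rooms.reserve(MAX_RM_ROWS * MAX_RM_COLS); // one allocation, but no elements constructed yet
for (unsigned int i = 0; i != MAX_RM_ROWS; ++i)
{
    for (unsigned int j = 0; j != MAX_RM_COLS; ++j)
    {
        rooms.push_back(Room()); // construct each element explicitly
    }
}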
You may also wish to profile with rows and columns swapped and see which is faster.
Since it seems that your vectors have their size defined at compile time, if you can use C++11, you may consider using std::array instead of std::vector. std::array cannot be resized and lacks many of the operations in std::vector, but is much more lightweight and it seems a good fit for what you are doing.
As an example, you could declare cells as:
#include <array>
/* ... */
std::array<std::array<Cell, MAX_COLS>, MAX_ROWS> cells;
UPDATE: since a locally defined std::array allocates its internal array on the stack, the OP will experience a stack overflow due to the considerably large size of the arrays. Still, it is possible to use a std::array (and keep its benefits over std::vector) by allocating it on the heap. That can be done with something like:
typedef std::array<std::array<Cell, MAX_COLS>, MAX_ROWS> Map;
Map* cells;
/* ... */
cells = new Map();
Even better, smart pointers can be used:
#include <memory>
/* ... */
std::unique_ptr<Map> cells;
cells = std::unique_ptr<Map>(new Map());