How do I change this variable into an array? (C++)

I am writing C++ code for a game in which a bucket controlled by the user collects raindrops that all have the same radius. I want to use an array so that each of the 16 raindrops gets a different size (radius), but I have no clue how to change the variable into an array.
I am given a variable:
int radius = randomBetween( MARGIN / 4, MARGIN / 2 );

Here is an example that uses actual C++.
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

// MARGIN is the constant from the game code in the question.
std::mt19937 prng(std::random_device{}());                    // seeded engine
std::uniform_int_distribution<> dist(MARGIN / 4, MARGIN / 2); // same range as randomBetween
std::vector<int> radii(16);                                   // one radius per raindrop
std::generate(radii.begin(), radii.end(), std::bind(dist, std::ref(prng)));
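For completeness, here is a self-contained version you can compile and run on its own; the MARGIN value below is only a stand-in (the real one comes from the game code), and the radii are simply printed:
#include <algorithm>
#include <functional>
#include <iostream>
#include <random>
#include <vector>

constexpr int MARGIN = 40; // stand-in value for this sketch; use the game's MARGIN

int main() {
    std::mt19937 prng(std::random_device{}());
    std::uniform_int_distribution<> dist(MARGIN / 4, MARGIN / 2);
    std::vector<int> radii(16);
    std::generate(radii.begin(), radii.end(), std::bind(dist, std::ref(prng)));

    for (std::size_t i = 0; i < radii.size(); ++i)
        std::cout << "raindrop " << i << " radius: " << radii[i] << '\n';
}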

You're probably going to want to use floats, but basically if I understand you correctly...
#include <cmath>   // fmodf
#include <cstdlib> // rand

int size_in_elements = 16;
float *a = new float[size_in_elements];
float maxvalue = 100.0f; // this will be the maximum value to assign to each element
for (int i = 0; i < size_in_elements; i++)
{
    a[i] = fmodf((float)rand(), maxvalue); // pseudo-random value in [0, maxvalue)
}
// ... use the array here ...
delete[] a; // Don't forget the brackets here... delete[] is used for deleting arrays.
Hope I helped some

Related

Iterating the creation of objects in C++

I want to be able to create N skyscrapers. Using the input data, I would like to give them coordinate values for their X and Y positions. In my main function I used "i" to show that I am trying to create as many skyscrapers as the input data allows. Essentially, I would like to create N/3 skyscrapers and assign the input values to the coordinates of each one.
#include <iostream>
#include <vector>
#include <string>
#include <math.h>

using namespace std;

vector<int> inputData = {1, 4, 10, 3, 5, 7, 9, 10, 4, 11, 3, 2, 14, 5, 5};
int N = inputData.size();

class Buildings {
public:
    int yCoordinateLow;
    int yCoordinateHigh;
    int xCoordinateLeft;
    int xCoordinateRight;
};

int main() {
    for (int i = 0; i < N; i = i + 3) {
        Buildings skyscraper;
        skyscraper.xCoordinateLeft = inputData.at(i);
        skyscraper.yCoordinateLow = 0;
        skyscraper.yCoordinateHigh = inputData.at(i + 1);
        skyscraper.xCoordinateRight = inputData.at(i + 2);
    }
    return 0;
}
Jeff Atwood once said: use the best tools money can buy. And those aren't even expensive: the Visual Studio Community edition is free. Such a proper IDE will tell you that skyscraper is unused except for the assignments.
Since you probably want to do something with those skyscrapers later, you should store them somewhere, e.g. in another vector.
int main() {
    vector<Buildings> skyscrapers;
    for (int i = 0; i < N; i = i + 3) {
        Buildings skyscraper{};
        skyscraper.xCoordinateLeft = inputData.at(i);
        skyscraper.yCoordinateLow = 0;
        skyscraper.yCoordinateHigh = inputData.at(i + 1);
        skyscraper.xCoordinateRight = inputData.at(i + 2);
        skyscrapers.push_back(skyscraper);
    }
    return 0;
}
Other than that, I'd say the loop works fine as long as the original vector contains three coordinates per skyscraper, i.e. its size is a multiple of 3.
If you e.g. implement a game, you would probably not hard code the skyscraper coordinates in a vector but rather read that data from a file, potentially per level.
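For illustration, a minimal sketch of that idea, reusing the Buildings class from above; the file name skyscrapers.txt and the one-skyscraper-per-line layout are assumptions, not part of the question:
#include <fstream>
#include <string>
#include <vector>

// Reads triples "xLeft yHigh xRight", one skyscraper per line; yCoordinateLow stays 0.
std::vector<Buildings> loadSkyscrapers(const std::string& path) {
    std::vector<Buildings> skyscrapers;
    std::ifstream in(path);
    Buildings b{};
    while (in >> b.xCoordinateLeft >> b.yCoordinateHigh >> b.xCoordinateRight) {
        b.yCoordinateLow = 0;
        skyscrapers.push_back(b);
    }
    return skyscrapers;
}
// Usage: auto skyscrapers = loadSkyscrapers("skyscrapers.txt");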
Instead of doing all the error-prone index arithmetic, maybe you want to initialize the skyscrapers immediately. Note that the braced values must follow the member order of the class (yCoordinateLow, yCoordinateHigh, xCoordinateLeft, xCoordinateRight):
vector<Buildings> skyscrapers = {{0,4,1,10}, {0,5,3,7}, {0,10,9,4}, {0,3,11,2}, {0,5,14,5}};

Butterfly pattern appears in random walk using srand(), why?

About 3 years ago I coded a 2D random walk together with a colleague in C++. At first it seemed to work properly, as we obtained a different pattern each time. But whenever we increased the number of steps above some threshold, an apparent butterfly pattern appeared; we noticed that with each run of the code the pattern would repeat, but starting at a different place on the butterfly. We concluded and reported then that it was due to the pseudorandom generator behind the srand()/rand() functions, but today I found this report again and there are still some things I would like to understand. In particular, I would like to understand how the pseudorandom generator works in order to produce this sort of symmetric, cyclic pattern. The pattern I'm talking about is this (the steps are color coded in a rainbow sequence to appreciate the progression of the walk):
EDIT:
I'm adding the code used to obtain this figure:
#include <iostream>
#include <cmath>
#include <stdlib.h>
#include <time.h>
#include <fstream>
#include <string.h>
#include <string>
#include <iomanip>

using namespace std;

int main()
{
    srand(time(NULL));
    int num1, n = 250000;
    ofstream rnd_coordinates("Random2D.txt");
    float x = 0, y = 0, sumx_f = 0, sumy_f = 0, sum_d = 0, d_m, X, t, d;
    float x_m, y_m;
    x = 0;
    y = 0;
    for (int i = 0; i < n; i++) {
        t = i;
        num1 = rand() % 4;
        if (num1 == 0) {
            x++;
        }
        if (num1 == 1) {
            x--;
        }
        if (num1 == 2) {
            y++;
        }
        if (num1 == 3) {
            y--;
        }
        rnd_coordinates << x << ',' << y << ',' << t << endl;
    }
    rnd_coordinates.close();
    return 0;
}
You never come close to exhausting rand()'s period, but keep in mind that you don't actually use the full output of rand(): taking rand() % 4 keeps only the two lowest bits, and those low bits can repeat far sooner than the generator's full output stream.
With that in mind, you have 2 options:
Use all the bits. rand() returns at least 15 random bits (RAND_MAX is guaranteed to be at least 32767, and is often 2^31 - 1), and you need only 2 bits (for 4 possible values). Split that output into chunks of 2 bits and use them all in sequence.
At the very least, if you insist on using the lazy % n way, choose a modulus that is not a divisor of your period. For example choose 5 instead of 4, since 5 is prime, and if you get the 5th value, reroll (see the sketch below).
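For instance, a minimal sketch of that reroll (rejection) idea; the helper name randomDirection is just for illustration:
#include <cstdlib>

// Returns a value in 0..3, discarding the 5th outcome so that the result
// no longer depends solely on the two lowest bits of rand().
int randomDirection()
{
    int r;
    do {
        r = rand() % 5; // 5 does not evenly divide the generator's range
    } while (r == 4);   // reroll the extra value
    return r;
}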
The code below constitutes a complete, compilable example.
Your issue is with dropping bits from the random generator. Let's see how one could write a source of random bit pairs that doesn't drop any bits. It requires that RAND_MAX is of the form 2^n - 1, but the idea could be extended to support any RAND_MAX >= 3.
#include <cassert>
#include <cstdint>
#include <cstdlib>

class RandomBitSource {
    int64_t bits = rand();
    int64_t bitMask = RAND_MAX;
    // int64_t(RAND_MAX) + 1 avoids signed overflow when RAND_MAX == INT_MAX.
    static_assert(((int64_t(RAND_MAX) + 1) & RAND_MAX) == 0,
                  "No support for RAND_MAX != 2^n - 1");
public:
    auto get2Bits() {
        if (!bitMask) {                // no bits left: refill from rand()
            bits = rand();
            bitMask = RAND_MAX;
        } else if (bitMask == 1) {     // only 1 bit left: append a fresh rand()
            bits = (bits * (int64_t(RAND_MAX) + 1)) | rand();
            bitMask = (int64_t(RAND_MAX) + 1) | RAND_MAX;
        }
        assert(bitMask & 3);
        bitMask >>= 2;
        int result = bits & 3;
        bits >>= 2;
        return result;
    }
};
Then, the random walk implementation could be as follows. Note that the ' digit separator is a C++14 feature - quite handy.
#include <vector>

using num_t = int;

struct Coord { num_t x, y; };

struct Walk {
    std::vector<Coord> points;
    num_t min_x = {}, max_x = {}, min_y = {}, max_y = {};
    Walk(size_t n) : points(n) {}
};

auto makeWalk(size_t n = 250'000)
{
    Walk walk { n };
    RandomBitSource src;
    num_t x = 0, y = 0;
    for (auto& point : walk.points)
    {
        const int bits = src.get2Bits(), b0 = bits & 1, b1 = bits >> 1;
        x = x + (((~b0 & ~b1) & 1) - ((b0 & ~b1) & 1));
        y = y + (((~b0 & b1) & 1) - ((b0 & b1) & 1));
        if (x < walk.min_x)
            walk.min_x = x;
        else if (x > walk.max_x)
            walk.max_x = x;
        if (y < walk.min_y)
            walk.min_y = y;
        else if (y > walk.max_y)
            walk.max_y = y;
        point = { x, y };
    }
    return walk;
}
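For example, a minimal main() in the spirit of the original program, writing the walk to Random2D.txt without any GUI (this assumes the RandomBitSource and makeWalk definitions above):
#include <cstdlib>
#include <ctime>
#include <fstream>

int main()
{
    srand(time(nullptr));     // RandomBitSource draws from rand()
    auto walk = makeWalk();   // 250'000 steps by default
    std::ofstream out("Random2D.txt");
    long step = 0;
    for (auto& p : walk.points)
        out << p.x << ',' << p.y << ',' << step++ << '\n';
}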
With a bit more effort, we can make this into an interactive Qt application. Pressing Return generates a new image.
The image is viewed at the native resolution of the screen it's displayed on, i.e. it maps to physical device pixels. The image is not scaled. Instead, it is rotated when needed to better fit into the screen's orientation (portrait vs landscape). That's for portrait monitor aficionados :)
#include <QtWidgets>

QImage renderWalk(const Walk& walk, Qt::ScreenOrientation orient)
{
    using std::swap;
    auto width = walk.max_x - walk.min_x + 3;
    auto height = walk.max_y - walk.min_y + 3;
    bool const rotated = (width < height) == (orient == Qt::LandscapeOrientation);
    if (rotated) swap(width, height);
    QImage image(width, height, QPixmap(1, 1).toImage().format());
    image.fill(Qt::black);
    QPainter p(&image);
    if (rotated) {
        p.translate(width, 0);
        p.rotate(90);
    }
    p.translate(-walk.min_x, -walk.min_y);
    auto constexpr hueStep = 1.0/720.0;
    qreal hue = 0;
    int const huePeriod = walk.points.size() * hueStep;
    int i = 0;
    for (auto& point : walk.points) {
        if (!i--) {
            p.setPen(QColor::fromHsvF(hue, 1.0, 1.0, 0.5));
            hue += hueStep;
            i = huePeriod;
        }
        p.drawPoint(point.x, point.y);
    }
    return image;
}
#include <ctime>
int main(int argc, char* argv[])
{
    srand(time(NULL));
    QApplication a(argc, argv);
    QLabel view;
    view.setAlignment(Qt::AlignCenter);
    view.setStyleSheet("QLabel {background-color: black;}");
    view.show();
    auto const refresh = [&view] {
        auto *screen = view.screen();
        auto orientation = screen->orientation();
        auto pixmap = QPixmap::fromImage(renderWalk(makeWalk(), orientation));
        pixmap.setDevicePixelRatio(screen->devicePixelRatio());
        view.setPixmap(pixmap);
        view.resize(view.size().expandedTo(pixmap.size()));
    };
    refresh();
    QShortcut enter(Qt::Key_Return, &view);
    enter.setContext(Qt::ApplicationShortcut);
    QObject::connect(&enter, &QShortcut::activated, &view, refresh);
    return a.exec();
}
Every pseudorandom number generator eventually cycles through some fixed sequence of numbers. One of the ways we distinguish "good" PRNGs from "bad" PRNGs is the length of this sequence. There is some state associated with the generator, so the maximum period is bounded by the number of distinct states.
Your implementation has a "short" period, because it repeats in less than the age of the universe. It probably has 32 bits of state, so the period is at most 2^32.
As you are using C++, you can try again using a randomly seeded std::mt19937, and you won't see repeats.
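For instance, a minimal sketch of the step selection using a randomly seeded std::mt19937 in place of rand() % 4 (a drop-in for the loop in the question):
#include <fstream>
#include <random>

int main()
{
    std::mt19937 gen(std::random_device{}());      // non-deterministic seed
    std::uniform_int_distribution<int> dir(0, 3);  // uniform in 0..3, no dropped bits

    std::ofstream out("Random2D.txt");
    int x = 0, y = 0;
    for (int i = 0; i < 250000; ++i) {
        switch (dir(gen)) {
            case 0: ++x; break;
            case 1: --x; break;
            case 2: ++y; break;
            case 3: --y; break;
        }
        out << x << ',' << y << ',' << i << '\n';
    }
}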
You might want to look at my answer to another question here about older rand() implementations. Sometimes with the old rand() and srand() functions, the lower-order bits are much less random than the higher-order bits. Some of these older implementations still persist, and it's possible you used one.

Empty Histogram: Values Possibly Not Recognized?

I have a question on how to fix an empty histogram. I am computing a variable called "mass" whose histogram I want to plot (as can be seen in the code) from the variables in my branches Muon_PT, Muon_Eta, Muon_Phi, and mass. However, I get an empty histogram, and I suspect that the branch variables are not being read. I suspect this because when I write something like:
auto idx_cmb = Combinations(Muon_PT, 2);
I get the error that:
Warning in <TROOT::Append>: Replacing existing TH1: mass (Potential memory leak).
terminate called after throwing an instance of 'std::runtime_error'
what(): Cannot make unique combinations of size 2 from vector of size 0.
When I plot the histogram in the ROOT terminal I do get one; it is only when I put the code in its own script that I am not able to access the branches.
Below is my code:
#include "Math/Vector4Dfwd.h"
#include "ROOT/RDF/RInterface.hxx"
#include "ROOT/RDataFrame.hxx"
#include "ROOT/RVec.hxx"
#include "TCanvas.h"
#include "TH1D.h"
#include "TLatex.h"
#include "TLegend.h"
#include "TStyle.h"
#include <string>
#include <vector>
using namespace ROOT::VecOps;
const auto z_mass = 91.2;
void selectMuon() {
ROOT::EnableImplicitMT();
ROOT::RDataFrame df("Delphes;4", "tag_1_delphes_events.root");
TH1F *histDiMuonMass = new TH1F(
"mass", "M_{inv}(Z[3]Z[5]); M_inv (GeV/c^2); Events", 50, 0.0, 1500);
RVec<float> Muon_PT;
RVec<float> Muon_Eta;
RVec<float> Muon_Phi;
RVec<int> Muon_Charge;
// auto idx_cmb = Combinations(Muon_PT, 2);
for (size_t i = 0; i < Muon_Charge.size(); i++) {
if (Muon_Charge[1] != Muon_Charge[2]) {
ROOT::Math::PtEtaPhiMVector m1((Muon_PT)[1], (Muon_Eta)[1], (Muon_Phi)[1],
0.1);
ROOT::Math::PtEtaPhiMVector m2((Muon_PT)[2], (Muon_Eta)[2], (Muon_Phi)[2],
0.1);
auto mass = (m1 + m2).M();
histDiMuonMass->Fill(mass);
auto df_mass = df.Define("Dimuon_mass", InvariantMass<float>,
{"Muon_PT", "Muon_Eta", "Muon_Phi", "m"});
// Make histogram of dimuon mass spectrum
auto h = df_mass.Histo1D({"Dimuon_mass", "Dimuon_mass", 30000, 0.25, 300},
"Dimuon_mass");
}
} // end of event for loop
histDiMuonMass->Draw();
}
Why isn't it accessing the branch variables Muon_PT, etc.? How can I fix this?

Trying to make a live data grapher with CImg library (C++)

I'm new to CImg. I'm not sure if there's already a live data plotter in the library, but I thought I'd go ahead and make one myself. If what I'm looking for already exists in the library, please point me to the function. Otherwise, here is my super inefficient code that I'm hoping you can help me with:
#include <iostream>
#include "CImg.h"
#include <ctime>
#include <cmath>
using namespace cimg_library;
int main()
{
    CImg<unsigned char> plot(400, 320, 1, 3, 0);
    CImgDisplay graph(plot, "f(x)");
    clock();
    const unsigned char red[] = {255, 0, 0};
    float* G = new float[plot.width()]; // array holding the values that are to be displayed on the graph
    while (1) {
        G[0] = ((plot.height()/4) * sin(clock() / 1000.0)) + plot.height()/2; // new f(t) value
        for (int i = 1; i <= plot.width() - 1; i++) {
            G[plot.width() - i] = G[plot.width() - i - 1]; // basically shift all the array values to current address+1
            plot.draw_point(plot.width() - 3*i, G[i-1], red, 1).display(graph);
        }
        plot.fill(0);
    }
    return 0;
}
Problems:
The grapher traverses right to left very slowly, and I'm not sure how to make a smooth curve, hence I went with points. How do you make a smooth curve?
There is already something for you in the library, the method CImg<T>::draw_graph(), as (briefly) explained here:
http://cimg.eu/reference/structcimg__library_1_1CImg.html#a2e629aadedc4518001f00333f25bfec8
There are a few examples provided with the library that use this method; see the files examples/tutorial.cpp and examples/plotter1d.cpp.
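For a rough idea of what a draw_graph()-based plotter can look like, here is a sketch only (check the reference above for the exact parameter meanings; the plot_type and y-range values below are assumptions), using the same f(t) as in the question:
#include "CImg.h"
#include <cmath>
#include <ctime>
using namespace cimg_library;

int main()
{
    CImg<unsigned char> plot(400, 320, 1, 3, 0);
    CImgDisplay graph(plot, "f(x)");
    const unsigned char red[] = {255, 0, 0};

    CImg<float> values(plot.width()); // one sample per horizontal pixel
    values.fill(0);

    while (!graph.is_closed()) {
        // Shift samples left by one and append the newest value on the right.
        for (int i = 0; i < values.width() - 1; ++i) values(i) = values(i + 1);
        values(values.width() - 1) = std::sin(clock() / 1000.0);

        // draw_graph(data, color, opacity, plot_type, vertex_type, ymin, ymax):
        // plot_type 1 should connect the samples with line segments.
        plot.fill(0).draw_graph(values, red, 1, 1, 1, -1.0, 1.0);
        plot.display(graph);
        graph.wait(20); // throttle to roughly 50 updates per second
    }
}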

CUDA: using memset (or fill or ...) to set an array of float to the max value possible

Edit: Thanks for the previous answers, but in fact I want to do it in CUDA, and apparently there is no Fill function for CUDA. I have to fill the matrix once for each thread, so I want to make sure I'm using the fastest way possible. Is this for loop my best choice?
I want to set the matrix of float to the maximum value possible (in float). What is the correct way of doing this job?
float *matrix = new float[N*N];
for (int i = 0; i < N*N; i++) {
    matrix[i] = 999999;
}
Thanks in advance.
The easiest approach in CUDA is to use thrust::fill. Thrust is included with CUDA 4.0 and later, or you can install it if you are using CUDA 3.2.
#include <thrust/fill.h>
#include <thrust/device_vector.h>
...
thrust::device_vector<float> v(N*N);
thrust::fill(v.begin(), v.end(), std::numeric_limits<float>::max()); // or 999999.f if you prefer
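If other parts of the program expect a plain device pointer (for example an existing kernel), the filled device_vector can still be handed to it via thrust::raw_pointer_cast; a small sketch with a placeholder kernel:
#include <thrust/device_vector.h>
#include <thrust/fill.h>
#include <limits>

// Placeholder kernel standing in for whatever code consumes the matrix.
__global__ void useMatrix(const float* matrix, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // ... read matrix[i] ...
    }
}

int main()
{
    const int N = 1024; // example size
    thrust::device_vector<float> v(N * N);
    thrust::fill(v.begin(), v.end(), std::numeric_limits<float>::max());
    const float* raw = thrust::raw_pointer_cast(v.data()); // plain device pointer
    useMatrix<<<(N * N + 255) / 256, 256>>>(raw, N * N);
    cudaDeviceSynchronize();
}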
You could also write pure CUDA code something like this:
#include <limits>

// N is the matrix dimension, as in the question.
template <typename T>
__global__ void initMatrix(T *matrix, int width, int height, T val) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    // Grid-stride loop: each thread fills every (gridDim.x * blockDim.x)-th element.
    for (int i = idx; i < width * height; i += gridDim.x * blockDim.x) {
        matrix[i] = val;
    }
}

int main(void) {
    float *matrix = 0;
    cudaMalloc((void**)&matrix, N * N * sizeof(float));
    int blockSize = 256;
    int numBlocks = (N * N + blockSize - 1) / blockSize;
    initMatrix<<<numBlocks, blockSize>>>(matrix, N, N,
        std::numeric_limits<float>::max()); // or 999999.f if you prefer
}
Use std::numeric_limits<float>::max() and std::fill as:
#include <limits> //for std::numeric_limits<>
#include <algorithm> //for std::fill
std::fill(matrix, matrix + N*N, std::numeric_limits<float>::max());
Or use std::fill_n, which reads a bit better:
std::fill_n(matrix, N*N, std::numeric_limits<float>::max());
See these online documentation:
std::fill
std::fill_n
You need to iterate through the array and set each float element to std::numeric_limits<float>::max() from <limits> ... you can't use memset for this, since memset sets every byte in a memory buffer to a given value, not every multi-byte element such as a float.
So you would end up with code like the following, since you're using a single flat array for your matrix and therefore only need one loop:
#include <limits>

float* matrix = new float[N*N];
for (int i = 0; i < N*N; i++)
{
    matrix[i] = std::numeric_limits<float>::max();
}
The second big problem with your request is that memset takes an integral value and writes only its lowest byte to every byte of the buffer. To use it you would need a single byte pattern that, repeated across all four bytes of a float, produces the bit pattern of the maximum float value, and no such pattern exists (FLT_MAX is 0x7F7FFFFF, whose bytes are not all equal). So in the end it's not just something we're advising against; it's impossible with the way memset works. You simply can't use memset to initialize a buffer of multi-byte types to a specific value unless every byte of that value is identical, the common case being zeroing the buffer out.
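For completeness, the one common case where a byte-wise fill does work for floats is zeroing, because 0.0f happens to be the all-zero-bytes pattern:
#include <cstring>

std::memset(matrix, 0, N * N * sizeof(float)); // every byte 0, so every element becomes 0.0f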
To do this job easily, I suggest using std::fill instead, which lives in the <algorithm> header.
std::fill(matrix, matrix + N*N, 999999);
Instead of using dynamic memory in C++, use vector and watch it do all the work for you:
std::vector<float> matrix(N * N, std::numeric_limits<float>::max());
In fact you can even make it a 2d matrix easily:
std::vector<std::vector<float> > matrix(N, std::vector<float>(N, std::numeric_limits<float>::max()));
The C++ Way:
std::fill(matrix, matrix + N*N, std::numeric_limits<float>::max());
Is matrix in global memory or thread-local memory? If it is in global memory and you only need to initialize it once (rather than reset it in the middle of a kernel), you can use cudaMemset from the host before launching the kernel; if the reset happens mid-kernel, consider breaking the kernel into two pieces so that you can still call cudaMemset between the launches. Be aware, though, that cudaMemset behaves like memset: it sets every byte to the same value and takes a size in bytes, so it can zero the matrix but cannot produce an arbitrary float value such as the maximum; for that, use a fill kernel or thrust::fill as shown above.
cudaMemset(matrix, 0, N*N*sizeof(float)); // every element becomes 0.0f