I'm a beginner in programming with OpenNI and OpenCV, and I'm working with the Kinect.
Here is a part of the code that I'm using to get the depth map data (it's working).
Now my question is:
How could I get the depth map back as an image?
My data is in DepthPixel* pDepth1 and I want to display the depth map as an image (because I want to save it).
Thank you
VideoFrameRef frame;
DepthPixel* pDepth1 = NULL;
DepthPixel* pDepth2 = NULL;
for (int i = 0; i < 2; i++)
{
int changedStreamDummy;
VideoStream* pStream = &depth;
rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
if (rc != STATUS_OK)
{
printf("Wait failed! (timeout is %d ms)\n%s\n", SAMPLE_READ_WAIT_TIMEOUT, OpenNI::getExtendedError());
continue;
}
rc = depth.readFrame(&frame);
if (rc != STATUS_OK)
{
printf("Read failed!\n%s\n", OpenNI::getExtendedError());
continue;
}
if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
{
printf("Unexpected frame format\n");
continue;
}
if (i == 0){
int dummy;
cout << "Press any key to take first pic: ";
cin >> dummy;
pDepth1 = (DepthPixel*)frame.getData();
}
else{
int dummy;
cout << "Press any key to take second pic: ";
cin >> dummy;
pDepth2 = (DepthPixel*)frame.getData();
}
}
Assuming that you have your dependencies sorted out, the following has worked for me in the past:
// set up the capture with the correct settings
VideoCapture capture;
capture.open(CV_CAP_OPENNI);
capture.set(CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, CV_CAP_OPENNI_VGA_30HZ);
// get an image and display it
Mat image;
capture.grab();
capture.retrieve(image, CV_CAP_OPENNI_DEPTH_MAP);
Then work with the image as you would with any other matrix. Remember that the image is CV_16U, as the capture records depths as 16-bit unsigned integers.
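If you want to stay with the raw OpenNI buffer from the question instead, here is a minimal sketch of turning it into a saveable image. It assumes OpenCV is available and that the VideoFrameRef named frame is still alive; depth16 and depth8 are illustrative names, not from the original code.
// wrap the OpenNI buffer in a cv::Mat header (no copy is made)
cv::Mat depth16(frame.getHeight(), frame.getWidth(), CV_16UC1,
                (void*)frame.getData(), frame.getStrideInBytes());
// a 16-bit PNG keeps the raw depth values
cv::imwrite("depth_raw.png", depth16);
// for a viewable 8-bit image, scale the depths into the 0..255 range first
cv::Mat depth8;
cv::normalize(depth16, depth8, 0, 255, cv::NORM_MINMAX, CV_8UC1);
cv::imwrite("depth_view.png", depth8);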
I am trying to implement parts of a Linux-based network Tic-Tac-Toe program in a Winsock server program. I am having some issues, as the Linux version opens a socket as an integer variable:
int client_sock = 0; however, in the Winsock application a CSocket object is created: m_pClientSocket = new CSocket();
Now, obviously, when I try to compile the function that sends the player's move to the server:
if (!SendStatus(client_sock, MOVE)) {
ServerDisconnected();
}
I get an invalid variable type error, as the function is expecting an int and it's receiving a CSocket object.
My question is: how do I get the functions to work? I have tried getting the socket handle and using that, but I'm not sure if that's the right direction, and using SendStatus(this->m_pClientSocket->GetSocketHandle(), WIN); is giving me issues as it's a static function.
Send Status Function:
bool SendStatus(int socket, StatusCode status)
{
char* data = (char*)&status;
size_t left = sizeof(status);
ptrdiff_t rc;
while (left)
{
rc = send(socket, data + sizeof(status) - left, left, 0);
if (rc <= 0) return false;
left -= rc;
}
return true;
}
TakeTurn Function:
bool TakeTurn(TTTBoard& board)
{
bool game_over = false;
bool input_good = false;
int row = 0, col = 0;
// Display the board
board.DrawBoard();
while (!input_good)
{
printf("Enter move (row col): ");
// Make sure two integers were inputted
if (scanf_s("%d %d", &row, &col) == 2)
{
if (row < 0 || row > 2)
printf("Invalid row input. Try again.\n");
else if (col < 0 || col > 2)
printf("Invalid column input. Try again.\n");
else if (!board.IsBlank(row, col))
printf("That cell isn't blank. Try again.\n");
else
input_good = true;
}
else
printf("Invalid move input. Try again.\n");
// flush any data from the internal buffers
int c;
while ((c = getchar()) != '\n' && c != EOF);
}
board.PlayerMakeMove(row, col);
cout << "Sending move to server" << endl;
// Send the move to the server
if (!SendStatus(client_sock, MOVE))
ServerDisconnected();
if (!SendInt(client_sock, row))
ServerDisconnected();
if (!SendInt(client_sock, col))
ServerDisconnected();
// Check for win/draw
if (board.IsWon())
{
cout << "YOU WIN!!!!" << endl;
game_over = true;
SendStatus(client_sock, WIN);
}
else if (board.IsDraw())
{
cout << "You tied ._." << endl;
game_over = true;
SendStatus(client_sock, DRAW);
}
return game_over;
}
Hopefully I have shared enough info; I didn't want to bloat the question. Thanks!
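One way to bridge the two types, sketched under the assumption that GetSocketHandle() returns the raw SOCKET handle (MFC's CAsyncSocket also exposes it as the public member m_hSocket), is to make the helpers take a SOCKET and call them from a non-static member function. This is an untested sketch, assuming <winsock2.h> and the question's StatusCode enum:
bool SendStatus(SOCKET socket, StatusCode status)
{
    const char* data = (const char*)&status;
    int left = (int)sizeof(status);
    while (left > 0)
    {
        // Winsock's send() takes a SOCKET and returns an int
        int rc = send(socket, data + sizeof(status) - left, left, 0);
        if (rc == SOCKET_ERROR || rc == 0) return false;
        left -= rc;
    }
    return true;
}
// from a non-static member function, where m_pClientSocket is reachable:
// if (!SendStatus(m_pClientSocket->m_hSocket, MOVE)) ServerDisconnected();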
I made a program in C++ to take a snapshot with a uEye camera; however, the picture that gets saved is just white. That said, sometimes you can see a tiny bit of the road, but it is still mostly white. I believe it's an issue with an auto parameter, but I feel as if I have tried everything.
Below is my code:
void MainWindow::CaptureImage(){
int initcamera = is_InitCamera(&hCam, hWndDisplay);
if(initcamera != IS_SUCCESS)
{
cout<<endl<<"Failed to initialize the camera"<<endl;
exit(-1);
}else{
cout<<endl<<"Initialized Camera"<<endl;
}
int camerainfo = is_GetCameraInfo (hCam, &camera_info);
if(camerainfo != IS_SUCCESS)
{
cout<<endl<<"Unable to acquire camera information"<<endl;
exit(-1);
}else{
cout<<endl<<"Camera information required"<<endl;
}
int sensorinfo = is_GetSensorInfo (hCam, &sInfo);
if(sensorinfo != IS_SUCCESS)
{
cout<<endl<<"Unable to acquire sensor information"<<endl;
exit(-1);
}else{
cout<<endl<<"Sensor information acquired"<<endl;
}
int colormode = is_SetColorMode(hCam, IS_CM_BGR8_PACKED);
if(colormode != IS_SUCCESS)
{
cout<<endl<<"Unable to set the color mode"<<endl;
exit(-1);
}else{
cout<<endl<<"Color mode set"<<endl;
}
int pXPos = (sInfo.nMaxWidth);
int pYPos = (sInfo.nMaxHeight);
int rit = is_AllocImageMem (hCam,pXPos,pYPos, 24, &m_pcImageMemory, &m_lMemoryId);
if(rit != IS_SUCCESS)
{
cout<<endl<<"UNABLE TO INITIALIZE MEMORY"<<endl;
system("PAUSE");
exit(-1);
}else{
cout<<endl<<"INITIALIZED MEMORY"<<endl;
}
int rat = is_SetImageMem (hCam, m_pcImageMemory, m_lMemoryId);
if(rat != IS_SUCCESS)
{
cout<<endl<<"UNABLE TO ACTIVATE MEMORY"<<endl;
system("PAUSE");
exit(-1);
}else{
cout<<endl<<"ACTIVATE MEMORY"<<endl;
}
double strength_factor = 1.0;
int colorcorrection = is_SetColorCorrection(hCam, IS_CCOR_ENABLE, &strength_factor);
double pval = 1;
int whiteb = is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_WHITEBALANCE, &pval, 0);
double gval = 1;
int gains = is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_GAIN, &gval, 0);
int dummy;
char *pMem, *pLast;
IMAGE_FILE_PARAMS ImageFileParams;
ImageFileParams.pwchFileName = L"./TestImage.bmp"; /// <-- Insert name and location of the image
ImageFileParams.pnImageID = NULL;
ImageFileParams.ppcImageMem = NULL;
ImageFileParams.nQuality = 0;
ImageFileParams.nFileType = IS_IMG_BMP;
int sho = is_FreezeVideo(hCam, IS_WAIT);
if(sho != IS_SUCCESS)
{
cout<<endl<<"UNABLE TO ACQUIRE FROM THE CAMERA"<<endl;
system("PAUSE");
exit(-1);
}
if (sho == IS_SUCCESS){
int m_Ret = is_GetActiveImageMem(hCam, &pLast, &dummy);
int n_Ret = is_GetImageMem(hCam, (void**)&pLast);
}
if (is_ImageFile(hCam, IS_IMAGE_FILE_CMD_SAVE, (void*)&ImageFileParams, sizeof(ImageFileParams)) == IS_SUCCESS)
{
cout << endl << "An Image was saved" << endl;
}
else
{
cout << endl << "something went wrong" << endl;
}
// Releases an image memory that was allocated
//is_FreeImageMem(hCam, m_pcImageMemory, m_lMemoryId);
// Disables the hCam camera handle and releases the data structures and memory areas taken up by the uEye camera
is_ExitCamera(hCam);
}
I have tried many things, like the parameters below, however it is still just white. I had the camera in a room which is darker and the image came out OK, so I definitely think it's due to the daylight outside.
const wstring filenameU(filename.begin(), filename.end());
is_ParameterSet(hCam, IS_PARAMETERSET_CMD_LOAD_FILE,(void*) filenameU.c_str(), 0);
unsigned int range[3];
memset(range, 0, sizeof(range));
is_PixelClock(hCam, IS_PIXELCLOCK_CMD_GET_RANGE, (void*)range, sizeof(range));
unsigned int maximumPixelClock = range[1];
is_PixelClock(hCam, IS_PIXELCLOCK_CMD_SET, (void*)&maximumPixelClock, sizeof(maximumPixelClock));
//double pval1 = auto_exposure, pval2 = 0;
double pval1 = 1, pval2 = 0;
is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_SENSOR_SHUTTER, &pval1, &pval2);
is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_SHUTTER,&pval1, &pval2);
is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_SENSOR_FRAMERATE,&pval1, &pval2);
is_SetAutoParameter(hCam, IS_SET_AUTO_WB_OFFSET, &pval1, &pval2);
int desiredFrameRate = 60;
is_SetFrameRate(hCam, desiredFrameRate, &m_actualFrameRate);
You cannot just take one picture, because the exposure might be too high. You have to capture a few images to give the auto exposure control a chance to lower the exposure. The reason you have to capture some frames is that the AEC is done in software and can only start to change parameters once it receives some frames.
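In code, the warm-up could look something like this (a sketch, not the poster's program; the count of 20 frames is an arbitrary assumption to tune per scene):
// let the software auto-exposure settle before taking the real snapshot;
// assumes hCam is initialized and image memory is already set up
for (int i = 0; i < 20; ++i)
{
    if (is_FreezeVideo(hCam, IS_WAIT) != IS_SUCCESS)
        break; // stop warming up on a capture error
}
// ...then capture and save the frame you actually want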
I am trying to decode a video stream from the browser using the ffmpeg API. The stream is produced by the webcam and recorded with MediaRecorder as webm format. What I ultimately need is a vector of opencv cv::Mat objects for further processing.
I have written a C++ webserver using the uWebsocket library. The video stream is sent via websocket from the browser to the server once per second. On the server, I append the received data to my custom buffer and decode it with the ffmpeg API.
If I just save the data on the disk and later I play it with a media player, it works fine. So, whatever the browser sends is a valid video.
I do not think that I correctly understand how the custom IO should behave with network streaming, as nothing seems to be working.
The custom buffer:
struct Buffer
{
std::vector<uint8_t> data;
int currentPos = 0;
};
The readAVBuffer method for the custom IO:
int MediaDecoder::readAVBuffer(void* opaque, uint8_t* buf, int buf_size)
{
MediaDecoder::Buffer* mbuf = (MediaDecoder::Buffer*)opaque;
int count = 0;
for(int i=0;i<buf_size;i++)
{
int index = i + mbuf->currentPos;
if(index >= (int)mbuf->data.size())
{
break;
}
count++;
buf[i] = mbuf->data.at(index);
}
if(count > 0) mbuf->currentPos+=count;
std::cout << "read : "<<count<<" "<<mbuf->currentPos<<", buff size:"<<mbuf->data.size() << std::endl;
if(count <= 0) return AVERROR(EAGAIN); // is this the error that should be returned? It cannot be EOF, since most likely we're not done yet
return count;
}
The big decode method, which is supposed to return whatever frames it could read:
std::vector<cv::Mat> MediaDecoder::decode(const char* data, size_t length)
{
std::vector<cv::Mat> frames;
//add data to the buffer
for(size_t i=0;i<length;i++) {
buf.data.push_back(data[i]);
}
//do not invoke the decoders until we have 1MB of data
if(((buf.data.size() - buf.currentPos) < 1*1024*1024) && !initializedCodecs) return frames;
std::cout << "decoding data length "<<length<<std::endl;
if(!initializedCodecs) //initialize ffmpeg objects. Custom I/O, format, decoder, etc.
{
//these are just members of the class
avioCtxPtr = std::unique_ptr<AVIOContext,avio_context_deleter>(
avio_alloc_context((uint8_t*)av_malloc(4096),4096,0,&buf,&readAVBuffer,nullptr,nullptr),
avio_context_deleter());
if(!avioCtxPtr)
{
std::cerr << "Could not create IO buffer" << std::endl;
return frames;
}
fmt_ctx = std::unique_ptr<AVFormatContext,avformat_context_deleter>(avformat_alloc_context(),
avformat_context_deleter());
fmt_ctx->pb = avioCtxPtr.get();
fmt_ctx->flags |= AVFMT_FLAG_CUSTOM_IO ;
//fmt_ctx->max_analyze_duration = 2 * AV_TIME_BASE; // read 2 seconds of data
{
AVFormatContext *fmtCtxRaw = fmt_ctx.get();
if (avformat_open_input(&fmtCtxRaw, "", nullptr, nullptr) < 0) {
std::cerr << "Could not open movie" << std::endl;
return frames;
}
}
if (avformat_find_stream_info(fmt_ctx.get(), nullptr) < 0) {
std::cerr << "Could not find stream information" << std::endl;
return frames;
}
if((video_stream_idx = av_find_best_stream(fmt_ctx.get(), AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0)) < 0)
{
std::cerr << "Could not find video stream" << std::endl;
return frames;
}
AVStream *video_stream = fmt_ctx->streams[video_stream_idx];
AVCodec *dec = avcodec_find_decoder(video_stream->codecpar->codec_id);
video_dec_ctx = std::unique_ptr<AVCodecContext,avcodec_context_deleter> (avcodec_alloc_context3(dec),
avcodec_context_deleter());
if (!video_dec_ctx)
{
std::cerr << "Failed to allocate the video codec context" << std::endl;
return frames;
}
avcodec_parameters_to_context(video_dec_ctx.get(),video_stream->codecpar);
video_dec_ctx->thread_count = 1;
/* video_dec_ctx->max_b_frames = 0;
video_dec_ctx->frame_skip_threshold = 10;*/
AVDictionary *opts = nullptr;
av_dict_set(&opts, "refcounted_frames", "1", 0);
av_dict_set(&opts, "deadline", "1", 0);
av_dict_set(&opts, "auto-alt-ref", "0", 0);
av_dict_set(&opts, "lag-in-frames", "1", 0);
av_dict_set(&opts, "rc_lookahead", "1", 0);
av_dict_set(&opts, "drop_frame", "1", 0);
av_dict_set(&opts, "error-resilient", "1", 0);
int width = video_dec_ctx->width;
videoHeight = video_dec_ctx->height;
if(avcodec_open2(video_dec_ctx.get(), dec, &opts) < 0)
{
std::cerr << "Failed to open the video codec context" << std::endl;
return frames;
}
AVPixelFormat pFormat = AV_PIX_FMT_BGR24;
img_convert_ctx = std::unique_ptr<SwsContext,swscontext_deleter>(sws_getContext(width, videoHeight,
video_dec_ctx->pix_fmt, width, videoHeight, pFormat,
SWS_BICUBIC, nullptr, nullptr,nullptr),swscontext_deleter());
frame = std::unique_ptr<AVFrame,avframe_deleter>(av_frame_alloc(),avframe_deleter());
frameRGB = std::unique_ptr<AVFrame,avframe_deleter>(av_frame_alloc(),avframe_deleter());
int numBytes = av_image_get_buffer_size(pFormat, width, videoHeight,32 /*https://stackoverflow.com/questions/35678041/what-is-linesize-alignment-meaning*/);
std::unique_ptr<uint8_t,avbuffer_deleter> imageBuffer((uint8_t *) av_malloc(numBytes*sizeof(uint8_t)),avbuffer_deleter());
av_image_fill_arrays(frameRGB->data,frameRGB->linesize,imageBuffer.get(),pFormat,width,videoHeight,32);
frameRGB->width = width;
frameRGB->height = videoHeight;
initializedCodecs = true;
}
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = nullptr;
pkt.size = 0;
int read_frame_return = 0;
while ( (read_frame_return=av_read_frame(fmt_ctx.get(), &pkt)) >= 0)
{
readFrame(&frames,&pkt,video_dec_ctx.get(),frame.get(),img_convert_ctx.get(),
videoHeight,frameRGB.get());
//if(cancelled) break;
}
avioCtxPtr->eof_reached = 0;
avioCtxPtr->error = 0;
//flush
// readFrame(frames.get(),nullptr,video_dec_ctx.get(),frame.get(),
// img_convert_ctx.get(),videoHeight,frameRGB.get());
avioCtxPtr->eof_reached = 0;
avioCtxPtr->error = 0;
if(frames.size() <= 0)
{
std::cout << "buffer pos: "<<buf.currentPos<<", buff size:"<<buf.data.size()
<<",read_frame_return:"<<read_frame_return<< std::endl;
}
return frames;
}
What I would expect is a continuous extraction of cv::Mat frames as I feed it more and more data. What actually happens is that after the buffer is fully read I see:
[matroska,webm # 0x507b450] Read error at pos. 1278266 (0x13813a)
[matroska,webm # 0x507b450] Seek to desired resync point failed. Seeking to earliest point available instead.
And then no more bytes are read from the buffer, even if I later increase its size.
There is something terribly wrong that I'm doing here, and I don't understand what.
What I ended up doing was moving the reading of the incoming data and the actual decoding to a different thread. The read method now simply blocks if there are no more bytes available, waiting until anything comes in.
When new bytes arrive, they're added to the buffer and the condition_variable signals the waiting thread to wake up and start reading data from the buffer again.
It works well enough.
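A minimal sketch of that scheme, under the assumption that Buffer gains a mutex and a condition_variable (the names below are mine, not the original members):
#include <algorithm>
#include <condition_variable>
#include <cstdint>
#include <cstring>
#include <mutex>
#include <vector>
struct Buffer
{
    std::vector<uint8_t> data;
    size_t currentPos = 0;
    std::mutex mtx;
    std::condition_variable cv;
};
// runs on the decoder thread via AVIOContext: block until bytes are available
static int readAVBufferBlocking(void* opaque, uint8_t* buf, int buf_size)
{
    Buffer* b = static_cast<Buffer*>(opaque);
    std::unique_lock<std::mutex> lock(b->mtx);
    b->cv.wait(lock, [b] { return b->data.size() > b->currentPos; });
    int count = std::min<int>(buf_size, (int)(b->data.size() - b->currentPos));
    std::memcpy(buf, b->data.data() + b->currentPos, count);
    b->currentPos += count;
    return count;
}
// runs on the network thread whenever a websocket chunk arrives
void appendData(Buffer& b, const char* chunk, size_t length)
{
    {
        std::lock_guard<std::mutex> lock(b.mtx);
        b.data.insert(b.data.end(), chunk, chunk + length);
    }
    b.cv.notify_one(); // wake the blocked reader
}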
I'm developing a program to detect objects in videos and images.
It works with images, but now I want to use it with videos. I use a specific folder to pick the images from, so I wanted to save the frames from the videos in that folder before the detection.
The variables video and salvataggio are already set.
In the following code I navigate through the folder to analyze the videos:
DIR *dir;
dir = opendir(video.c_str());
string vidName;
struct dirent *ent;
if (dir != NULL) {
while ((ent = readdir (dir)) != NULL) {
vidName= ent->d_name;
if(vidName.compare(".")!= 0 && vidName.compare("..")!= 0)
{
//string vidPath(neg + vidName);
estraiframe(video, vidName, salvataggio);
}
}
closedir (dir);
}
else {
cout<<"directory "<< video << " not present"<<endl;
}
The function estraiframe saves the frames in the output folder:
void estraiframe(string path, string vidName, string output){
string vidPath(path + vidName);
VideoCapture cap(vidPath);
if( !cap.isOpened()){
cout << "Cannot open the video file" << endl;
return;
}
double count = cap.get(CV_CAP_PROP_FRAME_COUNT);
double rate = cap.get(CV_CAP_PROP_FPS);
int counter = 0;
for (int i=1; i< count; i+=rate*5)
{
cap.set(CV_CAP_PROP_POS_FRAMES,i);
Mat frame;
cap.read(frame);
counter++;
string nomeframe = to_string(counter) + "-frame_from"+vidName+".jpg";
string percorso (output+nomeframe);
cout << percorso;
imwrite(percorso,frame);
}
}
Apparently it works, but after the last frame it gives me the following error:
Assertion stream_index < ogg->nstreams failed at libavformat/oggdec.c:898
I looked into it, but I didn't find where the error is.
Your video contains multiple stream indexes; e.g. 3 audio streams and 1 video stream would result in 4 stream indexes. I recommend that you check the number of streams your video contains and print ogg->nstreams to see if it corresponds. Looking at the source code in oggdec.c at line 898:
static int ogg_read_seek(AVFormatContext *s, int stream_index,
int64_t timestamp, int flags)
{
struct ogg *ogg = s->priv_data;
struct ogg_stream *os = ogg->streams + stream_index;
int ret;
av_assert0(stream_index < ogg->nstreams); /* Line 898 */
You're clearly going out of bounds here.
Apparently I managed to solve it; I modified the function that extracts the frames:
void estraiframe(string path, string vidName, string output){
string vidPath(path + vidName);
VideoCapture cap(vidPath);
if( !cap.isOpened()){
cout << "Cannot open the video file" << endl;
return;
}
double rate = cap.get(CV_CAP_PROP_FPS);
int counter = 0;
Mat frame;
int i=1;
while(1)
{
cap.read ( frame);
if( frame.empty()) break;
counter++;
if (counter == rate*5*i){
i++;
string nomeframe = to_string(counter) + "-frame_from"+vidName+".jpg";
string percorso (output+nomeframe);
imwrite(percorso, frame);
}
char key = waitKey(10);
if ( key == 27) break;
}
}
I've made a program in C++ using the OpenCV library. The program records video from the webcam and then splits it into frames. I want to know if the frames are in RGB, because I want to access the RGB properties of every pixel. The codec for capture is CV_FOURCC('M','J','P','G'). How can I get the frames in the RGB colorspace?
int saveImages(); // forward declaration: saveImages() is defined after main()
int main() {
Mat image;
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FPS, 10);
if ( !cap.isOpened() ) {
cout << "ERROR : Cannot open the video file"<<endl;
return -1;
}
namedWindow("MyWindow", CV_WINDOW_AUTOSIZE);
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
cout << "Frame size :" << dWidth << "x" << dHeight << endl;
Size frameSize(static_cast<int>(dWidth), static_cast<int>(dHeight));
VideoWriter oVideoWriter("E:/myVideo.avi", CV_FOURCC('M', 'J', 'P', 'G'), 10, frameSize, true);
if (!oVideoWriter.isOpened()) {
cout << "ERROR : Failed to write the video"<<endl;
return -1;
}
while (1) {
Mat image;
bool bSuccess = cap.read(image);
if (!bSuccess) {
cout << "ERROR : Cannot read a frame from video file" << endl;
break;
}
oVideoWriter.write(image);
imshow("MyWindow", image);
if (waitKey(10) == 27) {
saveImages();
cout << "ESC key is pressed by user" << endl;
break;
}
}
return 0;
}
int saveImages() {
CvCapture *capture = cvCaptureFromFile("E:/myVideo.avi");
if(!capture)
{
cout<<"!!! cvCaptureFromAVI failed (file not found?)"<<endl;
return -1;
}
int fps = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
IplImage* frame = NULL;
int frame_number = 0;
char key = 0;
while (key != 'q')
{
frame = cvQueryFrame(capture);
if (!frame)
{
cout<<"!!! cvQueryFrame failed: no frame"<<endl;
break;
}
char filename[100];
strcpy(filename, "frame_");
char frame_id[30];
_itoa(frame_number, frame_id, 10);
strcat(filename, frame_id);
strcat(filename, ".jpg");
printf("* Saving: %s\n", filename);
if (!cvSaveImage(filename, frame))
{
cout<<"!!! cvSaveImage failed"<<endl;
break;
}
frame_number++;
key = cvWaitKey(1000 / fps);
}
cvReleaseCapture(&capture);
return 0;
}
When OpenCV loads colored images (i.e. 3 channel) from the disk, camera, or a video file, the image data will be stored in the BGR format. This is a simple test that you can do:
/* Code using the C++ API */
cv::VideoCapture cap(0);
if (!cap.isOpened()) {
std::cout << "!!! Failed to open webcam" << std::endl;
return -1;
}
cv::Mat frame;
if (!cap.read(frame)) {
std::cout << "!!! Failed to read a frame from the camera" << std::endl;
return -1;
}
bool is_colored = false;
if (frame.channels() == 3) {
is_colored = true;
}
// Do something with is_colored
// ...
Unless you have a weird camera, the frames will always be colored (and as a result, stored as BGR).
When cv::imwrite() (C++ API) or cvSaveImage() (C API) is called, OpenCV does the proper magic tricks to ensure the data is saved in a way compatible with the requested output format (JPG, PNG, AVI, etc.), and during this process it automatically converts the data to RGB if it needs to.
Nevertheless, if for some reason you need to convert the image to RGB you can call:
cv::Mat img_rgb;
cv::cvtColor(frame, img_rgb, CV_BGR2RGB);
Please note that OpenCV has a C API and also a C++ API, and they shouldn't be mixed:
If you use IplImage then stick with the rest of the C API.
If you decide to go with cv::Mat, then keep using the C++ API.
There are different ways to access the pixels of a cv::Mat; here is one of them:
unsigned char* pixels = (unsigned char*)(frame.data);
for (int i = 0; i < frame.rows; i++)
{
for (int j = 0; j < frame.cols; j++)
{
// each row is frame.step bytes wide; pixels are stored as B, G, R triplets
unsigned char b = pixels[frame.step * i + j * 3 + 0];
unsigned char g = pixels[frame.step * i + j * 3 + 1];
unsigned char r = pixels[frame.step * i + j * 3 + 2];
}
}
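For reference, the same traversal with OpenCV's typed accessor, which avoids the manual step arithmetic (cv::Vec3b holds the B, G and R bytes of one pixel):
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        cv::Vec3b bgr = frame.at<cv::Vec3b>(i, j);
        unsigned char b = bgr[0];
        unsigned char g = bgr[1];
        unsigned char r = bgr[2];
    }
}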