Changing resolution on OpenNI2 not working - C++

I want to read depth frames at 640x480.
I am using Windows 8.1 64-bit, OpenNI2 32-bit, and a Kinect: PSMP05000, PSCM04900 (PrimeSense).
I took the code references from here:
cannot set VGA resolution
Simple Read
and combined them into this code:
main.cpp
OniSampleUtilities.h
SimpleRead.vcxproj
It should compile if you install OpenNI2 32-bit from here:
OpenNI 2
#include "iostream"
#include "OpenNI.h"
#include "OniSampleUtilities.h"
#define SAMPLE_READ_WAIT_TIMEOUT 2000 //2000ms
using namespace openni;
using namespace std;
int main()
{
Status rc = OpenNI::initialize();
if (rc != STATUS_OK)
{
cout << "Initialize failed:" << endl << OpenNI::getExtendedError() << endl;
return 1;
}
Device device;
rc = device.open(ANY_DEVICE);
if (rc != STATUS_OK)
{
cout << "Couldn't open device" << endl << OpenNI::getExtendedError() << endl;
return 2;
}
VideoStream depth;
if (device.getSensorInfo(SENSOR_DEPTH) != NULL)
{
rc = depth.create(device, SENSOR_DEPTH);
if (rc != STATUS_OK)
{
cout << "Couldn't create depth stream" << endl << OpenNI::getExtendedError() << endl;
return 3;
}
}
rc = depth.start();
if (rc != STATUS_OK)
{
cout << "Couldn't start the depth stream" << endl << OpenNI::getExtendedError() << endl;
return 4;
}
VideoFrameRef frame;
// set resolution
// depth modes
cout << "Depth modes" << endl;
const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH); // select index=4 640x480, 30 fps, 1mm
const openni::Array< openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
for (int i = 0; i<modesDepth.getSize(); i++) {
printf("%i: %ix%i, %i fps, %i format\n", i, modesDepth[i].getResolutionX(), modesDepth[i].getResolutionY(),
modesDepth[i].getFps(), modesDepth[i].getPixelFormat()); //PIXEL_FORMAT_DEPTH_1_MM = 100, PIXEL_FORMAT_DEPTH_100_UM
}
rc = depth.setVideoMode(modesDepth[0]);
if (openni::STATUS_OK != rc)
{
cout << "error: depth fromat not supprted..." << endl;
}
system("pause");
while (!wasKeyboardHit())
{
int changedStreamDummy;
VideoStream* pStream = &depth;
rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
if (rc != STATUS_OK)
{
cout << "Wait failed! (timeout is " << SAMPLE_READ_WAIT_TIMEOUT << " ms)" << endl << OpenNI::getExtendedError() << endl;
continue;
}
rc = depth.readFrame(&frame);
if (rc != STATUS_OK)
{
cout << "Read failed!" << endl << OpenNI::getExtendedError() << endl;
continue;
}
if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
{
cout << "Unexpected frame format" << endl;
continue;
}
DepthPixel* pDepth = (DepthPixel*)frame.getData();
int middleIndex = (frame.getHeight()+1)*frame.getWidth()/2;
printf("[%08llu] %8d\n", (long long)frame.getTimestamp(), pDepth[middleIndex]);
}
depth.stop();
depth.destroy();
device.close();
OpenNI::shutdown();
return 0;
}
There are 6 modes of operation:
0: 320x240, 30 fps, 100 format
1: 320x240, 30 fps, 101 format
2: 320x240, 60 fps, 100 format
3: 320x240, 60 fps, 101 format
4: 640x480, 30 fps, 100 format
5: 640x480, 30 fps, 101 format
It can read only from modes 0-3.
At modes 4 and 5 I get a timeout.
How can I read depth frames at 640x480?
Thanks for the help,
Tal.
====================================================
New information:
I also used this line, and I get the same results:
const openni::SensorInfo* sinfo = &(depth.getSensorInfo());
This line never executes in any mode:
cout << "error: depth format not supported..." << endl;
At modes 4 and 5 this line always executes:
cout << "Wait failed! (timeout is " << SAMPLE_READ_WAIT_TIMEOUT << " ms)" << endl << OpenNI::getExtendedError() << endl;
I think it may be a bug in OpenNI2.
With OpenNI1 I can read the depth image at 640x480 on the same computer, OS, and device.

Maybe I am wrong, but I am almost sure that the problem is the order in which you are doing it.
I think you should set the video mode after depth.create(device, SENSOR_DEPTH) and before depth.start().
If I remember correctly, once the stream has started you may not change its resolution.
So it should be something like this:
...
if (device.getSensorInfo(SENSOR_DEPTH) != NULL)
{
    rc = depth.create(device, SENSOR_DEPTH);
    if (rc != STATUS_OK)
    {
        cout << "Couldn't create depth stream" << endl << OpenNI::getExtendedError() << endl;
        return 3;
    }
}

// set resolution
// depth modes
cout << "Depth modes" << endl;
const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH);
const openni::Array<openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
rc = depth.setVideoMode(modesDepth[0]);
if (openni::STATUS_OK != rc)
{
    cout << "error: depth format not supported..." << endl;
}

rc = depth.start();
if (rc != STATUS_OK)
{
    cout << "Couldn't start the depth stream" << endl << OpenNI::getExtendedError() << endl;
    return 4;
}

VideoFrameRef frame;
...
I hope that this helps you; if not, please add a comment. I have similar code working in the git repository I showed you the other day, tested with a PrimeSense Carmine camera.

In my case (Asus Xtion PRO on a USB 3.0 port, OpenNI2, Windows 8.1), it seems there is something wrong with OpenNI2 (or its driver) that prevents me from changing the resolution in code. NiViewer simply hangs, or its frame rate drops significantly, if the color resolution is set to 640x480.
However, on Windows, I managed to change the resolution by editing the settings in PS1080.ini in the OpenNI2/Tools/OpenNI2/Drivers folder. In the ini file, for Asus, make sure
UsbInterface = 2
is enabled. By default it's zero. Then set Resolution = 1 in the depth and image sections.
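For reference, the relevant entries look roughly like this (a sketch based on my copy of the file; section and key names may differ in other versions):
; OpenNI2/Tools/OpenNI2/Drivers/PS1080.ini
[Device]
; 2 = BULK endpoints (the default 0 was the problem in my case)
UsbInterface=2

[Depth]
; 0 = QVGA (320x240), 1 = VGA (640x480)
Resolution=1

[Image]
Resolution=1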
My Asus Xtion firmware is v5.8.22.

I've tried the method @api55 mentioned and it works; the code and results follow.
But there is a problem when I make a similar change to the OpenNI sample code "SampleViewer" so that I can change the resolution freely. When I set the resolution to 320*240 all is well. However, when I change it to 640*480, although the program still reads frames in (at an apparently slower rate), the display just gets stuck.
Update 2015-12-27:
I then tested the aforementioned sample viewer with a Kinect 1.0 depth camera. Since its color camera has a resolution of no less than 640*480, I could not try 320*240, but the program works well with the Kinect 1.0 at 640*480. In conclusion, I think there must be some problem with the ASUS Xtion camera.
#include <iostream>
#include <cstdio>
#include <vector>
#include <OpenNI.h>
#include "OniSampleUtilities.h"

#pragma comment(lib, "OpenNI2")

#define SAMPLE_READ_WAIT_TIMEOUT 2000 // 2000 ms

using namespace openni;
using namespace std;

int main()
{
    Status rc = OpenNI::initialize();
    if (rc != STATUS_OK)
    {
        printf("Initialize failed:\n%s\n", OpenNI::getExtendedError());
        return 1;
    }

    Device device;
    openni::Array<openni::DeviceInfo> deviceInfoList;
    OpenNI::enumerateDevices(&deviceInfoList);
    for (int i = 0; i < deviceInfoList.getSize(); i++)
    {
        printf("%d: Uri: %s\n"
            "Vendor: %s\n"
            "Name: %s\n", i, deviceInfoList[i].getUri(), deviceInfoList[i].getVendor(), deviceInfoList[i].getName());
    }
    rc = device.open(deviceInfoList[0].getUri());
    if (rc != STATUS_OK)
    {
        printf("Couldn't open device\n%s\n", OpenNI::getExtendedError());
        return 2;
    }

    VideoStream depth;

    // set resolution
    // depth modes
    printf("\nDepth modes\n");
    const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH); // select index=4: 640x480, 30 fps, 1 mm
    if (sinfo == NULL)
    {
        printf("Couldn't get device info\n%s\n", OpenNI::getExtendedError());
        return 3;
    }

    rc = depth.create(device, SENSOR_DEPTH);
    if (rc != STATUS_OK)
    {
        printf("Couldn't create depth stream\n%s\n", OpenNI::getExtendedError());
        return 4;
    }

    const openni::Array<openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
    vector<int> item;
    for (int i = 0; i < modesDepth.getSize(); i++) {
        printf("%i: %ix%i, %i fps, %i format\n", i, modesDepth[i].getResolutionX(), modesDepth[i].getResolutionY(),
            modesDepth[i].getFps(), modesDepth[i].getPixelFormat()); // PIXEL_FORMAT_DEPTH_1_MM = 100, PIXEL_FORMAT_DEPTH_100_UM = 101
        if (modesDepth[i].getResolutionX() == 640 && modesDepth[i].getResolutionY() == 480)
            item.push_back(i);
    }
    if (item.empty()) // guard added: the original indexed item[0] unconditionally
    {
        printf("No 640x480 depth mode found\n");
        return 5;
    }
    int item_idx = item[0];
    printf("Choose mode %d\nWidth: %d, Height: %d\n", item_idx, modesDepth[item_idx].getResolutionX(), modesDepth[item_idx].getResolutionY());
    rc = depth.setVideoMode(modesDepth[item_idx]);
    if (rc != STATUS_OK)
    {
        printf("error: depth format not supported...\n");
        return 5;
    }

    rc = depth.start();
    if (rc != STATUS_OK)
    {
        printf("Couldn't start the depth stream\n%s\n", OpenNI::getExtendedError());
        return 6;
    }

    VideoFrameRef frame;
    printf("\nCurrent resolution:\n");
    printf("Width: %d Height: %d\n", depth.getVideoMode().getResolutionX(), depth.getVideoMode().getResolutionY());
    system("pause");

    while (!wasKeyboardHit())
    {
        int changedStreamDummy;
        VideoStream* pStream = &depth;
        rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
        if (rc != STATUS_OK)
        {
            printf("Wait failed! (timeout is %d ms)\n%s\n", SAMPLE_READ_WAIT_TIMEOUT, OpenNI::getExtendedError());
            continue;
        }
        rc = depth.readFrame(&frame);
        if (rc != STATUS_OK)
        {
            printf("Read failed!\n%s\n", OpenNI::getExtendedError());
            continue;
        }
        if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
        {
            printf("Unexpected frame format\n");
            continue;
        }
        DepthPixel* pDepth = (DepthPixel*)frame.getData();
        int middleIndex = (frame.getHeight() + 1) * frame.getWidth() / 2;
        printf("[%08llu] %8d\n", (unsigned long long)frame.getTimestamp(), pDepth[middleIndex]);
        printf("Width: %d Height: %d\n", frame.getWidth(), frame.getHeight());
    }

    depth.stop();
    depth.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}

I had the same problem, but solved it by referencing the NiViewer example in OpenNI2. Apparently, after you start the stream (either depth or color), you have to stop it to change the resolution, and then start it again:
const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH);
const openni::Array< openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
depth.stop();
rc = depth.setVideoMode(modesDepth[4]);
depth.start();
I confirmed that this works with an Asus Xtion on OpenNI2.
Hope this helps!

Final conclusion:
Actually, it is the Xtion's own problem (maybe hardware-related).
If you want just one of depth or color to be 640*480, and the other to be 320*240, it will work. I can post my code if you want.
Details
Some of the answers above made a mistake: even NiViewer.exe itself doesn't allow depth 640*480 and color 640*480 at the same time.
Note: don't be misled by the visualization in NiViewer.exe; the displayed video stream is large, but that does not mean 640*480. It is actually initialized with
depth: 320*240
color: 320*240
When you set either of the modes to 640*480, it still works, i.e.
depth: 640*480
color: 320*240
or
depth: 320*240
color: 640*480
But when you want both of them at the highest resolution:
depth: 640*480
color: 640*480
the viewer starts encountering acute frame drops in the depth stream (in my case). Since the viewer retrieves depth frames in a non-blocking way (the default sample code is written in a blocking way), you still see the color update normally, while the depth updates only every two seconds or even more.
To conclude:
You can only set one of depth or color to 640*480, with the other at 320*240.
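A minimal sketch of the mixed-resolution setup that works for me (the mode indices follow the listing in the question; query getSupportedVideoModes() on your own device instead of hard-coding them):
// Sketch: depth at 640x480 plus color at 320x240, assuming an already
// opened OpenNI2 Device named "device" (error handling omitted for brevity).
VideoStream depth, color;
depth.create(device, SENSOR_DEPTH);
color.create(device, SENSOR_COLOR);

const SensorInfo* dinfo = device.getSensorInfo(SENSOR_DEPTH);
const SensorInfo* cinfo = device.getSensorInfo(SENSOR_COLOR);

// index 4 = 640x480 @ 30 fps, 1 mm on the device in the question
depth.setVideoMode(dinfo->getSupportedVideoModes()[4]);
// pick a 320x240 mode for color; index 0 on my device
color.setVideoMode(cinfo->getSupportedVideoModes()[0]);

depth.start();
color.start();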

Related

Set snd_pcm_sw_params_set_stop_threshold to boundary, still getting underrun on snd_pcm_writei

The question says it all. I am going in circles here. I set the stop threshold to the boundary with snd_pcm_sw_params_set_stop_threshold (and to zero too, just for fun) and I am still getting buffer-underrun errors on snd_pcm_writei. I cannot understand why. The documentation is pretty clear on this:
If the stop threshold is equal to boundary (also software parameter - sw_param) then automatic stop will be disabled
Here is a minimal reproducible example:
#include <alsa/asoundlib.h>
#include <unistd.h> // usleep
#include <iostream>

#define AUDIO_DEV "default"
#define AC_FRAME_SIZE 960
#define AC_SAMPLE_RATE 48000
#define AC_CHANNELS 2

// BUILD: g++ -o main main.cpp -lasound

using namespace std;

int main() {
    int err;
    unsigned int i;
    snd_pcm_t *handle;
    snd_pcm_sframes_t frames;
    snd_pcm_uframes_t boundary;
    snd_pcm_sw_params_t *sw;
    snd_pcm_hw_params_t *params;
    unsigned int s_rate;
    unsigned int buffer_time;
    snd_pcm_uframes_t f_size;
    unsigned char buffer[AC_FRAME_SIZE * 2];
    int rc;

    for (i = 0; i < sizeof(buffer); i++)
        buffer[i] = random() & 0xff;

    if ((err = snd_pcm_open(&handle, AUDIO_DEV, SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
        cout << "open error " << snd_strerror(err) << endl;
        return 0;
    }

    s_rate = AC_SAMPLE_RATE;
    f_size = AC_FRAME_SIZE;
    buffer_time = 2500;
    cout << s_rate << " " << f_size << endl;

    snd_pcm_hw_params_alloca(&params);
    snd_pcm_hw_params_any(handle, params);
    snd_pcm_hw_params_set_access(handle, params, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(handle, params, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(handle, params, AC_CHANNELS);
    snd_pcm_hw_params_set_rate_near(handle, params, &s_rate, 0);
    snd_pcm_hw_params_set_period_size_near(handle, params, &f_size, 0);
    cout << s_rate << " " << f_size << endl;

    rc = snd_pcm_hw_params(handle, params);
    if (rc < 0) {
        cout << "hw_params error " << snd_strerror(rc) << endl; // was printing err instead of rc
        return 0;
    }

    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(handle, sw);
    snd_pcm_sw_params_get_boundary(sw, &boundary);
    snd_pcm_sw_params_set_stop_threshold(handle, sw, boundary);
    rc = snd_pcm_sw_params(handle, sw);
    if (rc < 0) {
        cout << "sw_params error " << snd_strerror(rc) << endl; // was printing err instead of rc
        return 0;
    }
    snd_pcm_sw_params_current(handle, sw);
    snd_pcm_sw_params_get_stop_threshold(sw, &boundary);
    cout << "VALUE " << boundary << endl;

    for (i = 0; i < 1600; i++) {
        usleep(100 * 1000);
        frames = snd_pcm_writei(handle, buffer, f_size);
        if (frames < 0)
            frames = snd_pcm_recover(handle, frames, 0);
        if (frames < 0) {
            cout << "write error " << snd_strerror((int)frames) << endl;
            break;
        }
    }
    return 0;
}
Okay, I figured it out. For anyone who runs into this issue while having PipeWire or PulseAudio (or any other third-party non-ALSA backend) enabled as the "default" card: the solution is to not use PipeWire or Pulse directly. It seems that snd_pcm_sw_params_set_stop_threshold is not implemented properly in PipeWire/PulseAudio. You'll notice that if you disable PipeWire or Pulse, this code runs exactly the way you want it to.
Here is how you can disable PulseAudio (which was the issue on my system):
systemctl --user stop pulseaudio.socket
systemctl --user stop pulseaudio.service
A much better solution, though, is to set AUDIO_DEV to write directly to an ALSA card. You can find the names of these cards by running aplay -L. In most cases, updating AUDIO_DEV in my sample code to the following will fix the issue:
#define AUDIO_DEV "hw:0,0"

Using ffmpeg's avcodec_receive_frame(): why do I sometimes get these vertical lines in the decoded image?

I'm using the modern ffmpeg API, which instructs me to use avcodec_send_packet and avcodec_receive_frame. There are practically no usage examples on GitHub, so I couldn't compare with other code.
My code kind of works, but sometimes, for a second or two, the video gets decoded with vertical-line artifacts like this:
I thought it was a buffer-size problem, so I increased
const size_t bufferSize = 408304;
to
const size_t bufferSize = 10408304;
just to see, but the problem persists.
(The video size is 1920x1080, and this happens even when there is little motion on the screen.)
Here's my decoder class, which sends the decoded data to OpenGL in the line
this->frameUpdater->updateData(avFrame->data, avFrame->width, avFrame->height);
void FfmpegDecoder::decodeFrame(uint8_t* frameBuffer, int frameLength)
{
    if (frameLength <= 0) return;

    AVPacket* avPacket = av_packet_alloc();
    if (!avPacket) std::cout << "av packet error" << std::endl;
    avPacket->size = frameLength;
    avPacket->data = frameBuffer;

    // Disable ffmpeg's annoying output
    av_log_set_level(AV_LOG_QUIET);

    int sendPacketResult = avcodec_send_packet(avCodecContext, avPacket);
    if (!sendPacketResult) {
        int receiveFrameResult = avcodec_receive_frame(avCodecContext, avFrame);
        if (!receiveFrameResult) {
            this->frameUpdater->updateData(avFrame->data, avFrame->width, avFrame->height);
        } else if ((receiveFrameResult < 0) && (receiveFrameResult != AVERROR(EAGAIN)) && (receiveFrameResult != AVERROR_EOF)) {
            std::cout << "avcodec_receive_frame returned error " << receiveFrameResult << std::endl;
        } else {
            switch (receiveFrameResult) {
            // Not exactly an error, we just have to wait for more data
            case AVERROR(EAGAIN):
                break;
            // To be done: what does this error mean? I think it's literally the end of an mp4 file
            case AVERROR_EOF:
                std::cout << "avcodec_receive_frame AVERROR_EOF" << std::endl;
                break;
            // To be done: describe this error in std::cout before stopping
            default:
                std::cout << "avcodec_receive_frame returned error, stopping... " << receiveFrameResult << std::endl;
                break;
            }
        }
    } else {
        switch (sendPacketResult) {
        case AVERROR(EAGAIN):
            std::cout << "avcodec_send_packet EAGAIN" << std::endl;
            break;
        case AVERROR_EOF:
            std::cout << "avcodec_send_packet AVERROR_EOF" << std::endl;
            break;
        default:
            break;
        }
    }
    av_packet_free(&avPacket); // the original version leaked this packet on every call
}
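For comparison, here is my understanding of the canonical pattern from the FFmpeg documentation: send one packet, then keep calling avcodec_receive_frame until it returns AVERROR(EAGAIN), since a single packet can yield zero or several frames (a sketch using the same avCodecContext / avFrame / frameUpdater members as above; not necessarily the cause of the artifacts):
// Sketch of the documented send/drain loop.
int ret = avcodec_send_packet(avCodecContext, avPacket);
if (ret < 0)
    std::cout << "avcodec_send_packet error " << ret << std::endl;
while (ret >= 0) {
    ret = avcodec_receive_frame(avCodecContext, avFrame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break; // decoder needs more input, or the stream ended
    if (ret < 0) {
        std::cout << "avcodec_receive_frame error " << ret << std::endl;
        break;
    }
    this->frameUpdater->updateData(avFrame->data, avFrame->width, avFrame->height);
}
av_packet_free(&avPacket);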
I think that people who know how decoding works might see instantly why the image gets decoded like this. It'd be very helpful to know why. Thanks!

ALSA takes 1024 buffers before interrupt

I'm trying to make my ALSA interface work with as low a latency as possible. The ALSA documentation about all the frames / periods / buffers is very confusing, so I'm asking here.
The code below forces ALSA to actually read 1024 times the buffer size (I set the buffer size to 1024 bytes); the writeAudio function takes 1024 * 1024 bytes before it slows down. I would like to keep this frame count as low as possible, so I could track the playtime in my application itself. I tried to set the period size to 2 with snd_pcm_hw_params_set_periods (I guessed it would then slow down the reading after 2 * 1024 bytes had been written to the buffer), but this change doesn't alter the behaviour; the player still buffers 1024 * 1024 bytes before the buffering slows down to the rate the audio is played from the speakers.
TL;DR: a 1024 * 1024 byte buffer is way too much for me; how do I lower it? Below is my code. And yes, I'm playing unsigned 8-bit with mono output.
int32_t ALSAPlayer::initPlayer(ALSAConfig cfg)
{
    std::cout << "=== INITIALIZING ALSA ===" << std::endl;
    if (!cfg.channels || !cfg.rate)
    {
        std::cout << "ERROR: player config was bad" << std::endl;
        return -1;
    }
    m_channels = cfg.channels;
    m_rate = cfg.rate;
    m_frames = 1024;

    uint32_t tmp;
    uint32_t buff_size;
    int dir = 0;

    /* Open the PCM device in playback mode */
    if ((pcm = snd_pcm_open(&pcm_handle, PCM_DEVICE, SND_PCM_STREAM_PLAYBACK, 0)) < 0)
    {
        printf("ERROR: Can't open \"%s\" PCM device. %s\n", PCM_DEVICE, snd_strerror(pcm));
    }

    snd_pcm_hw_params_alloca(&params);
    snd_pcm_hw_params_any(pcm_handle, params);

    if ((pcm = snd_pcm_hw_params_set_access(pcm_handle, params, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0)
    {
        printf("ERROR: Can't set interleaved mode. %s\n", snd_strerror(pcm));
    }
    if ((pcm = snd_pcm_hw_params_set_format(pcm_handle, params, SND_PCM_FORMAT_S8)) < 0)
    {
        printf("ERROR: Can't set format. %s\n", snd_strerror(pcm));
    }
    if ((pcm = snd_pcm_hw_params_set_channels(pcm_handle, params, m_channels)) < 0)
    {
        printf("ERROR: Can't set channels number. %s\n", snd_strerror(pcm));
    }
    if ((pcm = snd_pcm_hw_params_set_rate_near(pcm_handle, params, &m_rate, &dir)) < 0)
    {
        printf("ERROR: Can't set rate. %s\n", snd_strerror(pcm));
    }

    // force the ALSA interface to use exactly *m_frames* frames per period
    snd_pcm_hw_params_set_period_size(pcm_handle, params, m_frames, dir);

    /* Write parameters */
    if ((pcm = snd_pcm_hw_params(pcm_handle, params)) < 0)
    {
        printf("ERROR: Can't set hardware parameters. %s\n", snd_strerror(pcm));
    }

    std::cout << "ALSA output device name: " << snd_pcm_name(pcm_handle) << std::endl;
    std::cout << "ALSA output device state: " << snd_pcm_state_name(snd_pcm_state(pcm_handle)) << std::endl;
    snd_pcm_hw_params_get_channels(params, &tmp);
    std::cout << "ALSA output device channels: " << tmp << std::endl;
    snd_pcm_hw_params_get_rate(params, &tmp, 0);
    std::cout << "ALSA output device rate: " << tmp << std::endl;
    snd_pcm_hw_params_get_period_size(params, &m_frames, &dir);
    buff_size = m_frames * m_channels;
    std::cout << "ALSA output device frames size: " << m_frames << std::endl;
    std::cout << "ALSA output device buffer size: " << buff_size << " (should be 1024)" << std::endl;
    return 0;
}

int ALSAPlayer::writeAudio(byte* buffer, uint32_t buffSize)
{
    int pcmRetVal;
    if (buffSize == 0)
    {
        snd_pcm_drain(pcm_handle);
        snd_pcm_close(pcm_handle);
        return -1;
    }
    if ((pcmRetVal = snd_pcm_writei(pcm_handle, buffer, m_frames)) == -EPIPE)
    {
        snd_pcm_prepare(pcm_handle);
    }
    else if (pcmRetVal < 0) // was: pcm < 0, which checked the wrong variable
    {
        std::cout << "ERROR: could not write to audio interface" << std::endl;
    }
    return 0;
}
(One frame is not necessarily one byte; please don't confuse them.)
This code does not set the buffer size.
snd_pcm_hw_params_set_periods() sets the number of periods.
snd_pcm_hw_params_set_period_size() would set the period size.
To have a 2048-frame buffer with two periods with 1024 frames each, set the number of periods to 2, and the period size to 1024.
You must check all function calls for errors, including snd_pcm_hw_params_set_period_size().
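A sketch of that configuration in the style of the question's initPlayer() (the _near variants let the driver round to the nearest value the hardware supports; the exact numbers here are illustrative):
// In initPlayer(), after setting access/format/channels/rate:
snd_pcm_uframes_t period_size = 1024; // frames per period
int dir = 0;
if ((pcm = snd_pcm_hw_params_set_period_size_near(pcm_handle, params, &period_size, &dir)) < 0)
    printf("ERROR: Can't set period size. %s\n", snd_strerror(pcm));

unsigned int periods = 2; // 2 periods x 1024 frames = a 2048-frame buffer
if ((pcm = snd_pcm_hw_params_set_periods_near(pcm_handle, params, &periods, &dir)) < 0)
    printf("ERROR: Can't set period count. %s\n", snd_strerror(pcm));

if ((pcm = snd_pcm_hw_params(pcm_handle, params)) < 0)
    printf("ERROR: Can't set hardware parameters. %s\n", snd_strerror(pcm));

// Read back what the driver actually granted; it may have rounded the values.
snd_pcm_hw_params_get_period_size(params, &m_frames, &dir);
printf("period size granted: %lu frames\n", (unsigned long)m_frames);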

FFmpeg: Live streaming using RTSP, C++

I want to receive a video stream from a camera, process it with OpenCV (for testing, draw a red rectangle), and live-stream the result.
I can already read camera frames, convert them to an OpenCV Mat, and change them back to an AVFrame.
From the console I'm starting an RTSP server using: ffplay -rtsp_flags listen -i rtsp://127.0.0.1:8765/live.sdp
The problem shows up when I try to call avio_open():
av_register_all();
avformat_network_init();
avcodec_register_all();
(...)
avformat_alloc_output_context2(&outputContext, NULL, "rtsp", outputPath.c_str());
outputFormat = outputContext->oformat;
cout << "Codec = " << avcodec_get_name(outputFormat->video_codec) << endl;
if (outputFormat->video_codec != AV_CODEC_ID_NONE) {
    videoStream = add_stream(outputContext, &outputVideoCodec, outputFormat->video_codec);
}

char errorBuff[80];
int k = avio_open(&outputContext->pb, outputPath.c_str(), AVIO_FLAG_WRITE);
if (k < 0) {
    cout << "code: " << k << endl;
    fprintf(stderr, "%s \n", av_make_error_string(errorBuff, 80, k));
}
if (avformat_write_header(outputContext, NULL) < 0) {
    fprintf(stderr, "Error occurred when writing header");
}
where outputPath = "rtsp://127.0.0.1:8765/live.sdp".
avformat_alloc_output_context2 returns 0, but avio_open < 0, so the app prints:
code: -1330794744
Protocol not found
I have no idea what is wrong. I am using an ffmpeg build from https://ffmpeg.zeranoe.com/builds/ (64-bit Dev).
Enable the file protocol by doing:
--enable-protocol=file
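(If you are building FFmpeg yourself, that flag goes on the configure line, e.g. ./configure --enable-protocol=file; a prebuilt binary cannot be changed after the fact, so you would need a build that includes the protocol.)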

OpenCV stereo camera error

I am working on a stereo camera project. I have two 5-megapixel cameras; I connected them to my laptop and ran my code, but when I run it I get this error: libv4l2: error turning on stream: No space left on device
I'm on a Linux OS. Below is my C++ OpenCV code. Are there any ideas how to fix this? I tried other code I found on the net, but it still gives me the same error.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap1(1);
    cv::VideoCapture cap2(2);
    if (!cap1.isOpened())
    {
        std::cout << "Cannot open the video cam [1]" << std::endl;
        return -1;
    }
    if (!cap2.isOpened())
    {
        std::cout << "Cannot open the video cam [2]" << std::endl;
        return -1;
    }

    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FPS, 15);

    // Values taken from the output of Version 1 and used to set up the exact same parameters with the exact same values!
    cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    cap2.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    cv::namedWindow("cam[1]", CV_WINDOW_AUTOSIZE);
    cv::namedWindow("cam[2]", CV_WINDOW_AUTOSIZE);

    while (1)
    {
        cv::Mat frame1, frame2;
        bool bSuccess1 = cap1.read(frame1);
        bool bSuccess2 = cap2.read(frame2);
        if (!bSuccess1)
        {
            std::cout << "Cannot read a frame from video stream [1]" << std::endl;
            break;
        }
        if (!bSuccess2)
        {
            std::cout << "Cannot read a frame from video stream [2]" << std::endl;
            break;
        }
        cv::imshow("cam[1]", frame1);
        cv::imshow("cam[2]", frame2);
        if (cv::waitKey(30) == 27)
        {
            std::cout << "ESC key is pressed by user" << std::endl;
            break;
        }
    }
    return 0;
}