FFmpeg: Live streaming using RTSP in C++

I want to receive a video stream from a camera, process it using OpenCV (for testing, draw a red rectangle), and live-stream the result.
I can already read camera frames, convert them to an OpenCV Mat, and change them back to an AVFrame.
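For context, a sketch of that AVFrame to cv::Mat step (illustrative only; it assumes a BGR24 target and libswscale, since the exact conversion code is not shown here):
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}
#include <opencv2/opencv.hpp>

// Convert a decoded AVFrame to a BGR cv::Mat via libswscale.
cv::Mat avframeToMat(const AVFrame* frame)
{
    SwsContext* sws = sws_getContext(frame->width, frame->height, (AVPixelFormat)frame->format,
                                     frame->width, frame->height, AV_PIX_FMT_BGR24,
                                     SWS_BILINEAR, NULL, NULL, NULL);
    cv::Mat mat(frame->height, frame->width, CV_8UC3);
    uint8_t* dst[4] = { mat.data, NULL, NULL, NULL };
    int dstLinesize[4] = { (int)mat.step, 0, 0, 0 };
    sws_scale(sws, frame->data, frame->linesize, 0, frame->height, dst, dstLinesize);
    sws_freeContext(sws);
    return mat;
}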
From the console I start an RTSP server using: ffplay -rtsp_flags listen -i rtsp://127.0.0.1:8765/live.sdp
The problem appears when I try to call avio_open():
av_register_all();
avformat_network_init();
avcodec_register_all();
(...)
avformat_alloc_output_context2(&outputContext, NULL, "rtsp", outputPath.c_str());
outputFormat = outputContext->oformat;
cout << "Codec = " << avcodec_get_name(outputFormat->video_codec) << endl;
if (outputFormat->video_codec != AV_CODEC_ID_NONE) {
    videoStream = add_stream(outputContext, &outputVideoCodec, outputFormat->video_codec);
}

char errorBuff[80];
int k = avio_open(&outputContext->pb, outputPath.c_str(), AVIO_FLAG_WRITE);
if (k < 0) {
    cout << "code: " << k << endl;
    fprintf(stderr, "%s \n", av_make_error_string(errorBuff, 80, k));
}
if (avformat_write_header(outputContext, NULL) < 0) {
    fprintf(stderr, "Error occurred when writing header");
}
}
Where outputPath = "rtsp://127.0.0.1:8765/live.sdp".
avformat_alloc_output_context2 returns 0, but avio_open returns < 0, so the app prints:
code: -1330794744
Protocol not found
I have no idea what is wrong. I am using the FFmpeg build from https://ffmpeg.zeranoe.com/builds/ (64-bit Dev).

Enable the file protocol by doing:
--enable-protocol=file
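If you are not sure which protocols your FFmpeg build actually contains, you can list them at runtime. A minimal sketch using avio_enum_protocols() (a standard libavformat call) to print the compiled-in output protocols:
#include <iostream>
extern "C" {
#include <libavformat/avformat.h>
}

int main() {
    void* opaque = NULL;
    const char* name;
    std::cout << "Output protocols compiled into this build:\n";
    while ((name = avio_enum_protocols(&opaque, 1)) != NULL) // 1 = output, 0 = input
        std::cout << "  " << name << '\n';
    return 0;
}
If rtsp (or file) is missing from that list, the build was configured without it and avio_open() fails with "Protocol not found".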

Related

How to open GStreamer pipeline in OpenCV

I'm a software engineer in South Korea.
I'm trying to open a webm video using a GStreamer pipeline in an OpenCV program, but I can't find any solution to figure it out.
I'm using OpenCV 3.4.1 in the Visual Studio 2019 Community IDE.
Below is my code.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string pipeline = "playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";
    std::cout << "Using pipeline: \n" << pipeline << "\n";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return -1;
    }

    cv::namedWindow("CSI Camera", cv::WINDOW_AUTOSIZE);
    cv::Mat img;
    std::cout << "Hit ESC to exit" << "\n";
    while (true)
    {
        if (!cap.read(img)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }
        cv::imshow("CSI Camera", img);
        int keycode = cv::waitKey(10) & 0xff;
        if (keycode == 27) break;
    }
    cap.release();
    cv::destroyAllWindows();
    return 0;
}
It is very simple code, like a tutorial, but I can't open the VideoCapture cap...
Has anybody tried this or figured it out?
Best regards
To use a GStreamer pipeline in a cv::VideoCapture it must end in an appsink. The appsink is a GStreamer element that allows an application (in this case OpenCV) to take buffers out of the pipeline.
Modify your pipeline to look like the following:
std::string pipeline = "uridecodebin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm ! videoconvert ! video/x-raw,format=RGB ! appsink";
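Note that this only works if OpenCV itself was built with GStreamer support; the stock Windows binaries often are not. A quick way to check (nothing assumed here beyond the standard OpenCV API):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Prints the build configuration; look for a line like "GStreamer: YES".
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
If GStreamer shows NO, cv::VideoCapture with cv::CAP_GSTREAMER will always fail to open, regardless of the pipeline.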

opencv stereo camera error

I'm working on a stereo camera project. I have two cameras, 5 megapixels each, connected to my laptop. When I run my code I get this error: libv4l2: error turning on stream: No space left on device
I'm on a Linux OS. Below is my C++ OpenCV code. Are there any ideas how to fix it? I tried other code I found on the net, but it still gives me the same error.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap1(1);
    cv::VideoCapture cap2(2);
    if (!cap1.isOpened())
    {
        std::cout << "Cannot open the video cam [1]" << std::endl;
        return -1;
    }
    if (!cap2.isOpened())
    {
        std::cout << "Cannot open the video cam [2]" << std::endl;
        return -1;
    }

    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FPS, 15);
    // Values taken from output of Version 1 and used to set up the exact same parameters with the exact same values!
    cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    cap2.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    cv::namedWindow("cam[1]", CV_WINDOW_AUTOSIZE);
    cv::namedWindow("cam[2]", CV_WINDOW_AUTOSIZE);

    while (1)
    {
        cv::Mat frame1, frame2;
        bool bSuccess1 = cap1.read(frame1);
        bool bSuccess2 = cap2.read(frame2);
        if (!bSuccess1)
        {
            std::cout << "Cannot read a frame from video stream [1]" << std::endl;
            break;
        }
        if (!bSuccess2)
        {
            std::cout << "Cannot read a frame from video stream [2]" << std::endl;
            break;
        }
        cv::imshow("cam[1]", frame1);
        cv::imshow("cam[2]", frame2);
        if (cv::waitKey(30) == 27)
        {
            std::cout << "ESC key is pressed by user" << std::endl;
            break;
        }
    }
    return 0;
}

Change resolution on openni2 not working

I want to read depth frames at 640x480.
I am using Windows 8.1 64-bit, OpenNI2 32-bit, Kinect: PSMP05000, PSCM04900 (PrimeSense).
I took code references from here:
cannot set VGA resolution
Simple Read
Combined into this code:
main.cpp
OniSampleUtilities.h
SimpleRead.vcxproj
It should compile if you install OpenNI2 32-bit from here:
OpenNI 2
#include "iostream"
#include "OpenNI.h"
#include "OniSampleUtilities.h"
#define SAMPLE_READ_WAIT_TIMEOUT 2000 //2000ms
using namespace openni;
using namespace std;
int main()
{
Status rc = OpenNI::initialize();
if (rc != STATUS_OK)
{
cout << "Initialize failed:" << endl << OpenNI::getExtendedError() << endl;
return 1;
}
Device device;
rc = device.open(ANY_DEVICE);
if (rc != STATUS_OK)
{
cout << "Couldn't open device" << endl << OpenNI::getExtendedError() << endl;
return 2;
}
VideoStream depth;
if (device.getSensorInfo(SENSOR_DEPTH) != NULL)
{
rc = depth.create(device, SENSOR_DEPTH);
if (rc != STATUS_OK)
{
cout << "Couldn't create depth stream" << endl << OpenNI::getExtendedError() << endl;
return 3;
}
}
rc = depth.start();
if (rc != STATUS_OK)
{
cout << "Couldn't start the depth stream" << endl << OpenNI::getExtendedError() << endl;
return 4;
}
VideoFrameRef frame;
// set resolution
// depth modes
cout << "Depth modes" << endl;
const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH); // select index=4 640x480, 30 fps, 1mm
const openni::Array< openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
for (int i = 0; i<modesDepth.getSize(); i++) {
printf("%i: %ix%i, %i fps, %i format\n", i, modesDepth[i].getResolutionX(), modesDepth[i].getResolutionY(),
modesDepth[i].getFps(), modesDepth[i].getPixelFormat()); //PIXEL_FORMAT_DEPTH_1_MM = 100, PIXEL_FORMAT_DEPTH_100_UM
}
rc = depth.setVideoMode(modesDepth[0]);
if (openni::STATUS_OK != rc)
{
cout << "error: depth fromat not supprted..." << endl;
}
system("pause");
while (!wasKeyboardHit())
{
int changedStreamDummy;
VideoStream* pStream = &depth;
rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
if (rc != STATUS_OK)
{
cout << "Wait failed! (timeout is " << SAMPLE_READ_WAIT_TIMEOUT << " ms)" << endl << OpenNI::getExtendedError() << endl;
continue;
}
rc = depth.readFrame(&frame);
if (rc != STATUS_OK)
{
cout << "Read failed!" << endl << OpenNI::getExtendedError() << endl;
continue;
}
if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
{
cout << "Unexpected frame format" << endl;
continue;
}
DepthPixel* pDepth = (DepthPixel*)frame.getData();
int middleIndex = (frame.getHeight()+1)*frame.getWidth()/2;
printf("[%08llu] %8d\n", (long long)frame.getTimestamp(), pDepth[middleIndex]);
}
depth.stop();
depth.destroy();
device.close();
OpenNI::shutdown();
return 0;
}
There are 6 modes of operation:
0: 320x240, 30 fps, 100 format
1: 320x240, 30 fps, 101 format
2: 320x240, 60 fps, 100 format
3: 320x240, 60 fps, 101 format
4: 640x480, 30 fps, 100 format
5: 640x480, 30 fps, 101 format
It can read only in modes 0-3.
In modes 4 and 5 I get a timeout.
How can I read depth frames at 640x480?
Thanks for the help,
Tal.
====================================================
New information:
I also used this line, and I get the same results:
const openni::SensorInfo* sinfo = &(depth.getSensorInfo());
This line never executes in any mode:
cout << "error: depth format not supported..." << endl;
In modes 4 and 5 this line always executes:
cout << "Wait failed! (timeout is " << SAMPLE_READ_WAIT_TIMEOUT << " ms)" << endl << OpenNI::getExtendedError() << endl;
I think it may be a bug in OpenNI2.
With OpenNI1, I can read depth images at 640x480 on the same computer, OS, and device.
Maybe I am wrong, but I am almost sure that the problem is the order in which you are doing it.
I think you should change the video mode after depth.create(device, SENSOR_DEPTH) and before depth.start().
If I remember correctly, once the stream has started you may not change its resolution.
So it should be something like this:
...
if (device.getSensorInfo(SENSOR_DEPTH) != NULL)
{
    rc = depth.create(device, SENSOR_DEPTH);
    if (rc != STATUS_OK)
    {
        cout << "Couldn't create depth stream" << endl << OpenNI::getExtendedError() << endl;
        return 3;
    }
}

// set resolution
// depth modes
cout << "Depth modes" << endl;
const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH);
const openni::Array<openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
rc = depth.setVideoMode(modesDepth[0]);
if (openni::STATUS_OK != rc)
{
    cout << "error: depth format not supported..." << endl;
}

rc = depth.start();
if (rc != STATUS_OK)
{
    cout << "Couldn't start the depth stream" << endl << OpenNI::getExtendedError() << endl;
    return 4;
}

VideoFrameRef frame;
...
I hope that this helps you; if not, please add a comment. I have similar code working in the git repository I showed you the other day, tested with a PrimeSense Carmine camera.
In my case (Asus Xtion PRO in a USB 3.0 port, OpenNI2, Windows 8.1), it seems there is something wrong with OpenNI2 (or its driver) that prevents me from changing the resolution in code. NiViewer simply hangs, or frame rates drop significantly, if the color resolution is set to 640x480.
However, on Windows, I managed to change the resolution by changing the settings in PS1080.ini in the OpenNI2/Tools/OpenNI2/Drivers folder. In the ini file, for Asus, make sure
UsbInterface = 2
is enabled. By default it's zero. Then set Resolution = 1 for the depth and image sections.
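For illustration, the relevant lines might look like the following (the section names are assumptions based on my file; check your own PS1080.ini, as layouts differ between driver versions):
; PS1080.ini (OpenNI2/Tools/OpenNI2/Drivers) -- illustrative sketch
[Device]
UsbInterface = 2   ; 0 by default

[Depth]
Resolution = 1     ; 1 = VGA (640x480)

[Image]
Resolution = 1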
My Asus Xtion firmware is v5.8.22.
I've tried the method @api55 mentioned and it works. The code and results are below.
But there is a problem when I make a similar change to the OpenNI sample code "SampleViewer" so that I can change the resolution freely. When I set the resolution to 320*240 all is well. However, when I change it to 640*480, although the program still reads frames in (at an apparently slower rate), the display just gets stuck.
Update (2015-12-27):
I then tested the aforementioned sample viewer with a Kinect 1.0 depth camera. Since the color camera has a resolution no less than 640*480, I could not try 320*240, but the program works well with Kinect 1.0 at a resolution of 640*480. In conclusion, I think there must be some problem with the ASUS Xtion camera.
#include <iostream>
#include <cstdio>
#include <vector>
#include <OpenNI.h>
#include "OniSampleUtilities.h"

#pragma comment(lib, "OpenNI2")

#define SAMPLE_READ_WAIT_TIMEOUT 2000 // 2000 ms

using namespace openni;
using namespace std;

int main()
{
    Status rc = OpenNI::initialize();
    if (rc != STATUS_OK)
    {
        printf("Initialize failed:\n%s\n", OpenNI::getExtendedError());
        return 1;
    }

    Device device;
    openni::Array<openni::DeviceInfo> deviceInfoList;
    OpenNI::enumerateDevices(&deviceInfoList);
    for (int i = 0; i < deviceInfoList.getSize(); i++)
    {
        printf("%d: Uri: %s\n"
            "Vendor: %s\n"
            "Name: %s\n", i, deviceInfoList[i].getUri(), deviceInfoList[i].getVendor(), deviceInfoList[i].getName());
    }
    rc = device.open(deviceInfoList[0].getUri());
    if (rc != STATUS_OK)
    {
        printf("Couldn't open device\n%s\n", OpenNI::getExtendedError());
        return 2;
    }

    VideoStream depth;

    // set resolution
    // depth modes
    printf("\nDepth modes\n");
    const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH); // select index=4: 640x480, 30 fps, 1 mm
    if (sinfo == NULL)
    {
        printf("Couldn't get device info\n%s\n", OpenNI::getExtendedError());
        return 3;
    }

    rc = depth.create(device, SENSOR_DEPTH);
    if (rc != STATUS_OK)
    {
        printf("Couldn't create depth stream\n%s\n", OpenNI::getExtendedError());
        return 4;
    }

    const openni::Array<openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
    vector<int> item;
    for (int i = 0; i < modesDepth.getSize(); i++) {
        printf("%i: %ix%i, %i fps, %i format\n", i, modesDepth[i].getResolutionX(), modesDepth[i].getResolutionY(),
            modesDepth[i].getFps(), modesDepth[i].getPixelFormat()); // PIXEL_FORMAT_DEPTH_1_MM = 100, PIXEL_FORMAT_DEPTH_100_UM = 101
        if (modesDepth[i].getResolutionX() == 640 && modesDepth[i].getResolutionY() == 480)
            item.push_back(i);
    }
    int item_idx = item[0];
    printf("Choose mode %d\nWidth: %d, Height: %d\n", item_idx, modesDepth[item_idx].getResolutionX(), modesDepth[item_idx].getResolutionY());
    rc = depth.setVideoMode(modesDepth[item_idx]);
    if (rc != STATUS_OK)
    {
        printf("error: depth format not supported...\n");
        return 5;
    }

    rc = depth.start();
    if (rc != STATUS_OK)
    {
        printf("Couldn't start the depth stream\n%s\n", OpenNI::getExtendedError());
        return 6;
    }

    VideoFrameRef frame;
    printf("\nCurrent resolution:\n");
    printf("Width: %d Height: %d\n", depth.getVideoMode().getResolutionX(), depth.getVideoMode().getResolutionY());
    system("pause");

    while (!wasKeyboardHit())
    {
        int changedStreamDummy;
        VideoStream* pStream = &depth;
        rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
        if (rc != STATUS_OK)
        {
            printf("Wait failed! (timeout is \" %d \" ms)\n%s\n", SAMPLE_READ_WAIT_TIMEOUT, OpenNI::getExtendedError());
            continue;
        }

        rc = depth.readFrame(&frame);
        if (rc != STATUS_OK)
        {
            printf("Read failed!\n%s\n", OpenNI::getExtendedError());
            continue;
        }

        if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
        {
            printf("Unexpected frame format\n");
            continue;
        }

        DepthPixel* pDepth = (DepthPixel*)frame.getData();
        int middleIndex = (frame.getHeight() + 1) * frame.getWidth() / 2;
        printf("[%08llu] %8d\n", (long long)frame.getTimestamp(), pDepth[middleIndex]);
        printf("Width: %d Height: %d\n", frame.getWidth(), frame.getHeight());
    }

    depth.stop();
    depth.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}
I had the same problem, but solved it by referencing the NiViewer example in OpenNI2. Apparently, after you start the stream (either depth or color), you have to stop it to change the resolution and then start it again:
const openni::SensorInfo* sinfo = device.getSensorInfo(openni::SENSOR_DEPTH);
const openni::Array< openni::VideoMode>& modesDepth = sinfo->getSupportedVideoModes();
depth.stop();
rc = depth.setVideoMode(modesDepth[4]);
depth.start();
I confirmed that this works on Asus Xtion on OpenNI2.
Hope this helps!
Final conclusion:
Actually, it is a problem with the Xtion itself (maybe related to hardware).
If you want just one of depth or color to be 640*480, and the other to be 320*240, it'll work. I can post my code if you want.
Details
Some of the answers above made a mistake: even NiViewer.exe itself doesn't allow depth 640*480 and color 640*480 at the same time.
Note: don't be misled by the visualization in NiViewer.exe; the displayed video stream is large, but that does not actually mean 640*480. It is initialized with
depth: 320*240
color: 320*240
When you set either of the modes to 640*480, it still works, i.e.:
depth: 640*480
color: 320*240
or
depth: 320*240
color: 640*480
But when you want both of them to be the highest resolution:
depth: 640*480
color: 640*480
The viewer program starts encountering acute frame drops in the depth stream (in my case), but since the viewer retrieves depth frames in a non-blocking way (the default sample code is written in a blocking way), you still see the color update normally, while the depth updates every two seconds or even more.
To conclude
You can only set one of depth or color to 640*480, with the other at 320*240.
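A minimal sketch of that working combination, following the API usage shown in the answers above (the mode indices are illustrative for my device; pick them from getSupportedVideoModes() as in the earlier code):
// Depth at 640x480 and color at 320x240 at the same time.
VideoStream depth, color;
depth.create(device, SENSOR_DEPTH);
color.create(device, SENSOR_COLOR);

const openni::SensorInfo* dinfo = device.getSensorInfo(SENSOR_DEPTH);
const openni::SensorInfo* cinfo = device.getSensorInfo(SENSOR_COLOR);

depth.setVideoMode(dinfo->getSupportedVideoModes()[4]); // e.g. index 4: 640x480, 30 fps
color.setVideoMode(cinfo->getSupportedVideoModes()[0]); // e.g. index 0: 320x240, 30 fps

depth.start();
color.start();
Swapping which stream gets 640*480 works as well; only the 640*480 + 640*480 combination stalls.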

Decode Audio from Memory - C++

I have two functions:
an internet-socket function which receives MP3 data and writes it to a file,
a function which decodes MP3 files.
However, I would rather have the data that is currently written to disk decoded in memory by the decode function.
My decode function looks like this, and it is all initialized via
avformat_open_input(AVFormatContext, filename, NULL, NULL)
How can I set up the AVFormatContext without a filename, using only the in-memory buffer?
I thought I would post some code to illustrate how to achieve this. I have tried to comment it, but am pressed for time; it should all be relatively straightforward stuff, however. Return values are based on interpolation of the associated message into a hex version of 1337 speak converted to decimal values, and I have tried to keep it as light as possible in tone:)
#include <iostream>
#include <string>

extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
};

std::string tooManyChannels = "The audio stream (and its frames) has/have too many channels to properly fit in\n to frame->data. Therefore, to access the audio data, you need to use\nframe->extended_data to access the audio data."
    "It is a planar store, so\neach channel is in a different element.\n"
    " E.G.: frame->extended_data[0] has the data for channel 1\n"
    "       frame->extended_data[1] has the data for channel 2\n"
    "And so on.\n";

std::string nonPlanar = "Either the audio data is not planar, or there is not enough room in\n"
    "frame->data to store all the channel data. Either use\n"
    "frame->data\n or \nframe->extended_data to access the audio data;\n"
    "both should just point to the same data in this instance.\n";

std::string information1 = "If the frame is planar, each channel is in a separate element:\n"
    "frame->data[0]/frame->extended_data[0] contains data for channel 1\n"
    "frame->data[1]/frame->extended_data[1] contains data for channel 2\n";

std::string information2 = "If the frame is in packed format (and therefore not planar),\n"
    "then all the data is contained within:\n"
    "frame->data[0]/frame->extended_data[0]\n"
    "Similar to the manner in which some image formats have RGB(A) pixel data packed together,\n"
    "rather than containing separate R G B (and A) data.\n";

void printAudioFrameInfo(const AVCodecContext* codecContext, const AVFrame* frame)
{
    /*
    This url: http://ffmpeg.org/doxygen/trunk/samplefmt_8h.html#af9a51ca15301871723577c730b5865c5
    contains information on the type you will need to utilise to access the audio data.
    */

    // format the tabs etc. in this string to suit your font, they line up for mine but may not for yours:)
    std::cout << "Audio frame info:\n"
        << "\tSample count:\t\t" << frame->nb_samples << '\n'
        << "\tChannel count:\t\t" << codecContext->channels << '\n'
        << "\tFormat:\t\t\t" << av_get_sample_fmt_name(codecContext->sample_fmt) << '\n'
        << "\tBytes per sample:\t" << av_get_bytes_per_sample(codecContext->sample_fmt) << '\n'
        << "\tPlanar storage format?:\t" << av_sample_fmt_is_planar(codecContext->sample_fmt) << '\n';

    std::cout << "frame->linesize[0] tells you the size (in bytes) of each plane\n";

    if (codecContext->channels > AV_NUM_DATA_POINTERS && av_sample_fmt_is_planar(codecContext->sample_fmt))
    {
        std::cout << tooManyChannels;
    }
    else
    {
        std::cout << nonPlanar;
    }
    std::cout << information1 << information2;
}

int main()
{
    // You can change the filename for any other filename/supported format
    std::string filename = "../my file.ogg";

    // Initialize FFmpeg
    av_register_all();

    AVFrame* frame = avcodec_alloc_frame();
    if (!frame)
    {
        std::cout << "Error allocating the frame. Let's try again shall we?\n";
        return 666; // fail at start: 66 = number of the beast
    }

    // you can change the file name to whatever you need:)
    AVFormatContext* formatContext = NULL;
    if (avformat_open_input(&formatContext, filename.c_str(), NULL, NULL) != 0)
    {
        av_free(frame);
        std::cout << "Error opening file " << filename << "\n";
        return 800; // can't open file. 800 = Boo!
    }

    if (avformat_find_stream_info(formatContext, NULL) < 0)
    {
        av_free(frame);
        avformat_close_input(&formatContext);
        std::cout << "Error finding the stream information.\nCheck your paths/connections and the details you supplied!\n";
        return 57005; // stream info error. 0xDEAD in hex is 57005 in decimal
    }

    // Find the audio stream
    AVCodec* cdc = nullptr;
    int streamIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_AUDIO, -1, -1, &cdc, 0);
    if (streamIndex < 0)
    {
        av_free(frame);
        avformat_close_input(&formatContext);
        std::cout << "Could not find any audio stream in the file. Come on! I need data!\n";
        return 165; // no(0) (a)udio s(5)tream: 0A5 in hex = 165 in decimal
    }

    AVStream* audioStream = formatContext->streams[streamIndex];
    AVCodecContext* codecContext = audioStream->codec;
    codecContext->codec = cdc;

    if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
    {
        av_free(frame);
        avformat_close_input(&formatContext);
        std::cout << "Couldn't open the context with the decoder. I can decode but I need to have something to decode.\nAs I couldn't find anything I have surmised the decoded output is 0!\n(Well, can't have you thinking I am doing nothing, can we?)\n";
        return 1057; // can't find/open context. 1057 = lost
    }

    std::cout << "This stream has " << codecContext->channels << " channels with a sample rate of " << codecContext->sample_rate << "Hz\n";
    std::cout << "The data is presented in format: " << av_get_sample_fmt_name(codecContext->sample_fmt) << std::endl;

    AVPacket readingPacket;
    av_init_packet(&readingPacket);

    // Read the packets in a loop
    while (av_read_frame(formatContext, &readingPacket) == 0)
    {
        if (readingPacket.stream_index == audioStream->index)
        {
            AVPacket decodingPacket = readingPacket;

            // Audio packets can have multiple audio frames in a single packet
            while (decodingPacket.size > 0)
            {
                // Try to decode the packet into a frame(s)
                // Some frames rely on multiple packets, so we have to make sure the frame is finished
                // before utilising it
                int gotFrame = 0;
                int result = avcodec_decode_audio4(codecContext, frame, &gotFrame, &decodingPacket);
                if (result >= 0 && gotFrame)
                {
                    decodingPacket.size -= result;
                    decodingPacket.data += result;

                    // et voila! a decoded audio frame!
                    printAudioFrameInfo(codecContext, frame);
                }
                else
                {
                    decodingPacket.size = 0;
                    decodingPacket.data = nullptr;
                }
            }
        }

        // You MUST call av_free_packet() after each call to av_read_frame()
        // or you will leak so much memory on a large file you will need a memory-plumber!
        av_free_packet(&readingPacket);
    }

    // Some codecs will cause frames to be buffered in the decoding process.
    // If the CODEC_CAP_DELAY flag is set, there can be buffered frames that need to be flushed,
    // therefore flush them now....
    if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
    {
        av_init_packet(&readingPacket);
        // Decode all the remaining frames in the buffer
        int gotFrame = 0;
        while (avcodec_decode_audio4(codecContext, frame, &gotFrame, &readingPacket) >= 0 && gotFrame)
        {
            // Again: a fully decoded audio frame!
            printAudioFrameInfo(codecContext, frame);
        }
    }

    // Clean up! (unless you have a quantum memory machine with infinite RAM....)
    av_free(frame);
    avcodec_close(codecContext);
    avformat_close_input(&formatContext);

    return 0; // success!!!!!!!!
}
Hope this helps. Let me know if you need more info, and I will try and help out:)
There is also some very good tutorial information available at dranger.com which you may find useful.
Preallocate the format context and set its pb field, as suggested in the note in the avformat_open_input() documentation.
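A minimal sketch of that approach, assuming the MP3 data already sits in a memory buffer (the MemoryBuffer type and names here are illustrative): allocate an AVIOContext with a custom read callback, attach it to a preallocated AVFormatContext via its pb field, then call avformat_open_input() with a NULL filename.
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstring>

// Illustrative in-memory source; in the real program this would hold the socket data.
struct MemoryBuffer {
    const uint8_t* data;
    size_t size;
    size_t pos;
};

// Read callback invoked by FFmpeg whenever it needs more bytes.
static int readFromMemory(void* opaque, uint8_t* buf, int buf_size)
{
    MemoryBuffer* mem = static_cast<MemoryBuffer*>(opaque);
    size_t remaining = mem->size - mem->pos;
    if (remaining == 0)
        return AVERROR_EOF;
    int toCopy = buf_size < (int)remaining ? buf_size : (int)remaining;
    memcpy(buf, mem->data + mem->pos, toCopy);
    mem->pos += toCopy;
    return toCopy;
}

AVFormatContext* openFromMemory(MemoryBuffer* mem)
{
    const int ioBufferSize = 4096;
    // FFmpeg requires this buffer to be allocated with av_malloc().
    unsigned char* ioBuffer = (unsigned char*)av_malloc(ioBufferSize);
    AVIOContext* ioContext = avio_alloc_context(ioBuffer, ioBufferSize,
        0 /* not writable */, mem, &readFromMemory, NULL, NULL);

    AVFormatContext* formatContext = avformat_alloc_context();
    formatContext->pb = ioContext; // the key step: custom I/O instead of a file

    // NULL filename: all data now comes through the read callback.
    if (avformat_open_input(&formatContext, NULL, NULL, NULL) != 0)
        return NULL;
    return formatContext;
}
From there, the decoding loop in the answer above works unchanged, since it only ever touches formatContext.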

No Sound with SDL_mixer

I read and tried all the other posts on this topic, but nothing helped. When I try to play music with Mix_PlayChannel() I get no error, nor do I hear any sound! I have tried for hours now and nothing helps. The program just finishes happily, but there is no sound. I am using Ubuntu 12.04 64-bit.
Thanks!
[EDIT]
Here is the code I use:
#include <iostream>
#include <SDL/SDL.h>
#include <SDL/SDL_mixer.h>

int main(int argc, char** argv) {
    Mix_Music *music = NULL;
    Mix_Chunk *wave = NULL;

    SDL_Init(SDL_INIT_AUDIO);

    int audio_rate = 44100;
    Uint16 audio_format = AUDIO_S16; /* 16-bit stereo */
    int audio_channels = 1;
    int audio_buffers = 4096;

    if (Mix_OpenAudio(audio_rate, audio_format, audio_channels, audio_buffers) < 0) {
        printf("Unable to open audio!\n");
        exit(1);
    }
    if (Mix_Init(MIX_INIT_MOD) != MIX_INIT_MOD)
        std::cout << "error";

    Mix_Volume(-1, MIX_MAX_VOLUME);

    music = Mix_LoadMUS("1.wav");
    wave = Mix_LoadWAV("1.wav");
    if (music == NULL) {
        std::cout << "Could not load 1.wav\n";
        std::cout << Mix_GetError();
    }
    if (wave == NULL) {
        std::cout << "Could not load 1.wav\n";
        std::cout << Mix_GetError();
    }

    Mix_VolumeChunk(wave, MIX_MAX_VOLUME);
    Mix_VolumeMusic(MIX_MAX_VOLUME);

    Mix_PlayMusic(music, 0);
    std::cout << Mix_GetError();
    Mix_FadeInChannelTimed(-1, wave, 0, 100, 1);
    std::cout << Mix_GetError();

    return 1;
}
I tried both Mix_PlayMusic() and Mix_FadeInChannelTimed(). Both files are loaded correctly but not played. Sound is not muted; the wav file is playable with aplay and other tools. I checked with alsamixer that all channels are open and not too low.
I have now found out that the program needs to keep running until the sound has finished playing! I added a usleep() call after the play command and it plays nicely. It was mentioned nowhere that Mix_PlayMusic() does not block until playback finishes.
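For reference, instead of sleeping for a fixed time, you can poll SDL_mixer until playback has actually finished. A minimal sketch using only standard SDL_mixer calls:
// After Mix_PlayMusic(music, 0):
// Mix_PlayingMusic() returns non-zero while music is still playing,
// so keep the process alive until playback is done.
while (Mix_PlayingMusic()) {
    SDL_Delay(100); // sleep a bit to avoid busy-waiting
}

// The same idea for sample channels:
// Mix_Playing(-1) returns how many channels are currently playing.
while (Mix_Playing(-1) > 0) {
    SDL_Delay(100);
}
Without such a wait (or some other main loop), main() returns immediately and the process exits before the audio callback ever produces sound.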