Playing sound on SmartEyeglass causes warning - sony-smarteyeglass

I am following the guide here: https://developer.sony.com/develop/wearables/smarteyeglass-sdk/guides/use-bluetooth-for-audio-io/
When I play a raw MP3 on the SmartEyeglass using the code
SoundPool mSoundPool = new SoundPool(5, AudioManager.MODE_IN_CALL, 0);
// Use the play() method to request playback of an audio file.
int soundId = mSoundPool.load(mContext, R.raw.cannotregisterreceiver, 1);
mSoundPool.setOnLoadCompleteListener(new OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
        Log.d(TAG, "Playing sound");
        int returnValue = soundPool.play(sampleId, 1, 1, 0, 0, 1);
        Log.d(TAG, "Playing sound " + returnValue);
    }
});
I get a warning saying "AUDIO_OUTPUT_FLAG_FAST denied by client" and the audio is not played.
Any ideas why?

Related

Play one sound after another with SDL_mixer?

I have 4 sounds. I need to play sound 1; when it finishes, automatically play sound 2; when sound 2 finishes, automatically play sound 3; when sound 3 finishes, play sound 4. I'm using SDL_mixer 2.0, not SDL_sound. Is there a way?
#include <SDL.h>
#include <SDL_mixer.h>

int main() {
    int frequencia = 22050;        // sample rate
    Uint16 formato = AUDIO_S16SYS; // sample format
    int canal = 2;                 // 1 = mono; 2 = stereo
    int buffer = 4096;
    Mix_OpenAudio(frequencia, formato, canal, buffer);
    Mix_Chunk* sound_1;
    Mix_Chunk* sound_2;
    Mix_Chunk* sound_3;
    Mix_Chunk* sound_4;
    sound_1 = Mix_LoadWAV("D:\\sound1.wav");
    sound_2 = Mix_LoadWAV("D:\\sound2.wav");
    sound_3 = Mix_LoadWAV("D:\\sound3.wav");
    sound_4 = Mix_LoadWAV("D:\\sound4.wav");
    Mix_PlayChannel(-1, sound_1, 0);
    Mix_PlayChannel(1, sound_2, 0);
    Mix_PlayChannel(2, sound_3, 0);
    Mix_PlayChannel(3, sound_4, 0);
    return 0;
}
Check in a loop whether the channel is still playing using Mix_Playing(), and add a delay using SDL_Delay() to prevent the loop from consuming all available CPU time.
(In this example, I changed your first call to Mix_PlayChannel() from -1 to 1.)
Mix_PlayChannel(1, sound_1, 0);
while (Mix_Playing(1) != 0) {
    SDL_Delay(200); // wait 200 milliseconds
}
Mix_PlayChannel(2, sound_2, 0);
while (Mix_Playing(2) != 0) {
    SDL_Delay(200); // wait 200 milliseconds
}
// etc.
You should probably wrap that into a function instead so that you don't repeat what is basically the same code over and over again:
void PlayAndWait(int channel, Mix_Chunk* chunk, int loops)
{
    channel = Mix_PlayChannel(channel, chunk, loops);
    if (channel < 0) {
        return; // error
    }
    while (Mix_Playing(channel) != 0) {
        SDL_Delay(200);
    }
}
// ...
PlayAndWait(-1, sound_1, 0);
PlayAndWait(1, sound_2, 0);
PlayAndWait(2, sound_3, 0);
PlayAndWait(3, sound_4, 0);
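If you'd rather not block the calling thread, SDL_mixer can also notify you when a channel finishes via Mix_ChannelFinished(). A minimal sketch; the queue bookkeeping is illustrative, not from the original code, and note the SDL_mixer docs warn against calling other SDL_mixer functions from inside the callback, hence the flag:
static Mix_Chunk* queue[4]; // fill with sound_1 .. sound_4
static int queuePos = 0;
static volatile int advance = 0;

// Invoked by SDL_mixer whenever a channel stops playing.
void ChannelDone(int channel) {
    advance = 1; // just raise a flag; play the next chunk from the main loop
}

// In setup / the main loop:
// Mix_ChannelFinished(ChannelDone);
// Mix_PlayChannel(1, queue[queuePos++], 0);
// while (running) {
//     if (advance && queuePos < 4) {
//         advance = 0;
//         Mix_PlayChannel(1, queue[queuePos++], 0);
//     }
//     SDL_Delay(10);
// }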

In WebRTC, no frames transmitted to the browser from C++ code after calling OnFrame in the capturer

I am trying to bring my app up to date with WebRTC. It is basically a desktop streaming application: a C++ application streams video to a browser using WebRTC.
My implementation used to rely on a bunch of deprecated stuff like SignalFrameCaptured and cricket::CapturedFrame.
Looking at WebRTC right now, it seems those classes/signals are no longer available.
Here is my capturer:
class Capturer
    : public cricket::VideoCapturer,
      public webrtc::DesktopCapturer::Callback
{
public:
    sigslot::signal1<Capturer*> SignalDestroyed;

    Capturer();
    ~Capturer();
    void ResetSupportedFormats(const std::vector<cricket::VideoFormat>& formats);
    bool CaptureFrame();

    cricket::CaptureState Start(const cricket::VideoFormat& format) override;
    void Stop() override;
    bool IsRunning() override;
    bool IsScreencast() const override;
    bool GetPreferredFourccs(std::vector<uint32_t>* fourccs) override;
    virtual void OnCaptureResult(webrtc::DesktopCapturer::Result result,
                                 std::unique_ptr<webrtc::DesktopFrame> desktop_frame) override;

private:
    bool running_;
    int64_t initial_timestamp_;
    int64_t next_timestamp_;
    std::unique_ptr<webrtc::DesktopCapturer> _capturer;
};
Capturer::CaptureFrame() is called periodically from another thread and Capturer::OnCaptureResult is called as expected with a DesktopFrame as parameter.
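(For context, CaptureFrame() presumably just forwards to the wrapped DesktopCapturer, something like the sketch below; the exact body isn't shown in the question:)
bool Capturer::CaptureFrame()
{
    if (!running_)
        return false;
    // Request one frame; the DesktopCapturer delivers it asynchronously
    // through OnCaptureResult().
    _capturer->CaptureFrame();
    return true;
}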
Now looking at the implementation of OnCaptureResult:
void Capturer::OnCaptureResult(webrtc::DesktopCapturer::Result result,
                               std::unique_ptr<webrtc::DesktopFrame> desktopFrame)
{
    if (result != webrtc::DesktopCapturer::Result::SUCCESS)
    {
        return; // Never called, which leads me to conclude the desktop capturer works.
    }
    int width = desktopFrame->size().width();
    int height = desktopFrame->size().height();
    rtc::scoped_refptr<webrtc::I420Buffer> res_i420_frame = webrtc::I420Buffer::Create(width, height);
    webrtc::ConvertToI420(webrtc::VideoType::kABGR,
                          desktopFrame->data(),
                          0, 0,
                          width, height,
                          0,
                          webrtc::kVideoRotation_0,
                          res_i420_frame);
    webrtc::VideoFrame frame = webrtc::VideoFrame(res_i420_frame, webrtc::kVideoRotation_0,
                                                  next_timestamp_ / rtc::kNumNanosecsPerMicrosec);
    this->OnFrame(frame, width, height);
}
No frames are ever transmitted to the connected browser, as seen from chrome://webrtc-internals.
Back to the code, here is how I create the peerConnection:
void Conductor::connectToPeer() {
    this->_peerConnectionFactory = webrtc::CreatePeerConnectionFactory();
    if (!this->_peerConnectionFactory.get())
    {
        std::cerr << "Failed to initialize PeerConnectionFactory" << std::endl;
        throw new std::runtime_error("Cannot initialize PeerConnectionFactory");
    }
    webrtc::PeerConnectionInterface::RTCConfiguration config;
    webrtc::PeerConnectionInterface::IceServer server;
    server.uri = "stun:stun.l.google.com:19302";
    config.servers.push_back(server);
    webrtc::FakeConstraints constraints;
    constraints.AddOptional(webrtc::MediaConstraintsInterface::kEnableDtlsSrtp, "true");
    this->_peerConnection = this->_peerConnectionFactory->CreatePeerConnection(config, &constraints, NULL, NULL, this);
    if (!this->_peerConnection.get())
    {
        std::cerr << "Failed to initialize PeerConnection" << std::endl;
        throw new std::runtime_error("Cannot initialize PeerConnection");
    }
    auto capturer = new Capturer();
    CapturerThread *capturerThread = new CapturerThread(capturer); // This thread's sole function is to call CaptureFrame periodically
    rtc::scoped_refptr<webrtc::VideoTrackSourceInterface> videoSource = this->_peerConnectionFactory->CreateVideoSource(capturer, NULL);
    rtc::scoped_refptr<webrtc::VideoTrackInterface> videoTrack(this->_peerConnectionFactory->CreateVideoTrack("video_label", videoSource));
    rtc::scoped_refptr<webrtc::MediaStreamInterface> stream = this->_peerConnectionFactory->CreateLocalMediaStream("stream_label");
    stream->AddTrack(videoTrack);
    if (!this->_peerConnection->AddStream(stream))
    {
        std::cerr << "Adding stream to PeerConnection failed" << std::endl;
        throw new std::runtime_error("Cannot add stream");
    }
    typedef std::pair<std::string, rtc::scoped_refptr<webrtc::MediaStreamInterface>> MediaStreamPair;
    this->_activeStreams.insert(MediaStreamPair(stream->label(), stream));
    webrtc::SdpParseError error;
    webrtc::SessionDescriptionInterface* sessionDescription(webrtc::CreateSessionDescription("offer", this->_offer, &error));
    if (!sessionDescription)
    {
        std::cerr << "Cannot initialize session description." << std::endl;
        throw new std::runtime_error("Cannot set session description");
    }
    this->_peerConnection->SetRemoteDescription(DummySetSessionDescriptionObserver::Create(), sessionDescription);
    this->_peerConnection->CreateAnswer(this, NULL);
}
My problem is that no video frames are ever transmitted to the browser client, even though both the capturer and the peer connection are in place as expected. Is there something I'm missing?
I got to the bottom of this. The local description was simply not set after creating the answer.
Calling CreateAnswer should trigger the OnSuccess callback of the CreateSessionDescriptionObserver (in my case, Conductor).
There I failed to save the answer as the local description. Here is how the OnSuccess method should be implemented:
void Conductor::OnSuccess(webrtc::SessionDescriptionInterface *desc)
{
    this->_peerConnection->SetLocalDescription(DummySetSessionDescriptionObserver::Create(), desc);
}
Doing this will ultimately trigger OnIceGatheringChange with kIceGatheringComplete as parameter, meaning both sides are ready.
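(For reference, that callback can look like the sketch below; only the signature comes from webrtc::PeerConnectionObserver, the body is illustrative:)
void Conductor::OnIceGatheringChange(
    webrtc::PeerConnectionInterface::IceGatheringState new_state)
{
    if (new_state == webrtc::PeerConnectionInterface::kIceGatheringComplete)
    {
        // The local description now contains all gathered candidates,
        // so the answer SDP can be sent to the browser.
    }
}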
According to a Google Groups discuss-webrtc thread,
you should implement VideoTrackSourceInterface. In my case, I used the base class AdaptedVideoTrackSource and created a method OnFrameCaptured that is called from my thread. In OnFrameCaptured I call the base method OnFrame. It works fine!
class StreamSource : public rtc::AdaptedVideoTrackSource
{
public:
    void OnFrameCaptured(const webrtc::VideoFrame& frame);
};

void StreamSource::OnFrameCaptured(const webrtc::VideoFrame& frame)
{
    OnFrame(frame);
}
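Note that rtc::AdaptedVideoTrackSource also declares pure virtual methods that a concrete source must implement before the class above compiles. The exact set depends on your WebRTC revision; a sketch of what it commonly looks like (the return values here are assumptions for a desktop source):
class StreamSource : public rtc::AdaptedVideoTrackSource
{
public:
    void OnFrameCaptured(const webrtc::VideoFrame& frame) { OnFrame(frame); }

    // Pure virtuals inherited via VideoTrackSourceInterface / MediaSourceInterface;
    // check the headers of your WebRTC checkout for the exact signatures.
    SourceState state() const override { return kLive; }
    bool remote() const override { return false; }
    bool is_screencast() const override { return true; } // desktop capture
    absl::optional<bool> needs_denoising() const override { return absl::nullopt; }
};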

AVerMedia Capture Card C985 doesn't work with C++ and OpenCV

I bought an AVerMedia Capture Card (C985 LITE) last week and connected a video camera to the capture card's HDMI input.
When I tested with AVerMedia's RECentral software, Amcap, and ffmpeg, it worked.
But when I tested with AVerMedia's AVerCapSDKDemo, VLC, Windows Movie Maker, and Windows DirectShow, it didn't work.
Then I tried to grab camera frames (in real time) using sample code from the internet and my own C++ code (with and without OpenCV). All of the code works with a general USB webcam, but not with this capture card.
The result showed that every C++ program can see the capture card, but none can see the camera connected to it.
The conditions I tested under, where it didn't work, are below:
1st PC spec: Intel Core i5, 16 GB RAM, 1 TB HDD, DirectX 11, Windows 10 64-bit
2nd PC spec: Intel Core i7, 8 GB RAM, 1 TB HDD, DirectX 11, Windows 7 64-bit
IDE: Visual Studio 2015
Camera: GoPro and Sony Handycam, both full HD with HDMI output
About my project: I want to track cars on the road in real time, so I decided to use the C985 capture card, which supports full HD.
Does anyone have any advice?
Thank you very much.
Edit: added example code.
1. My code with OpenCV: this code always shows "error: frame not read from webcam\n".
#include<opencv2/core/core.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<iostream>
#include<conio.h>

int main() {
    cv::VideoCapture capWebcam(0); // declare a VideoCapture object and associate to webcam, 0 => use 1st webcam
    if (capWebcam.isOpened() == false) { // check if VideoCapture object was associated to webcam successfully
        std::cout << "error: capWebcam not accessed successfully\n\n"; // if not, print error message to std out
        _getch();  // may have to modify this line if not using Windows
        return(0); // and exit program
    }
    cv::Mat imgOriginal; // input frame
    char charCheckForEscKey = 0;
    while (charCheckForEscKey != 27 && capWebcam.isOpened()) { // until the Esc key is pressed or webcam connection is lost
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal); // get next frame
        if (!blnFrameReadSuccessfully || imgOriginal.empty()) { // if frame not read successfully
            std::cout << "error: frame not read from webcam\n"; // print error message to std out
            continue; // and jump back to the top of the while loop
        }
        cv::namedWindow("imgOriginal", CV_WINDOW_NORMAL); // note: CV_WINDOW_NORMAL allows resizing the window
        cv::imshow("imgOriginal", imgOriginal); // show window
        charCheckForEscKey = cv::waitKey(1); // delay (in ms) and get key press, if any
    } // end while
    return(0);
}
2. My code without OpenCV (using AForge, in C#): the image shows nothing.
private void Form1_Load(object sender, EventArgs e)
{
    FilterInfoCollection videoDevices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
    for (int i = 0; i < videoDevices.Count; i++)
    {
        comboBox1.Items.Add(videoDevices[i].MonikerString);
    }
    // create video source
}

private void video_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    Bitmap img = (Bitmap)eventArgs.Frame.Clone();
    pictureBox1.Image = img;
}

private void button1_Click(object sender, EventArgs e)
{
    VideoCaptureDeviceForm xx = new VideoCaptureDeviceForm();
    xx.ShowDialog();
    VideoCaptureDevice videoSource = new VideoCaptureDevice(xx.VideoDeviceMoniker);
    //videoSource.Source = "AVerMedia HD Capture C985 Bus 2";
    VideoInput input = videoSource.CrossbarVideoInput;
    MessageBox.Show("" + videoSource.CheckIfCrossbarAvailable());
    MessageBox.Show(" " + input.Index + " " + input.Type);
    // set NewFrame event handler
    videoSource.NewFrame += video_NewFrame;
    foreach (var x in videoSource.AvailableCrossbarVideoInputs)
    {
        MessageBox.Show("AvailableCrossbarVideoInputs > " + x.Index);
    }
    videoSource.VideoSourceError += VideoSource_VideoSourceError;
    // start the video source
    videoSource.Start();
    // signal to stop when you no longer need capturing
    videoSource.SignalToStop();
    videoSource.Start();
    MessageBox.Show("AvailableCrossbarVideoInputs length :" + videoSource.AvailableCrossbarVideoInputs.Length);
    input = videoSource.CrossbarVideoInput;
    MessageBox.Show(" " + input.Index + " " + input.Type);
    videoSource.SignalToStop();
    videoSource.Start();
}
3. Code from the internet: I used the code from the CodeProject article "Capture Live Video from Various Video Devices" linked below. It showed "can't detect Webcam".
https://www.codeproject.com/articles/7123/capture-live-video-from-various-video-devices
Hope my code can help. (I use the AVerMedia SDK + OpenCV 3, and the DirectShow API to open the device and get the video into Mat format.)
#include "stdafx.h"
#include "atlstr.h"
#include <iostream>
#include "AVerCapAPI_Pro.h"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <windows.h>
using namespace std;
using namespace cv;
void ErrorMsg(DWORD ErrorCode)
{
printf("ErrorCode = %d\n", ErrorCode);
if (ErrorCode == CAP_EC_SUCCESS)
{
printf("CAP_EC_SUCCESS\n");
}
if (ErrorCode == CAP_EC_INIT_DEVICE_FAILED)
{
printf("CAP_EC_INIT_DEVICE_FAILED\n");
}
if (ErrorCode == CAP_EC_DEVICE_IN_USE)
{
printf("CAP_EC_DEVICE_IN_USE\n");
}
if (ErrorCode == CAP_EC_NOT_SUPPORTED)
{
printf("CAP_EC_NOT_SUPPORTED\n");
}
if (ErrorCode == CAP_EC_INVALID_PARAM)
{
printf("CAP_EC_INVALID_PARAM\n");
}
if (ErrorCode == CAP_EC_TIMEOUT)
{
printf("CAP_EC_TIMEOUT\n");
}
if (ErrorCode == CAP_EC_NOT_ENOUGH_MEMORY)
{
printf("CAP_EC_NOT_ENOUGH_MEMORY\n");
}
if (ErrorCode == CAP_EC_UNKNOWN_ERROR)
{
printf("CAP_EC_UNKNOWN_ERROR\n");
}
if (ErrorCode == CAP_EC_ERROR_STATE)
{
printf("CAP_EC_ERROR_STATE\n");
}
if (ErrorCode == CAP_EC_HDCP_PROTECTED_CONTENT)
{
printf("CAP_EC_HDCP_PROTECTED_CONTENT\n");
}
}
BOOL WINAPI CaptureVideo(VIDEO_SAMPLE_INFO VideoInfo, BYTE *pbData, LONG lLength, __int64 tRefTime, LONG lUserData);
BOOL bGetData = FALSE;
Mat ans2;
int main(int argc, char** argv)
{
    LONG lRetVal;
    DWORD dwDeviceNum;
    DWORD dwDeviceIndex = 0;
    HANDLE hAverCapturedevice[10];

    // Device control
    // 1. Get device number
    lRetVal = AVerGetDeviceNum(&dwDeviceNum);
    if (lRetVal != CAP_EC_SUCCESS) {
        printf("\nAVerGetDeviceNum Fail");
        ErrorMsg(lRetVal);
        system("pause");
    }
    if (dwDeviceNum == 0) {
        printf("No device found\n");
        system("pause");
    }
    else {
        printf("Device Number = %d\n", dwDeviceNum);
    }

    // 2. Create device representative object handle
    for (DWORD dwDeviceIndex = 0; dwDeviceIndex < dwDeviceNum; dwDeviceIndex++) {
        lRetVal = AVerCreateCaptureObjectEx(dwDeviceIndex, DEVICETYPE_ALL, NULL, &hAverCapturedevice[dwDeviceIndex]);
        if (lRetVal != CAP_EC_SUCCESS) {
            printf("\nAVerCreateCaptureObjectEx Fail\n");
            ErrorMsg(lRetVal);
            system("pause");
        }
        else
            printf("\nAVerCreateCaptureObjectEx Success\n");
    }

    // 3. Start streaming
    // 3.1 Set video source
    lRetVal = AVerSetVideoSource(hAverCapturedevice[0], 3);

    // 3.2 Set video resolution & frame rate
    VIDEO_RESOLUTION VideoResolution = { 0 };
    INPUT_VIDEO_INFO InputVideoInfo;
    ZeroMemory(&InputVideoInfo, sizeof(InputVideoInfo));
    InputVideoInfo.dwVersion = 2;
    Sleep(500);
    lRetVal = AVerGetVideoInfo(hAverCapturedevice[0], &InputVideoInfo);
    VideoResolution.dwVersion = 1;
    VideoResolution.dwVideoResolution = VIDEORESOLUTION_1280X720;
    lRetVal = AVerSetVideoResolutionEx(hAverCapturedevice[0], &VideoResolution);
    lRetVal = AVerSetVideoInputFrameRate(hAverCapturedevice[0], 6000);

    // 3.3 Start streaming
    lRetVal = AVerStartStreaming(hAverCapturedevice[0]);
    if (lRetVal != CAP_EC_SUCCESS) {
        printf("\nAVerStartStreaming Fail\n");
        ErrorMsg(lRetVal);
        //system("pause");
    }
    else
    {
        printf("\nAVerStartStreaming Success\n");
        //system("pause");
    }

    // 4. Capture single image
#if 0
    CAPTURE_IMAGE_INFO m_CaptureImageInfo = { 0 };
    char text[] = "E:\\Lena.bmp"; // note: backslash must be escaped
    wchar_t wtext[20];
#define _CRT_SECURE_NO_WARNINGS
#pragma warning( disable : 4996 )
    mbstowcs(wtext, text, strlen(text) + 1); // plus null terminator
    LPWSTR m_strSavePath = wtext;
    CAPTURE_SINGLE_IMAGE_INFO pCaptureSingleImageInfo = { 0 };
    pCaptureSingleImageInfo.dwVersion = 1;
    pCaptureSingleImageInfo.dwImageType = 2;
    pCaptureSingleImageInfo.bOverlayMix = FALSE;
    pCaptureSingleImageInfo.lpFileName = m_strSavePath;
    //pCaptureSingleImageInfo.rcCapRect = 0;
    lRetVal = AVerCaptureSingleImage(hAverCapturedevice[0], &pCaptureSingleImageInfo);
    printf("\nAVerCaptureSingleImage\n");
    ErrorMsg(lRetVal);
#endif

#if 1
    // Video capture
    VIDEO_CAPTURE_INFO VideoCaptureInfo;
    ZeroMemory(&VideoCaptureInfo, sizeof(VIDEO_CAPTURE_INFO));
    VideoCaptureInfo.bOverlayMix = FALSE;
    VideoCaptureInfo.dwCaptureType = CT_SEQUENCE_FRAME;
    VideoCaptureInfo.dwSaveType = ST_CALLBACK_RGB24;
    VideoCaptureInfo.lpCallback = CaptureVideo;
    VideoCaptureInfo.lCallbackUserData = NULL;
    lRetVal = AVerCaptureVideoSequenceStart(hAverCapturedevice[0], VideoCaptureInfo);
    if (FAILED(lRetVal))
    {
        return lRetVal;
    }
    //system("pause"); // hangs up
#endif

    int i;
    scanf_s("%d", &i); // must input any number in console!!

    // 5. Stop streaming
    lRetVal = AVerCaptureVideoSequenceStop(hAverCapturedevice[0]);
    lRetVal = AVerStopStreaming(hAverCapturedevice[0]);
    //printf("\nAVerStopStreaming Success\n");
    ErrorMsg(lRetVal);
    return 0;
}
BOOL WINAPI CaptureVideo(VIDEO_SAMPLE_INFO VideoInfo, BYTE *pbData, LONG lLength, __int64 tRefTime, LONG lUserData)
{
    if (!bGetData)
    {
        ans2 = Mat(VideoInfo.dwHeight, VideoInfo.dwWidth, CV_8UC3, (uchar*)pbData).clone(); // single captured image
        //ans2 = Mat(VideoInfo.dwHeight, VideoInfo.dwWidth, CV_8UC3, (uchar*)pbData); // sequence of captured images
        bGetData = TRUE;
    }
    imshow("ans2", ans2);
    waitKey(1);
    return TRUE;
}
Now it's solved: I formatted the computer and installed Windows 10 without updates.
I also wrote a program to call GraphEdit, which sets up the following filters:
[screenshot of GraphEdit's filter graph]
Everything seemed to work fine until I updated Windows by mistake.

How do I get FMOD to work from a class?

In my C++ project, I have FMOD currently working from my main.cpp. To help organize my engine, I want to move my sound code into its own translation unit. For some reason, when I try to run my sound code from within my class, it doesn't play any sound. I'm not sure if it is because of an incorrect assignment or if there is a bigger issue that I don't know about. This is my class implementation:
//Sound.h
#ifndef SOUND_H
#define SOUND_H

#include <iostream>
#include "inc\fmod.hpp"
#include "inc\fmod_errors.h"

class Sound
{
public:
    Sound(void);
    ~Sound(void);
    void Init();
    void FMODErrorCheck(FMOD_RESULT res);
    void PlaySound();
    void ResumeSound();
    void PauseSound();
    void Update();

private:
    //sound
    FMOD::System *sys;
    FMOD_RESULT result;
    unsigned int version; // System::getVersion() takes an unsigned int*
    FMOD_SPEAKERMODE speakerMode;
    int numDrivers;
    FMOD_CAPS caps;
    char name[256];
    FMOD::Sound *sound;
    FMOD::Channel *channel;
    bool quitFlag;
};
#endif
//Sound.cpp
#include "Sound.h"
#include <cstring> // for strstr()

Sound::Sound(void)
{
    Init();
}

Sound::~Sound(void)
{
    FMODErrorCheck(sound->release());
    FMODErrorCheck(sys->release());
}

void Sound::Init()
{
    // Create FMOD interface object
    result = FMOD::System_Create(&sys);
    FMODErrorCheck(result);

    // Check version
    result = sys->getVersion(&version);
    FMODErrorCheck(result);
    if (version < FMOD_VERSION)
    {
        std::cout << "Error! You are using an old version of FMOD " << version
                  << ". This program requires " << FMOD_VERSION << std::endl;
        exit(0);
    }

    // Get number of sound cards
    result = sys->getNumDrivers(&numDrivers);
    FMODErrorCheck(result);

    // No sound cards (disable sound)
    if (numDrivers == 0)
    {
        result = sys->setOutput(FMOD_OUTPUTTYPE_NOSOUND);
        FMODErrorCheck(result);
    }
    // At least one sound card
    else
    {
        // Get the capabilities of the default (0) sound card
        result = sys->getDriverCaps(0, &caps, 0, &speakerMode);
        FMODErrorCheck(result);

        // Set the speaker mode to match that in Control Panel
        result = sys->setSpeakerMode(speakerMode);
        FMODErrorCheck(result);

        // Increase buffer size if user has Acceleration slider set to off
        if (caps & FMOD_CAPS_HARDWARE_EMULATED)
        {
            result = sys->setDSPBufferSize(1024, 10);
            FMODErrorCheck(result);
        }

        // Get name of driver
        result = sys->getDriverInfo(0, name, 256, 0);
        FMODErrorCheck(result);

        // SigmaTel sound devices crackle for some reason if the format is PCM 16-bit.
        // PCM floating-point output seems to solve it.
        if (strstr(name, "SigmaTel"))
        {
            result = sys->setSoftwareFormat(48000, FMOD_SOUND_FORMAT_PCMFLOAT, 0, 0, FMOD_DSP_RESAMPLER_LINEAR);
            FMODErrorCheck(result);
        }
    }

    // Initialise FMOD
    result = sys->init(100, FMOD_INIT_NORMAL, 0);

    // If the selected speaker mode isn't supported by this sound card, switch it back to stereo
    if (result == FMOD_ERR_OUTPUT_CREATEBUFFER)
    {
        result = sys->setSpeakerMode(FMOD_SPEAKERMODE_STEREO);
        FMODErrorCheck(result);
        result = sys->init(100, FMOD_INIT_NORMAL, 0);
    }
    FMODErrorCheck(result);

    // Open music as a stream
    //FMOD::Sound *song1, *song2, *effect;
    //result = sys->createStream("Effect.mp3", FMOD_DEFAULT, 0, &sound);
    //FMODErrorCheck(result);
    result = sys->createSound("Effect.mp3", FMOD_DEFAULT, 0, &sound);
    FMODErrorCheck(result);

    // Assign each song to a channel and start them paused
    //result = sys->playSound(FMOD_CHANNEL_FREE, sound, true, &channel);
    //FMODErrorCheck(result);

    // Songs should repeat forever
    channel->setLoopCount(-1);
}
void Sound::FMODErrorCheck(FMOD_RESULT res)
{
    if (res != FMOD_OK)
    {
        std::cout << "FMOD ERROR: (" << res << ") - " << FMOD_ErrorString(res) << std::endl;
        //quitFlag = true;
    }
}

void Sound::PlaySound()
{
    sys->playSound(FMOD_CHANNEL_FREE, sound, false, 0);
}

void Sound::ResumeSound()
{
    channel->setPaused(false);
}

void Sound::PauseSound()
{
    channel->setPaused(true);
}

void Sound::Update()
{
    sys->update();
}
//Main.cpp
Sound* sound;

// Initialization routine.
void setup(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    sound = &Sound(); // takes the address of a temporary (see answer below)
}

//------------------------------------------------------------ OnInit()
//
void OnIdle()
{
    if (IsKeyPressed(KEY_ESCAPE))
    {
        exit(EXIT_SUCCESS);
    }
    if (IsKeyPressed('1'))
    {
        sound->PlaySound();
    }
    sound->Update();

    // redraw the screen
    glutPostRedisplay();
}
Currently it is giving me 2 errors:
Unhandled exception at 0x0F74465A (fmodex.dll) in TestOpenGL.exe: 0xC0000005: Access violation reading location 0x062C5040
and
FMOD error! (36) An invalid object handle was used
Any idea why it isn't working, and how I can solve these issues?
From your last comment and looking at your code, I see a problem. You have created a pointer with FMOD::System *sys;, but this pointer is never initialized to any instance of FMOD::System. That is, there should be something like sys = new FMOD::System; or sys = new FMOD::System(/* whatever arguments its constructor requires */); somewhere in your code, before you try to access anything related to the FMOD::System object. This is most probably the reason your program crashes. Also, since sys is a pointer to FMOD::System, there is another problem at the line result = FMOD::System_Create(&sys);: you are passing a pointer by reference. I suggest you read a couple of articles about pointers in C and C++, and some more about object creation and destruction in object-oriented programming languages.
I was able to get help with the issue. I was initializing my sound variable incorrectly.
sound = &Sound();
Should actually be:
sound = new Sound();
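For completeness, a minimal sketch of the corrected Main.cpp wiring under that fix (the pointer declaration matches the sound-> usage above; freeing the object on shutdown is an addition):
//Main.cpp
Sound* sound = nullptr;

void setup(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    sound = new Sound(); // heap-allocated, so the object outlives setup()
}

// ...and on shutdown:
// delete sound; // runs ~Sound(), releasing the FMOD objects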

OpenCV: edit video captured from an IP camera

Can I use OpenCV to edit video captured from an IP camera with the Dahua SDK?
Here is a portion of the Dahua sample code:
// initialized play list
BOOL bOpenRet = PLAY_OpenStream(g_lRealPort, 0, 0, 1024 * 500);
if (bOpenRet)
{
// start play
BOOL bPlayRet = PLAY_Play(g_lRealPort, hMainWnd);
if (bPlayRet)
{
// monitor preview
long lRealHandle = CLIENT_RealPlayEx(lLoginHandle, nChannelID, 0);
if (0 != lRealHandle)
{
// set recall function handling data
CLIENT_SetRealDataCallBackEx(lRealHandle, RealDataCallBackEx, (DWORD)0, 0x1f);
}
else
{
//printf("Fail to play!\n");
PLAY_Stop(g_lRealPort);
PLAY_CloseStream(g_lRealPort);
}
}
else
{
PLAY_CloseStream(g_lRealPort);
}
}
The code above connects to the camera over TCP and streams the video; the callback function RealDataCallBackEx is called for streaming. I can display the video in a window, but how can I let the OpenCV library deal with it?
Here is the code of the RealDataCallBackEx function:
void __stdcall RealDataCallBackEx(LONG lRealHandle, DWORD dwDataType, BYTE *pBuffer, DWORD dwBufSize, LONG lParam, DWORD dwUser)
{
    BOOL bInput = FALSE;
    bInput = PLAY_InputData(g_lRealPort, pBuffer, dwBufSize);
}
If the IP camera supports certain standards, you should be able to grab an image using the following OpenCV code (adapt where needed; I copied it from one of my own programs). I think you can also test this by pasting the URL into your browser with the correct IP, port, and login. I left the port at 88 because that is normally where you can send these commands for an IP camera.
Mat returnFrame;
string url = "http://";
url.append("192.168.1.108");
url.append(":88/cgi-bin/CGIProxy.fcgi?cmd=snapPicture2&usr=");
url.append("admin");
url.append("&pwd=");
url.append("admin");
VideoCapture cap(url);
if (cap.isOpened()) {
    Mat frame;
    if (cap.read(frame) == false) {
        cout << "Unable to grab frame" << endl;
    } else returnFrame = frame.clone();
    cap.release();
} else cout << "Can't open URL" << endl;
if (returnFrame.empty()) cout << "No frame to grab for cam!" << endl;
else cout << "Cam grabbed frame successfully" << endl;
If you want to turn a byte buffer into an OpenCV Mat, you can use the following code (this snippet uses the OpenCV Java API):
byte buf[] = new byte[100];
//fill buffer here
Mat m = new Mat(1, 100, CvType.CV_8UC1);
m.put(0, 0, buf);
Be sure to define the size and type correctly. In this example it is a 1-channel Mat of 1 x 100 pixels.
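Since the question is in C++, here is a rough C++ equivalent of the same idea (the function name and the BGR24 assumption are illustrative; whether pBuffer holds raw pixels or an encoded image depends on the dwDataType the Dahua callback reports):
#include <opencv2/opencv.hpp>
#include <vector>

// Convert a callback buffer into a cv::Mat.
cv::Mat BufferToMat(unsigned char* pBuffer, unsigned long dwBufSize,
                    int width, int height, bool rawPixels)
{
    if (rawPixels) {
        // Wrap the raw BGR24 pixels without copying, then clone
        // because the SDK may reuse the buffer after the callback returns.
        return cv::Mat(height, width, CV_8UC3, pBuffer).clone();
    }
    // Encoded data (e.g. a JPEG snapshot): let OpenCV decode it.
    std::vector<uchar> data(pBuffer, pBuffer + dwBufSize);
    return cv::imdecode(data, cv::IMREAD_COLOR);
}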
I only succeeded with the function CLIENT_SnapPictureEx:
CLIENT_SetSnapRevCallBack(SnapRevCallBack, dwUser);
NET_SNAP_PARAMS _netSnapParam;
_netSnapParam.Channel = (uint)ChannelNum;
_netSnapParam.mode = 1;
CLIENT_SnapPictureEx(lLoginID, _netSnapParam, reserved);
private void SnapRevCallBack(IntPtr lLoginID, IntPtr pBuf, uint RevLen, uint EncodeType, uint CmdSerial, IntPtr dwUser)
{
    byte[] data = new byte[RevLen];
    Marshal.Copy(pBuf, data, 0, (int)RevLen);
    img = Cv2.ImDecode(data, ImreadModes.Color);
}