SDL_mixer won't play sound on Raspberry Pi - C++

I can't get SDL_mixer to play sound on the Raspberry Pi. The program compiles and builds fine, but all I hear is a short squeak (like static) and then nothing.
Any ideas?
#include <SDL/SDL.h>
#include <SDL/SDL_mixer.h>
#include <stdio.h>
#include <string>
#include <iostream>
int main()
{
    Mix_Chunk *snd1 = NULL;
    Mix_Music *m = NULL;

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) < 0) {
        std::cout << "Something went wrong";
    }
    if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0) {
        std::cout << "Could not load music";
    }

    snd1 = Mix_LoadWAV("bicycle_bell.wav");
    if (snd1 == NULL) {
        std::cout << "Could not find the file";
    }

    Mix_PlayChannel(-1, snd1, 0);
    Mix_FreeChunk(snd1);
    Mix_Quit();
    return 1;
}

Solved it. The program terminated before the sound had finished playing.
Adding a simple busy-wait:
while (Mix_Playing(-1) != 0);
fixed it.
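For anyone copying this, here is a slightly fuller sketch of the same idea: wait on the specific channel returned by Mix_PlayChannel and sleep between checks instead of spinning. Nothing here beyond the calls already used above plus SDL_Delay:

int channel = Mix_PlayChannel(-1, snd1, 0);   // -1 picks the first free channel
if (channel != -1) {
    while (Mix_Playing(channel)) {            // non-zero while that channel is still playing
        SDL_Delay(50);                        // sleep so the loop doesn't burn CPU
    }
}
Mix_FreeChunk(snd1);
Mix_CloseAudio();
Mix_Quit();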

Related

SFML program crashes on run, no compile-time errors

Sorry for the bad code and English.
I was testing out SFML Audio and trying to make an audio player, but it crashes when I run it. It's supposed to open an audio file and play it.
There's also another issue: if the audio is paused, it rewinds itself as though it had been stopped.
No arguments were given, and it didn't print anything either; it just crashed.
#include <windows.h>
#include <iostream>
#include <SFML/Audio.hpp>
#include <string>
#include <thread>
sf::Music audio_player;
bool looping = false;

/*
 * audio_state = 2 // Playing
 * audio_state = 1 // Paused
 * audio_state = 0 // Stopped
 */

bool audio_playable() // Checks if the audio is either paused or stopped
{
    if (audio_player.getStatus() == 0 ||
        audio_player.getStatus() == 1)
        return true;
    else
        return false;
}

void loop()
{
    looping = true;
    while (1)
    {
        if (GetKeyState('P') & 0x8000 && audio_player.getStatus() == 2)
        {
            audio_player.pause();
            std::cout << "Paused\n";
        }
        if (GetKeyState('S') & 0x8000 && audio_player.getStatus() == 2)
        {
            audio_player.stop();
            std::cout << "Stopped\n";
        }
        if (GetKeyState('L') & 0x8000 && audio_playable())
        {
            audio_player.play();
            std::cout << "Started\n";
        }
        if (GetKeyState(VK_ESCAPE) & 0x8000)
        {
            looping = false;
        }
    }
}

int main(int argc, char* argv[])
{
    std::string file_name = ""; // Initialize the file location
    if (argc < 1) // If there are no arguments like the file location, ask for it
    {
        std::cout << "File Location: ";
        std::cin >> file_name;
        if (file_name == "")
        {
            std::cout << "File Location cannot be null" << std::endl;
            main(argc, argv);
        }
    }
    if (file_name == "") file_name = argv[1];
    audio_player.openFromFile(file_name);
    audio_player.play();
    std::thread pauseFunc(loop); // new thread for the pause/play function
    while (looping)
    {
    }
    return 0;
}
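A likely cause, offered as a sketch rather than a confirmed fix: argc is always at least 1 (the program name), so if(argc < 1) can never be true; with no command-line argument, file_name stays empty and file_name = argv[1] reads argv[1], which is a null pointer, and constructing a std::string from it crashes. Here is a minimal reworking of just the start-up path; the key-polling loop is left out and replaced with a simple wait so the sketch stays self-contained:

#include <SFML/Audio.hpp>
#include <SFML/System.hpp>
#include <iostream>
#include <string>

int main(int argc, char* argv[])
{
    std::string file_name;
    if (argc >= 2) {                           // a path was passed on the command line
        file_name = argv[1];
    } else {                                   // otherwise ask until we get something non-empty
        while (file_name.empty()) {
            std::cout << "File Location: ";
            std::getline(std::cin, file_name); // getline keeps paths containing spaces intact
        }
    }

    sf::Music audio_player;
    if (!audio_player.openFromFile(file_name)) { // openFromFile reports failure instead of crashing
        std::cout << "Could not open " << file_name << std::endl;
        return 1;
    }
    audio_player.play();

    // sf::Music streams on its own thread; just keep the program alive while it plays.
    while (audio_player.getStatus() == sf::Music::Playing)
        sf::sleep(sf::milliseconds(100));
    return 0;
}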

No sound when using SDL_Mixer in C++/Linux

I'm trying to use SDL_mixer in C++ under Linux to play sounds asynchronously, but it somehow doesn't work: when I execute it, no sound plays at all. I'm not very familiar with SDL and classes, so it would be very helpful if someone could spot where the error in my code is.
My header file (sample.h):
#pragma once

#include <string>
#include <memory>

#include "SDL_mixer.h"

class sample {
public:
    sample(const std::string &path, int volume);
    void play();
    void play(int times);
    void set_volume(int volume);

private:
    std::unique_ptr<Mix_Chunk, void (*)(Mix_Chunk *)> chunk;
};
My main program (.cpp):
#include "sample.h"
#include <iostream>
sample::sample(const std::string &path, int volume) : chunk(Mix_LoadWAV(path.c_str()), Mix_FreeChunk) {
if (!chunk.get()) {
std::cout << "Could not load audio sample: " << path << std::endl;
}
Mix_VolumeChunk(chunk.get(), volume);
}
void sample::play() {
Mix_PlayChannel(-1, chunk.get(), 0);
}
void sample::play(int times) {
Mix_PlayChannel(-1, chunk.get(), times - 1);
}
void sample::set_volume(int volume) {
Mix_VolumeChunk(chunk.get(), volume);
}
int main() {
if (Mix_Init(MIX_INIT_FLAC | MIX_INIT_MP3 | MIX_INIT_OGG) < 0) {
return -1;
}
if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0) {
std::cout << "Can not initialize mixer!" << std::endl;
return -1;
}
// Amount of channels (Max amount of sounds playing at the same time)
Mix_AllocateChannels(32);
sample s("Snare-Drum-1.wav", MIX_MAX_VOLUME / 2);
s.play();
Mix_Quit();
return 0;
}
Your binary runs and finishes before hanging around long enough to render the audio, so the solution is to keep the code running longer. I got your code working by adding this:

s.play();

std::chrono::milliseconds timespan(2000); // or whatever
std::this_thread::sleep_for(timespan);

In your header, replace

#include "SDL_mixer.h"

with

#include <SDL2/SDL_mixer.h>
#include <chrono>
#include <thread>

so it now compiles against SDL2 rather than SDL:

g++ -o sample sample.cpp -lSDL2 -lSDL2_mixer

So, what is the real SDL2 solution? A typical use case is that SDL2 is part of a game which keeps running, so the code base has an event loop that stays active long enough to hear the audio being rendered. Another solution, short of explicitly using a sleep or a GUI, is to put the code into a server. The SDL2 API no doubt has its own one-liner for this as well.
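As for that one-liner: short of a full event loop, you can simply poll the mixer until it reports that nothing is playing any more. A sketch, assuming <SDL2/SDL.h> is also included for SDL_Delay:

s.play();

// Poll SDL_mixer until no channels are playing, sleeping so we don't spin the CPU.
while (Mix_Playing(-1) > 0) {
    SDL_Delay(100);
}

Mix_CloseAudio();
Mix_Quit();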

Crackling audio using SDL2 and stb_vorbis

In a project I'm working on, I'd like to use stb_vorbis to stream audio from an ogg file. However, after implementing the code to do this, I found the audio was crackly. I feel the problem may be similar to this question, but I can't see where the problem could be.
Here is my code:
#include <SDL2/SDL.h>
#include <stb/vorbis.h>
#include <iostream>

void sdlAudioCallback(void* userData, Uint8* stream, int len)
{
    stb_vorbis* myVorbis = (stb_vorbis*) userData;
    SDL_memset(stream, 0, len);
    stb_vorbis_get_frame_short_interleaved(myVorbis, 2, (short*) stream, len / sizeof(short));
}

int main()
{
    if (SDL_Init(SDL_INIT_AUDIO) != 0)
        return -1;

    int error = 0;
    stb_vorbis_alloc* alloc;
    stb_vorbis* vorbis = stb_vorbis_open_filename("res/thingy.ogg", &error, alloc);
    if (error != 0)
        return -2;

    stb_vorbis_info info = stb_vorbis_get_info(vorbis);

    SDL_AudioSpec spec;
    spec.freq = info.sample_rate;
    spec.format = AUDIO_S16;
    spec.channels = 2;
    spec.samples = 1024;
    spec.callback = sdlAudioCallback;
    spec.userdata = vorbis;

    if (SDL_OpenAudio(&spec, NULL) < 0) {
        std::cout << "failed to open audio device: " << SDL_GetError() << std::endl;
        SDL_Quit();
        return -3;
    }

    SDL_PauseAudio(0);
    SDL_Delay(5000);
}
More information:
thingy.ogg is from Sean Barrett's samples
building with g++ on an OSX machine
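Two guesses rather than a confirmed diagnosis: stb_vorbis_open_filename is being passed the uninitialized alloc pointer (it should be NULL unless you really supply your own buffer), and the callback decodes only a single Vorbis frame per call, which usually yields far fewer samples than the len bytes SDL asks for, so most of each buffer stays zeroed and that gap sounds like crackling. Here is a drop-in sketch of a callback that fills the whole buffer using stb_vorbis_get_samples_short_interleaved instead:

void sdlAudioCallback(void* userData, Uint8* stream, int len)
{
    stb_vorbis* vorbis = (stb_vorbis*) userData;
    short* out = (short*) stream;
    int totalShorts = len / (int)sizeof(short);   // number of 16-bit samples SDL wants

    // Unlike the per-frame call, this keeps decoding across frame boundaries until
    // the buffer is full, and returns the number of samples written per channel.
    int perChannel = stb_vorbis_get_samples_short_interleaved(vorbis, 2, out, totalShorts);

    // Zero anything left over (only happens at end of stream) so it plays as silence.
    int bytesWritten = perChannel * 2 * (int)sizeof(short);
    SDL_memset(stream + bytesWritten, 0, len - bytesWritten);
}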

C++ - msgsnd & msgrcv communication between 2 different programs

I have two programs and I want them to communicate via msgrcv() and msgsnd(). I have a master program which initializes the message queue and starts the two other programs:
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <string>
#include <iostream>
#include <unistd.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <stdio.h>

int main() {
    int qid = msgget(ftok(".", 'u'), 0);

    char* params[3];
    params[1] = (char *)malloc(sizeof(char) * 9);
    sprintf(params[1], "%d", qid);
    params[2] = NULL;
    printf("qid = %d and qid(str) = %s", qid, params[1]);
    // return (0);

    // spawning two child processes
    pid_t cpid = fork();
    if (cpid == 0) {
        params[0] = (char*)"./sender";
        execv(params[0], params);
        exit(0);
    }
    cpid = fork();
    if (cpid == 0) {
        params[0] = (char*)"./receiver";
        execv(params[0], params);
        exit(0);
    }

    while (wait(NULL) != -1); // waiting for both children to terminate

    msgctl(qid, IPC_RMID, NULL);
    std::cout << "parent proc: " << getpid()
              << " now exits" << std::endl;
    exit(0);
}
I also prepare the parameters and start both of the following programs:
sender
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <string.h>
#include <iostream>
#include <unistd.h>
#include <sys/wait.h>
#include <stdlib.h>

int main(int ac, char **av) {
    if (ac != 2)
        return (-1);

    // create my msgQ with key value from ftok()
    // int qid = msgget(IPC_PRIVATE, IPC_EXCL|IPC_CREAT|0600);
    int qid = atoi(av[1]);

    // declare my message buffer
    struct buf {
        long mtype;        // required
        char greeting[50]; // mesg content
    };
    buf msg;
    int size = sizeof(msg) - sizeof(long);

    std::cout << "Welcome in the prog assignment 2! Type [exit] to stop the program." << std::endl;
    bool exit = false;
    while (!exit)
    {
        std::cout << getpid() << ": ";
        std::cin.getline(msg.greeting, 50, '\n');
        std::cout << msg.greeting << std::endl;
        msg.mtype = 114; // only reading mesg with type mtype = 114
        if (strcmp(msg.greeting, "exit") == 0)
            exit = true;
        msgsnd(qid, (struct msgbuf *)&msg, size, 0);
    }
}
receiver
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <string.h>
#include <iostream>
#include <unistd.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <stdio.h>

int main(int ac, char **av) {
    int i = 0;
    while (i < ac)
        printf("AV: %s\n", av[i++]);
    if (ac != 2)
        return (-1);

    // int qid = msgget(IPC_PRIVATE, IPC_EXCL|IPC_CREAT|0600);
    int qid = atoi(av[1]);

    // declare my message buffer
    struct buf {
        long mtype;
        char greeting[50];
    };
    buf msg;
    int size = sizeof(msg) - sizeof(long);

    bool exit = false;
    while (!exit)
    {
        msgrcv(qid, (struct msgbuf *)&msg, size, 114, 0);
        if (strcmp(msg.greeting, "exit") == 0)
            exit = true;
        std::cout << getpid() << msg.greeting << std::endl;
    }
    std::cout << "get out" << std::endl;
}
It doesn't work and I'm not sure I understand why: I'm creating the message queue, passing its id as a parameter, converting it back to an int and then using it. However, it just gives me an infinite loop of weird output. Why?
Any help is welcome. Thanks!
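One likely culprit, offered as a guess: msgget(key, 0) only attaches to an already existing queue; without IPC_CREAT it returns -1 when the queue does not exist, so both children are handed "-1" as the queue id, every msgrcv() fails straight away, the return value is never checked, and the receiver spins forever printing whatever is left in its buffer. A sketch of the queue creation in the master plus the missing check in the receiver, assuming 0600 permissions:

// In the master: create the queue if it does not exist yet (0600 = owner read/write).
key_t key = ftok(".", 'u');
if (key == (key_t)-1) {
    perror("ftok");
    return 1;
}
int qid = msgget(key, IPC_CREAT | 0600);
if (qid == -1) {
    perror("msgget");
    return 1;
}

// In the receiver: check msgrcv() so a bad qid stops the loop instead of spinning.
if (msgrcv(qid, (struct msgbuf *)&msg, size, 114, 0) == -1) {
    perror("msgrcv");
    break;
}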

C++ Linux: Get the refresh rate of a monitor

In Windows, winapi provides a function that reports information about a monitor:
DEVMODE dm;
dm.dmSize = sizeof(DEVMODE);
EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm);
int FPS = dm.dmDisplayFrequency;
What is the equivalent of this on Linux? The Linux man pages direct me to an allegro library function, but not only am I not using allegro, that function is from a very outdated version of said library and reportedly only works on Windows.
Use the XRandr API (man 3 Xrandr). See here for an example:
http://www.blitzbasic.com/Community/posts.php?topic=86911
You can also look at the code for xrandr(1).
Edit 1: For posterity's sake, here is the sample code slightly adjusted so it's more of a demo:
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#include <iostream>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main()
{
    int num_sizes;
    Rotation current_rotation;

    Display *dpy = XOpenDisplay(NULL);
    Window root = RootWindow(dpy, 0);
    XRRScreenSize *xrrs = XRRSizes(dpy, 0, &num_sizes);

    //
    // GET CURRENT RESOLUTION AND FREQUENCY
    //
    XRRScreenConfiguration *conf = XRRGetScreenInfo(dpy, root);
    short current_rate = XRRConfigCurrentRate(conf);
    SizeID current_size_id = XRRConfigCurrentConfiguration(conf, &current_rotation);

    int current_width = xrrs[current_size_id].width;
    int current_height = xrrs[current_size_id].height;

    std::cout << "current_rate = " << current_rate << std::endl;
    std::cout << "current_width = " << current_width << std::endl;
    std::cout << "current_height = " << current_height << std::endl;

    XCloseDisplay(dpy);
}
Compile with:
g++ 17797636.cpp -o 17797636 -lX11 -lXrandr
Output:
$ ./17797636
current_rate = 50
current_width = 1920
current_height = 1080
A simple example:
#include <cstdio>   // for printf
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(int argc, char *argv[])
{
    Display *display = XOpenDisplay(NULL);
    Window default_root_window = XDefaultRootWindow(display);

    XRRScreenResources *screen_resources = XRRGetScreenResources(display, default_root_window);

    RRMode active_mode_id = 0;
    for (int i = 0; i < screen_resources->ncrtc; ++i) {
        XRRCrtcInfo *crtc_info = XRRGetCrtcInfo(display, screen_resources, screen_resources->crtcs[i]);
        // If None, then it is not displaying the screen contents
        if (crtc_info->mode != None) {
            active_mode_id = crtc_info->mode;
        }
    }

    double active_rate = 0;
    for (int i = 0; i < screen_resources->nmode; ++i) {
        XRRModeInfo mode_info = screen_resources->modes[i];
        if (mode_info.id == active_mode_id) {
            active_rate = (double)mode_info.dotClock / ((double)mode_info.hTotal * (double)mode_info.vTotal);
        }
    }

    printf("Active rate is: %.1f\n", active_rate);
    return 0;
}
Iwan's answer did not work for me; xrandr has changed since 2013 I guess? The command-line tool xrandr can read my refresh rate correctly, but its source code is too complex for me to be willing to copy the way it's doing so. Instead I have chosen to clumsily delegate the work to the entire xrandr program. My crappy solution is pasted below.
Note that this solution is likely to be unreliable when multiple display devices are connected, and will probably someday break when xrandr changes again.
(pstream.h is provided by Jonathan Wakely's PStreams library, referenced here: https://stackoverflow.com/a/10702464/1364776
I'm only using it to turn the output of a command into a std::string; obviously there are various other ways to do that so use one of them if you prefer.)
#include <pstream.h>
#include <cctype>
#include <cstdlib>
#include <cmath>

float getRefreshRate()
{
    try
    {
        redi::ipstream queryStream("xrandr");
        std::string chunk;
        while (queryStream >> chunk)
        {
            auto rateEnd = chunk.find("*");
            if (rateEnd != std::string::npos)
            {
                auto rateBeginning = rateEnd;
                while (std::isdigit(chunk[rateBeginning]) || std::ispunct(chunk[rateBeginning]))
                    --rateBeginning;
                ++rateBeginning;
                auto numberString = chunk.substr(rateBeginning, rateEnd - rateBeginning);
                float rate = std::strtof(numberString.data(), nullptr);
                if (rate != 0 && rate != HUGE_VALF)
                    return rate;
            }
        }
    }
    catch (...)
    {
    }
    return 60; // I am not proud of any of this :(
}
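If you would rather not pull in PStreams, here is a rough equivalent of the same hack using plain POSIX popen() to capture xrandr's output; it shares all of the caveats above, and the function name and the 60 Hz fallback are just as arbitrary:

#include <cctype>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <string>

float getRefreshRatePopen()
{
    std::string output;
    if (FILE* pipe = popen("xrandr", "r"))          // run xrandr and read its stdout
    {
        char buf[256];
        while (fgets(buf, sizeof buf, pipe))
            output += buf;
        pclose(pipe);
    }

    // The active mode is the number marked with '*', e.g. "59.95*+".
    auto star = output.find('*');
    if (star != std::string::npos)
    {
        auto begin = star;
        while (begin > 0 && (std::isdigit((unsigned char)output[begin - 1]) || output[begin - 1] == '.'))
            --begin;
        float rate = std::strtof(output.substr(begin, star - begin).c_str(), nullptr);
        if (rate != 0 && rate != HUGE_VALF)
            return rate;
    }
    return 60; // same arbitrary fallback as above
}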