In Windows, winapi provides a function that reports information about a monitor:
DEVMODE dm;
dm.dmSize = sizeof(DEVMODE);
EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm);
int FPS = dm.dmDisplayFrequency;
What is the equivalent of this on Linux? The Linux man pages direct me to an allegro library function, but not only am I not using allegro, that function is from a very outdated version of said library and reportedly only works on Windows.
Use the XRandr API (man 3 Xrandr). See here for an example:
http://www.blitzbasic.com/Community/posts.php?topic=86911
You can also look at the code for xrandr(1).
Edit 1: For posterity's sake:
Sample code, slightly adjusted so it's more of a demo:
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#include <iostream>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
int main()
{
int num_sizes;
Rotation current_rotation;
Display *dpy = XOpenDisplay(NULL);
Window root = RootWindow(dpy, 0);
XRRScreenSize *xrrs = XRRSizes(dpy, 0, &num_sizes);
//
// GET CURRENT RESOLUTION AND FREQUENCY
//
XRRScreenConfiguration *conf = XRRGetScreenInfo(dpy, root);
short current_rate = XRRConfigCurrentRate(conf);
SizeID current_size_id = XRRConfigCurrentConfiguration(conf, &current_rotation);
int current_width = xrrs[current_size_id].width;
int current_height = xrrs[current_size_id].height;
std::cout << "current_rate = " << current_rate << std::endl;
std::cout << "current_width = " << current_width << std::endl;
std::cout << "current_height = " << current_height << std::endl;
XCloseDisplay(dpy);
}
Compile with:
g++ 17797636.cpp -o 17797636 -lX11 -lXrandr
Output:
$ ./17797636
current_rate = 50
current_width = 1920
current_height = 1080
A simple example:
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
int main(int argc, char *argv[])
{
Display *display = XOpenDisplay(NULL);
Window default_root_window = XDefaultRootWindow(display);
XRRScreenResources *screen_resources = XRRGetScreenResources(display, default_root_window);
RRMode active_mode_id = 0;
for (int i = 0; i < screen_resources->ncrtc; ++i) {
XRRCrtcInfo *crtc_info = XRRGetCrtcInfo(display, screen_resources, screen_resources->crtcs[i]);
// If None, then is not displaying the screen contents
if (crtc_info->mode != None) {
active_mode_id = crtc_info->mode;
}
}
double active_rate = 0;
for (int i = 0; i < screen_resources->nmode; ++i) {
XRRModeInfo mode_info = screen_resources->modes[i];
if (mode_info.id == active_mode_id) {
active_rate = (double)mode_info.dotClock / ((double)mode_info.hTotal * (double)mode_info.vTotal);
}
}
printf("Active rate is: %.1f\n", active_rate);
return 0;
}
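This compiles the same way as the first example, e.g. g++ example.cpp -o example -lX11 -lXrandr (assuming the Xrandr development headers are installed). If you want to be tidy, you can also release what the calls allocate with XRRFreeCrtcInfo(crtc_info) inside the loop and XRRFreeScreenResources(screen_resources) before returning.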
Iwan's answer did not work for me; xrandr has changed since 2013 I guess? The command-line tool xrandr can read my refresh rate correctly, but its source code is too complex for me to be willing to copy the way it's doing so. Instead I have chosen to clumsily delegate the work to the entire xrandr program. My crappy solution is pasted below.
Note that this solution is likely to be unreliable when multiple display devices are connected, and will probably someday break when xrandr changes again.
(pstream.h is provided by Jonathan Wakely's PStreams library, referenced here: https://stackoverflow.com/a/10702464/1364776
I'm only using it to turn the output of a command into a std::string; obviously there are various other ways to do that so use one of them if you prefer.)
#include <pstream.h>
#include <cctype>
#include <cstdlib>
#include <cmath>
float getRefreshRate()
{
try
{
redi::ipstream queryStream("xrandr");
std::string chunk;
while (queryStream >> chunk)
{
auto rateEnd = chunk.find("*");
if (rateEnd != std::string::npos)
{
auto rateBeginning = rateEnd;
while (std::isdigit(chunk[rateBeginning]) || std::ispunct(chunk[rateBeginning]))
--rateBeginning;
++rateBeginning;
auto numberString = chunk.substr(rateBeginning, rateEnd - rateBeginning);
float rate = std::strtof(numberString.data(), nullptr);
if (rate != 0 && rate != HUGE_VALF)
return rate;
}
}
}
catch (...)
{
}
return 60; // I am not proud of any of this :(
}
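If you would rather not pull in PStreams just to capture the command's output, a popen()-based helper does the same job. A minimal sketch (POSIX only; readCommandOutput is my own name, not a library function):
#include <cstdio>   // popen()/pclose() are POSIX functions declared here on most systems
#include <string>

// Run a command and return everything it prints to stdout as a std::string.
// Returns an empty string if the command could not be started.
std::string readCommandOutput(const char* command)
{
    std::string result;
    FILE* pipe = popen(command, "r");
    if (!pipe)
        return result;
    char buffer[256];
    while (fgets(buffer, sizeof(buffer), pipe) != nullptr)
        result += buffer;
    pclose(pipe);
    return result;
}
The search for the "*" marker stays the same; you would just scan the returned string (for example with a std::istringstream) instead of reading chunks from the ipstream.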
I have simplified my code, and it compiles, but it doesn't do anything. It doesn't error out, either. I am trying to get 7 threads (on my 8-core processor) in this example to write to a variable to benchmark my system; I would like to use multiple threads to see whether it's faster. It is based on other code that worked before I added multithreading. When I run it, it just terminates. It should show, each second, how many total iterations all the threads have done together. Some of the includes are left over from other code I am working on.
I would also like to gracefully terminate all 7 threads when Ctrl-C is pressed. Help would be appreciated. Thanks!
//Compiled using: g++ ./test.cpp -lpthread -o ./test
#include <stdio.h>
#include <string>
#include <iostream>
#include <time.h>
#include <ctime>
#include <ratio>
#include <chrono>
#include <iomanip>
#include <locale.h>
#include <cstdlib>
#include <pthread.h>
using namespace std;
using namespace std::chrono;
const int NUM_THREADS = 7;
const std::string VALUE_TO_WRITE = "TEST";
unsigned long long int total_iterations = 0;
void * RunBenchmark(void * threadid);
class comma_numpunct: public std::numpunct < char > {
protected: virtual char do_thousands_sep() const {
return ',';
}
virtual std::string do_grouping() const {
return "\03";
}
};
void * RunBenchmark(void * threadid) {
unsigned long long int iterations = 0;
std::string benchmark;
int seconds = 0;
std::locale comma_locale(std::locale(), new comma_numpunct());
std::cout.imbue(comma_locale);
auto start = std::chrono::system_clock::now();
auto end = std::chrono::system_clock::now();
do {
start = std::chrono::system_clock::now();
while ((std::chrono::duration_cast < std::chrono::seconds > (end - start).count() != 1)) {
benchmark = VALUE_TO_WRITE;
iterations += 1;
}
total_iterations += iterations;
iterations = 0;
cout << "Total Iterations: " << std::setprecision(0) << std::fixed << total_iterations << "\r";
} while (1);
}
int main(int argc, char ** argv) {
unsigned long long int iterations = 0;
int tc, tn;
pthread_t threads[NUM_THREADS];
for (tn = 0; tn < NUM_THREADS; tn++) {
tc = pthread_create( & threads[tn], NULL, & RunBenchmark, NULL);
}
return 0;
}
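For what it's worth, the immediate reason nothing happens is that main() returns right after the pthread_create calls, which ends the process before the threads get to run. A minimal sketch of the missing pieces, reusing NUM_THREADS and RunBenchmark from the code above (the stop_requested flag and handle_sigint are my own names, and this is only one way to do it):
#include <csignal>
#include <pthread.h>

volatile sig_atomic_t stop_requested = 0;   // set by the Ctrl-C handler

void handle_sigint(int) {
    stop_requested = 1;                     // each thread polls this flag and exits its loop
}

// Inside RunBenchmark, change `while (1)` to `while (!stop_requested)` and add
// `return NULL;` at the end so the threads can finish cleanly.

int main(int argc, char** argv) {
    std::signal(SIGINT, handle_sigint);     // install the Ctrl-C handler

    pthread_t threads[NUM_THREADS];
    for (int tn = 0; tn < NUM_THREADS; tn++)
        pthread_create(&threads[tn], NULL, &RunBenchmark, NULL);

    // Wait for every thread instead of returning immediately;
    // without this, main() exits and takes the whole process with it.
    for (int tn = 0; tn < NUM_THREADS; tn++)
        pthread_join(threads[tn], NULL);

    return 0;
}
Two other things to note: total_iterations is updated by all the threads with no synchronization, so it should be a std::atomic<unsigned long long> (or protected by a mutex), and end is never refreshed inside the inner timing loop, so the one-second check can never become true.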
I am having a problem trying to serialize an array of unsigned char to a file with GZIP compression using protobuf, while playing with the library.
I think the problem might be with my syntax or a misuse of the API.
I have also tried std::fstream.
FYI, the build environment is Windows 8.1 & VS2013.
scene.proto
syntax = "proto3";
package Recipe;
message Scene
{
repeated int32 imageData = 1 [packed=true];
}
source.cpp
#include <iostream>
#include <fstream>
#include <ostream>
#include <istream>
#include <string>
#include <cstdint>
#include "Scene.pb.h"
#include <google\protobuf\io\zero_copy_stream_impl.h>
#include <google\protobuf\io\gzip_stream.h>
int const _MIN = 0;
int const _MAX = 255;
unsigned int const _SIZE = 65200000;
unsigned int const _COMPRESSION_LEVEL = 10;
void randWithinUnsignedCharSize(uint8_t * buffer, unsigned int size)
{
for (size_t i = 0; i < size; ++i)
{
buffer[i] = _MIN + (rand() % static_cast<int>(_MAX - _MIN + 1));
}
}
using namespace google::protobuf::io;
int main()
{
GOOGLE_PROTOBUF_VERIFY_VERSION;
Recipe::Scene * scene = new Recipe::Scene();
uint8_t * imageData = new uint8_t[_SIZE];
randWithinUnsignedCharSize(imageData, _SIZE);
for (size_t i = 0; i < _SIZE; i++)
{
scene->add_imagedata(imageData[i]);
}
std::cout << "scene->imagedata_size() " << scene->imagedata_size() << std::endl;
{
std::ofstream output("scene.art", std::ofstream::out | std::ofstream::trunc | std::ofstream::binary);
OstreamOutputStream outputFileStream(&output);
GzipOutputStream::Options options;
options.format = GzipOutputStream::GZIP;
options.compression_level = _COMPRESSION_LEVEL;
GzipOutputStream gzipOutputStream(&outputFileStream, options);
if (!scene->SerializeToZeroCopyStream(&gzipOutputStream)) {
std::cerr << "Failed to write scene." << std::endl;
return -1;
}
}
Recipe::Scene * scene1 = new Recipe::Scene();
{
std::ifstream input("scene.art", std::ifstream::in | std::ifstream::binary);
IstreamInputStream inputFileStream(&input);
GzipInputStream gzipInputStream(&inputFileStream);
if (!scene1->ParseFromZeroCopyStream(&gzipInputStream)) {
std::cerr << "Failed to parse scene." << std::endl;
return -1;
}
}
std::cout << "scene1->imagedata_size() " << scene1->imagedata_size() <<std::endl;
google::protobuf::ShutdownProtobufLibrary();
return 0;
}
You seem to have a typo in your code. According to the documentation, the compression level must be in the range 0-9, but you set it to 10.
Your example works for me when corrected to:
unsigned int const _COMPRESSION_LEVEL = 9;
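In context, the corrected stanza from the question then reads (9 is zlib's best compression, 1 its fastest, 0 no compression):
unsigned int const _COMPRESSION_LEVEL = 9;   // zlib accepts levels 0-9 only

GzipOutputStream::Options options;
options.format = GzipOutputStream::GZIP;
options.compression_level = _COMPRESSION_LEVEL;   // now within the valid range
GzipOutputStream gzipOutputStream(&outputFileStream, options);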
I am trying to make a fun program that displays random numbers, but I need to remove the scrollbar so it looks more convincing. I managed to make the program full screen, but I can't remove the vertical scrollbar. Screenshot
Code:
#include <iostream>
#include <Windows.h>
using namespace std;
int main() {
SetConsoleDisplayMode(GetStdHandle(STD_OUTPUT_HANDLE), CONSOLE_FULLSCREEN_MODE, 0);
int output;
bool done = false;
system("color a");
while (!done) {
output = 1 + (rand() % (int)(1000 - 1 + 1));
cout << output;
}
}
There are many ways; one of them is to resize the console's internal buffer so it matches the window size and then use the ShowScrollBar function to remove the scroll bars.
#include <iostream>
#include <Windows.h>
#include <WinUser.h>
using namespace std;
int main() {
SetConsoleDisplayMode(GetStdHandle(STD_OUTPUT_HANDLE), CONSOLE_FULLSCREEN_MODE, 0);
HANDLE hstdout = GetStdHandle(STD_OUTPUT_HANDLE);
CONSOLE_SCREEN_BUFFER_INFO csbi;
GetConsoleScreenBufferInfo(hstdout, &csbi);
csbi.dwSize.X = csbi.dwMaximumWindowSize.X;
csbi.dwSize.Y = csbi.dwMaximumWindowSize.Y;
SetConsoleScreenBufferSize(hstdout, csbi.dwSize);
HWND x = GetConsoleWindow();
ShowScrollBar(x, SB_BOTH, FALSE);
int output;
bool done = false;
system("color a");
while (!done) {
output = 1 + (rand() % (int)(1000 - 1 + 1));
cout << output;
}
}
Another way is to rely on conio.h or another C/C++ header/library which implements user interface functions.
I can't get SDL_mixer to play sound on the Raspberry Pi. The program compiles and builds OK, but all I hear is a short squeak (like static) and then nothing.
Any ideas?
#include <SDL/SDL.h>
#include <SDL/SDL_mixer.h>
#include <stdio.h>
#include <string>
#include <iostream>
int main()
{
Mix_Chunk *snd1 = NULL;
Mix_Music *m = NULL;
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) < 0) {
std::cout << "Something went wrong";
}
if(Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0) {
std::cout << "Kunne ikke loade musikk";
}
snd1 = Mix_LoadWAV("bicycle_bell.wav");
if(snd1 == NULL) {
std::cout << "Fant ikke filen";
}
Mix_PlayChannel(-1, snd1, 0);
Mix_FreeChunk(snd1);
Mix_Quit();
return 1;
}
Solved it. The program terminated before the sound had finished playing.
Adding a simple wait such as:
while (Mix_Playing(-1) != 0);
fixed it. (Mix_Playing takes a channel number; passing -1 checks all channels.)
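Put together, the tail of the program would look roughly like this (a minimal sketch; SDL_Delay keeps the wait loop from spinning at 100% CPU):
int channel = Mix_PlayChannel(-1, snd1, 0);   // -1 = play on the first free channel
while (Mix_Playing(channel) != 0) {           // wait until the chunk has finished
    SDL_Delay(100);
}
Mix_FreeChunk(snd1);                          // only free the chunk after playback is done
Mix_CloseAudio();
Mix_Quit();
SDL_Quit();
return 0;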
Hey everyone, I'm currently trying to figure out how to play back a tone I have generated using a sinusoidal wave.
Here's my code:
#include <iostream>
#include <OpenAL/al.h>
#include <OpenAL/alc.h>
#include <Math.h>
using namespace std;
int main (int argc, char * const argv[]) {
int number = 0;
int i, size;
double const Pi=4*atan(1);
cout << "Enter number of seconds:" << endl;
scanf("%d", &number);
size = 44100*number;
unsigned char buffer [size]; //buffer array
for(i = 0; i < size; i++){
buffer[i] = (char)sin((2*Pi*440)/(44100*i))*127;
}
return 0;
}
Obviously it doesn't do anything at the moment, since I have no idea how to play the buffer.
I don't want to generate a wav file, nor do I want to load one in. I just want to play back the buffer I have generated.
I am currently working on Mac OS X and have tried using OpenAL methods, but I found that alut and alu are no longer part of it, and if I try to use them it turns out it's all deprecated anyway.
I have also tried to include QAudioOutput, but for some reason it does not appear to be anywhere on my Mac.
I just want a simple playback of the tone I've created. Does anyone have anything they can point me to?
Thanks heaps!!!
I've written an example for exactly this. It runs fine with OpenAL under Mac OS X and plays smooth sines. Take a look here:
http://ioctl.eu/blog/2011/03/16/openal-sine-synth/
The code is quite short, so I guess I can add it here as well for the sake of completeness:
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <iostream>
#include <OpenAL/al.h>
#include <OpenAL/alc.h>
#include <unistd.h> // for sleep()
#define CASE_RETURN(err) case (err): return #err
const char* al_err_str(ALenum err) {
switch(err) {
CASE_RETURN(AL_NO_ERROR);
CASE_RETURN(AL_INVALID_NAME);
CASE_RETURN(AL_INVALID_ENUM);
CASE_RETURN(AL_INVALID_VALUE);
CASE_RETURN(AL_INVALID_OPERATION);
CASE_RETURN(AL_OUT_OF_MEMORY);
}
return "unknown";
}
#undef CASE_RETURN
#define __al_check_error(file,line) \
do { \
ALenum err = alGetError(); \
for(; err!=AL_NO_ERROR; err=alGetError()) { \
std::cerr << "AL Error " << al_err_str(err) << " at " << file << ":" << line << std::endl; \
} \
}while(0)
#define al_check_error() \
__al_check_error(__FILE__, __LINE__)
void init_al() {
ALCdevice *dev = NULL;
ALCcontext *ctx = NULL;
const char *defname = alcGetString(NULL, ALC_DEFAULT_DEVICE_SPECIFIER);
std::cout << "Default device: " << defname << std::endl;
dev = alcOpenDevice(defname);
ctx = alcCreateContext(dev, NULL);
alcMakeContextCurrent(ctx);
}
void exit_al() {
ALCdevice *dev = NULL;
ALCcontext *ctx = NULL;
ctx = alcGetCurrentContext();
dev = alcGetContextsDevice(ctx);
alcMakeContextCurrent(NULL);
alcDestroyContext(ctx);
alcCloseDevice(dev);
}
int main(int argc, char* argv[]) {
/* initialize OpenAL */
init_al();
/* Create buffer to store samples */
ALuint buf;
alGenBuffers(1, &buf);
al_check_error();
/* Fill buffer with Sine-Wave */
float freq = 440.f;
int seconds = 4;
unsigned sample_rate = 22050;
size_t buf_size = seconds * sample_rate;
short *samples;
samples = new short[buf_size];
for(int i=0; i<buf_size; ++i) {
samples[i] = 32760 * sin( (2.f*float(M_PI)*freq)/sample_rate * i );
}
/* Download buffer to OpenAL */
alBufferData(buf, AL_FORMAT_MONO16, samples, buf_size * sizeof(short), sample_rate); // size is in bytes
al_check_error();
/* Set-up sound source and play buffer */
ALuint src = 0;
alGenSources(1, &src);
alSourcei(src, AL_BUFFER, buf);
alSourcePlay(src);
/* While sound is playing, sleep */
al_check_error();
sleep(seconds);
/* Dealloc OpenAL */
exit_al();
al_check_error();
return 0;
}
Update: I've found OpenAL a bit too limiting for my needs; in particular I had problems with low-latency playback, which does not appear to be OpenAL's primary domain. Instead, I've found PortAudio very convincing: http://www.portaudio.com/
It supports all major platforms (Mac, Windows, Unix/ALSA) and looks very good. There is an example for sine playback which is far more sophisticated, yet quite simple. Just download the latest release and find the sine-playback sample at test/patest_sine.c.
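For reference, the core of a PortAudio sine playback looks roughly like this (a minimal sketch without error handling; patest_sine.c in the PortAudio release is the authoritative, more complete version):
#include <cmath>
#include <portaudio.h>

struct SineState { double phase; };

// Called by PortAudio whenever it needs more audio: writes one sine sample per frame.
static int sineCallback(const void* /*input*/, void* output, unsigned long frameCount,
                        const PaStreamCallbackTimeInfo* /*timeInfo*/,
                        PaStreamCallbackFlags /*statusFlags*/, void* userData)
{
    SineState* state = static_cast<SineState*>(userData);
    float* out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frameCount; ++i) {
        out[i] = static_cast<float>(0.2 * std::sin(state->phase));
        state->phase += 2.0 * M_PI * 440.0 / 44100.0;   // 440 Hz tone at 44.1 kHz
    }
    return paContinue;
}

int main()
{
    SineState state = { 0.0 };
    Pa_Initialize();
    PaStream* stream = nullptr;
    // Mono, 32-bit float output on the default device, 256 frames per buffer.
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, 44100, 256, sineCallback, &state);
    Pa_StartStream(stream);
    Pa_Sleep(3000);                       // let the tone play for three seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
Compile with something like g++ sine.cpp -lportaudio once PortAudio is installed.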
You will need to go through the OS to play back sounds. It's not as straightforward as you would think. In OSX, you will need to go through CoreAudio.
A better approach would be to use a wrapper library like PortAudio (http://www.portaudio.com/) which will make your code more portable and save you some of the boilerplate needed to get sound out of your program.
Try this (the program uses the Z-transform concept; a complete example that generates DTMF tones using ALSA and compiles on Linux is available here):
/*
* Cosine Samples Generator
*
* Autor: Volnei Klehm
* Data: 04/01/2014
*/
#include <math.h>
#include <stdio.h>
#define S_FREQ 8000 /* Sample frequency; should be greater than 2*sineFrequency.
                       If using audio output, it has to be the same sample frequency
                       used there. */
const float frequency_in_Hertz = 697; /*set output frequency*/
const float generatorContant1 = cosf(2*M_PI*(frequency_in_Hertz/S_FREQ));
const float generatorContant2 = sinf(2*M_PI*(frequency_in_Hertz/S_FREQ));
float GenerateSignal(){
static float Register[2]={1,0};
static float FeedBack;
FeedBack=2*generatorContant1*Register[0]-Register[1];
Register[1]=Register[0];
Register[0]=FeedBack;
return (generatorContant2*Register[1]);
}
int main(void) {
/*generate 300 samples*/
for (int NumberOfSamples = 300; NumberOfSamples > 0; NumberOfSamples--)
printf("\n%f", GenerateSignal());
return 0;
}
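For reference, the two Register values implement the standard digital-oscillator recurrence y[n] = 2*cos(w0)*y[n-1] - y[n-2] with w0 = 2*pi*frequency_in_Hertz/S_FREQ; with the initial values {1, 0} the returned samples work out to sin(n*w0), so the generator produces one sample of a fixed-frequency sinusoid per call without ever calling sinf() inside the loop.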