We're working on a Family Feud game and I wanted to apply changing colors to a group of circles randomly.
I tried using a for loop in the code below, but I know it's wrong.
How do I randomize the colors?
//looping set1
for (x = 0; x <= 15; x++)
{
    setcolor(x);
    sleep(3000);
}
setfillstyle(1, 1);
fillpoly(13, lyt1);
fillpoly(9, lyt2);
fillpoly(9, lyt3);
fillpoly(12, lyt4);
//looping set2
for (x = 0; x <= 15; x++);
{
    setcolor(x);
    sleep(3000);
}
setfillstyle(1, 1);
fillpoly(11, lyt5);
fillpoly(12, lyt6);
fillpoly(13, lyt7);
fillpoly(12, lyt8);
I am assuming you are in MS-DOS (not sure if it is emulated, real, or just a Windows console), but the animation and randomization are done a bit differently.
Due to various restrictions (so it works on every platform and does not use any advanced stuff), the program structure of your main loop should look more like this:
// main loop
const int dt=40;        // [ms] approximate loop iteration time
int col_t=0,col_T=3000; // [ms] time and period for changing the colors
int col;
randomize();
col=random(16);
for (;;)
    {
    // 1. handle keyboard, mouse, joystick... here
    //    do not forget to break; if the exit button is hit, like: if (Key==27) break;
    // 2. update (world object positions, score, game logic, etc.)
    col_t+=dt;
    if (col_t>=col_T)
        {
        col_t=0;
        col=random(16);
        }
    // 3. draw your scene here
    setcolor(col);
    // 4. CPU usage and fps limiter
    sleep(dt); // 40ms -> 25fps (use delay(dt) instead if your compiler's sleep() takes seconds, as Borland's does)
    }
This structure does not need any interrupts, so it is easy for rookies to understand. But games usually need more speed, and event handlers are faster; for that you would need to use ISRs (interrupt service routines) for stuff like the keyboard, the PIT, ...
Using sleep() is not precise, so if you want precise measurement of time you should use either the PIT or RDTSC, but that could create incompatibilities in emulated environments...
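As a middle ground, you can at least measure how much real time your loop actually takes by reading the PIT-driven BIOS tick counter. The sketch below is only an illustration and assumes Borland's bios.h with its biostime() call; the resolution is coarse (one tick is roughly 55 ms), so treat it as a stable time base rather than a precise one:

#include <bios.h>
#include <dos.h>
#include <stdio.h>

int main()
    {
    long t0 = biostime(0, 0L);      // read the current BIOS tick count (~18.2 ticks per second)
    delay(1000);                    // something that takes time (one loop iteration in your case)
    long t1 = biostime(0, 0L);
    long ms = (t1 - t0) * 55L;      // one tick is roughly 54.9 ms
    printf("elapsed: ~%ld ms\n", ms);
    return 0;
    }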
I haven't coded in MS-DOS for ages, so I am not sure which header declares the random and randomize routines (they might also be called Random, Randomize); my bet is stdlib.h. Simply type random into the program, place the cursor on it and hit ALT+F1 to bring up the context help; there you will read which header to include. Also check there whether to use random(15) or random(16) (random(num) should return values in 0 .. num-1, so random(16) covers all 16 colors).
If you are coding a game then you will probably need some menus. Either incorporate them into the main loop, have a separate main loop for each game page and use goto, or encode each page as a separate function (see the sketch after this paragraph).
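For example, a minimal sketch of the page-per-function approach; the page names and the placeholder returns are purely illustrative, not taken from your code:

enum Page { PAGE_MENU, PAGE_GAME, PAGE_EXIT };

Page menu_page()
    {
    // draw the menu and poll the keyboard in a loop;
    // return PAGE_GAME when "Play" is chosen, PAGE_EXIT when "Quit" is chosen
    return PAGE_EXIT;   // placeholder so the sketch compiles
    }

Page game_page()
    {
    // the main game loop from above goes here;
    // return PAGE_MENU when ESC is hit
    return PAGE_MENU;   // placeholder
    }

int main()
    {
    Page page = PAGE_MENU;
    while (page != PAGE_EXIT)
        {
        if (page == PAGE_MENU) page = menu_page();
        else                   page = game_page();
        }
    return 0;
    }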
Have a look at a few related QAs of mine:
adding timer to game Turbo C++ - MS-DOS PIT ISR handler example
What is the best way to move an object on the screen? - one of my MS-DOS games (in assembler) including menus, 2D sprite graphics, a keyboard interrupt handler, etc.
Display an array of color in C - direct access to VESA/VGA graphics (no BGI) in C/C++
avr code not working i want to generate random numbers help please - simple custom pseudo random number generator (C/C++) - and 2D white noise effect "No TV signal"...
And the setcolor docs.
I have a grid where I'm using one sf::RectangleShape object, and by changing its parameters inside a loop I can populate a wide area with rectangles. Now I want to make the grid visible to assist with placing the rectangles.
But here comes the issue.
If I use rectPtr->getRect()->setOutlineThickness(-0.5f); (note: it's a pointer created from a custom class), the final program becomes really slow and unstable for unknown reasons. It runs just fine with setOutlineThickness(0.f);. I'm using a negative value because of the grid, but that is not the problem; it does the same with positive values.
Is there some way to use the setOutlineThickness() function without slowing the program down?
Might it be some kind of bug?
I'm using an sf::Transform 3x3 matrix at window.draw(); could there be some conflict there?
Average FPS with setOutlineThickness(-1.f) = 16 fps
Average FPS with setOutlineThickness(0.f) = 60 fps
Many thanks
I've made a game, but I don't know if it will behave the same way on other devices. For example, if a computer has a faster CPU, will the player and enemies move faster? If so, is there a way in SFML to account for the CPU speed available? The way the player and enemies move in my program is to:
1. Check if the key is pressed.
2. If so: move(x,y);
Or is there a way to make the move function take this into account?
Thank you!
It sounds like you are worried about the physics of your game being affected by the game's framerate. Your intuition is serving you well! This is a significant problem, and one you'll want to address if you want your game to feel professional.
According to Glenn Fiedler in his Gaffer on Games article 'Fix Your Timestep!'
[A game loop that handles time improperly can make] the behavior of your physics simulation [depend] on the delta time you pass in. The effect could be subtle as your game having a slightly different “feel” depending on framerate or it could be as extreme as your spring simulation exploding to infinity, fast moving objects tunneling through walls and the player falling through the floor!
Logic dictates that you must detach the dependencies of your update from the time it takes to draw a frame. A simple solution is to:
Pick an amount of time which can be safely processed (your timestep)
Add the time passed every frame into an accumulated pool of time
Process the time passed in safe chunks
In pseudocode:
time_pool = 0;
timestep = 0.01; // or whatever is safe for you!
old_time = get_current_time();

while (!closed) {
    new_time = get_current_time();
    time_pool += new_time - old_time;
    old_time = new_time;

    handle_input();

    while (time_pool > timestep)
    {
        consume_time(timestep); // update your gamestate
        time_pool -= timestep;
    }
    // note: leftover time is not lost, and will be left in time_pool

    render();
}
It is worth noting that this method has its own problem: future frames have to consume the time produced by calls to consume_time. If a call to consume_time takes too long, the time produced might require two calls be made next frame - then four - then eight - and so on. If you use this method, you will have to make sure consume_time is very efficient, and even then it would be best to have a contingency plan.
For a more thorough treatment I encourage you to read the linked article.
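Since you are using SFML, here is a minimal sketch of the same loop written with sf::Clock; the window setup and the commented-out update call are placeholders, not taken from your code:

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Fixed timestep");

    const sf::Time timestep = sf::seconds(1.f / 100.f); // safe update step
    sf::Time time_pool = sf::Time::Zero;
    sf::Clock clock;

    while (window.isOpen())
    {
        // accumulate the real time that passed since the last frame
        time_pool += clock.restart();

        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        // consume the accumulated time in fixed chunks
        while (time_pool >= timestep)
        {
            // update(timestep.asSeconds()); // your game-state update goes here
            time_pool -= timestep;
        }

        window.clear();
        // draw your scene here
        window.display();
    }
    return 0;
}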
I am currently working on a demo, just to get to grips with how to make a game. It might turn into something in the future, but for now it's just for learning.
My demo is, I guess, influenced by The Legend of Zelda. It has that top-down look that TLoZ has.
My sprite is a 32x32 pixel image, the demo runs at 60fps and I have already worked out how fast I want my sprite to animate using PyxelEdit. Each animation frame is being displayed every 170ms when my character walks. He moves 4 pixels every frame, so he is moving and animating at the speed I want him to.
The problem I have is that I want my character to finish the animation loop when the key has been released, and he won't. When I release a movement key, he will sometimes stop on the wrong animation frame, for example with his left or right foot forward when I want him to be standing still. I'm just not sure how to do it. I've tried checking the animation counter when the Event::KeyReleased event occurs and incrementing it until it reaches a certain number, so that it stops on, say, frame 1 where he's standing still; it just doesn't work.
I don't think this requires a look at my code; I just need a general idea of how to make sure that when a movement key is released, he keeps animating until he is on frame 1, moving a certain number of pixels each step until he stops.
You could use an FSM (finite state machine), something along the lines of:
// Visual states of the character.
enum class State { WALKING, STANDING, ATTACK };

State character_state = State::STANDING;

// Change on input (or other things like impact).
if (input.up() || input.down() || input.left() || input.right())
    character_state = State::WALKING;
else
    character_state = State::STANDING;

// Render based on the current state.
switch (character_state)
{
    case State::WALKING:
        render(cycle_walk_animation(frame_time));
        break;
    case State::STANDING:
        render(standing_still_frame());
        break;
}
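To make the character finish his step before standing still, which is what the question asks for, one way to extend this idea is an extra transitional state. The sketch below is only an illustration; the state names, the 4-frame cycle and frame 1 as the idle pose are assumptions:

#include <cstdio>

enum class State { WALKING, STOPPING, STANDING };

struct Character
{
    State state = State::STANDING;
    int   frame = 1;                      // current animation frame; 1 = idle pose

    void update(bool move_key_down)
    {
        if (move_key_down)
            state = State::WALKING;
        else if (state == State::WALKING)
            state = State::STOPPING;      // key released: finish the cycle first

        switch (state)
        {
            case State::WALKING:
                frame = (frame + 1) % 4;  // keep cycling while walking
                break;
            case State::STOPPING:
                frame = (frame + 1) % 4;  // keep stepping through the frames...
                if (frame == 1)           // ...until the idle frame is reached
                    state = State::STANDING;
                break;
            case State::STANDING:
                frame = 1;                // stay on the idle pose
                break;
        }
    }
};

int main()
{
    Character c;
    c.update(true);                       // key held: walking
    c.update(false);                      // key released: keeps animating until frame 1
    c.update(false);
    std::printf("frame = %d\n", c.frame);
    return 0;
}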
I've done this with 2D and 3D.
If I understand correctly, you need something like this:
// The game loop
while (game_running)
{
    // ...
    // Your code
    // ...

    // Advance the animation while moving, or if not, cycle through
    // the animation frames until frame 1 is reached
    if (keyPressed(somekey) || currentAnimationFrame != 1)
    {
        advanceAnimationFrame();
    }

    // ...
    // Your code
    // ...
}
Of course, this is not SFML code, but it should get the general idea across.
I'm trying to create a program, using Qt (C++), which can record audio from my microphone using QAudioInput and QIODevice.
Now I want to visualize the signal.
Any help would be appreciated. Thanks!
[Edit1] - copied from your comment (by Spektre)
I have only one buffer for both channels.
I use Qt; the values of the two channels are interleaved in the buffer.
This is how I separate the values:
for (int i = 0, j = 0; i < countSamples; ++j)
{
    YVectorRight[j] = Samples[i++];
    YVectorLeft[j]  = Samples[i++];
}
Afterwards I plot YVectorRight and YVectorLeft. I don't see how to trigger on only one channel.
Hehe, I did this a few years back for students during class. I hope you know how oscilloscopes work, so here are just the basics:
timebase
fsmpl is the input-signal sampling frequency [Hz].
Try to use as big a value as possible (44100, 48000, ...?); the maximum detectable frequency is then fsmpl/2, which gives you the top of your timebase axis (e.g. at 48000 Hz sampling you can see up to 24 kHz). The low limit is given by your buffer length.
draw
Create a function that will render your sampling buffer from a specified start address (inside the buffer) with:
Y-scale ... amplitude setting
Y-offset ... vertical beam position
X-offset ... time shift or horizontal position
This can be done by modifying the start address or by just X-offsetting the curve (see the sketch after this list).
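A minimal sketch of such a draw routine; the drawLine callback, the sample type and the scaling are assumptions for illustration, not from my actual code:

#include <vector>
#include <cstdio>

// assumed line-drawing primitive: replace with your GUI/graphics call
void drawLine(int x0, int y0, int x1, int y1)
{
    std::printf("line (%d,%d)-(%d,%d)\n", x0, y0, x1, y1);
}

// render samples[start .. start+width-1] as a polyline
// yscale ... amplitude setting [pixels per sample unit]
// yofs   ... vertical beam position [pixels]
// xofs   ... horizontal position / time shift [pixels]
void drawChannel(const std::vector<int>& samples, int start,
                 int width, float yscale, int yofs, int xofs)
{
    for (int x = 1; x < width; x++)
    {
        int i = start + x;
        if (i >= (int)samples.size()) break;
        int y0 = yofs - (int)(samples[i - 1] * yscale);
        int y1 = yofs - (int)(samples[i]     * yscale);
        drawLine(xofs + x - 1, y0, xofs + x, y1);
    }
}

int main()
{
    std::vector<int> samples = { 0, 10, 20, 10, 0, -10, -20, -10, 0 };
    drawChannel(samples, 0, (int)samples.size(), 2.0f, 100, 0);
    return 0;
}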
Level
Create a function which emulates the Level (trigger) functionality: search the buffer from the start address and stop where the amplitude crosses the Level. You can have more modes, but these are the basics you should implement:
amplitude: ( < lvl ) -> ( > lvl )
amplitude: ( > lvl ) -> ( < lvl )
There are many other possibilities for the Level, like glitch, relative edge, ...
Preview
You can put all of this together, for example, like this: you have a start-address variable, so sample data into some buffer continuously, and on a timer call Level with the start address (and update it). Then call draw with the new start address and add the timebase period to the start address (of course in terms of your samples).
multichannel
I use Line IN, so I have stereo input (A, B = left, right), therefore I can add some other stuff like:
Level source (A,B,none)
render mode (timebase,Chebyshev (Lissajous curve if closed))
Chebyshev = the x axis is A and the y axis is B; this creates the famous Chebyshev/Lissajous images, which are good for dependent sinusoidal signals, usually forming circles, ellipses, distorted loops, ...
misc stuff
You can add filters for the channels, emulating capacitance or grounding of the input, and much more.
GUI
You need many settings; I prefer analog knobs instead of buttons/scrollbars/sliders, just like on a real oscilloscope:
(semi)analog values: Amplitude, TimeBase, Level, X-offset, Y-offset
discrete values: level mode (/, \), level source (A, B, -), each channel (direct on, ground, off, capacity on)
Here are some screenshots of my oscilloscope:
Here is a screenshot of my generator:
And finally, after adding some FFT, also a spectrum analyser:
PS.
I started with DirectSound, but it sucks a lot because of buggy/non-functional buffer callbacks.
I use the WinAPI WaveIn/Out for all sound in my apps now. After a few quirks with it, it is the best for my needs and has the best latency (DirectSound is more than 10 times slower), but for an oscilloscope that has no real merit (I need low latency mostly for emulators).
Btw. I have these three apps as linkable C++ subwindow classes (Borland)
and last used them with my ATMega168 emulator for debugging my sensor-less BLDC driver.
Here you can try my oscilloscope, generator and spectrum analyser. If you are confused by the download, read the comments below this post; btw the password is: "oscill"
Hope it helps; if you need help with anything, just comment.
[Edit1] trigger
You trigger all channels at once, but the trigger condition is usually checked on just one of them. The implementation is simple; for example, let the trigger condition be the A (left) channel rising above the level, so:
First make continuous playback with no trigger; you wrote that it looks like this:
for (int i = 0, j = 0; i < countSamples; ++j)
{
    YVectorRight[j] = Samples[i++];
    YVectorLeft[j]  = Samples[i++];
}
// here draw or FFT,draw buffers YVectorRight,YVectorLeft
Add trigger
To add the trigger condition you just find a sample that meets it and start drawing from it, so you change the code to something like this:
// static or global variables
static int  i0 = 0;            // actual start for drawing
static bool _copy_data = true; // flag that new samples need to be copied
static int  level = 35;        // trigger level value; the datatype should be the same as your samples...
int i, j;
for (;;)
{
    // copy new samples to buffer if needed
    if (_copy_data)
        for (_copy_data = false, i = 0, j = 0; i < countSamples; ++j)
        {
            YVectorRight[j] = Samples[i++];
            YVectorLeft[j]  = Samples[i++];
        }
    // now search for the new start
    for (i = i0 + 1; i < (countSamples >> 1); i++)
        if (YVectorLeft[i - 1] <  level) // lower than level before i
         if (YVectorLeft[i]    >= level) // higher than (or equal to) level at i
         {
             i0 = i;
             break;
         }
    // if there is not a whole view left after the trigger, refill the buffer and try again
    if (i0 >= (countSamples >> 1) - view_samples) { i0 = 0; _copy_data = true; continue; }
    break;
}
// here draw or FFT,draw the buffers YVectorRight,YVectorLeft from the i0 position
view_samples is the viewed/processed size of data (for one or more screens); it should be a few times smaller than (countSamples>>1).
This code can lose one screen at the border area; to avoid that you would need to implement cyclic buffers (rings), but for starters even this is OK.
Just encode all the trigger conditions through some ifs or a switch statement (see the sketch below).
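A small sketch of what that switch could look like; the TriggerMode enum and the crossed()/find_trigger() helpers are illustrative names, not from my actual code:

#include <vector>
#include <cstdio>

// the two basic trigger conditions listed above
enum class TriggerMode { RISING, FALLING };

// true if the signal crosses 'level' between consecutive samples a and b in the given mode
bool crossed(int a, int b, int level, TriggerMode mode)
{
    switch (mode)
    {
        case TriggerMode::RISING:  return (a <  level) && (b >= level);
        case TriggerMode::FALLING: return (a >= level) && (b <  level);
    }
    return false;
}

// find the first sample index where the trigger condition is met, or -1 if none
int find_trigger(const std::vector<int>& ch, int start, int level, TriggerMode mode)
{
    for (int i = start + 1; i < (int)ch.size(); i++)
        if (crossed(ch[i - 1], ch[i], level, mode))
            return i;
    return -1;
}

int main()
{
    std::vector<int> left = { 0, 10, 40, 60, 30, 5, -10 };
    int i0 = find_trigger(left, 0, 35, TriggerMode::RISING);
    std::printf("trigger at index %d\n", i0); // expected: 2 (10 -> 40 crosses 35)
    return 0;
}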
I'm new to C++ and DirectX; I come from XNA.
I have developed a game like Fly The Copter.
What I've done is create a class named Wall.
While the game is running I draw all the walls.
In XNA I stored the walls in an ArrayList, and in C++ I've used a vector.
In XNA the game runs fast, but in C++ it is really slow.
Here's the C++ code:
void GameScreen::Update()
{
    //Update Walls
    int len = walls.size();
    for (int i = wallsPassed; i < len; i++)
    {
        walls.at(i).Update();
        if (walls.at(i).pos.x <= -40)
            wallsPassed += 2;
    }
}

void GameScreen::Draw()
{
    //Draw Walls
    int len = walls.size();
    for (int i = wallsPassed; i < len; i++)
    {
        if (walls.at(i).pos.x < 1280)
            walls.at(i).Draw();
        else
            break;
    }
}
In the Update method I decrease the X value by 4.
In the Draw method I call sprite->Draw (ID3DXSprite).
That is the only code that runs in the game loop.
I know this is bad code; if you have an idea of how to improve it, please help.
Thanks, and sorry about my English.
Try replacing all occurrences of at() with the [] operator. For example:
walls[i].Draw();
and then turn on all optimisations. Both [] and at() are function calls - to get the maximum performance you need to make sure that they are inlined, which is what upping the optimisation level will do.
You can also do some minimal caching of a wall object - for example:
for (int i = wallsPassed; i < len; i++)
{
    Wall & w = walls[i];
    w.Update();
    if (w.pos.x <= -40)
        wallsPassed += 2;
}
Try to narrow down the cause of the performance problem (also termed profiling). I would try drawing only one object while continuing to update all the objects. If it's suddenly faster, then it's a DirectX drawing problem.
Otherwise try drawing all the objects but updating only one wall. If that's faster, then your update() function may be too expensive.
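If you want to put numbers on it, a rough sketch of timing the two phases with the Win32 high-resolution counter could look like this; the commented-out Update/Draw calls stand in for your own methods:

#include <windows.h>
#include <cstdio>

// current time in milliseconds using the high-resolution performance counter
double now_ms()
{
    LARGE_INTEGER freq, counter;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&counter);
    return 1000.0 * (double)counter.QuadPart / (double)freq.QuadPart;
}

int main()
{
    double t0 = now_ms();
    // gameScreen.Update();   // your update phase goes here
    double t1 = now_ms();
    // gameScreen.Draw();     // your draw phase goes here
    double t2 = now_ms();

    std::printf("update: %.3f ms, draw: %.3f ms\n", t1 - t0, t2 - t1);
    return 0;
}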
How fast is 'fast'?
How slow is 'really slow'?
How many sprites are you drawing?
How big is each one as an image file, and in pixels drawn on-screen?
How does performance scale (in XNA/C++) as you change the number of sprites drawn?
What difference do you get if you draw without updating, or vice versa?
Maybe you have just forgotten to turn on release mode :) I had some problems with that in the past; I thought my code was very slow when it was only because of debug mode. If that's not it, you could have a problem in the rendering part, or with a huge number of objects. The code you provided looks good...
Have you tried multiple buffers (a.k.a. double buffering) for the bitmaps?
The typical scenario is to draw in one buffer, then while the first buffer is copied to the screen, draw in a second buffer.
Another technique is to have a huge "logical" screen in memory. The portion drawn on the physical display is a viewport, i.e. a view into a small area of the logical screen. Moving the background (or screen) then just requires a copy on the part of the graphics processor (see the sketch below).
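As a plain-CPU illustration of that idea (the buffer sizes and the pixel type are made up for the example; a graphics processor would do the copy for you):

#include <vector>
#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    // a large "logical" screen and a smaller physical display buffer
    const int LOGICAL_W = 4096, LOGICAL_H = 720;
    const int SCREEN_W  = 1280, SCREEN_H  = 720;

    std::vector<uint32_t> logical(LOGICAL_W * LOGICAL_H, 0);
    std::vector<uint32_t> screen(SCREEN_W * SCREEN_H, 0);

    int scroll_x = 300; // current horizontal scroll position of the viewport

    // copy the viewport row by row; scrolling only changes scroll_x
    for (int y = 0; y < SCREEN_H; y++)
        std::memcpy(&screen[y * SCREEN_W],
                    &logical[y * LOGICAL_W + scroll_x],
                    SCREEN_W * sizeof(uint32_t));

    std::printf("copied viewport at x=%d\n", scroll_x);
    return 0;
}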
You can aid batching of the sprite draw calls. Presumably your Draw call calls your only instance of ID3DXSprite::Draw with the relevant parameters.
You can get much improved performance by calling ID3DXSprite::Begin (with the D3DXSPRITE_SORT_TEXTURE flag set) and then calling ID3DXSprite::End when you've done all your rendering. ID3DXSprite will then sort all your sprite calls by texture to decrease the number of texture switches and batch the relevant calls together. This will improve performance massively.
It's difficult to say more, however, without seeing the internals of your Update and Draw calls. The above is only a guess; a sketch of the call pattern follows.
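For illustration, a rough sketch of that Begin/End pattern; it assumes an already-initialized ID3DXSprite named sprite, a per-wall texture, and a walls vector, none of which come from the question:

#include <d3dx9.h>
#include <vector>

struct Wall
{
    IDirect3DTexture9* texture; // texture used by this wall
    D3DXVECTOR3        pos;     // screen position
};

// draw all visible walls inside one Begin/End pair so D3DX can sort and batch them
void DrawWalls(ID3DXSprite* sprite, const std::vector<Wall>& walls)
{
    sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE);

    for (size_t i = 0; i < walls.size(); i++)
    {
        const Wall& w = walls[i];
        if (w.pos.x < 1280) // same visibility test as in the question
            sprite->Draw(w.texture, NULL, NULL, &w.pos, D3DCOLOR_XRGB(255, 255, 255));
    }

    sprite->End();
}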
Drawing every single wall with a separate draw call is a bad idea. Try to batch the data into a single vertex buffer/index buffer and submit it in a single draw call; that's a saner approach.
Anyway, to get an idea of WHY it is slow, profile it with some CPU and GPU tools (PerfHUD, Intel GPA, etc...) to find out first of all WHAT the bottleneck is (the CPU or the GPU). Then you can work on alleviating the problem.
The lookups into your list of walls are unlikely to be the source of your slowdown. The cost of drawing objects in 3D will typically be the limiting factor.
The important parts are your draw code, the flags you used to create the DirectX device, and the flags you use to create your textures. My stab in the dark... check that you initialize the device as HAL (hardware 3D) rather than REF (software 3D); a sketch of that call follows below.
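A hedged sketch of the device-creation call with the hardware (HAL) device and hardware vertex processing; the hwnd parameter is assumed to be your existing window handle:

#include <d3d9.h>

// create a hardware (HAL) device; returns NULL on failure
IDirect3DDevice9* CreateHalDevice(IDirect3D9* d3d, HWND hwnd)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;
    pp.hDeviceWindow    = hwnd;

    IDirect3DDevice9* device = NULL;
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT,
                                   D3DDEVTYPE_HAL,  // hardware rasterizer, not D3DDEVTYPE_REF
                                   hwnd,
                                   D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                   &pp,
                                   &device);
    return SUCCEEDED(hr) ? device : NULL;
}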
Also, how many sprites are you drawing? Each draw call has a fair amount of overhead. If you make more than a couple hundred per frame, that will be your limiting factor.