C++: Best way to "share screens" over a socket [closed]

In my current app I need to share screens, à la Skype or Discord. I'd prefer not to use external libraries, but I will if I have to.
So far I have been sending screenshots as down-scaled bitmaps over TCP sockets and repainting the window every few milliseconds. This is, of course, an approach I knew was doomed from the start. Is there any API that could save me?
Any help is appreciated.

While I haven't implemented it myself, I believe the usual approach is to break the screen into 16x16 pixel blocks. Keep the previous screenshot, take a new one, compare which blocks have changed, and send only the 16x16 blocks that contain changes.
You can further improve performance with a change threshold: if fewer than x pixels have changed in a block, or if the cumulative difference between corresponding pixels in a block is below some threshold, don't send that block yet.
The blocks are also often compressed with a lossy compression scheme that greatly shrinks the amount of data you need to send per block. The blocks are usually sent with chroma subsampling (e.g. 4:2:2), meaning the colour-difference channels are stored at half the resolution of the luma channel. This matches how the human visual system works, and it explains why purely red or purely blue areas sometimes get blockiness or fringing around them when screen sharing.
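Here is a minimal sketch of the block-diff idea, assuming both frames are same-sized 32-bit RGBA bitmaps already in memory; the `DirtyBlock` struct, block size, and threshold are illustrative choices, not part of any particular API:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct DirtyBlock {
    int x, y;                      // top-left corner of the block in pixels
    std::vector<uint8_t> pixels;   // raw RGBA data for this block, row by row
};

// Compare two same-sized RGBA frames and collect the 16x16 blocks whose
// cumulative per-channel difference exceeds a threshold.
std::vector<DirtyBlock> diff_frames(const std::vector<uint8_t>& prev,
                                    const std::vector<uint8_t>& curr,
                                    int width, int height,
                                    int blockSize = 16,
                                    long threshold = 1000)
{
    std::vector<DirtyBlock> dirty;
    for (int by = 0; by < height; by += blockSize) {
        for (int bx = 0; bx < width; bx += blockSize) {
            const int bw = std::min(blockSize, width  - bx);
            const int bh = std::min(blockSize, height - by);
            long diff = 0;
            for (int y = 0; y < bh; ++y) {
                const size_t row = (size_t)(by + y) * width * 4 + (size_t)bx * 4;
                for (int i = 0; i < bw * 4; ++i)
                    diff += std::abs(int(curr[row + i]) - int(prev[row + i]));
            }
            if (diff < threshold)
                continue;                        // block barely changed: skip it
            DirtyBlock b{bx, by, {}};
            b.pixels.reserve((size_t)bw * bh * 4);
            for (int y = 0; y < bh; ++y) {
                const size_t row = (size_t)(by + y) * width * 4 + (size_t)bx * 4;
                b.pixels.insert(b.pixels.end(), curr.begin() + row,
                                                curr.begin() + row + (size_t)bw * 4);
            }
            dirty.push_back(std::move(b));       // only this block goes on the wire
        }
    }
    return dirty;
}
```

On the wire you would send each dirty block's coordinates plus its (ideally lossily compressed) pixel data, and the receiver patches just those rectangles into its copy of the frame.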

Related

Store static environment in chunks [closed]

I have terrain separated into chunks and I would like to scatter environment objects (for example rocks, trees, etc.) randomly in each chunk.
My question is about how to implement such a system in OpenGL.
What I have tried:
Solution: Draw the environment with instancing once for the whole terrain (not per chunk).
Problem: I expect a chunk to sometimes take a moment to load, and because I am using threads the environment would appear to float before the terrain underneath it arrives.
Solution: Draw the environment with instancing for each chunk.
Problem: To draw each chunk I would need to bind the chunk's VBO, draw the chunk, then bind the environment's VBO (and probably its VAO) and draw it. I don't want that many glBindBuffer calls because I have heard they are slow (please correct me if I am wrong).
(Not tried) Solution: Somehow merge the terrain vertices with the environment and draw them together.
Problem: My terrain is drawn with GL_TRIANGLE_STRIP, which is the first obstacle; the second is that I don't know how well it would perform.
I tried looking for solutions online but didn't find any that relate to chunks.
Does anyone know how other games that use chunks handle this? Is there a way to do it without a large performance cost?
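For what it's worth, one VAO switch per chunk is normally cheap compared with the draw calls themselves. Here is a minimal sketch of the second option, assuming each chunk already owns a VAO for its terrain strip and another VAO for its instanced environment mesh; the `Chunk` struct and all field names are illustrative, not from the question:

```cpp
#include <GL/glew.h>   // or whichever GL loader you already use
#include <vector>

struct Chunk {
    GLuint  terrainVao    = 0;  // terrain strip for this chunk
    GLsizei stripVertices = 0;
    GLuint  envVao        = 0;  // rock/tree mesh with a per-instance transform attribute
    GLsizei envIndexCount = 0;
    GLsizei envInstances  = 0;
    bool    loaded        = false; // set by the loading thread once uploads are done
};

void drawChunks(const std::vector<Chunk>& chunks)
{
    for (const Chunk& c : chunks) {
        if (!c.loaded)
            continue;  // terrain and environment appear together, never "floating"

        glBindVertexArray(c.terrainVao);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, c.stripVertices);

        glBindVertexArray(c.envVao);
        glDrawElementsInstanced(GL_TRIANGLES, c.envIndexCount,
                                GL_UNSIGNED_INT, nullptr, c.envInstances);
    }
    glBindVertexArray(0);
}
```

Two VAO binds per visible chunk is a small amount of state change; the more common bottleneck is the number of draw calls, which instancing already keeps low.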

Reading black and white pixels as an array from a JPEG in C and C++ [closed]

I'm trying to learn how to read a JPEG image as an array of pixels in C++ or C. So far I've learned that I have to use an external library such as libjpeg.
I've been told that a JPEG is stored in an RGB structure where each pixel gives 3 values. Is this true? And if so, how would I read the values for a purely black and white image?
The purpose of this question is that I am trying to get a pointer to the top right corner of a white square in an otherwise black picture.
If someone could show me how to read out the values I'd get in this situation, so I can assign this pointer, I would be grateful.
Let's suppose you run with libjpeg. You'll allocate a buffer and then call jpeg_read_scanlines a sufficient number of times to get all of your decompressed image data into memory. You can read scanlines (rows) individually and reformat them as needed. If the image is grayscale, the RGB values will all be equal, so you can just read one of them.
Paul Bourke's site has some pretty good usage examples of libjpeg.
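For reference, here is a minimal sketch of that flow with libjpeg, assuming the file is a valid JPEG and skipping proper error handling (a real program would install its own error handler instead of relying on the default, which calls exit()):

```cpp
#include <cstdio>
#include <stdexcept>
#include <vector>
#include <jpeglib.h>   // link with -ljpeg

// Decode a JPEG into a single grayscale byte per pixel (0 = black, 255 = white).
std::vector<unsigned char> read_jpeg_gray(const char* path, int& width, int& height)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) throw std::runtime_error("cannot open file");

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.out_color_space = JCS_GRAYSCALE;   // ask libjpeg for one value per pixel
    jpeg_start_decompress(&cinfo);

    width  = cinfo.output_width;
    height = cinfo.output_height;
    std::vector<unsigned char> pixels((size_t)width * height);

    // Read one scanline (row) at a time into the output buffer.
    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char* row = &pixels[(size_t)cinfo.output_scanline * width];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
    return pixels;
}
```

With the image in `pixels`, finding the top right corner of the white square is just a matter of scanning rows from the top and, within each row, columns from the right, until you hit a value above some brightness threshold.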

Minesweeper C++ [closed]

I've written a Minesweeper game in C++, and the core game is complete.
There are three things I need to ask.
1. Currently my mines are placed at random positions. I was wondering whether this is true of the actual game: are the mines random, or is there some specific pattern or algorithm for placing them?
2. When I play Minesweeper on Windows 7 I never see a 0, but in my program there are cases where all 8 neighbours are non-mines. What should I display then? I want the game to be as close to the Windows version as possible.
3. I think this may be related to 2 above: on Windows 7, clicking one cell sometimes reveals multiple cells. I want to do this in my program, but I don't know the controlling logic behind it. When is this supposed to happen, and when it does, how do I know how many and which cells to open up?
On a related note, my current program is text based (in Code::Blocks), and currently I know only C++. What would I need to learn to make the game interactive?
The first guess is never a mine, so mine placement must be delayed until after the first click. As far as I am aware, mines are otherwise placed pseudo-randomly.
When a revealed square has no adjacent mines, all of its adjacent squares are revealed.
On the versions I have played, left-clicking and right-clicking together on a location that already has a sufficient number of flags placed around it also reveals all adjacent squares.
Yes, they are placed at random. You need to make sure you don't place two mines in the same spot.
A 0 is displayed as a blank in Windows.
When you expose a square with no adjacent mines, it automatically exposes all 8 of its neighbours. If any of those also have zero adjacent mines, they are exposed too, until an entire region is exposed.
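That cascading reveal is a flood fill. Here is a minimal sketch, assuming an illustrative board representation with `mine` and `revealed` grids (your own data structures will differ):

```cpp
#include <queue>
#include <utility>
#include <vector>

const int W = 9, H = 9;   // illustrative board size
std::vector<std::vector<bool>> mine(H, std::vector<bool>(W, false));
std::vector<std::vector<bool>> revealed(H, std::vector<bool>(W, false));

int adjacentMines(int r, int c)
{
    int n = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            int rr = r + dr, cc = c + dc;
            if ((dr || dc) && rr >= 0 && rr < H && cc >= 0 && cc < W && mine[rr][cc])
                ++n;
        }
    return n;
}

// Reveal (r, c); if it has no adjacent mines, keep revealing its neighbours
// until the whole zero-region and its numbered border are open.
void reveal(int r, int c)
{
    std::queue<std::pair<int, int>> q;
    q.push({r, c});
    while (!q.empty()) {
        auto [cr, cc] = q.front();
        q.pop();
        if (cr < 0 || cr >= H || cc < 0 || cc >= W) continue;
        if (revealed[cr][cc] || mine[cr][cc]) continue;
        revealed[cr][cc] = true;
        if (adjacentMines(cr, cc) != 0) continue;    // numbered cell: stop here
        for (int dr = -1; dr <= 1; ++dr)             // blank cell: spread to all 8 neighbours
            for (int dc = -1; dc <= 1; ++dc)
                if (dr || dc) q.push({cr + dr, cc + dc});
    }
}
```

Numbered cells are revealed but do not propagate further, which is exactly why a blank region opens up with a border of numbers around it.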

How can I programmatically identify altered frames from a video [closed]

A video can be edited by deleting frames from it, either consecutive frames or frames chosen at random.
We need to detect tampered videos, and we hope to do this by detecting frames that have been altered; put simply, we need an algorithm to identify deleted frames. We are building a tamper-detection tool for video surveillance, for use by law enforcement and the courts.
Is there any method to identify that frames of a video have been deleted by a malicious attacker? There are existing approaches such as watermarking and digital signatures, but we need a concrete algorithm for detecting the missing frames.
In general, whatever image sequence I am handed, some or all of it could have come from a real camera, from Photoshop or from a 3D renderer, and there is no general purpose image processing technique that will be able to tell the difference based on analysis of the image content alone, either well enough to stand up in court, or, indeed, at all.
You'll need to embed some easy to recognise yet difficult to fake, relocate or tamper with signal into the video at recording time. You've tagged the question "opencv", but the fields you want to investigate are cryptography and watermarking, not computer vision.
Have the video surveillance equipment use public key crypto to visibly watermark each frame with an identifier unique to the piece of equipment, a timestamp, a frame sequence number and a hash or other suitable function of the frame image, using a scheme robust to compression.
Tampering with the video sequence will then require either knowledge of the device's private key, or removal of the watermark. This isn't great (keeping the private key secret will be a logistical headache) but is probably the best solution you can get.
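To make the idea concrete, here is an illustrative sketch of the record you might embed per frame and the check that exposes deletions. The `FrameRecord` struct is hypothetical, and the toy hash is a stand-in only: a real system would use a cryptographic hash (e.g. SHA-256) and verify a real signature made with the device's private key.

```cpp
#include <cstdint>
#include <vector>

// One record embedded (or watermarked) per frame at recording time.
struct FrameRecord {
    uint64_t deviceId;
    uint64_t sequence;    // strictly increasing frame counter
    uint64_t timestampMs;
    uint64_t chain;       // hash of (this frame's pixels + previous record's chain)
    // ...plus a signature over the fields above, made with the device's private key
};

// Toy FNV-1a style mix; NOT cryptographically secure, for illustration only.
uint64_t toyHash(const std::vector<uint8_t>& frame, uint64_t prevChain)
{
    uint64_t h = 1469598103934665603ull ^ prevChain;
    for (uint8_t b : frame) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// Verification: any deleted or reordered frame breaks either the sequence
// numbers or the hash chain.
bool verify(const std::vector<FrameRecord>& records,
            const std::vector<std::vector<uint8_t>>& frames)
{
    uint64_t prevChain = 0;
    for (size_t i = 0; i < records.size(); ++i) {
        if (records[i].sequence != i)                         // gap => frames deleted
            return false;
        if (records[i].chain != toyHash(frames[i], prevChain))
            return false;                                     // content or order tampered
        prevChain = records[i].chain;
    }
    return true;
}
```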
This can't be done in general. However, some approaches may be possible:
- The video format used may support per-frame metadata that stores an index or time index and is not touched during editing.
- The image sensor itself may be configured to write some metadata into a specific region of the image.
- You may have some external reference that was imaged by the camera and can help identify missing frames, such as:
  - a precise clock
  - a fast blinking indicator
  - some uniform motion

Understanding paintGL() calls [closed]

I know that paintGL() is called at the same frequency as the screen's refresh rate (say, 60 times per second). But if no pixel is displayed on the screen (if another window hides the OpenGL one), the calls to paintGL() are no longer restrained and it runs much more often, which maxes out the CPU and is annoying.
So, is there a way to restrain it?
I'm using macOS 10.9 and Qt Creator.
I don't know much about vsync. The fact is my software uses 30% of the CPU when it's in the foreground, and when it's hidden it goes up to 95%.
If you haven't enabled vsync, frames are swapped as often as possible (unless you have added an artificial pause). If you are pushing a heavy load onto the graphics card, your program is likely GPU-limited (the CPU has nothing to do and sits idle waiting for the GPU to finish drawing).
When your program is invisible, drawing costs close to nothing because no one sees the results anyway (an optimisation performed internally by the graphics driver), so the render loop spins as fast as the CPU allows.
So the answer to your question is: enable vsync. It locks the buffer-swap interval to the monitor's refresh rate, so your frame rate will never rise above the refresh rate (in fact, it will be locked to 60/30/20/etc. if your monitor runs at 60 Hz). This is a very useful technique that, among other things, eliminates screen tearing.
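On the Qt side, a minimal way to request vsync (assuming Qt 5.4 or later) is to set a default surface format with a swap interval of 1 before creating any GL widgets; on older Qt 4 code using QGLWidget, the equivalent is QGLFormat::setSwapInterval(1).

```cpp
#include <QApplication>
#include <QSurfaceFormat>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    // Request a swap interval of 1: buffer swaps are synchronised with the
    // monitor's refresh, so paintGL() is throttled to roughly the refresh rate.
    QSurfaceFormat format = QSurfaceFormat::defaultFormat();
    format.setSwapInterval(1);
    QSurfaceFormat::setDefaultFormat(format);

    // ... create and show your QOpenGLWidget-based window here ...

    return app.exec();
}
```

Independently of vsync, if you drive repaints from your own QTimer it is also worth pausing or slowing that timer while the window is hidden, so the hidden case stops burning CPU entirely.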