I'm writing a kind of "Remote Desktop" program and I got stuck with a few points.
On the server side I use QPixmap::grabWindow to capture a screenshot, write it to a QByteArray, and send it to the client via QTcpSocket.
The resulting data is too big, and as you can imagine the application is time critical. Is there a way to optimize that?
(In addition to Michael's more detailed answer:) For compression you can use qCompress / qUncompress (which are backed by the zlib that ships with Qt): http://qt-project.org/doc/qt-4.8/qbytearray.html#qUncompress
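A minimal sketch of the sending side under that approach, assuming the pixmap has already been grabbed (the BMP intermediate format, compression level, and length-prefix framing are arbitrary choices, not something from the question):

#include <QBuffer>
#include <QByteArray>
#include <QDataStream>
#include <QPixmap>
#include <QTcpSocket>

// Hypothetical helper: serialize a pixmap, compress it, and write it to the socket.
void sendFrame(QTcpSocket &socket, const QPixmap &frame)
{
    // Encode the pixmap as an uncompressed in-memory BMP; qCompress then does the
    // actual compression (compressing an already-compressed PNG/JPEG gains little).
    QByteArray raw;
    QBuffer buffer(&raw);
    buffer.open(QIODevice::WriteOnly);
    frame.save(&buffer, "BMP");

    // qCompress wraps zlib; a low level (1-3) keeps CPU cost down for real-time use.
    QByteArray compressed = qCompress(raw, 3);

    // Prefix with the payload size so the client knows how much to read, then
    // qUncompress() and QPixmap::loadFromData() on the other end.
    QDataStream out(&socket);
    out << quint32(compressed.size());
    out.writeRawData(compressed.constData(), compressed.size());
}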
Use deltas. The basic idea is this: imagine a grid overlaying the window image that divides it into squares of roughly 16x16 px. Compare each square with the corresponding one in the previous frame that was sent to the client. If so much as one pixel has changed, send the square's new content to the client.
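A rough sketch of that tile comparison, assuming both frames are already available as QImage (the tile size and the send callback are placeholders):

#include <QImage>
#include <functional>

// Hypothetical: walk a 16x16 grid and hand only the changed tiles to a send callback.
void sendChangedTiles(const QImage &previous, const QImage &current,
                      const std::function<void(int x, int y, const QImage &tile)> &send)
{
    const int tile = 16;
    for (int y = 0; y < current.height(); y += tile) {
        for (int x = 0; x < current.width(); x += tile) {
            // copy() zero-fills areas beyond the image, which is fine for comparison.
            QImage newTile = current.copy(x, y, tile, tile);
            QImage oldTile = previous.copy(x, y, tile, tile);
            if (newTile != oldTile)        // any differing pixel -> resend this tile
                send(x, y, newTile);
        }
    }
}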
Try compressing the image using some form of quick compression. You could use zlib for example, but keep the compression level at 3 or below. Or you could compress the entire data stream as it is being sent via TCP (this is tricky - you have to be careful to flush buffers and such.)
Adding to Michael's answer (a small Qt sketch of the first two points follows this list):
Reduce resolution
Reduce color depth
Reduce frame rate
Use a screencast codec / decoder
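For the first two points, plain Qt calls are enough; a small sketch (the scale factor and target format are arbitrary picks):

#include <QImage>
#include <QPixmap>

// Hypothetical pre-processing step applied to each grabbed frame before sending.
QImage shrinkFrame(const QPixmap &frame)
{
    // Reduce resolution: scale to half size (fast, no smoothing).
    QPixmap scaled = frame.scaled(frame.width() / 2, frame.height() / 2,
                                  Qt::IgnoreAspectRatio, Qt::FastTransformation);

    // Reduce color depth: 16-bit RGB565 halves the raw data versus 32-bit ARGB.
    return scaled.toImage().convertToFormat(QImage::Format_RGB16);
}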
I am looking for a standardized approach to stream JPG images over the network. A C++ programming interface that can easily be integrated into existing software would also be desirable.
I am working on a GPGPU program which processes digitized signals and compresses them to JPG images. The image size can be defined by the user; typically the images are 1024 x 2048 or 2048 x 4096 pixels. I have written my own protocol, which first sends a header (image size in bytes, width, height and corresponding channel) and then the JPG data itself via TCP. After that, the receiver sends a confirmation that all data has been received and displayed correctly, so that the next image can be sent. So far so good; unfortunately my approach reaches just 12 fps, which does not satisfy the project requirements.
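For reference, hand-rolled framing of that sort might look roughly like this; the field names, types, and layout are assumptions for illustration, not the poster's actual protocol (endianness handling omitted):

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical wire header for the custom protocol described above.
#pragma pack(push, 1)
struct FrameHeader {
    uint32_t payloadBytes;  // size of the JPG data that follows
    uint32_t width;         // image width in pixels
    uint32_t height;        // image height in pixels
    uint32_t channel;       // which signal channel the image belongs to
};
#pragma pack(pop)

// Serialize header + JPG data into one buffer to be written to the TCP socket.
std::vector<uint8_t> packFrame(const FrameHeader &hdr,
                               const uint8_t *jpgData, size_t jpgSize)
{
    std::vector<uint8_t> out(sizeof(FrameHeader) + jpgSize);
    std::memcpy(out.data(), &hdr, sizeof(FrameHeader));
    std::memcpy(out.data() + sizeof(FrameHeader), jpgData, jpgSize);
    return out;
}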
I am sure there are better approaches with higher frame rates. Which approach do streaming services like Netflix and Amazon take for UHD videos? Of course I googled a lot, but I couldn't find any satisfactory results.
Is there a standardized method to send JPG images over network with TCP/IP?
There are several internet protocols that are commonly used to transfer files over TCP. Perhaps the most commonly used protocol is HTTP. Another, older one is FTP.
Which approach do streaming services like Netflix and Amazon take for UHD videos?
Firstly, they don't use JPEG at all. They use a video compression codec (such as MPEG), which compresses the data not only spatially but also temporally (successive frames tend to hold similar data). An example of a protocol they might use to stream the data is DASH, which operates over HTTP.
I don't have a specific library in mind that already does these things well, but some items to keep in mind:
Most image / screen-share / video streaming applications use UDP, RTP, or RTSP for the video stream data, in a lossy fashion. They use TCP for control-flow data, like sending key commands or client/server communication about what to present, but the streamed data itself is not sent over TCP.
If you are streaming video, see this.
Sending individual images, you just need efficient methods to compress, serialize, and de-serialize, and you probably want to do so in a batch fashion instead of one at a time. Batch 10 JPEGs together, compress them, serialize them, and send.
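A bare-bones sketch of that batching idea using zlib directly (the length-prefix layout and compression level are arbitrary; gains will be modest because JPEG data is already compressed):

#include <cstdint>
#include <vector>
#include <zlib.h>

// Hypothetical: concatenate a batch of JPEG buffers with length prefixes,
// then compress the whole batch in one go before sending.
std::vector<uint8_t> packBatch(const std::vector<std::vector<uint8_t>> &jpegs)
{
    // Serialize: [count][size0][data0][size1][data1]...
    std::vector<uint8_t> plain;
    uint32_t count = static_cast<uint32_t>(jpegs.size());
    const uint8_t *p = reinterpret_cast<const uint8_t *>(&count);
    plain.insert(plain.end(), p, p + sizeof(count));
    for (const auto &j : jpegs) {
        uint32_t size = static_cast<uint32_t>(j.size());
        const uint8_t *s = reinterpret_cast<const uint8_t *>(&size);
        plain.insert(plain.end(), s, s + sizeof(size));
        plain.insert(plain.end(), j.begin(), j.end());
    }

    // Compress the batch at a low level to keep latency down.
    uLongf destLen = compressBound(plain.size());
    std::vector<uint8_t> packed(destLen);
    compress2(packed.data(), &destLen, plain.data(), plain.size(), 1);
    packed.resize(destLen);
    return packed;
}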
You mentioned fps, so it sounds like you are trying to stream video rather than just copy over images quickly. I'm not entirely sure what you are trying to do. Can you elaborate on the digitized signals and why they have to be in JPEG? Could they not be in some other format and converted to JPEG at the receiving end?
This is not a direct answer to your question, but a suggestion that you will probably need to change how you are sending your movie.
Here's a calculation: Suppose you can get 1 Gb/s throughput out of your network. If each 2048x4096 file compresses to about 10 MB (80 Mb), then:
1,000,000,000 b/s ÷ 80,000,000 b/frame = 12.5 frames/s
So, you can send about 12 frames a second. This means if you have a continuous stream of JPGs you want to display, if you want faster frame rates, you need a faster network.
If your stream is a fixed-length movie, then you could buffer the data and start playback once enough data is buffered to sustain the desired frame rate, rather than waiting for the entire movie to download. If you want playback at 24 frames a second while downloading at 12, playback runs twice as fast as your download, so you need to buffer at least half of the movie before you begin playback; the remaining half then downloads over exactly the time it takes to play the whole movie.
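Both numbers fall out of two lines of arithmetic; a throwaway check using the example figures above (the clamp just covers the case where the link keeps up with playback):

#include <cstdio>

int main()
{
    const double linkBitsPerSec = 1e9;   // 1 Gb/s network
    const double frameBits      = 80e6;  // ~10 MB per compressed frame
    const double playbackFps    = 24.0;

    const double downloadFps = linkBitsPerSec / frameBits;   // ~12.5 fps

    // Fraction of the movie to buffer so the last frame arrives before it is played:
    // 1 - (download rate / playback rate).
    double bufferFraction = 1.0 - downloadFps / playbackFps;
    if (bufferFraction < 0.0)
        bufferFraction = 0.0;

    std::printf("download rate: %.1f fps, buffer at least %.0f%% of the movie\n",
                downloadFps, bufferFraction * 100.0);
    return 0;
}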
As stated in another answer, you should use a streaming codec so that you can also take advantage of compression between successive frames, rather than just compressing the current frame alone.
To sum up, your playback rate will be limited by the number of frames you can transfer per second if the stream never ends (e.g., a live stream).
If you have a fixed length movie, buffering can be used to hide the throughput and latency bottlenecks.
Is it possible to play a simple, short video on smart eyeglasses?
I know that it can play audio and show images one after the other. It should not be too much work from there, but I am just guessing.
There is no direct support for video playback, but as Ahmet says, you can approach this with showing Bitmaps as fast as possible.
The playback speed depends on the connection, so it is recommended to use the high-performance (WiFi) mode to achieve the highest frame rate (see setPowerMode).
Also take a look at showBitmapWithCallback, which gives you a callback right after the previous frame gets rendered, so you can show the next one.
Yes, it is possible. You can grab frames of the video and display them one after another, as bitmaps.
That should give you a video playback view on the SmartEyeglass.
I want to grab the video output of my Raspberry Pi and pass it to a kind of Adalight ambient lighting system.
XBMC's player for the Pi, omxplayer, uses the OpenMAX API for decoding and other functions.
Looking into the code gives the following:
m_omx_tunnel_sched.Initialize(&m_omx_sched, m_omx_sched.GetOutputPort(), &m_omx_render, m_omx_render.GetInputPort());
As far as I understand, this sets up a tunnel (pipeline) between the video scheduler and the renderer: [S]-->[R].
Now my idea is to write a grabber component and plug it into the pipeline: [S]-->[G]-->[R]. The grabber would extract the pixels from the frame buffer and pass them to a daemon which drives the LEDs.
Now I am about to dig into OpenMAX API which seems to be pretty weird. Where should I start? Is it a feasible approach?
Best Regards
If you want the decoded data, then just don't send it to the renderer. Instead of rendering, take the data from the output port of the video_decode OpenMAX IL component and do whatever you want with it. I suppose you'll also need to set the correct output pixel format on that port, so the conversion is done by the GPU (YUV or RGB565 are available).
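Setting that format is a standard OMX_GetParameter / OMX_SetParameter round trip on the port definition; a rough, untested sketch assuming you already have a handle to the video_decode component (the output port index 131, the RGB565 choice, and the plain Khronos includes are assumptions; VideoCore-specific build defines are omitted):

#include <cstring>
#include <IL/OMX_Core.h>
#include <IL/OMX_Component.h>
#include <IL/OMX_IVCommon.h>

// Hypothetical: ask video_decode to emit RGB565 on its output port.
OMX_ERRORTYPE setDecoderOutputFormat(OMX_HANDLETYPE decoder)
{
    OMX_PARAM_PORTDEFINITIONTYPE portdef;
    std::memset(&portdef, 0, sizeof(portdef));
    portdef.nSize = sizeof(portdef);
    portdef.nVersion.s.nVersionMajor = OMX_VERSION_MAJOR;
    portdef.nVersion.s.nVersionMinor = OMX_VERSION_MINOR;
    portdef.nPortIndex = 131;   // assumed video_decode output port on the Raspberry Pi

    OMX_ERRORTYPE err = OMX_GetParameter(decoder, OMX_IndexParamPortDefinition, &portdef);
    if (err != OMX_ErrorNone)
        return err;

    portdef.format.video.eColorFormat = OMX_COLOR_Format16bitRGB565;
    return OMX_SetParameter(decoder, OMX_IndexParamPortDefinition, &portdef);
}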
First, I think you should attach a buffer to the output of the camera component, do everything you want with that frame on the CPU, and then send the frame through a buffer attached to the input port of the renderer. It's not going to be a trivial task, since there is little documentation about OpenMAX on the Raspberry Pi.
Best place to start:
https://jan.newmarch.name/RPi/
Best place to have on hand:
http://home.nouwen.name/RaspberryPi/documentation/ilcomponents/index.html
Next best place: source code distributed across the internet.
Good luck.
I'm trying to write an application that records and saves the screen in C++ on the Windows platform. I'm not sure where to start with this. I assume I need some sort of API (FFmpeg, maybe OpenGL?). Could someone point me in the right direction?
You could start by looking at the Windows Remote Desktop Protocol; maybe some programming libraries are provided for it.
I know of a product that intercepts calls into the Windows GDI DLL and uses that to record the screen drawing activity.
A far simpler approach would be to take screenshots as often as possible and minimize redundant data (parts of the screen that didn't change between frames).
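Taking those screenshots with plain GDI is only a handful of calls; a minimal sketch that grabs the primary screen into a 32-bit pixel buffer (error handling omitted):

#include <windows.h>
#include <vector>

// Hypothetical: capture the primary screen into a top-down 32-bit BGRA buffer.
std::vector<BYTE> captureScreen(int &width, int &height)
{
    HDC screenDC = GetDC(nullptr);
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC memDC   = CreateCompatibleDC(screenDC);
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old = SelectObject(memDC, bmp);

    // Copy the screen contents into the off-screen bitmap.
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);   // deselect before calling GetDIBits

    // Pull the raw pixels out as a top-down 32-bit DIB.
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return pixels;
}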
If the desired output of your app is a video file (like MPEG), you are probably better off just grabbing frames and feeding them into a video encoder. I don't know how fast the encoders are these days. FFmpeg would be a good place to start.
If the encoder turns out not fast enough, you can try storing the frames and encoding the video file afterwards. Consecutive frames should have many matching pixels, so you could use that to reduce the amount of data stored.
I'm starting to implement some sort of remote screencasting (VNC-like) client/server software in C++ (Windows platform), which just transmits the screen updates (image tiles) over the network.
The screen is divided into blocks, and each tile is compressed into JPEG (probably with libjpeg-turbo) before being sent over the network. So my question is: would it be worthwhile to implement another, lossless compression layer for these already-JPEG-compressed tiles, e.g. using zlib?
I have a feeling that zlib won't give any significant improvement in terms of bandwidth, as the JPEG files will already be compressed. I'd like to avoid investing further time and money in an additional compression layer just for testing purposes, so I'd like to hear your suggestions.
P.S.: As a side question, are there any better alternatives to encoding tiles as JPEG? Maybe lossless compression right away? Is the above-mentioned technique (dividing the screen into tiles => selecting updated tiles => compressing them into JPEG => sending over the network) a good way to implement such software?
Any kind of input would be much appreciated!
JPEG files are already compressed nearly as small as they can be. You might save a few bytes on the header, but that may be outweighed by the overhead of the additional compression.
If you need a quick check just to prove the point, it should be easy to zip up a collection of sample JPEG files and see what the difference is.
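That check takes only a few lines with zlib itself; a throwaway sketch (the file path is a placeholder):

#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>
#include <zlib.h>

int main()
{
    // Read a sample JPEG tile (placeholder path).
    std::ifstream in("tile.jpg", std::ios::binary);
    std::vector<unsigned char> jpeg((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());

    // Deflate it at the default level and compare sizes.
    uLongf outLen = compressBound(jpeg.size());
    std::vector<unsigned char> out(outLen);
    compress2(out.data(), &outLen, jpeg.data(), jpeg.size(), Z_DEFAULT_COMPRESSION);

    std::printf("original: %zu bytes, deflated: %lu bytes (%.1f%%)\n",
                jpeg.size(), outLen, 100.0 * outLen / jpeg.size());
    return 0;
}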
Absolutely unnecessary.
A better option is to use both methods: count the colors in each block and use zlib/RLE/etc. for blocks with few colors and JPEG for blocks with many. That's the very basic approach. I recommend you take a look at VNC's Remote Framebuffer (RFB) protocol.
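A trivial sketch of that per-tile decision (the 16-color threshold is an arbitrary pick):

#include <cstdint>
#include <unordered_set>
#include <vector>

enum class TileEncoder { Lossless, Jpeg };

// Hypothetical: pick an encoder for a tile of 32-bit pixels based on its palette size.
TileEncoder chooseEncoder(const std::vector<uint32_t> &tilePixels)
{
    std::unordered_set<uint32_t> palette;
    for (uint32_t px : tilePixels) {
        palette.insert(px);
        if (palette.size() > 16)        // many colors: photo-like content, JPEG wins
            return TileEncoder::Jpeg;
    }
    return TileEncoder::Lossless;       // few colors: zlib/RLE compresses well
}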