Controlling the individual pixels of a projector - C++

I need to control the individual pixels of a projector (an InFocus IN3104) whose native resolution is 1024x768. I would like to know which subset of functions in C, or which API, would let me do this, either via:
Functions that control the individual pixels of the adapter (not the pixels of a window).
A pixel-perfect, 1:1 map from an image file (1024x768) to the adapter set at the native resolution of the projector.
In a related question ([How can I edit individual pixels in a window?][1]) the answerer Caladain states "Things have come a bit from the old days of direct memory manipulation.". I feel I need to go back to that to achieve my goal.
I don't know enough about the graphics pipeline to know which API or software tool to use, and I'm overwhelmed by the number of technologies when I search this topic. I program in R, which interfaces easily to C, but I would welcome suggestions of subsets of functions in OpenGL, C++, or any other technology.
Or even a full-blown rendering application that will map the image without applying a transformation.
For example, even MS Paint has View > Bitmap, but some transformation gets applied and I don't get pixel-perfect rendering. This projector has a DisplayLink digital input, and I've also tried tweaking the timing parameters when using the VESA inputs; I don't think the transformation happens in the projector. In any case, MS Paint would not be flexible enough for me.
Platform: Linux or Windows.

I don't see a reason why a full-screen window, e.g. using SDL, wouldn't work. Normal bitmapped graphics is always 1:1; there shouldn't be any weird scaling going on behind your back for a full-screen window.
Since SDL is portable, you should be able to run the same code in Windows or Linux (or any other supported platform).

The usual approach to this problem on current systems is:
Set graphics card to desired resolution
Create borderless full screen window
Draw whatever you want
There's really not much to gain from "low-level access", although it would certainly be possible.
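For concreteness, here is a minimal sketch of the approach both answers describe, using SDL2 (the SDL version is my assumption; the answers just say SDL). It opens a real full-screen window at the projector's native 1024x768 and writes the window surface pixel by pixel, so each array element should map 1:1 onto a device pixel, assuming the surface really is 32 bits per pixel and nothing downstream rescales the output.

```cpp
// Minimal SDL2 sketch: full-screen 1024x768 window with direct pixel writes.
// Assumes a 32-bit surface; build with: g++ main.cpp $(sdl2-config --cflags --libs)
#include <SDL.h>

int main(int, char **)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    SDL_Window *win = SDL_CreateWindow("pixels",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        1024, 768, SDL_WINDOW_FULLSCREEN);   // real mode switch, not "desktop" fullscreen

    SDL_Surface *surf = SDL_GetWindowSurface(win);
    SDL_LockSurface(surf);
    Uint32 *px = static_cast<Uint32 *>(surf->pixels);
    int pitch = surf->pitch / 4;             // pixels per row, including any padding

    // Write every pixel of the surface individually.
    for (int y = 0; y < surf->h; ++y)
        for (int x = 0; x < surf->w; ++x)
            px[y * pitch + x] = SDL_MapRGB(surf->format, x & 0xFF, y & 0xFF, 0);

    SDL_UnlockSurface(surf);
    SDL_UpdateWindowSurface(win);

    // Keep the window up for a few seconds while pumping events.
    for (int i = 0; i < 50; ++i) { SDL_PumpEvents(); SDL_Delay(100); }

    SDL_Quit();
    return 0;
}
```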

Related

Needed: Cross platform C++ 2D graphics library for fast audio waveform presentation

I am trying to build a multimedia editor. It includes audio and relatively simple 2D graphics. I am using C++. I would like as much of it to be cross platform as possible.
I wrote audio interface classes for android and windows using a common API so I have that under control for now but I need a 2D graphics package and possibly a cross platform GUI as well.
The big challenge is rendering the timeline. It needs to generate many rows of waveforms and intersperse them with characters and other shapes, some of which may include blends and transparencies. Or rather, the big challenge has been animating the timeline, since I often need to update it in real time. I have this working nicely using a lot of caching and shifting around of the pixels of off-screen bitmaps. If I have 50 lines on the screen and the screen is 1000 pixels wide, that translates to over 50,000 line-draw operations per frame; actually, I use multi-segment lines that end up drawing three times as many segments. To generate each line of the audio waveform it needs to look at a few hundred samples of audio and compute the max and min values (sketched below), or maybe do an FFT to create a line of differently colored pixels if I want to offer that to the user some day. Various forms of caching let me do this with reasonable latency.
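To make the rendering load concrete, here is a hedged sketch of that per-column min/max reduction. The names renderWaveformColumns and drawVLine are illustrative only, not part of any real toolkit.

```cpp
// Per-column min/max waveform reduction: for each horizontal pixel, scan the
// samples that fall under that column and emit one vertical line from min to max.
#include <vector>
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cmath>

// Stand-in for whatever vertical-line primitive the graphics toolkit provides.
static void drawVLine(int x, int y0, int y1)
{
    std::printf("column %d: rows %d..%d\n", x, y0, y1);
}

static void renderWaveformColumns(const std::vector<float> &samples,
                                  int widthPx, int heightPx, double samplesPerPixel)
{
    for (int x = 0; x < widthPx; ++x) {
        std::size_t begin = static_cast<std::size_t>(x * samplesPerPixel);
        std::size_t end = std::min(samples.size(),
                                   static_cast<std::size_t>((x + 1) * samplesPerPixel));
        if (begin >= end) continue;

        float lo = samples[begin], hi = samples[begin];
        for (std::size_t i = begin + 1; i < end; ++i) {
            lo = std::min(lo, samples[i]);
            hi = std::max(hi, samples[i]);
        }
        // Map the [-1, 1] sample range onto pixel rows (row 0 at the top).
        int yTop = static_cast<int>((1.0f - hi) * 0.5f * (heightPx - 1));
        int yBot = static_cast<int>((1.0f - lo) * 0.5f * (heightPx - 1));
        drawVLine(x, yTop, yBot);
    }
}

int main()
{
    // Toy input: one second of a 440 Hz sine at 44.1 kHz, drawn into 1000x200 px.
    std::vector<float> samples(44100);
    for (std::size_t i = 0; i < samples.size(); ++i)
        samples[i] = std::sin(2.0 * 3.14159265358979 * 440.0 * i / 44100.0);
    renderWaveformColumns(samples, 1000, 200, samples.size() / 1000.0);
    return 0;
}
```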
The animation side of things will include everything from moving polylines and polygons around in 2D to importing images and playing back multiple moving images (video) at different arbitrary frame rates. I don't think 3D is very useful for now anyway.
At the moment I am using a crazy mix of GDI and GDI+ on Windows, running it all in a Win32 "thing". This is not great, as I cannot invert regions in off-screen bitmaps, and I cannot draw individual pixels quickly enough to, for instance, show a spectrogram in real time. I think these APIs were written in the 90s, so there must be something newer that would give me better performance and cross-platform capability. I have been pulling out my remaining hair trying to figure out what to use.
I found another library on Android that lets me set pixels, and the performance actually seems a lot better, but it does not support drawing text, so I am hoping there is something else I could use for that. On Android the plan is to generate the bitmap and then blit it into an interface built with the native Android GUI. These solutions do not seem great, though the vast majority of the rest of the code is standard C++ and can be ported without issue, with these horrors cleanly wrapped.
I have seen a few potential candidates: OpenGL and Vulkan seem to do 2D graphics as well as 3D, but perhaps they are much more complex than what I need.
For the GUI I looked at Qt but gave up on it (it seems to need half of my hard drive and has an incomprehensible licensing model). I recently started looking at ImGui. They say it redraws on every frame; I don't know how that will play with my existing rendering system, or whether it would drain a phone's battery. A while ago I was able to get Visual Studio to create a cross-platform app that would run on Android, but for some reason I ditched that; perhaps I should revisit it.
For the timeline I need to draw a waveform. This could be done by drawing a lot of lines (50-150k per frame); they can mostly be vertical, they do not need fractional pixel widths or anti-aliasing, and their end points can be specified with plain integers. I also need to add some other lines, polygons, and text that does need to be anti-aliased, and I may need to set a lot of pixels directly. Blends and transparencies would be nice but not essential. I also need to copy square chunks of bitmap around, and I need sprites for things like the cursor; I am currently doing this by copying fragments of the bitmap on and off screen. It would also be very nice to be able to select rectangular regions of off-screen bitmaps to invert, for showing selections. Finally, I need to assemble all of this off screen in a double- or triple-buffer configuration so I can reuse chunks of one bitmap to make the next one and present it to the user as a real-time animation. (All of this works with my GDI/GDI+ wrapper, though I have to work around the inversion problem.)
For the animation part I need to draw similar graphics primitives, though it would also be nice to draw characters at arbitrary angles and scales. As for video, if I can extract the images I guess I could blit them to the screen as needed; maybe I would need yet another package to composite them into the other parts of the frame. Further, it would be nice to be able to write the animation out in a higher-quality format, in non-real time, to make a video file of some kind. It would be nice if I did not have to wrap yet another framework to make this happen, though I can deal with it if I need to.
For the GUI, it does not have to be all that fancy. Ideally I would like 2 or 3 floating and dockable windows on the PC and a few screens on a phone. I will have to make slightly different UIs for both, but the timeline bitmap and the media-window bitmaps should be reusable for the most part. I mostly just need standard widgets.
My needs are somewhere in-between that of a game and that of a regular boring old forms app except for the need to animate the waveform.
Does anyone have any suggestions and perhaps know these systems well enough to know if they have a good chance to do what I need?
I fear I would have to spend weeks learning each one just to see if they give me the capabilities I need.
Is ImGui likely to eat the phone's battery just to make the cursor blink?
Any tips would be most welcome.

Change display mode under Linux

I'm trying to grasp programming graphics with Xlib and OpenGL. I can create windows etc., but I'm stuck at changing display modes.
I can list available video modes with Xrandr functions (XRRSizes, XRRRates, XRRGetScreenInfo, XRRConfigSizes), check which one is currently set (XRRConfigCurrentConfiguration), and change the resolution (XRRSetScreenConfig).
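For reference, a minimal sketch of that resolution-switching path, covering only what the question already says works (sizes, not bit depth). Mode index 0 is just an example and error checking is omitted.

```cpp
// Enumerate Xrandr screen sizes and switch modes; build with: g++ mode.cpp -lX11 -lXrandr
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <cstdio>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    Window root = DefaultRootWindow(dpy);
    XRRScreenConfiguration *conf = XRRGetScreenInfo(dpy, root);

    int nsizes = 0;
    XRRScreenSize *sizes = XRRConfigSizes(conf, &nsizes);
    for (int i = 0; i < nsizes; ++i)
        std::printf("mode %d: %dx%d\n", i, sizes[i].width, sizes[i].height);

    Rotation rotation;
    SizeID current = XRRConfigCurrentConfiguration(conf, &rotation);

    // Switch to mode 0 (whatever the server lists first), keeping the current rotation.
    XRRSetScreenConfig(dpy, conf, root, 0, rotation, CurrentTime);

    // ... later, restore the original mode:
    XRRSetScreenConfig(dpy, conf, root, current, rotation, CurrentTime);

    XRRFreeScreenConfigInfo(conf);
    XCloseDisplay(dpy);
    return 0;
}
```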
I can list available bit depths (a.k.a. color depths, that is, bits per pixel) with XListDepths.
What I don't know is how to change the bit depth for a given screen.
I couldn't find any suitable function for setting bit depths along with screen sizes in Xrandr. It seems to be totally ignorant about bit depths, which is really weird. I couldn't find any suitable function in the Xlib documentation either.
So my question is:
How do I change the resolution and bit depth programmatically under Linux?
Are there any functions for this in Xlib or somewhere else?
I know that there are full-blown libraries for graphics, such as SDL, but I don't want to use them as a dependency just for changing display modes, since I'm attempting to write a minimal graphics library myself, for learning purposes.
Edit:
What I want to achieve doesn't have to be done with Xlib or X specifically, but it has to cooperate with X gracefully. E.g. I don't want to get rid of X altogether; it is still useful for displaying graphics in windowed mode. But I also need some way of switching to a full-screen mode where I have full control over the video mode: resolution, color depth, refresh rate, and direct access to the actual pixels in the framebuffer, not some "emulation". I assume there is some way to do it, since there are video games that can do it on Linux.

Vector-based fonts on OpenGL

I started working at a company that uses a 2D OpenGL implementation to show our system's data (the system runs on Windows). The whole thing was built in C++ (using C++Builder 2007). The problem is that all the text printed there is pixelated when you zoom in, which I think happens because the text is a bitmap:
From what I know, they use the same font files as Windows does. I asked around about why this happens, and the answer I got is that the guy who implemented it (and who no longer works at the company) said fonts in OpenGL are hard, and this was the best he could do, or something like that.
My question is: is there any simple and effective way to make the text a vector as well (the same way those lines in the picture are), so that when I zoom the camera, which happens a lot, it doesn't pixelate? I have little knowledge of OpenGL, so if you have a guide and/or tutorial to point me in the right direction I'd be very thankful. Basically any material would be great.
Most OpenGL text rendering libraries come down to the same thing: creating bitmaps for the fonts. This means you are going to have problems with scaling and aliasing unless you resort to some hacks.
One of the popular hacks is Valve's approach: Chris Green, 2007, "Improved Alpha-Tested Magnification for Vector Textures and Special Effects". You use a signed-distance-field algorithm to generate your font bitmap, which then lets you smooth the text outlines at scale during rendering. Wikidot has a C++ implementation of distance-field generation (a brute-force sketch of the idea is shown below).
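For illustration, here is a hedged, brute-force sketch of the distance-field generation step. The function name and parameters are mine, and real implementations (such as the Wikidot one) use a much faster sweep algorithm; a production pipeline would also downsample the result before uploading it as a texture.

```cpp
// Brute-force signed distance field from a 1-bit glyph bitmap.
// Each output texel stores the signed distance to the glyph outline,
// remapped from [-spread, +spread] to 0..255 (128 = on the outline).
#include <vector>
#include <algorithm>
#include <cmath>
#include <cstdint>

std::vector<std::uint8_t> makeDistanceField(const std::vector<bool> &glyph,
                                            int w, int h, float spread)
{
    std::vector<std::uint8_t> out(static_cast<std::size_t>(w) * h);
    int r = static_cast<int>(spread);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            bool inside = glyph[static_cast<std::size_t>(y) * w + x];
            float best = spread;
            // Search the neighbourhood for the nearest texel of the opposite state.
            for (int ny = std::max(0, y - r); ny < std::min(h, y + r + 1); ++ny)
                for (int nx = std::max(0, x - r); nx < std::min(w, x + r + 1); ++nx)
                    if (glyph[static_cast<std::size_t>(ny) * w + nx] != inside)
                        best = std::min(best, std::hypot(float(nx - x), float(ny - y)));

            float signedDist = inside ? best : -best;          // + inside, - outside
            float norm = 0.5f + 0.5f * signedDist / spread;    // remap to 0..1
            out[static_cast<std::size_t>(y) * w + x] =
                static_cast<std::uint8_t>(std::lround(std::clamp(norm, 0.0f, 1.0f) * 255.0f));
        }
    }
    return out;
}
```

At render time the fragment shader thresholds (or smoothsteps) this field, which is why the outlines stay crisp under magnification.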
If you stick to NVIDIA-specific hardware, you can try NVIDIA's path rendering extension, which allows you to render vector graphics directly on the GPU. Remember, it is an NVIDIA-only thing.
But in general, signed distance field based approach is the smoothest and easiest to implement.
BTW, freetype-gl uses Valve's approach and also the modern pipeline.
You can try freetype-gl; it's a library for font rendering in OpenGL.
The issue with using fonts in OpenGL is that they are handled inconsistently across platforms and have minimal support. If you're willing to use a helper library alongside OpenGL (SDL comes to mind), this behaviour will likely be wrapped, meaning that you merely need to provide a suitable font file for it to use.
You may try out FTOGL4, fonts for OpenGL 4.

Is it possible to control pixels on the screen just from plain C or plain C++, without any OpenGL/DirectX hassle?

Well, I want to know.. maybe others too.
Is it possible to control each pixel separately on a screen by programming, especially C or C++?
Do you need special control over the drivers for the current screen? Are there operating systems which allow you to change pixels (for example draw a message/overlay on top of everything)?
Or does Windows perhaps support this in its WinAPI?
Edit:
I am asking this question because I want my computer to warn me when I'm gaming and my processor gets too hot. I mainly use Windows, but I have a dual-boot Ubuntu distro.
The lower you go, the more hassle you'll run into.
If you want raw pixel manipulation you might check out http://www.libsdl.org/ which helps you mitigate the hassle of creating surfaces/windows and that kind of stuff.
Linux has a few means to get you even lower if you want (i.e. without "windows" or X windows or anything of the sort, just the raw screen); look into the Linux framebuffer if you're interested in that.
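As a concrete illustration of that framebuffer route, here is a minimal sketch. It assumes a 32-bpp framebuffer exposed at /dev/fb0, is meant to be run from a text console (not under X), and omits error checking.

```cpp
// Paint one pixel directly into the Linux framebuffer (assumes 32 bpp, /dev/fb0).
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <cstddef>
#include <cstdint>

int main()
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    fb_var_screeninfo vinfo;
    fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   // resolution, bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   // bytes per scanline

    std::size_t size = static_cast<std::size_t>(finfo.line_length) * vinfo.yres;
    auto *fb = static_cast<std::uint8_t *>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    // Put one grey pixel in the middle of the screen.
    std::uint32_t *row = reinterpret_cast<std::uint32_t *>(
        fb + (vinfo.yres / 2) * finfo.line_length);
    row[vinfo.xres / 2] = 0x00808080;

    munmap(fb, size);
    close(fd);
    return 0;
}
```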
Delving even lower (such as doing things with your own OS), the BIOS will let you switch into certain video modes; this is what OS installers tend to use (at least they used to, some of the fancier ones don't anymore). This isn't the fastest way of doing things, but it can get you to the point of showing pixels in a few assembly instructions.
And of course if you wanted to do your own OS and take advantage of the video card (bypass the BIOS), you're then talking about writing video drivers and such, which is obviously a substantial amount of work :)
Regarding overlaying messages on top of the screen and that sort of thing: Windows does support it, so I'm sure you can do it with the WinAPI, although there are likely libraries that would make it easier. I do know you don't need to delve too deep to do that sort of thing.
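As a hedged sketch of that kind of WinAPI overlay (the message text, size, and position are placeholders): a topmost, layered, click-through window whose black background is keyed out, so only the warning text floats above other windows. Note that exclusive full-screen games may still cover it, as the next answer points out.

```cpp
// Topmost, click-through overlay window using WS_EX_LAYERED with a color key.
#include <windows.h>

LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
{
    if (m == WM_PAINT) {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(h, &ps);
        SetBkMode(dc, TRANSPARENT);
        SetTextColor(dc, RGB(255, 0, 0));
        TextOutA(dc, 10, 10, "CPU temperature warning!", 24);
        EndPaint(h, &ps);
        return 0;
    }
    return DefWindowProc(h, m, w, l);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = inst;
    wc.lpszClassName = "overlay";
    wc.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
    RegisterClassA(&wc);

    HWND h = CreateWindowExA(
        WS_EX_LAYERED | WS_EX_TOPMOST | WS_EX_TRANSPARENT | WS_EX_TOOLWINDOW,
        "overlay", "", WS_POPUP, 0, 0, 300, 40, nullptr, nullptr, inst, nullptr);

    // Key out black so only the text shows; WS_EX_TRANSPARENT lets clicks pass through.
    SetLayeredWindowAttributes(h, RGB(0, 0, 0), 0, LWA_COLORKEY);
    ShowWindow(h, SW_SHOW);

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```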
Let's look at each bit at a time:
Is it possible to control each pixel separately on a screen by programming, especially C or C++?
Possibly. It really depends on the graphics architecture, and in many modern systems, the actual screen surface (that is "the bunch of pixels appearing on the screen") is not directly under software control - at least not from "usermode" (that is, from an application that you or I can write - you need to write driver code, and you need to co-operate sufficiently with the existing graphics driver).
It is generally accepted that drawing the data into an off-screen buffer and using a BitBlt [bit block transfer] function to copy the content onto the screen is the preferred way to do this sort of thing.
So, in reality, you probably can't manipulate each pixel ON the screen - but you may be able to appear like you do.
Do you need special control over the drivers for the current screen?
Assuming you could get direct access to the screen memory, your code will certainly have to cooperate with the driver; otherwise, who's to say that what you want to appear on the screen doesn't get overwritten by something else [e.g. you want full-screen access, and the clock updater draws the time on top of what you drew once a minute, etc.].
You may be able to set the driver into a mode where you have a "hole" that allows you to access the screen memory as a big "framebuffer". I don't think there's an easy way to do this in Windows. I don't remember one from back in 2003-2005 when I wrote graphics drivers for a living.
Are there operating systems which allow you to change pixels (for example draw a message/overlay on top of everything)?
It is absolutely possible to create an overlay layer in the hardware of modern graphics cards. That's generally how video playback works - the video is played into a piece of framebuffer memory that is overlaid on top of the other graphics. You need help from the driver, and this is definitely available in the Windows API, via DirectX as far as I remember.
Or does windows support this maybe in it's WinApi?
Probably, but to answer precisely, we need to understand better what you are looking to do.
Edit: In your particular use case, I would have thought that making a sound or ejecting the CD/DVD drive may be a more suitable option. It can be hard to overlay something on top of the graphics drawn by a game, because games often try to use as much as possible of the available graphics resources, and you will probably have a hard time finding a way that works even for the simplest use cases, never mind something that works for multiple categories of games using different drawing/engine/graphics libraries. I'm also not entirely sure it's anything to worry about too much, since modern CPUs are pretty tolerant of overheating: the CPU will just slow down, possibly grind to a halt, but it will not break, even if you take the heatsink off (no, I don't suggest you try this!).
Every platform supports efficient raw pixel block transfers (aka BitBlt()), so if you really want to go to framebuffer level you can allocate a bitmap, use pointers to set its contents directly, and then with one line of code efficiently flip that memory chunk into the video RAM buffer. Of course it is not as efficient as working with PCI framebuffers directly, but on the other hand this approach (BitBlt) was fast enough even in Win95 days to port Wolfenstein 3D to a Pentium CPU WITHOUT the use of WinG.
HOWEVER, care must be taken while creating this bitmap to match its format (i.e. RGB 16 bits, or 32 bits, etc.) with the actual mode the device is in; otherwise the graphics subsystem will do a lengthy recoding/dithering pass that will completely kill your speed.
So it depends on your goals: if you want a 3D game, your performance will suck with this approach. If you just want to render some shapes and don't need more than 10-15 fps, this will work without diving into any device-driver levels.
Here are a few tips for overlaying in Windows:
hdc = GetDC(0); // returns an HDC for the whole screen and is VERY fast
You can take the HDC for the screen and do a BitBlt(hdc, ..., SRCCOPY) to flip blocks of raster efficiently. There are also pre-defined Windows handles for the desktop, but I don't recall the exact mechanics; if you are on multiple monitors you can get an HDC for each desktop. Look at GetDesktopWindow, GetDC and the like...
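Putting the two tips together, here is a hedged sketch of the DIB-section-plus-BitBlt approach described above. The 32-bpp format is an assumption; as noted, it should match the actual display mode to avoid the slow conversion path, and anything drawn straight onto the screen DC will be repainted over by other windows sooner or later.

```cpp
// Write pixels into a DIB section, then flip the block onto the screen DC with BitBlt.
#include <windows.h>
#include <cstdint>

int main()
{
    const int W = 320, H = 240;

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = W;
    bmi.bmiHeader.biHeight = -H;          // negative height = top-down row order
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void *bits = nullptr;
    HDC screen = GetDC(nullptr);          // DC for the whole screen
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP dib = CreateDIBSection(mem, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    SelectObject(mem, dib);

    // Set the pixel contents directly through the pointer (0x00RRGGBB).
    auto *px = static_cast<std::uint32_t *>(bits);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            px[y * W + x] = ((x ^ y) & 1) ? 0x00FFFFFF : 0x00000000;

    // One call flips the whole block onto the top-left corner of the screen.
    BitBlt(screen, 0, 0, W, H, mem, 0, 0, SRCCOPY);

    DeleteObject(dib);
    DeleteDC(mem);
    ReleaseDC(nullptr, screen);
    return 0;
}
```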

Display 3D video on the web without plugins

Here is a possibly unanswerable question...
How do I create a website capable of displaying 3D images on a 3D capable display/monitor without using plugins?
Ignore bandwidth; it is not a problem here. I also wish to avoid the red/green (anaglyph) effect, as it has many problems. I figure I could simply display a 120 Hz video, but then how do I sync the left and right images with the screen's timing?
Any help would be appreciated however 'impossible' is never an answer.
Thanks
One solution could be to mimic the red/green 3D effect. You'd pass the left- and right-eye images through a filter and then display them on top of each other, though I'm not sure how, off the top of my head. If you could make the views transparent, that might work.
You wouldn't need to display anything at 120 Hz or have any synchronisation or plugins.
Google Street View uses this 3D mode.
There is no way a browser has access to the graphics adapter's specific driver libraries, hence it is not possible to make such a website, even with a plugin. Not to mention that most graphics adapters can't handle windowed 3D frames, except professional Quadro cards; every other 3D-capable card has to run at certain resolutions and in full screen.
First of all, I'm really not sure there's a good way to sync output with the screen refresh, because anything running in a browser is subject to the browser's compositing and rendering.
You may want to look into WebGL: it's essentially a subset of OpenGL intended to provide hardware-accelerated graphics in the browser. It's also supported by the beta or upcoming versions of all the major browsers. Unfortunately, without any syncing mechanism, I don't know of any way to support the polarization method of 3D.