The problem is simple enough: I have code that generates a pixel buffer. Now I need to present this pixel buffer directly instead of saving it as an image and analyzing it afterwards.
What would be the solution to:
Open window
Replace all pixels in this window with my RGB888 pixels
So far the suggestion has been: use OpenGL, create a vertex buffer for a rectangle covering the window, and use a pixel shader to draw the pixels. That is clearly not the best way to just swap pixel buffers in a window.
Platform: Ubuntu 18
You can also display bitmapped images in a window pretty easily with SFML. In fact, it seems considerably faster than CImg in my other answer. I am no expert in this, but the following code does what you seem to want:
// g++ -std=c++11 main.cpp $(pkg-config --libs --cflags sfml-graphics sfml-window)
#include <SFML/Graphics.hpp>
#include <iostream>
#include <cstdint>
int main()
{
const unsigned width = 1024;
const unsigned height= 768;
// create the window
sf::RenderWindow window(sf::VideoMode(width, height), "Some Funky Title");
// create a texture
sf::Texture texture;
texture.create(width, height);
// Create a pixel buffer to fill with RGBA data
unsigned char *pixbuff = new unsigned char[width * height * 4];
// Create a uint32_t pointer to the same buffer so a whole pixel can be written at once
// (note: the byte order within each uint32_t depends on the machine's endianness)
uint32_t * intptr = (uint32_t *)pixbuff;
// The colour we will fill the window with
unsigned char red = 0;
unsigned char blue = 255;
// run the program as long as the window is open
int frame = 0;
while (window.isOpen())
{
// check all the window's events that were triggered since the last iteration of the loop
sf::Event event;
while (window.pollEvent(event))
{
// "close requested" event: we close the window
if (event.type == sf::Event::Closed)
window.close();
}
// clear the window with black color
window.clear(sf::Color::Black);
// Create RGBA value to fill screen with.
// Increment red and decrement blue on each cycle. Leave green=0, and make opaque
uint32_t RGBA;
RGBA = (red++ << 24) | (blue-- << 16) | 255;
// Stuff data into buffer
for(int i=0;i<width*height;i++){
intptr[i] = RGBA;
}
// Update screen
texture.update(pixbuff);
sf::Sprite sprite(texture);
window.draw(sprite);
// end the current frame
window.display();
std::cout << "Frame: " << frame << std::endl;
frame++;
if(frame==1000)break;
}
delete[] pixbuff;
return 0;
}
On my Mac, I achieved the following frame rates:
700 fps # 640x480 resolution
384 fps # 1024x768 resolution
You can/could create and fill a texture off-screen in a second thread if you want to improve performance, but this is already pretty fast.
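For instance, here is a minimal sketch of that idea (just an illustration, not tuned): a worker thread generates the pixels into a back buffer and the main thread only swaps and uploads; keeping the sf::Texture::update call on the main thread avoids having to deal with OpenGL context activation in the worker. The plain white fill is a placeholder for whatever generates your pixel data.
#include <SFML/Graphics.hpp>
#include <algorithm>
#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
    const unsigned width = 1024, height = 768;
    sf::RenderWindow window(sf::VideoMode(width, height), "Threaded fill");
    sf::Texture texture;
    texture.create(width, height);
    sf::Sprite sprite(texture);

    std::vector<sf::Uint8> front(width * height * 4), back(width * height * 4);
    std::mutex swapMutex;
    std::atomic<bool> running{true};

    // Worker: generate RGBA pixels into the back buffer, then hand it over.
    std::thread worker([&]() {
        while (running) {
            std::fill(back.begin(), back.end(), 255);   // placeholder for real pixel generation
            std::lock_guard<std::mutex> lock(swapMutex);
            front.swap(back);
        }
    });

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        {
            std::lock_guard<std::mutex> lock(swapMutex);
            texture.update(front.data());               // upload stays on the main thread
        }
        window.clear();
        window.draw(sprite);
        window.display();
    }
    running = false;
    worker.join();
    return 0;
}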
Keywords: C++, Image Processing, display, bitmapped graphics, pixel buffer, SFML, imshow, prime.
You could use CImg which is a small, fast, modern C++ library. It is "header only" so no complicated linking or dependencies.
// http://cimg.eu/reference/group__cimg__tutorial.html
#include <iostream>
#include <string>
#include "CImg.h"
using namespace cimg_library;
int main(int argc,char **argv) {
const unsigned char white[] = { 255,255,255 };
const int width = 320;
const int height = 240;
// Create 3-channel RGB image
CImg<> img(width,height,1,3);
// Create main window
CImgDisplay main_window(img,"Random Data",0);
int frame = 0;
while (!main_window.is_closed()) {
// Fill image with random noise
img.rand(0,255);
// Draw in frame counter
std::string text = "Frame: " + std::to_string(frame);
img.draw_text(10,10,text.c_str(),white,0,1,32);
main_window.display(img);
frame++;
std::cout << "Frame: " << frame << std::endl;
}
}
Here it is in action. The quality is not the best because random data compresses poorly and Stack Overflow has a 2MB image limit; it looks better in real life.
Note that as I am using X11 underneath here, the compilation command must define cimg_display, so it will look something like this (on Linux you can typically drop the /opt/X11 paths):
g++ -Dcimg_display=1 -std=c++11 -I /opt/X11/include -L /opt/X11/lib -lX11 -lpthread ...
Note also that I am using img.rand() to fill the image with data; for your case you will want img.data(), which returns a pointer to the pixel buffer. Bear in mind that CImg stores its channels as separate planes (all the red values, then all the green, then all the blue) rather than interleaved, so an RGB888 buffer has to be de-interleaved into those planes rather than copied with a single memcpy().
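For example, a rough sketch of that de-interleaving (untested; the helper name from_rgb888 is just illustrative):
#include "CImg.h"
using namespace cimg_library;

// De-interleave an RGB888 buffer (3 bytes per pixel) into CImg's planar layout,
// where all R values come first, then all G, then all B.
CImg<unsigned char> from_rgb888(const unsigned char *rgb, unsigned w, unsigned h) {
    CImg<unsigned char> img(w, h, 1, 3);
    unsigned char *R = img.data(0, 0, 0, 0);
    unsigned char *G = img.data(0, 0, 0, 1);
    unsigned char *B = img.data(0, 0, 0, 2);
    for (unsigned i = 0; i < w * h; ++i) {
        R[i] = rgb[3 * i + 0];
        G[i] = rgb[3 * i + 1];
        B[i] = rgb[3 * i + 2];
    }
    return img;
}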
Note that I also did some stuff with writing to the framebuffer directly in another answer. That was in Python but it is easily adapted.
In my game engine, I have a texture loading API which wraps low level libraries like OpenGL, DirectX, etc. This API uses Magick++ because I found it to be a convenient cross-platform solution and allows me to create procedural textures fairly easily.
I'm now adding a text rendering system using freetype where I want to use this texture API to dynamically generate a texture atlas for any given font where all the glyphs are stored horizontally adjacent.
I have been able to get this to work in the past by buffering the bitmaps directly into OpenGL. But now I want to accomplish this in a platform independent way, using this API.
I've looked around for a few examples but I can't find anything quite like what I'm after, so if there are any Magick++ experts around, I'd really appreciate some pointers.
So in simple terms: I've got a freetype bitmap and I want to be able to copy its pixel buffer to a specific offset inside a Magick::Image.
This code might help to clarify:
auto texture = e6::textures->create(e6::texture::specification{}, [name, totalWidth, maxHeight](){
// Initialises Freetype
FT_Face face;
FT_Library ft;
if (FT_Init_FreeType(&ft)) {
std::cout << "ERROR::FREETYPE: Could not init FreeType Library" << std::endl;
throw std::exception();
}
if (int error = FT_New_Face(ft, path(name.c_str()).c_str(), 0, &face)) {
std::cout << "Failed to initialise fonts: " << name << std::endl;
throw std::exception();
}
// Sets the size of the font
FT_Set_Pixel_Sizes(face, 0, 100);
unsigned int cursor = 0; // Keeps track of the horizontal offset.
// Preallocate an image buffer
// totalWidth and maxHeight is the size of the entire atlas
Magick::Image image(Magick::Geometry(totalWidth, maxHeight), "BLACK");
image.type(Magick::GrayscaleType);
image.magick("BMP");
image.depth(8);
image.modifyImage();
Magick::Pixels view(image);
// Loops through a subset of the ASCII codes
for (uint8_t c = 32; c < 128; c++) {
if (FT_Load_Char(face, c, FT_LOAD_RENDER)) {
std::cout << "Failed to load glyph: " << c << std::endl;
continue;
}
// Just for clarification...
unsigned int width = face->glyph->bitmap.width;
unsigned int height = face->glyph->bitmap.rows;
unsigned char* image_data = face->glyph->bitmap.buffer;
// This is the problem part.
// How can I copy the image_data into `image` at the cursor position?
cursor += width; // Advance the cursor
}
image.write(std::string(TEXTURES) + "font-test.bmp"); // Write to filesystem
// Clean up freetype
FT_Done_Face(face);
FT_Done_FreeType(ft);
return image;
}, "font-" + name);
I tried using a pixel cache which the documentation demonstrates:
Magick::Quantum *pixels = view.get(cursor, 0, width, height);
*pixels = *image_data;
view.sync();
But this leaves me with a completely black image. Looking at it again, I think the problem is that *pixels = *image_data only copies a single value into the cache rather than the whole glyph buffer.
I was hoping there'd be a way to modify the image data directly but after a lot of trial and error, I ended up just creating an image for each glyph and compositing them together:
...
Magick::Image glyph (Magick::Geometry(), "BLACK");
glyph.type(MagickCore::GrayscaleType);
glyph.magick("BMP");
glyph.depth(8);
glyph.read(width, height, "R", Magick::StorageType::CharPixel, image_data);
image.composite(glyph, cursor, 0);
cursor += width;
At the very least, I hope this helps to prevent someone else going down the same rabbit hole I did.
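For completeness, if you do want to write into the pixel cache directly, the copy has to cover the whole glyph and scale each byte up to the Quantum range. A rough, untested sketch; it assumes bitmap.pitch == bitmap.width, one quantum per pixel in the cache for this grayscale image, and that MagickCore::ScaleCharToQuantum is available in your build (adjust accordingly):
Magick::Pixels view(image);
Magick::Quantum *q = view.get(cursor, 0, width, height);
for (unsigned int row = 0; row < height; ++row) {
    for (unsigned int col = 0; col < width; ++col) {
        // Scale the 8-bit FreeType coverage value up to the full Quantum range.
        q[row * width + col] =
            MagickCore::ScaleCharToQuantum(image_data[row * width + col]);
    }
}
view.sync();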
I want to extract raw frames or bitmaps from a video that I'm playing in my C++ console application using C++/WinRT APIs. I'm simply using CopyFrameToVideoSurface to copy the video's frame to an IDirect3DSurface, but it just crashes my program (which works fine if I don't set up this frame-extracting callback). My goal is to render this frame buffer somewhere else to display the video.
Frame extracting code
(see complete project here: https://github.com/harmonoid/libwinmedia/tree/stackoverflow)
IDirect3DSurface surface = IDirect3DSurface();
Streams::IBuffer buffer = Streams::IBuffer();
DLLEXPORT void PlayerSetFrameEventHandler(
int32_t player_id, void (*callback)(uint8_t* buffer, int32_t size,
int32_t width, int32_t height)) {
g_media_players.at(player_id).IsVideoFrameServerEnabled(true);
g_media_players.at(player_id)
.VideoFrameAvailable([=](auto, const auto& args) -> void {
g_media_players.at(player_id).CopyFrameToVideoSurface(surface);
SoftwareBitmap bitmap =
SoftwareBitmap::CreateCopyFromSurfaceAsync(surface).get();
bitmap.CopyToBuffer(buffer);
(*callback)(buffer.data(), buffer.Length(), bitmap.PixelWidth(),
bitmap.PixelHeight());
});
}
You may simply build this shared library using cmake --build .
For testing the crash, you can compile the following example (also present in the linked repo):
https://github.com/harmonoid/libwinmedia/blob/stackoverflow/examples/frame_extractor.cpp
#include <cstdio>
#include "../include/internal.hpp"
int32_t main() {
using namespace Internal;
// Create a list of medias.
const char* media_uris[] = {
"http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/"
"ForBiggerJoyrides.mp4"};
const int media_ids[] = {0};
// Create a player instance.
PlayerCreate(0);
// Set frame callback (comment out the code to prevent crash from happening).
PlayerSetFrameEventHandler(
0, [](uint8_t*, int32_t, int32_t width, int32_t height) {
printf("Video width: %d, Video height: %d.", width, height);
});
// Open list of medias.
PlayerOpen(0, 1, media_uris, media_ids);
// Start playing the player.
PlayerPlay(0);
// Prevent console from closing.
getchar();
return 0;
}
I would really appreciate help fixing this code, or any other working method for extracting the frames or video bitmaps using winrt::Windows::Media::Playback::MediaPlayer.
Thank you 🙏.
I need to draw some graphics in C++, pixel by pixel, on a window. In order to do this I create an SFML window, sprite and texture. I draw my desired graphics to a uint8_t array and then update the texture and sprite with it. This process takes about 2500 us. Drawing two triangles which fill the entire window takes only 10 us. How is this massive difference possible? I've tried multithreading the pixel-by-pixel drawing, but the difference of two orders of magnitude remains. I've also tried drawing the pixels using a point map, with no improvement. I understand that SFML uses some GPU acceleration in the background, but simply looping and assigning the values to the pixel array already takes hundreds of microseconds.
Does anyone know of a more effective way to assign the values of pixels in a window?
Here is an example of the code I'm using to compare the speed of triangle and pixel-by-pixel drawing:
#include <SFML/Graphics.hpp>
#include <chrono>
using namespace std::chrono;
#include <iostream>
#include<cmath>
uint8_t* pixels;
int main(int, char const**)
{
const unsigned int width=1200;
const unsigned int height=1200;
sf::RenderWindow window(sf::VideoMode(width, height), "MA: Rasterization Test");
pixels = new uint8_t[width*height*4];
sf::Texture pixels_texture;
pixels_texture.create(width, height);
sf::Sprite pixels_sprite(pixels_texture);
sf::Clock clock;
sf::VertexArray triangle(sf::Triangles, 3);
triangle[0].position = sf::Vector2f(0, height);
triangle[1].position = sf::Vector2f(width, height);
triangle[2].position = sf::Vector2f(width/2, height-std::sqrt(std::pow(width,2)-std::pow(width/2,2)));
triangle[0].color = sf::Color::Red;
triangle[1].color = sf::Color::Blue;
triangle[2].color = sf::Color::Green;
while (window.isOpen()){
sf::Event event;
while (window.pollEvent(event)) {
if (event.type == sf::Event::Closed) {
window.close();
}
if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::Escape) {
window.close();
}
}
window.clear(sf::Color(255,255,255,255));
// Pixel-by-pixel
auto us = duration_cast< microseconds >(system_clock::now().time_since_epoch()).count(); // keep the full 64-bit count
for(int i=0;i!=width*height*4;++i){
pixels[i]=255;
}
pixels_texture.update(pixels);
window.draw(pixels_sprite);
auto duration=duration_cast< microseconds >(system_clock::now().time_since_epoch()).count()-us;
std::cout<<"Background: "<<duration<<" us\n";
// Triangle
us = duration_cast< microseconds >(system_clock::now().time_since_epoch()).count();
window.draw(triangle);
duration=duration_cast< microseconds >(system_clock::now().time_since_epoch()).count()-us;
std::cout<<"Triangle: "<<duration<<" us\n";
window.display();
}
return EXIT_SUCCESS;
}
Graphics on modern devices are drawn by the graphics card, and the drawing speed depends largely on how much data you send to graphics memory. That's why drawing just two triangles is fast: only a handful of vertices have to be transferred.
Regarding multithreading: with OpenGL (I don't remember exactly what SFML uses underneath, but the idea is the same), what you think of as "drawing" is really just sending commands and data to the graphics card, so multithreading on the CPU side does not help much; the graphics card processes those commands on its own.
If you are curious about how graphics cards work, this tutorial is the book you should read.
P.S. Since you edited your question: I suspect the 2500 us vs 10 us difference comes from your loop building a whole texture on the CPU every frame (even if it is just a plain white background) and from uploading that texture to the graphics card, while drawing the triangle only sends a few points. Also, you should probably start the timer after the fill loop if you only want to measure the upload and draw. Still, I suggest reading the tutorial; creating a texture pixel by pixel on the CPU usually points to a misunderstanding of how the GPU works.
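A quick way to see where the time goes is to time the three steps separately. A rough sketch, meant to drop into the question's render loop and reusing its variables (pixels, pixels_texture, pixels_sprite, window; <chrono> and <iostream> are already included there):
// Time the CPU fill, the texture upload and the draw call separately.
using std::chrono::duration_cast;
using std::chrono::microseconds;
using std::chrono::steady_clock;

auto t0 = steady_clock::now();
for (unsigned i = 0; i < width * height * 4; ++i)
    pixels[i] = 255;                         // CPU-side fill
auto t1 = steady_clock::now();
pixels_texture.update(pixels);               // upload to the GPU
auto t2 = steady_clock::now();
window.draw(pixels_sprite);                  // queue the draw call
auto t3 = steady_clock::now();

std::cout << "fill: "   << duration_cast<microseconds>(t1 - t0).count() << " us, "
          << "upload: " << duration_cast<microseconds>(t2 - t1).count() << " us, "
          << "draw: "   << duration_cast<microseconds>(t3 - t2).count() << " us\n";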
I've been working on a little light shader.
It works perfectly; the light fades as it's supposed to, and it's a circle around my character that moves with it.
It would be perfect if it weren't for the resize event.
When SFML resizes the window, it enlarges everything, but in a strange way: it enlarges everything except the shader.
I still want the window to be resizable (I love resizable pixel-graphics games, I find them the most beautiful, so I don't want to block the resize event).
Here's my shader:
uniform vec3 light;
void main(void) {
float distance = sqrt(pow(gl_FragCoord.x - light.x, 2) + pow(gl_FragCoord.y - light.y, 2));
float alpha = 1.;
if (distance <= light.z) {
alpha = (1.0 / light.z) * distance;
}
gl_FragColor = vec4(0., 0., 0., alpha);
}
So, the problem is: my window is shown at 1280 x 736 (to fit 32x32 textures), and I have a 1920 x 1080 monitor. When I enlarge the window to fill 1920 x 1080 (title bar included), the whole thing resizes correctly and everything's fine, but the shader now works in 1920 x 1080 coordinates (minus the title bar). So the shader needs different coordinates (what's supposed to be at x = 32, y = 0 is, for the shader, at x = 48, y = 0).
So I was wondering: is it possible to scale the shader along with the whole window? Should I use events or something like that?
Thanks for your answers ^^
EDIT: Here are some pics:
So this is the light shader before it resizes (it's dark everywhere but on the player, like it's supposed to be).
Then I resize the window: the player doesn't move and the textures fill the entire window, but the light has moved.
So, to explain it properly: when I resize the window, I want everything to scale so the window stays full of textures. But when I do that, the coordinates given to my shader are the pre-resize ones, and if I move, the light moves as if I hadn't resized the window, so it is never on my player again.
I'm not sure it's clearer, but I tried my best.
EDIT2: Here's the code which calls the shader:
void Graphics::UpdateLight() {
short radius = 65; // 265 on the pictures
int x = m_game->GetPlayer()->GetSprite()->getPosition().x + CASE_LEN / 2; // Setting on the middle of the player sprite (CASE_LEN is a const which contains the size of a case (here 32))
int y = HEIGHT - (m_game->GetPlayer()->GetSprite()->getPosition().y + CASE_LEN / 2); // (the "HEIGHT -" part was set because it seems that y = 0 is on the bottom of the texture for GLSL)
sf::Vector3f shaderLight;
shaderLight.x = x;
shaderLight.y = y;
shaderLight.z = radius;
m_lightShader.setParameter("light", shaderLight);
}
The code snippet you're showing really only updates the shader coordinates (and from a quick glimpse it looks fine). The bug most likely happens somewhere where you're actually drawing things.
I'd use a completely different approach, because your shader approach might get rather tedious once you're rendering multiple things, other light sources, etc.
As such I'd suggest you render a light map to a render texture (which would essentially be like "black = no light, color = light of that color").
Rather than trying to explain everything in text, I've written a quick commented example program which will draw a window on screen and move some light sources over a background image (I've used the one that comes with SFML's shader example):
There are no requirements other than having a file called "background.jpg" in your startup path.
Feel free to copy this code or use it for inspiration. Just keep in mind this isn't optimized and really just a quick edit to show the general idea.
#include <SFML/Graphics.hpp>
#include <vector>
#include <cmath>
const float PI = 3.1415f;
struct Light
{
sf::Vector2f position;
sf::Color color;
float radius;
};
int main()
{
// Let's setup a window
sf::RenderWindow window(sf::VideoMode(640, 480), "SFML Lights");
window.setVerticalSyncEnabled(false);
window.setFramerateLimit(60);
// Create something simple to draw
sf::Texture texture;
texture.loadFromFile("background.jpg");
sf::Sprite background(texture);
// Setup everything for the lightmap
sf::RenderTexture lightmapTex;
// We're using a 512x512 render texture for max. compatibility
// On modern hardware it could match the window resolution of course
lightmapTex.create(512, 512);
sf::Sprite lightmap(lightmapTex.getTexture());
// Scale the sprite to fill the window
lightmap.setScale(640 / 512.f, 480 / 512.f);
// Set the lightmap's view to the same as the window
lightmapTex.setView(window.getDefaultView());
// Drawable helper to draw lights
// We'll just have to adjust the first vertex's color to tint it
sf::VertexArray light(sf::PrimitiveType::TriangleFan);
light.append({sf::Vector2f(0, 0), sf::Color::White});
// This is inaccurate, but for demo purposes…
// This could be more elaborate to allow better graduation etc.
for (float i = 0; i <= 2 * PI; i += PI * .125f)
light.append({sf::Vector2f(std::sin(i), std::cos(i)), sf::Color::Transparent});
// Setup some lights
std::vector<Light> lights;
lights.push_back({sf::Vector2f(50.f, 50.f), sf::Color::White, 100.f });
lights.push_back({sf::Vector2f(350.f, 150.f), sf::Color::Red, 150.f });
lights.push_back({sf::Vector2f(150.f, 250.f), sf::Color::Yellow, 200.f });
lights.push_back({sf::Vector2f(250.f, 450.f), sf::Color::Cyan, 100.f });
// RenderStates helper to transform and draw lights
sf::RenderStates rs(sf::BlendAdd);
while (window.isOpen()) {
sf::Event event;
while (window.pollEvent(event)) {
switch (event.type) {
case sf::Event::Closed:
window.close();
break;
}
}
bool flip = false; // simple toggle to animate differently
// Draw the light map
lightmapTex.clear(sf::Color::Black);
for(Light &l : lights)
{
// Apply all light attributes and render it
// Reset the transformation
rs.transform = sf::Transform::Identity;
// Move the light
rs.transform.translate(l.position);
// And scale it (this could be animated to create flicker)
rs.transform.scale(l.radius, l.radius);
// Adjust the light color (first vertex)
light[0].color = l.color;
// Draw the light
lightmapTex.draw(light, rs);
// To make things a bit more interesting
// We're moving the lights
l.position.x += flip ? 2 : -2;
flip = !flip;
if (l.position.x > 640)
l.position.x -= 640;
else if (l.position.x < 0)
l.position.x += 640;
}
lightmapTex.display();
window.clear(sf::Color::White);
// Draw the background / game
window.draw(background);
// Draw the lightmap
window.draw(lightmap, sf::BlendMultiply);
window.display();
}
}
I found this raycasting tutorial (http://lodev.org/cgtutor/raycasting.html) on the Internet, was interested, and wanted to make my own. I wanted to do it in SFML though, and I wanted to extend it into a 3D version, so there could be different levels the player can walk on. That means you need one ray for every pixel, so each pixel has to be drawn independently. I found this vertex array tutorial (http://www.sfml-dev.org/tutorials/2.1/graphics-vertex-array.php), and it seemed easy enough to have the array consist of individual vertices.
To start, I figured the best thing to do would be to create a class that could read the pixels returned by the rays and draw them to the screen. I used a VertexArray, but things were not working for some reason. I tried to isolate the problem, but I've had little success. I wrote a simple vertex array of just green pixels that should fill up part of the screen, and still there are problems: the pixels don't show up where I expect them to. Here's my code and a pic of what I mean.
#include "SFML/Graphics.hpp"
int main() {
sf::RenderWindow window(sf::VideoMode(400, 240), "Test Window");
window.setFramerateLimit(30);
sf::VertexArray pointmap(sf::Points, 400 * 10);
for(register int a = 0;a < 400 * 10;a++) {
pointmap[a].position = sf::Vector2f(a % 400,a / 400);
pointmap[a].color = sf::Color::Green;
}
while (window.isOpen()) {
sf::Event event;
while (window.pollEvent(event)) {
if (event.type == sf::Event::Closed)
window.close();
}
window.clear();
window.draw(pointmap);
//</debug>
window.display();
}
return 0;
}
I meant for this to just fill in the top 10 rows with Green, but apparently that is not what I did... I think if I can figure out what is causing this not to work, I can probably fix the main problem. Also if you think there is a better way to do this instead, you could let me know :)
Thanks!
I think you misused the vertex array. Take a look at the sf::Quads primitive in the tutorial's table: you need to define 4 points (coordinates) to draw a quad, and a pixel is just a quad of side length 1.
So what you need is a vertex array of size 400*10*4, where each group of four consecutive vertices gets the four corners of its own 1x1 quad.
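For example, a quick untested sketch of that quad-per-pixel setup, for the same 400x10 green area as in the question:
// One 1x1 quad (four vertices) per pixel of a 400x10 green area.
const unsigned W = 400, H = 10;
sf::VertexArray pointmap(sf::Quads, W * H * 4);
for (unsigned y = 0; y < H; ++y) {
    for (unsigned x = 0; x < W; ++x) {
        unsigned i = (y * W + x) * 4;
        pointmap[i + 0].position = sf::Vector2f(float(x),     float(y));
        pointmap[i + 1].position = sf::Vector2f(float(x + 1), float(y));
        pointmap[i + 2].position = sf::Vector2f(float(x + 1), float(y + 1));
        pointmap[i + 3].position = sf::Vector2f(float(x),     float(y + 1));
        for (unsigned k = 0; k < 4; ++k)
            pointmap[i + k].color = sf::Color::Green;
    }
}
// then, inside the render loop: window.draw(pointmap);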
You can also use another method provided by SFML: update a texture pixel by pixel and display it. It may not be the most efficient thing to do (you'll have to compare it with the vertex approach), but it has the advantage of being rather simple.
const unsigned int W = 400;
const unsigned int H = 10; // you can change this to full window size later
sf::Uint8* pixels = new sf::Uint8[W*H*4];
sf::Texture texture;
texture.create(W, H);
sf::Sprite sprite(texture); // needed to draw the texture on screen
// ...
for (unsigned int i = 0; i < W*H*4; i += 4) {
pixels[i] = r; // obviously, assign the values you need here to form your color
pixels[i+1] = g;
pixels[i+2] = b;
pixels[i+3] = a;
}
texture.update(pixels);
// ...
window.draw(sprite);
The sf::Texture::update function accepts an array of sf::Uint8 values; they represent the color of each pixel of the texture. As the pixels need to be 32-bit RGBA, each group of four consecutive sf::Uint8 values holds the RGBA components of one pixel.
Replace the line:
pointmap[a].position = sf::Vector2f(a % 400,a / 400);
With:
pointmap[a].position = sf::Vector2f(a % 400,(a/400) % 400);