I'm trying to make a custom Surface class for Awesomium to blit to a regular SDL window. Awesomium states that its pixel format is BGRA (http://awesomium.com/docs/1_7_0/cpp_api/class_awesomium_1_1_surface.html), and SDL has given me an ARGB window.
virtual void Paint( unsigned char* src_buffer, int src_row_span, const Awesomium::Rect& src_rect, const Awesomium::Rect& dest_rect )
{
    SDL_LockSurface(screen);
    for(int x = 0; x < src_rect.width; x++)
        for(int y = 0; y < src_rect.height; y++)
        {
            unsigned char *s = src_buffer + src_rect.x*4 + src_rect.y*src_row_span;
            Uint32 *d = (Uint32*)screen->pixels + (x + dest_rect.x) + (y + dest_rect.y)*screen->w;
            Uint32 sp = *((Uint32*)s);
            int r = (sp & 0xFF00) >> 8;
            int g = (sp & 0xFF0000) >> 16;
            int b = (sp & 0xFF000000) >> 24;
            Uint32 dp = 0xFF000000 | (r << 16) | (g << 8) | (b);
            *d = dp;
            if(x == 491 && y == 235)
                printf("");  // breakpoint hook for debugging
        }
    SDL_UnlockSurface(screen);
    SDL_UpdateRect(screen, dest_rect.x, dest_rect.y, dest_rect.width, dest_rect.height);
}
However, when I run this on Google's home page I get a blue screen with two white rectangles, one matching up with Google's search box and the other below it where there is only blank space on google.com.
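Two details look suspect here, and a hedged sketch of a corrected Paint follows (assuming a little-endian machine and 4-byte pixels): the source pointer s never advances with x and y, so every destination pixel samples the same source pixel, and the channel shifts don't match BGRA byte order, where a BGRA pixel loads into a Uint32 as 0xAARRGGBB:
virtual void Paint( unsigned char* src_buffer, int src_row_span, const Awesomium::Rect& src_rect, const Awesomium::Rect& dest_rect )
{
    SDL_LockSurface(screen);
    for(int y = 0; y < src_rect.height; y++)
        for(int x = 0; x < src_rect.width; x++)
        {
            // Advance the source pointer per pixel, not just once per rect
            unsigned char *s = src_buffer + (src_rect.x + x)*4 + (src_rect.y + y)*src_row_span;
            // Assumes screen pitch == width*4, as in the original code
            Uint32 *d = (Uint32*)screen->pixels + (x + dest_rect.x) + (y + dest_rect.y)*screen->w;
            Uint32 sp = *((Uint32*)s);
            // BGRA bytes read as a little-endian Uint32 give 0xAARRGGBB,
            // which already matches SDL's ARGB layout; just force alpha opaque
            *d = 0xFF000000 | (sp & 0x00FFFFFF);
        }
    SDL_UnlockSurface(screen);
    SDL_UpdateRect(screen, dest_rect.x, dest_rect.y, dest_rect.width, dest_rect.height);
}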
I want to extract raw frames or bitmaps from a video that I'm playing in my C++ console application using C++/WinRT APIs. I'm simply using CopyFrameToVideoSurface to copy the video's frame to an IDirect3DSurface, but it just crashes my program (which works fine if I don't set up this frame-extracting callback). My goal is to render this frame buffer somewhere else to display the video.
Frame extracting code
(see complete project here: https://github.com/harmonoid/libwinmedia/tree/stackoverflow)
IDirect3DSurface surface = IDirect3DSurface();  // default-constructed: a null interface
Streams::IBuffer buffer = Streams::IBuffer();   // default-constructed: a null interface

DLLEXPORT void PlayerSetFrameEventHandler(
    int32_t player_id, void (*callback)(uint8_t* buffer, int32_t size,
                                        int32_t width, int32_t height)) {
  g_media_players.at(player_id).IsVideoFrameServerEnabled(true);
  g_media_players.at(player_id)
      .VideoFrameAvailable([=](auto, const auto& args) -> void {
        // Copy the current frame into the surface declared above.
        g_media_players.at(player_id).CopyFrameToVideoSurface(surface);
        // Blocks on the async copy, on the frame-server callback thread.
        SoftwareBitmap bitmap =
            SoftwareBitmap::CreateCopyFromSurfaceAsync(surface).get();
        bitmap.CopyToBuffer(buffer);
        (*callback)(buffer.data(), buffer.Length(), bitmap.PixelWidth(),
                    bitmap.PixelHeight());
      });
}
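A likely cause of the crash (my diagnosis, not verified against the linked repo): both globals above are default-constructed, so CopyFrameToVideoSurface receives a null IDirect3DSurface and CopyToBuffer writes into a null IBuffer. Below is a hedged sketch of allocating real objects before registering the handler; the 1280x720 size, the BGRA format, and the InitFrameResources name are my assumptions, and error handling is omitted:
#include <d3d11.h>
#include <windows.graphics.directx.direct3d11.interop.h>
#include <winrt/Windows.Storage.Streams.h>

void InitFrameResources() {
  // Create a D3D11 device (link against d3d11.lib).
  winrt::com_ptr<ID3D11Device> device;
  D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                    D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0,
                    D3D11_SDK_VERSION, device.put(), nullptr, nullptr);

  // Create a BGRA texture the size of the video (1280x720 assumed here;
  // query the real video size in production code).
  D3D11_TEXTURE2D_DESC desc = {};
  desc.Width = 1280;
  desc.Height = 720;
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_DEFAULT;
  desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
  winrt::com_ptr<ID3D11Texture2D> texture;
  device->CreateTexture2D(&desc, nullptr, texture.put());

  // Wrap the texture as a WinRT IDirect3DSurface.
  winrt::com_ptr<IInspectable> inspectable;
  CreateDirect3D11SurfaceFromDXGISurface(
      texture.as<IDXGISurface>().get(), inspectable.put());
  surface = inspectable.as<IDirect3DSurface>();

  // IBuffer is only an interface; allocate a concrete Buffer with
  // enough capacity for one BGRA frame (4 bytes per pixel).
  buffer = winrt::Windows::Storage::Streams::Buffer(1280 * 720 * 4);
}
Blocking the VideoFrameAvailable callback with .get() may still be fragile; if problems persist, consider moving the CreateCopyFromSurfaceAsync work off the callback thread.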
You can simply build this shared library with cmake --build .
For testing the crash, you can compile the following example (also present in the linked repo):
https://github.com/harmonoid/libwinmedia/blob/stackoverflow/examples/frame_extractor.cpp
#include <cstdio>
#include "../include/internal.hpp"
int32_t main() {
  using namespace Internal;
  // Create a list of medias.
  const char* media_uris[] = {
      "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/"
      "ForBiggerJoyrides.mp4"};
  const int media_ids[] = {0};
  // Create a player instance.
  PlayerCreate(0);
  // Set frame callback (comment out the code to prevent crash from happening).
  PlayerSetFrameEventHandler(
      0, [](uint8_t*, int32_t, int32_t width, int32_t height) {
        printf("Video width: %d, Video height: %d.", width, height);
      });
  // Open list of medias.
  PlayerOpen(0, 1, media_uris, media_ids);
  // Start playing the player.
  PlayerPlay(0);
  // Prevent console from closing.
  getchar();
  return 0;
}
I would really appreciate help fixing this code, or any other working method for extracting frames or video bitmaps using winrt::Windows::Media::Playback::MediaPlayer.
Thank you 🙏.
Following is the stack trace of the crash:
The problem is simple enough: I have code that generates a pixel buffer. Now I need to present this pixel buffer instead of saving an image and then analyzing it afterwards.
What would be the solution to:
Open a window
Replace all pixels in this window with my RGB888 pixels
So far the suggestion has been to use OpenGL: create a vertex buffer for a rect covering the window and use a pixel shader to draw the pixels. That is clearly not the simplest way to swap pixel buffers in a window.
Platform: Ubuntu 18
You can also display bitmapped images in a window pretty easily with SFML. In fact, it seems considerably faster than CImg in my other answer. I am no expert in this, but the following code does what you seem to want:
// g++ -std=c++11 main.cpp $(pkg-config --libs --cflags sfml-graphics sfml-window)
#include <SFML/Graphics.hpp>
#include <iostream>
#include <cstdint>
int main()
{
    const unsigned width  = 1024;
    const unsigned height = 768;

    // create the window
    sf::RenderWindow window(sf::VideoMode(width, height), "Some Funky Title");

    // create a texture
    sf::Texture texture;
    texture.create(width, height);

    // Create a pixel buffer to fill with RGBA data
    unsigned char *pixbuff = new unsigned char[width * height * 4];

    // Create uint32_t pointer to above for easy access as RGBA
    uint32_t *intptr = (uint32_t *)pixbuff;

    // The colour we will fill the window with
    unsigned char red  = 0;
    unsigned char blue = 255;

    // run the program as long as the window is open
    int frame = 0;
    while (window.isOpen())
    {
        // check all the window's events that were triggered since the last iteration of the loop
        sf::Event event;
        while (window.pollEvent(event))
        {
            // "close requested" event: we close the window
            if (event.type == sf::Event::Closed)
                window.close();
        }

        // clear the window with black color
        window.clear(sf::Color::Black);

        // Create RGBA value to fill screen with.
        // Increment red and decrement blue on each cycle. Leave green=0, and make opaque.
        // Note: sf::Texture expects bytes in R,G,B,A order, so on a little-endian
        // machine red belongs in the low byte and alpha in the high byte.
        uint32_t RGBA = red++ | (blue-- << 16) | (255u << 24);

        // Stuff data into buffer
        for (unsigned i = 0; i < width * height; i++) {
            intptr[i] = RGBA;
        }

        // Update screen
        texture.update(pixbuff);
        sf::Sprite sprite(texture);
        window.draw(sprite);

        // end the current frame
        window.display();

        std::cout << "Frame: " << frame << std::endl;
        frame++;
        if (frame == 1000)
            break;
    }
    delete[] pixbuff; // release the pixel buffer
    return 0;
}
On my Mac, I achieved the following frame rates:
700 fps @ 640x480 resolution
384 fps @ 1024x768 resolution
You can/could create and fill a texture off-screen in a second thread if you want to improve performance, but this is already pretty fast.
Keywords: C++, Image Processing, display, bitmapped graphics, pixel buffer, SFML, imshow, prime.
You could use CImg which is a small, fast, modern C++ library. It is "header only" so no complicated linking or dependencies.
// http://cimg.eu/reference/group__cimg__tutorial.html
#include <iostream>
#include <string>
#include "CImg.h"
using namespace cimg_library;
int main(int argc, char **argv) {
    const unsigned char white[] = { 255, 255, 255 };
    const int width  = 320;
    const int height = 240;

    // Create 3-channel RGB image
    CImg<> img(width, height, 1, 3);

    // Create main window
    CImgDisplay main_window(img, "Random Data", 0);

    int frame = 0;
    while (!main_window.is_closed()) {
        // Fill image with random noise
        img.rand(0, 255);

        // Draw in frame counter
        std::string text = "Frame: " + std::to_string(frame);
        img.draw_text(10, 10, text.c_str(), white, 0, 1, 32);

        main_window.display(img);
        frame++;
        std::cout << "Frame: " << frame << std::endl;
    }
}
Here it is in action; the quality is not the best because random data compresses poorly and Stack Overflow has a 2 MB image limit. It looks fine in real life.
Note that as I am using X11 underneath here, the compilation command must define cimg_display, so it will look something like:
g++ -Dcimg_display=1 -std=c++11 -I /opt/X11/include -L /opt/X11/lib -lX11 ...
Note also that I am using img.rand() to fill the image with data; you will want to get img.data(), which is a pointer to the pixel buffer, and then memcpy() your image data into the buffer at that address. Bear in mind that CImg stores pixels planar (all R, then all G, then all B) rather than interleaved, as shown in the sketch below.
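A minimal sketch of that copy, assuming a hypothetical interleaved RGB888 source buffer src of size width*height*3:
// Copy interleaved RGB888 (R,G,B,R,G,B,...) into CImg's planar layout.
CImg<unsigned char> img(width, height, 1, 3);
unsigned char *r = img.data(0, 0, 0, 0); // start of the R plane
unsigned char *g = img.data(0, 0, 0, 1); // start of the G plane
unsigned char *b = img.data(0, 0, 0, 2); // start of the B plane
for (int i = 0; i < width * height; i++) {
    r[i] = src[3 * i + 0];
    g[i] = src[3 * i + 1];
    b[i] = src[3 * i + 2];
}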
Note that I also did some stuff with writing to the framebuffer directly in another answer. That was in Python but it is easily adapted.
I'm trying to draw Xiaolin Wu's line algorithm in SDL2. Due to an alpha channel problem I get nothing. I am drawing into a surface, then creating a texture from the surface.
I have tried
int SDL_SetTextureBlendMode(SDL_Texture* texture, SDL_BlendMode blendMode)
with all possible blending modes
Getting the color (there's no error here, I think):
void LINE_PlotPoint(SDL_Surface *surface, int x, int y, double alpha)
{
    Uint32 *pixels = (Uint32 *)surface->pixels;
    Uint32 pixel = SYS_GetForegroundColor();
    Uint8 a = alpha * 255;
    pixel &= ~amask;
    pixel |= a;
    pixels[(y * surface->w) + x] = pixel;
}
The main loop for this task is:
if(event.type == SDL_MOUSEMOTION)
{
    SDL_GetMouseState(&i, &j);
    i -= grect.x;
    j -= grect.y;
    TOOL_DrawLine(tempSurface, x, y, i, j, 1); // Xiaolin Wu's line algorithm
    if(tempTexture)
    {
        SDL_DestroyTexture(tempTexture);
    }
    // create texture from surface and calculate the src & dest rects
    tempTexture = TOOL_CreateLineTexture(tempSurface, &srect, &drect);
    if(tempTexture == NULL)
    {
        puts("error");
    }
    SDL_SetTextureBlendMode(tempTexture, SDL_BLENDMODE_BLEND);
    SDL_FillRect(tempSurface, NULL, 0); // clear the surface (pixels no longer needed)
}
I have tried it before with alpha = 255 and it works normally, but with varying alpha values nothing appears.
Now I have found the problem:
I had simply forgotten to shift the alpha value into its correct place:
pixel&=~amask;
pixel |= a << ashift;
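For reference, here is the plot function with that fix applied; a sketch assuming amask and ashift mirror the surface's format fields (surface->format->Amask and surface->format->Ashift), and that SYS_GetForegroundColor() is the asker's own helper:
void LINE_PlotPoint(SDL_Surface *surface, int x, int y, double alpha)
{
    Uint32 *pixels = (Uint32 *)surface->pixels;
    Uint32 pixel = SYS_GetForegroundColor();
    Uint8 a = (Uint8)(alpha * 255);
    pixel &= ~surface->format->Amask;               // clear the old alpha bits
    pixel |= (Uint32)a << surface->format->Ashift;  // insert alpha at the right offset
    pixels[(y * surface->w) + x] = pixel;
}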
Meta Context:
I'm currently working on a game that uses OpenCV as a substitute for ordinary inputs (keyboard, mouse, etc.). I'm using Unity3D's C# scripts and OpenCV in C++ via DllImport. My goal is to create an image inside my game coming from OpenCV.
Code Context:
As done usually in OpenCV, I'm using Mat to represent my image. This is the way that I'm exporting the image bytes:
cv::Mat _currentFrame;
...
extern "C" byte * EXPORT GetRawImage()
{
return _currentFrame.data;
}
And this is how I'm importing it from C#:
[DllImport ("ImageInputInterface")]
private static extern IntPtr GetRawImage ();
...
public static void GetRawImageBytes (ref byte[] result, int arrayLength) {
    IntPtr a = GetRawImage ();
    Marshal.Copy(a, result, 0, arrayLength);
    FreeBuffer(a);
}
Judging by the way I understand OpenCV, I expect the byte array to be structured this way when serialized into a uchar pointer:
b1, g1, r1, b2, g2, r2, ...
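One caveat worth noting: that dense b,g,r layout only holds when the Mat is continuous; otherwise each row starts at a step-byte offset that can exceed width*3. A small guard using standard OpenCV calls (a sketch, not the asker's code):
// Hand C# one dense BGR block: clone only when rows are padded.
cv::Mat dense = _currentFrame.isContinuous() ? _currentFrame
                                             : _currentFrame.clone();
// dense.data now holds rows back-to-back: b1,g1,r1,b2,g2,r2,...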
I'm converting this BGR array to an RGB array using:
public static void BGR2RGB(ref byte[] buffer) {
    byte swap;
    for (int i = 0; i < buffer.Length; i = i + 3) {
        swap = buffer[i];
        buffer[i] = buffer[i + 2];
        buffer[i + 2] = swap;
    }
}
Finally, I'm using Unity's LoadRawTextureData to load the bytes to a texture:
this.tex = new Texture2D(
ImageInputInterface.GetImageWidth(),
ImageInputInterface.GetImageHeight(),
TextureFormat.RGB24,
false
);
...
ImageInputInterface.GetRawImageBytes(ref ret, ret.Length);
ImageInputInterface.BGR2RGB(ref ret);
tex.LoadRawTextureData(ret);
tex.Apply();
Results:
The final image seems to be scattered in some way; it resembles some shapes, but it seems to triple the shapes as well. This is me holding my hand in front of the camera:
[Me, my hand and the camera]
Doing some tests, I concluded that I decoded the channels correctly, since, using my phone to emit RGB light, I can reproduce the colors from the real world:
[Red Test]
[Blue Test]
[Green Test]
There are also some strange lines in the image:
[Spooky Lines]
There is also my face to compare these images to:
[My face in front of the camera]
Questions:
Since I'm able to correctly decode the color channels, what have I assumed wrong in decoding the OpenCV array? Is it that I don't know how Unity's LoadRawTextureData works, or have I decoded something in the wrong way?
How is the OpenCV Mat.data array structured?
UPDATE
Thanks to @Programmer, his solution worked like magic.
[Me Happy]
I changed his script a little; some of the steps turned out to be unnecessary, and in my case I needed to use CV_BGR2RGBA, not CV_RGB2RGBA:
extern "C" void EXPORT GetRawImage( byte *data, int width, int height )
{
cv::Mat resizedMat( height, width, _currentFrame.type() );
cv::resize( _currentFrame, resizedMat, resizedMat.size(), cv::INTER_CUBIC );
cv::Mat argbImg;
cv::cvtColor( resizedMat, argbImg, CV_BGR2RGBA );
std::memcpy( data, argbImg.data, argbImg.total() * argbImg.elemSize() );
}
Use SetPixels32 instead of LoadRawTextureData. Instead of returning the array data from C++, do it from C#: create a Color32 array and pin it in C# with GCHandle.Alloc, send the address of the pinned Color32 array to C++, and use cv::resize to resize the cv::Mat to match the size of the pixel array sent from C#. You must do this step, or expect errors or other issues.
Finally, convert cv::Mat from RGB to ARGB then use std::memcpy to update the array from C++. The SetPixels32 function can then be used to load that updated Color32 array into Texture2D. This is how I do it and it has been working for me without any issues. There might be other better ways to do it but I have never found one.
C++:
cv::Mat _currentFrame;
void GetRawImageBytes(unsigned char* data, int width, int height)
{
    //Resize Mat to match the array passed to it from C#
    cv::Mat resizedMat(height, width, _currentFrame.type());
    //Pass fx/fy explicitly so INTER_CUBIC is taken as the interpolation flag
    cv::resize(_currentFrame, resizedMat, resizedMat.size(), 0, 0, cv::INTER_CUBIC);

    //You may not need this line. Depends on what you are doing
    cv::imshow("Nicolas", resizedMat);

    //Convert from RGB to ARGB
    cv::Mat argb_img;
    cv::cvtColor(resizedMat, argb_img, CV_RGB2BGRA);
    std::vector<cv::Mat> bgra;
    cv::split(argb_img, bgra);
    std::swap(bgra[0], bgra[3]);
    std::swap(bgra[1], bgra[2]);
    //Note: without a cv::merge(bgra, argb_img), the two swaps above never
    //write back into argb_img, so the memcpy below copies the unswapped
    //CV_RGB2BGRA result (see the asker's update above).
    std::memcpy(data, argb_img.data, argb_img.total() * argb_img.elemSize());
}
C#:
Attach it to any GameObject with a Renderer and you should see the cv::Mat displayed and updated on that object every frame. The code is commented in case anything is unclear:
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class Test : MonoBehaviour
{
    [DllImport("ImageInputInterface")]
    private static extern void GetRawImageBytes(IntPtr data, int width, int height);

    private Texture2D tex;
    private Color32[] pixel32;
    private GCHandle pixelHandle;
    private IntPtr pixelPtr;

    void Start()
    {
        InitTexture();
        gameObject.GetComponent<Renderer>().material.mainTexture = tex;
    }

    void Update()
    {
        MatToTexture2D();
    }

    void InitTexture()
    {
        tex = new Texture2D(512, 512, TextureFormat.ARGB32, false);
        pixel32 = tex.GetPixels32();
        //Pin pixel32 array
        pixelHandle = GCHandle.Alloc(pixel32, GCHandleType.Pinned);
        //Get the pinned address
        pixelPtr = pixelHandle.AddrOfPinnedObject();
    }

    void MatToTexture2D()
    {
        //Convert Mat to Texture2D
        GetRawImageBytes(pixelPtr, tex.width, tex.height);
        //Update the Texture2D with array updated in C++
        tex.SetPixels32(pixel32);
        tex.Apply();
    }

    void OnApplicationQuit()
    {
        //Free handle
        pixelHandle.Free();
    }
}
I have this image:
I have the following functions:
void DrawPixel(int x, int y, unsigned char r, unsigned char g, unsigned char b);
void ReadPixel(GLint x, GLint y, unsigned char &r, unsigned char &g, unsigned char &b);
Objective
Remove the small shapes from the image. There are 4 small shapes in the image, and I want to remove them. One way to do this would be by counting how many red pixels each shape has: if a shape has fewer than 200 red pixels, for example, I'll remove it from the image by painting it black. This is just one solution I imagined; if anyone has another alternative it will be welcome.
What I've tried
void RemoveLastNoises(){
    int x, y;
    int cont = 0;
    unsigned char r, g, b;
    int xAux;
    for(y = 0; y < NewImage.SizeY() - 0; y++){
        for(x = 0; x < NewImage.SizeX() - 0; x++){
            NewImage.ReadPixel(x, y, r, g, b);
            if(r == 255){
                cont = 0;
                while(r == 255){
                    NewImage.ReadPixel(x + cont, y, r, g, b);
                    cont = cont + 1;
                }
                if(cont < 300){
                    NewImage.DrawPixel(x, y, 255, 255, 255);
                }
            }
            xAux = x;
            x = x + cont;
        }
        x = xAux;
    }
}
This works, but it only counts how many consecutive red pixels there are in a row (along x); I found it interesting to put it here as a reference. Anyway, any idea for removing the small shapes will be welcome.
Note: The larger shapes are not to be modified. The height and width of the actual image are larger; I reduced the dimensions to keep the question readable.
There is a set of nonlinear filters called morphological filters.
They combine "and" and "or" operations over a filter mask.
What you need to do is implement a closing. OpenGL does not provide such functions, so you have to write the code on your own. To do so you:
1. If the shapes are always red, just use the red channel as a grey-scale image
2. Create a binary image from the created image
3. Invert the image from step 2
4. Create a filter mask of '1's that is large enough to cover the small parts you want to erase
5. Do a dilation (https://en.wikipedia.org/wiki/Dilation_(morphology)) on the image with the filter mask
6. Do an erosion (https://en.wikipedia.org/wiki/Erosion_(morphology)) on the image with the same filter mask or a slightly smaller one
7. Invert the image again
Steps 5 and 6 together describe a "closing": https://en.wikipedia.org/wiki/Closing_(morphology). A sketch of the whole recipe follows below.
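Since OpenGL won't do this for you, here is a minimal sketch of steps 1-7 using plain loops and the question's NewImage.ReadPixel/DrawPixel helpers; the mask radius is an assumption to tune (big enough to swallow the small shapes, smaller than the large ones), and the background is assumed to be black:
#include <vector>

// Binary dilation: a pixel becomes 1 if any pixel under the mask is 1.
static void Dilate(std::vector<unsigned char> &img, int w, int h, int radius) {
    std::vector<unsigned char> out(w * h, 0);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            unsigned char v = 0;
            for (int dy = -radius; dy <= radius && !v; dy++)
                for (int dx = -radius; dx <= radius && !v; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                        v = img[ny * w + nx];
                }
            out[y * w + x] = v;
        }
    img.swap(out);
}

// Binary erosion: a pixel stays 1 only if every pixel under the mask is 1.
static void Erode(std::vector<unsigned char> &img, int w, int h, int radius) {
    std::vector<unsigned char> out(w * h, 0);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            unsigned char v = 1;
            for (int dy = -radius; dy <= radius && v; dy++)
                for (int dx = -radius; dx <= radius && v; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h || !img[ny * w + nx])
                        v = 0;
                }
            out[y * w + x] = v;
        }
    img.swap(out);
}

// NewImage, ReadPixel and DrawPixel are the question's own helpers.
void RemoveSmallShapes(int w, int h, int radius /* e.g. 8, tune to taste */) {
    std::vector<unsigned char> mask(w * h);
    unsigned char r, g, b;
    // Steps 1-3: binarize on the red channel and invert (red shape -> 0, rest -> 1).
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            NewImage.ReadPixel(x, y, r, g, b);
            mask[y * w + x] = (r == 255) ? 0 : 1;
        }
    // Steps 4-6: closing = dilation then erosion; shapes thinner than
    // roughly 2*radius are swallowed, larger ones are restored.
    Dilate(mask, w, h, radius);
    Erode(mask, w, h, radius);
    // Step 7: invert back; anything now classified as background is
    // painted black (harmless where the background is already black).
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (mask[y * w + x])
                NewImage.DrawPixel(x, y, 0, 0, 0);
}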