Rust and OpenGL setup

I'm trying to port my OpenGL game from C++ to Rust and achieve two things:
1. Print OpenGL error messages to console.
2. Autocompletion of GL functions and constants in my IDE (Visual Studio Code).
I simply generated OpenGL bindings with the gl_generator crate and copied the generated bindings.rs file into my Cargo project.
extern crate sdl2;

mod bindings;
use bindings as GL;

fn main() {
    let sdl = sdl2::init().unwrap();
    let mut event_pump = sdl.event_pump().unwrap();
    let video_subsystem = sdl.video().unwrap();

    let gl_attr = video_subsystem.gl_attr();
    gl_attr.set_context_profile(sdl2::video::GLProfile::Core);
    gl_attr.set_context_version(4, 5);

    let window = video_subsystem
        .window("Window", 900, 700)
        .opengl()
        .resizable()
        .build()
        .unwrap();
    let _gl_context = window.gl_create_context().unwrap();
    let gl = GL::Gl::load_with(|s| {
        video_subsystem.gl_get_proc_address(s) as *const std::os::raw::c_void
    });

    unsafe { gl.Viewport(0, 0, 900, 700); }

    'main: loop {
        unsafe {
            gl.UseProgram(42); // <- error (^ GL error triggered: 1281)
            gl.ClearColor(0.0, 0.3, 0.6, 1.0);
            gl.Clear(GL::COLOR_BUFFER_BIT);
        }
        window.gl_swap_window();

        for event in event_pump.poll_iter() {
            match event {
                sdl2::event::Event::Quit { .. } => break 'main,
                _ => {},
            }
        }
    }
}
The problem is that the gl variable, where all the functions are stored, is not global, and I'm not sure how to use it from other modules and functions.
The reason all the functions live inside the Gl struct is that I used DebugStructGenerator in my build script. It prints not only errors but every OpenGL function call (e.g., [OpenGL] ClearColor(0.0, 0.3, 0.6, 1.0)). It would be great if it printed only the error messages.
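One possible way to get only error messages (a sketch, not taken from the answers below): generate the bindings with the plain StructGenerator instead of DebugStructGenerator, and call glGetError manually at the points that matter; check_gl_error is a made-up helper name:

fn check_gl_error(gl: &bindings::Gl, location: &str) {
    unsafe {
        // glGetError returns one queued error at a time; drain the queue.
        let mut err = gl.GetError();
        while err != bindings::NO_ERROR {
            eprintln!("[OpenGL] error {} at {}", err, location);
            err = gl.GetError();
        }
    }
}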
My build.rs file:
extern crate gl_generator;

use gl_generator::{Registry, Fallbacks, /*StructGenerator, GlobalGenerator,*/ DebugStructGenerator, Api, Profile};
use std::env;
use std::fs::File;
use std::path::Path;

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let mut file_gl = File::create(&Path::new(&out_dir).join("bindings.rs")).unwrap();
    let registry = Registry::new(Api::Gl, (4, 5), Profile::Core, Fallbacks::All, ["GL_NV_command_list"]);
    registry.write_bindings(DebugStructGenerator, &mut file_gl).unwrap();
}

There is no need to copy the generated bindings.rs into your source tree; include it straight from OUT_DIR:

pub mod gl {
    include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}

Put this in your main.rs or lib.rs. Since gl is included as a public module, you can then use the GL functions and constants from any of your modules with:
// anywhere in a module
use crate::gl;           // the module with all the constants, e.g. gl::ARRAY_BUFFER
use crate::gl::Gl;       // the instance struct: with gl: &Gl => gl.BindBuffer(...)
use crate::gl::types::*; // all the GL types like GLuint, GLenum, ...
Since this is Rust, you need to pass your Gl instance to your functions, as a reference or however you prefer.
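For example, a minimal sketch of a function that borrows the Gl instance (clear_screen is a made-up name):

fn clear_screen(gl: &Gl) {
    unsafe {
        // Methods live on the Gl instance, constants on the gl module.
        gl.ClearColor(0.0, 0.3, 0.6, 1.0);
        gl.Clear(gl::COLOR_BUFFER_BIT);
    }
}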
Maybe also check the lazy_static crate, which can be used to create a global variable:
lazy_static! {
    pub static ref gl: gl::Gl = create_opengl_context();
}
The first access to gl calls the context-creation function; all later accesses reuse the created instance.
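A sketch of what this enables (GL_CTX and create_opengl_context are made-up names; the static is renamed here because lazy_static generates a hidden type named after the static, which would otherwise clash with the gl bindings module):

lazy_static! {
    pub static ref GL_CTX: gl::Gl = create_opengl_context();
}

// Callable from anywhere without threading a &Gl parameter through.
fn clear_screen_global() {
    unsafe {
        GL_CTX.ClearColor(0.0, 0.3, 0.6, 1.0);
        GL_CTX.Clear(gl::COLOR_BUFFER_BIT);
    }
}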
Maybe also have a look at my game-gl library (graphics branch) at https://github.com/Kaiser1989/game-gl, or the example at https://github.com/Kaiser1989/rust-android-example (an Android and Windows OpenGL game-loop framework).

How do I extract a GL texture id from a BufferRef using the gstreamer crates?

I am working on a tool to use GL to render frames from a video onto a texture-mapped mesh. I already have a GL app working with a single image (PNG). Now I am trying to use gstreamer to decode the video.
I started with the appsink example.
I have gotten as far as piping the decoded video through glupload into an appsink. Now I need to convert the BufferRef I get from appsink.pull_sample().get_buffer() into a GL texture id (a u32) so I can pass it to GL functions like gl::BindTexture(gl::TEXTURE_2D, tex). I used set_caps() on the appsink to ensure the buffer has feature memory:GLMemory, so it better be a texture and not off-GPU.
How do I extract a GL texture id from a BufferRef using Rust's gstreamer and gstreamer-* crates?
Retrieving the texture from a GstGLMemory in C requires mapping the GstGLMemory itself with the special GST_MAP_GL flag. That specific interface for mapping an OpenGL texture does not currently have an analogue in Rust yet. There is some work in a related area in https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/581 to help improve the situation with GStreamer OpenGL usage in Rust.
If you only need read access to the texture, there is an extension trait, VideoFrameGLExt, on VideoFrame that can get you access to the OpenGL texture. There is a usage of VideoFrameGLExt in the glupload example in the gstreamer-rs repository, available at https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/blob/master/examples/src/bin/glupload.rs. The VideoFrameGLExt trait is currently implemented in https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/blob/master/gstreamer-gl/src/gl_video_frame.rs.
Something like the following should work for read-only access:
// buffer: gst::Buffer
// info: gst::VideoInfo
if let Ok(frame) = gst_video::VideoFrame::from_buffer_readable_gl(buffer, &info) {
    if let Some(texture) = frame.get_texture_id(0) {
        // use texture somehow
    }
}
If instead you also need to write to the texture, that is currently not exposed and manual bindings would need to be written.
The code I eventually got working was:

fn get_gl_memory(bref: &BufferRef, idx: u32) -> Option<*mut GstGLMemory> {
    unsafe {
        let n = gst_sys::gst_buffer_n_memory(bref.as_ptr() as *mut _);
        if idx >= n {
            return None;
        }
        let mem = gst_sys::gst_buffer_peek_memory(bref.as_ptr() as *mut _, idx);
        if 0 != gst_gl_sys::gst_is_gl_memory(mem) {
            Some(mem as *mut _)
        } else {
            None
        }
    }
}

// ...

let gl_mem = get_gl_memory(buffer, 0).unwrap();
let gl_mem = unsafe { &*gl_mem };
let tex_id = gl_mem.tex_id;
although the solution from ystreet00 works great if you have convenient access to the gst::VideoInfo.
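From there, the extracted id can be handed to GL like any other texture, e.g. (a sketch, assuming gl-crate-style global bindings as in the question):

unsafe { gl::BindTexture(gl::TEXTURE_2D, tex_id); }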

Why can't glium `Headless` render an image like a normal window context?

I am working on an off-screen render program using the glium crate. I followed the screenshot.rs example, and it worked well.
Then I made some changes:
The original code was:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let wb = glutin::WindowBuilder::new().with_visible(true);
    let cb = glutin::ContextBuilder::new();
    let display = glium::Display::new(wb, cb, &event_loop).unwrap();

    // building the vertex buffer, which contains all the vertices that we will draw
    // ...
I moved this code into a function:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display((128, 128), &event_loop);

    // building the vertex buffer, which contains all the vertices that we will draw
    // ...

pub fn build_display(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::Display {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let wb = glutin::WindowBuilder::new()
        .with_visibility(false)
        .with_dimensions(glutin::dpi::LogicalSize::from(size));
    let cb = glutin::ContextBuilder::new()
        .with_gl(version);
    glium::Display::new(wb, cb, &event_loop).unwrap()
}
After this modification, the program still worked well. So I went on and switched to a headless context:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display_headless((128, 128), &event_loop);

    // building the vertex buffer, which contains all the vertices that we will draw
    // ...

pub fn build_display_headless(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::HeadlessRenderer {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let ctx = glutin::ContextBuilder::new()
        .with_gl(version)
        .build_headless(&event_loop, glutin::dpi::PhysicalSize::from(size))
        .expect("1");
    //let ctx = unsafe { ctx.make_current().expect("3") };
    glium::HeadlessRenderer::new(ctx).expect("4")
}
But this time the program did not work. There was no panic while running, but the output image was entirely black, and its size was not 128x128 but 800x600.
I have tried removing libEGL.dll so that, according to the glutin docs, .build_headless falls back to building a hidden window, just as my build_display function does. However, this failed too. What could cause this?

Use a Rust macro to generate and compile a shader

My approach originates from OpenGL shader programming, but the problem is more abstract. I will write some pseudocode to clarify what I mean.
In OpenGL, rendering is done with so-called "shaders". A shader is a computation kernel that is applied to each element of a data set, with the advantage that the computation runs on the GPU and can therefore exploit the GPU's concurrency to calculate as much as possible at the same time.
The problem is that shaders are only present as text at compile time, and each shader needs to be compiled at runtime by the GPU driver. This means that at the start of each program, an init function needs to compile every shader source into a program before that shader can be called. Here is an example; keep in mind it is simplified pseudocode:
let shader_src_A = r#"
attribute float a;
attribute float b;
out float b;
void main() {
b = a * b;
}
"#;
let shader_src_B = r#"
attribute float a;
attribute float b;
out float b;
void main() {
b = a + b;
}
"#;
let mut program_A : ShaderProgram;
let mut program_B : ShaderProgram;
fn init() {
initGL();
program_A = compile_and_link(shader_src_A);
program_B = compile_and_link(shader_src_B);
}
fn render() {
let data1 = vec![1,2,3,4];
let data2 = vec![5,6,7,8];
// move data to the gpu
let gpu_data_1 = move_to_gpu(data1);
let gpu_data_2 = move_to_gpu(data2);
let gpu_data_3 : GpuData<float>;
let gpu_data_4 : GpuData<float>;
program_A(
(gpu_data_1, gpu_data_2) // input
(gpu_data_3,) // output
);
program_B(
(gpu_data_1, gpu_data_2) // input
(gpu_data_4,) // output
);
let data_3 = move_to_cpu(gpu_data_3);
let data_4 = move_to_cpu(gpu_data_4);
println!("data_3 {:?} data_4 {:?}", data_3, data_4);
// data_3 [5, 12, 21, 32] data_4 [6, 8, 10, 12]
}
The goal for me is to be able to write something like this:
fn init() {
    initGL();
    mystery_macro!();
}

fn render() {
    let data1 = vec![1, 2, 3, 4];
    let data2 = vec![5, 6, 7, 8];

    // move data to the gpu
    let gpu_data_1 = move_to_gpu(data1);
    let gpu_data_2 = move_to_gpu(data2);
    let gpu_data_3: GpuData<float>;
    let gpu_data_4: GpuData<float>;

    shade!(
        (gpu_data_1, gpu_data_2), // input tuple
        (gpu_data_3,),            // output tuple
        "gpu_data_3 = gpu_data_1 * gpu_data_2;" // the shader source; the rest should be generated by the macro
    );
    shade!(
        (gpu_data_1, gpu_data_2), // input tuple
        (gpu_data_4,),            // output tuple
        "gpu_data_4 = gpu_data_1 + gpu_data_2;" // the shader source; the rest should be generated by the macro
    );

    let data_3 = move_to_cpu(gpu_data_3);
    let data_4 = move_to_cpu(gpu_data_4);
    println!("data_3 {:?} data_4 {:?}", data_3, data_4);
}
The key difference is that there is no single place where all my shaders are written: I write each shader where I call it, and I do not write the parts of the shader that can be inferred from the other arguments. Generating the missing parts of the shader should be straightforward; the problem is compiling them. A renderer that recompiles each shader on every call is far too slow to be useful. The idea is that the macro should generate that common place with all the shader sources and programs, so that the init function can compile and link all programs at program start.
Despite the title, I am also OK with a solution that solves my problem differently, but I would prefer one where all programs can be compiled in the init function.
EDIT:
I could also imagine that shade is not a macro but a placeholder no-op function; a macro would then operate on the shade function and, by traversing the AST, find all calls to shade and generate everything that needs to be done in the init function.
From The Rust Programming Language section on macros (emphasis mine):
Macros allow us to abstract at a syntactic level. A macro invocation is shorthand for an "expanded" syntactic form. This expansion happens early in compilation, before any static checking. As a result, macros can capture many patterns of code reuse that Rust’s core abstractions cannot.
Said another way, macros are only useful when you already have some code that has appreciable boilerplate. They cannot do something beyond what the code itself does.
Additionally, Rust macros work at a level above C macros. Rust macros are not presented with the raw text, but instead have some pieces of the AST of the program.
Let's start with this simplified version:
struct Shader(usize);

impl Shader {
    fn compile(source: &str) -> Shader {
        println!("Compiling a shader");
        Shader(source.len())
    }

    fn run(&self) {
        println!("Running a shader {}", self.0)
    }
}

fn main() {
    for _ in 0..10 {
        inner_loop();
    }
}

fn inner_loop() {
    let shader_1_src = r#"add 1 + 1"#;
    let shader_1 = Shader::compile(shader_1_src);

    let shader_2_src = r#"add 42 + 53"#;
    let shader_2 = Shader::compile(shader_2_src);

    shader_1.run();
    shader_2.run();
}
The biggest problem here is the repeated compilation, so we can lazily compile it once using the lazy_static crate:
#[macro_use]
extern crate lazy_static;

// Previous code...

fn inner_loop() {
    const SHADER_1_SRC: &'static str = r#"add 1 + 1"#;
    lazy_static! {
        static ref SHADER_1: Shader = Shader::compile(SHADER_1_SRC);
    }

    const SHADER_2_SRC: &'static str = r#"add 42 + 53"#;
    lazy_static! {
        static ref SHADER_2: Shader = Shader::compile(SHADER_2_SRC);
    }

    SHADER_1.run();
    SHADER_2.run();
}
You can then go one step further and make another macro around that:
// Previous code...

macro_rules! shader {
    ($src_name:ident, $name:ident, $l:expr, $r:expr) => {
        const $src_name: &'static str = concat!("add ", $l, " + ", $r);
        lazy_static! {
            static ref $name: Shader = Shader::compile($src_name);
        }
    }
}

fn inner_loop() {
    shader!(S1, SHADER_1, "1", "2");
    shader!(S2, SHADER_2, "42", "53");

    SHADER_1.run();
    SHADER_2.run();
}
Note that we have to provide a name for the inner source constant because there's currently no way of generating arbitrary identifiers in macros.
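As an aside (not part of the original answer): the paste crate can nowadays generate such identifiers. A sketch, assuming paste is added as a dependency:

macro_rules! shader {
    ($name:ident, $l:expr, $r:expr) => {
        paste::paste! {
            // [<$name _SRC>] pastes a new identifier, e.g. SHADER_1_SRC.
            const [<$name _SRC>]: &'static str = concat!("add ", $l, " + ", $r);
            lazy_static! {
                static ref $name: Shader = Shader::compile([<$name _SRC>]);
            }
        }
    }
}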
I'm no game programmer, but this type of code would make me wary: at potentially any point, you might trigger a shader compilation, slowing down your program. I agree that pre-compiling all your shaders at program startup makes the most sense (or at Rust compile time, if possible!), but it simply doesn't fit your desired structure. If you can write plain Rust code that does what you want, then you can make a macro that makes it prettier. I just don't believe it's possible to write Rust code that does what you want.
There is a possibility that a syntax extension may be able to do what you want, but I don't have enough experience with them yet to soundly rule it in or out.
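As an aside on the "at Rust compile time" remark: the shader source, at least, can be embedded at compile time with include_str!, even though the driver still compiles it at runtime. A sketch, where shader.vert is a hypothetical file next to this source file:

// Embedded into the binary at build time; a missing file is a compile error.
const VERTEX_SRC: &'static str = include_str!("shader.vert");

fn init() {
    // Compile the embedded source once at startup, as discussed above.
    let _shader = Shader::compile(VERTEX_SRC);
}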

LLVM API: correct way to create/dispose

I'm attempting to implement a simple JIT compiler using the LLVM C API. So far, I have no problems generating IR code and executing it; that is, until I start disposing objects and recreating them.
I would like to clean up the JIT'ted resources the moment they're no longer used by the engine. What I'm attempting is something like this:
while (true)
{
    // Initialize module & builder
    InitializeCore(GetGlobalPassRegistry());
    module = ModuleCreateWithName(some_unique_name);
    builder = CreateBuilder();

    // Initialize target & execution engine
    InitializeNativeTarget();
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...] --** Will give a serious error the second iteration

    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
    DisposeBuilder(builder);
    // DisposeModule(module); //--> Commented out: deleted by execution engine

    Shutdown();
}
However, this doesn't seem to work correctly: on the second iteration of the loop I get a pretty bad error...
So to summarize: what's the correct way to destroy and re-create the LLVM API objects?
Posting this as an answer because the code is too long for a comment. If possible and there are no other constraints, try to use LLVM like this. I am pretty sure the Shutdown() inside the loop is the culprit here, and I don't think it would hurt to keep the Builder outside, too. This reflects the way I use LLVM in my JIT:
InitializeCore(GetGlobalPassRegistry());
InitializeNativeTarget();
builder = CreateBuilder();

while (true)
{
    // Initialize module
    module = ModuleCreateWithName(some_unique_name);

    // Initialize target & execution engine
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...]

    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
}

DisposeBuilder(builder);
Shutdown();
For reference, here is a complete lifecycle using the LLVM C API:

/* program init */
LLVMInitializeNativeTarget();
LLVMInitializeNativeAsmPrinter();
LLVMInitializeNativeAsmParser();
LLVMLinkInMCJIT();

ctx->context = LLVMContextCreate();
ctx->builder = LLVMCreateBuilderInContext(ctx->context);
LLVMParseBitcodeInContext2(ctx->context, module_template_buf, &ctx->module); /* create module */

/* IR code creation */
{
    function = LLVMAddFunction(ctx->module, "my_func");
    LLVMAppendBasicBlockInContext(ctx->context, ...);
    LLVMBuild...
    ...
}

/* optional optimization */
{
    LLVMPassManagerBuilderRef pass_builder = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(pass_builder, 3);
    LLVMPassManagerBuilderSetSizeLevel(pass_builder, 0);
    LLVMPassManagerBuilderUseInlinerWithThreshold(pass_builder, 1000);

    LLVMPassManagerRef function_passes = LLVMCreateFunctionPassManagerForModule(ctx->module);
    LLVMPassManagerRef module_passes = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateFunctionPassManager(pass_builder, function_passes);
    LLVMPassManagerBuilderPopulateModulePassManager(pass_builder, module_passes);
    LLVMPassManagerBuilderDispose(pass_builder);

    LLVMInitializeFunctionPassManager(function_passes);
    for (LLVMValueRef value = LLVMGetFirstFunction(ctx->module); value;
         value = LLVMGetNextFunction(value))
    {
        LLVMRunFunctionPassManager(function_passes, value);
    }
    LLVMFinalizeFunctionPassManager(function_passes);

    LLVMRunPassManager(module_passes, ctx->module);
    LLVMDisposePassManager(function_passes);
    LLVMDisposePassManager(module_passes);
}

/* optional, for debugging */
{
    LLVMVerifyModule(ctx->module, LLVMAbortProcessAction, &error);
    LLVMPrintModule...
}

if (LLVMCreateJITCompilerForModule(&ctx->engine, ctx->module, 0, &error) != 0)
    /* handle the error */;
my_func = (exec_func_t)(uintptr_t)LLVMGetFunctionAddress(ctx->engine, "my_func");

LLVMRemoveModule(ctx->engine, ctx->module, &ctx->module, &error);
LLVMDisposeModule(ctx->module);
LLVMDisposeBuilder(ctx->builder);

/* call the JIT'ted function as often as needed */
my_func(...);

LLVMDisposeExecutionEngine(ctx->engine);
LLVMContextDispose(ctx->context);

/* program finit */
LLVMShutdown();
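Since the rest of this collection is Rust: the inkwell crate wraps these C API calls and ties disposal to Drop, which makes the create/dispose ordering above hard to get wrong. A minimal sketch of the same "initialize once, fresh module per iteration" pattern (the exact inkwell API surface is an assumption; check the crate docs):

use inkwell::context::Context;
use inkwell::OptimizationLevel;

fn main() {
    // One-time setup, analogous to InitializeNativeTarget + CreateBuilder.
    let context = Context::create();
    let builder = context.create_builder();

    for i in 0..3 {
        // Fresh module and engine per iteration.
        let module = context.create_module(&format!("jit_{}", i));
        let engine = module
            .create_jit_execution_engine(OptimizationLevel::Default)
            .expect("failed to create JIT engine");

        // ... build IR with `builder`, look up and call functions via `engine` ...

        // `engine` (and the module it executes) is dropped here, per iteration.
        drop(engine);
    }

    // `builder` and `context` are dropped once, at the end of the program.
}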

How do you use a return value of true in Dart?

I get the following error when I try to execute the code below.
Uncaught TypeError: Object true has no method 'dartObjectLocalStorage$getter'
I started a Web Application in Dart Editor Version 0.1.0.201201150611 Build 3331. Here is the complete code for the lone Dart file. The statement that results in the error is commented below.
#import('dart:html');

class Test3D {
  CanvasElement theCanvas;
  WebGLRenderingContext gl;

  static String vertexShaderSrc = """
    attribute vec3 aVertexPosition;
    void main(void) {
      gl_Position = vec4(aVertexPosition, 1.0);
    }
  """;

  Test3D() {
  }

  void run() {
    write("Hello World!");

    // Set up canvas
    theCanvas = new Element.html("<canvas></canvas>");
    theCanvas.width = 100;
    theCanvas.height = 100;
    document.body.nodes.add(theCanvas);

    // Set up context
    gl = theCanvas.getContext("experimental-webgl");
    gl.clearColor(0.5, 0.5, 0.5, 1.0);
    gl.clear(WebGLRenderingContext.COLOR_BUFFER_BIT);

    WebGLShader vertexShader = gl.createShader(WebGLRenderingContext.VERTEX_SHADER);
    gl.shaderSource(vertexShader, vertexShaderSrc);
    gl.compileShader(vertexShader);

    // Adding this line results in the error:
    // Uncaught TypeError: Object true has no method 'dartObjectLocalStorage$getter'
    var wasSuccessful = gl.getShaderParameter(vertexShader, WebGLRenderingContext.COMPILE_STATUS);
  }

  void write(String message) {
    // the HTML library defines a global "document" variable
    document.query('#status').innerHTML = message;
  }
}

void main() {
  new Test3D().run();
}
I'm really keen on Dart and would appreciate any help you could give me on this.
Here is the console output for the error:
Uncaught TypeError: Object true has no method 'dartObjectLocalStorage$getter'
htmlimpl0a8e4b$LevelDom$Dart.wrapObject$member
htmlimpl0a8e4b$WebGLRenderingContextWrappingImplementation$Dart.getShaderParameter$member
htmlimpl0a8e4b$WebGLRenderingContextWrappingImplementation$Dart.getShaderParameter$named
unnamedb54266$Test3D$Dart.run$member
unnamedb54266$Test3D$Dart.run$named
unnamedb54266$main$member
RunEntry.isolate$current
isolate$Isolate.run
isolate$IsolateEvent.process
isolate$doOneEventLoopIteration
next
isolate$doRunEventLoop
isolate$runEventLoop
RunEntry
(anonymous function)
The error indicates that somewhere in your code the field 'dartObjectLocalStorage' is accessed on the boolean true. The given code snippet does not contain this identifier and is thus probably not responsible for the error.
It could be that the error reporting gives the wrong line number (potentially even the wrong file).
To debug this:
1. Try to find a reference to 'dartObjectLocalStorage' in your code.
2. Try to find a reference to 'dartObjectLocalStorage$getter' in the generated code.
3. Run on the VM, or compile with a different compiler (frogc vs. dartc).
Good luck.
According to the documentation for WebGLRenderingContext, the return type is Object. If you know the object is a bool, you can just use it as a bool or dynamic:

var v = gl.getShaderParameter(vertexShader, WebGLRenderingContext.COMPILE_STATUS);

or more explicitly:

var v = gl.getShaderParameter(vertexShader, WebGLRenderingContext.COMPILE_STATUS).dynamic;

and then use it in your conditional statement. Also try compiling with frogc rather than the built-in JavaScript compiler; it usually results in better errors.