A few other guys and I are taking a crack at building a simple side-scroller type game. However, I cannot get hold of them to help answer my question, so I put it to you: the following code leaves me with a SIGSEGV error in the annotated place. If anyone can tell me why, I would really appreciate it. If you need any more info, I will be watching this closely.
Main.cpp
Vector2 dudeDim(60,60);
Vector2 dudePos(300, 300);
Entity *test = new Entity("img/images.jpg", dudeDim, dudePos, false);
leads to:
Entity.cpp
Entity::Entity(std::string filename, Vector2 size, Vector2 position, bool passable):
mTexture(filename)
{
mTexture.load(false);
mDimension2D = size;
mPosition2D = position;
mPassable = passable;
}
leads to:
Textures.cpp
void Texture::load(bool generateMipmaps)
{
FREE_IMAGE_FORMAT imgFormat = FIF_UNKNOWN;
FIBITMAP *dib(0);
imgFormat = FreeImage_GetFileType(mFilename.c_str(), 0);
//std::cout << "File format: " << imgFormat << std::endl;
if (FreeImage_FIFSupportsReading(imgFormat)) // Check if the plugin has reading capabilities and load the file
dib = FreeImage_Load(imgFormat, mFilename.c_str());
if (!dib)
std::cout << "Error loading texture files!" << std::endl;
BYTE* bDataPointer = FreeImage_GetBits(dib); // Retrieve the image data
mWidth = FreeImage_GetWidth(dib); // Get the image width and height
mHeight = FreeImage_GetHeight(dib);
mBitsPerPixel = FreeImage_GetBPP(dib);
if (!bDataPointer || !mWidth || !mHeight)
std::cout << "Error loading texture files!" << std::endl;
// Generate and bind ID for this texture
vvvvvvvvvv!!!ERROR HERE!!!vvvvvvvvvvv
glGenTextures(1, &mId);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
glBindTexture(GL_TEXTURE_2D, mId);
int format = mBitsPerPixel == 24 ? GL_BGR_EXT : mBitsPerPixel == 8 ? GL_LUMINANCE : 0;
int iInternalFormat = mBitsPerPixel == 24 ? GL_RGB : GL_DEPTH_COMPONENT;
if(generateMipmaps)
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, mWidth, mHeight, 0, format, GL_UNSIGNED_BYTE, bDataPointer);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); // Linear Filtering
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // Linear Filtering
//std::cout << "texture generated " << mId << std::endl;
FreeImage_Unload(dib);
}
After reading Peter's suggestion, I have changed my main.cpp file to:
#include <iostream>
#include <vector>
#include "Game.h"
using namespace std;
int main(int argc, char** argv)
{
Game theGame;
/* Initialize game control objects and resources */
if (theGame.onInit() != false)
{
return theGame.onExecute();
}
else
{
return -1;
}
}
and it would seem the SIGSEGV error is gone, and I'm now left with something not initializing. So thank you, Peter, you were correct; now I'm off to solve this issue.
OK, so this is obviously only a small amount of the code, but to save time and a bit of sanity, all the code is available at:
GitHub Repo
So after looking at your code, I can say that the problem is probably that you have not initialized your OpenGL context before executing that code.
You need to call your Game::onInit(), which also calls RenderEngine::initGraphics(), before making any calls to OpenGL, which you currently don't do. Your current call chain is main() -> Game ctor (which calls the rendering engine ctor, but that ctor doesn't init SDL and OpenGL) -> Entity ctor -> load texture.
For details look at the OpenGL Wiki FAQ
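For illustration, here is a minimal sketch of the order you want, assuming Game::onInit() is where RenderEngine::initGraphics() creates the window and the GL context (where exactly you construct your entities afterwards is up to you; this is only one way to arrange it):

int main(int argc, char** argv)
{
    Game theGame;

    // Create the window and the OpenGL context first.
    if (!theGame.onInit())
        return -1;

    // Only now is it safe to construct anything that calls into OpenGL,
    // e.g. an Entity whose constructor uploads a texture.
    Vector2 dudeDim(60, 60);
    Vector2 dudePos(300, 300);
    Entity* test = new Entity("img/images.jpg", dudeDim, dudePos, false);

    return theGame.onExecute();
}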
In my game engine, I have a texture-loading API that wraps low-level libraries like OpenGL, DirectX, etc. This API uses Magick++ because I found it to be a convenient cross-platform solution that also lets me create procedural textures fairly easily.
I'm now adding a text rendering system using FreeType, where I want to use this texture API to dynamically generate a texture atlas for any given font, with all the glyphs stored horizontally adjacent to one another.
I have been able to get this to work in the past by buffering the bitmaps directly into OpenGL. But now I want to accomplish this in a platform-independent way, using this API.
I've looked around for a few examples, but I can't find anything quite like what I'm after, so if there are any Magick++ experts around, I'd really appreciate some pointers.
So in simple terms: I've got a FreeType bitmap, and I want to be able to copy its pixel buffer to a specific offset inside a Magick::Image.
This code might help to clarify:
auto texture = e6::textures->create(e6::texture::specification{}, [name, totalWidth, maxHeight](){
// Initialises Freetype
FT_Face face;
FT_Library ft;
if (FT_Init_FreeType(&ft)) {
std::cout << "ERROR::FREETYPE: Could not init FreeType Library" << std::endl;
}
if (int error = FT_New_Face(ft, path(name.c_str()).c_str(), 0, &face)) {
std::cout << "Failed to initialise fonts: " << name << std::endl;
throw std::exception();
}
// Sets the size of the font
FT_Set_Pixel_Sizes(face, 0, 100);
unsigned int cursor = 0; // Keeps track of the horizontal offset.
// Preallocate an image buffer
// totalWidth and maxHeight is the size of the entire atlas
Magick::Image image(Magick::Geometry(totalWidth, maxHeight), "BLACK");
image.type(Magick::GrayscaleType);
image.magick("BMP");
image.depth(8);
image.modifyImage();
Magick::Pixels view(image);
// Loops through a subset of the ASCII codes
for (uint8_t c = 32; c < 128; c++) {
if (FT_Load_Char(face, c, FT_LOAD_RENDER)) {
std::cout << "Failed to load glyph: " << c << std::endl;
continue;
}
// Just for clarification...
unsigned int width = face->glyph->bitmap.width;
unsigned int height = face->glyph->bitmap.rows;
unsigned char* image_data = face->glyph->bitmap.buffer;
// This is the problem part.
// How can I copy the image_data into `image` at the cursor position?
cursor += width; // Advance the cursor
}
image.write(std::string(TEXTURES) + "font-test.bmp"); // Write to filesystem
// Clean up freetype
FT_Done_Face(face);
FT_Done_FreeType(ft);
return image;
}, "font-" + name);
I tried using a pixel cache which the documentation demonstrates:
Magick::Quantum *pixels = view.get(cursor, 0, width, height);
*pixels = *image_data;
view.sync();
But this leaves me with a completely black image, I think because the image_data goes out of scope.
I was hoping there'd be a way to modify the image data directly but after a lot of trial and error, I ended up just creating an image for each glyph and compositing them together:
...
Magick::Image glyph (Magick::Geometry(), "BLACK");
glyph.type(MagickCore::GrayscaleType);
glyph.magick("BMP");
glyph.depth(8);
glyph.read(width, height, "R", Magick::StorageType::CharPixel, image_data);
image.composite(glyph, cursor, 0);
cursor += width;
At the very least, I hope this helps to prevent someone else going down the same rabbit hole I did.
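Edit: for anyone who still wants the direct route, I believe the single assignment above only writes one quantum; a copy into the pixel cache would have to cover the whole glyph region, one value per pixel. A rough, untested sketch, assuming ImageMagick 7 (Quantum-based pixel cache), FT_PIXEL_MODE_GRAY glyphs with a positive pitch, and that MagickCore::GetPixelChannels and MagickCore::ScaleCharToQuantum are available in your build:

// Untested sketch: copy one glyph's bitmap into the atlas at (cursor, 0),
// reusing the 'view' declared earlier for the atlas image.
Magick::Quantum* pixels = view.get(cursor, 0, width, height);
const size_t channels = MagickCore::GetPixelChannels(image.constImage());
const int pitch = face->glyph->bitmap.pitch; // bytes per FreeType row
for (unsigned int row = 0; row < height; ++row) {
    for (unsigned int col = 0; col < width; ++col) {
        // Write the first (gray) channel of each pixel in the cached region.
        pixels[(row * width + col) * channels] =
            MagickCore::ScaleCharToQuantum(image_data[row * pitch + col]);
    }
}
view.sync();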
Can anyone tell me why my code works on my local machine but not on a remote machine I'm trying to push to?
local driver version: NVIDIA-SMI 460.91.03 Driver Version: 460.91.03
remote driver version: NVIDIA-SMI 435.21 Driver Version: 435.21
When trying to run on the remote machine, I keep getting:
QGLFramebufferObject: Framebuffer incomplete attachment.
QGLFramebufferObject: Framebuffer incomplete attachment.
Framebuffer is not valid, may be out of memoryor context is not valid
Could not bind framebuffer
Image passed to GenImage is null!
Framebuffer code:
/*--------------------------------------------------------------------------*/
/*--------------------------------------------------------------------------*/
void GlowFramebuffer::Create( int width , int height )
{
QGLFramebufferObjectFormat format;
if( m_format == GLOW_R )
{
format.setInternalTextureFormat(GL_RED );
m_framebuffer =
QSharedPointer<QGLFramebufferObject>(
new QGLFramebufferObject(width, height, format) );
} else if ( m_attachment == GLOW_DEPTH_STENCIL ) {
format.setAttachment( QGLFramebufferObject::CombinedDepthStencil );
m_framebuffer =
QSharedPointer<QGLFramebufferObject>(
new QGLFramebufferObject(width, height, format) );
}
else // GLOW_RGBA
{
m_framebuffer =
QSharedPointer<QGLFramebufferObject>(
new QGLFramebufferObject(width, height) );
}
SetClearColor( m_clear_color );
}
/*--------------------------------------------------------------------------*/
/*--------------------------------------------------------------------------*/
void GlowFramebuffer::Create( const QSize& size )
{
Create( size.width() , size.height() );
if( !m_framebuffer->isValid() )
{
qCritical() << "Framebuffer is not valid, may be out of memory"
"or context is not valid";
}
}
/*--------------------------------------------------------------------------*/
/*--------------------------------------------------------------------------*/
int GlowFramebuffer::CopyMultiTexture( GlowFilter filter , GlowFormat format )
{
GLint width = m_framebuffer->width();
GLint height = m_framebuffer->height();
GLuint FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
GLenum glfilter = (filter == GLOW_NEAREST) ? GL_NEAREST : GL_LINEAR;
GLenum glformat = (format == GLOW_R ) ? GL_R : GL_RGBA;
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
// "Bind" the newly created texture : all future texture functions will modify this texture
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// Give an empty image to OpenGL ( the last "0" )
glTexImage2D(GL_TEXTURE_2D, 0,glformat, width, height, 0,glformat, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, glfilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, glfilter);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTexture, 0);
// Set the list of draw buffers.
GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if( status != GL_FRAMEBUFFER_COMPLETE )
{
qCritical() << "Error with framebuffer!";
}
GLuint handle = m_framebuffer->handle();
GLClear();
glBindFramebuffer( GL_DRAW_FRAMEBUFFER , FramebufferName );
glBindFramebuffer( GL_READ_FRAMEBUFFER , handle );
glDrawBuffer( GL_BACK );
glBlitFramebuffer( 0 , 0 , width , height , 0 , 0 , width , height , GL_COLOR_BUFFER_BIT , GL_NEAREST );
glBindTexture( GL_TEXTURE_2D , 0 );
glBindFramebuffer( GL_FRAMEBUFFER , handle );
glDeleteFramebuffers( 1 , &FramebufferName );
return renderedTexture;
}
I know it's likely because FBOs are specific to each machine and driver. In order to ensure compatibility, you need to check the system to make sure the format with which you created your framebuffer is valid.
I think it's failing on the remote machine at this line:
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if( status != GL_FRAMEBUFFER_COMPLETE )
{
qCritical() << "Error with framebuffer!";
}
I think the glformat variable is the format that needs to be adjusted in this call:
glTexImage2D(GL_TEXTURE_2D, 0,glformat, width, height, 0,glformat, GL_UNSIGNED_BYTE, 0);
How do I have it pick the appropriate format so that the FBO works on the remote machine?
Code for the image from the framebuffer we're trying to attach:
bool Product::SetImgInfo(GeoRadarProd &prod, const JsonConfig &config)
{
DataInput input(FLAGS_input.c_str());
QString file_name_data = FLAGS_file_name.c_str();
int width = config.GetInt("width");
int height = config.GetInt("height");
if(!input.isValid())
{
LOG(INFO) << "Input is not valid";
return false;
}
QByteArray data = input.getByteArray();
VLOG(3) << "width from config: "<< width;
VLOG(3) << "height from config: "<< height;
VLOG(3) << "data : "<< data.size();
QImage image_data;
image_data.fromData(data, "PNG");
VLOG(3) << "what is file_name_data ???: " << file_name_data.toStdString();
VLOG(3) << "is image_data load???: " << image_data.load(file_name_data, "PNG");
VLOG(3) << "is image_data null???: " << image_data.isNull();
VLOG(3) << "image data width: "<< image_data.width();
VLOG(3) << "image data height: "<< image_data.height();
VLOG(3)<< "Original Format was tif";
VLOG(3)<<"Data Img H: "<< image_data.height()<<" W: "<<image_data.width();
QImage new_image(image_data.width(), image_data.height(), QImage::Format_RGBA8888);
// Format_ARGB32 , Format_RGBA8888_Premultiplied
VLOG(3)<<"New Img H: "<<new_image.height()<<" W: "<<new_image.width();
VLOG(3)<<"Setting img data";
for(int idx = 0; idx < image_data.width(); idx++)
{
for(int idy = 0; idy < image_data.height(); idy++)
{
int index_value = image_data.pixelIndex(idx, idy);
uint color_value;
if(index_value == 0 )
{
color_value = qRgba((int(0)), 0, 0, 0);
}
else
{
//! +1*20 to have a wider spread in the palette
//! and since the values start from 0
// index_value = index_value + 1;
color_value = qRgb(((int)index_value ), 0, 0);
}
new_image.setPixel(idx, idy, color_value);
}
}
const QImage& img = QGLWidget::convertToGLFormat(new_image);
prod.setQImageData(img);
return true;
}
The image format you use is never the cause of this completeness error:
Framebuffer incomplete attachment.
This error means that one of the attachments violates OpenGL's rules on attachment completeness. This is not a problem of the wrong format (per se); it is a code bug.
If you see this on one implementation, you should see it on all implementations running the same FBO setup code. Attachment completeness rules are not allowed to change based on the implementation (with an exception). So if you're seeing this error appear or not appear based on the implementation, one of the following is happening:
There is a bug in one or more of the implementations. Your code may or may not be following the attachment completeness rules, but implementations are supposed to implement the same rules.
You are not in fact using the same FBO setup logic on the different implementations. Since you're creating a QGLFramebufferObject rather than a proper FBO, a lot of the FBO setup code is out of your hands. So maybe stop using Qt for this.
You are running on implementations that differ significantly in which version of OpenGL they implement. See, while the attachment rules don't allow implementation variance, this is only true for a specific version of OpenGL itself. New versions of the API can change the attachment rules, but backwards compatibility guarantees ensure that code that worked on older versions continues to work on newer ones.
But that only works in one direction: code written against newer OpenGL versions may not be compatible with older implementation versions. Not unless you specifically check the standards and write code that ought to work on all versions you intend to run on.
Again, most of your FBO logic is hidden behind Qt's implementation, so there's no simple way to know what's going on. It would be good to ditch Qt and do it yourself to make sure.
Now, the above assumes that when Qt tells you "QGLFramebufferObject: Framebuffer incomplete attachment", it really means the GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT completeness status. It's possible Qt is lying to you and your problem really is the format.
When making textures meant to be used by FBOs, you should stick to the image formats that are required to work for FBO-attached images. GL_RED is not one of them.
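For example, if what you actually want is a single-channel color attachment, here is a sketch of allocating the texture with a sized format that desktop GL 3.0+ requires to be color-renderable (GL_R8), instead of unsized GL_RED or the legacy GL_R enum:

glBindTexture(GL_TEXTURE_2D, renderedTexture);
// GL_R8 is on the list of internal formats that core OpenGL 3.0+ requires to be
// color-renderable; unsized GL_RED carries no such guarantee.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, renderedTexture, 0);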
I've been facing this issue for months. Every font I load with FreeType works fine; I can prove that to myself by writing the glyphs to BMP files using stbi_write_bmp from the STB library. I can see all of the hundreds of glyph bitmaps written correctly in my folder.
OpenGL, on the other hand, is unable to load the textures from specific fonts, and I see no explainable reason why some load and others don't. I have done everything I can think of to solve it, but it keeps happening. The textures simply don't load at all. I'm doing exactly the same procedure for every font, so I should be getting the same results.
Examples (with a maximum alpha of 0.1 for debugging otherwise invisible squares like the last ones):
Cantarell-Regular.ttf
CPMono_v07-Light.otf
SourceSansPro-Regular.ttf (bugged - no texture)
RobotoCondensed-Regular.ttf (bugged - no texture)
What do the loaded fonts have in common? It's not the extension, at least, since I can render Cantarell's TTF files. All of them, even the ones whose textures don't load, are correctly written as BMPs if I send them to the STB function. So why are the last ones not loaded by GL? This makes no sense to me... there's no consistency except that the ones that don't load never load, and the ones that load always load. But I see nothing that could have different effects on different fonts, and this is confusing me a lot. I am also capturing the error code returned by every FT function, and it's all clear on their side.
This is my renderer's constructor, where everything relevant to the text rendering and texture uploading is set up:
if (error = FT_Init_FreeType(&ftlib))
{
std::cout << "Freetype failed to initialize (exit " << error << ')'
<< std::endl;
}
else
{
std::cout << "Freetype initialized";
}
if (error = FT_New_Face(ftlib, fontFile, 0, &face))
{
std::cout << "but could not load font " << face->family_name
<< " (exit " << error << ')' << std::endl;
}
else
{
std::cout << " and loaded font " << face->family_name << std::endl;
FT_Set_Pixel_Sizes(face, 0, height);
/// Determine total length of texture atlas
FT_UInt cindex = 0;
for (
FT_ULong c = FT_Get_First_Char(face, &cindex);
cindex != 0;
c = FT_Get_Next_Char(face, c, &cindex)
) {
if (error = FT_Load_Glyph(face, cindex, FT_LOAD_BITMAP_METRICS_ONLY))
{
std::cout << "Freetype: Could not load glyph "
"(exit " << error << "[1])"
<< std::endl;
continue;
}
else
{
atlasLength += face->glyph->bitmap.width;
if (totalRows < face->glyph->bitmap.rows) {
totalRows = face->glyph->bitmap.rows;
};
}
}
bindShaders();
/// Setup VAO and texture object.
newVertexArray();
bindVertexArray(vertexArray[0]);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &atlas);
glBindTexture(GL_TEXTURE_2D, atlas);
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RED,
atlasLength, totalRows,
0, GL_RED, GL_UNSIGNED_BYTE, 0
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
GLint offset = 0;
cindex = 0;
/// Start indexing textures
for (
FT_ULong c = FT_Get_First_Char(face, &cindex);
cindex != 0;
c = FT_Get_Next_Char(face, c, &cindex))
{
if (error = FT_Load_Glyph(face, cindex, FT_LOAD_RENDER | FT_LOAD_FORCE_AUTOHINT))
{
std::cout << "Freetype could not load glyph "
"(exit " << error << "[2])" << "\n"
;
continue;
}
else
{
FT_GlyphSlot glyph = face->glyph;
glTexSubImage2D(
GL_TEXTURE_2D, 0, offset, 0,
glyph->bitmap.width, glyph->bitmap.rows,
GL_RED, GL_UNSIGNED_BYTE, glyph->bitmap.buffer
);
Glyph character =
{
static_cast<Float>(offset),
static_cast<Float>(offset) + glyph->bitmap.width,
glm::u16vec2(glyph->bitmap.width, glyph->bitmap.rows),
glm::u16vec2(face->glyph->bitmap_left, face->glyph->bitmap_top),
glm::u16vec2(face->glyph->advance.x, face->glyph->advance.y),
};
characters.insert(std::make_pair(c, character));
}
offset += face->glyph->bitmap.width;
}
}
glBindVertexArray(0);
bindShaders(0);
Am I doing something wrong? As far as I can see there's nothing that could go wrong here.
Edit:
I have found that the font height is playing a part here. For some reason, if I send a value of 12px to FT_Set_Pixel_Sizes, all fonts open. Increasing the height makes Source Sans eventually stop working at some point (I haven't marked exactly at what size it stops working, but it is somewhere around 20px), and at 42px Roboto stops working too. I don't know why, though, since the bitmaps are OK in all these cases. At least this is a bit less mystifying now.
I don't know if it's going to help, but there is one problem with your code: you're ignoring the FT_Bitmap.pitch field. The RAM layout of the bitmap depends substantially on that value, yet there's no way to supply it to OpenGL directly. There's the GL_UNPACK_ROW_LENGTH setting, but that value is in pixels, not bytes.
Try copying these bitmaps to another buffer whose stride is equal to glyph width * sizeof(pixel), and then pass that temporary buffer to glTexSubImage2D. If you're on Windows, call MFCopyImage; otherwise do it manually by calling memcpy() in a loop.
Don't forget the pitch can be negative, which means the FT bitmap is stored with bottom-to-top row order.
Also, check that the FT_Bitmap.pixel_mode field is FT_PIXEL_MODE_GRAY; that's the only value compatible with your GL texture.
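A rough sketch of that repacking inside your glyph loop, assuming FT_PIXEL_MODE_GRAY (one byte per pixel) and the common case of a positive pitch (it needs <vector> and <cstring>):

FT_Bitmap& bmp = face->glyph->bitmap;

// Repack the glyph into a tightly packed buffer: FreeType rows are 'pitch'
// bytes apart (possibly padded), while GL expects rows of exactly 'width'
// bytes here (you already set GL_UNPACK_ALIGNMENT to 1).
std::vector<unsigned char> packed(static_cast<size_t>(bmp.width) * bmp.rows);
for (unsigned int row = 0; row < bmp.rows; ++row)
    std::memcpy(&packed[row * bmp.width],
                bmp.buffer + row * bmp.pitch,
                bmp.width);

glTexSubImage2D(GL_TEXTURE_2D, 0, offset, 0,
                bmp.width, bmp.rows,
                GL_RED, GL_UNSIGNED_BYTE, packed.data());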
Intel RealSense Depth Camera D435i.
I am trying to capture an image and save it in STL format.
I use this project provided by Intel to achieve this task.
https://github.com/IntelRealSense/librealsense/releases/download/v2.29.0/Intel.RealSense.SDK.exe
In the solution there is an application named PointCloud.
I modified the application a little to get a clearer image.
But even with the basic code, the result is not very satisfying.
I capture a smooth surface, but there are many little bumps in the result.
I don't know if the problem comes from the SDK or from the camera.
I check the result in MeshLab, which is a great 3D tool.
Any idea?
The result (a smooth table surface):
Here is my C++ code (I only added some filters, but without the filters I have the same problem):
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include "example.hpp" // Include short list of convenience functions for rendering
#include <algorithm> // std::min, std::max
#include <iostream>
#include <Windows.h>
#include <imgui.h>
#include "imgui_impl_glfw.h"
#include <stdio.h>
#include <windows.h>
#include <conio.h>
#include "tchar.h"
// Helper functions
void register_glfw_callbacks(window& app, glfw_state& app_state);
int main(int argc, char * argv[]) try
{
::ShowWindow(::GetConsoleWindow(), SW_HIDE);
// Create a simple OpenGL window for rendering:
window app(1280, 720, "Capron 3D");
ImGui_ImplGlfw_Init(app, false);
bool capture = false;
HWND hWnd;
hWnd = FindWindow(NULL, _T("Capron 3D"));
ShowWindow(hWnd, SW_MAXIMIZE);
// Construct an object to manage view state
glfw_state app_state;
// register callbacks to allow manipulation of the pointcloud
register_glfw_callbacks(app, app_state);
app_state.yaw = 3.29;
app_state.pitch = 0;
// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
rs2::decimation_filter dec_filter;
rs2::spatial_filter spat_filter;
rs2::threshold_filter thres_filter;
rs2::temporal_filter temp_filter;
float w = static_cast<float>(app.width());
float h = static_cast<float>(app.height());
while (app) // Application still alive?
{
static const int flags = ImGuiWindowFlags_NoCollapse
| ImGuiWindowFlags_NoScrollbar
| ImGuiWindowFlags_NoSavedSettings
| ImGuiWindowFlags_NoTitleBar
| ImGuiWindowFlags_NoResize
| ImGuiWindowFlags_NoMove;
ImGui_ImplGlfw_NewFrame(1);
ImGui::SetNextWindowSize({ app.width(), app.height() });
ImGui::Begin("app", nullptr, flags);
// Set options for the ImGui buttons
ImGui::PushStyleColor(ImGuiCol_TextSelectedBg, { 1, 1, 1, 1 });
ImGui::PushStyleColor(ImGuiCol_Button, { 36 / 255.f, 44 / 255.f, 51 / 255.f, 1 });
ImGui::PushStyleColor(ImGuiCol_ButtonHovered, { 40 / 255.f, 170 / 255.f, 90 / 255.f, 1 });
ImGui::PushStyleColor(ImGuiCol_ButtonActive, { 36 / 255.f, 44 / 255.f, 51 / 255.f, 1 });
ImGui::PushStyleVar(ImGuiStyleVar_FrameRounding, 12);
ImGui::SetCursorPos({ 10, 10 });
if (ImGui::Button("Capturer", { 100, 50 }))
{
capture = true;
}
// Wait for the next set of frames from the camera
auto frames = pipe.wait_for_frames();
auto color = frames.get_color_frame();
// For cameras that don't have RGB sensor, we'll map the pointcloud to infrared instead of color
if (!color)
color = frames.get_infrared_frame();
// Tell pointcloud object to map to this color frame
pc.map_to(color);
auto depth = frames.get_depth_frame();
/*spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 50);
depth = spat_filter.process(depth);*/
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 1);
depth = spat_filter.process(depth);
spat_filter.set_option(RS2_OPTION_HOLES_FILL, 2);
depth = spat_filter.process(depth);
//temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 1);
//depth = temp_filter.process(depth);
// Generate the pointcloud and texture mappings
points = pc.calculate(depth);
// Upload the color frame to OpenGL
app_state.tex.upload(color);
thres_filter.set_option(RS2_OPTION_MIN_DISTANCE, 0);
depth = thres_filter.process(depth);
// Draw the pointcloud
draw_pointcloud(int(w) / 2, int(h) / 2, app_state, points);
if (capture)
{
points.export_to_ply("My3DFolder\\new.ply", depth);
return EXIT_SUCCESS;
}
ImGui::PopStyleColor(4);
ImGui::PopStyleVar();
ImGui::End();
ImGui::Render();
}
return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
MessageBox(0, "Erreur connexion RealSense. Veuillez vérifier votre caméra 3D.", "Capron Podologie", 0);
return EXIT_FAILURE;
}
catch (const std::exception & e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
I found the answer: I was using the hole-filling filter option.
//These lines
spat_filter.set_option(RS2_OPTION_HOLES_FILL, 2);
depth = spat_filter.process(depth);
And the hole-filling algorithm is predictive: it creates points, but the coordinates of these points are not exactly correct. The second parameter of spat_filter.set_option is between 1 and 5; the more I increase this parameter, the noisier the result.
If I remove these lines, I have a clearer result.
But this time I have many holes in the result.
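For reference, what I kept is just the spatial smoothing without the hole-filling option (the commented temporal filter line is only an idea I have not settled on, not something from my original code):

// Spatial smoothing only; no RS2_OPTION_HOLES_FILL, so no predicted points.
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 1);
depth = spat_filter.process(depth);

// Optional idea (untested): temporal filtering averages over frames and can also
// reduce small bumps without inventing geometry the way hole filling does.
// depth = temp_filter.process(depth);

points = pc.calculate(depth);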
I want to get a 1280x720 depth image and a 1280x720 color image.
So I found this sample code:
// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2017 Intel Corporation. All Rights Reserved.
#include "librealsense2/rs.hpp" // Include RealSense Cross Platform API
#include "example.hpp" // Include short list of convenience functions for rendering
#include "opencv2/opencv.hpp"
#include <iostream>
#include "stb-master\stb_image_write.h"
using namespace std;
using namespace cv;
// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
int width = 1280;
int height = 720;
rs2::log_to_console(RS2_LOG_SEVERITY_ERROR);
// Create a simple OpenGL window for rendering:
window app(width, height, "RealSense Capture Example");
// Declare two textures on the GPU, one for color and one for depth
texture depth_image, color_image;
// Declare depth colorizer for pretty visualization of depth data
rs2::colorizer color_map;
color_map.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED,1.f);
color_map.set_option(RS2_OPTION_COLOR_SCHEME, 2.f);
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
while (app) // Application still alive?
{
rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
rs2::frame color = data.get_color_frame(); // Find the color data
// For cameras that don't have RGB sensor, we'll render infrared frames instead of color
if (!color)
color = data.get_infrared_frame();
// Render depth on to the first half of the screen and color on to the second
depth_image.render(depth, { 0, 0, app.width() / 2, app.height() });
color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
}
return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
I want to do this:
1. Press the 'c' key.
2. Save the color image and depth image in PNG format.
I can write the code for step 2, but I don't know how to trigger the action when I press 'c'.
I guess I have to use something like this from example.hpp:
GLFWwindow * win = glfwCreateWindow(tile_w*cols, tile_h*rows, ss.str().c_str(), 0, 0);
glfwSetWindowUserPointer(win, &dev);
glfwSetKeyCallback(win, [](GLFWwindow * win, int key, int scancode, int action, int mods)
{
auto dev = reinterpret_cast<rs::device *>(glfwGetWindowUserPointer(win));
if (action != GLFW_RELEASE) switch (key)
{
case GLFW_KEY_R: color_rectification_enabled = !color_rectification_enabled; break;
case GLFW_KEY_C: align_color_to_depth = !align_color_to_depth; break;
case GLFW_KEY_D: align_depth_to_color = !align_depth_to_color; break;
case GLFW_KEY_E:
if (dev->supports_option(rs::option::r200_emitter_enabled))
{
int value = !dev->get_option(rs::option::r200_emitter_enabled);
std::cout << "Setting emitter to " << value << std::endl;
dev->set_option(rs::option::r200_emitter_enabled, value);
}
break;
case GLFW_KEY_A:
if (dev->supports_option(rs::option::r200_lr_auto_exposure_enabled))
{
int value = !dev->get_option(rs::option::r200_lr_auto_exposure_enabled);
std::cout << "Setting auto exposure to " << value << std::endl;
dev->set_option(rs::option::r200_lr_auto_exposure_enabled, value);
}
break;
}
});
This code is for librealsense version 1.x. I would like to change it to librealsense 2.0 code, but I do not know what to do.
How do I change this code?
Thanks for reading!
Useful samples to get you on your way with RealSense SDK 2.0 and OpenCV are available in the repo at /wrappers/opencv
Keep in mind that the devices supported by SDK 2.0 are:
Intel® RealSense™ Camera D400-Series
Intel® RealSense™ Developer Kit SR300
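To address the two concrete points with SDK 2.0 (requesting 1280x720 and reacting to the 'c' key), here is a hedged sketch rather than a drop-in solution. It assumes the window helper in example.hpp converts to a GLFWwindow* (recent versions do, which is how the examples pass it to GLFW and ImGui calls, but check your copy) and that stb_image_write.h is available, as in your include list:

#include <librealsense2/rs.hpp>
#include "example.hpp"                       // window helper (wraps GLFW)
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb-master\stb_image_write.h"
#include <iostream>
#include <cstdlib>

int main() try
{
    // Ask explicitly for 1280x720 on both streams instead of the defaults.
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
    rs2::pipeline pipe;
    pipe.start(cfg);

    rs2::colorizer color_map;                // turns depth into an RGB image for PNG
    window app(1280, 720, "Capture");

    while (app)
    {
        rs2::frameset data = pipe.wait_for_frames();
        rs2::video_frame color = data.get_color_frame();
        rs2::video_frame depth = color_map.process(data.get_depth_frame());

        // Poll the 'c' key once per frame; 'app' converts to GLFWwindow*.
        static bool was_down = false;
        bool is_down = glfwGetKey(app, GLFW_KEY_C) == GLFW_PRESS;
        if (is_down && !was_down)            // trigger once per key press
        {
            stbi_write_png("color.png", color.get_width(), color.get_height(),
                           color.get_bytes_per_pixel(), color.get_data(),
                           color.get_stride_in_bytes());
            stbi_write_png("depth.png", depth.get_width(), depth.get_height(),
                           depth.get_bytes_per_pixel(), depth.get_data(),
                           depth.get_stride_in_bytes());
        }
        was_down = is_down;
    }
    return EXIT_SUCCESS;
}
catch (const rs2::error& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}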