Kinect Facetracking C++ startup error - c++

We are working with the Kinect to track faces for a school project. We have set up Visual Studio 2012, and all the test programs work correctly. However, when we try to run the code below, it fails; after many attempts to fix it, we still get the following error:
"The application was unable to start correctly (0xc000007b).Click OK to close the application.
The good news is that it finally builds; the bad news is that the compiler doesn't report any errors, so this vague runtime message is all we have to go on.
We are completely lost and we hope that someone can help us or point us in the right direction. Thanks in advance for helping us.
The code:
#include "stdafx.h"
#include <iostream>
#include <Windows.h>
#include <NuiApi.h>
#include <FaceTrackLib.h>
#include <NuiSensor.h>
using namespace std;
HANDLE rgbStream;
HANDLE depthStream;
INuiSensor* sensor;
#define width 640
#define height 480
bool initKinect() {
// Get a working kinect sensor
int numSensors;
if (NuiGetSensorCount(&numSensors) < 0 || numSensors < 1) return false;
if (NuiCreateSensorByIndex(0, &sensor) < 0) return false;
// Initialize sensor
sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);
sensor->NuiImageStreamOpen(
NUI_IMAGE_TYPE_COLOR, // Depth camera or rgb camera?
NUI_IMAGE_RESOLUTION_640x480, // Image resolution
0, // Image stream flags, e.g. near mode
2, // Number of frames to buffer
NULL, // Event handle
&rgbStream);
// --------------- END CHANGED CODE -----------------
return true;
}
BYTE* dataEnd;
USHORT* dataEndD;
void getKinectDataD(){
NUI_IMAGE_FRAME imageFrame;
NUI_LOCKED_RECT LockedRect;
if (sensor->NuiImageStreamGetNextFrame(rgbStream, 0, &imageFrame) < 0) return;
INuiFrameTexture* texture = imageFrame.pFrameTexture;
texture->LockRect(0, &LockedRect, NULL, 0);
const USHORT* curr = (const USHORT*)LockedRect.pBits;
const USHORT* dataEnding = curr + (width*height);
if (LockedRect.Pitch != 0)
{
const BYTE* curr = (const BYTE*)LockedRect.pBits;
dataEnd = (BYTE*)(curr + (width*height) * 4);
}
while (curr < dataEnding) {
// Get depth in millimeters
USHORT depth = NuiDepthPixelToDepth(*curr++);
dataEndD = (USHORT*)depth;
// Draw a grayscale image of the depth:
// B,G,R are all set to depth%256, alpha set to 1.
}
texture->UnlockRect(0);
sensor->NuiImageStreamReleaseFrame(rgbStream, &imageFrame);
}
// This example assumes that the application provides
// void* cameraFrameBuffer, a buffer for an image, and that there is a method
// to fill the buffer with data from a camera, for example
// cameraObj.ProcessIO(cameraFrameBuffer)
int main(){
initKinect();
// Create an instance of a face tracker
IFTFaceTracker* pFT = FTCreateFaceTracker();
if (!pFT)
{
// Handle errors
}
// Initialize cameras configuration structures.
// IMPORTANT NOTE: resolutions and focal lengths must be accurate, since it affects tracking precision!
// It is better to use enums defined in NuiAPI.h
// Video camera config with width, height, focal length in pixels
// NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS focal length is computed for 640x480 resolution
// If you use different resolutions, multiply this focal length by the scaling factor
FT_CAMERA_CONFIG videoCameraConfig = { 640, 480, NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS };
// Depth camera config with width, height, focal length in pixels
// NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS focal length is computed for 320x240 resolution
// If you use different resolutions, multiply this focal length by the scaling factor
FT_CAMERA_CONFIG depthCameraConfig = { 320, 240, NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS };
// Initialize the face tracker
HRESULT hr = pFT->Initialize(&videoCameraConfig, &depthCameraConfig, NULL, NULL);
if (FAILED(hr))
{
// Handle errors
}
// Create a face tracking result interface
IFTResult* pFTResult = NULL;
hr = pFT->CreateFTResult(&pFTResult);
if (FAILED(hr))
{
// Handle errors
}
// Prepare image interfaces that hold RGB and depth data
IFTImage* pColorFrame = FTCreateImage();
IFTImage* pDepthFrame = FTCreateImage();
if (!pColorFrame || !pDepthFrame)
{
// Handle errors
}
// Attach created interfaces to the RGB and depth buffers that are filled with
// corresponding RGB and depth frame data from Kinect cameras
pColorFrame->Attach(640, 480, dataEnd, FTIMAGEFORMAT_UINT8_R8G8B8, 640 * 3);
pDepthFrame->Attach(320, 240, dataEndD, FTIMAGEFORMAT_UINT16_D13P3, 320 * 2);
// You can also use Allocate() method in which case IFTImage interfaces own their memory.
// In this case use CopyTo() method to copy buffers
FT_SENSOR_DATA sensorData;
sensorData.ZoomFactor = 1.0f; // Not used must be 1.0
bool isFaceTracked = false;
// Track a face
while (true)
{
// Call Kinect API to fill videoCameraFrameBuffer and depthFrameBuffer with RGB and depth data
getKinectDataD();
// Check if we are already tracking a face
if (!isFaceTracked)
{
// Initiate face tracking.
// This call is more expensive and searches the input frame for a face.
hr = pFT->StartTracking(&sensorData, NULL, NULL, pFTResult);
if (SUCCEEDED(hr))
{
isFaceTracked = true;
}
else
{
// No faces found
isFaceTracked = false;
}
}
else
{
// Continue tracking. It uses a previously known face position.
// This call is less expensive than StartTracking()
hr = pFT->ContinueTracking(&sensorData, NULL, pFTResult);
if (FAILED(hr))
{
// Lost the face
isFaceTracked = false;
}
}
// Do something with pFTResult like visualize the mask, drive your 3D avatar,
// recognize facial expressions
}
// Clean up
pFTResult->Release();
pColorFrame->Release();
pDepthFrame->Release();
pFT->Release();
return 0;
}

We figured it out: we were indeed using the wrong DLL, and it now runs without errors. But we ran into another problem: we have no clue how to use pFTResult to retrieve the face rectangle and the face angles (e.g. via GetFaceRect). Does somebody know how?
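From FaceTrackLib.h it looks like something along these lines should work (GetFaceRect for the rectangle, Get3DPose for the angles), but we are not sure this is the intended usage, so treat it as a guess:
HRESULT hrStatus = pFTResult->GetStatus();
if (SUCCEEDED(hrStatus))
{
    RECT faceRect = {};
    if (SUCCEEDED(pFTResult->GetFaceRect(&faceRect)))
    {
        // faceRect bounds the tracked face in the 640x480 colour frame
    }

    FLOAT scale = 0.0f;
    FLOAT rotationXYZ[3] = {};    // pitch, yaw, roll in degrees
    FLOAT translationXYZ[3] = {}; // head position in metres
    if (SUCCEEDED(pFTResult->Get3DPose(&scale, rotationXYZ, translationXYZ)))
    {
        // rotationXYZ holds the face angles
    }
}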

Related

Applying HLSL Pixel Shaders to Win32 Screen Capture

A little background: I'm attempting to make a Windows (10) application which makes the screen look like an old CRT monitor: scanlines, blur, and all. I'm using the official Microsoft screen capture demo as a starting point. At this stage I can capture a window and display it back in a new mouse-through window as if it were the original window.
I am attempting to use the CRT-Royale CRT shaders, which are generally considered the best CRT shaders; these are available in .cg format. I transpile them to HLSL with cgc, then compile the HLSL files to shader bytecode with fxc. I am able to successfully load the compiled shaders and create the pixel shader, which I then set on the D3D context. I then attempt to copy the capture surface frame to a pixel shader resource and set the created shader resource. All of this builds and runs, but I do not see any difference in the output image and am not sure how to proceed. Below is the relevant code. I am not a C++ developer and am making this as a personal project which I plan on open-sourcing once I have a primitive working version. Any advice is appreciated, thanks.
SimpleCapture::SimpleCapture(
IDirect3DDevice const& device,
GraphicsCaptureItem const& item)
{
m_item = item;
m_device = device;
// Set up
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
d3dDevice->GetImmediateContext(m_d3dContext.put());
auto size = m_item.Size();
m_swapChain = CreateDXGISwapChain(
d3dDevice,
static_cast<uint32_t>(size.Width),
static_cast<uint32_t>(size.Height),
static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
2);
// ADDED THIS
HRESULT hr1 = D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer);
HRESULT hr = d3dDevice->CreatePixelShader(
ps_1_buffer->GetBufferPointer(),
ps_1_buffer->GetBufferSize(),
nullptr,
&ps_1
);
m_d3dContext->PSSetShader(
ps_1,
nullptr,
0
);
// END OF ADDED CHANGES
// Create framepool, define pixel format (DXGI_FORMAT_B8G8R8A8_UNORM), and frame size.
m_framePool = Direct3D11CaptureFramePool::Create(
m_device,
DirectXPixelFormat::B8G8R8A8UIntNormalized,
2,
size);
m_session = m_framePool.CreateCaptureSession(m_item);
m_lastSize = size;
m_frameArrived = m_framePool.FrameArrived(auto_revoke, { this, &SimpleCapture::OnFrameArrived });
}
void SimpleCapture::OnFrameArrived(
Direct3D11CaptureFramePool const& sender,
winrt::Windows::Foundation::IInspectable const&)
{
auto newSize = false;
{
auto frame = sender.TryGetNextFrame();
auto frameContentSize = frame.ContentSize();
if (frameContentSize.Width != m_lastSize.Width ||
frameContentSize.Height != m_lastSize.Height)
{
// The thing we have been capturing has changed size.
// We need to resize our swap chain first, then blit the pixels.
// After we do that, retire the frame and then recreate our frame pool.
newSize = true;
m_lastSize = frameContentSize;
m_swapChain->ResizeBuffers(
2,
static_cast<uint32_t>(m_lastSize.Width),
static_cast<uint32_t>(m_lastSize.Height),
static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
0);
}
{
auto frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());
com_ptr<ID3D11Texture2D> backBuffer;
check_hresult(m_swapChain->GetBuffer(0, guid_of<ID3D11Texture2D>(), backBuffer.put_void()));
// ADDED THIS
D3D11_TEXTURE2D_DESC txtDesc = {};
txtDesc.MipLevels = txtDesc.ArraySize = 1;
txtDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
txtDesc.SampleDesc.Count = 1;
txtDesc.Usage = D3D11_USAGE_IMMUTABLE;
txtDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
ID3D11Texture2D *tex;
d3dDevice->CreateTexture2D(&txtDesc, NULL,
&tex);
frameSurface.copy_to(&tex);
d3dDevice->CreateShaderResourceView(
tex,
nullptr,
srv_1
);
auto texture = srv_1;
m_d3dContext->PSSetShaderResources(0, 1, texture);
// END OF ADDED CHANGES
m_d3dContext->CopyResource(backBuffer.get(), frameSurface.get());
}
}
DXGI_PRESENT_PARAMETERS presentParameters = { 0 };
m_swapChain->Present1(1, 0, &presentParameters);
... // Truncated
Shaders define how things are drawn. However, you don't draw anything - you just copy, which is why the shader doesn't do anything.
What you should do is remove the CopyResource call and instead draw a full-screen quad to the back buffer (which requires you to create a vertex buffer that you can bind, set the back buffer as the render target, and finally call Draw/DrawIndexed to actually render something, which in turn invokes the shader).
Also - since I'm not sure whether you already do this and just stripped it from the shown code - functions like CreatePixelShader don't return HRESULTs just for the fun of it - you should check what is actually returned, because DirectX silently returns most errors and expects you to handle them, instead of crashing your program.
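For completeness, here is a rough per-frame sketch of that draw path. It is only an illustration of the idea, not drop-in code: FullScreenVertex, m_vertexBuffer, m_inputLayout, m_vertexShader and m_sampler are hypothetical members you would create once at init time (four positions/UVs forming a triangle strip, a pass-through vertex shader, a linear sampler), while d3dDevice, backBuffer, m_d3dContext, m_lastSize, ps_1 and srv_1 are the names already used in the question.
// Create a render target view of the back buffer; in real code, create this once
// per swap-chain resize rather than every frame.
com_ptr<ID3D11RenderTargetView> rtv;
d3dDevice->CreateRenderTargetView(backBuffer.get(), nullptr, rtv.put());

// Bind the back buffer as the render target instead of copying into it.
ID3D11RenderTargetView* rtvs[] = { rtv.get() };
m_d3dContext->OMSetRenderTargets(1, rtvs, nullptr);

D3D11_VIEWPORT vp = {};
vp.Width = static_cast<float>(m_lastSize.Width);
vp.Height = static_cast<float>(m_lastSize.Height);
vp.MaxDepth = 1.0f;
m_d3dContext->RSSetViewports(1, &vp);

// Bind the quad geometry and both shaders, then draw; this is the call that
// actually runs the pixel shader over the captured frame.
UINT stride = sizeof(FullScreenVertex), offset = 0;
ID3D11Buffer* vbs[] = { m_vertexBuffer.get() };
m_d3dContext->IASetVertexBuffers(0, 1, vbs, &stride, &offset);
m_d3dContext->IASetInputLayout(m_inputLayout.get());
m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
m_d3dContext->VSSetShader(m_vertexShader.get(), nullptr, 0);
m_d3dContext->PSSetShader(ps_1, nullptr, 0);
m_d3dContext->PSSetShaderResources(0, 1, srv_1); // the SRV of the captured frame, as in the question
ID3D11SamplerState* samplers[] = { m_sampler.get() };
m_d3dContext->PSSetSamplers(0, 1, samplers);
m_d3dContext->Draw(4, 0);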

Intel RealSense Depth Camera D435i noises and mounds

Intel RealSense Depth Camera D435i.
I am trying to capture an image and save it in STL format.
I use this project provided by Intel to achieve this task.
https://github.com/IntelRealSense/librealsense/releases/download/v2.29.0/Intel.RealSense.SDK.exe
In the solution there is an application named PointCloud.
I modified the application a little to get a cleaner image.
But even with the basic code, the result is not very satisfying.
I capture a smooth surface, but there are many little bumps in the result.
I don't know if the problem comes from the SDK or from the camera.
I check the result in MeshLab, which is a great 3D tool.
Any idea?
The result (a smooth table surface):
Here is my C++ code (I only added some filters, but without the filters I have the same problem):
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include "example.hpp" // Include short list of convenience functions for rendering
#include <algorithm> // std::min, std::max
#include <iostream>
#include <Windows.h>
#include <imgui.h>
#include "imgui_impl_glfw.h"
#include <stdio.h>
#include <windows.h>
#include <conio.h>
#include "tchar.h"
// Helper functions
void register_glfw_callbacks(window& app, glfw_state& app_state);
int main(int argc, char * argv[]) try
{
::ShowWindow(::GetConsoleWindow(), SW_HIDE);
// Create a simple OpenGL window for rendering:
window app(1280, 720, "Capron 3D");
ImGui_ImplGlfw_Init(app, false);
bool capture = false;
HWND hWnd;
hWnd = FindWindow(NULL, _T("Capron 3D"));
ShowWindow(hWnd, SW_MAXIMIZE);
// Construct an object to manage view state
glfw_state app_state;
// register callbacks to allow manipulation of the pointcloud
register_glfw_callbacks(app, app_state);
app_state.yaw = 3.29;
app_state.pitch = 0;
// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
rs2::decimation_filter dec_filter;
rs2::spatial_filter spat_filter;
rs2::threshold_filter thres_filter;
rs2::temporal_filter temp_filter;
float w = static_cast<float>(app.width());
float h = static_cast<float>(app.height());
while (app) // Application still alive?
{
static const int flags = ImGuiWindowFlags_NoCollapse
| ImGuiWindowFlags_NoScrollbar
| ImGuiWindowFlags_NoSavedSettings
| ImGuiWindowFlags_NoTitleBar
| ImGuiWindowFlags_NoResize
| ImGuiWindowFlags_NoMove;
ImGui_ImplGlfw_NewFrame(1);
ImGui::SetNextWindowSize({ app.width(), app.height() });
ImGui::Begin("app", nullptr, flags);
// Set options for the ImGui buttons
ImGui::PushStyleColor(ImGuiCol_TextSelectedBg, { 1, 1, 1, 1 });
ImGui::PushStyleColor(ImGuiCol_Button, { 36 / 255.f, 44 / 255.f, 51 / 255.f, 1 });
ImGui::PushStyleColor(ImGuiCol_ButtonHovered, { 40 / 255.f, 170 / 255.f, 90 / 255.f, 1 });
ImGui::PushStyleColor(ImGuiCol_ButtonActive, { 36 / 255.f, 44 / 255.f, 51 / 255.f, 1 });
ImGui::PushStyleVar(ImGuiStyleVar_FrameRounding, 12);
ImGui::SetCursorPos({ 10, 10 });
if (ImGui::Button("Capturer", { 100, 50 }))
{
capture = true;
}
// Wait for the next set of frames from the camera
auto frames = pipe.wait_for_frames();
auto color = frames.get_color_frame();
// For cameras that don't have RGB sensor, we'll map the pointcloud to infrared instead of color
if (!color)
color = frames.get_infrared_frame();
// Tell pointcloud object to map to this color frame
pc.map_to(color);
auto depth = frames.get_depth_frame();
/*spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 50);
depth = spat_filter.process(depth);*/
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 1);
depth = spat_filter.process(depth);
spat_filter.set_option(RS2_OPTION_HOLES_FILL, 2);
depth = spat_filter.process(depth);
//temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 1);
//depth = temp_filter.process(depth);
// Generate the pointcloud and texture mappings
points = pc.calculate(depth);
// Upload the color frame to OpenGL
app_state.tex.upload(color);
thres_filter.set_option(RS2_OPTION_MIN_DISTANCE, 0);
depth = thres_filter.process(depth);
// Draw the pointcloud
draw_pointcloud(int(w) / 2, int(h) / 2, app_state, points);
if (capture)
{
points.export_to_ply("My3DFolder\\new.ply", depth);
return EXIT_SUCCESS;
}
ImGui::PopStyleColor(4);
ImGui::PopStyleVar();
ImGui::End();
ImGui::Render();
}
return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
MessageBox(0, "Erreur connexion RealSense. Veuillez vérifier votre caméra 3D.", "Capron Podologie", 0);
return EXIT_FAILURE;
}
catch (const std::exception & e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
I found the answer:
I was using the HolesFill filter.
//These lines
spat_filter.set_option(RS2_OPTION_HOLES_FILL, 2);
depth = spat_filter.process(depth);
The holes-fill algorithm is predictive: it creates points, but the coordinates of these points are not exactly correct. The second parameter of spat_filter.set_option is between 1 and 5; the more I increase this parameter, the noisier the result.
If I remove these lines, I get a cleaner result.
But this time I have many holes in the result.
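For reference, a minimal sketch of a filter chain with the spatial filter's predictive hole filling switched off. The dedicated rs2::hole_filling_filter at the end is an assumption on my part (a possible compromise that fills holes from neighbouring valid pixels rather than extrapolating), not something from the original code:
rs2::spatial_filter spat_filter;
rs2::temporal_filter temp_filter;
rs2::hole_filling_filter hole_filter; // separate filter, not the spatial filter's option

spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.5f);
spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 20);
spat_filter.set_option(RS2_OPTION_HOLES_FILL, 0); // 0 = predictive hole filling disabled
hole_filter.set_option(RS2_OPTION_HOLES_FILL, 2); // 2 = fill from the nearest valid pixel

// Inside the frame loop:
rs2::frame depth = frames.get_depth_frame();
depth = spat_filter.process(depth);
depth = temp_filter.process(depth);
depth = hole_filter.process(depth); // comment this line out to compare the results
points = pc.calculate(depth);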

SDL2: loading certain textures makes SDL_RenderFillRect colors weird

I made a program that has two different states: one for menu display ("Menu State"), and the other for drawing some stuff ("Draw State").
But I came across a weird thing: if I load certain PNGs as textures and copy them to the renderer for display, then leave the "Menu State" to enter the "Draw State", the textures will somehow make the rectangle color display incorrectly (for example, green goes dark).
In my code, switching to a new state (which invokes MenuState::onExit()) erases the texture map (a map of texture smart pointers indexed by std::string),
so the textures that were loaded don't even exist in the "Draw State".
I couldn't figure out what went wrong. Here is some of my code:
void TextureManager::DrawPixel(int x, int y, int width, int height, SDL_Renderer *pRenderer)
{
SDL_Rect rect;
rect.x = x;
rect.y = y;
rect.w = width;
rect.h = height;
SDL_SetRenderDrawColor(pRenderer, 0, 255, 0, 255);//same color value
SDL_RenderFillRect(pRenderer, &rect);
}
static bool TextureManagerLoadFile(std::string filename, std::string id)
{
return TextureManager::Instance().Load(filename, id, Game::Instance().GetRenderer());
}
bool TextureManager::Load(std::string filename, std::string id, SDL_Renderer *pRenderer)
{
if(m_textureMap.count(id) != 0)
{
return false;
}
SDL_Surface *pTempSurface = IMG_Load(filename.c_str());
SDL_Texture *pTexutre = SDL_CreateTextureFromSurface(pRenderer, pTempSurface);
SDL_FreeSurface(pTempSurface);
if(pTexutre != 0)
{
m_textureMap[id] = std::make_unique<textureData>(pTexutre, 0, 0);
SDL_QueryTexture(pTexutre, NULL, NULL, &m_textureMap[id]->width, &m_textureMap[id]->height);
return true;
}
return false;
}
void TextureManager::ClearFromTextureMap(std::string textureID)
{
m_textureMap.erase(textureID);
}
bool MenuState::onEnter()
{
if(!TextureManagerLoadFile("assets/Main menu/BTN PLAY.png", "play_button"))
{
return false;
}
if(!TextureManagerLoadFile("assets/Main menu/BTN Exit.png", "exit_button"))
//replace different png file here will also affect the outcome
{
return false;
}
if(!TextureManagerLoadFile("assets/Main menu/BTN SETTINGS.png", "setting_button"))
{
return false;
}
int client_w,client_h;
SDL_GetWindowSize(Game::Instance().GetClientWindow(),&client_w, &client_h);
int playBtn_w = TextureManager::Instance().GetTextureWidth("play_button");
int playBtn_h = TextureManager::Instance().GetTuextureHeight("play_button");
int center_x = (client_w - playBtn_w) / 2;
int center_y = (client_h - playBtn_h) / 2;
ParamsLoader pPlayParams(center_x, center_y, playBtn_w, playBtn_h, "play_button");
int settingBtn_w = TextureManager::Instance().GetTextureWidth("setting_button");
int settingBtn_h = TextureManager::Instance().GetTuextureHeight("setting_button");
ParamsLoader pSettingParams(center_x , center_y + (playBtn_h + settingBtn_h) / 2, settingBtn_w, settingBtn_h, "setting_button");
int exitBtn_w = TextureManager::Instance().GetTextureWidth("exit_button");
int exitBtn_h = TextureManager::Instance().GetTuextureHeight("exit_button");
ParamsLoader pExitParams(10, 10, exitBtn_w, exitBtn_h, "exit_button");
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pPlayParams, s_menuToPlay));
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pSettingParams, s_menuToPlay));
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pExitParams, s_menuExit));
//change order of the 3 line code above
//or replace different png for exit button, will make the rectangle color different
std::cout << "Entering Menu State" << std::endl;
return true;
}
bool MenuState::onExit()
{
for(auto i : m_gameObjects)
{
i->Clean();
}
m_gameObjects.clear();
TextureManager::Instance().ClearFromTextureMap("play_button");
TextureManager::Instance().ClearFromTextureMap("exit_button");
TextureManager::Instance().ClearFromTextureMap("setting_button");
std::cout << "Exiting Menu State" << std::endl;
return true;
}
void Game::Render()
{
SDL_SetRenderDrawColor(m_pRenderer, 255, 255, 255, 255);
SDL_RenderClear(m_pRenderer);
m_pGameStateMachine->Render();
SDL_RenderPresent(m_pRenderer);
}
Screenshots: Menu State figure; correct color; wrong color.
Edit: Also, I found out that this weird phenomenon only happens when the renderer is created with the SDL_RENDERER_ACCELERATED flag and a -1 or 0 driver index; i.e. SDL_CreateRenderer(m_pWindow, 1, SDL_RENDERER_ACCELERATED); or SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_SOFTWARE); works fine!
I have been plagued by this very same issue. The link provided by ekodes is how I resolved it, as order of operations had no effect for me.
I was able to pull the d3d9 device via SDL_RenderGetD3D9Device(), then call SetTextureStageState as described in the d3d blending link from ekodes's answer.
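A rough sketch of that workaround, in case it helps someone. The exact stage state is an assumption based on the linked blending discussion (D3DTOP_SELECTARG2 makes stage 0 use the diffuse colour instead of modulating it with the stale texture), so treat it as a starting point rather than the definitive fix:
#include <SDL_system.h> // SDL_RenderGetD3D9Device, Windows only
#include <d3d9.h>

void ForceDiffuseColorStage(SDL_Renderer* renderer) // call right before SDL_RenderFillRect
{
    IDirect3DDevice9* d3dDevice = SDL_RenderGetD3D9Device(renderer);
    if (d3dDevice != nullptr)
    {
        // Default COLOROP is D3DTOP_MODULATE (texture * diffuse); with a leftover
        // texture still bound, solid fills come out tinted or darkened.
        d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG2);
        d3dDevice->Release(); // SDL_RenderGetD3D9Device adds a reference to the device
    }
}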
I was having the same issue. I got a vibrant green color when trying to render a light gray.
The combination of parameters that fixes the issue for you pertains to the driver being used: -1 selects the first driver that meets the criteria, in this case one that is accelerated.
Using SDL_GetRendererInfo I was able to see this happens when using the "direct3d" driver.
I found this question talking about blending in direct3d.
I figured it out eventually. In addition to alpha blending there is also color blending, so DirectX merges the color of the last texture with the last primitive.
The question does provide a fix for this in DirectX; however, I'm not sure how to apply it in regards to SDL. I also have not been able to find a solution for this problem in SDL.
I was drawing green text with SDL_ttf, which uses a texture, and then drawing a gray rectangle for another component elsewhere on the screen.
What's strange is that it doesn't seem to happen all the time; however, mine seems to happen predominantly with SDL_ttf. At first I thought it might be a byproduct of TTF_RenderText_Blended; however, it happens with the other render functions as well. It also does not appear to be affected by the blend mode of the texture generated by those functions.
So in my case, I was able to change the order of the operations to get the correct color.
Alternatively, using the OpenGL driver appeared to fix this as well, similar to what you mentioned (this was driver index 1 for me).
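For anyone who wants to confirm which backend they actually ended up with, or to ask for the OpenGL one explicitly, a small sketch (SDL_HINT_RENDER_DRIVER is the standard hint for this; variable names mirror the question's):
// Report the active render driver, e.g. "direct3d", "opengl" or "software".
SDL_RendererInfo info;
if (SDL_GetRendererInfo(m_pRenderer, &info) == 0)
{
    SDL_Log("Using render driver: %s", info.name);
}

// Request the OpenGL backend before creating the renderer; index -1 then picks
// the first driver that matches the hint and the requested flags.
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");
SDL_Renderer* pRenderer = SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_ACCELERATED);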
I'm not sure this classifies as an "Answer" but hopefully it helps someone out or points them in the right direction.

CImage: copying 8-bit JPEGs gives a black image

In the following code extract I load a 300 DPI 8-bit JPEG and then try to write it out again, also as a JPEG, from a fresh instance of a CImage.
But I end up with a black image with the correct dimensions.
Can someone explain why that is?
Ignore the commented-out brush lines; I'll get over that mental hurdle later.
If I hard-code bppGRAPHIC to 24 it does copy the picture (at a DPI of 96), resulting in a smaller file size. I can live with this; I guess I am just curious.
Update 07-Nov-2018
So I added the indented 'if' statement and it still came out black. The colorCountIMAGE comes out at 20. (The IsIndexed lines were to help me with an ASSERT issue I found in SetColorTable - but it went away.)
I think I may just force everything to 24 bit.
Thanks
4GLGuy
PS This is all being done in VS2017.
char filePath[256] = "C:\\temp\\b64-one.jpg";
CImage imageGRAPHIC, imageJPG;
HRESULT retval;
bool result;
retval = imageGRAPHIC.Load(filePath);
if (retval != S_OK) {
throw FALSE;
}
int xGRAPHIC, yGRAPHIC, bppGRAPHIC = 0;
xGRAPHIC = imageGRAPHIC.GetWidth();
yGRAPHIC = imageGRAPHIC.GetHeight();
bppGRAPHIC = imageGRAPHIC.GetBPP();
//Create my target JPG same size and bit depth specifying
//that there is no alpha channel (dwflag last param)
result = imageJPG.Create(xGRAPHIC, yGRAPHIC, bppGRAPHIC, 0);
auto dcJPEG = imageJPG.GetDC();
if (bppGRAPHIC <= 8)
{
result = imageJPG.IsIndexed();
result = imageGRAPHIC.IsIndexed();
auto dcIMAGE = imageGRAPHIC.GetDC();
int colorCountIMAGE = GetDeviceCaps(dcIMAGE, NUMCOLORS);
RGBQUAD* coltblIMAGE = new RGBQUAD[colorCountIMAGE];
imageGRAPHIC.GetColorTable(0, colorCountIMAGE, &coltblIMAGE[0]);
imageJPG.SetColorTable(0, colorCountIMAGE, &coltblIMAGE[0]);
}
//Let there be white - 8 bit depth with 24 bit brush - no worky
//CRect rect{ 0, 0, xGRAPHIC, yGRAPHIC };
//HBRUSH white = CreateSolidBrush(RGB(255, 255, 255));
//FillRect(dcJPEG, &rect, white);
result = imageGRAPHIC.Draw(dcJPEG, 0, 0);
retval = imageJPG.Save(filePath, Gdiplus::ImageFormatJPEG);
if (retval != S_OK) {
throw FALSE;
}
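For what it's worth, a hedged sketch of the palette-copy step from the update, sizing the table from the image itself instead of from the DC. CImage::GetMaxColorTableEntries() reports the colour-table size of the loaded bitmap (256 for an 8-bit image); whether this alone cures the black output is an assumption I have not verified:
// Needs #include <vector>. Variable names match the snippet above.
if (bppGRAPHIC <= 8)
{
    int colorCountIMAGE = imageGRAPHIC.GetMaxColorTableEntries(); // 256 for 8-bit
    if (colorCountIMAGE > 0)
    {
        std::vector<RGBQUAD> coltblIMAGE(colorCountIMAGE);
        imageGRAPHIC.GetColorTable(0, colorCountIMAGE, coltblIMAGE.data());
        imageJPG.SetColorTable(0, colorCountIMAGE, coltblIMAGE.data());
    }
}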

MFT frame extraction in C++

I have a working solution for extracting frames from a video in C++ on GitHub. The problem is that it's very slow. What I am doing is using a timer, playing the video, and whenever a frame is ready I convert it into a bitmap, save it, and seek to the next position. I don't think this is the right approach; there must be another way of pulling frames out. Please go through the GitHub project and suggest any changes.
The following is my timer function:
if (m_spMediaEngine != nullptr)
{
LONGLONG pts;
if (m_spMediaEngine->OnVideoStreamTick(&pts) == S_OK)
{
// new frame available at the media engine so get it
ComPtr<ID3D11Texture2D> spTextureDst;
MEDIA::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&CD3D11_TEXTURE2D_DESC(
DXGI_FORMAT_B8G8R8A8_UNORM,
m_rcTarget.right, // Width
m_rcTarget.bottom, // Height
1, // MipLevels
1, // ArraySize
D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET
),
nullptr,
&spTextureDst
)
);
if (FAILED(
m_spMediaEngine->TransferVideoFrame(spTextureDst.Get(), nullptr, &m_rcTarget, &m_bkgColor)
))
{
return;
}
Position = Position + interval;
SetPlaybackPosition(Position);
ComPtr<IDXGISurface2> surface;
MEDIA::ThrowIfFailed(
spTextureDst.Get()->QueryInterface(
__uuidof(IDXGISurface2), &surface)
);
D2D1_BITMAP_PROPERTIES1 bitmapProperties =
D2D1::BitmapProperties1(
D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
96,
96
);
m_d2dContext->CreateBitmapFromDxgiSurface(surface.Get(), &bitmapProperties, &bitmap);
SaveBitmapToFile();
}
}
My question is: is this the right and only way of extracting frames?
I would do something along the lines of converting each frame to a hash and storing that, which should speed things up. For example, you could create a class to hold a specific instance of a hash (or create a linked list of those hashes), then extract a frame and hash it using dHash: http://www.hackerfactor.com/blog/index.php?/archives/2013/01/21.html
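To make that concrete, here is a self-contained sketch of the dHash idea from the linked post (my own illustration, not code from the project): it assumes you have already shrunk a frame to a 9x8 grayscale buffer, and the function names are made up.
#include <cstdint>

// 64-bit difference hash: one bit per horizontal neighbour comparison in a 9x8 thumbnail.
uint64_t DHash9x8(const uint8_t gray[9 * 8])
{
    uint64_t hash = 0;
    for (int row = 0; row < 8; ++row)
    {
        for (int col = 0; col < 8; ++col)
        {
            // Set the bit if this pixel is brighter than its right-hand neighbour.
            const bool brighter = gray[row * 9 + col] > gray[row * 9 + col + 1];
            hash = (hash << 1) | (brighter ? 1u : 0u);
        }
    }
    return hash;
}

// Hamming distance between two hashes; a small distance means near-duplicate frames,
// so you could skip saving frames that barely changed instead of writing every bitmap.
int HashDistance(uint64_t a, uint64_t b)
{
    uint64_t x = a ^ b;
    int count = 0;
    while (x != 0) { x &= x - 1; ++count; }
    return count;
}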