I am writing in C++ using MFC under Visual Studio 2022.
I have a CRichEditCtrl embedded in a dialog, and I want to feed it pre-formatted text markup written with an external RTF editor. In the end I want to embed the text within the program (to avoid having an external file), and so far as I can see the only way to do this is by putting the text into a CMemFile and then using CRichEditCtrl::StreamIn. If there is a better way of doing this I'd like to hear about it.
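For reference, the embedded-text plan would look roughly like this (an untested sketch; the RTF literal is just a stand-in for the real markup):
// RTF markup embedded in the program as a narrow string literal (RTF is 7-bit ASCII).
static const char szHelpRtf[] = "{\\rtf1\\ansi Hello \\b world\\b0 !}";

CMemFile memFile((BYTE*)szHelpRtf, sizeof(szHelpRtf) - 1);  // CMemFile derives from CFile
EDITSTREAM es = {};
es.dwCookie = (DWORD_PTR)&memFile;   // the same CFile-based callback works unchanged
es.pfnCallback = FileStreamInCallback;
m_HelpRTF.StreamIn(SF_RTF, es);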
However, to start off I just tried to read from an external RTF file using the example code from the Microsoft documentation.
For completeness, here is my version of the code:
CFile cFile(TEXT("NeuroSimHelp.rtf"), CFile::modeRead);
EDITSTREAM es;
es.dwCookie = (DWORD)&cFile;
es.pfnCallback = FileStreamInCallback;
m_HelpRTF.StreamIn(SF_RTF, es);
static DWORD CALLBACK FileStreamInCallback(DWORD dwCookie, LPBYTE pbBuff, LONG cb, LONG* pcb);
static DWORD CALLBACK FileStreamInCallback(DWORD dwCookie, LPBYTE pbBuff, LONG cb, LONG* pcb)
{
    CFile* pFile = (CFile*)dwCookie;
    *pcb = pFile->Read(pbBuff, cb);
    return 0;
}
This compiles and runs fine in x86 mode, but fails to compile in x64 mode.
The problem lies in the line:
es.pfnCallback = FileStreamInCallback;
which generates the compile error:
error C2440: '=': cannot convert from 'DWORD (__cdecl *)(DWORD,LPBYTE,LONG,LONG *)' to 'EDITSTREAMCALLBACK'
As I said, this compiles and works in x86, so I guess the problem is that in the x64 build the dwCookie parameter of EDITSTREAMCALLBACK is a DWORD_PTR rather than a DWORD, so my callback's signature no longer matches the expected type.
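If that is the cause, then presumably the fix is to use DWORD_PTR in both the callback signature and the cookie cast; an untested sketch:
static DWORD CALLBACK FileStreamInCallback(DWORD_PTR dwCookie, LPBYTE pbBuff, LONG cb, LONG* pcb)
{
    CFile* pFile = (CFile*)dwCookie;
    *pcb = pFile->Read(pbBuff, cb);
    return 0;
}

// ...and at the call site:
es.dwCookie = (DWORD_PTR)&cFile;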
I'm probably missing something obvious, but if anyone can confirm the right fix (or knows a better one), the help would be much appreciated.
Thanks,
Bill H
I have a problem with the FTD2xx driver.
I'm using Qt (C++) on Fedora 26 (64-bit) and the latest version of the FTD2xx library for a "2232H" device.
The build command is:
qmake /Address/ProjectName.pro -r -spec linux-g++ CONFIG+=debug CONFIG+=qml_debug
Problem:
FT_OpenEx(...) returns 0 (FT_OK), but the other FTD2xx functions return non-zero (not FT_OK).
A section of my code:
FT_HANDLE ftH;
FT_STATUS ftStatus;
ftStatus = FT_OpenEx(const_cast<char*>("MYDevName"), FT_OPEN_BY_SERIAL_NUMBER, &ftH);
std::cout<<"FTST open:"<< ftStatus<<std::endl;
char a[10]; DWORD b;   // unused in this excerpt
ftStatus = FT_SetBitMode(&ftH,0xff,0);
std::cout<<"FTST RESET:"<< ftStatus<<std::endl;
ftStatus = FT_SetBitMode(&ftH,0xff,0x40);
std::cout<<"FTST SPEED:"<< ftStatus<<std::endl;
ftStatus = FT_Close(&ftH);
std::cout<<"FTST CLOSE:"<< ftStatus<<std::endl;
And the output:
FTST open:0
FTST RESET:1
FTST SPEED:1
FTST CLOSE:1
ftStatus = 1 means FT_INVALID_HANDLE.
Additional details:
The command rmmod ftdi_sio has been run.
The lib directory is /dev/local/lib.
Qt settings:
LIBS += -L$$PWD/../../../usr/local/lib/ -lftd2xx
INCLUDEPATH += $$PWD/../../../usr/local/include
DEPENDPATH += $$PWD/../../../usr/local/include
The FT_HANDLE is an output parameter in FT_OpenEx. You are correctly passing &ftH so that the function can overwrite ftH.
The FT_HANDLE is an input parameter to the other functions. You are incorrectly passing &ftH and should pass just ftH.
FT_Close(&ftH);   // wrong: passes the address of the handle
FT_Close(ftH);    // correct: passes the handle itself
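Applied to the code above, the corrected sequence would be (only FT_OpenEx takes the address):
FT_HANDLE ftH;
FT_STATUS ftStatus;

// Output parameter: FT_OpenEx fills in ftH, so its address is passed.
ftStatus = FT_OpenEx(const_cast<char*>("MYDevName"), FT_OPEN_BY_SERIAL_NUMBER, &ftH);

// Input parameter: the handle itself is passed, not its address.
ftStatus = FT_SetBitMode(ftH, 0xff, 0);
ftStatus = FT_SetBitMode(ftH, 0xff, 0x40);
ftStatus = FT_Close(ftH);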
Unfortunately FT_HANDLE is defined in a loosely-typed way:
typedef void* PVOID;
typedef PVOID FT_HANDLE;
Since void** implicitly converts to void*, the compiler cannot help you catch this mistake.¹ In general, opaque handle types should be declared as
typedef struct AlwaysIncompleteType * MY_HANDLE;
and then the pointer and double-pointer types will be appropriately incompatible.
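A minimal sketch of the difference (all names here are made up for illustration):
// Loosely typed handle, as in FTD2xx: any object pointer converts to void*,
// so passing the handle's address compiles but is wrong at runtime.
typedef void* LOOSE_HANDLE;
void CloseLoose(LOOSE_HANDLE) {}

// Strongly typed handle: pointer-to-incomplete-struct and its address are
// distinct, incompatible types, so the wrong call no longer compiles.
typedef struct AlwaysIncompleteType* STRICT_HANDLE;
void CloseStrict(STRICT_HANDLE) {}

void Demo()
{
    LOOSE_HANDLE lh = nullptr;
    CloseLoose(&lh);      // compiles (LOOSE_HANDLE* -> void*), silently wrong

    STRICT_HANDLE sh = nullptr;
    // CloseStrict(&sh);  // error: cannot convert STRICT_HANDLE* to STRICT_HANDLE
    CloseStrict(sh);      // only the correct call compiles
}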
¹ Even worse, in C, the reverse conversion from void* to void** is also implicit, and you would be allowed to call FT_OpenEx(..., ftH), probably resulting in an immediate access violation (aka segmentation fault) and possibly in unpredictable memory corruption. At least C++ got this right... but void* is still not conducive to strong type checking.
Sorry, I'm stuck with VS2013, but I don't think that is the problem; the same code compiles correctly on Linux. I assume I need to define uint rather than edit 100+ lines of code.
I am getting "error : explicit type is missing ("int" assumed)" on the first line of the code below:
__device__ uint inline get_smid(void)
{
uint ret;
asm("mov.u32 %0, %%smid ;" : "=r"(ret) );
return ret;
}
In the project properties there is only CUDA => Host => Preprocessor Definitions, where I put:
WIN32;uint="unsigned int"
This seemed to fix the "assumed int" error, but now I am getting "error : expected a declaration".
Replacing uint with unsigned int in the source compiles without error, but there is a lot of uint and that change breaks the Linux build. Is more required besides 'uint="unsigned int"'? Maybe there is a switch to make NVCC accept uint without an error?
Just discovered there is a lot of ushort as well, and I am guessing it is the same problem. Also, looking at the Linux build, the sources were compiled with gcc but the link was done with nvcc, so there is a difference.
====sample CUDA has ushort====
I must not have set up the includes correctly, as these types are probably fine to use.
I gave up trying to define uint or ushort on the command line, because the sample CUDA programs had
typedef unsigned int uint;
typedef unsigned short ushort;
so I just put those in each .cu file that needed them.
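A slightly cleaner variant (a sketch; the header name is made up) is to collect the typedefs in one shared header, guarded so they don't clash with the definitions that glibc provides on Linux:
// cuda_compat.h (hypothetical name)
#pragma once

#ifdef _MSC_VER
// MSVC does not predefine uint/ushort; on Linux they come from <sys/types.h>.
typedef unsigned int   uint;
typedef unsigned short ushort;
#endif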
I have a very big problem... I started programming with Windows.h this morning, but I can't figure out why it gives me this error, as I literally copied the code from the tutorial (https://youtu.be/8GCvZs55mEM?t=5m20s) (the link starts the video at the point where my error occurs).
The only thing I noticed is that the tutorial uses an LPCSTR variable for the text, but my editor (Visual Studio Code) flags an LPCWSTR.
Sorry for the bad English.
#include <windows.h>
using namespace std;
int WINAPI WinMain (HINSTANCE hInts, HINSTANCE hPrevInst, LPSTR args, int ncmdshow)
{
MessageBox(NULL, "Ciao!", "La prima GUI", MB_OK);
return 0;
}
How can I solve this?
In a comment:
Now the error doesn't show up, thanks a lot. But there is a problem... the editor doesn't build the application. The console gives:
Executing task: g++ -g main.cpp -o Program <
main.cpp: In function 'int WinMain(void *, void *, char *, int)':
main.cpp:8: passing '__wchar_t *' as argument 2 of 'MessageBox(void *, const char *, const char *, UINT)'
Obviously not a good tutorial. Do it like this:
MessageBox(NULL, L"Ciao!", L"La prima GUI", MB_OK);
Using the L prefix changes the string literal so that it uses wide characters. A wide-character string literal can be converted to LPCWSTR; a normal string literal cannot.
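For what it's worth, a common alternative is the TEXT() macro, which expands to a wide literal when UNICODE is defined and a narrow one otherwise, so the same source builds in either configuration:
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInst, LPSTR args, int ncmdshow)
{
    // TEXT("...") becomes L"..." in Unicode builds and "..." in ANSI builds,
    // matching whichever of MessageBoxA/MessageBoxW the MessageBox macro picks.
    MessageBox(NULL, TEXT("Ciao!"), TEXT("La prima GUI"), MB_OK);
    return 0;
}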
I have a program I'm writing for Win64 in C++ that is executed from a parent program and needs to set its parent window to the parent program's window. The parent program passes its HWND in as a command-line argument, and I'm parsing the argument as an int (using stoi()) before casting it to an HWND. A simplified version of my code is shown below:
int parentHwnd = stoi(args[HWND_INDEX]);
SetParent(childHwnd, (HWND) parentHwnd);
However, I'm getting the following warning when compiling:
warning C4312: 'type cast': conversion from 'int' to 'HWND' of greater size
Is there a safe way to cast an int to an HWND and eliminate this warning? Or should I parse the given command-line argument as something other than an int that can safely be cast to an HWND?
When passing pointers/handles you should use std::stoull, since an HWND is pointer-sized and does not fit in an int on 64-bit Windows, and an explicit cast:
HWND parentHwnd = (HWND)std::stoull(args[HWND_INDEX]);
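A round trip between the two programs might look like this (a sketch; the helper names are made up):
#include <cstdint>   // uintptr_t
#include <string>    // std::to_wstring, std::stoull
#include <windows.h>

// Parent side: serialize the HWND as an unsigned pointer-sized integer.
std::wstring MakeHwndArg(HWND hWndParent)   // hypothetical helper
{
    return std::to_wstring(reinterpret_cast<uintptr_t>(hWndParent));
}

// Child side: parse it back; uintptr_t is wide enough for a pointer on
// both x86 and x64, so the cast cannot truncate.
HWND ParseHwndArg(const std::wstring& arg)  // hypothetical helper
{
    return reinterpret_cast<HWND>(static_cast<uintptr_t>(std::stoull(arg)));
}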
So I was following instructions in a book to create a D3D object. However, when I tried to compile the code, I got some weird errors in d3d11shader.h.
Inside d3d11shader.h:
#include "d3dcommon.h"
//other stuff
typedef struct _D3D11_SIGNATURE_PARAMETER_DESC
{
LPCSTR SemanticName;
UINT SemanticIndex;
UINT Register;
D3D_NAME SystemValueType;
D3D_REGISTER_COMPONENT_TYPE ComponentType;
BYTE Mask;
BYTE ReadWriteMask;
UINT Stream;
D3D_MIN_PRECISION MinPrecision; //Errors here
} D3D11_SIGNATURE_PARAMETER_DESC;
The errors in detail:
(1) Error 29 error C2146: syntax error : missing ';' before identifier 'MinPrecision' c:\program files (x86)\windows kits\8.0\include\um\d3d11shader.h 54
(2) Error 30 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files (x86)\windows kits\8.0\include\um\d3d11shader.h 54
It's weird because both d3dcommon.h and d3d11shader.h are headers shipped with the Windows SDK and DirectX SDK, so I can't change anything in them. Can anyone tell me how to fix this? Any input is appreciated.
Here is the code in my Source.cpp, directly copied from the book 3D Game Programming with DirectX 11 by Frank Luna.
#include "d3dApp.h"//This header file is provided by the book
class InitDirect3DApp : public D3DApp
{
public:
InitDirect3DApp(HINSTANCE hInstance);
~InitDirect3DApp();
bool Init();
void OnResize();
void UpdateScene(float dt);
void DrawScene();
};
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE prevInstance,
PSTR cmdLine, int showCmd)
{
// Enable run-time memory check for debug builds.
#if defined(DEBUG) | defined(_DEBUG)
_CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
#endif
InitDirect3DApp theApp(hInstance);
if( !theApp.Init() )
return 0;
return theApp.Run();
}
InitDirect3DApp::InitDirect3DApp(HINSTANCE hInstance)
: D3DApp(hInstance)
{
}
InitDirect3DApp::~InitDirect3DApp()
{
}
bool InitDirect3DApp::Init()
{
if(!D3DApp::Init())
return false;
return true;
}
void InitDirect3DApp::OnResize()
{
D3DApp::OnResize();
}
void InitDirect3DApp::UpdateScene(float dt)
{
}
void InitDirect3DApp::DrawScene()
{
assert(md3dImmediateContext);
assert(mSwapChain);
md3dImmediateContext->ClearRenderTargetView(mRenderTargetView, reinterpret_cast<const float*>(&Colors::Blue));
md3dImmediateContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH|D3D11_CLEAR_STENCIL, 1.0f, 0);
HR(mSwapChain->Present(0, 0));
}
Another weird thing I've noticed: sometimes when I go to the definition of D3D_MIN_PRECISION and it takes me to d3dcommon.h, the code in d3dcommon.h is not highlighted at all. Visual Studio highlights all other files automatically, but not this one, so I assume it sometimes doesn't recognize the code in this particular header.
Also, as I tried to compile just now, another error popped up in addition to the previous two:
31 IntelliSense: identifier "D3D_MIN_PRECISION" is undefined c:\Program Files (x86)\Windows Kits\8.0\Include\um\d3d11shader.h 54
Just for your information, I'm using VS 2012 on a Windows 8.1 machine. Thanks a lot!
Your problem is most likely that you are mixing the Windows 8.x SDK and legacy DirectX SDK headers incorrectly.
Historically you would set the INCLUDE and LIB paths so that DXSDK_DIR came first, but that only worked when the DirectX SDK headers were current. Now that they are outdated, you need to put WindowsSdkDir before DXSDK_DIR (assuming you need something like D3DX, which is deprecated and only available in the legacy DirectX SDK; otherwise you wouldn't need the DirectX SDK at all).
With VS 2012's "v110" or VS 2013's "v120" Platform Toolset, if you also need to use the DirectX SDK, you use VC++ Directory settings for Include and Lib of:
$(IncludePath);$(DXSDK_DIR)Include
$(LibraryPath);$(DXSDK_DIR)Lib\<x86 or x64>
See MSDN, Living Without D3DX, XInput, and XAudio
BTW, if you were trying to use VS 2012's "v110_xp" or VS 2013's "v120_xp" Platform Toolset, it works more like it did in the old days and would use the traditional order. See this blog post.
$(DXSDK_DIR)Include;$(IncludePath);
$(DXSDK_DIR)Lib\<x86 or x64>;$(LibraryPath)