I2C error when using the Windows Monitor Configuration Functions - C++

I'm attempting to get/set the brightness level of the monitor through the Windows API. I've tried both the Low-Level Monitor Configuration Functions and the High-Level Monitor Configuration Functions, but they both seem to be breaking in the same place. In both cases I have no problem getting the HMONITOR handle and getting the physical monitor handle from the HMONITOR, but once I try to query the DDC/CI capabilities, I get an error saying "An error occurred while transmitting data to the device on the I2C bus."
The particular functions that fail are GetMonitorCapabilities for the high-level API and GetCapabilitiesStringLength for the low-level API; both fail with the same error.
This leads me to believe that maybe my monitor doesn't support DDC/CI, but I know my laptop's monitor brightness can be changed through the Control Panel, so it must be controllable through software somehow. I can also successfully use the WMI classes in a PowerShell script to get/set the brightness as described on this page. Most things I've read suggest that most modern laptop screens do support DDC/CI.
Is there any way to find out what is causing this error or to get more information about it? I'm currently working in C++ in Visual Studio 2013 on Windows 7. I could probably use WMI in my C++ program if I can't get this current method working, but I thought I would ask here first.
Here's the code I currently have:
#include "stdafx.h"
#include <windows.h>
#include <highlevelmonitorconfigurationapi.h>
#include <lowlevelmonitorconfigurationapi.h>
#include <physicalmonitorenumerationapi.h>
#include <iostream>
#include <string>
int _tmain(int argc, _TCHAR* argv[])
{
    DWORD minBrightness, curBrightness, maxBrightness;
    HWND curWin = GetConsoleWindow();
    if (curWin == NULL) {
        std::cout << "Problem getting a handle to the window." << std::endl;
        return 1;
    }

    // Call MonitorFromWindow to get the HMONITOR handle
    HMONITOR curMon = MonitorFromWindow(curWin, MONITOR_DEFAULTTONULL);
    if (curMon == NULL) {
        std::cout << "Problem getting the display monitor" << std::endl;
        return 1;
    }

    // Call GetNumberOfPhysicalMonitorsFromHMONITOR to get the needed array size
    DWORD monitorCount;
    if (!GetNumberOfPhysicalMonitorsFromHMONITOR(curMon, &monitorCount)) {
        std::cout << "Problem getting the number of physical monitors" << std::endl;
        return 1;
    }

    // Call GetPhysicalMonitorsFromHMONITOR to get a handle to the physical monitor
    LPPHYSICAL_MONITOR physicalMonitors = (LPPHYSICAL_MONITOR)malloc(monitorCount * sizeof(PHYSICAL_MONITOR));
    if (physicalMonitors == NULL) {
        std::cout << "Unable to malloc the physical monitor array." << std::endl;
        return 1;
    }
    if (!GetPhysicalMonitorsFromHMONITOR(curMon, monitorCount, physicalMonitors)) {
        std::cout << "Problem getting the physical monitors." << std::endl;
        return 1;
    }
    std::cout << "Num Monitors: " << monitorCount << std::endl; // This prints '1' as expected.
    wprintf(L"%s\n", physicalMonitors[0].szPhysicalMonitorDescription); // This prints "Generic PnP Monitor" as expected

    // Call GetMonitorCapabilities to find out which functions it supports
    DWORD monCaps;
    DWORD monColorTemps;
    // The following function call fails with the error "An error occurred while
    // transmitting data to the device on the I2C bus."
    if (!GetMonitorCapabilities(physicalMonitors[0].hPhysicalMonitor, &monCaps, &monColorTemps)) {
        // Capture the error code before any other call can overwrite it.
        DWORD errNum = GetLastError();
        std::cout << "Problem getting the monitor's capabilities." << std::endl;
        DWORD flags = FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS;
        LPVOID buffer;
        FormatMessage(flags, NULL, errNum, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPWSTR)&buffer, 0, NULL);
        wprintf(L"%s\n", buffer);
        LocalFree(buffer); // FORMAT_MESSAGE_ALLOCATE_BUFFER allocates with LocalAlloc
        return 1;
    }
    // Same error when I use GetCapabilitiesStringLength(...) here.
    // More code that is currently never reached...
    return 0;
}
Edit: I should also note that physicalMonitors[0].hPhysicalMonitor is 0, even though the monitor count and text description are valid and GetPhysicalMonitorsFromHMONITOR returns successfully. Any thoughts on why this might be?

This is a "wonky hardware" problem; the I2C bus it talks about is the logical interconnect between the video adapter and the display monitor, primarily useful for plug & play. The underlying error code is 0xC01E0582, STATUS_GRAPHICS_I2C_ERROR_TRANSMITTING_DATA. It is generated by the DxgkDdiI2CTransmitDataToDisplay() helper function in the video miniport driver. It is the vendor's video driver's job to configure the bus, providing the functions that tickle it and implementing the IOCTL underlying GetMonitorCapabilities().
Clearly you are in device-driver land here; there isn't anything you can do about this failure in your C++ program. You can randomly spin the wheel of fortune by looking for a driver update from the video adapter manufacturer. But there are non-zero odds that the monitor is at fault here. Try another one first.

I know it's late to reply, but I thought you should know.
The problem you are facing is caused by DDC/CI being disabled on your monitor. Go into the monitor's settings and check whether DDC/CI is disabled; if it is, enable it and run your code again. It should work. If you can't find a DDC/CI option (some monitors have a separate button for enabling/disabling DDC/CI; for example, BenQ's T71W monitor has a dedicated down-arrow button for it), refer to your monitor's manual or contact the manufacturer.
Hope that helps, and sorry for the late reply.
Best of luck. :)

As I read the original question, the poster wanted to control a laptop display using DDC/CI. Laptop displays do not support DDC/CI. They provide a stripped-down I2C bus sufficient to read the EDID at slave address 0x50, but that's it.
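Since laptop panels generally expose brightness through WMI rather than DDC/CI, the WMI route the asker already uses from PowerShell also works from C++. Below is a minimal, hedged sketch (not the asker's code) that calls WmiSetBrightness on the documented WmiMonitorBrightnessMethods class in the root\wmi namespace; the 75-percent target is an arbitrary example and most error handling is omitted for brevity.
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    // Standard COM/WMI boilerplate: initialise COM and connect to root\wmi.
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
        RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

    IWbemLocator* locator = NULL;
    CoCreateInstance(CLSID_WbemLocator, NULL, CLSCTX_INPROC_SERVER,
        IID_IWbemLocator, (LPVOID*)&locator);

    IWbemServices* services = NULL;
    locator->ConnectServer(_bstr_t(L"ROOT\\WMI"), NULL, NULL, NULL, 0,
        NULL, NULL, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
        RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE);

    // One WmiMonitorBrightnessMethods instance exists per supported panel.
    IEnumWbemClassObject* enumerator = NULL;
    services->CreateInstanceEnum(_bstr_t(L"WmiMonitorBrightnessMethods"),
        WBEM_FLAG_FORWARD_ONLY, NULL, &enumerator);

    IWbemClassObject* instance = NULL;
    ULONG returned = 0;
    while (enumerator->Next(WBEM_INFINITE, 1, &instance, &returned) == S_OK) {
        // Build the input parameters for WmiSetBrightness(Timeout, Brightness).
        IWbemClassObject* classObj = NULL;
        IWbemClassObject* inParamsDef = NULL;
        IWbemClassObject* inParams = NULL;
        services->GetObject(_bstr_t(L"WmiMonitorBrightnessMethods"), 0, NULL,
            &classObj, NULL);
        classObj->GetMethod(L"WmiSetBrightness", 0, &inParamsDef, NULL);
        inParamsDef->SpawnInstance(0, &inParams);

        VARIANT v;
        VariantInit(&v);
        v.vt = VT_UI1;
        v.bVal = 75;                      // arbitrary target brightness (percent)
        inParams->Put(L"Brightness", 0, &v, 0);
        v.vt = VT_I4;                     // CIM uint32 travels as VT_I4
        v.lVal = 0;                       // timeout of 0 = apply immediately
        inParams->Put(L"Timeout", 0, &v, 0);

        // The method must be executed against the instance path, not the class.
        VARIANT path;
        VariantInit(&path);
        instance->Get(L"__PATH", 0, &path, NULL, NULL);
        HRESULT hr = services->ExecMethod(path.bstrVal,
            _bstr_t(L"WmiSetBrightness"), 0, NULL, inParams, NULL, NULL);
        std::cout << (SUCCEEDED(hr) ? "Brightness set\n" : "ExecMethod failed\n");

        VariantClear(&path);
        inParams->Release();
        inParamsDef->Release();
        classObj->Release();
        instance->Release();
    }

    enumerator->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}
Reading the current level works the same way through the companion WmiMonitorBrightness class in the same namespace.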

Can't figure out how to call IOCTL_STORAGE_MANAGE_DATA_SET_ATTRIBUTES IOCTL on Windows - INVALID_PARAMETERS

Greetings!
I have come today to ask a question about invoking a very specific IOCTL on Windows. I have some amount of driver development experience, but my experience with file system drivers is relatively limited.
The Goal
I am developing a tool that manages volumes/physical disks/partitions. For this purpose I am learning to invoke many of the Windows file system data set management (DSM) IOCTLs. Currently I am learning how to use IOCTL_STORAGE_MANAGE_DATA_SET_ATTRIBUTES, which is documented at https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-ioctl_storage_manage_data_set_attributes?redirectedfrom=MSDN.
However, I have had to intuit how to set up the call to the IOCTL myself. The MSDN article does not give fully detailed instructions on how to set up the input buffer, and specifically which input values are strictly required. This uncertainty about how to call the IOCTL has led to a bug I cannot easily debug.
In order to reduce my uncertainty about proper invocation of the IOCTL, I worked off a tool Microsoft released a few years ago and copied some of its code: https://github.com/microsoft/StorScore/blob/7cbe261a7cad74f3a4f758c2b8a35ca552ba8dde/src/StorageTool/src/_backup.c
My Code
At first I tried:
#include <windows.h>
#include <stdio.h>
#include <string>
#include <iostream>
#include <winnt.h>
#include <winternl.h>
#include <ntddstor.h>
int main(int argc, const char* argv[]) {
    //My understanding is for this IOCTL I need to open the drive, not the object PartmgrControl device that the driver registers.
    HANDLE hDevice = CreateFile(L"\\\\.\\Physicaldrive0",
        GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_NO_BUFFERING,
        NULL);
    int cf_error = 0;
    cf_error = GetLastError();
    if (hDevice == INVALID_HANDLE_VALUE) {
        std::cout << "COULDN'T GET HANDLE";
        return -1;
    }
    std::cout << "Device Handle error: " << cf_error << "\n";
    std::cout << "Handle value: " << hDevice << "\n";

    _DEVICE_MANAGE_DATA_SET_ATTRIBUTES attributes_struct;
    LPDWORD BytesReturned = 0;
    int inputbufferlength = 0;
    inputbufferlength = sizeof(DEVICE_MANAGE_DATA_SET_ATTRIBUTES) + sizeof(_DEVICE_DSM_OFFLOAD_WRITE_PARAMETERS) + sizeof(DEVICE_DATA_SET_RANGE);
    PUCHAR inputbuffer = (PUCHAR)malloc(inputbufferlength);
    PUCHAR outputbuffer = (PUCHAR)malloc(inputbufferlength);
    //RtlZeroMemory(inputbuffer, inputbufferlength);

    PDEVICE_MANAGE_DATA_SET_ATTRIBUTES dsmAttributes = (PDEVICE_MANAGE_DATA_SET_ATTRIBUTES)inputbuffer;
    PDEVICE_DSM_OFFLOAD_WRITE_PARAMETERS offload_write_parameters = NULL;
    dsmAttributes->Size = sizeof(DEVICE_MANAGE_DATA_SET_ATTRIBUTES);
    dsmAttributes->Action = DeviceDsmAction_OffloadWrite;
    dsmAttributes->Flags = 0;
    dsmAttributes->ParameterBlockOffset = sizeof(DEVICE_MANAGE_DATA_SET_ATTRIBUTES);
    dsmAttributes->ParameterBlockLength = sizeof(DEVICE_DSM_OFFLOAD_WRITE_PARAMETERS);

    offload_write_parameters = (PDEVICE_DSM_OFFLOAD_WRITE_PARAMETERS)((PUCHAR)dsmAttributes + dsmAttributes->ParameterBlockOffset);
    offload_write_parameters->Flags = 0;
    offload_write_parameters->TokenOffset = 0;

    dsmAttributes->DataSetRangesOffset = dsmAttributes->ParameterBlockOffset + dsmAttributes->ParameterBlockLength;
    dsmAttributes->DataSetRangesLength = sizeof(DEVICE_DATA_SET_RANGE);

    PDEVICE_DATA_SET_RANGE lbaRange = NULL;
    lbaRange = (PDEVICE_DATA_SET_RANGE)((PUCHAR)dsmAttributes + dsmAttributes->DataSetRangesOffset);
    lbaRange->StartingOffset = 0; // not sure about this one for now
    lbaRange->LengthInBytes = 256 * 1024 * 1024;

    int status = DeviceIoControl(
        hDevice,                                   // handle to device
        IOCTL_STORAGE_MANAGE_DATA_SET_ATTRIBUTES,  // dwIoControlCode
        inputbuffer,                               // input buffer
        inputbufferlength,                         // size of the input buffer
        outputbuffer,                              // output buffer
        inputbufferlength,                         // size of the input buffer - modified to be too small!
        BytesReturned,                             // number of bytes returned
        0 //(LPOVERLAPPED) &overlapped_struct      // OVERLAPPED structure
    );

    DWORD error_num = GetLastError();
    CloseHandle(hDevice);
    std::cout << "STATUS IS: " << status << "\n";
    std::cout << "ERROR IS: " << error_num;
    return 0;
}
But this returned error 87 (ERROR_INVALID_PARAMETER) when I attempted the call.
My instinct was to debug the IOCTL by placing a breakpoint on partmgr!PartitionIoctlDsm, since I was under the impression the targeted IOCTL handler was returning the error. However, my breakpoint was never hit. So I moved on to placing a breakpoint on the driver's device-control dispatch routine itself:
bp partmgr!PartitionDeviceControl
But that breakpoint is never hit either. So something else, before the request reaches the driver, is returning the error.
The Question(s)
How should I go about debugging this? How do I figure out which driver is returning the error?
What is the correct way to invoke this IOCTL without triggering errors?
Why am I getting this error?
Additional information
To be absolutely clear, I am dead set on using this particular IOCTL. This is a learning exercise, and I am not interested in using alternative or easier-to-use functionality to achieve the same effect. My curiosity lies in figuring out why the I/O manager won't let me call the function.
I am running this code as admin.
I am running this code in a virtual machine.
I am debugging with WinDbg Preview over a COM port.
Through some sleuthing I believe this is a filter driver, and that other drivers can intercept and handle this request.
Let me know if there is any other information I can provide.

How to create an OpenGL context in a NodeJS native addon on macOS?

Follow-up for this question.
I'm trying to create a NodeJS native addon that uses OpenGL.
I'm not able to use OpenGL functions because CGLGetCurrentContext() always returns NULL.
When trying to create a new context to draw into, CGLChoosePixelFormat always returns the error kCGLBadConnection ("invalid CoreGraphics connection").
What is bugging me is that when I isolate the code that creates the OpenGL context into a standalone C++ project, it works! It only errors out when I run it inside the NodeJS addon!
I created this NodeJS native addon project to exemplify my error: https://github.com/Psidium/node-opengl-context-error-example
This is the code that works when executed on a standalone project and errors out when running inside NodeJS:
//
// main.cpp
// test_cli
//
// Created by Borges, Gabriel on 4/3/20.
// Copyright © 2020 Psidium. All rights reserved.
//
#include <iostream>
#include <OpenGL/OpenGL.h>
int main(int argc, const char * argv[]) {
    std::cout << "Context before creating it: " << CGLGetCurrentContext() << "\n";
    CGLContextObj context;
    CGLPixelFormatAttribute attributes[2] = {
        kCGLPFAAccelerated, // no software rendering
        (CGLPixelFormatAttribute) 0
    };
    CGLPixelFormatObj pix;
    CGLError errorCode;
    GLint num; // stores the number of possible pixel formats
    errorCode = CGLChoosePixelFormat( attributes, &pix, &num );
    if (errorCode > 0) {
        std::cout << ": Error returned by choosePixelFormat: " << errorCode << "\n";
        return 10;
    }
    errorCode = CGLCreateContext( pix, NULL, &context );
    if (errorCode > 0) {
        std::cout << ": Error returned by CGLCreateContext: " << errorCode << "\n";
        return 10;
    }
    CGLDestroyPixelFormat( pix );
    errorCode = CGLSetCurrentContext( context );
    if (errorCode > 0) {
        std::cout << "Error returned by CGLSetCurrentContext: " << errorCode << "\n";
        return 10;
    }
    std::cout << "Context after being created is: " << CGLGetCurrentContext() << "\n";
    return 0;
}
I already tried:
Using fork() to create a context in a subprocess (didn't work);
Changing the pixel format attributes to something that would create my context (didn't work);
I have a hunch that it may have something to do with the fact that a Node native addon is a dynamically linked library, or maybe my OpenGL createContext function may not be executing on the main thread (but if this was the case, the fork() would have solved it, right?).
Accessing graphics hardware requires extra permissions: Windows and macOS (I don't know about others) restrict creation of a hardware-accelerated OpenGL context to an interactive user session (I may be wrong with the terms here). From one of the articles on the web:
In case the user is not logged in, CGLChoosePixelFormat will return kCGLBadConnection
An interactive session is easier to feel than to understand: when you log in interactively and launch an application, that's an interactive session; when a process is started as a service, it is non-interactive. How this is actually managed by the system requires deeper reading. As far as I know, there is no easy way of "escaping" the non-interactive process flag.
NodeJS can be used as part of a web server, so I would expect this to be exactly the problem: it is started as a service, by another non-interactive user, or under other special conditions making it non-interactive. More details on how you use / start NodeJS itself might clarify why the code doesn't work. I would also say that using OpenGL on the server side might not be a good idea anyway (if that is the target), although a software OpenGL implementation (without the kCGLPFAAccelerated flag) might work.
By the way, there are at least two OpenGL / WebGL extensions to NodeJS; have you tried their samples to see whether they behave the same or differently in your environment?
https://github.com/google/node-gles
https://github.com/mikeseven/node-webgl
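To make the software-rendering suggestion concrete, here is a minimal sketch of the question's context-creation code with the kCGLPFAAccelerated attribute removed, leaving CGL free to pick the software renderer; whether that succeeds in a non-interactive session is exactly the assumption to test.
#include <OpenGL/OpenGL.h>
#include <iostream>

int main() {
    // No kCGLPFAAccelerated: CGL may now offer the software renderer,
    // which may not need an interactive window-server connection.
    CGLPixelFormatAttribute attributes[1] = {
        (CGLPixelFormatAttribute) 0
    };
    CGLPixelFormatObj pix = NULL;
    GLint num = 0;
    CGLError errorCode = CGLChoosePixelFormat(attributes, &pix, &num);
    if (errorCode != kCGLNoError) {
        std::cout << "CGLChoosePixelFormat failed: " << errorCode << "\n";
        return 10;
    }
    CGLContextObj context = NULL;
    errorCode = CGLCreateContext(pix, NULL, &context);
    CGLDestroyPixelFormat(pix);
    if (errorCode != kCGLNoError) {
        std::cout << "CGLCreateContext failed: " << errorCode << "\n";
        return 10;
    }
    CGLSetCurrentContext(context);
    std::cout << "Software context: " << CGLGetCurrentContext() << "\n";
    CGLDestroyContext(context);
    return 0;
}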

How to know if a device has been explicitly disabled by the user?

Using Device Manager, a user can explicitly enable or disable a device, as can be seen in the following image.
For a given device I want to know if it's currently in a user disabled/enabled state.
I have tried the following approaches:
CM_Get_DevNode_Status(&status, &problem, data.DevInst, 0); — I was hoping that the presence of DN_STARTED or DN_DRIVER_LOADED would tell me this, but these can be zero even while a driver is being loaded/unloaded by the OS as the device connects/disconnects. For example, take a device which is enabled and whose driver is loaded: DN_STARTED will be 1, but when we disconnect the device it is set to zero before the device's entry is removed from Device Manager.
SetupDiGetDeviceRegistryProperty(..., SPDRP_INSTALL_STATE, ...) — I thought a state of CM_INSTALL_STATE_INSTALLED should mean that the device is enabled, but the function returns this state even for disabled devices.
Using WMI I was able to get the required information, but only from PowerShell. I do not want to use WMI, as it is quite cumbersome to use from native C++. I used the following query:
Select Name, Availability, ConfigManagerErrorCode, ConfigManagerUserConfig from Win32_PnPEntity where Name = 'NVIDIA Quadro M1000M'
ConfigManagerErrorCode in the above query, if set to 22, means that the device has been disabled; 21 means that Windows is removing the device.
I am looking for a non-WMI solution.
The information can be obtained from a device's problem code. There are two ways I could find to get it:
Use SetupDiGetDeviceProperty() to query DEVPKEY_Device_ProblemCode.
Use CM_Get_DevNode_Status(); after the call, the problem code will be present in the second argument.
A problem code of 22 (CM_PROB_DISABLED) means that the device has been explicitly disabled by a user, using Device Manager or some other such utility.
Sample code
#include "stdafx.h"
#include <Windows.h>
#include <SetupAPI.h>
#include <Cfgmgr32.h>
#include <initguid.h> // must come before devguid.h so the GUIDs get defined
#include <devguid.h>
#include "devpkey.h"
#include <algorithm>
#include <iostream>

using namespace std;

int main()
{
    HDEVINFO hDevInfo = ::SetupDiGetClassDevs(&GUID_DEVCLASS_DISPLAY, NULL, NULL, 0); // only getting GPUs on the machine
    if (INVALID_HANDLE_VALUE != hDevInfo)
    {
        SP_DEVINFO_DATA data;
        data.cbSize = (DWORD)sizeof(data);
        for (unsigned int nIndex = 0; ::SetupDiEnumDeviceInfo(hDevInfo, nIndex, &data); nIndex++)
        {
            ULONG status = 0, problem = 0;
            CONFIGRET cr = ::CM_Get_DevNode_Status(&status, &problem, data.DevInst, 0); // after the call, 'problem' will hold the problem code
            if (CR_SUCCESS == cr)
            {
                cout << " problem " << problem << endl;
                if (problem == CM_PROB_DISABLED)
                { /*Do Something*/ }

                DEVPROPTYPE propertyType;
                const DWORD propertyBufferSize = 100;
                BYTE propertyBuffer[propertyBufferSize];
                std::fill(begin(propertyBuffer), end(propertyBuffer), BYTE(0));
                DWORD requiredSize = 0;
                if (SetupDiGetDeviceProperty(hDevInfo, &data, &DEVPKEY_Device_ProblemCode, &propertyType, propertyBuffer, propertyBufferSize, &requiredSize, 0)) // after the call, 'propertyBuffer' will hold the problem code
                {
                    unsigned long deviceProblemCode = *((unsigned long*)propertyBuffer);
                    cout << " deviceProblemCode " << deviceProblemCode << endl;
                    if (deviceProblemCode == CM_PROB_DISABLED)
                    { /*Do Something*/ }
                }
            }
        }
    }
    return 0;
}
Sample Output
problem 0
deviceProblemCode 0
problem 22
deviceProblemCode 22
In the question's screenshot it can be seen that Intel(R) HD Graphics 530 was enabled and NVIDIA Quadro M1000M was disabled. Hence, in the output we got a problem code of 0 for the former and 22 (CM_PROB_DISABLED) for the latter.

Writing Memory to Process

I'm trying to write a float value into a video game emulator called Dolphin. I have a spreadsheet of memory addresses for a specific game, and I'd like to be able to change the values at those addresses. Dolphin has a debug setting and I'm able to look at the game's memory on the fly, but whenever I run my program the value at the memory address does not change.
#include <cstdlib>
#include <iostream>
#include <windows.h>
using namespace std;
/*
*
*/
int main(int argc, char** argv) {
    float newValue = 22;
    HWND hWnd = FindWindow(0, "Dolphin 5.0-321");
    if (hWnd == 0) {
        cerr << "Cannot Find Window." << endl;
    } else {
        DWORD pId;
        GetWindowThreadProcessId(hWnd, &pId);
        HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pId);
        if (!hProc) {
            cerr << "Cannot Open Process." << endl;
        } else {
            int isSuccessful = WriteProcessMemory(hProc, (LPVOID)0x81118DF0, &newValue, (DWORD)sizeof(newValue), NULL);
            if (isSuccessful) {
                clog << "Process Memory Written." << endl;
            } else {
                cerr << "Cannot Write Process Memory." << endl;
            }
            CloseHandle(hProc);
        }
    }
    return 0;
}
Dolphin is an emulator; if you're using its debug function, it may not be showing you actual host memory addresses, but rather addresses relative to the emulated system, not your Windows operating system. Think of it kind of like a virtual machine: the Dolphin emulator is running a guest operating system, and you're writing to memory from the host Windows OS.
If you've located those addresses by reversing the game from the host OS, then you can write to those addresses without much issue.
There is no guarantee that writing to the variable's address will have the effect you're looking for; it's possible that it gets overwritten by some other code.
Lastly, if you have the correct address in the virtual memory of the Dolphin process and you have tested that it changes correctly with a tool like Cheat Engine, the last problem that could arise is that you're not running your C++ project as administrator. Once you do so, it should work without issue.
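To illustrate the guest-versus-host point, here is a hedged sketch of the question's program with the address translated first. The MEM1 base value is a made-up placeholder: you would find the real host-side base of Dolphin's emulated RAM for your own setup with a tool such as Cheat Engine.
#include <windows.h>
#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical host-side base of Dolphin's emulated MEM1. This value is
    // NOT fixed; locate it for your own setup by scanning with Cheat Engine.
    const uintptr_t kEmulatedRamBase = 0x7FFF0000; // placeholder assumption

    // Address from the question's spreadsheet; 0x80000000 is where the
    // GameCube/Wii maps the start of MEM1 in the guest's address space.
    const uint32_t kGuestAddress = 0x81118DF0;
    uintptr_t hostAddress = kEmulatedRamBase + (kGuestAddress - 0x80000000);

    float newValue = 22.0f;
    HWND hWnd = FindWindowA(0, "Dolphin 5.0-321"); // narrow-string variant
    if (hWnd == 0) { std::cerr << "Cannot Find Window.\n"; return 1; }

    DWORD pId = 0;
    GetWindowThreadProcessId(hWnd, &pId);
    HANDLE hProc = OpenProcess(PROCESS_VM_WRITE | PROCESS_VM_OPERATION, FALSE, pId);
    if (!hProc) { std::cerr << "Cannot Open Process.\n"; return 1; }

    SIZE_T written = 0;
    if (WriteProcessMemory(hProc, (LPVOID)hostAddress, &newValue,
                           sizeof(newValue), &written)) {
        std::clog << "Process Memory Written.\n";
    } else {
        std::cerr << "Cannot Write Process Memory.\n";
    }
    CloseHandle(hProc);
    return 0;
}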

C++ Serial Port Question

Problem:
I have a handheld device that scans those graphic color barcodes found on all packaging. There is a track device that I can use to slide the scanner automatically. This track device is driven by ASCII commands sent over a serial port. I need to get this thing to work in FileMaker on a Mac, so no terminal programs, etc.
What I've got so far:
I bought a Keyspan USB/serial adapter. Using a program called ZTerm, I was able to send commands to the device.
Example:
"C,7^M^J"
I was also able to do the same thing in Terminal using this command: screen /dev/tty.KeySerial1 57600
and then typing in the same command as above (but when I typed it in, I just hit Control-M and Control-J for the carriage return and line feed).
Now I'm writing a plug-in for FileMaker (in C++, of course). I want to reproduce what I did above in C++, so that when I install the plug-in in FileMaker I can just call one of those functions and have the whole process take place right there.
I'm able to connect to the device, but I can't talk to it. It does not respond to anything.
I've tried connecting to the device(successfully) using these:
FILE *comport;
if ((comport = fopen("/dev/tty.KeySerial1", "w")) == NULL){...}
and
int fd;
fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY);
This is what I've tried so far by way of talking to the device:
fputs ("C,7^M^J",comport);
or
fprintf(comport,"C,7^M^J");
or
char buffer[] = { 'C' , ',' , '7' , '^' , 'M' , '^' , 'J' };
fwrite (buffer , 1 , sizeof(buffer) , comport );
or
fwrite('C,7^M^J', 1, 1, comport);
Questions:
When I connected to the device from Terminal and from ZTerm, I was able to set a baud rate of 57600. I think the baud rate may be why it isn't responding here, but I don't know how to set it in code. Does anyone know how to do that? I tried this, but it didn't work:
comport->BaudRate = 57600;
There are a lot of class-based solutions out there, but they all include files like termios.h and stdio.h. I don't have these, and for whatever reason I can't find them to download. I've downloaded a few examples, but there are like 20 files in them and they all include other files I can't find (like the ones listed above). Do I need to find these, and if so, where? I just don't know enough about C++. Is there a website where I can download libraries?
Another solution might be to invoke those terminal commands from C++. Is there a way to do that?
This has been driving me crazy. I'm not a C++ guy; I only know basic programming concepts. Is anyone out there a C++ expert? Ideally I'd like this to just work using the functions I already have, like the fwrite and fputs stuff.
Thanks!
Sending a ^ and then an M doesn't send Control-M; that's just the way the character is written. To send a control character, the easiest way is to just use the ASCII control code.
P.S. ^M is carriage return, i.e. "\r", and ^J is line feed, i.e. "\n".
Edit: Probably more than you will (hopefully) ever need to know, but read The Serial Port Howto before going any further.
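Concretely, the question's fwrite attempt works once the control characters are written as escape sequences. A tiny sketch, reusing the device path from the question (this still leaves the baud rate at the driver's default, which the answers below address):
#include <cstdio>

int main() {
    FILE* comport = fopen("/dev/tty.KeySerial1", "w");
    if (comport == NULL) return 1;
    // "C,7" followed by real carriage-return and line-feed bytes,
    // not the two-character sequences '^','M' and '^','J'.
    const char cmd[] = "C,7\r\n";
    fwrite(cmd, 1, sizeof(cmd) - 1, comport);
    fflush(comport); // stdio buffers; push the bytes to the device now
    fclose(comport);
    return 0;
}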
This isn't really a C++ question; you're asking how to interact with the TTY driver to set the baud rate. The fact that you're opening a file under /dev tells me that you're on a Unix derivative, so the relevant man page to read on a Linux system is "man 3 termios".
Basically, you use the open() variant above and pass the file descriptor to tcsetattr()/tcgetattr().
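A minimal sketch of that flow, assuming the Keyspan device node and the 57600 baud rate from the question:
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY);
    if (fd == -1) { perror("open"); return 1; }

    termios tio;
    if (tcgetattr(fd, &tio) == -1) { perror("tcgetattr"); return 1; }
    cfmakeraw(&tio);            // raw byte stream, no line discipline
    cfsetispeed(&tio, B57600);  // 57600 baud, in and out
    cfsetospeed(&tio, B57600);
    if (tcsetattr(fd, TCSANOW, &tio) == -1) { perror("tcsetattr"); return 1; }

    // "C,7" terminated by real carriage-return/line-feed bytes.
    const char cmd[] = "C,7\r\n";
    write(fd, cmd, sizeof(cmd) - 1);
    close(fd);
    return 0;
}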
Are you sure you've installed all the compiler tools properly? On my OS X 10.5.8 Mac, termios.h and stdio.h are right there under /usr/include, just as I'd expect. The code you've already found for serial port programming on other Unix variants should only require minor changes (if any) to work on a Mac. Can you tell us a bit more about what you've tried, and what went wrong?
mgb also has a good point about how the control characters need to be represented.
You can set the baud rate with ioctl. Here's a link to an example.
You don't specify which Unix you are using, so below I'm posting some Linux production code I use.
Please note that the code below is a class method, so ignore any external (i.e. undeclared) references.
The steps are as follows:
Configure your termios structure; this is where you set any needed flags etc. (i.e. the step you accomplished using ZTerm). The termios settings below configure the port to 8 data bits, 1 stop bit and no parity (8-N-1). The port will also be in "raw" (as opposed to cooked) mode, so it delivers a plain character stream; text isn't framed into lines, etc. The baud constants match the actual value, i.e. for 57600 baud you use "57600".
The timing parameters mean that characters are returned from the device as soon as they are available.
Once you have your terminal parameters set, you open the device (using POSIX open()) and then use tcgetattr()/tcsetattr() to configure the device via the fd.
At this point you can read/write to the device using the read()/write() system calls.
Note that in the code below read() will block if no data is available, so you may want to use select()/poll() if blocking is undesirable.
Hope that helps.
// Body of the author's open_device() method; the method signature and opening
// brace were omitted in the original post. External helpers such as
// baud_constant() and std_error_msg(), and members such as fd, device, baud,
// rtscts, status, read_buffer, old_termio and device_status_msg, are
// declared elsewhere in the class.
termios termio;
tcflag_t baud_specifier;

//reset device state...
memset (&termio, 0, sizeof (termios));
read_buffer.clear();

//get our baud rate...
if (!(baud_specifier = baud_constant (baud))) {
    ostringstream txt;
    txt << "invalid baud - " << baud;
    device_status_msg = txt.str();
    status = false;
    return (true);
}

//configure device state...
termio.c_cflag = baud_specifier | CS8 | CLOCAL | CREAD;

//do we want handshaking?
if (rtscts) {
    termio.c_cflag |= CRTSCTS;
}

termio.c_iflag = IGNPAR;
termio.c_oflag = 0;
termio.c_lflag = 0;

//com port timing: no wait between characters, and read unblocks as soon as there is a character
termio.c_cc[VTIME] = 0;
termio.c_cc[VMIN] = 0;

//open device...
if ((fd = open (device.c_str(), O_RDWR | O_NOCTTY)) == -1) {
    ostringstream txt;
    txt << "open(\"" << device << "\") failed with " << errno << " - "
        << std_error_msg (errno);
    device_status_msg = txt.str();
    status = false;
    return (true);
}

//keep a copy of the current device state...
if (tcgetattr (fd, &old_termio) == -1) {
    ostringstream txt;
    txt << "tcgetattr() failed with " << errno << " - " << std_error_msg (errno);
    device_status_msg = txt.str();
    status = false;
    return (true);
}

//flush any unwanted bytes
if (tcflush (fd, TCIOFLUSH) == -1) {
    ostringstream txt;
    txt << "tcflush() failed with " << errno << " - " << std_error_msg (errno);
    device_status_msg = txt.str();
    status = false;
    return (true);
}

//apply our device config...
if (tcsetattr (fd, TCSANOW, &termio) == -1) {
    ostringstream txt;
    txt << "tcsetattr() failed with " << errno << " - " << std_error_msg (errno);
    device_status_msg = txt.str();
    status = false;
    return (true);
}

node_log_f ("successfully initialised device %s at %i baud", "open_device()",
            device.c_str(), baud);
status = true;
return (true);
}