Bad status register? - c++

Working on a project. The professor gave us a .zip file with some tests, so we can check whether our project is working correctly. We are building a small kernel in C++.
Anyhow, there is a thread that waits for a keyboard interrupt (event9.wait()) and after that it should put characters in a buffer or end the program (if you press "esc").
while (!theEnd) {
event9.wait();
status = inportb(0x64); // reading status reg. from 64h
while (status & 0x01){ //while status indicates that keys are pressed
....
I checked and I am certain that it waits for the interrupt as expected. The problem is that status & 0x01 is 0.
I also took the part of the code that reads the characters from port 0x60, tested it on its own, and it worked just fine.
Is there something wrong with the code of test files? And if yes, what? If the code is correct what could cause the problem?
I could change the test files, but I need a good reason to do so. And so far the only reason I have is that it doesn't work.
*note: comments are translated from Serbian, but I am almost certain they are translated correctly.

I think status & 0x01 is perfectly fine. However, you would need to read the status port again after reading port 0x60. It may well be that you do that later in the code, but I would personally just rewrite the loop as:
while ((status = inportb(0x64)) & 0x01){ //while status indicates that keys are pressed
....
Note that you shouldn't read port 0x64 again inside the loop in this case.
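Putting it together, a minimal sketch of the whole loop (using your theEnd, event9 and inportb(); the Esc handling via scan code 0x01 is an assumption about how you want to end the program):
while (!theEnd) {
    event9.wait();                                   // wait for the keyboard interrupt
    while ((status = inportb(0x64)) & 0x01) {        // bit 0 of the status register: output buffer full
        unsigned char scancode = inportb(0x60);      // reading port 0x60 clears that bit
        if (scancode == 0x01) {                      // 0x01 = make code for Esc
            theEnd = true;
            break;
        }
        // ... otherwise put the character into the buffer ...
    }
}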

gdb debugger unfamiliar with code displayed

I am fairly new to using the gdb debugger, so the code that was displayed when I ran gdb left me with no use for the debugger. I am unfamiliar with the code being displayed, but I did a little research and I assume I accidentally opened up a "thread"? It's hard to explain something I do not understand, but I will link a picture showing what I am talking about. Basically, I want to get back to the "basic" display of my actual code and not this: displayed by the debugger
Your program called one of the scanf family of functions with a NULL stream.
Usually this happens when you don't check for errors. For example:
FILE *fp = fopen("/file/which/does/not/exist", "r");
char ch;
fscanf(fp, "%c", &ch); /* BUG: should check fp!=NULL first. */
You should always check the return value of any function that may fail.
You can see which code called into fscanf with GDB's where command.
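A minimal sketch of the checked version of the example above (same calls, just verifying the result of fopen before using it):
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/file/which/does/not/exist", "r");
    if (fp == NULL) {
        perror("fopen");      /* report why the open failed instead of crashing in fscanf */
        return 1;
    }
    char ch;
    if (fscanf(fp, "%c", &ch) != 1) {
        /* handle a failed or short read */
    }
    fclose(fp);
    return 0;
}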

UTF-8 problems in writing a UART-Console on a microcontroller

I am currently writing a UART console on an ATMega1284p. It is supposed to echo the characters back, so that the computer-side console actually sees what is being typed, and that is it for now.
Here is the problem: with ASCII it works perfectly fine, but if I send anything beyond ASCII, e.g. a '§', my minicom shows "�§", where '�' is the invalid-character marker and '§' alone is what appears when everything works fine. Getting the combination of both throws me off, and I currently have no idea where the problem is!
Here is part of my code:
char c;
while(m_uart->recv(c) > 0) {
m_lineBuff[m_lineIndex++] = c;
if(c == '\r') {
c = '\n';
m_lineBuff[m_lineIndex++] = c;
m_sendCount = 2;
} else {
m_sendCount = 1;
}
this->send();
if(c == '\n') {
m_lineBuff[m_lineIndex++] = '\0';
// invoke some callbacks that handle the line at some point
m_lineIndex = 0;
}
}
m_lineBuff is a self-written (and tested) vector of chars. m_uart is a self-written (and also tested) driver for the micro's internal hardware UART. this->send sends m_sendCount bytes using m_uart.
What I tried so far:
I verified that the baud rates of minicom and my micro match (115200). I verified that the clock frequency is within the 2% range (the micro is running at 20 MHz). Both minicom and the micro are set up for 8N1.
I verified that minicom works by hooking it up to a little board I had lying around. On that board any UTF-8 character works just fine.
Does anyone see my mistake or does anyone have a clue at what I haven't considered?
I'll be happy to supply all of my code if you are interested in it.
EDIT/Elaboration:
Observation 1 (prior to starting this project)
The PC-side program (minicom) can send characters to, and receive characters from, the microcontroller. It does not show the sent characters, though.
Conclusion 1 (prior to starting this project)
The microcontroller side needs to send the characters back to the PC, so that you have the behaviour of a console.
Thus I immediately send back any character I get.
Observation 2 (after implementing it)
When I press '§' (or any other character consisting of more than 1 byte) (using minicom) I see "�§".
Conclusion 2 (after implementing it)
Something I can't explain with my knowledge is going on. Maybe a small delay between the two bytes making up the character leads to minicom printing a '�' first, because the first byte on its own is indeed an invalid character; when the second byte comes in, minicom realizes that it is actually '§', but it doesn't remove/overwrite the '�'.
If that is the problem, then how do I solve it? Does my microcontroller need to react faster/with less delay in between characters?
EDIT2:
I replaced the '?' with the actual character '�' using the power of copy and paste.
More tests I did
I tried the character '😹' and, as I expected (it backs my Conclusion 2), I got "���😹". '😹', by the way, is a 4-byte character.
Set the baud rate of micro and minicom to 9600: exact same behaviour.
I managed to set minicom into hex mode: it sends regularly but outputs hex... When I send '😹' I get "f0 9f 98 b9", which (at least according to this site) is correct... Is that backing my Conclusion 2? And more importantly: how do I get rid of that behaviour? It works with my little Linux board, just not with my micro.
EDIT: the OP discovered on his own that the odd behaviour is (probably) a bug in minicom itself. This post of mine clearly loses its value; unless the community thinks it should be removed, I would leave it here as a record of possible workarounds when experiencing similar problems.
tl;dr: it appears your PC application might not be interpreting UTF-8 correctly.
If we look at the Extended ASCII Code defined by ISO 8859-1,
A7 10100111 § § => Section sign
and according to this page, the UTF-8 encoding of § is
U+00A7 § c2 a7 => SECTION SIGN
So my educated guess is that the symbol is still printed correctly because it belongs to the Extended ASCII Code with the same value a7.
Either your end application fails to correctly interpret the UTF-8 lead byte (c2), and that's why you get a '�' printed out, or a component in the middle fails to pass the correct value forward. I am inclined to believe your output is an instance of the first case.
You claim that minicom works; I cannot refute this claim, but I would suggest you try the following things first:
try sending a symbol that belongs to UTF-8 but not to the ISO 8859-1 standard: if it doesn't work, this should rule out your Conclusion #2 pretty much immediately (see the sketch after this list);
try reducing the speed to the lowest possible baud rate, 9600;
verify, by checking the documentation, that minicom is correctly configured to interpret UTF-8 characters;
try some other application to fetch data from your microcontroller and see whether the results are consistent;
verify that the Unicode symbol you're sending out is correct.
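For the first and last items, a minimal sketch of what the micro could send (send_byte() is a hypothetical helper; replace it with whatever single-byte send your m_uart driver offers):
void send_byte(unsigned char b);   // hypothetical wrapper around your UART driver

void send_euro_sign()
{
    const unsigned char euro[3] = { 0xE2, 0x82, 0xAC };   // UTF-8 encoding of U+20AC '€', which is not in ISO 8859-1
    for (int i = 0; i < 3; i++)
        send_byte(euro[i]);        // if minicom renders '€', it is decoding UTF-8 correctly
}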
NB: this is kind of an incomplete answer, but I couldn't fit everything in the comments. If you're patient enough, please update your question with your findings and comment on this answer to notify me. I'll come back and update my answer accordingly.

Setting useUnsafeHeaderParsing for C++ WinHttp

I'm trying to reach a web page on an embedded device.
I'm using WinHttp on Win32.
When trying to read the response I get the error
ERROR_WINHTTP_INVALID_SERVER_RESPONSE
12152
The server response cannot be parsed.
But when I capture the traffic with Wireshark, I can see that a response is coming.
So to test I wrote a simple C# program.
GetResponse was throwing the exception
The server committed a protocol violation. Section=ResponseHeader
Detail=CR must be followed by LF
So, following the solution below, I set useUnsafeHeaderParsing to true, and it worked fine.
HttpWebRequestError: The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF
Since I can't use C#, I need to find a way to set useUnsafeHeaderParsing to true for WinHttp with Win32 C++.
Many thanks
I've briefly looked into the option flags of WinHttpSetOption and found the following entry:
WINHTTP_OPTION_UNSAFE_HEADER_BLOCKING
This option is reserved for internal use and should not be called.
Since the option looks like an on/off switch, I would try the following:
BOOL bResult;
BOOL bOption = FALSE;
bResult = WinHttpSetOption(hInternet,
WINHTTP_OPTION_UNSAFE_HEADER_BLOCKING,
&bOption,
sizeof(bOption));
if (bResult == FALSE)
{
/* handle error with GetLastError() */
}
Well, as MSDN says, it's reserved for internal use, so it may change in the future (or may already have changed in the past). But it's worth a try... Good luck!
Looks like the name of the option must have changed since then: with the current SDK it's WINHTTP_OPTION_UNSAFE_HEADER_PARSING. Also, I verified (by examining the Assembly code directly) that:
the option must be DWORD-sized
the value of the option doesn't matter, as long as it's nonzero
you can only enable unsafe parsing; trying to disable (by setting the option value to zero) causes an error to be returned
Obviously, since this is undocumented, it's subject to change.
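A minimal sketch of that variant (assuming hInternet is the same handle as in the snippet above; WINHTTP_OPTION_UNSAFE_HEADER_PARSING may be missing from older SDK headers, so you might have to define its value yourself):
DWORD dwOption = 1;   // any nonzero value enables unsafe header parsing
if (!WinHttpSetOption(hInternet,
                      WINHTTP_OPTION_UNSAFE_HEADER_PARSING,
                      &dwOption,
                      sizeof(dwOption)))
{
    /* handle error with GetLastError() */
}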

MFRC522 PICC responded with NAK (Porting MFRC522 arduino library[C++] to [C])

First, some introduction.
I am trying to make the MFRC522 library for Arduino work on an ATmega328 programmed in C (I am using a 'normal' controller first, to make it work on a Raspberry Pi at a later stage).
I copied the .h and .cpp from the library to my own project and renamed the .cpp to .c. After removing the classes in the .h file, it was time for the .c file. I replaced all the 'byte' declarations with 'uint8_t', replaced 'Serial.print' with printf, and made the changes for GPIO and SPI.
The problem.
After some small mistakes I finally got data from a keycard. Although it looked like it was working, I get an error when reading block 58 from the card. The error is:
MIFARE_READ() failed: A MIFARE PICC responded with NAK.
I added a print statement to the SPI write and read and found the following difference (the [C] version on the left, the Arduino version on the right). (Because of my reputation, the picture can be found in the BitBucket repository I mention in the code section.)
Code
The code is pretty long, but I made it available on BitBucket.
I hope someone can point me at where to look (some [C++] >> [C] differences in interpretation), because I no longer know where to look.
Sander
You need to run the PCD_Authenticate function before reads and writes. There are a few pre-programmed keys in the linked GitHub library that will authenticate the cards. I was getting this when trying to write to the card because I was using KEY_B and not KEY_A. You can see this authentication used in the samples provided on that GitHub page. It should look something like this:
status = (MFRC522::StatusCode) mfrc522.PCD_Authenticate(MFRC522::PICC_CMD_MF_AUTH_KEY_A, trailerBlock, &key, &(mfrc522.uid));
From what I can tell the NAK simply means that the wrong key was used or maybe no key.
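A slightly fuller sketch of the sequence, modeled on the library samples (the factory-default key of six 0xFF bytes, and block 58 sitting in the sector whose trailer is block 59, are assumptions about your cards):
MFRC522::MIFARE_Key key;
for (byte i = 0; i < 6; i++)
    key.keyByte[i] = 0xFF;                       // factory-default key A

byte trailerBlock = 59;                          // sector trailer of the sector that contains block 58
MFRC522::StatusCode status = (MFRC522::StatusCode) mfrc522.PCD_Authenticate(
    MFRC522::PICC_CMD_MF_AUTH_KEY_A, trailerBlock, &key, &(mfrc522.uid));
if (status != MFRC522::STATUS_OK) {
    // without a successful authentication, the read below is the one that NAKs
    return;
}

byte buffer[18];                                 // MIFARE_Read needs at least 18 bytes
byte size = sizeof(buffer);
status = (MFRC522::StatusCode) mfrc522.MIFARE_Read(58, buffer, &size);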

Infinite loop during debugging

I am working with an STM32 eval2 board and trying to debug it. It used to work fine, and I haven't changed anything, but for the last week or so I always get stuck in this loop while I am in debugger mode; when I am not, the program runs fine.
while(!__HAL_SD_SDIO_GET_FLAG(hsd, SDIO_FLAG_RXOVERR | SDIO_FLAG_DCRCFAIL | SDIO_FLAG_DTIMEOUT | SDIO_FLAG_DBCKEND | SDIO_FLAG_STBITERR))
{
if(__HAL_SD_SDIO_GET_FLAG(hsd, SDIO_FLAG_RXDAVL))
{
*(tempscr + index) = SDIO_ReadFIFO(hsd->Instance);
index++;
}
}
I even tried running the sample project code provided for the board by ST, did not change anything about it, and I am stuck in the same while loop in their code as well.
Does anybody know what I am doing wrong here? It doesn't make sense, because nothing changed.
The errors that are defined by the variables in the while loop are (respectively):
Received FIFO overrun error
Data block sent/received (CRC check failed)
Data timeout
Data block sent/received (CRC check passed)
Start bit not detected on all data signals in wide bus mode
and it looks like, in this while loop, it is getting stuck in the if statement for the "Data available in receive FIFO" flag, if that makes sense. I cannot step over that if statement.
I am using Keil v5 and programming in C++.
Well, I have been struggling with this for a week and almost right after I posted this I figured it out.
I had the SD card in, and for some reason taking it out fixed it. So I will leave this in case anyone else ever has this stupid problem.