I am using a Raspberry Pi Pico with the Arduino IDE, and I am using this library githublink for it. There are 3 examples in this library; ArduinoUniqueID and ArduinoUniqueID8 don't print anything. The IDE says
WARNING: library ArduinoUniqueID claims to run on avr, esp8266, esp32, sam, samd, stm32 architecture(s) and may be incompatible with your current board which runs on mbed_rp2040 architecture(s).
(but GitHub says RP2040 support was added)
When I try to use the last example, ArduinoUniqueIDSerialUSB, it prints something, but the values are not correct. It prints these:
UniqueID: 30 00 33 00 39 00 31 00 36 00 30 00 45 00 36 00 32 00 41 00 38 00 32 00 34 00 38 00 43 00 33 00
UniqueID: 34 00 38 00 43 00 33 00
The correct unique ID values are these (I printed them with MicroPython):
hex value of s = e660a4931754432c
type s = <class 'bytes'>
s = b'\xe6`\xa4\x93\x17TC,'
I don't even know what kind of values 34 00 38 00 43 00 33 00 are; I tried converting from hex but it prints the same thing.
How can I find the Pico's unique ID with Arduino code?
A unique ID for the Pico (and most RP2040 boards) is determined by the serial number of the flash. The Pico SDK has functions to get that ID. You can retrieve it directly from the flash by using flash_get_unique_id(uint8_t* id_out), which is what the library linked above does. The documentation for that is here.
Alternatively, you can get the unique ID from the MCU. The two functions for retrieving the ID are pico_get_unique_board_id(pico_unique_board_id_t* id_out), which returns the ID as a byte array, and pico_get_unique_board_id_string(char* id_out, uint len), which returns it as a string. The documentation for that is here.
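For example, a minimal sketch of the second approach, assuming the core exposes the Pico SDK's pico/unique_id.h header (the Earle Philhower arduino-pico core does; I am not certain the mbed core does):

#include "pico/unique_id.h"   // Pico SDK helper for the board ID

void setup() {
  Serial.begin(115200);
  while (!Serial) { }   // wait for the USB serial connection

  // 8 ID bytes, two hex characters each, plus the terminating NUL
  char id[2 * PICO_UNIQUE_BOARD_ID_SIZE_BYTES + 1];
  pico_get_unique_board_id_string(id, sizeof(id));

  Serial.print("UniqueID: ");
  Serial.println(id);
}

void loop() { }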
Those values are hex and are coming from the library's UniqueID buffer, which looks like it is being filled improperly with the unique ID. The code below should instead do what you need.
uint8_t UniqueID[8];

void UniqueIDdump(Stream &stream)
{
    // Read the 64-bit unique ID straight from the flash chip
    flash_get_unique_id(UniqueID);
    stream.print("UniqueID: ");
    for (size_t i = 0; i < 8; i++)
    {
        if (UniqueID[i] < 0x10)
            stream.print("0");   // pad single hex digits with a leading zero
        stream.print(UniqueID[i], HEX);
        stream.print(" ");
    }
    stream.println();
}
"UniqueID: 30 00 33 " etc is a unicode string "039160E62A8248C348C3" not hex.
Also, for Earle Philhower's Pico core, just add extern "C" void flash_get_unique_id(uint8_t *p); to your sketch and you can access the function required by the UniqueIDdump example above.
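Putting it together, a minimal sketch (untested; it assumes the Earle Philhower core and that the UniqueIDdump function from the answer above is in the same file) could look like this:

extern "C" void flash_get_unique_id(uint8_t *p);   // declaration for the SDK flash helper

void setup()
{
    Serial.begin(115200);
    while (!Serial) { }    // wait for the USB serial connection
    UniqueIDdump(Serial);  // should print something like "UniqueID: E6 60 A4 93 17 54 43 2C"
}

void loop() { }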
I am trying to send a request using AsyncIO for an Interrupt EP. For the AsyncIO I created an IOBufferMemoryDescriptor; once the Create succeeded, I used GetAddressRange and stored the address in the ivars structure of the dext. When the request completion (CompleteAsyncIO) is called, I get the ivars structure using action->GetReference(), and I was expecting the interrupt data received from the USB device; unfortunately I am not seeing the related data. In Wireshark I can see that the data received is 16 bytes, and the actual byte count in CompleteAsyncIO is also 16.
What is the correct way to get the interrupt data received from the device using an IOBufferMemoryDescriptor?
OSAction Create for CompleteAsyncIO
ret = OSAction::Create(this,
                       Data_interruptComplete_ID,
                       IOUSBHostPipe_CompleteAsyncIO_ID,
                       sizeof(IntActRef),
                       &ivars->interruptComplete);
IOBufferMemoryDescriptor allocation for the USB Interrupt EP:
IOBufferMemoryDescriptor* fCommPipeMDP;

ivars->fCommPipeMDP->Create(kIOMemoryDirectionIn,
                            ivars->fcomBuffSize,
                            0,
                            &ivars->fCommPipeMDP);
ivars->fCommPipeMDP->SetLength(ivars->fcomBuffSize);
ivars->fCommPipeMDP->GetAddressRange(&aRange);
ivars->fCommPipeBuffer = (uint8_t*)&aRange.address;
Send AsyncIO Request to Interrupt EP
ret = ivars->fCommPipe->AsyncIO(ivars->fCommPipeMDP,
                                ivars->fcomBuffSize,
                                ivars->interruptComplete,
                                0);
CompleteAsyncIO called by framework
void
IMPL(ClassData, interruptComplete)
{
    struct interruptActionRef *actionref = (struct interruptActionRef*)action->GetReference();
    Data_IVars *livars = actionref->interruptactionref;

    // Trying to print the data received from the USB device in the
    // interrupt completion (CompleteAsyncIO); unfortunately the data is not matching.
    for (uint32_t tmp = 0; tmp < actualByteCount; tmp++)
        os_log(OS_LOG_DEFAULT, "%x", livars->fCommPipeBuffer[tmp]);
}
How do I get the actual data received from the USB device in the interrupt completion, using the IOBufferMemoryDescriptor I sent with AsyncIO? Do I need to map the address into the current process's address space?
In Wireshark with a USB filter, only the actual data length matches.
Wireshark logs: a1 20 00 00 01 00 02 00 03 00 00 00 00 00 00 00 (16 bytes of data)
"3029","32.105745","64.16.4","host","USB","40","URB_INTERRUPT in (submitted)"
"3030","32.169565","64.16.4","host","USB","56","URB_INTERRUPT in (completed)"
0000 01 01 28 01 10 00 00 00 00 00 00 00 00 00 00 00 ..(.............
0010 31 d8 05 00 00 00 00 00 00 00 40 14 02 10 84 03 1.........#.....
0020 ff 02 01 00 04 10 3e 63 a1 20 00 00 01 00 02 00 ......>c. ......
0030 03 00 00 00 00 00 00 00
The problem is on this line:
ivars->fCommPipeBuffer = (uint8_t*)&aRange.address;
This is saving the pointer to the address field of the IOAddressSegment struct variable, not the pointer to the buffer itself. You want:
ivars->fCommPipeBuffer = (uint8_t*)aRange.address;
or, less error-prone and more idiomatic C++:
ivars->fCommPipeBuffer = reinterpret_cast<uint8_t*>(aRange.address);
(Though to be fair the type checker would still not have caught the bug; static analysis might have, however.)
With the correct buffer pointer it should start outputting the correct data.
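For clarity, here is a sketch of the corrected setup using the same ivars fields as the question (note that Create is the static factory on IOBufferMemoryDescriptor, so it is called on the class rather than on the not-yet-created instance):

IOAddressSegment aRange = {};

kern_return_t ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionIn,
                                                     ivars->fcomBuffSize,
                                                     0,
                                                     &ivars->fCommPipeMDP);
if (ret != kIOReturnSuccess) {
    return ret;
}

ivars->fCommPipeMDP->SetLength(ivars->fcomBuffSize);
ivars->fCommPipeMDP->GetAddressRange(&aRange);

// aRange.address already holds the dext-visible address of the buffer,
// so cast the value itself rather than taking the address of the field
ivars->fCommPipeBuffer = reinterpret_cast<uint8_t*>(aRange.address);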
I have encountered an issue while using the Tesseract library. I have successfully compiled the Leptonica and Tesseract libs with VS2017. I have then used these libraries in an MFC project, where they compile without any error. Here is the code:
tesseract::TessBaseAPI api;
if (0 != api.Init(NULL, _T("eng"), tesseract::OEM_DEFAULT))
{
    m_sState.Format(_T("tesseract initialize error"));
    return FALSE;
}
Nothing complicated, nothing wrong... but I ran into 2 problems:
Whether this code is executed or not, I have a massive memory leak.
Detected memory leaks!
Dumping objects ->
{65734} normal block at 0x014EEB88, 24 bytes long.
 Data: <        FXDebug > 10 00 00 00 08 00 00 00 46 58 44 65 62 75 67 00
{65733} normal block at 0x014EEB40, 24 bytes long.
 Data: <        FXDebug > 10 00 00 00 08 00 00 00 46 58 44 65 62 75 67 00
{65732} normal block at 0x03880908, 8 bytes long.
 Data: <        > 10 BE 96 0F 00 00 00 00
{65731} normal block at 0x014EBDA8, 32 bytes long.
 Data: < N  N  N        > A8 BD 4E 01 A8 BD 4E 01 A8 BD 4E 01 01 01 CD CD
{65730} normal block at 0x03880A20, 8 bytes long.
 Data: <        > 04 BE 96 0F 00 00 00 00
{65729} normal block at 0x014EE990, 24 bytes long.
Every time this code is executed, the app goes down the "tesseract initialize error" route. I don't understand why...
I have tried to run this project with VS2017 on Win10 64-bit; all libraries and my project are compiled as Debug... if that matters... Can you help me get Tesseract to read simple images?
Last edit:
When I include this code into a console app:
#include <iostream>
#include <windows.h>   // for GetLastError()

#include <leptonica/allheaders.h>
#include <tesseract/baseapi.h>

int main()
{
    std::cout << "Hello World!\n";
    tesseract::TessBaseAPI api;
    if (0 != api.Init(NULL, NULL))
    {
        std::cout << "tesseract initialize error\n";
        std::cout << "Last error:" << GetLastError() << std::endl;
    }
}
I get the following error messages:
Hello World!
Error in pixReadMemTiff: function not present
Error in pixReadMem: tiff: no pix returned
Error in pixaGenerateFontFromString: pix not made
Error in bmfCreate: font pixa not made
Error opening data file C:\Program Files (x86)\Tesseract-OCR\eng.traineddata
Please make sure the TESSDATA_PREFIX environment variable is set to your "tessdata" directory.
Failed loading language 'eng'
Tesseract couldn't load any languages!
tesseract initialize error
Last error:3
but I do not have any "Tesseract-OCR" folder in "Program Files (x86)"...
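One way to avoid depending on TESSDATA_PREFIX is to pass the data path to Init directly. Below is a minimal sketch (not from the original post; the C:\tessdata path is hypothetical and should point at the directory actually containing eng.traineddata, and some Tesseract versions expect the parent directory of tessdata here instead):

#include <iostream>
#include <tesseract/baseapi.h>

int main()
{
    tesseract::TessBaseAPI api;
    // Hypothetical location; adjust to wherever eng.traineddata lives
    if (0 != api.Init("C:\\tessdata", "eng"))
    {
        std::cout << "tesseract initialize error\n";
        return 1;
    }
    std::cout << "tesseract initialized\n";
    api.End();   // release the engine's resources
}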
I'm generating a big char array to later pass to a thread, using strcpy and strcat. It was all going OK until I needed to substitute all occurrences of a space with a comma in one of the strings. I searched for the solution to this here.
Problem is, now I have a memory leak and the program exits with this message:
Dumping objects ->
{473} normal block at 0x0091E0C0, 32 bytes long.
Data: <AMLUH UL619 BKD > 41 4D 4C 55 48 20 55 4C 36 31 39 20 42 4B 44 20
{472} normal block at 0x049CCD20, 8 bytes long.
Data: < > BC ED 18 00 F0 EC 18 00
{416} normal block at 0x082B5158, 1000 bytes long.
Data: <Number of Aircra> 4E 75 6D 62 65 72 20 6F 66 20 41 69 72 63 72 61
{415} normal block at 0x04A0E200, 20 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{185} normal block at 0x049DA998, 64 bytes long.
Data: < O X8 8 > DC 4F BB 58 38 C5 9A 06 38 D3 88 00 00 00 00 00
PythonPlugin.cpp(76) : {172} normal block at 0x0088D338, 72 bytes long.
Data: < a X F <) > DC 61 BB 58 18 BB 46 06 3C 29 8A 06 CD CD CD CD
Object dump complete.
Here's the code so you can tell me what I'm doing wrong:
Code of the problem:
char* loop_planes(ac){
    char *char1 = new char[1000];
    for(...){
        strcpy(char1, "Number of Aircrafts\nHour of simulation\n\n");
        string tmp2 = fp.GetRoute();
        tmp2.replace(tmp2.begin(), tmp2.end(), " ", ","); // PROBLEM IS IN THIS LINE
        const char *tmp3 = tmp2.c_str();
        strcat(char1, tmp3);
    }
    return char1;
}
The fp.GetRoute() call returns a string like this: AMLUH UL619 BKD UM748 RUTOL
Also, now that I'm talking about memory allocation, I don't want any future problems with memory leaks, so when should I delete char1, knowing that the thread is going to call this function?
When you call std::string::replace, the best match is a function template whose third and fourth parameters are input iterators. So the string literals you are passing are interpreted as the start and end of a range, when they are not. This leads to undefined behaviour.
You can fix this easily by using the algorithm std::replace instead:
std::replace(tmp2.begin(),tmp2.end(),' ',',');
Note that here the third and fourth parameters are single chars.
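If you would rather stay with the member function, one alternative sketch (using the question's tmp2) is to locate each space with find and replace it by position:

// replace every ' ' with "," using the positional overload of std::string::replace
for (std::string::size_type pos = tmp2.find(' ');
     pos != std::string::npos;
     pos = tmp2.find(' ', pos + 1))
{
    tmp2.replace(pos, 1, ",");   // replace 1 character at pos with ","
}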
The answer from @juanchopanza correctly identifies and fixes the problem in the original question, but since you've asked about memory leaks in general, I'd like to additionally suggest that you replace your function with something that doesn't use new or delete or strcpy or strcat.
std::string loop_planes() {
    std::string res("Number of Aircrafts\nHour of simulation\n\n");
    for (...) {
        std::string route = fp.GetRoute();
        std::replace(route.begin(), route.end(), ' ', ',');
        res += route;
    }
    return res;
}
This doesn't require any explicit memory allocation or deletion and does not leak memory. I also took the liberty of changing the return type from char * to std::string to eliminate messy conversions.
I am reading in a binary file (in C++), and the header is something like this (printed in hexadecimal):
43 27 41 1A 00 00 00 00 23 00 00 00 00 00 00 00 04 63 68 72 31 FFFFFFB4 01 00 00 04 63 68 72 32 FFFFFFEE FFFFFFB7
when printed out using:
std::cout << hex << (int)mem[c];
Is there an efficient way to store 23, which is the 9th byte(?), into an integer without using stringstream? Or is stringstream the best way?
Something like
int n = mem[8];
I want to store 23 in n, not 35.
You did store 23 in n. You only see 35 because you are outputting it with a routine that converts it to decimal for display. If you could look at the binary data inside the computer, you would see that it is in fact a hex 23.
You will get the same result as if you did:
int n=0x23;
(What you might think you want is impossible. What number should be stored in n for 1E? The only corresponding number is 30, which is what you are getting.)
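To see this, print the same n in both bases (a small standalone illustration, not from the original answer):

#include <iostream>

int main()
{
    int n = 0x23;                          // the same value the byte read produced
    std::cout << std::dec << n << "\n";    // prints 35
    std::cout << std::hex << n << "\n";    // prints 23
}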
Do you mean you want to treat the value as binary-coded decimal? In that case, you could convert it using something like:
unsigned char bcd = mem[8];
unsigned char ones = bcd % 16;
unsigned char tens = bcd / 16;
if (ones > 9 || tens > 9) {
    // handle error
}
int n = 10*tens + ones;
I have a binary file and documentation of the format the information is stored in. I'm trying to write a simple program in C++ that pulls a specific piece of information from the file, but I'm missing something, since the output isn't what I expect.
The documentation is as follows:
Half-word Field Name Type Units Range Precision
10 Block Divider INT*2 N/A -1 N/A
11-12 Latitude INT*4 Degrees -90 to +90 0.001
There are other items in the file obviously but for this case I'm just trying to get the Latitude value.
My code is:
#include <cstdlib>
#include <iostream>
#include <fstream>

using namespace std;

int main(int argc, char* argv[])
{
    const char* dataFileLocation = "testfile.bin";
    ifstream dataFile(dataFileLocation, ios::in | ios::binary);
    if (dataFile.is_open())
    {
        char* buffer = new char[32768];
        dataFile.seekg(10, ios::beg);
        dataFile.read(buffer, 4);
        dataFile.close();
        cout << "value is " << (int)(buffer[0] & 255);
    }
}
The result of this is "value is 226", which is not in the allowed range.
I'm quite new to this, and here's what my intentions were when writing the above code:
Open file in binary mode
Seek to the 11th byte from the start of the file
Read in 4 bytes from that point
Close the file
Output those 4 bytes as an integer.
If someone could point out where I'm going wrong I'd sure appreciate it. I don't really understand the (buffer[0] & 255) part (took that from some example code) so layman's terms for that would be greatly appreciated.
Hex Dump of the first 100 bytes:
testfile.bin 98,402 bytes 11/16/2011 9:01:52
-0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -A -B -C -D -E -F
00000000- 00 5F 3B BF 00 00 C4 17 00 00 00 E2 2E E0 00 00 [._;.............]
00000001- 00 03 FF FF 00 00 94 70 FF FE 81 30 00 00 00 5F [.......p...0..._]
00000002- 00 02 00 00 00 00 00 00 3B BF 00 00 C4 17 3B BF [........;.....;.]
00000003- 00 00 C4 17 00 00 00 00 00 00 00 00 80 02 00 00 [................]
00000004- 00 05 00 0A 00 0F 00 14 00 19 00 1E 00 23 00 28 [.............#.(]
00000005- 00 2D 00 32 00 37 00 3C 00 41 00 46 00 00 00 00 [.-.2.7.<.A.F....]
00000006- 00 00 00 00 [.... ]
Since the documentation lists the field as an integer but shows the precision to be 0.001, I would assume that the actual value is the stored value multiplied by 0.001. The integer range would be -90000 to 90000.
The 4 bytes must be combined into a single integer. There are two ways to do this, big endian and little endian, and which you need depends on the machine that wrote the file. x86 PCs for example are little endian.
int little_endian = buffer[0] | buffer[1]<<8 | buffer[2]<<16 | buffer[3]<<24;
int big_endian = buffer[0]<<24 | buffer[1]<<16 | buffer[2]<<8 | buffer[3];
The &255 is used to remove the sign extension that occurs when you convert a signed char to a signed integer. Use unsigned char instead and you probably won't need it.
Edit: I think "half-word" refers to 2 bytes, so you'll need to skip 20 bytes instead of 10.
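Putting the pieces together: in the hex dump above, bytes 18-19 are FF FF (matching the Block Divider value of -1) and bytes 20-23 are 00 00 94 70, which read big-endian is 38000, i.e. 38.000 degrees and within the documented range, so this particular file appears to be big-endian. A sketch of reading the latitude under that assumption (swap in the little_endian expression above if your data turns out otherwise):

#include <cstdint>
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream dataFile("testfile.bin", std::ios::in | std::ios::binary);
    if (!dataFile.is_open())
        return 1;

    unsigned char buffer[4];                              // unsigned avoids sign extension
    dataFile.seekg(20, std::ios::beg);                    // half-words 11-12 => byte offset 20
    dataFile.read(reinterpret_cast<char*>(buffer), 4);

    // big-endian assembly, as in the big_endian expression above
    std::int32_t raw = buffer[0] << 24 | buffer[1] << 16 | buffer[2] << 8 | buffer[3];
    double latitude = raw * 0.001;                        // documented precision: 0.001 degrees

    std::cout << "raw = " << raw << ", latitude = " << latitude << " degrees\n";
}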