C++: copy array to array

I have taken the code from here: Webduino Network Setup.
I added one more field:
struct config_t
{
....
...
.....
byte subnet[4];
byte dns_server[4];
unsigned int webserverPort;
char HostName[10]; // Added code Here..
} eeprom_config;
Snippet:
#define NAMELEN 5
#define VALUELEN 10
void setupNetHTML(WebServer &server, WebServer::ConnectionType type, char *url_tail, bool tail_complete)
{
URLPARAM_RESULT rc;
char name[NAMELEN];
char value[VALUELEN];
boolean params_present = false;
byte param_number = 0;
char buffer [13];
.....
.....
}
Added lines to read data from the web page and write it to EEPROM.
Write to EEPROM (facing the issue here; I need to copy value into eeprom_config.HostName[0] ...):
// read Host Name
if (param_number >= 25 && param_number <= 35) {
// eeprom_config.HostName[param_number - 25] = strtol(value, NULL, 10);
eeprom_config.HostName[param_number - 25] = value ; // Facing Issue here..
}
and the loop that prints the form fields:
for (int a = 0; a < 10; a++) {
server.printP(Form_input_text_start);
server.print(a + 25);
server.printP(Form_input_value);
server.print(eeprom_config.HostName[a]);
server.printP(Form_input_size1);
server.printP(Form_input_end);
}

The issue was resolved.
Thanks, I got the idea from this post:
invalid conversion from 'char' to 'char*'
How I changed it:
// read Host Name
if (param_number >= 25 && param_number <= 35) {
// eeprom_config.HostName[param_number - 25] = strtol(value, NULL, 10);
eeprom_config.HostName[param_number - 25] = value ; // Facing Issue here..
}
changed to
// read Host Name
if (param_number >= 25 && param_number <= 35) {
eeprom_config.HostName[param_number - 25] = value[0];
}
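For copying a whole parameter string into the array at once (rather than one character per form parameter), something like strncpy can be used. A minimal sketch, assuming value is a NUL-terminated C string and eeprom_config is the struct from the question; note that with char HostName[10] only indices 0 through 9 (parameters 25 through 34) are valid:
#include <string.h>
// Hypothetical helper (not from the original post): copy a whole C string into
// the fixed-size HostName field, truncating if needed and always NUL-terminating.
void setHostName(const char *value) {
    strncpy(eeprom_config.HostName, value, sizeof(eeprom_config.HostName) - 1);
    eeprom_config.HostName[sizeof(eeprom_config.HostName) - 1] = '\0';
}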

Related

What's the difference between initializing a vector in Class Header or Class constructor body?

I encountered strange behavior in my C++ program that I don't understand, and I don't know how to search for more information, so I'm asking for advice here hoping someone might know.
I have a class Interface that has a two-dimensional vector that I initialize in the header:
class Interface {
public:
// code...
const unsigned short int SIZE_X_ = 64;
const unsigned short int SIZE_Y_ = 32;
std::vector<std::vector<bool>> screen_memory_ =
std::vector<std::vector<bool>>(SIZE_X_, std::vector<bool>(SIZE_Y_, false));
// code...
};
Here I expect that I have a SIZE_X_ x SIZE_Y_ vector filled with false booleans.
Later in my program I loop at a fixed rate, like so:
void Emulator::loop() {
const milliseconds intervalPeriodMillis{static_cast<int>((1. / FREQ) * 1000)};
// Initialize the chrono timepoint & duration objects we'll be using over & over inside our sleep loop
system_clock::time_point currentStartTime{system_clock::now()};
system_clock::time_point nextStartTime{currentStartTime};
while (!stop) {
currentStartTime = system_clock::now();
nextStartTime = currentStartTime + intervalPeriodMillis;
// ---- Stuff happens here ----
registers_->trigger_timers();
interface_->toogle_buzzer();
interface_->poll_events();
interface_->get_keys();
romParser_->step();
romParser_->decode();
// ---- ------------------ ----
stop = stop || interface_->requests_close();
std::this_thread::sleep_until(nextStartTime);
}
}
But then during execution I get a segmentation fault:
[1] 7585 segmentation fault (core dumped) ./CHIP8 coin.ch8
I checked with the debugger, and some part of screen_memory_ cannot be accessed anymore; it seems to happen at a random time.
But when I put the initialization of the vector in the constructor body, like so:
Interface::Interface(const std::shared_ptr<reg::RegisterManager> & registers, bool hidden)
: registers_(registers) {
// code ...
screen_memory_ =
std::vector<std::vector<bool>>(SIZE_X_, std::vector<bool>(SIZE_Y_, false));
// code ...
}
The segmentation fault doesn't happen anymore, so the solution is just to initialize the vector in the constructor body.
But why? What is happening there?
I don't understand what I did wrong; I'm sure someone knows.
Thanks for your help!
[Edit] I found the source of the bug (or at least what to change so it doesn't give me a segfault anymore).
In my class Interface I use the SDL and SDL_audio libraries to create the display and the buzzer sound. Take a special look at where I set the callback want_.callback, and at the callbacks Interface::forward_audio_callback and Interface::audio_callback. Here's the code:
// (c) 2021 Maxandre Ogeret
// Licensed under MIT License
#include "Interface.h"
Interface::Interface(const std::shared_ptr<reg::RegisterManager> & registers, bool hidden)
: registers_(registers) {
if (SDL_Init(SDL_INIT_AUDIO != 0) || SDL_Init(SDL_INIT_VIDEO) != 0) {
throw std::runtime_error("Unable to initialize rendering engine.");
}
want_.freq = SAMPLE_RATE;
want_.format = AUDIO_S16SYS;
want_.channels = 1;
want_.samples = 2048;
want_.callback = Interface::forward_audio_callback;
want_.userdata = &sound_userdata_;
if (SDL_OpenAudio(&want_, &have_) != 0) {
SDL_LogError(SDL_LOG_CATEGORY_AUDIO, "Failed to open audio: %s", SDL_GetError());
}
if (want_.format != have_.format) {
SDL_LogError(SDL_LOG_CATEGORY_AUDIO, "Failed to get the desired AudioSpec");
}
window = SDL_CreateWindow("CHIP8", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
SIZE_X_ * SIZE_MULTIPLIER_, SIZE_Y_ * SIZE_MULTIPLIER_,
hidden ? SDL_WINDOW_HIDDEN : 0);
renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
bpp_ = SDL_GetWindowSurface(window)->format->BytesPerPixel;
SDL_Delay(1000);
// screen_memory_ = std::vector<std::vector<bool>>(SIZE_X_, std::vector<bool>(SIZE_Y_, false));
}
Interface::~Interface() {
SDL_CloseAudio();
SDL_DestroyWindow(window);
SDL_Quit();
}
// code ...
void Interface::audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
audio_buffer_ = reinterpret_cast<Sint16 *>(raw_buffer);
sample_length_ = bytes / 2;
int & sample_nr(*(int *) user_data);
for (int i = 0; i < sample_length_; i++, sample_nr++) {
double time = (double) sample_nr / (double) SAMPLE_RATE;
audio_buffer_[i] = static_cast<Sint16>(
AMPLITUDE * (2 * (2 * floor(220.0f * time) - floor(2 * 220.0f * time)) + 1));
}
}
void Interface::forward_audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
static_cast<Interface *>(user_data)->audio_callback(user_data, raw_buffer, bytes);
}
In the function Interface::audio_callback, replacing the class member assignment:
sample_length_ = bytes / 2;
with the creation and assignment of a local int:
int sample_length = bytes / 2;
which gives:
void Interface::audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
audio_buffer_ = reinterpret_cast<Sint16 *>(raw_buffer);
int sample_length = bytes / 2;
int &sample_nr(*(int*)user_data);
for(int i = 0; i < sample_length; i++, sample_nr++)
{
double time = (double)sample_nr / (double)SAMPLE_RATE;
audio_buffer_[i] = (Sint16)(AMPLITUDE * sin(2.0f * M_PI * 441.0f * time)); // render 441 HZ sine wave
}
}
The class variable sample_length_ is defined and initialized as private in the header, like so:
int sample_length_ = 0;
So I had an idea and I made the variable sample_length_ public, and it works! So the problem was definitely a scope problem with the class variable sample_length_. But it doesn't explain why the segfault disappeared when I moved the init of some other variable into the class constructor... Did I hit some undefined behavior with my callback?
Thanks for reading!
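One thing worth noting from the constructor above: want_.userdata is set to &sound_userdata_, while forward_audio_callback casts user_data to Interface*. If those are not the same object, every member access inside the callback (such as writing sample_length_) is undefined behavior and can corrupt neighbouring members such as screen_memory_, which would fit the symptoms described. For comparison, here is a minimal sketch of the usual SDL wiring, using a hypothetical Buzzer class rather than the project's real code:
#include <SDL.h>
// Sketch only: pass the object itself as userdata so the static trampoline
// can safely cast back and call a member function.
class Buzzer {
public:
    bool open() {
        SDL_AudioSpec want{}, have{};
        want.freq = 44100;
        want.format = AUDIO_S16SYS;
        want.channels = 1;
        want.samples = 2048;
        want.callback = &Buzzer::forward_audio_callback;
        want.userdata = this; // the object itself, not some other member
        return SDL_OpenAudio(&want, &have) == 0;
    }
private:
    static void forward_audio_callback(void *user_data, Uint8 *raw_buffer, int bytes) {
        // Valid because userdata really is a Buzzer*.
        static_cast<Buzzer *>(user_data)->audio_callback(raw_buffer, bytes);
    }
    void audio_callback(Uint8 *raw_buffer, int bytes) {
        Sint16 *buffer = reinterpret_cast<Sint16 *>(raw_buffer);
        int sample_count = bytes / 2; // local, as in the fix above
        for (int i = 0; i < sample_count; ++i) {
            buffer[i] = 0; // silence; real code would synthesize samples here
        }
    }
};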

Pass Byte Array as std::vector<char> from Node.js to C++ Addon

I have some constraints where the addon is built with nan.h and v8 (not the new node-addon-api).
The end function is a part of a library. It accepts std::vector<char> that represents the bytes of an image.
I tried creating an image buffer from Node.js:
const img = fs.readFileSync('./myImage.png');
myAddonFunction(Buffer.from(img));
I am not really sure how to continue from here. I tried creating a new vector with a buffer, like so:
std::vector<char> buffer(data);
But it seems like I need to give it a size, which I am unsure how to get. Regardless, even when I use the initial buffer size (from Node.js), the image fails to go through.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
[1] 16021 abort (core dumped)
However, when I read the image directly from C++, it all works fine:
std::ifstream ifs ("./myImage.png", std::ios::binary|std::ios::ate);
std::ifstream::pos_type pos = ifs.tellg();
std::vector<char> buffer(pos);
ifs.seekg(0, std::ios::beg);
ifs.read(&buffer[0], pos);
// further below, I pass "buffer" to the function and it works just fine.
But of course, I need the image to come from Node.js. Maybe Buffer is not what I am looking for?
Here is an example based on N-API; I would also encourage you to take a look at a similar implementation based on node-addon-api (it is an easy-to-use C++ wrapper on top of N-API):
https://github.com/nodejs/node-addon-examples/tree/master/array_buffer_to_native/node-addon-api
#include <assert.h>
#include "addon_api.h"
#include "stdio.h"
napi_value CArrayBuffSum(napi_env env, napi_callback_info info)
{
napi_status status;
const size_t MaxArgExpected = 1;
napi_value args[MaxArgExpected];
size_t argc = sizeof(args) / sizeof(napi_value);
status = napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);
assert(status == napi_ok);
if (argc < 1)
napi_throw_error(env, "EINVAL", "Too few arguments");
napi_value buff = args[0];
napi_valuetype valuetype;
status = napi_typeof(env, buff, &valuetype);
assert(status == napi_ok);
if (valuetype == napi_object)
{
bool isArrayBuff = 0;
status = napi_is_arraybuffer(env, buff, &isArrayBuff);
assert(status == napi_ok);
if (isArrayBuff != true)
napi_throw_error(env, "EINVAL", "Expected an ArrayBuffer");
}
int32_t *buff_data = NULL;
size_t byte_length = 0;
int32_t sum = 0;
status = napi_get_arraybuffer_info(env, buff, (void **)&buff_data, &byte_length);
assert(status == napi_ok);
printf("\nC: Int32Array size = %d, (ie: bytes=%d)",
(int)(byte_length / sizeof(int32_t)), (int)byte_length);
for (int i = 0; i < byte_length / sizeof(int32_t); ++i)
{
sum += *(buff_data + i);
printf("\nC: Int32ArrayBuff[%d] = %d", i, *(buff_data + i));
}
napi_value rcValue;
napi_create_int32(env, sum, &rcValue);
return (rcValue);
}
The JavaScript code to call the addon:
'use strict'
const myaddon = require('bindings')('mync1');
function test1() {
const array = new Int32Array(10);
for (let i = 0; i < 10; ++i)
array[i] = i * 5;
const sum = myaddon.ArrayBuffSum(array.buffer);
console.log();
console.log(`js: Sum of the array = ${sum}`);
}
test1();
The Output of the code execution:
C: Int32Array size = 10, (ie: bytes=40)
C: Int32ArrayBuff[0] = 0
C: Int32ArrayBuff[1] = 5
C: Int32ArrayBuff[2] = 10
C: Int32ArrayBuff[3] = 15
C: Int32ArrayBuff[4] = 20
C: Int32ArrayBuff[5] = 25
C: Int32ArrayBuff[6] = 30
C: Int32ArrayBuff[7] = 35
C: Int32ArrayBuff[8] = 40
C: Int32ArrayBuff[9] = 45
js: Sum of the array = 225
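For the original constraint (nan.h and v8 rather than N-API), a minimal sketch of the same idea could look like the following; the function name and the exact wiring are assumptions, not code from the question:
#include <nan.h>
#include <node_buffer.h>
#include <vector>
// Sketch: build the std::vector<char> the image library expects directly
// from the bytes backing a Node.js Buffer argument.
NAN_METHOD(MyAddonFunction) {
    if (info.Length() < 1 || !node::Buffer::HasInstance(info[0])) {
        return Nan::ThrowTypeError("Expected a Buffer");
    }
    const char *data = node::Buffer::Data(info[0]);
    size_t length = node::Buffer::Length(info[0]);
    // Copy the bytes using the (first, last) constructor of std::vector.
    std::vector<char> buffer(data, data + length);
    // ... pass `buffer` to the image library here ...
    info.GetReturnValue().Set(Nan::New<v8::Number>(static_cast<double>(buffer.size())));
}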

How to programmatically decrypt an aes-256-cbc file which was encrypted using a password? [duplicate]

For example, the command:
openssl enc -aes-256-cbc -a -in test.txt -k pinkrhino -nosalt -p -out openssl_output.txt
outputs something like:
key = 33D890D33F91D52FC9B405A0DDA65336C3C4B557A3D79FE69AB674BE82C5C3D2
iv = 677C95C475C0E057B739750748608A49
How is that key generated? (C code as an answer would be too awesome to ask for :) )
Also, how is the iv generated?
Looks like some kind of hex to me.
OpenSSL uses the function EVP_BytesToKey. You can find the call to it in apps/enc.c. The enc utility used to use the MD5 digest by default in its key derivation function (KDF) if you didn't specify a different digest with the -md argument; now it uses SHA-256 by default. Here's a working example using MD5:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>
int main(int argc, char *argv[])
{
const EVP_CIPHER *cipher;
const EVP_MD *dgst = NULL;
unsigned char key[EVP_MAX_KEY_LENGTH], iv[EVP_MAX_IV_LENGTH];
const char *password = "password";
const unsigned char *salt = NULL;
int i;
OpenSSL_add_all_algorithms();
cipher = EVP_get_cipherbyname("aes-256-cbc");
if(!cipher) { fprintf(stderr, "no such cipher\n"); return 1; }
dgst=EVP_get_digestbyname("md5");
if(!dgst) { fprintf(stderr, "no such digest\n"); return 1; }
if(!EVP_BytesToKey(cipher, dgst, salt,
(unsigned char *) password,
strlen(password), 1, key, iv))
{
fprintf(stderr, "EVP_BytesToKey failed\n");
return 1;
}
printf("Key: "); for(i=0; i<cipher->key_len; ++i) { printf("%02x", key[i]); } printf("\n");
printf("IV: "); for(i=0; i<cipher->iv_len; ++i) { printf("%02x", iv[i]); } printf("\n");
return 0;
}
Example usage:
gcc b2k.c -o b2k -lcrypto -g
./b2k
Key: 5f4dcc3b5aa765d61d8327deb882cf992b95990a9151374abd8ff8c5a7a0fe08
IV: b7b4372cdfbcb3d16a2631b59b509e94
Which generates the same key as this OpenSSL command line:
openssl enc -aes-256-cbc -k password -nosalt -p < /dev/null
key=5F4DCC3B5AA765D61D8327DEB882CF992B95990A9151374ABD8FF8C5A7A0FE08
iv =B7B4372CDFBCB3D16A2631B59B509E94
OpenSSL 1.1.0c changed the digest algorithm used in some internal components. Formerly MD5 was used, and 1.1.0 switched to SHA-256. Be careful that the change does not affect you, both in EVP_BytesToKey and in commands like openssl enc.
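If you need to reproduce the old MD5-based derivation with a newer OpenSSL (for instance to check the example above), the digest can be pinned explicitly with the -md option:
openssl enc -aes-256-cbc -k password -nosalt -md md5 -p < /dev/null
This should print the same MD5-derived key and IV as shown above, even on OpenSSL 1.1.0 and later.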
If anyone is looking to implement the same in Swift, I converted EVP_BytesToKey to Swift:
/*
- parameter keyLen: keyLen
- parameter ivLen: ivLen
- parameter digest: digest e.g "md5" or "sha1"
- parameter salt: salt
- parameter data: data
- parameter count: count
- returns: key and IV respectively
*/
open static func evpBytesToKey(_ keyLen:Int, ivLen:Int, digest:String, salt:[UInt8], data:Data, count:Int)-> [[UInt8]] {
let saltData = Data(bytes: UnsafePointer<UInt8>(salt), count: Int(salt.count))
var both = [[UInt8]](repeating: [UInt8](), count: 2)
var key = [UInt8](repeating: 0,count: keyLen)
var key_ix = 0
var iv = [UInt8](repeating: 0,count: ivLen)
var iv_ix = 0
var nkey = keyLen;
var niv = ivLen;
var i = 0
var addmd = 0
var md:Data = Data()
var md_buf:[UInt8]
while true {
addmd = addmd + 1
md.append(data)
md.append(saltData)
if(digest=="md5"){
md = NSData(data:md.md5()) as Data
}else if (digest == "sha1"){
md = NSData(data:md.sha1()) as Data
}
for _ in 1...(count-1){
if(digest=="md5"){
md = NSData(data:md.md5()) as Data
}else if (digest == "sha1"){
md = NSData(data:md.sha1()) as Data
}
}
md_buf = Array (UnsafeBufferPointer(start: md.bytes, count: md.count))
// md_buf = Array(UnsafeBufferPointer(start: md.bytes.bindMemory(to: UInt8.self, capacity: md.count), count: md.length))
i = 0
if (nkey > 0) {
while(true) {
if (nkey == 0){
break
}
if (i == md.count){
break
}
key[key_ix] = md_buf[i];
key_ix = key_ix + 1
nkey = nkey - 1
i = i + 1
}
}
if (niv > 0 && i != md_buf.count) {
while(true) {
if (niv == 0){
break
}
if (i == md_buf.count){
break
}
iv[iv_ix] = md_buf[i]
iv_ix = iv_ix + 1
niv = niv - 1
i = i + 1
}
}
if (nkey == 0 && niv == 0) {
break
}
}
both[0] = key
both[1] = iv
return both
}
I use CryptoSwift for the hashing.
This is a much cleaner approach, as Apple does not recommend OpenSSL on iOS.
UPDATE: Swift 3
Here is a version for mbedTLS / PolarSSL - tested and working.
typedef int bool;
#define false 0
#define true (!false)
//------------------------------------------------------------------------------
static bool EVP_BytesToKey( const unsigned int nDesiredKeyLen, const unsigned char* salt,
const unsigned char* password, const unsigned int nPwdLen,
unsigned char* pOutKey, unsigned char* pOutIV )
{
// This is a re-implementation of openssl's password to key & IV routine for mbedtls.
// (See openssl apps/enc.c and /crypto/evp/evp_key.c.) It is not any kind of
// standard (e.g. PBKDF2), and it only uses an iteration count of 1, so it's
// pretty crappy. MD5 is used as the digest in OpenSSL 1.0.2; 1.1 and later
// use SHA-256. Since this is for an embedded system, I figure you know what you've
// got, so I made it compile-time configurable.
//
// The signature has been re-jiggered to make it less general.
//
// See: https://wiki.openssl.org/index.php/Manual:EVP_BytesToKey(3)
// And: https://www.cryptopp.com/wiki/OPENSSL_EVP_BytesToKey
#define IV_BYTE_COUNT 16
#if BTK_USE_MD5
# define DIGEST_BYTE_COUNT 16 // MD5
#else
# define DIGEST_BYTE_COUNT 32 // SHA
#endif
bool bRet;
unsigned char md_buf[ DIGEST_BYTE_COUNT ];
mbedtls_md_context_t md_ctx;
bool bAddLastMD = false;
unsigned int nKeyToGo = nDesiredKeyLen; // 32, typical
unsigned int nIVToGo = IV_BYTE_COUNT;
mbedtls_md_init( &md_ctx );
#if BTK_USE_MD5
int rc = mbedtls_md_setup( &md_ctx, mbedtls_md_info_from_type( MBEDTLS_MD_MD5 ), 0 );
#else
int rc = mbedtls_md_setup( &md_ctx, mbedtls_md_info_from_type( MBEDTLS_MD_SHA256 ), 0 );
#endif
if (rc != 0 )
{
fprintf( stderr, "mbedutils_md_setup() failed -0x%04x\n", -rc );
bRet = false;
goto exit;
}
while( 1 )
{
mbedtls_md_starts( &md_ctx ); // start digest
if ( bAddLastMD == false ) // first time
{
bAddLastMD = true; // do it next time
}
else
{
mbedtls_md_update( &md_ctx, &md_buf[0], DIGEST_BYTE_COUNT );
}
mbedtls_md_update( &md_ctx, &password[0], nPwdLen );
mbedtls_md_update( &md_ctx, &salt[0], 8 );
mbedtls_md_finish( &md_ctx, &md_buf[0] );
//
// Iteration loop here in original removed as unused by "openssl enc"
//
// Following code treats the output key and iv as one long, concatenated buffer
// and smears as much digest across it as is available. If not enough, it takes the
// big, enclosing loop, makes more digest, and continues where it left off on
// the last iteration.
unsigned int ii = 0; // index into mb_buf
if ( nKeyToGo != 0 ) // still have key to fill in?
{
while( 1 )
{
if ( nKeyToGo == 0 ) // key part is full/done
break;
if ( ii == DIGEST_BYTE_COUNT ) // ran out of digest, so loop
break;
*pOutKey++ = md_buf[ ii ]; // stick byte in output key
nKeyToGo--;
ii++;
}
}
if ( nIVToGo != 0 // still have to fill up IV
&& // and
ii != DIGEST_BYTE_COUNT // have some digest available
)
{
while( 1 )
{
if ( nIVToGo == 0 ) // iv is full/done
break;
if ( ii == DIGEST_BYTE_COUNT ) // ran out of digest, so loop
break;
*pOutIV++ = md_buf[ ii ]; // stick byte in output IV
nIVToGo--;
ii++;
}
}
if ( nKeyToGo == 0 && nIVToGo == 0 ) // output full, break main loop and exit
break;
} // outermost while loop
bRet = true;
exit:
mbedtls_md_free( &md_ctx );
return bRet;
}
If anyone passing through here is looking for a working, performant reference implementation in Haskell, here it is:
import Crypto.Hash
import qualified Data.ByteString as B
import Data.ByteArray (convert)
import Data.Monoid ((<>))
evpBytesToKey :: HashAlgorithm alg =>
Int -> Int -> alg -> Maybe B.ByteString -> B.ByteString -> (B.ByteString, B.ByteString)
evpBytesToKey keyLen ivLen alg mSalt password =
let bytes = B.concat . take required . iterate go $ hash' passAndSalt
(key, rest) = B.splitAt keyLen bytes
in (key, B.take ivLen rest)
where
hash' = convert . hashWith alg
required = 1 + ((keyLen + ivLen - 1) `div` hashDigestSize alg)
passAndSalt = maybe password (password <>) mSalt
go = hash' . (<> passAndSalt)
It uses hash algorithms provided by the cryptonite package. The arguments are desired key and IV size in bytes, the hash algorithm to use (like e.g. (undefined :: MD5)), optional salt and the password. The result is a tuple of key and IV.

GIF LZW decompression

I am trying to implement a simple GIF reader in C++.
I am currently stuck on decompressing the image data.
If an image includes a clear code, my decompression algorithm fails.
After the clear code I rebuild the code table and reset the code size to MinimumLzwCodeSize + 1.
Then I read the next code and add it to the index stream. The problem is that after clearing, the next codes include values greater than the size of the current code table.
For example, the sample file from Wikipedia, rotating-earth.gif, has a code value of 262, but the global color table has only 256 entries. How do I handle this?
I implemented the LZW decompression according to the GIF spec.
Here is the main part of the decompression code:
int prevCode = GetCode(ptr, offset, codeSize);
codeStream.push_back(prevCode);
while (true)
{
auto code = GetCode(ptr, offset, codeSize);
//
//Clear code
//
if (code == IndexClearCode)
{
//reset codesize
codeSize = blockA.LZWMinimumCodeSize + 1;
currentNodeValue = pow(2, codeSize) - 1;
//reset codeTable
codeTable.resize(colorTable.size() + 2);
//read next code
prevCode = GetCode(ptr, offset, codeSize);
codeStream.push_back(prevCode);
continue;
}
else if (code == IndexEndOfInformationCode)
break;
//exists in dictionary
if (codeTable.size() > code)
{
if (prevCode >= codeTable.size())
{
prevCode = code;
continue;
}
for (auto c : codeTable[code])
codeStream.push_back(c);
newEntry = codeTable[prevCode];
newEntry.push_back(codeTable[code][0]);
codeTable.push_back(newEntry);
prevCode = code;
if (codeTable.size() - 1 == currentNodeValue)
{
codeSize++;
currentNodeValue = pow(2, codeSize) - 1;
}
}
else
{
if (prevCode >= codeTable.size())
{
prevCode = code;
continue;
}
newEntry = codeTable[prevCode];
newEntry.push_back(codeTable[prevCode][0]);
for (auto c : newEntry)
codeStream.push_back(c);
codeTable.push_back(newEntry);
prevCode = codeTable.size() - 1;
if (codeTable.size() - 1 == currentNodeValue)
{
codeSize++;
currentNodeValue = pow(2, codeSize) - 1;
}
}
}
Found the solution.
It is called a deferred clear code. So when I check whether the codeSize needs to be incremented, I also need to check whether the codeSize is already at the maximum (12), as it is possible to get codes of the maximum code size. See spec-gif89a.txt.
if (codeTable.size() - 1 == currentNodeValue && codeSize < 12)
{
codeSize++;
currentNodeValue = (1 << codeSize) - 1;
}
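For context, a GetCode-style helper reads codeSize bits LSB-first from the packed LZW byte stream (after the data sub-blocks have been concatenated). A hypothetical sketch, not the actual implementation used above:
// Read `codeSize` bits starting at bit position `offset` (GIF packs LZW codes
// least significant bit first), advancing `offset` as it goes.
int GetCode(const unsigned char *ptr, int &offset, int codeSize)
{
    int code = 0;
    for (int i = 0; i < codeSize; ++i, ++offset)
    {
        int bit = (ptr[offset / 8] >> (offset % 8)) & 1;
        code |= bit << i;
    }
    return code;
}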

Why does one of the enums in my program have the strange value 131075?

I am debugging this code:
result = conn_process(conn, 1, 0);
if (result == CG_ERR_OK) continue;
if (result == CG_ERR_TIMEOUT)
{
break; // i'm here!
}
As I'm at break; in the debugger, I assume that result == CG_ERR_TIMEOUT is true. In Locals I see:
result 131075 unsigned int
In Watch I see:
CG_ERR_TIMEOUT error: identifier 'CG_ERR_TIMEOUT' out of scope
Going to the definition shows me this code:
enum {
CG_ERR_OK = 0,
CG_ERR_INTERNAL = CG_RANGE_BEGIN,
CG_ERR_INVALIDARGUMENT,
CG_ERR_UNSUPPORTED,
CG_ERR_TIMEOUT,
CG_ERR_MORE,
CG_ERR_INCORRECTSTATE,
CG_ERR_DUPLICATEID,
CG_ERR_BUFFERTOOSMALL,
CG_ERR_OVERFLOW,
CG_ERR_UNDERFLOW,
CG_RANGE_END
};
So I just wonder why CG_ERR_TIMEOUT == 131075. Where does this strange magic number come from?
Because CG_RANGE_BEGIN is 131072 (which is 0x20000).
enum {
CG_ERR_OK = 0,
CG_ERR_INTERNAL = CG_RANGE_BEGIN, // == 131072
From there on, every enum value is the previous one plus 1:
CG_ERR_INVALIDARGUMENT, // == 131072 + 1 = 131073
CG_ERR_UNSUPPORTED, // == 131073 + 1 = 131074
CG_ERR_TIMEOUT, // == 131074 + 1 = 131075
CG_ERR_MORE, // etc.
CG_ERR_INCORRECTSTATE,
CG_ERR_DUPLICATEID,
CG_ERR_BUFFERTOOSMALL,
CG_ERR_OVERFLOW,
CG_ERR_UNDERFLOW,
CG_RANGE_END
};
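The same auto-increment rule is easy to reproduce in isolation; a minimal sketch with made-up enumerator names:
#include <cstdio>
// Unscoped enumerators without an initializer take the previous value plus 1.
enum {
    ERR_OK = 0,
    ERR_BASE = 131072, // explicit range base, like CG_RANGE_BEGIN (0x20000)
    ERR_A,             // 131073
    ERR_B,             // 131074
    ERR_C              // 131075
};
int main() {
    std::printf("%d %d %d\n", ERR_A, ERR_B, ERR_C); // prints: 131073 131074 131075
}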