How to check if EEPROM.read(0) contains a value - c++

I would like to read data from EEPROM.read(0); on my ESP32. But before reading I would like to check whether there is a value at position 0 and, if not, fall back to a predefined default. I did some searching, but I could not find what EEPROM.read(0); returns when no value has been written, so that I can write an if statement for it.

Related

I have 2 sets of data mapped to each other. Is there a way, based on the value of one variable, to check the corresponding variable for its value?

So I have 64 variables called S0O through S63O. These correspond to the squares on a chess board. Now I have a function that walks through and marks each of these variables as TRUE (occupied) or FALSE (empty). Now I'm trying to take a variable (BP1, aka Black Pawn 1) which has a value from 0 through 63, and I want to check if the square in front of it is empty, i.e. (BP1 - 8). I was recommended to try an unordered map, which I tried, but I could not find a way to take a variable's numeric value and run it through the keys to find the output I am looking for. I am open to any ideas.
As François Andrieux pointed out, if I use an array to put all of these in a grid, I can move about and edit specific "squares" inside the board. So if my input is stored in the variable BP1P = 48; then I can take this value and check whether the square in front is occupied with something like (board[BP1P - 8] == 'O').

SQLITE CHECK constraint for hex values

I want to store IP addresses, using C++, in a SQLite3 DB in hex format. I am using TEXT as the format for storing hex values. I want to perform a check on the value being inserted into the DB. The range I want to check for is 0x00000000 - 0xFFFFFFFF. For the INTEGER type you can check with - CHECK (num BETWEEN 0 AND 100). Is there any way to add a check constraint for hex values?
If there is a smarter way to store IP addresses in SQLite, please share it with me.
Thanks
I think you have two main choices: (a) store as hex (i.e. text) and check "hex conformity", or (b) store as integer and print it as hex when reading data.
There may be several reasons for preferring the one over the other, e.g. if the application actually provides and receives integer values. Anyway, some examples for option (a) and option (b).
Option (a) - store as text and check hex conformity
Probably the simplest way is to use a CHECK based on GLOB, as remarked by CL:
value TEXT CONSTRAINT "valueIsHex" CHECK(value GLOB "0x[a-fA-F0-9][a-fA-F0-9][a-fA-F0-9][a-fA-F0-9][a-fA-F0-9][a-fA-F0-9][a-fA-F0-9][a-fA-F0-9]")
If logic goes beyond that supported by GLOB, you could install a user defined function, either a general regexp() function or a custom function for your specific needs. This could be the case, if you want to store a complete IP-address in one field and still want to do a conformity check. For help on general regexp() function, confer this SO answer. For help on user defined functions confer, for example, this StackOverflow answer.
Option (b) - store as int and print as hex
If your application is actually working with integers, then change your db schema to integer and add a check like:
value INTEGER CONSTRAINT "valueIsValid" CHECK(value <= 0xFFFFFFFF)
For convenience, you can still print the value as hex using a query (or defining a corresponding view) like the following:
select printf('%08X', value) from theTable

Sequence generator with aggregator

Data is being passed through an aggregator transformation and grouped by customer account number to ensure I have distinct values. This is then passed to an expression transformation. I have a sequence generator transformation linked to the expression transformation - it never touches the aggregator. A variable in the expression is populated with the sequence number.
The problem I am running into is that the variable is coming up with a value in excess of the sequence number - e.g. if there are 499 rows, the value of the variable is 501. It's as though the value assigned to the variable ignores the grouping and returns a non-distinct count.
Any idea what's happening here?
edit: More info on how this is being done. (Can't screenshot as it's too big.)
Flow 1 takes a list of account numbers, service numbers and destination systems and uses a router to sort them into flat files by destination system.
123456|0299999999|SYSA
123456|0299999999|SYSB
123457|0299999998|SYSA
123457|0299999998|SYSB
123457|0299999997|SYSA
123457|0299999997|SYSB
Some systems don't want the service number and some do. For those that do, it's a simple exercise of routing them through an expression transformation to set the variable using the sequence number. So the required output for SYSA would look like:
123456|0299999999|SYSA
123457|0299999998|SYSA
123457|0299999997|SYSA
And the expression transformation sets the variable using:
SETVARIABLE($$SYSA, SEQUENCE_NO)
In a second flow, I construct header and trailer files. For the trailer record count, I simply output the current value of $$SYSA like so:
SETVARIABLE($$SYSA, NULL)
I use Target Load Plan to execute the second flow only after the first completes.
I can demonstrate that using the variable in this way works, because the workflow outputs the correct values every time - I can alter the source data to increase or decrease the number of rows, and the value output for $$SYSA in the second flow is correct the first time (i.e. it can't be a persisted value).
Where this is falling down is when the destination system only wants distinct account numbers and no service numbers. The required output for SYSB would be:
123456|SYSB
123457|SYSB
i.e. the third row for SYSB is discarded because the account number is not unique. I'm trying to achieve this by putting an aggregator between the router and the expression, and grouping by the account number. However, the $$SYSB variable isn't being assigned correctly in this case.
It appears Informatica only updates the value of the variable if it is higher than the persistent value stored in the repository. So if a successful run persists a value of 501 to the repository, that value is picked up again at the start of the next run and is only overridden if the new value is higher. I worked around it by declaring a starting value of 0 in the parameter file.

A way to retrieve data by address (c++)

Using C++, is it possible to store data to a file, and retrieve that data by address for quicker access? I want to get around having to parse or iterate large files of data, with the ability to gain direct access to a subset of that data. In your answers, it does not matter how the data is stored; use whatever works best with the answer you have.
Yes. Assuming you're using iostreams, you can use tellg and tellp to retrieve the current get and put (i.e., read and write) locations respectively. You can later feed the same value back to seekg or seekp to get back to the same location (again, for reading or writing respectively).
You can use these to (for one example) create an index into a file. Before writing each record to your primary data file, you'd use tellp to retrieve the current location. Then you'd store the data to the data file, and save the value tellp returned into the index file. Depending on what sort of index you want, that might just contain a series of locations, so you can seek directly to record #N in the data file (even if the records are of different sizes).
Alternatively, you might store the data for some key field in the index file. For example, you might have a main data file with a set of records about people. Then you might build a number of indices into that, one with last names and a location for each, another with birthdays and a location for each, and so on, so you can search by name or birthday (or do an intersection between them to support things like people older than 18 with a last name starting with "M", "N" or "O").

Making an index-creating class

I'm busy programming a class that creates an index out of a text file (ASCII/binary).
My problem is that I don't really know how to start. I have already made some attempts, but none really worked well for me.
I do NOT need to find the address of the file via the MFT. I just want to load the file and find things much faster by looking up the key in the index file and jumping to the address it gives in the text file.
The index-file should be built up as follows:
KEY ADDRESS
1 0xABCDEF
2 0xFEDCBA
. .
. .
We have a text-file with the following example value:
1, 8752 FW,
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++,
******************************************************************************,
------------------------------------------------------------------------------;
I hope that this explains my question a bit better.
Thanks!
It seems to me that all your class needs to do is store an array of pointers or file start offsets to the key locations in the file.
It really depends on what your Key locations represent.
I would suggest that you access the file through your class using some public methods. You can then more easily tie in Key locations with the data written.
For example, your Key locations may be where each new data block written into the file starts. E.g. first block 1000 bytes, key location 0; second block 2500 bytes, key location 1000; third block 550 bytes, key location 3500; the next block would then start at 4050 - all assuming that 0 is the first byte.
Store the Key values in a variable length array and then you can easily retrieve the starting point for a data block.
If your Key point is signified by some key character then you can use the same class, but with a slight change to store where the Key value is stored. The simplest way is to step through the data until the key character is located, counting the number of characters checked as you go. The count is then used to produce your key location.
Your code snippet isn't so much of an idea as it is the functionality you wish to have in the end.
Recognize that "indexing" merely means "remembering" where things are located. You can accomplish this using any data structure you wish... B-Tree, Red/Black tree, BST, or more advanced structures like suffix trees/suffix arrays.
I recommend you look into such data structures.
edit:
With the new information, I would suggest making your own key/value lookup. Build an array of keys, and associate their values somehow. This may mean building a class or struct that contains both the key and the value, or instead contains the key and a pointer to a struct or class with the value, etc.
Once you have done this, sort the key array. Now, you have the ability to do a binary search on the keys to find the appropriate value for a given key.
You could build a hash table in a similar manner, or a BST or a similar structure like I mentioned earlier.
I still don't really understand the question (work on your question asking skillz), but as far as I can tell the algorithm will be:
scan the file linearly, the first value up to the first comma (',') is a key, probably. All other keys occur wherever a ';' occurs, up to the next ',' (you might need to skip linebreaks here). If it's a homework assignment, just use scanf() or something to read the key.
print out the key and byte position you found it at to your index file
AFAIUI that's the algorithm, I don't really see the problem here?