Converting bits from one array to another? - c++

I am building a library for this vacuum fluorescent display. It's a very simple interface and I have all the features working.
The problem I am having now is that I am trying to make the code as compact as possible, but the custom character loading is not intuitive: the bits in the font bitmap map to completely different bits and bytes on the display itself. From the IEE VFD datasheet, when you scroll down you can see that the bits are mapped all over the place.
The code I have so far works like so:
// input the font bitmap, the bit from that line of the bitmap and the bit it needs to go to
static unsigned char VFD_CONVERT(const unsigned char* font, unsigned char from, unsigned char to) {
    return ((*font >> from) & 0x01) << to;
    //return (*font & (1 << from)) ? (1<<to) : 0;
}
// macros to make it easier to read and see
#define CM_01 font+0, 4
#define CM_02 font+0, 3
#define CM_03 font+0, 2
#define CM_04 font+0, 1
#define CM_05 font+0, 0
// One of the 7 lines I have to send
o = VFD_CONVERT(CM_07,6) | VFD_CONVERT(CM_13,5) | VFD_CONVERT(CM_30,4) | VFD_CONVERT(CM_23,3) | VFD_CONVERT(CM_04,2) | VFD_CONVERT(CM_14,1) | VFD_CONVERT(CM_33,0);
send(o);
This is obviously not all the code. You can see the rest in my Google Code repository, but it should give you some idea of what I am doing.
So my question is: is there a better way to optimize this or to do the translation?
Using the first return statement in VFD_CONVERT makes GCC go crazy (-O1, -O2, -O3, and -Os all do it) and expands the code to 1400 bytes. If I use the commented-out return statement with the inline if instead, it shrinks to 800 bytes. I have been going through the generated assembly, and currently I am tempted to just write it all in asm, as I am starting to think the compiler doesn't know what it is doing. However, maybe it's me who doesn't know what I am doing and I am just confusing the compiler.
As a side note, the code works: both return statements upload the custom character and it gets displayed (with a weird bug where I have to send it twice, but that's a separate issue).

First of all, you should file a bug report against gcc with a minimal example, since -Os should never generate larger code than -O0. Then, I suggest storing the permutation in a table, like this
const char perm[][7] = {{ 7, 13, 30, 23, 4, 14, 33 }, ...
with special values indicating a fixed zero or one bit. That'll also make your code more readable.
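For instance, a minimal sketch of that table-driven approach, assuming the CM_xx numbering keeps following the pattern visible in the macros above (CM_n maps to font byte (n-1)/5, bit 4-((n-1)%5)); the special markers and the row contents here are illustrative, not taken from the datasheet:
/* Hypothetical sketch: 0x00 marks a forced-zero output bit, 0xFF a forced-one
 * bit; any other value is a CM number as used in the question's macros.     */
static const unsigned char perm[7][8] = {
    { 0x00, 7, 13, 30, 23, 4, 14, 33 },   /* the line shown in the question   */
    /* ...remaining six rows, filled in from the datasheet...                 */
};

static unsigned char vfd_line(const unsigned char *font, const unsigned char *row)
{
    unsigned char out = 0;
    for (unsigned char bit = 0; bit < 8; ++bit) {
        unsigned char cm = row[bit];
        if (cm == 0x00) continue;                              /* fixed 0 bit */
        if (cm == 0xFF) { out |= 1u << (7 - bit); continue; }  /* fixed 1 bit */
        unsigned char src  = cm - 1;
        unsigned char byte = src / 5;                          /* which font byte   */
        unsigned char from = 4 - src % 5;                      /* which bit in it   */
        out |= ((font[byte] >> from) & 1u) << (7 - bit);
    }
    return out;
}

/* usage: for (unsigned char i = 0; i < 7; ++i) send(vfd_line(font, perm[i])); */
That way the seven per-line OR-chains collapse into one small loop over the table.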

Related

How to deal with a 128-bit variable in the MinGW 32-bit compiler for encryption (Diffie-Hellman algorithm) in Qt

I want to use the equation below in my code:
A = g^a mod p; // g raised to the power a, modulo p
(something like 2^5 % 3 = 32 % 3 = 2)
(This equation looks like the Diffie-Hellman algorithm for security.)
Where:
^ is exponentiation (power),
g is the fixed number 0x05,
a is a 128-bit (16-byte) randomly generated number,
p is a fixed 128-bit (16-byte) hex number, something like 0xD4A283974897234CE908B3478387A3.
I am using:
Qt 4.8.7
Compiler: MinGW32 (checked with the Boost library, version 1.70)
The solutions I found that didn't work for me are listed below:
One can use __int128, but that requires a recent GCC or the 64-bit MinGW compiler, neither of which I am using now.
The latest versions of Qt have a QSslDiffieHellmanParameters class, but again that is not available in our Qt version.
Some libraries like boost/multiprecision/cpp_int.hpp (Boost 1.70) do have data types such as int128_t and int256_t, but due to a compiler issue or something else we are not able to store a 128-bit number, meaning
if I do:
int128_t ptval128 = 0xAB1232423243434343BAE3453345E34B;
cout << "ptval128 = " << std::hex << ptval128 << endl;
// prints only 0xAB12324232434343 - half the digits
I tried using a BigInt class, which is much more useful, but again
5^(128-bit number) is way too big; it takes hours to compute
(I waited 1 hour and 16 minutes and then killed the application).
int myGval = 0x05;
128_bit_data_type myPVal= 0xD4A283974897234CE908B3478387A3;
128_bit_data_type 128_bit_variable = 128_bit_random_data;
myVal = (myGval)^(128_bit_variable) % (myPVal);
That is not how to do modular exponentiation! The first problem is that 5 ^ 128_bit_variable is huge, so big that it won't fit into memory on any computer available today. To keep the required storage space within bounds, you have to take the remainder % myPVal after every operation.
The second problem is that you can't compute 5 ^ 128_bit_variable simply by multiplying 5 by itself 128_bit_variable times -- that would take longer than the age of the universe. You need to use an exponentiation ladder, which requires just 128 squarings and at most 128 multiplications. See this Wikipedia article for the details. In the end, the operation 5 ^ 128_bit_number should take a fraction of a second.
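For concreteness, a minimal sketch of such a ladder (square-and-multiply) using the boost::multiprecision::cpp_int type the question already has available (Boost 1.70); the function and variable names here are illustrative, not taken from the question's code:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using boost::multiprecision::cpp_int;

// Computes (base ^ exp) % mod, reducing after every step so intermediate
// values never grow much beyond twice the size of mod.
static cpp_int mod_pow(cpp_int base, cpp_int exp, const cpp_int& mod)
{
    cpp_int result = 1;
    base %= mod;
    while (exp > 0) {
        if ((exp & 1) != 0)              // current exponent bit set?
            result = (result * base) % mod;
        base = (base * base) % mod;      // square
        exp >>= 1;
    }
    return result;
}

int main()
{
    // 128-bit constants are written as strings, which avoids the truncation
    // seen with plain integer literals.
    const cpp_int g = 5;
    const cpp_int p("0xD4A283974897234CE908B3478387A3");
    const cpp_int a("0xAB1232423243434343BAE3453345E34B");
    std::cout << std::hex << mod_pow(g, a, p) << std::endl;
    return 0;
}
Boost.Multiprecision also documents a ready-made powm() function that does exactly this, so you can call that directly instead of hand-rolling the loop.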

How to pass a 48-bit MAC address as an argument to a function through a uint8_t variable?

Recently, I started working on a project related to EMAC and came across a few doubts and blockages with respect to implementation, so I decided to post my question here to get some advice and suggestions from experienced people.
At present, I am working on interfacing the EMAC-DM9161A module with my SAM3X - Taiji Uino board for high-speed Ethernet communication. I am using the library developed by Palliser which is uploaded on GitHub as elechouse/EMAC-Demo. In the source code - ethernet_phy.c, I came across this function to initialize the DM9161A PHY component:
uint8_t ethernet_phy_init(Emac *p_emac, uint8_t uc_phy_addr, uint32_t mck);
Problem: The argument uint8_t uc_phy_addr is an 8-bit register through which I want to pass a 48-bit MAC address such as 70-62-D8-28-C2-8E. I understand that I could use two 32-bit registers: store the first 32 bits of the MAC address, i.e. 70-62-D8-28, in one register and the remaining 16 bits, i.e. C2-8E, in another. However, I cannot do this, since I need to use the above ethernet_phy_init function, in which a uint8_t is used to pass the 48-bit MAC address. So, I'd like to know how to make this happen.
Another question: I executed some code by way of trial and error to understand this, and came across some doubts; here is the code:
#include <cstdint>
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    uint8_t phy_addr = 49;   // assign the value 49 to an 8-bit unsigned variable
    int8_t phy_addr1 = 49;
    int phy_addr2 = 49;
    cout << phy_addr;
    cout << phy_addr1;
    cout << phy_addr2;
    getchar();
    return 0;
}
Output Results:
1
1
49
So my doubt is: why is the output displayed as an ASCII character whenever I use an 8-bit variable to store the value 49, but when I use a normal 32-bit int variable to store 49, it displays the decimal value 49? Why does this happen? And lastly, how do I store a MAC address in an 8-bit register?
About the second question:
uint8_t/int8_t is the same as unsigned/signed char, and cout will handle it as a char. Use static_cast<int> to print it as a number.
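For illustration, a minimal standalone example of that cast (a hypothetical snippet, not from the asker's project):
#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t phy_addr = 49;
    std::cout << phy_addr << '\n';                    // prints "1" (treated as a char)
    std::cout << static_cast<int>(phy_addr) << '\n';  // prints "49"
    return 0;
}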
About the first question:
I have never worked with EMAC, but judging by this example the MAC should be set this way:
#define ETHERNET_CONF_ETHADDR0 0x00
#define ETHERNET_CONF_ETHADDR1 0x04
#define ETHERNET_CONF_ETHADDR2 0x25
#define ETHERNET_CONF_ETHADDR3 0x1C
#define ETHERNET_CONF_ETHADDR4 0xA0
#define ETHERNET_CONF_ETHADDR5 0x02
static uint8_t gs_uc_mac_address[] =
{ ETHERNET_CONF_ETHADDR0, ETHERNET_CONF_ETHADDR1, ETHERNET_CONF_ETHADDR2,
ETHERNET_CONF_ETHADDR3, ETHERNET_CONF_ETHADDR4, ETHERNET_CONF_ETHADDR5
};
emac_options_t emac_option;
memcpy(emac_option.uc_mac_addr, gs_uc_mac_address, sizeof(gs_uc_mac_address));
emac_dev_init(EMAC, &gs_emac_dev, &emac_option);
Regarding your second question: the first two variables are 8-bit (one signed and one unsigned), so the ostream assumes they are chars (also 8 bits wide) and displays the char representation for them ("1" = ASCII 49).
As for the original question, I browsed the Atmel sources a little and the MAC address has nothing to do with ethernet_phy_init (it all happens at a much lower level):
uc_phy_addr - seems to be the interface index
mck - seems to be a timer-related value.
I figured it out, so I am going to answer my own question for those beginners like me who may encounter this same doubt.
Answer: As suggested by the members in the comments (they were right, and thanks to them), the function parameter uint8_t uc_phy_addr represents the 5-bit PHY chip register address, not the MAC address. Hence the address is set to 0x01 to enable only the receive pin, keeping the other 4 bits 0. The 4th bit is the CSR bit, which is also set to 0 in this case (for more details, please refer to the DM9161A datasheet).

How to print the degree symbol on the window using Qt 5 (QtQuick 2.1) and above

When I was using Qt up to 4.8 (Qt Quick 1.1) for the GUI, I was able to print the degree symbol with \260, but when things got upgraded to Qt 5 and above this stopped working. I searched the net and found many relevant links such as http://www.fileformat.info/info/unicode/char/00b0/index.htm, and I tried them, but no help. Do I need to include some library for using the UTF format, or is the problem something else? Please, someone help. What should I do?
#Revised,
Here is a description of what is being done.
First I am storing the printable text in the string text.
In the C++ function:
sprintf(text, "%02d\260 %03d\260 ",latD, longD);
QString positionText(text.c_str());
return positionText;
And then positionText is used in the QML file to display it on the window.
So, can someone please tell me what I need to do to get the degree symbol in the display?
Thanks.
The problem is simple: you used \260, most probably inside an ANSI C-string (const char []). In such cases Qt has to use some codec to convert this to Unicode characters. When you changed the Qt version, the default codec changed, and this is why it stopped working.
Anyway, your approach is wrong. You shouldn't use C-strings, which are codec dependent (usually this leads to this kind of problem). You can define a QChar const as QChar(0260), or the best approach is to use tr and provide a translation.
It would be best if you gave a representative example of a string with the degree character; then someone could provide you with the best solution.
Edit:
I would change your code like this:
const QChar degreeChar(0260); // octal value
return QString("%1%3 %2%3").arg(latD, 2, 10, '0').arg(longD, 3, 10, '0').arg(degreeChar);
or add a translation which will handle this line:
return tr("%1degree %2degree").arg(latD, 2, 10, '0').arg(longD, 3, 10, '0');
Note that the translation for this line always has to be added, no matter what the current locale is.
Try
return QString::fromLatin1(text);
or, if that doesn't work, another static QString::fromXXX method.
Qt 5 changed Qt's default codec from Latin-1 to UTF-8, as described here:
https://www.macieira.org/blog/2012/05/source-code-must-be-utf-8-and-qstring-wants-it/
Latin-1 and Unicode both use 176 (0xB0 or 0260) as the degree symbol, so your usage of it coincidentally worked, since it was interpreted as Latin-1 and converted to the same value in Unicode.
That first line could be changed to:
sprintf(text, "%02d\302\260 %03d\302\260 ",latD, longD);
As mentioned before, going directly to a QString is indeed better, but if you have to go through a std::string, you can simply substitute the UTF-8 encoding of code point 176: the lower 6 bits (110000) get 10 prepended to form the second byte, and the remaining upper bits (10) get 110000 prepended to form the first byte. This becomes: \302\260.
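As a small illustrative sketch of that encoding step (the helper name is made up, not part of any existing code):
#include <cstdio>
#include <string>

// Encodes a code point in the range 0x80-0x7FF as two UTF-8 bytes; for
// U+00B0 (176, the degree sign) this yields 0xC2 0xB0, i.e. "\302\260".
static std::string utf8_two_bytes(unsigned int cp)
{
    std::string s;
    s += static_cast<char>(0xC0 | (cp >> 6));    // 110xxxxx: upper bits
    s += static_cast<char>(0x80 | (cp & 0x3F));  // 10xxxxxx: lower 6 bits
    return s;
}

int main()
{
    const std::string degree = utf8_two_bytes(0xB0);
    std::printf("%02d%s %03d%s\n", 12, degree.c_str(), 123, degree.c_str());
    return 0;
}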
To easily print angles with degree symbols in the console, try this:
#include <QDebug>
double v = 7.0589;
qDebug().noquote() << "value=" << v << QString(248);
Console output:
value= 7.0589 °
This works out-of-the-box under Windows.

Can GCC print out intermediate results?

Check the code below:
#include <avr/io.h>
const uint16_t baudrate = 9600;
void setupUART( void ) {
    uint16_t ubrr = ( ( F_CPU / ( 16 * (float) baudrate ) ) - 1 + .5 );
    UBRRH = ubrr >> 8;
    UBRRL = ubrr & 0xff;
}

int main( void ) {
    setupUART();
}
This is the command used to compile the code:
avr-gcc -g -DF_CPU=4000000 -Wall -Os -Werror -Wextra -mmcu=attiny2313 -Wa,-ahlmns=project.lst -c -o project.o project.cpp
ubrr is calculated by the compiler as 25, so far so good. However, to check what the compiler calculated, I have to peek into the disassembly listing.
000000ae <setupUART()>:
ae: 12 b8 out UBRRH, r1 ; 0x02
b0: 89 e1 ldi r24, 0x19 ; 25
b2: 89 b9 out UBRRL, r24 ; 0x09
b4: 08 95 ret
Is it possible to make avr-gcc print out the intermediate result at compile time (or pull the info from the .o file), so that when I compile the code it prints a line like (uint16_t) ubrr = 25 or similar? That way I can do a quick sanity check on the calculation and settings.
GCC has command line options to request that it dump out its intermediate representation after any stage of compilation. The "tree" dumps are in pseudo-C syntax and contain the information you want. For what you're trying to do, the -fdump-tree-original and -fdump-tree-optimized dumps happen at useful points in the optimization pipeline. I don't have an AVR compiler to hand, so I modified your test case to be self-contained and compilable with the compiler I do have:
typedef unsigned short uint16_t;
const int F_CPU = 4000000;
const uint16_t baudrate = 9600;
extern uint16_t UBRRH, UBRRL;
void
setupUART(void)
{
    uint16_t ubrr = ((F_CPU / (16 * (float) baudrate)) - 1 + .5);
    UBRRH = ubrr >> 8;
    UBRRL = ubrr & 0xff;
}
and then
$ gcc -O2 -S -fdump-tree-original -fdump-tree-optimized test.c
$ cat test.c.003t.original
;; Function setupUART (null)
;; enabled by -tree-original
{
  uint16_t ubrr = 25;
    uint16_t ubrr = 25;
  UBRRH = (uint16_t) ((short unsigned int) ubrr >> 8);
  UBRRL = ubrr & 255;
}
$ cat test.c.149t.optimized
;; Function setupUART (setupUART, funcdef_no=0, decl_uid=1728, cgraph_uid=0)
setupUART ()
{
<bb 2>:
  UBRRH = 0;
  UBRRL = 25;
  return;
}
You can see that constant-expression folding is done so early that it's already happened in the "original" dump (which is the earliest comprehensible dump you can have), and that optimization has further folded the shift and mask operations into the statements writing to UBRRH and UBRRL.
The numbers in the filenames (003t and 149t) will probably be different for you. If you want to see all the "tree" dumps, use -fdump-tree-all. There are also "RTL" dumps, which don't look anything like C and are probably not useful to you. If you're curious, though, -fdump-rtl-all will turn 'em on. In total there are about 100 tree and 60 RTL dumps, so it's a good idea to do this in a scratch directory.
(Psssst: Every time you put spaces on the inside of your parentheses, God kills a kitten.)
There might be a solution for printing intermediate results, but it will take you some time to implement, so it is worthwhile only for a quite large source code base.
You could customize your GCC compiler, either through a plugin (painfully coded in C or C++) or through a MELT extension. MELT is a high-level, Lisp-like, domain-specific language to extend GCC. (It is implemented as a [meta-]plugin for GCC and is translated to C++ code suitable for GCC.)
However, such an approach requires you to understand GCC internals and then add your own "optimization" pass (e.g. using MELT, in an aspect-oriented style) to print the relevant intermediate results.
You could also look not only at the generated assembly (use -fverbose-asm -S as options to GCC) but perhaps also at the generated GIMPLE representation (perhaps with -fdump-tree-gimple). For an interactive tool, consider the graphical MELT probe.
Perhaps adding your own builtin (with a MELT extension) like __builtin_display_compile_time_constant might be relevant.
I doubt there is an easy way to determine what the compiler does. There may be some tools in gcc specifically to dump the intermediate form of the language, but it will definitely not be easy to read, and unless you REALLY suspect that the compiler is doing something wrong (and have a VERY small example to show it), it's unlikely you can use it for anything meaningful - simply because it is too much work to follow what is going on.
A better approach is to add temporary variables (and perhaps prints) to your code, if you worry about it being correct:
uint16_t ubrr = ( ( F_CPU / ( 16 * (float) baudrate ) ) - 1 + .5 );
uint8_t ubrr_high = ubrr >> 8;
uint8_t ubrr_low = ubrr & 0xff;
UBRRH = ubrr_high;
UBRRL = ubrr_low;
Now, if you have a non-optimized build and step through it in GDB, you should be able to see what it does. Otherwise, adding printouts of some sort to the code to show what the values are...
If you can't print it on the target system because you are in the process of setting up the UART that you will be using to print with, then replicate the code on your local host system and debug it there. Unless the compiler is very buggy, you should get the same values from the same compilation.
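For example, a minimal host-side check along those lines, mirroring the expression from the question and assuming the same F_CPU = 4000000 used on the avr-gcc command line:
#include <stdint.h>
#include <stdio.h>

#define F_CPU 4000000   /* mirrors -DF_CPU=4000000 from the command line above */

int main(void) {
    const uint16_t baudrate = 9600;
    uint16_t ubrr = ( ( F_CPU / ( 16 * (float) baudrate ) ) - 1 + .5 );
    /* prints: ubrr = 25 (UBRRH = 0, UBRRL = 25) */
    printf("ubrr = %d (UBRRH = %d, UBRRL = %d)\n", ubrr, ubrr >> 8, ubrr & 0xff);
    return 0;
}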
Here's a hack: simply automate what you are doing by hand now.
In your makefile, ensure that avr-gcc produces a disassembly (-ahlms=output.lst). Alternatively, use your own disassembly method as a post-compile step in your makefile.
As a post-compilation step, process your listing file using your favorite scripting language to look for the out UBRRH and out UBRRL lines. These are going to be loaded from registers, so your script can pull out the immediately preceding assignments to the registers that will be loaded into UBRRH and UBRRL. The script can then reassemble the UBRR value from the values loaded into the general-purpose registers which are used to set UBRRH and UBRRL (a rough sketch of such a checker appears at the end of this answer).
This sounds easier than Basile Starynkevich's very useful suggestion of a MELT extension. Granted, this solution seems fragile at first blush, so let's consider that issue:
We know that (at least on your processor) the lines out UBRR_, r__ will appear in the disassembly listing: there is simply no other way to set the registers/write data to the port. One thing that might change is the spacing in/around these lines, but this can easily be handled by your script.
We also know that out instructions can only take place from general-purpose registers, so there will be a general-purpose register as the second argument of the out instruction line; that should not be a problem.
Finally, we also know that this register will be set prior to out instruction. Here we must allow for some variability: instead of LDI (load immediate), avr-gcc might produce some other set of instructions to set the register value. I think as a first pass the script should be able to parse immediate loading, and otherwise dump whatever last instruction it finds involving the register that will be written to UBRR_ ports.
The script may have to change if you change platforms (some processors have UBRRH1/2 registers instead of UBRRH; however, in that case your baud-rate code will have to change as well). If the script complains that it can't parse the disassembly, then you'll at least know that your check has not been performed.
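Here is a rough sketch of such a post-processing step, written in C++ only to match the rest of the thread (the answer suggests any scripting language); it assumes disassembly lines of the form shown in the question (e.g. "b0: 89 e1 ldi r24, 0x19 ; 25") and only handles the simple ldi-then-out pattern:
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main(int argc, char **argv)
{
    if (argc < 2) { std::cerr << "usage: checkubrr project.lst\n"; return 1; }

    std::ifstream lst(argv[1]);
    std::map<std::string, unsigned long> lastLdi;  // register -> last immediate loaded
    unsigned long high = 0, low = 0;
    std::string line;

    while (std::getline(lst, line)) {
        std::string::size_type colon = line.find(':');
        if (colon == std::string::npos) continue;
        std::istringstream in(line.substr(colon + 1));

        std::string byte1, byte2, mnemonic, dst, src;
        if (!(in >> byte1 >> byte2 >> mnemonic >> dst >> src)) continue;
        if (!dst.empty() && dst.back() == ',') dst.pop_back();   // "r24," -> "r24"

        if (mnemonic == "ldi")
            lastLdi[dst] = std::stoul(src, nullptr, 0);          // remember immediate
        else if (mnemonic == "out" && dst == "UBRRH")
            high = (src == "r1") ? 0 : lastLdi[src];             // r1 is gcc's zero register
        else if (mnemonic == "out" && dst == "UBRRL")
            low = (src == "r1") ? 0 : lastLdi[src];
    }

    std::cout << "UBRR = " << ((high << 8) | low) << '\n';
    return 0;
}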

How to print subscripts/superscripts on a CLI?

I'm writing a piece of code which deals with math variables and indices, and I need to print subscripts and superscripts on a CLI. Is there a (possibly cross-platform) way to do that? I'm working in vanilla C++.
Note: I'd like this to be cross-platform, but since the first answers suggest this isn't possible: I'm working under MacOS and Ubuntu Linux (so bash).
Thank you
Since most CLIs are really only terminals (pretty dumb ones mostly, but sometimes with color), the only cross-platform way I've ever done this is by allocating multiple physical lines per virtual line, such as:
        2
f(x) = x  + log x
               2
It's not ideal but it's probably the best you're going to get without a GUI.
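A trivial sketch of that multi-line rendering, just to make it concrete:
#include <cstdio>

int main()
{
    // three physical lines used to render one "virtual" formula line
    std::puts("        2");
    std::puts("f(x) = x  + log x");
    std::puts("               2");
    return 0;
}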
Following your extra information as to which platforms you're mainly interested in:
With Ubuntu at least, gnome-terminal runs in UTF-8 mode by default, so the following code shows how to generate the superscripts and subscripts:
#include <stdio.h>
static char *super[] = {"\xe2\x81\xb0", "\xc2\xb9", "\xc2\xb2",
    "\xc2\xb3", "\xe2\x81\xb4", "\xe2\x81\xb5", "\xe2\x81\xb6",
    "\xe2\x81\xb7", "\xe2\x81\xb8", "\xe2\x81\xb9"};
static char *sub[] = {"\xe2\x82\x80", "\xe2\x82\x81", "\xe2\x82\x82",
    "\xe2\x82\x83", "\xe2\x82\x84", "\xe2\x82\x85", "\xe2\x82\x86",
    "\xe2\x82\x87", "\xe2\x82\x88", "\xe2\x82\x89"};

int main(void) {
    int i;
    printf ("f(x) = x%s + log%sx\n", super[2], sub[2]);
    for (i = 0; i < 10; i++) {
        printf ("x%s x%s ", super[i], sub[i]);
    }
    printf ("y%s%s%s z%s%s\n", super[9], super[9], super[9], sub[7], sub[5]);
    return 0;
}
The super and sub char* arrays are the UTF-8 encodings for the Unicode code points for numeric superscripts and subscripts (see here). The given program will output my formula from above (on one line instead of three), then another test line for all the choices and a y-super-999 and z-sub-75 so you can see what they look like.
MacOS doesn't appear to use gnome-terminal as a terminal program but references here and here seem to indicate the standard terminal understands UTF-8 (or you could download and install gnome-terminal as a last resort).
I'd need to print subscripts and superscripts on a CLI, is there a cross-platform way to do that?
Only if you have a Unicode-capable terminal, which is far from guaranteed. Unicode defines a limited number of sub- and superscript ‘compatibility characters'; you certainly can't use them on any old letter:
₀₁₂₃₄₅₆₇₈₉₊₋₌₍₎ₐₑₒₓ
⁰¹²³⁴⁵⁶⁷⁸⁹⁺⁻⁼⁽⁾ⁿⁱ
Even then you're reliant on there being a glyph for it in the console font, which is also far from guaranteed. Superscript 2 and 3 are likely to exist as they're present in ISO-8859-1; the others may well not work.