How to pass a 48-bit MAC address as an argument to a function through a uint8_t variable? - C++

Recently I started working on a project involving EMAC and ran into a few doubts and blockers regarding the implementation, so I decided to post my question here to get some advice and suggestions from experienced people.
At present, I am working on interfacing the EMAC-DM9161A module with my SAM3X - Taiji Uino board for high-speed Ethernet communication. I am using the library developed by Palliser, which is uploaded on GitHub as elechouse/EMAC-Demo. In the source code - ethernet_phy.c, I came across this function to initialize the DM9161A PHY component:
uint8_t ethernet_phy_init(Emac *p_emac, uint8_t uc_phy_addr, uint32_t mck);
Problem: The parameter uint8_t uc_phy_addr is an 8-bit value, and through it I want to pass a 48-bit MAC address such as 70-62-D8-28-C2-8E. I understand that I could use two 32-bit registers: store the first 32 bits of the MAC address, i.e. 70-62-D8-28, in one 32-bit register and the remaining 16 bits, i.e. C2-8E, in another. However, I cannot do this, since I need to use the ethernet_phy_init function above, in which a uint8_t is used to pass the 48-bit MAC address. So I'd like to know how to make this happen.
Another question: I ran some code to understand this by trial and error and came across some doubts. Here is the code:
#include <cstdint>
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    uint8_t phy_addr = 49;  // assign the value 49 to an 8-bit unsigned variable
    int8_t phy_addr1 = 49;  // and to an 8-bit signed variable
    int phy_addr2 = 49;     // and to a plain int
    cout << phy_addr;
    cout << phy_addr1;
    cout << phy_addr2;
    getchar();
    return 0;
}
Output Results:
1
1
49
So my doubt is: why is the output displayed as an ASCII character whenever I use an 8-bit variable to store the value 49, but when I use a normal 32-bit int variable to store 49, it displays the decimal value 49? Why does this happen? And lastly, how do I store a MAC address in an 8-bit register?

About the second question:
uint8_t/int8_t is the same as unsigned/signed char, and cout will handle it as a char. Use static_cast<int> to print it as a number.
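For example, a minimal standalone illustration:
#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t v = 49;
    std::cout << v << '\n';                    // prints '1', the character with code 49
    std::cout << static_cast<int>(v) << '\n';  // prints 49
    std::cout << +v << '\n';                   // unary + promotes to int, also prints 49
    return 0;
}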
About the first question:
I have never worked with EMAC, but judging by this example the MAC should be set this way:
#define ETHERNET_CONF_ETHADDR0 0x00
#define ETHERNET_CONF_ETHADDR1 0x04
#define ETHERNET_CONF_ETHADDR2 0x25
#define ETHERNET_CONF_ETHADDR3 0x1C
#define ETHERNET_CONF_ETHADDR4 0xA0
#define ETHERNET_CONF_ETHADDR5 0x02
static uint8_t gs_uc_mac_address[] =
{ ETHERNET_CONF_ETHADDR0, ETHERNET_CONF_ETHADDR1, ETHERNET_CONF_ETHADDR2,
ETHERNET_CONF_ETHADDR3, ETHERNET_CONF_ETHADDR4, ETHERNET_CONF_ETHADDR5
};
emac_options_t emac_option;
memcpy(emac_option.uc_mac_addr, gs_uc_mac_address, sizeof(gs_uc_mac_address));
emac_dev_init(EMAC, &gs_emac_dev, &emac_option);

Regarding your second question: the first two variables are 8-bit (one signed and one unsigned), so the ostream assumes they are chars (also 8 bits wide) and displays the char representation for them ('1' = ASCII 49).
As for the original question: I browsed the Atmel sources a little, and the MAC address has nothing to do with ethernet_phy_init (everything there happens at a much lower level):
uc_phy_addr - seems to be the interface index
mck - seems to be a timer-related value.

I figured it out, so I am going to answer my own question for beginners like me who may run into the same doubt.
Answer: As suggested by the members in the comments, they were right, and thanks to them. The function parameter uint8_t uc_phy_addr represents the 5-bit port address of the PHY chip's register, not the MAC address. Hence the address is set to 0x01 to enable only the receive pin, keeping the other 4 bits 0. The 4th bit is the CSR bit, which is also set to 0 in this case (for more details, please refer to the DM9161A datasheet).
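For reference, here is a sketch of what such a call can look like. The macro name BOARD_EMAC_PHY_ADDR and the helper sysclk_get_cpu_hz() are taken from typical Atmel ASF examples and may be named differently in the elechouse/EMAC-Demo sources, so treat this as an assumption rather than the library's exact API:
// assumed names: BOARD_EMAC_PHY_ADDR is the 5-bit PHY port address (0x01 here,
// per the answer above) and sysclk_get_cpu_hz() returns the master clock in Hz
#define BOARD_EMAC_PHY_ADDR 0x01

uint8_t rc = ethernet_phy_init(EMAC, BOARD_EMAC_PHY_ADDR, sysclk_get_cpu_hz());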


I have the code, memory and watch windows open. How do I make sense of these hex values?

I've been trying out debugging in Code::Blocks. I'm a student.
I understand that the value written on the left is the address of the variable x in memory, written in hex. What about these weird numbers to its right?
Also, what is the 'Bytes' menu? If I select 16 bytes (from the drop-down menu) I get one row only (in the image there are 8, as 16*8=256). Does this mean the variable x is using 16 bytes in memory? But if I issue a sizeof(x), it gives me 4. So what's happening here?
Thanks.
(Image: the Code::Blocks memory window)
code:
#include <iostream>
using namespace std;
int main()
{
int x=10;
x=6;
x=13;
int y=12;
cout<<&y<<endl<<sizeof(x);
int z=19;
return 0;
}
Basics
The smallest memory unit we can work with is 1 byte. To represent a byte in hex, you need two hex digits (e.g. A0). These are the basics of a hex editor/viewer.
What you see when you debug memory is the portion of RAM that the application is using. A typical hex editor/viewer looks something like this:
The address (A)  |  The values in hex (B)    |  ASCII representation (C)
-----------------+---------------------------+---------------------------
0x000000         |  00 00 00 00 00 00 00 00  |  . . . . . . . .
The A column represents the base (beginning) address of the row.
The B column shows the actual values stored at a given address. The first hex value (i.e. the first 00 in this depiction) has the address 0x000000. The second one has the address 0x000001, and so on.
The C column contains the ASCII representation of the values in that row, i.e. it shows what those bytes look like when interpreted as ASCII characters.
An int in C++ is 4 bytes (32 bits) on your platform. In your example, the address of x is 0x61ff1c. Right next to it, in the B column, you see its value stored in little-endian byte order (https://en.wikipedia.org/wiki/Endianness).
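As a quick standalone illustration (this snippet is not part of your code), you can dump the bytes of an int yourself and see the little-endian layout:
#include <cstdio>

int main()
{
    int x = 13;  // stored as 0x0000000d
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&x);
    for (unsigned i = 0; i < sizeof x; ++i)
        std::printf("%02x ", static_cast<unsigned>(bytes[i]));  // prints "0d 00 00 00" on a little-endian machine
    std::printf("\n");
    return 0;
}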
The address of y is 0x61ff14. Your image does not show it, because that variable is stored at a memory location before the address of x.
The Bytes setting in the menu just lets you decide how many bytes to display, starting from the address you specified (in your case the address of x).
What about these weird numbers on its right?
The number on the extreme left is the memory address of the first byte (8 bits) in the row. In your image, 0x61ff1c is not the address of the entire row, just of the first byte (0x0d). The address of the second byte is 0x61ff1d, and so on. If you check, the address to the left of the second row reads 0x61ff2c, exactly a difference of 16 bytes.
Now, about the contents. Let's look at 0x61ff1c: it contains 0x0d. Here that byte is simply data, the value 13 you last assigned to x. The same byte pattern can mean different things depending on how it is interpreted; in the code section of an x86 executable, for example, 0x0d is the opcode of an instruction that tells the CPU to perform a logical OR with the value in the following bytes. The CPU does not understand C, it only understands binary. When you compile a C program, the .c file gets converted into an executable, which the CPU can run directly because all it contains is instruction opcodes and data. The instruction set can be completely different for a CPU with another architecture, where 0x0d can mean something else. Your compiler takes care of generating the right instructions for your CPU.
What you are seeing is the raw binary contents of memory (shown in hex to make it easier to read).
ASCII
To the extreme right is the textual (ASCII) representation of those same bytes.
Ever tried opening an executable file in Notepad or another text editor? Basically, all computer files contain data in the form of 0s and 1s. It is the file type that tells the computer how to interpret it. When you create a text file in Notepad, it is interpreted as text/ASCII. For instance, the ASCII value of A is 65 or 0x41. When the computer sees 0x41 in a text file, it knows it is an A. But if your file is not text but an executable, that same 0x41 could be an opcode or part of a number. When you open an executable in Notepad, you are interpreting CPU instructions and data as text. The same thing happens in the rightmost column of the memory view: each byte is shown as the character with that ASCII value (for example, a byte of 0x40 shows up as @), and bytes with no printable character, such as your 0x0d (the carriage-return control code), are shown as placeholders.
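A tiny standalone example of the same byte interpreted two ways:
#include <cstdio>

int main()
{
    unsigned char byte = 0x41;
    std::printf("as text:     %c\n", byte);  // prints 'A'
    std::printf("as a number: %d\n", byte);  // prints 65
    return 0;
}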

How to deal with a 128-bit variable in the MinGW 32-bit compiler for encryption (Diffie-Hellman algorithm) in Qt

I want to use the equation below in my code:
A = g^a mod p; // g raised to the power a, modulo p
(something like 2^5 % 3 = 32 % 3 = 2)
(This equation is the Diffie-Hellman computation used for security.)
Where:
^ is (power)
g is the fixed number 0x05
a is a randomly generated 128-bit (16-byte) number,
p is a fixed 128-bit (16-byte) hex number, something like 0xD4A283974897234CE908B3478387A3.
I am using:
Qt 4.8.7
Compiler: MinGW 32-bit (checked with the Boost library, boost 1.70)
The solutions I found that didn't work for me are listed below:
One can use __int128, but to support that one would need the latest GCC or a MinGW 64-bit compiler, neither of which I am using now.
I found that a recent version of Qt has the QSslDiffieHellmanParameters class, but again it is not supported in our Qt version.
I found libraries like boost/multiprecision/cpp_int.hpp (boost 1.70) that do have data types such as int128_t and int256_t, but due to a compiler issue or something else we are not able to store a 128-bit number, meaning
if I do:
int128_t ptval128 = 0xAB1232423243434343BAE3453345E34B;
cout << "ptval128 = " << std::hex << ptval128 << endl;
// prints only 0xAB12324232434343 - half of the digits
I tried using a Bigint library, which is much more useful, but again 5^(128-bit number) is way too big; it takes hours to compute (I waited 1 hour and 16 minutes and then killed the application).
int myGval = 0x05;
128_bit_data_type myPVal= 0xD4A283974897234CE908B3478387A3;
128_bit_data_type 128_bit_variable = 128_bit_random_data;
myVal = (myGval)^(128_bit_variable) % (myPVal);
That is not how to do modular exponentiation! The first problem is that 5 ^ 128_bit_variable is huge, so big that it won't fit into memory on any computer available today. To keep the required storage within bounds, you have to take the remainder % myPVal after every operation.
The second problem is that you can't compute 5 ^ 128_bit_variable simply by multiplying 5 by itself 128_bit_variable times; that would take longer than the age of the universe. You need to use an exponentiation ladder (square-and-multiply), which requires just 128 squarings and at most 128 multiplications. See the Wikipedia article on exponentiation by squaring for the details. Done that way, the operation 5 ^ 128_bit_number mod p takes a fraction of a second.
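To make the ladder concrete, here is a minimal sketch using Boost.Multiprecision's uint256_t from the boost/multiprecision/cpp_int.hpp header you already mention (256 bits, so the product of two 128-bit values never overflows). The numeric values are simply the ones from your question, used for illustration:
#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::uint256_t;

// Right-to-left square-and-multiply: returns (base ^ exp) % mod.
uint256_t mod_pow(uint256_t base, uint256_t exp, const uint256_t& mod)
{
    uint256_t result = 1;
    base %= mod;
    while (exp > 0) {
        if ((exp & 1) != 0)                  // current exponent bit set?
            result = (result * base) % mod;  // multiply it into the result
        base = (base * base) % mod;          // square for the next bit
        exp >>= 1;
    }
    return result;
}

int main()
{
    // Literals wider than 64 bits must be built from strings, not plain 0x... constants.
    uint256_t p("0xD4A283974897234CE908B3478387A3");
    uint256_t a("0xAB1232423243434343BAE3453345E34B");
    uint256_t g = 5;
    std::cout << std::hex << mod_pow(g, a, p) << std::endl;
    return 0;
}
Because every intermediate result stays below 256 bits, this finishes in a fraction of a second instead of hours.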

C++ Memory Writing for the PS3

I'm learning how to hack PS3 games, where I need to edit what is stored at certain memory addresses. Here is an example of what I see people do to achieve this:
*(char*)0x1786418 = 0x40;
This line of code turns on super speed for COD Black Ops II.
I'm not 100% sure what is going on here. I know that 0x1786418 is the address and 0x40 sets the value at that address. But I'm not so sure what *(char*) does and how does 0x40 turn on super speed?
An explanation of the syntax, and of how it turns on super speed, would be much appreciated.
Thanks!
You should consider understanding the basics of the programming language before you try to go into reverse-engineering. That's definitely an advanced topic that you don't want to use as a way to get started. It'll make things unnecessarily more difficult for you.
I'm not 100% sure what is going on here. I know that 0x1786418 is the address and 0x40 sets the value at that address.
This is as much as anyone here might be able to tell you, unless the person who reverse-engineered the software shows up here and explains it.
But I'm not so sure what *(char*) does
The (char*) cast takes the number and interprets it as a pointer to a byte (a char in C/C++ is 1 byte of memory). The outer * then dereferences that pointer so the value it points to can be modified, in this case set to 0x40.
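Written out step by step, that one-liner is roughly equivalent to the following (the address and value are just the ones from the question; they only mean something inside that particular game build, so treat this purely as an illustration of the syntax):
// hypothetical: only valid inside the target process on the PS3
char* p = reinterpret_cast<char*>(0x1786418);  // treat the number as the address of one byte
*p = 0x40;                                     // write the byte 0x40 to that address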
and how does 0x40 turn on super speed?
This is very specific to the game itself. Someone must've figured out where data about player movement speed is stored in memory (specifically for the PS3) and is updating it this way.
Something like this could easily break with a simple patch, because code changes can make certain things end up at different addresses, requiring additional reverse-engineering effort.
If anyone is seeing this and wants to know how to set prestiges or enable red boxes or whatnot, I'll explain how (MW3 will be my example).
So for a prestige it would be like *(char*)0x01C1947C = 20;
That would set prestige 20. If you don't understand: for 20 you can write either 20 or 0x14 (both equal prestige 20); say you want prestige 12, you could write 12 or 0xC.
If you don't know how, just search for the prestige you want in hex :)
Now for stuff like red boxes (assuming you know about bools / if statements and voids; I'm not going to cover them, only how you would set it).
Now for red boxes (enable) you would do the following (p.s. bytesOn can be called anything):
char bytesOn[] = { 0x60, 0x00, 0x00, 0x00 };
write_process(0x65D14, bytesOn, sizeof(bytesOn));
whateverYourBoolIsCalled = true;
Now turning it off works the same, except you have to get the other bytes :)
I'll add one more example: if you want to set a name in an sprx:
char name[] = { "name here" };
write_process(0x01BBBC2C, name, sizeof(name));
There are shorter ways of doing this, but I think this is the best way to understand it :)
So yeah, this has been my tut :)

Realization of Truth table in C

I want to set various clock sources in a function as per the truth table below. Basically I want to write to the TCCR0B register (ATmega328) according to the parameter I pass to the setClockSource function. The image of the truth table and register is given below.
I am not able to make out how best this can be done. I thought of using an enum for the various modes, as below.
enum CLOCK_SOURCE {
    NO_CLOCK_SOURCE = 0x00,
    NO_PRESCALING = 0x01,
    CLK_8 = 0x02
    // and so on
};
But the problem is, in the setClockSource() function, how should I write to the TCCR0B register without affecting bits 3-7? Shall I first clear the last 3 bits and then OR the clock-source value into TCCR0B? Without clearing, I cannot guarantee the correct values for the last 3 bits, I guess. What is the efficient way?
void setClockSource (enum CLOCK_SOURCE clockSource)
{
TCCR0B &= 0xF8; // First clear the last 3 bits
TCCR0B |= clockSource;
}
Do we have library functions available to set the clock source? I am using Atmel Studio.
Do it like this:
void setClockSource (enum CLOCK_SOURCE clockSource)
{
    TCCR0B = (TCCR0B & 0xF8) | clockSource;
}
This way you keep the high bits and set only the lower bits.
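For completeness, a minimal usage sketch (assuming the enum from the question and that TCCR0B comes from AVR's <avr/io.h>):
setClockSource(CLK_8);  // timer0 now clocked from clk_io/8; bits 3-7 of TCCR0B stay untouched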

Converting bits from one array to another?

I am building a library for this vacuum fluorescent display. It's a very simple interface and I have all the features working.
The problem I am having now is that I am trying to make the code as compact as possible, but the custom character loading is not intuitive. That is, the bitmap for the font maps to completely different bits and bytes on the display itself. In the IEE VFD datasheet, when you scroll down you can see that the bits are mapped all over the place.
The code I have so far works like so:
// input the font bitmap, the bit from that line of the bitmap and the bit it needs to go to
static unsigned char VFD_CONVERT(const unsigned char* font, unsigned char from, unsigned char to) {
return ((*font >> from) & 0x01) << to;
//return (*font & (1 << from)) ? (1<<to) : 0;
}
// macros to make it easier to read and see
#define CM_01 font+0, 4
#define CM_02 font+0, 3
#define CM_03 font+0, 2
#define CM_04 font+0, 1
#define CM_05 font+0, 0
// One of the 7 lines I have to send
o = VFD_CONVERT(CM_07,6) | VFD_CONVERT(CM_13,5) | VFD_CONVERT(CM_30,4) | VFD_CONVERT(CM_23,3) | VFD_CONVERT(CM_04,2) | VFD_CONVERT(CM_14,1) | VFD_CONVERT(CM_33,0);
send(o);
This is obviously not all the code. You can see the rest in my Google Code repository, but it should give you some idea of what I am doing.
So the question I have is: is there a better way to optimize this or do the translation?
Changing the return statement in VFD_CONVERT makes GCC go crazy (-O1, -O2, -O3, and -Os all do it) and expands the code to 1400 bytes. If I use the return statement with the inline if, it comes down to 800 bytes. I have been going through the generated asm and at this point I am tempted to just write it all in asm, as I am starting to think the compiler doesn't know what it is doing. However, I thought maybe it's me who doesn't know what I am doing and that confuses the compiler.
As a side note, the code there works; both return statements upload the custom character and it gets displayed (with a weird bug where I have to send it twice, but that's a separate issue).
First of all, you should file a bug report against GCC with a minimal example, since -Os should never generate larger code than -O0. Then, I suggest storing the permutation in a table, like this:
const char perm[][7] = {{ 7, 13, 30, 23, 4, 14, 33 }, ...
with special values indicating a fixed zero or one bit. That'll also make your code more readable.
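A rough sketch of what that table-driven version could look like is below. The CM_xx numbering (pixel n living in byte (n-1)/5, bit 4-(n-1)%5) is inferred from the CM_01..CM_05 macros above, and the 0 sentinel for a fixed-zero bit is my own choice, so adjust both to the real datasheet mapping:
#include <stdint.h>

// perm[line][outbit] is the 1-based pixel number (CM_xx) of the 5x7 font
// bitmap that drives output bit 6..0 of that line; 0 means "always zero".
// Only the first row, taken from the example above, is filled in here.
static const uint8_t perm[7][7] = {
    { 7, 13, 30, 23, 4, 14, 33 },
    // ... the remaining rows per the VFD datasheet ...
};

// Extract pixel n, assuming pixel n lives in byte (n-1)/5, bit 4-(n-1)%5.
static unsigned char font_bit(const unsigned char* font, unsigned char n)
{
    return (font[(n - 1) / 5] >> (4 - (n - 1) % 5)) & 0x01;
}

// Build one output byte for a given display line.
static unsigned char vfd_line(const unsigned char* font, unsigned char line)
{
    unsigned char o = 0;
    for (int bit = 6; bit >= 0; --bit) {
        unsigned char n = perm[line][bit];
        if (n != 0)  // 0 marks a fixed-zero output bit
            o |= font_bit(font, n) << bit;
    }
    return o;
}
A call such as send(vfd_line(font, 0)) would then replace the long VFD_CONVERT(...) | ... line.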