Solidity convert HEX number to HEX string - bit-manipulation

I need to store a value of this kind, 0xff0000 or 0x00ff08 (a hex colour representation), in a Solidity smart contract, and be able to convert it inside the contract to a string with the same text characters, "ff0000". I intend to deploy this smart contract on RSK.
My idea was to store those values in a bytes3 or simply a uint variable, and to have a pure function converting the bytes3 or uint to the corresponding string. I found a function that does the job and works on Solidity 0.4.9:
pragma solidity 0.4.9;

contract UintToString {
    function uint2hexstr(uint i) public constant returns (string) {
        if (i == 0) return "0";
        uint j = i;
        uint length;
        while (j != 0) {
            length++;
            j = j >> 4;
        }
        uint mask = 15;
        bytes memory bstr = new bytes(length);
        uint k = length - 1;
        while (i != 0) {
            uint curr = (i & mask);
            bstr[k--] = curr > 9 ? byte(55 + curr) : byte(48 + curr); // 55 = 65 - 10
            i = i >> 4;
        }
        return string(bstr);
    }
}
But I need a more recent compiler version (at least 0.8.0), and the above function does not work on newer versions.
What is the way to convert bytes or uint to a hex string (1 -> '1', f -> 'f') that works in Solidity >= 0.8.0?

The following compiles, and was tested using solc 0.8.7
It is the same as your original version, with the following modifications:
constant --> pure
returns (string) --> returns (string memory)
byte(...) --> bytes1(uint8(...))
The above changes overcame all of the compile-time differences in your original function.
... however there was still a run-time error that was causing this function to revert:
While debugging, the line bstr[k--] = curr > 9 ? triggered the revert in the last iteration of its loop. This was because the while loop was set up such that k was 0 in its final iteration, so the post-decrement then tried to take the uint k below zero.
While the identification was tricky, the fix was simple: change the decrement operator from postfixed to prefixed, bstr[--k] = curr > 9 ?, and initialize k to length instead of length - 1.
Aside:
Q: Why did this not revert when compiled in solc 0.4.9,
but revert when the same code was compiled in solc 0.8.7?
A: solc 0.8.0 introduced a breaking change where uint overflow and underflow checks were inserted by the compiler.
Prior to this, one would have needed to use SafeMath or similar to accomplish the same.
See the "Silent Changes of the Semantics" section of the solc 0.8 release notes.
pragma solidity >=0.8;

contract TypeConversion {
    function uint2hexstr(uint i) public pure returns (string memory) {
        if (i == 0) return "0";
        uint j = i;
        uint length;
        while (j != 0) {
            length++;
            j = j >> 4;
        }
        uint mask = 15;
        bytes memory bstr = new bytes(length);
        uint k = length;
        while (i != 0) {
            uint curr = (i & mask);
            bstr[--k] = curr > 9 ?
                bytes1(uint8(55 + curr)) :
                bytes1(uint8(48 + curr)); // 55 = 65 - 10
            i = i >> 4;
        }
        return string(bstr);
    }
}
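As a quick sanity check (not part of the contract, just an illustration), here is the same nibble-extraction loop sketched in Python. Note that 55 + curr maps 10..15 to the upper-case letters 'A'..'F', so both versions return "FF0000" rather than "ff0000"; if you want lower-case output, use 87 + curr in the letter branch.

# Python sketch of the same algorithm, useful for checking expected output.
def uint2hexstr(i):
    if i == 0:
        return "0"
    out = []
    while i != 0:
        curr = i & 0xF
        # 55 + n gives 'A'..'F'; use 87 + n instead for 'a'..'f'
        out.append(chr(55 + curr) if curr > 9 else chr(48 + curr))
        i >>= 4
    return "".join(reversed(out))

print(uint2hexstr(0xff0000))  # prints "FF0000"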

Related

InterlockedSubtract workaround - memory barriers

I have a situation where I would love to have an InterlockedSubtract function in HLSL. InterlockedAdd works fine for integers, but I'm stuck using an RWByteAddressBuffer of uints - I'm using every single bit, and I would rather not resort to having an encode/decode function to make ints behave exactly like uints.
My current workaround looks like this:
uint oldValue = Source.Load(oldParent, y);
Source.InterlockedMin(oldParent, oldValue - 1, y);
The issue is that, as I understand it, it is possible for these operations to be interleaved across several threads, like so:
Thread 1: o1 = Source.Load(l)               -> l = 10, o1 = 10
Thread 2: o2 = Source.Load(l)               -> l = 10, o2 = 10
Thread 1: Source.InterlockedMin(l, o1 - 1)  -> l = 9
Thread 2: Source.InterlockedMin(l, o2 - 1)  -> l = 9
These would only decrement the value once, despite the two calls.
As I understand it, I can't just make it a one-liner, as the compiled instructions could desync anyway.
Is there a workaround I'm missing? I could refactor my code to use another uint as a subtraction counter, then use another kernel to subtract those from the actual counts, but I'd much prefer to keep it within one kernel.
Thanks @PeterCordes, you were absolutely right.
I did something very simple to test:
HLSL
#pragma kernel Overflow

RWByteAddressBuffer buff;

uint ByteIndex(uint3 id)
{
    return (id.x + id.y * 8) * 4;
}

[numthreads(8,8,1)]
void Overflow (uint3 id : SV_DispatchThreadID)
{
    uint originalValue;
    buff.InterlockedAdd(ByteIndex(id), 1, originalValue);
}
And C#
public ComputeShader shade;
private ComputeBuffer b;
private int kernel;
private uint[] uints = new uint[64];

private void Start()
{
    for (int i = 0; i < 64; i++) { uints[i] = 4294967294; }
    for (int i = 0; i < 64; i++) { Debug.Log(uints[i]); }
}

void Update()
{
    if (Input.GetKeyDown(KeyCode.Space))
    {
        b = new ComputeBuffer(64, sizeof(uint));
        b.SetData(uints);
        kernel = shade.FindKernel("Overflow");
        shade.SetBuffer(kernel, Shader.PropertyToID("buff"), b);
        shade.Dispatch(kernel, 1, 1, 1);
        b.GetData(uints);
        b.Dispose();
        for (int i = 0; i < 64; i++) { Debug.Log(uints[i]); }
    }
}
This clearly exhibits the desired behavior: the uint addition simply wraps around on overflow, which is exactly what is needed to get subtraction out of InterlockedAdd.
It would be nice if this behavior were documented for HLSL, but at least I know it now.
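For anyone else who lands here, the practical takeaway (hedged, since it rests on the modulo-2^32 wraparound the test above demonstrates) is that a separate InterlockedSubtract isn't needed at all: adding the two's-complement encoding of the amount has the same effect. A small Python sketch of the arithmetic, with MASK standing in for the 32-bit uint width:

# Illustration only: with 32-bit wraparound, adding the two's complement
# of x is the same as subtracting x (all arithmetic is taken mod 2**32).
MASK = 0xFFFFFFFF

def wrap_add(value, delta):
    return (value + delta) & MASK

minus_one = (-1) & MASK                          # 0xFFFFFFFF
assert wrap_add(10, minus_one) == 9              # behaves like "subtract 1"
assert wrap_add(4294967294, 1) == 4294967295     # the shader test above
assert wrap_add(4294967295, 1) == 0              # and it wraps past the max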

What is the fastest way to check bits in variable using bitwise operations?

For example, I have a short (2 bytes = 16 bits) variable (in my project this is a sequence of 00, 01 and 10 pairs):
0001010101101001 = 0001.0101|0110.1001
And I want to check if this variable contains a given sequence of bits, for example '01010101' (that is, 4 x 01).
What is the fastest way to check this?
I found some solutions, but I am sure a simpler and faster one exists.
(pseudocode)
var = 0001010101101001;
need = 0000000001010101;
for (int i = 0; i < 4; i++)
{
    if (var & need == need)
        return 1;
    else
        var = var >> 2;
}
or:
(pseudocode)
var = 0001010101101001;
need1 = 0000000001010101;
need2 = 0000000101010100;
need3 = 0000010101010000;
need4 = 0001010101000000;
need5 = 0101010100000000;
if(var&need1==need1) return 1;
if(var&need2==need2) return 1;
if(var&need3==need3) return 1;
if(var&need4==need4) return 1;
if(var&need5==need5) return 1;
else return 0;
Your first solution is good:
for (int Count = 0; Count < 4; Count++)
{
    if ((Var & Need) == Need)
        Found = true;
    else
        Var = (UInt16)(Var >> 2);
}
I actually made things more complicated rather than simpler.
This is an alternative solution using masks:
using System;

public class Program
{
    public static void Main()
    {
        UInt16 Var = 0x1569;  // 0001010101101001 0x1569
        UInt16 Need = 0x5A;   // 0000000001011010 0x5A
                              // 0000000001010101 0x55
                              // 0000000001010110 0x56
        UInt16[] Mask = { 0x00FF, 0x03FC, 0x0FF0, 0x3FC0, 0xFF00 };
        bool Found = false;
        // Five masks cover the five possible positions of an 8-bit pattern
        // inside 16 bits, so the loop runs five times.
        for (int Count = 0; Count < 5; Count++)
            Found |= (((Var & Mask[Count]) ^ (Need << (Count + Count))) == 0);
        Console.WriteLine(Found);
    }
}
There is another way:
var &= 0101010101010101
var &= var >> 2
var &= var >> 4
return var != 0
The odd bits are irrelevant, so they are simply removed in the first step.
Then every 4 adjacent "pieces" (of 2 bits each) are ANDed together in two steps: first every piece with the piece directly to its left, then, compounding that, the same thing at a distance of 2 pieces. The result is a mask of whether a sequence of four "01"s starts at that position.
Finally, just check whether any bits are set in that mask.
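Here is a quick Python sketch of that trick (illustration only), using the 16-bit value from the question:

# Check whether a 16-bit value contains four consecutive "01" pairs.
def has_four_01_pairs(var):
    var &= 0x5555        # keep only the low bit of each 2-bit piece
    var &= var >> 2      # AND each piece with the piece to its left
    var &= var >> 4      # ...then with the result two pieces further left
    return var != 0      # any surviving bit marks where a "01010101" run starts

print(has_four_01_pairs(0b0001010101101001))  # True
print(has_four_01_pairs(0b0001001001101001))  # False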

Padding Issue of Long Hashes

For crypto experts, I have a question that recently came to my mind. Say we have a long string of bytes and we want to put that string through a hash function, which for the sake of illustration we take to be SHA1. As we know, SHA1 processes its input in 64-byte chunks, and every hash function, afaik, needs to pad the message before processing. Now the question is: is it the last chunk that needs to be padded, or the whole string? It matters because at the end of the padding we append the length. Thanks all.
Now the question is: is it the last chunk that needs to be padded, or the whole string?
I believe both are the same thing: padding the whole string means padding only the last chunk.
There is some pseudocode on good old Wikipedia.
A look at the code might also give you some insights.
Taken from: mattmahoney.net/dc/sha1.c
void SHA1PadMessage(SHA1Context *context)
{
    /*
     * Check to see if the current message block is too small to hold
     * the initial padding bits and length. If so, we will pad the
     * block, process it, and then continue padding into a second
     * block.
     */
    if (context->Message_Block_Index > 55)
    {
        context->Message_Block[context->Message_Block_Index++] = 0x80;
        while (context->Message_Block_Index < 64)
        {
            context->Message_Block[context->Message_Block_Index++] = 0;
        }

        SHA1ProcessMessageBlock(context);

        while (context->Message_Block_Index < 56)
        {
            context->Message_Block[context->Message_Block_Index++] = 0;
        }
    }
    else
    {
        context->Message_Block[context->Message_Block_Index++] = 0x80;
        while (context->Message_Block_Index < 56)
        {
            context->Message_Block[context->Message_Block_Index++] = 0;
        }
    }

    /*
     * Store the message length as the last 8 octets
     */
    context->Message_Block[56] = context->Length_High >> 24;
    context->Message_Block[57] = context->Length_High >> 16;
    context->Message_Block[58] = context->Length_High >> 8;
    context->Message_Block[59] = context->Length_High;
    context->Message_Block[60] = context->Length_Low >> 24;
    context->Message_Block[61] = context->Length_Low >> 16;
    context->Message_Block[62] = context->Length_Low >> 8;
    context->Message_Block[63] = context->Length_Low;

    SHA1ProcessMessageBlock(context);
}
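To make the same point without the C plumbing, here is a small Python sketch of the SHA-1 / MD-style padding rule (illustration only, not the code above): whatever the total length is, only the final block (or the final two blocks, when there is no room left for the length field) is ever touched, and the length appended at the end is the bit length of the whole message.

import struct

def sha1_pad(message: bytes) -> bytes:
    # Append 0x80, then zeros until the length is 8 bytes short of a
    # multiple of 64, then append the bit length of the *whole* message
    # as a 64-bit big-endian integer.
    bit_len = len(message) * 8
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)
    padded += struct.pack(">Q", bit_len)
    return padded

msg = b"a" * 150                  # pads out to 3 x 64-byte blocks
padded = sha1_pad(msg)
assert len(padded) % 64 == 0
assert padded[:150] == msg        # the earlier blocks are untouched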

Trouble translating c++ with pre and post increment operators to python

I'm trying to work out what the equivalent of a[++j]=*pr++; in the following code (which comes from a MATLAB MEX file) is in Python. I've found out that pr is a pointer to the first element of the input array, but I can't get my head around what is happening to j. Can someone explain what is happening there in simple terms, without pointers etc.?
void rf3(mxArray *array_ext, mxArray *hs[]) {
    double *pr, *po, a[16384], ampl, mean;
    int tot_num, index, j, cNr;
    mxArray *array_out;

    tot_num = mxGetM(array_ext) * mxGetN(array_ext);
    pr = (double *)mxGetPr(array_ext);

    array_out = mxCreateDoubleMatrix(3, tot_num-1, mxREAL);
    po = (double *)mxGetPr(array_out);

    j = -1;
    cNr = 1;
    for (index=0; index<tot_num; index++) {
        a[++j]=*pr++;
        while ( (j >= 2) && (fabs(a[j-1]-a[j-2]) <= fabs(a[j]-a[j-1])) ) {
            ampl=fabs( (a[j-1]-a[j-2])/2 );
            switch(j)
            {
                case 0: { break; }
                case 1: { break; }
                case 2: {
                    mean=(a[0]+a[1])/2;
                    a[0]=a[1];
                    a[1]=a[2];
                    j=1;
                    if (ampl > 0) {
                        *po++=ampl;
                        *po++=mean;
                        *po++=0.50;
                    }
                    break;
                }
                default: {
                    mean=(a[j-1]+a[j-2])/2;
                    a[j-2]=a[j];
                    j=j-2;
                    if (ampl > 0) {
                        *po++=ampl;
                        *po++=mean;
                        *po++=1.00;
                        cNr++;
                    }
                    break;
                }
            }
        }
    }
    for (index=0; index<j; index++) {
        ampl=fabs(a[index]-a[index+1])/2;
        mean=(a[index]+a[index+1])/2;
        if (ampl > 0){
            *po++=ampl;
            *po++=mean;
            *po++=0.50;
        }
    }
    /* you can free the allocated memory */
    /* for array_out data */
    mxSetN(array_out, tot_num - cNr);
    hs[0]=array_out;
}
Here's what happens, in this order:
1. increment j by 1;
2. assign to a[j] the value pointed at by pr;
3. increment pr.
You specifically asked about:
a[++j]=*pr++;
j is being incremented before the assignment. In Python the left-hand side would be:
a[j+1]
and you would also then need to increment j before you use it next:
j += 1
The right-hand side simply accesses the current position and then increments the position in the array. In Python you would probably just use an iterator for your array, for example:
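(A rough sketch only: it assumes array_ext is already a flat Python list of the input values, and the names are illustrative.)

# Rough sketch of the loop head a[++j] = *pr++ using an iterator.
array_ext = [2.0, 5.0, 1.0, 4.0]   # example input data
values = iter(array_ext)           # plays the role of the pointer pr
a = [0.0] * len(array_ext)         # working buffer, like the C array a[]
j = -1
for _ in range(len(array_ext)):
    j += 1                         # ++j: increment j before it is used
    a[j] = next(values)            # *pr++: read the current element, then advance
    # ... the inner while loop from the C code goes here ...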
BTW, you might find it difficult to do a line-by-line 'translation' of the code. I would suggest writing down the steps of the algorithm and then tackling it fresh in Python, if that is what you need.
In Python, there aren't pointers, so how you translate this will depend on how you decide to represent pr. If you think of the pointer as a copy of the list pr = array_ext[:], the line you've highlighted would be something like
j = j + 1
a[j] = pr.pop(0)
For greater efficiency (and a closer parallel to what the C code is doing), you could use pr as an index into the list array_ext, starting it at 0. Then, the line you highlighted does this:
j = j + 1
a[j] = array_ext[pr]
pr = pr + 1

C++ zLib Byte array Compressing

After getting my first problem solved in C++ zLib compress byte array, I faced another problem:
void CGGCBotDlg::OnBnClickedButtonCompress()
{
    // TODO: Add your control notification handler code here
    z_const char hello[256];
    hello[0] = 0x0A;
    hello[1] = 0x0A;
    hello[2] = 0x0A;
    hello[3] = 0x0A;
    hello[4] = 0x0A;
    hello[5] = PKT_END;
    hello[6] = PKT_END;
    hello[7] = PKT_END;

    Byte compr[256];
    uLong comprLen = sizeof(compr);
    int ReturnCode;

    ReturnCode = Compress(compr, comprLen, hello, Z_DEFAULT_COMPRESSION);
    g_CS.Send(&compr, comprLen);
}

int CGGCBotDlg::Compress(Byte Compressed[], uLong CompressedLength, CHAR YourByte[], int CompressionLevel)
{
    int zReturnCode;
    int Len;

    for (int i = 0 ; i <= 10240 ; i++)
    {
        if (YourByte[i] == PKT_END && YourByte[i+1] == PKT_END && YourByte[i+2] == PKT_END)
        {
            Len = i - 1;
            break;
        }
    }

    uLong Length = (uLong)(sizeof(YourByte) * 1.0001) + 12;
    zReturnCode = compress2(Compressed, &Length, (const Bytef*)YourByte, Len, CompressionLevel);
    return zReturnCode;
}
I'm trying to compress hello[], which is actually 5 bytes (I want to compress the first 5 bytes).
The expected result after compressing is: 0x78,0x9C,0xE3,0xE2,0x02,0x02,0x06,0x00,0x00,0xCE,0x00,0x33
But what I get after compressing is just the first 4 bytes of the expected result, and the other bytes are something else.
And my second problem is that I want to replace the 256 in Byte compr[256] with the exact number of bytes the original buffer compresses to (which is 12 in my case).
Would be great if someone corrected me.
Thanks
This line is wrong:
Len = i - 1;
because when i is 5, you do Len = i - 1;, so Len will be 4, but you want to compress 5 bytes. Just use:
Len = i;
Another problem: comprLen never gets the actual compressed length assigned to it. In Compress(Byte Compressed[], uLong CompressedLength..), CompressedLength is not used; I assume you want its value back. You should define it like this:
Compress(Byte Compressed[], uLong& CompressedLength..)
and change the compress2 line to use CompressedLength instead of Length:
zReturnCode = compress2(Compressed, &CompressedLength, (const Bytef*)YourByte, Len,CompressionLevel);
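If you want to cross-check the expected bytes, Python's zlib module wraps the same library, so a quick sketch like the one below prints both the compressed stream and its exact length (the number you want to send instead of 256). The exact bytes can in principle vary with zlib version and level, so treat the byte comparison as a sanity check rather than a guarantee.

import zlib

# Compress the same five 0x0A bytes that the C++ code puts in hello[0..4].
data = b"\x0a" * 5
compressed = zlib.compress(data, 6)   # level 6 is what Z_DEFAULT_COMPRESSION normally maps to

print(compressed.hex(" "))            # e.g. 78 9c e3 e2 02 02 06 00 00 ce 00 33
print(len(compressed))                # the exact number of bytes to send instead of 256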