I'm doing an exercise on two's complement. The question reads as follows:
Solving 11 (base 10) − 11 (base 10) using 2's complement with a 7-bit data representation will lead to a problem. Explain what the problem is and suggest steps to overcome it.
I got 0 for the answer because 11 − 11 = 0. What problem can there be if the answer is 0?
And is there a way to overcome it?
So 11 in base 10 is the following in 7-bit base 2:
000 1011
To subtract 11, you need to find -11 first. One of the many ways is to invert all the bits and add 1, leaving you with:
111 0101
Add the two numbers together:
1000 0000
Well, that's interesting. The 8th bit is a 1.
You didn't end up with zero. Or did you?
That's the question that your homework is attempting to get you to answer.
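To see it concretely, here is a minimal Python sketch of the same subtraction in a fixed 7-bit register (Python, to match the other code in this collection). The masking step at the end is the usual way to overcome the problem: discard the carry-out into the 8th bit, and the result wraps back to zero.

BITS = 7
MASK = (1 << BITS) - 1             # 0b1111111

def negate(x, bits=BITS):
    # two's complement: invert all the bits, add 1, keep `bits` bits
    return (~x + 1) & ((1 << bits) - 1)

a = 11                             # 000 1011
b = negate(11)                     # 111 0101
raw = a + b
print(f"{raw:08b}")                # 10000000 -- the 8th bit is set
print(f"{raw & MASK:07b}")         # 0000000  -- drop the carry: 11 - 11 == 0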
I am doing practice problems for midterms. The question is as follows:
Suppose we want to transmit a message 11001001 and protect it from error using the CRC polynomial x^3+1. Use polynomial long division to determine the message that should be transmitted (show all steps to get CRC bits and the complete message transmitted).
In the only solution I could find, the long division stops before the final zero. In the work I have done, there's an extra 1 in the quotient (pic attached). Why is the online solution so different from mine?
Your solution is correct; the solution in the link is missing the final 1 bit of the quotient. In hex, the example is the carry-less division 0x648 / 0x9, giving quotient 0xd3 and remainder 0x3, which is what you have in your question. (It was recommended to post this as an answer so others browsing the question would know it has one.)
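If you want to double-check it mechanically, here is a small Python sketch (the helper is written for this answer, not taken from the linked solution) that runs the carry-less division on 0x648, i.e. the message 11001001 with three zero bits appended, by 0x9, i.e. the polynomial x^3 + 1:

def gf2_divmod(dividend, divisor):
    # polynomial long division over GF(2): subtraction is XOR
    quotient = 0
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        quotient |= 1 << shift
        dividend ^= divisor << shift
    return quotient, dividend

q, r = gf2_divmod(0x648, 0x9)
print(hex(q), hex(r))              # 0xd3 0x3
# so the transmitted message is 11001001 followed by the CRC bits 011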
For example, take this answer to a bit-reversal problem from 4 years ago:
reverse_Bits function: https://stackoverflow.com/a/50596723/19574301
Code:
def reverse_Bits(n, no_of_bits):
    result = 0
    for i in range(no_of_bits):
        result <<= 1       # make room for the next output bit
        result |= n & 1    # copy the current lowest bit of n into it
        n >>= 1            # move on to the next input bit
    return result
I don't understand how to think through the problem at all.
You AND the number (n) with one to check its rightmost bit. Then you right-shift the number by one, so the next AND checks the second bit, and so on for all the bits. So basically you add 1 to the result if the current bit is 1. Meanwhile you left-shift the result, so I understand you're trying to put each bit at its correct index, and if there is a one you add it... I get lost here.
I mean, I know the code works and I know how, but I couldn't write it from zero without having this reference, because I don't know how you come up with every step of the algorithm.
I don't know if I've explained my problem well or if it's just a mess, but I'm hoping somebody can help me!
If your question is, "how would I write this from scratch, without any help?" then personally I find it comes from a combination of sketching out simple cases, working through them manually, and implementing progressively.
For example, you may have started with: you have the number 3 (because it is easy) and you want to reverse its bits:
3 = 0000 0011 b
need to &1 and if it is non-zero, write 1000 0000 b
need to &2 and if it is non-zero, write 0100 0000 b
need to &4 and as it is zero, write nothing...
...
Okay, how can I automate 1, 2, 4, 8, 16, 32...? I can have a variable which doubles, or I can left-shift a number by 1. Take your pick; it does not matter.
For writing the values, same thing: how can I write 1000 0000 b and then 0100 0000 b, etc.? Well, start off with 1000 0000 b and divide by 2, or right-shift by 1.
With these two simple things, you will end up with something like this for one bit:
result = 0
src_mask = 0x01    # the bit to read; will double each step
dst_mask = 0x80    # the bit to write; will halve each step
if number & src_mask != 0:
    result |= dst_mask
One bit working. Then you add a loop so that you can do all the bits, with a *2 for src_mask and a /2 for dst_mask each time around to address each bit. Again, this is all figured out from the scribbles on paper listing what I want to happen for each bit.
Then comes optimization: I don't like the 'if', so can I figure out a way of adding the bit directly without testing? If the bit is 0 it adds 0, and if the bit is set, I add the bit.
This is generally the progression: manual scribbles, a first design, and then step-by-step enhancements, ending in something like the sketch below.
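A minimal sketch of where that lands (written for this answer; the names are mine):

def reverse_bits_masks(number, no_of_bits=8):
    result = 0
    src_mask = 0x01                     # bit to read: 1, 2, 4, 8, ...
    dst_mask = 1 << (no_of_bits - 1)    # bit to write: 0x80, 0x40, ...
    for _ in range(no_of_bits):
        if number & src_mask != 0:
            result |= dst_mask
        src_mask <<= 1                  # the *2
        dst_mask >>= 1                  # the /2
    return result

print(f"{reverse_bits_masks(3):08b}")   # 11000000

The reverse_Bits function in the question is what falls out of the final optimization step: the two masks are replaced by shifting n right and result left, and the 'if' disappears because result |= n & 1 adds either the bit or nothing.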
Lately I've been thinking about compression on computers, and stumbled upon the question: why isn't bitwise compression more common for large files?
I tried looking around and didn't manage to find anyone talking about the subject, at least not the way I meant it. I might not be talking about the same subject, or not using the correct name, so I'll explain what I had in mind.
Let's say we have the following string: "Hi I'm a string!".
Its value in binary is:
01001000011010010010000001001001001001110110110100100000011000010010000001110011011101000111001001101001011011100110011100100001
As you can see, in the binary sequence there are more than a few recurring runs of 0's and 1's. My idea is to remove them and include an indexing file saying exactly where you need to add 0's or 1's and how many. For example, let's break down the first three bytes:
01001000 01101001 00100000
The indexing file will look like this:
[2,1] [5,3]
[1,1] [5, 1]
[0,1] [3, 4]
And the binary will be:
01010 010101 010
And of course there will be filler bits at the end until the length reaches N % 8 == 0.
My question is: why isn't this type of compression common or existent? If it is, I would love to see an example of it being used practically in the real world; if it isn't, I would love to learn why it isn't used.
This algorithm would work on certain types of data. It is far less effective than other algorithms that are in use, though.
For example, the LZ family of algorithms can reference data that has been seen before. It can reference strings of zeroes like your algorithm, but it can also reference any other pattern. It is more general.
I don't think your algorithm would achieve compression on common English text. There are too many 1 bits, and storing a bit position takes many bits.
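To put rough numbers on that, here is an illustrative Python sketch (not a real codec; the 16 bits per index entry is an assumption) that finds the zero-runs in the example string and compares the bits saved with the cost of indexing them:

from itertools import groupby

text = "Hi I'm a string!"
bits = "".join(f"{ord(c):08b}" for c in text)

# collect the runs of zeros as (position, length) pairs, like the index file
runs, pos = [], 0
for value, group in groupby(bits):
    length = sum(1 for _ in group)
    if value == "0":
        runs.append((pos, length))
    pos += length

saved = sum(length for _, length in runs)
index_cost = 16 * len(runs)        # assume 16 bits per [position, length] pair
print(len(bits), saved, len(runs), index_cost)
# on text like this the index costs far more than the bits it removes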
I have access to a program which I'm running which SHOULD be guessing a very low number for certain things and outputting that number (probably 0 or 1). However, 0.2% of the time when it should be outputting 0 it outputs a number from 4,294,967,286 to 4,294,967,295. (Note: the latter is the maximum value a 32-bit unsigned integer can hold.)
What I GUESS is happening is that the function computes a value less than 0, i.e. -10 to -1, and when that number is assigned to an unsigned int it wraps around to the maximum value or close to it.
I therefore assumed the program is written in C (I do not have access to the source code) and tested in Visual Studio 2012 what would happen if I assigned a variety of negative numbers to an unsigned integer. Unfortunately, nothing seemed to happen: it would still output the number to the console as a negative integer. I'm wondering if this is MSVS 2012 trying to be smart, or there is perhaps some other reason.
Anyway, am I correct in assuming that this is in fact what is happening, and the reason why the program outputs the maximum value of an unsigned int? Or are there any other valid reasons why this could be happening?
Edit: All I want to know is whether it's valid to assume that attempting to assign a negative number to an unsigned integer can result in setting the integer to (or near) the maximum value, i.e. 4,294,967,295. If this is IMPOSSIBLE, then okay. I'm not looking for SPECIFICS on exactly why this is happening with this program, as I do not have access to the code; I just want to know if it's possible, and therefore a plausible explanation for the results I am getting.
In C and C++, assigning -1 to an unsigned number will give you the maximum value of that unsigned type.
This is guaranteed by the standard, and all compilers I know of (even VC) implement this part correctly. Your C example probably has some other problem that hides the result, for example printing the value with %d (which reinterprets it as signed) instead of %u; I cannot say without seeing the code.
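Kept in Python to match the other snippets in this collection, the conversion rule itself is easy to reproduce: converting a negative value to a 32-bit unsigned type is reduction modulo 2**32.

# C's conversion of a negative value to a 32-bit unsigned int
# is reduction modulo 2**32
for v in (-1, -10):
    print(v % 2**32)    # 4294967295 and 4294967286 -- the range from the question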
You can think of negative numbers as having their topmost bit count as negative.
A 4-bit integer would be:

Binary        Hex    INT4            UINT4
(in memory)          (as decimal)    (as decimal)
0000          0x0      0              0 (UINT4_MIN)
0001          0x1      1              1
0010          0x2      2              2
0100          0x4      4              4
0111          0x7      7 (INT4_MAX)   7
1000          0x8     -8 (INT4_MIN)   8
1111          0xF     -1             15 (UINT4_MAX)
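A small Python sketch reproducing the two right-hand columns (the rule: if the top bit is set, subtract 2**4):

for pattern in (0b0000, 0b0001, 0b0010, 0b0100, 0b0111, 0b1000, 0b1111):
    # unsigned reading is the raw value; the signed reading subtracts 16
    # whenever the top bit is set
    signed = pattern - 2**4 if pattern & 0b1000 else pattern
    print(f"{pattern:04b}  0x{pattern:X}  {signed:>3}  {pattern:>2}")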
It may be that the header of a library lies to you and the value is negative.
If the library has no other means of telling you about errors this may be a deliberate error value. I have seen "nonsensical" values used in that manner before.
The error could be calculated as (UINT4_MAX - error) or always UINT4_MAX if an error occurs.
Really, without any source code this is a guessing game.
EDIT: I expanded the illustrative table a bit.
If you want to log a number like that, you may want to log it in hexadecimal form. The hex view lets you peek into the underlying memory a bit quicker once you are used to it.
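For example, a sketch of the reverse mapping (the helper name is mine), for when such a value shows up in a log and you suspect it is really a small negative number:

def as_signed_32(u):
    # reinterpret a 32-bit unsigned value as signed two's complement
    return u - 2**32 if u >= 2**31 else u

v = 4294967286
print(hex(v))            # 0xfffffff6 -- "almost all ones" jumps out in hex
print(as_signed_32(v))   # -10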
I know how to encode and decode CRC. For example, say the binary message to be encoded is 11010011101100 and the generator polynomial is 1011.
Then the result is:

11010011101100 000   <--- input padded with 3 zero bits on the right
1011                 <--- divisor
--------------
01100011101100 000   <--- result of the first XOR
 1011                <--- divisor, shifted under the next leading 1
--------------
00111011101100 000
  1011
...
-----------------
00000000000000 100   <--- remainder (3 bits)
To decode it you use the same technique, but replace the 3 zeros with the remainder (100); a remainder of 0 after the division means no error was detected.
However, is there a way of using this same method to encode and decode CRC codes on ordinary natural numbers, without converting them to binary?
I tried to do some research; however, I can't find any method or examples using natural numbers, only binary ones. Any help please, guys?
It's exactly the same algorithm; you are already working with natural numbers. The way they're written is immaterial. "Shift left by three bits" is equivalent to "multiply by 8". "Shift right by one bit" is equivalent to "divide by two, discarding the remainder". "Take the last three bits" is equivalent to "take the remainder from dividing by 8". The bit-XOR of two numbers isn't very easy to describe in arithmetical terms, though.
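As a sketch of that equivalence (Python, helper name mine): all the shifting is phrased as multiplication and division by powers of two on ordinary natural numbers, with XOR left in as the one step that has no tidy arithmetic description:

def crc_remainder(message, divisor, crc_bits):
    value = message * 2**crc_bits             # "append crc_bits zero bits"
    # while the value still has at least as many bits as the divisor...
    while value.bit_length() >= divisor.bit_length():
        shift = value.bit_length() - divisor.bit_length()
        value ^= divisor * 2**shift            # GF(2) "subtraction" is XOR
    return value

# the worked example above: message 11010011101100 is 13548, divisor 1011 is 11
print(crc_remainder(13548, 11, 3))             # 4, i.e. binary 100
# decoding: divide the transmitted value (remainder appended); 0 means no error
print(crc_remainder(13548 * 8 + 4, 11, 0))     # 0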