Why am I getting blank output? The pointers are able to modify the value, but can't read it. Why?
#include <iostream>
using namespace std;
int main(){
int a = 0;
char *x1,*x2,*x3,*x4;
x1 = (char *)&a;
x2 = x1;x2++;
x3 = x2;x3++;
x4 = x3;x4++;
*x1=1;
*x2=1;
*x3=1;
*x4=1;
cout <<"#" << *x1 << " " << *x2 << " " << *x3 << " " << *x4 << "#"<<endl ;
cout << a << endl;
}
[Desktop]👉 g++ test_pointer.cpp
[Desktop]👉 ./a.out
# #
16843009
I want to read the value of the integer using pointers of type char,
so I can read it byte by byte.
You're streaming chars. These get automatically ASCII-ised for you by IOStreams*, so you're seeing (or rather, not seeing) unprintable characters (in fact, all 0x01 bytes).
You can cast to int to see the numerical value, and perhaps add std::hex for a conventional view.
Example:
#include <iostream>
#include <iomanip>
int main()
{
int a = 0;
// Alias the first four bytes of `a` using `char*`
char* x1 = (char*)&a;
char* x2 = x1 + 1;
char* x3 = x1 + 2;
char* x4 = x1 + 3;
*x1 = 1;
*x2 = 1;
*x3 = 1;
*x4 = 1;
std::cout << std::hex << std::setfill('0');
std::cout << '#' << "0x" << std::setw(2) << (int)*x1
<< ' ' << "0x" << std::setw(2) << (int)*x2
<< ' ' << "0x" << std::setw(2) << (int)*x3
<< ' ' << "0x" << std::setw(2) << (int)*x4
<< '#' << '\n';
std::cout << "0x" << a << '\n';
}
// Output:
// #0x01 0x01 0x01 0x01#
// 0x1010101
(live demo)
Those saying that your program has undefined behaviour are incorrect (assuming your int has at least four bytes in it); aliasing objects via char* is specifically permitted.
The 16843009 output is correct; that's equal to 0x01010101 which you'd again see if you put your stream into hex mode.
N.B. Some people will recommend reinterpret_cast<char*>(&a) and static_cast<int>(*x1), instead of C-style casts, though personally I find them ugly and unnecessary in this particular case. For the output you can at least write +*x1 to get a "free" promotion to int (via the unary + operator), but that's not terribly self-documenting.
* Technically it's something like the opposite; IOStreams usually automatically converts your numbers and booleans and things into the right ASCII characters to appear correct on screen. For char it skips that step, assuming that you're already providing the ASCII value you want.
Assuming an int is at least 4 bytes long on your system, the program manipulates the 4 bytes of int a.
The result 16843009 is the decimal value of 0x01010101, so this is as you might expect.
You don't see anything in the first line of output because you write 4 characters with the binary value 1 (0x01), which is an invisible control character (ASCII SOH).
When you modify your program like this
*x1='1';
*x2='3';
*x3='5';
*x4='7';
you will see output with the expected characters
#1 3 5 7#
926233393
The value 926233393 is the decimal representation of 0x37353331 where 0x37 is the ASCII value of the character '7' etc.
(These results are valid for a little-endian architecture.)
You can use unary + to convert a character type (printed as a symbol) into an integer type (printed as a number):
cout <<"#" << +*x1 << " " << +*x2 << " " << +*x3 << " " << +*x4 << "#"<<endl ;
See integral promotion.
Have a look at your declarations of the x's
char *x1,*x2,*x3,*x4;
these are pointers to chars (characters).
In your stream output they are interpreted as printable characters.
A quick look at an ASCII table shows that the low values are not printable.
The bytes of your int a that the x's point to hold the value 1, which is an unprintable control character.
One possibility to get readable output is to cast the characters to int, so that the stream prints the numerical representation instead of the ASCII character:
cout <<"#" << int(*x1) << " " << int(*x2) << " " << int(*x3) << " " << int(*x4) << "#"<<endl ;
If I understood your problem correctly, this is the solution
#include <iostream>
using namespace std;
int main(){
int a = 0;
char *x1,*x2,*x3,*x4;
x1 = (char*)&a;
x2 = x1;x2++;
x3 = x2;x3++;
x4 = x3;x4++;
*x1=1;
*x2=1;
*x3=1;
*x4=1;
cout <<"#" << (int)*x1 << " " << (int)*x2 << " " << (int)*x3 << " " << (int)*x4 << "#"<<endl ;
cout << a << endl;
}
Related
Source Code:
#include <iostream>
using namespace std;
int main() {
unsigned long P;
P = 0x7F << 24;
cout << P << endl;
P = 0x80 << 24;
cout << P << endl;
return 0;
}
Output:
2130706432
18446744071562067968
As you can see, the first result is correct, but the second is extremely wrong:
the expected result is 2147483648, not 18446744071562067968.
I want to know why.
The type of the expression 0x80 << 24 is not unsigned long, it’s int. You then assign the result of that expression to P, and in the process convert it to an unsigned long. But at that point it has already overflowed (incidentally causing undefined behaviour). Use an unsigned long literal in your expression:
P = 0x80ul << 24;
This problem is not entirely portable, since it depends on the number of bits in your representation of unsigned long. In this case, a signed overflow is followed by a sign-extending conversion to unsigned, and the two effects combine to produce your surprising result.
The basic solution is indicated here: ULL suffix on a numeric literal
I've broken it down in the code below.
#include <cstdint>
#include <iostream>
using namespace std;
int main() {
cout << "sizeof(unsigned long) = " << sizeof(unsigned long) << "\n";
cout << "sizeof(0x80) = " << sizeof(0x80) << "\n";
int32_t a = (0x80 << 24); // signed overflow: positive to negative (undefined behaviour)
uint64_t b = a; // sign-extended conversion: negative to huge positive
uint64_t c = (0x80 << 24); // simply broken
uint64_t d = (0x80UL << 24); // simply fixed
uint32_t e = (0x80U << 24); // what you probably intended
cout << "a = " << a << "\n";
cout << "b = " << b << "\n";
cout << "c = " << c << "\n";
cout << "d = " << d << "\n";
cout << "e = " << e << "\n";
}
Output:
$ ./c-unsigned-long-cannot-hold-the-correct-number-over-2-147-483-647.cpp
sizeof(unsigned long) = 8
sizeof(0x80) = 4
a = -2147483648
b = 18446744071562067968
c = 18446744071562067968
d = 2147483648
e = 2147483648
If you're doing bit-shift operations like this, it probably makes sense to be explicit about the integer sizes (as I have shown in the code above).
What's the difference between long long and long
Fixed width integer types (since C++11)
I'm trying to get the x and y coordinate values of two eyes. I detect them using an OpenCV XML file, but the x values printed to the console with printf() differ from the values written to the text file with operator<<. Why is this so?
printf("X = %o,Y = %o\n", eyes[j].x, eyes[j].y);
ofstream coordinates;
coordinates.open("C:/Users/dougl/Desktop/Coordinates.txt");
coordinates << "X = " << eyes[j].x << "\n" << "Y = " << eyes[j].y;
#include <iostream>
using std::cout;
using std::endl;
using std::oct;
using std::hex;
int main()
{
long int pos_value = 12345678;
cout << "The decimal value 12345678 is printed out as" << endl;
cout << "octal: " << oct << pos_value << endl;
cout << "hexadecimal: " << hex << pos_value << endl << endl;
return 0;
}
printf prints the unsigned octal representation of an integer when given the %o format specifier.
https://www.geeksforgeeks.org/format-specifiers-in-c/amp/
The stream operator<< overload is chosen based on the data type of the value passed.
So, to print an octal value you need to apply the std::oct formatting manipulator:
cout << "octal: " << oct << pos_value << endl;
Reference: http://faculty.cs.niu.edu/~mcmahon/CS241/c241man/node83.html
You are telling printf() to output the integers in octal form, whereas operator<< outputs the integers in decimal form by default.
To make the two outputs match, you need to either:
change %o to %d or %u, depending on whether the x and y values are signed or unsigned, respectively.
use the std::oct I/O manipulator with operator<<.
I wrote a text cipher program. It seems to work on text strings a few characters long, but not on longer ones. It gets the input text by reading from a text file. On longer text strings it still runs without crashing, but it doesn't seem to work properly.
Below I have isolated the code that performs the text scrambling. In case it is useful, I am running this in a virtual machine running Ubuntu 19.04. When running the code, enter 'auto' when prompted. I removed the rest of the code so it wasn't too long.
#include <iostream>
#include <string>
#include <sstream>
#include <random>
#include <cmath>
#include <cctype>
#include <chrono>
#include <fstream>
#include <new>
bool run_cypher(char (&a)[27],char (&b)[27],char (&c)[11],char (&aa)[27],char (&bb)[27],char (&cc)[11]) {
//lowercase cypher, uppercase cypher, number cypher, lowercase original sequence, uppercase original sequence, number original sequence
std::ifstream out_buffer("text.txt",std::ios::in);
std::ofstream file_buffer("text_out.txt",std::ios::out);
//out_buffer.open();
out_buffer.seekg(0,out_buffer.end);
std::cout << "size of text: " << out_buffer.tellg() << std::endl;//debug
const int size = out_buffer.tellg();
std::cout << "size: " << size << std::endl;//debug
out_buffer.seekg(0,out_buffer.beg);
char *out_array = new char[size + 1];
std::cout << "size of out array: " << sizeof(out_array) << std::endl;//debug
for (int u = 0;u <= size;u = u + 1) {
out_array[u] = 0;
}
out_buffer.read(out_array,size);
out_buffer.close();
char original[size + 1];//debug
for (int bn = 0;bn <= size;bn = bn + 1) {//debug
original[bn] = out_array[bn];//debug
}//debug
for (int y = 0;y <= size - 1;y = y + 1) {
std::cout << "- - - - - - - -" << std::endl;
std::cout << "out_array[" << y << "]: " << out_array[y] << std::endl;//debug
int match;
int case_n; //0 = lowercase, 1 = uppercase
if (isalpha(out_array[y])) {
if (islower(out_array[y])) {
//std::cout << "out_array[" << y << "]: " << out_array[y] << std::endl;//debug
//int match;
for (int ab = 0;ab <= size - 1;ab = ab + 1) {
if (out_array[y] == aa[ab]) {
match = ab;
case_n = 0;
std::cout << "matched letter: " << aa[match] << std::endl;//debug
std::cout << "letter index: " << match << std::endl;//debug
std::cout << "case_n: " << case_n << std::endl;//debug
}
}
}
if (isupper(out_array[y])) {
for (int cv = 0;cv <= size - 1;cv = cv + 1) {
if (out_array[y] == bb[cv]) {
case_n = 1;
match = cv;
std::cout << "matched letter: " << bb[match] << std::endl;//debug
std::cout << "letter index: " << match << std::endl;//debug
std::cout << "case_n: " << case_n << std::endl;//debug
}
}
}
if (case_n == 0) {
out_array[y] = a[match];
std::cout << "replacement letter: " << a[match] << " | new character: " << out_array[y] << std::endl;//debug
}
if (case_n == 1) {
std::cout << "replacement letter: " << b[match] << " | new character: " << out_array[y] << std::endl;//debug
out_array[y] = b[match];
}
}
if (isdigit(out_array[y])) {
for (int o = 0;o <= size - 1;o = o + 1) {
if (out_array[y] == cc[o]) {
match = o;
std::cout << "matched letter: " << cc[match] << std::endl;//debug
std::cout << "letter index: " << match << std::endl;//debug
}
}
out_array[y] = c[match];
std::cout << "replacement number: " << c[match] << " | new character: " << out_array[y] << std::endl;//debug
}
std::cout << "- - - - - - - -" << std::endl;
}
std::cout << "original text: " << "\n" << original << "\n" << std::endl;
std::cout << "encrypted text: " << "\n" << out_array << std::endl;
delete[] out_array;
return 0;
}
int main() {
const int alpha_size = 27;
const int num_size = 11;
char l_a_set[] = "abcdefghijklmnopqrstuvwxyz";
char cap_a_set[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
char n_a_set[] = "0123456789";
std::cout << "sizeof alpha_set: " << std::endl;//debug
char lower[alpha_size] = "mnbvcxzasdfghjklpoiuytrewq";
char upper[alpha_size] = "POIUYTREWQASDFGHJKLMNBVCXZ";
char num[num_size] = "9876543210";
int p_run; //control variable. 1 == running, 0 == not running
int b[alpha_size]; //array with values expressed as index numbers
std::string mode;
int m_set = 1;
while (m_set == 1) {
std::cout << "Enter 'auto' for automatic cypher generation." << std::endl;
std::cout << "Enter 'manual' to manually enter in a cypher. " << std::endl;
std::cin >> mode;
std::cin.ignore(1);
std::cin.clear();
if (mode == "auto") {
p_run = 2;
m_set = 0;
}
if (mode == "manual") {
p_run = 3;
m_set = 0;
}
}
if (p_run == 2) { //automatic mode
std::cout <<"lower cypher: " << lower << "\n" << "upper cypher: " << upper << "\n" << "number cypher: " << num << std::endl;//debug
run_cypher(lower,upper,num,l_a_set,cap_a_set,n_a_set);
return 0;//debug
}
while (p_run == 3) {//manual mode
return 0;//debug
}
return 0;
}
For example, using an array containing “mnbvcxzasdfghjklpoiuytrewq” as the cipher for lower case letters, I get “mnbv” if the input is “abcd”. This is correct.
If the input is “a long word”, I get “m gggz zzzv” as the output when it should be “m gkjz rkov”. Sort of correct, but still wrong. If I use “this is a very very long sentence that will result in the program failing” as the input, I get "uas” as the output, which is completely wrong. The program still runs, but it fails to function as intended. So as you can see, it does work, but not on any text strings that are remotely long. Is this a memory problem, or did I make a horrible mistake somewhere?
For your specific code, you should run it through a memory checking tool such as valgrind, or compile with an address sanitizer.
Here are some examples of memory problems that most likely won't crash your program:
Forgetting to delete a small object, which is allocated only once in the program. A memory leak can remain undetected for decades, if it does not make the program run out of memory.
Reading from allocated uninitialized memory. May still crash if the system allocates objects lazily at the first write.
Writing slightly out of bounds, just past the end of a heap object whose size is not a multiple of 8 (sizeof(obj) % 8 != 0). This often goes unnoticed, since heap allocation is usually done in multiples of 8 or 16. You can read about it in the answers to this SO question.
Dereferencing a nullptr does not crash on some systems. For example AIX used to put zeros at and near address 0x0. Newer AIX might still do it.
On many systems without memory management, address zero is either a regular memory address, or a memory mapped register. This memory can be accessed without crashing.
On any system I have tried (POSIX based), it was possible to allocate valid memory at address zero through memory mapping. Doing so can even make writing through nullptr work without crashing.
This is only a partial list.
Note: these memory problems are undefined behavior. This means that even if the program does not crash in debug mode, the compiler might assume wrong things during optimization. If the compiler assumes wrong things, it might generate optimized code that crashes after optimization.
For example, most compilers will optimize this:
int a = *p; // implies that p != nullptr
if (p)
boom(p);
Into this:
int a = *p;
boom(p);
If a system allows dereferencing nullptr, then this code might crash after optimization. It will not crash due to the dereferencing, but because the optimization did something the programmer did not foresee.
Is there a way to make setw and setfill pad the end of a string instead of the front?
I have a situation where I'm printing something like this.
CONSTANT TEXT variablesizeName1 .....:number1
CONSTANT TEXT varsizeName2 ..........:number2
I want to add a variable amount of '.' to the end of
"CONSTANT TEXT variablesizeName#" so I can make ":number#" line up on the screen.
Note: I have an array of "variablesizeName#" so I know the widest case.
Or
Should I do it manually by setting setw like this
for( int x= 0; x < ARRAYSIZE; x++)
{
string temp = string("CONSTANT TEXT ")+variabletext[x];
cout << temp;
cout << setw(MAXWIDTH - temp.length()) << setfill('.') << ":";
cout << Number<<"\n";
}
I guess this would do the job but it feels kind of clunky.
Ideas?
You can use manipulators std::left, std::right, and std::internal to choose where the fill characters go.
For your specific case, something like this could do:
#include <iostream>
#include <iomanip>
#include <string>
const char* C_TEXT = "Constant text ";
const size_t MAXWIDTH = 10;
void print(const std::string& var_text, int num)
{
std::cout << C_TEXT
// align output to left, fill goes to right
<< std::left << std::setw(MAXWIDTH) << std::setfill('.')
<< var_text << ": " << num << '\n';
}
int main()
{
print("1234567890", 42);
print("12345", 101);
}
Output:
Constant text 1234567890: 42
Constant text 12345.....: 101
EDIT:
As mentioned in the link, std::internal works only with integer, floating point and monetary output. For example with negative integers, it'll insert fill characters between negative sign and left-most digit.
This:
int32_t i = -1;
std::cout << std::internal
<< std::setfill('0')
<< std::setw(11) // max 10 digits + negative sign
<< i << '\n';
i = -123;
std::cout << std::internal
<< std::setfill('0')
<< std::setw(11)
<< i;
will output
-0000000001
-0000000123
Something like:
cout << left << setw(MAXWIDTH) << setfill('.') << temp << ':' << Number << endl;
Produces something like:
derp..........................:234
herpderpborp..................:12345678
#include <iostream>
#include <iomanip>
int main()
{
std::cout
<< std::setiosflags(std::ios::left) // left align this section
<< std::setw(30) // within a max of 30 characters
<< std::setfill('.') // fill with .
<< "Hello World!"
<< "\n";
}
//Output:
Hello World!..................
I am a C coder, new to C++.
I tried to print the following with cout and got strange output. Any comment on this behaviour is appreciated.
#include<iostream>
using namespace std;
int main()
{
unsigned char x = 0xff;
cout << "Value of x " << hex<<x<<" hexadecimal"<<endl;
printf(" Value of x %x by printf", x);
}
output:
Value of x ÿ hexadecimal
Value of x ff by printf
<< handles char as a 'character' that you want to output, and just outputs that byte exactly. The hex only applies to integer-like types, so the following will do what you expect:
cout << "Value of x " << hex << int(x) << " hexadecimal" << endl;
Billy ONeal's suggestion of static_cast would look like this:
cout << "Value of x " << hex << static_cast<int>(x) << " hexadecimal" << endl;
You are doing the hex part correctly, but x is a character, and C++ is trying to print it as a character. You have to cast it to an integer.
#include<iostream>
using namespace std;
int main()
{
unsigned char x = 0xff;
cout << "Value of x " << hex<<static_cast<int>(x)<<" hexadecimal"<<endl;
printf(" Value of x %x by printf", x);
}
If I understood your question correctly: since you have already assigned unsigned char x = 0xff;, you may also want to know how to print its decimal value.
#include <iostream>
int main()
{
unsigned char x = 0xff;
std::cout << std::dec << static_cast<int>(x) << std::endl;
}
which gives the value 255 instead.
For further detail on the std::dec stream manipulator, see http://www.cplusplus.com/reference/ios/dec/.
If you want to print the hexadecimal value of a decimal number, here is a simple example
#include <iostream>
#include <iomanip>
int main()
{
int x = 255;
std::cout << std::showbase << std::setw(4) << std::hex << x << std::endl;
}
which prints 0xff.
The <iomanip> header (needed only for std::setw) is optional if all you want is the 0x ahead of ff; std::showbase is what supplies the prefix. The original reply about printing hex numbers is at http://www.cplusplus.com/forum/windows/51591/.