String not decrypted properly using <openssl/aes> - c++

I want to write a simple program that will encrypt one text using <openssl/aes.h> and then decrypt it again, to verify the round trip. I wrote the program below.
Adding my whole code here:
#include <stdio.h>
#include <string.h>
#include <fstream>
#include <iostream>
#include <openssl/aes.h>
int main(void)
{
    //encryption testing
    unsigned char inputb[2048] = {'\0'};
    unsigned char encpb[2048] = {'\0'};
    unsigned char opb[2048] = {'\0'};
#define MAX_SIZE 100
    unsigned char oneKey[] = "6BC1BEE22E409F96E93D7E117393172A";
    AES_KEY key;
    AES_KEY key1;
    char testchat[] = "!!!test doctors file!!! #Hospitan name(norman) SICKAPP_NAME=9873471093 #Duration (Duration\
of doctor visitdfwhedf in months)higibujiji TESTATION=-5 #Expiry date MADICINE_NAME=678041783478\n";
    char NULL_byte[16] = {0};

    memcpy((char*)inputb, testchat, strlen(testchat)+1);
    printf("\n\ninputb= %s strlen(testchat)=%d \n\n", inputb, strlen(testchat));
    AES_set_encrypt_key(oneKey, 128, &key);

    unsigned char tmp_char[50] = {'\0'};
    char* pChar = (char*)inputb;
    unsigned char tmp_char_encpb[MAX_SIZE];
    while(*pChar != '\0') {
        memset(tmp_char, '\0', 50);
        memset(tmp_char_encpb, '\0', MAX_SIZE);
        if(strlen(pChar) < 16) {
            strncpy((char*)tmp_char, (char*)pChar, strlen(pChar)+1);
            strncat((char*)tmp_char, NULL_byte, 16 - strlen(pChar)+1);
        }
        else
            strncpy((char*)tmp_char, (char*)pChar, 16);
        printf("Line:%d tmp_char = %s pChar=%d\n", __LINE__, tmp_char, strlen(pChar));
        AES_encrypt(tmp_char, tmp_char_encpb, &key);
        strcat((char*)encpb, (char*)tmp_char_encpb);
        pChar += 16;
    }
    printf("len encpb=%d\n", strlen((char*)encpb));

    //now test with decrypting and check if all is OK...
    unsigned char oneKey1[] = "6BC1BEE22E409F96E93D7E117393172A";
    AES_set_decrypt_key(oneKey1, 128, &key1);
    unsigned char tmp_char_dencpb[MAX_SIZE];
    pChar = (char*)encpb;
    while(*pChar != '\0') {
        memset(tmp_char, '\0', 50);
        if(strlen(pChar) < 16) {
            strncpy((char*)tmp_char, (char*)pChar, strlen(pChar)+1);
            strncat((char*)tmp_char, NULL_byte, 16 - strlen(pChar)+1);
        }
        else
            strncpy((char*)tmp_char, (char*)pChar, 16);
        AES_decrypt(tmp_char, tmp_char_dencpb, &key1);
        strncat((char*)opb, (char*)tmp_char_dencpb, 16);
        memset(tmp_char_dencpb, '\0', MAX_SIZE);
        pChar += 16;
    }
    printf("\n\nopb = %s\n\n", opb);
    return 0;
}
I am building via:
g++ mytest.cpp -lssl -lcrypto
running through GDB:
Program received signal SIGSEGV, Segmentation fault.
0x0000003e48437122 in ____strtoll_l_internal () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.12.x86_64 keyutils-libs-1.4-3.el6.x86_64 krb5-libs-1.9-22.el6_2.1.x86_64 libcom_err-1.41.12-11.el6.x86_64 libgcc-4.4.6-3.el6.x86_64 libselinux-2.0.94-5.2.el6.x86_64 libstdc++-4.4.6-3.el6.x86_64 openssl-1.0.0-20.el6_2.4.x86_64 zlib-1.2.3-27.el6.x86_64
(gdb) backtrace
#0 0x0000003e48437122 in ____strtoll_l_internal () from /lib64/libc.so.6
#1 0x0000000000400e9b in GetExpiryDate (exp_date=0x7fffffffd970) at LicReader.cpp:66
#2 0x0000000000400eeb in IsLicenseExpired () at LicReader.cpp:74
#3 0x0000000000400f3b in main (argc=1, argv=0x7fffffffda88) at LicReader.cpp:86
(gdb)
Output: sometimes I get the correct decrypted string, and sometimes I get junk characters (when the input string changes).
Have I missed something somewhere? Can anyone tell me why AES_decrypt doesn't work some of the time?

Zero-terminated string manipulation is not how to manage encrypted data. For example, you're using strcat to append encrypted data to encpb, but what happens if there's a zero byte in the encrypted output? You silently lose everything after it. Deal instead with the actual block size, which is 16 bytes, and track lengths explicitly. What happens if the data you encrypt is not a multiple of 16 bytes? You have to pad it out to a multiple of 16. How? There are several standard schemes, such as PKCS#7. Plus you should look into cipher-block chaining and salting... lots to learn!
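To make that concrete, here is a minimal sketch of the length-tracked approach with PKCS#7 padding, using the same <openssl/aes.h> block functions as the question. The key bytes and message are placeholders of my own, and raw per-block ECB is kept only to mirror the question; real code should use the EVP interface with CBC or GCM.

#include <cstdio>
#include <cstring>
#include <openssl/aes.h>

int main() {
    // Sketch only: 16 ASCII bytes used directly as a 128-bit key.
    const unsigned char key_bytes[16] = {'0','1','2','3','4','5','6','7',
                                         '8','9','a','b','c','d','e','f'};
    const char *msg = "!!!test doctors file!!!";
    size_t msg_len = strlen(msg);

    // PKCS#7: pad up to the next multiple of 16; a full extra block
    // is appended when the length is already a multiple of 16.
    size_t pad = 16 - (msg_len % 16);
    size_t total = msg_len + pad;
    unsigned char padded[2048];
    memcpy(padded, msg, msg_len);
    memset(padded + msg_len, (int)pad, pad);

    AES_KEY enc_key, dec_key;
    AES_set_encrypt_key(key_bytes, 128, &enc_key);
    AES_set_decrypt_key(key_bytes, 128, &dec_key);

    unsigned char cipher[2048], plain[2048];
    // Track lengths explicitly; the ciphertext may contain 0x00 bytes,
    // so strlen/strcat must never touch it.
    for (size_t off = 0; off < total; off += 16)
        AES_encrypt(padded + off, cipher + off, &enc_key);
    for (size_t off = 0; off < total; off += 16)
        AES_decrypt(cipher + off, plain + off, &dec_key);

    // Strip the PKCS#7 padding: the last byte gives the pad length.
    size_t out_len = total - plain[total - 1];
    printf("%.*s\n", (int)out_len, (const char*)plain);
    return 0;
}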

Related

Convert C-Source image dump into original image

I have created with GIMP a C-Source image dump like the following:
/* GIMP RGBA C-Source image dump (example.c) */
static const struct {
    guint width;
    guint height;
    guint bytes_per_pixel; /* 2:RGB16, 3:RGB, 4:RGBA */
    guint8 pixel_data[304 * 98 * 2 + 1];
} example = {
    304, 98, 2,
    "\206\061\206\061..... }
Is there a way to read this back into GIMP in order to get the original image? It doesn't seem to be possible.
Or is there a tool that can do this back-conversion?
EDITED
Following some suggestions I tried to write a simple C program to do the reverse conversion, ending up with something very similar to other code found on the internet, but neither works:
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include "imgs_press.h"

using namespace std;

int main(int argc, char** argv) {
    int fd;
    const char *name = "orignal_img.pnm";
    fd = open(name, O_WRONLY | O_CREAT, 0644);
    if (fd == -1) {
        perror("open failed");
        exit(1);
    }
    if (dup2(fd, 1) == -1) {
        perror("dup2 failed");
        exit(1);
    }
    // file descriptor 1, i.e. stdout, now points to the output
    // file, which is open for writing.
    // You can now use printf, which writes specifically to stdout.
    printf("P2\n");
    printf("%d %d\n", press_high.width, press_high.height);
    for(int x=0; x<press_high.width * press_high.height * 2; x++) {
        printf("%d ", press_high.pixel_data[x]);
    }
}
As suggested by n-1-8e9-wheres-my-share-m, maybe I need to manipulate the pixels using the correct decoding, but I have no idea how to do that. Does anybody have other suggestions?
The image I got is indeed distorted:
Updated Answer
If you want to decode the RGB565 and write a NetPBM format PNM file without using ImageMagick, you can do this:
#include <stdint.h> /* for uint8_t */
#include <stdio.h>  /* for printf */

/* tell compiler what those GIMP types are */
typedef int guint;
typedef uint8_t guint8;

#include <YOURGIMPIMAGE>

int main(){
    int w = gimp_image.width;
    int h = gimp_image.height;
    int i;
    uint16_t* RGB565p = (uint16_t*)&(gimp_image.pixel_data);

    /* Print P3 PNM header on stdout */
    printf("P3\n%d %d\n255\n", w, h);

    /* Print RGB pixels, ASCII, one RGB pixel per line */
    for(i=0; i<w*h; i++){
        uint16_t RGB565 = *RGB565p++;
        uint8_t r = (RGB565 & 0xf800) >> 8; /* top 5 bits -> 8-bit red */
        uint8_t g = (RGB565 & 0x07e0) >> 3; /* middle 6 bits -> 8-bit green */
        uint8_t b = (RGB565 & 0x001f) << 3; /* low 5 bits -> 8-bit blue */
        printf("%d %d %d\n", r, g, b);
    }
}
Compile with:
clang example.c
And run with:
./a.out > result.pnm
I have not tested it too extensively beyond your sample image, so you may want to make a test image with some reds, greens, blues and shades of grey to ensure that all my bit-twiddling is correct.
Original Answer
The easiest way to get your image back would be... to let ImageMagick do it.
So, take your C file and add a main() to it that simply writes the 304x98x2 bytes starting at &(example.pixel_data) to stdout (the full code is shown below).
Compile it with something like:
clang example.c -o program # or with GCC
gcc example.c -o program
Then run it, writing to a file for ImageMagick with:
./program > image.bin
And tell ImageMagick its size, type and where it is and what you want as a result:
magick -size 304x98 RGB565:image.bin result.png
I did a quick, not-too-thorough test of the following code and it worked fine for an image I generated with GIMP. Note it doesn't handle alpha/transparency but that could be added if necessary. Save it as program.c:
#include <unistd.h> /* for write() */
#include <stdint.h> /* for uint8_t */

/* tell compiler what those GIMP types are */
typedef int guint;
typedef uint8_t guint8;

<PASTE YOUR GIMP FILE HERE>

int main(){
    /* Work out how many bytes to write */
    int nbytes = example.width * example.height * 2;
    /* Write on stdout for redirection to a file - may need to reopen
       in binary mode if on Windows */
    write(1, &(example.pixel_data), nbytes);
}
If I run this with the file you provided via Google Drive I get:

Too few arguments to function 'int fclose(FILE*)'

Hello, I am a beginner in C for microprocessors. I want to read a ".bmp" file in order to apply line detection to it. I have declared a function to read the image. This error occurs when the compile button is pushed:
#include "esp_camera.h"
#include "Arduino.h"
#include "FS.h" // SD Card ESP32
#include "SD_MMC.h" // SD Card ESP32
#include "soc/soc.h" // Disable brownour problems
#include "soc/rtc_cntl_reg.h" // Disable brownour problems
#include "driver/rtc_io.h"
#include <EEPROM.h> // read and write from flash memory
#include <SPI.h>
void imageReader(const char *imgName,
int *height,
int *width,
int *bitdepth,
unsigned char *header,
unsigned char *_colortable,
unsigned char *buf
) // READ AN IMAGE
{
int i;
fs::FS &fs = SD_MMC; //
FILE *file;
file = fopen(imgName,"rb"); // read imgName file ( it is a picture in .bmp format )
if(!file){
Serial.printf("Unable to read image");
}
for(i=0 ; i<54 ; i++){
header[i]=getc(file);
}
*width = *(int * )& header[18]; // width information of the image
*height = *(int * )& header[22]; // height information of image
*bitdepth = *(int *)& header[28];
if(*bitdepth<=8){
fread(_colortable,sizeof(unsigned char),1024,file);
}
fread(buf,sizeof(unsigned char),( 1600 * 1200 ) ,file);
fclose();
}
It gives this error: too few arguments to function 'int fclose(FILE*)'.
The fclose() function needs to know which file to close. You need to tell it that by supplying "file" as an argument. You want to use fclose(file).
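In context, the fixed tail of imageReader() would look like this (with an early return added as well, since the original keeps reading from a NULL stream after fopen() fails):

FILE *file = fopen(imgName, "rb");
if (!file) {
    Serial.printf("Unable to read image");
    return;                 // don't keep using a NULL stream
}
/* ... header/colortable/pixel reads as before ... */
fclose(file);               // tell fclose() which stream to close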

Linux - segmentation fault only sometimes - how to debug

I have a Linux program that from time to time ends with a segmentation fault. The program runs periodically every hour, but the segmentation fault occurs only sometimes.
I have a problem debugging this, because if I run the program again with the same input, no error is reported and all is OK.
Is there a way to "report" in which part of the code the error occurred, or what caused the problem?
The usual way is to have the crashing program generate a corefile and analyze it after the crash. Make sure that:
the maximum corefile size is big enough (i.e. unlimited) by calling ulimit -c unlimited in the shell that starts the process;
the cwd is writable by the segfaulting process.
Then you can analyze the file with
gdb <exe> <corefile>
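For example, a typical post-mortem session looks like this (the gdb commands are standard; the program name is illustrative):

$ gdb ./myprogram core
(gdb) bt              # full backtrace at the moment of the crash
(gdb) frame 2         # select a particular frame
(gdb) info locals     # inspect local variables in that frame
(gdb) list            # show the surrounding source code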
Since your code is not crashing every time, you can use backtrace as well. Using this you can see the function call stack at the time of the crash. There are many examples available; in my projects I normally use the following code for backtracing.
/*
 * call reg_usr2 function from main
 * gcc -rdynamic myfile.c -o output
 */
#include <stdio.h>
#include <stdarg.h>
#include <signal.h>
#include <unistd.h>
#include <stdlib.h>
#include <execinfo.h>

#define FILE_NAME "/tmp/debug"
#define MODE 0xFFFF

void dbgprint(int flag, char* fmt, ...)
{
    if(flag & MODE) {
        char buf[100];
        va_list vlist;
        FILE *fp = fopen(FILE_NAME, "a");
        if(fp == NULL)   /* nothing we can do if the log cannot be opened */
            return;
        va_start(vlist, fmt);
        vsnprintf(buf, sizeof(buf), fmt, vlist);
        va_end(vlist);
        fprintf(fp, "[%x]->%s\n", flag, buf);
        fclose(fp);
    }
}

/** Here is the code to print the backtrace **/
void print_stack_trace()
{
    void *array[20];
    size_t size;
    char **strings;
    size_t i;

    size = backtrace(array, 20);
    strings = backtrace_symbols(array, size);
    dbgprint(0xFFFF, "Obtained %zd stack frames.", size);
    dbgprint(0xFFFF, "-------------------------");
    dbgprint(0xFFFF, "---------Backtrace-------");
    for (i = 0; i < size; i++)
        dbgprint(0xFFFF, "%s", strings[i]);
    dbgprint(0xFFFF, "-------------------------");
    free(strings);
}

void sig_handler(int signo)
{
    if (signo == SIGUSR2) {
        dbgprint(0xFFFF, "received SIGUSR2");
        dbgprint(0xFFFF, "----------------");
    }
    print_stack_trace();
    exit(0);
}

void reg_usr2()
{
    if (signal(SIGUSR2, sig_handler) == SIG_ERR)
        printf("\ncan't catch SIGUSR2\n");
}

int main()
{
    reg_usr2(); //should be the first line of main after variable declarations
    //Code.....
    return 0;
}
You can generate a backtrace by catching the SIGSEGV signal and see where your application made the invalid access;
see https://stackoverflow.com/a/77336/4490542
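A minimal sketch of that approach (my own example, not from the linked answer): it uses backtrace_symbols_fd(), which writes directly to a file descriptor and is therefore safer inside a signal handler than stdio; compile with -rdynamic to get symbol names.

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void segv_handler(int signo) {
    void *frames[20];
    int n = backtrace(frames, 20);
    // Write the raw stack trace straight to stderr (fd 2);
    // stdio must be avoided inside a signal handler.
    backtrace_symbols_fd(frames, n, 2);
    _exit(1);
}

int main(void) {
    signal(SIGSEGV, segv_handler);
    // ... rest of the program ...
    return 0;
}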
But there is an easier solution: try running your application with catchsegv
catchsegv './program args'
and a better alternative, valgrind:
valgrind --tool=none ./program args

How can a Unix program display output on screen even when stdout and stderr are redirected?

I was running a program (valgrind, actually) on my Ubuntu machine, and had redirected both stdout and stderr to different files. I was surprised to see a short message appear on the screen -- how is that possible? How could I do that myself in a C++ program?
EDIT: Here's the command I used, and the output:
$ valgrind ./myprogram > val.out 2> val.err
*** stack smashing detected ***: ./myprogram terminated
EDIT2: Playing with it a little more, it turns out that myprogram, not valgrind, is causing the message to be printed; as answered below, it looks like GCC's stack smashing detection code prints to /dev/tty.
It is not written by valgrind but rather by glibc, and your ./myprogram is using glibc:
#define _PATH_TTY "/dev/tty"

/* Open a descriptor for /dev/tty unless the user explicitly
   requests errors on standard error.  */
const char *on_2 = __libc_secure_getenv ("LIBC_FATAL_STDERR_");
if (on_2 == NULL || *on_2 == '\0')
    fd = open_not_cancel_2 (_PATH_TTY, O_RDWR | O_NOCTTY | O_NDELAY);
if (fd == -1)
    fd = STDERR_FILENO;
...
written = WRITEV_FOR_FATAL (fd, iov, nlist, total);
Below are some relevant parts of glibc:
void
__attribute__ ((noreturn))
__stack_chk_fail (void)
{
    __fortify_fail ("stack smashing detected");
}

void
__attribute__ ((noreturn))
__fortify_fail (msg)
     const char *msg;
{
    /* The loop is added only to keep gcc happy.  */
    while (1)
        __libc_message (2, "*** %s ***: %s terminated\n",
                        msg, __libc_argv[0] ?: "<unknown>");
}

/* Abort with an error message.  */
void
__libc_message (int do_abort, const char *fmt, ...)
{
    va_list ap;
    int fd = -1;

    va_start (ap, fmt);

#ifdef FATAL_PREPARE
    FATAL_PREPARE;
#endif

    /* Open a descriptor for /dev/tty unless the user explicitly
       requests errors on standard error.  */
    const char *on_2 = __libc_secure_getenv ("LIBC_FATAL_STDERR_");
    if (on_2 == NULL || *on_2 == '\0')
        fd = open_not_cancel_2 (_PATH_TTY, O_RDWR | O_NOCTTY | O_NDELAY);
    if (fd == -1)
        fd = STDERR_FILENO;
    ...
    written = WRITEV_FOR_FATAL (fd, iov, nlist, total);
The message is most probably from GCC's stack protector feature or from glibc itself. If it's from GCC, it is output using the fail() function, which directly opens /dev/tty:
fd = open (_PATH_TTY, O_WRONLY);
_PATH_TTY is not really standard, but the Single UNIX Specification actually demands that /dev/tty exists.
Here is some sample code that does exactly what was asked (thanks to earlier answers pointing me in the right direction). Both are compiled with g++, and will print a message to the screen even when stdout and stderr are redirected.
For Linux (Ubuntu 14):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
int main( int, char *[]) {
    printf("This goes to stdout\n");
    fprintf(stderr, "This goes to stderr\n");

    int ttyfd = open("/dev/tty", O_RDWR);
    const char *msg = "This goes to screen\n";
    write(ttyfd, msg, strlen(msg));
}
For Windows 7, using MinGW:
#include <stdio.h>
#include <fcntl.h>
#include <string.h>
#include <conio.h>
void writeConsole( const char *s) {
    while( *s) {
        putch(*(s++));
    }
}

int main( int, char *[]) {
    printf("This goes to stdout\n");
    fprintf(stderr, "This goes to stderr\n");
    writeConsole( "This goes to screen\n");
}

Unable to get correct output from AES-128-GCM

The following test code should theoretically give me the result from the NIST test suite, 58e2fccefa7e3061367f1d57a4e7455a; however, a hexdump of the output yields 9eeaed13b5f591104e2cda197fb99eeaed13b5f591104e2cda197fb9 instead. Why?
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <fstream>
#include <polarssl/md.h>
#include <polarssl/entropy.h>
#include <polarssl/ctr_drbg.h>
#include <polarssl/cipher.h>

int main(int argc, char** argv) {
    const cipher_info_t *cipher_info;
    cipher_info = cipher_info_from_string( "AES-128-GCM" );
    cipher_context_t cipher_ctx;
    cipher_init_ctx(&cipher_ctx, cipher_info);

    std::cout << "KEYLEN" << std::endl;
    std::cout << cipher_info->key_length << std::endl;
    std::cout << "IVLEN" << std::endl;
    std::cout << cipher_info->iv_size << std::endl;

    unsigned char key[cipher_info->key_length/8];
    unsigned char iv[cipher_info->iv_size];
    memset(key, 0x00, cipher_info->key_length/8);
    memset(iv, 0x00, cipher_info->iv_size);

    unsigned char iBuffer[10];
    unsigned char oBuffer[1024];
    size_t ilen, olen;

    std::ofstream oFile2;
    oFile2.open("testOut", std::ofstream::out | std::ofstream::trunc | std::ofstream::binary);

    cipher_setkey(&cipher_ctx, key, cipher_info->key_length, POLARSSL_ENCRYPT);
    cipher_set_iv(&cipher_ctx, iv, 16);
    cipher_reset(&cipher_ctx);
    cipher_update(&cipher_ctx, iBuffer, sizeof(iBuffer), oBuffer, &olen);
    oFile2 << oBuffer;
    cipher_finish(&cipher_ctx, oBuffer, &olen);
    oFile2 << oBuffer;
    oFile2.close();
}
This is the NIST test:

Variable          Value
K                 00000000000000000000000000000000
P                 (empty)
IV                000000000000000000000000
H                 66e94bd4ef8a2c3b884cfa59ca342b2e
Y0                00000000000000000000000000000001
E(K, Y0)          58e2fccefa7e3061367f1d57a4e7455a
len(A)||len(C)    00000000000000000000000000000000
GHASH(H, A, C)    00000000000000000000000000000000
C                 (empty)
T                 58e2fccefa7e3061367f1d57a4e7455a
(test case No. 1 http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf)
I can see two immediate mistakes:
the plaintext size is set to 10 bytes instead of no bytes at all - this makes the ciphertext too large and the authentication tag incorrect;
the IV should be 12 zero bytes, not 16 - 12 is the default IV size for GCM mode - this makes the ciphertext (if any) and the authentication tag incorrect.
These issues are in the following lines:
unsigned char iBuffer[10];
...
cipher_update( &cipher_ctx, iBuffer, sizeof(iBuffer), oBuffer, &olen );
and
cipher_set_iv( &cipher_ctx, iv, 16 );
Furthermore, it seems like the API requires you to retrieve the tag separately using the ...write_tag... method. Currently you are only seeing the CTR ciphertext, not the authentication tag.
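Putting the fixes together, and assuming your PolarSSL version provides cipher_write_tag() (check your headers; the exact tag API varies between releases), the corrected sequence for test case 1 would look roughly like this, reusing cipher_ctx, oBuffer and olen from the question:

unsigned char key[16] = { 0 };   /* K: 128 zero bits */
unsigned char iv[12]  = { 0 };   /* GCM's default IV size is 12 bytes */
unsigned char tag[16];

cipher_setkey( &cipher_ctx, key, 128, POLARSSL_ENCRYPT );
cipher_set_iv( &cipher_ctx, iv, sizeof(iv) );  /* 12, not 16 */
cipher_reset( &cipher_ctx );
/* Test case 1 encrypts an empty plaintext, so there is no cipher_update() call. */
cipher_finish( &cipher_ctx, oBuffer, &olen );       /* olen stays 0 */
cipher_write_tag( &cipher_ctx, tag, sizeof(tag) );  /* assumed API, see above */

/* Dump the tag as hex instead of streaming it as a C string; */
/* the expected value is 58e2fccefa7e3061367f1d57a4e7455a.    */
for( size_t i = 0; i < sizeof(tag); i++ )
    printf( "%02x", tag[i] );
printf( "\n" );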