Visual C++ parallel port control

I wrote this code in Visual C++ to control LEDs through the parallel port:
// InpoutTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "stdio.h"
#include "string.h"
#include "stdlib.h"
#include <conio.h>

short _stdcall Inp32(short PortAddress);
void _stdcall Out32(short PortAddress, short data);

int main(int argc, char* argv[])
{
    Out32(888, 255);
    system("pause");
    Out32(888, 0);
    return 0;
}
Now, what I thought was that the line 'Out32(888, 255);' (888 = 0x378, the traditional LPT1 base address) would write 1 to all eight data bits, so all LEDs connected from D0 to D7 would turn on; but nothing happened: the LEDs that were on before execution stayed on, and likewise the ones that were off stayed off.
The same was true of 'Out32(888, 0);': no LEDs were turned off.
What is wrong in the above code? I used 'Inpoutx64.dll' since I'm working on a 64-bit OS (Windows 8). I also added 'Inpoutx64.lib' under Project Properties > Linker > Input > Additional Dependencies.
I've also copied 'inpoutx64.dll' to Windows\System32.

Make sure you have inpoutx64.dll in the same directory as your generated .exe file, and that you have run the InstallDriver.exe program included with InpOutx64, allowing UAC elevation, so that the required system driver gets installed.
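If it is unclear whether the DLL is even being found, a quick diagnostic (a sketch, not part of the InpOut distribution) is to load it dynamically and resolve Out32 by hand; if either step fails, the problem is the DLL location or the driver installation rather than your port writes:

#include <windows.h>
#include <stdio.h>

typedef void (__stdcall *Out32Fn)(short, short);

int main()
{
    // Fails if inpoutx64.dll is not next to the .exe or on the DLL search path
    HMODULE dll = LoadLibrary(TEXT("inpoutx64.dll"));
    if (!dll) { printf("LoadLibrary failed: %lu\n", GetLastError()); return 1; }

    // Fails if the DLL does not export Out32 as expected
    Out32Fn out32 = (Out32Fn)GetProcAddress(dll, "Out32");
    if (!out32) { printf("GetProcAddress failed: %lu\n", GetLastError()); return 1; }

    out32(888, 255); // 888 = 0x378, the LPT1 data register
    FreeLibrary(dll);
    return 0;
}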

std::cin.read() fails to read stream

I'm implementing a native host for a browser extension. I designed my implementation around std::cin instead of C-style getchar().
The issue here is that std::cin is not opened in binary mode, which matters on Windows-based hosts because Chrome doesn't deal well with Windows-style \r\n; hence I have to read in binary mode.
To read in binary mode, I have to use _setmode(_fileno(stdin), _O_BINARY);
My IDE can't find a definition for _fileno, and I found that the workaround is to use the following macro:
#if !defined(_fileno)
#define _fileno(__F) ((__F)->_file)
#endif
However, I'm not confident about this macro. I believe something is wrong, but I'm using the latest MinGW compiler and am not sure why _fileno isn't defined.
Update: it seems the function is hidden behind __STRICT_ANSI__, and I have no idea how to disable that.
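Update 2 (a possible workaround, untested; file names below are placeholders): GCC defines __STRICT_ANSI__ for the strict -std=c++NN modes but not for the GNU dialects, so compiling with a GNU dialect, e.g.

g++ -std=gnu++17 main.cpp -o native_host

should leave _fileno visible without needing the macro.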
Anyway, the program compiles fine and the browser starts it. When I send a message from the browser, the application is able to read the length of the message, but when it tries to read the content, the std::cin.read() call inserts nothing into the buffer vector. The message is also not null-terminated, but I don't think that's causing the issue.
I also made an attempt to send a dummy message to the browser without reading first, but that seems to freeze the browser.
#include <iostream>
#include <cstdio>
#include <cstdint>
#include <string>
#include <vector>
#ifdef __WIN32
#include <fcntl.h>
#include <io.h>
#endif

#if !defined(_fileno)
#define _fileno(__F) ((__F)->_file)
#endif

enum class Platforms {
    macOS = 1,
    Windows = 2,
    Linux = 3
};

Platforms platform;

#ifdef __APPLE__
constexpr Platforms BuildOS = Platforms::macOS;
#elif __linux__
constexpr Platforms BuildOS = Platforms::Linux;
#elif __WIN32
constexpr Platforms BuildOS = Platforms::Windows;
#endif

void sendMessage(std::string message) {
    auto *data = message.data();
    auto size = uint32_t(message.size());
    std::cout.write(reinterpret_cast<char *>(&size), 4);
    std::cout.write(data, size);
    std::cout.flush();
}

int main() {
    if constexpr(BuildOS == Platforms::Windows) {
        // Chrome doesn't deal well with Windows style \r\n
        _setmode(_fileno(stdin), _O_BINARY);
        _setmode(_fileno(stdout), _O_BINARY);
    }
    while(true) {
        std::uint32_t messageLength;
        // First four bytes contain the message length
        std::cin.read(reinterpret_cast<char*>(&messageLength), 4);
        if (std::cin.eof())
        {
            break;
        }
        std::vector<char> buffer;
        // Allocate ahead
        buffer.reserve(std::size_t(messageLength) + 1);
        std::cin.read(&buffer[0], messageLength);
        std::string message(buffer.data(), buffer.size());
        sendMessage("{type: 'Hello World'}");
    }
}
Solution:
buffer.reserve(std::size_t(messageLength) + 1);
should be
buffer.resize(std::size_t(messageLength) + 1);
or we can presize the buffer during construction with
std::vector<char> buffer(messageLength + 1);
Problem Explanation:
buffer.reserve(std::size_t(messageLength) + 1);
reserves capacity but doesn't change the size of the vector, so technically
std::cin.read(&buffer[0], messageLength);
is illegal: it writes into elements that don't exist yet, which is undefined behavior. And at
std::string message(buffer.data(), buffer.size());
buffer.size() is still 0, so message is empty.
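A tiny standalone demonstration of the difference (the names here are just for the demo):

#include <iostream>
#include <vector>

int main()
{
    std::vector<char> v;
    v.reserve(10);                 // capacity() >= 10, but size() is still 0
    std::cout << v.size() << '\n'; // prints 0
    v.resize(10);                  // size() == 10; elements now exist
    std::cout << v.size() << '\n'; // prints 10
}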

fopen returning NULL in gdb

I'm trying to solve a binary exploitation problem from picoCTF, but I'm having trouble with gdb.
Here is the source code of the problem (I've commented some parts to help me).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <wchar.h>
#include <locale.h>

#define BUF_SIZE 32
#define FLAG_LEN 64
#define KEY_LEN 4

void display_flag() {
    char buf[FLAG_LEN];
    FILE *f = fopen("flag.txt", "r");
    if (f == NULL) {
        printf("'flag.txt' missing in the current directory!\n");
        exit(0);
    }
    fgets(buf, FLAG_LEN, f);
    puts(buf);
    fflush(stdout);
}

// loads the value into key, a global variable, i.e. not on the stack
char key[KEY_LEN];
void read_canary() {
    FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt", "r");
    if (f == NULL) {
        printf("[ERROR]: Trying to Read Canary\n");
        exit(0);
    }
    fread(key, sizeof(char), KEY_LEN, f);
    fclose(f);
}

void vuln() {
    char canary[KEY_LEN];
    char buf[BUF_SIZE];
    char user_len[BUF_SIZE];
    int count;
    int x = 0;
    memcpy(canary, key, KEY_LEN); // copies "key" to canary, an array on the stack
    printf("Please enter the length of the entry:\n> ");
    while (x < BUF_SIZE) {
        read(0, user_len + x, 1);
        if (user_len[x] == '\n') break;
        x++;
    }
    sscanf(user_len, "%d", &count); // parses the user-entered length into count
    printf("Input> ");
    read(0, buf, count); // reads count bytes into buf from stdin
    // compares canary (variable on stack) to key
    // if overwriting, need to get the value of key and maintain it; I assume it's constant
    if (memcmp(canary, key, KEY_LEN)) {
        printf("*** Stack Smashing Detected *** : Canary Value Corrupt!\n");
        exit(-1);
    }
    printf("Ok... Now Where's the Flag?\n");
    fflush(stdout);
}

int main(int argc, char **argv) {
    setvbuf(stdout, NULL, _IONBF, 0);
    int i;
    gid_t gid = getegid();
    setresgid(gid, gid, gid);
    read_canary();
    vuln();
    return 0;
}
When I run this normally, with ./vuln, I get normal execution. But when I open it in gdb with gdb ./vuln and then run it with run, I get the [ERROR]: Trying to Read Canary message. Is this something that is intended to make the problem challenging? I don't want the solution, I just don't know if this is intended behaviour or a bug. Thanks
I don't want the solution, I just don't know if this is intended behaviour or a bug.
I am not sure whether you'll consider it intended behavior, but it's definitely not a bug.
Your ./vuln is a set-gid program. As such, it runs as group canary_3 when run outside of GDB, but as your own group when run under GDB (for obvious security reasons).
We can assume that the canary_3 group has read permission on canary.txt, but your group doesn't.
P.S. If you printed strerror(errno) (as the comments suggested), the resulting "Permission denied" would have made the failure obvious.
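For illustration, a sketch of that diagnostic (not the original problem code) reduced to a standalone program:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt", "r");
    if (f == NULL) {
        /* Under gdb this prints "...: Permission denied" instead of
           the uninformative fixed message. */
        printf("[ERROR]: Trying to Read Canary: %s\n", strerror(errno));
        exit(1);
    }
    fclose(f);
    return 0;
}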

I want to receive multiple data from arduino to raspberry pi using I2C

Thank you to whoever is kind enough to look into this question.
I want to receive multiple data from arduino to raspberry pi using I2C.
I can obtain one value from the Arduino, but once I move to more than one value, it fails to do so.
I have tried multiple methods so far, and I found the method below works best for obtaining data from the Arduino.
My previous attempts at obtaining data from the Arduino are as follows:
I want to read from Arduino using I2C using Raspberry Pi
Raspberry Pi's terminal response has weird font that cannot be recognized
Both of these are solved by now.
I got massive help from the link below:
https://area-51.blog/2014/02/15/connecting-an-arduino-to-a-raspberry-pi-using-i2c/
Arduino Code
#include <Wire.h>
#define echoPin 7
#define trigPin 8

int number = 0;
long duration;
long distance;

void setup()
{
    // Join I2C bus as slave with address 8
    Wire.begin(8);
    // Register SendData to be called when the master requests data
    Wire.onRequest(SendData);
    // Set up pins as output and input to operate the ultrasonic sensor
    Serial.begin(9600);
    pinMode(echoPin, INPUT);
    pinMode(trigPin, OUTPUT);
}

void loop()
{
    digitalWrite(trigPin, LOW);
    delayMicroseconds(2);
    digitalWrite(trigPin, HIGH);
    delayMicroseconds(2);
    digitalWrite(trigPin, LOW);
    duration = pulseIn(echoPin, HIGH);
    distance = duration / 58.2;
    Serial.print(distance);
    Serial.println(" cm");
}

void SendData()
{
    Wire.write(distance);
    Wire.write("Why No Work?");
    Wire.write(distance);
}
C++ Code
// Declare and include the necessary header files
#include <iostream>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <fcntl.h>

// Define the address of the slave
#define ADDRESS 0x08

// Eliminate the use of std:: prefixes
using namespace std;

static const char *devName = "/dev/i2c-1";

int main(int argc, char **argv)
{
    // Check to see if C++ works
    cout << "Hello, World!\n";
    cout << "I2C: Connecting" << endl;
    int file;
    if ((file = open(devName, O_RDWR)) < 0)
    {
        fprintf(stderr, "I2C: Failed to access");
        exit(1);
    }
    if (ioctl(file, I2C_SLAVE, ADDRESS) < 0)
    {
        cout << "Failed to Access" << endl;
    }
    char buf[0];
    char dd;
    for (int i = 0; i < 100; i++)
    {
        read(file, buf, 3);
        float distance = (int) buf[0];
        dd = buf[1];
        float dist = (int) buf[2];
        cout << distance << endl;
        usleep(10000);
        cout << "doh" << endl;
        cout << dd << endl;
        cout << dist << endl;
    }
    return 0;
}
What I would expect from the C++ code is as follows:
15
doh
Why No Work?
15
But I get
15
doh
(unrecognizable characters)
255
Wire.write(distance);
wants to write a long onto the I2C bus. On an Arduino this is 32 bits, 4 bytes, of data. I'm not sure exactly what Wire.write does, because the documentation I can find is substandard to the point of being garbage, but it looks like it sends exactly 1 of the 4 bytes you wanted to send. In order to send more than one byte, it looks like you need to use the array version: Wire.write((const uint8_t *)&distance, sizeof(distance)); but even this may not be sufficient. I'll get back to that later.
Wire.write("Why No Work?");
writes a null-terminated string (specifically a const char[13]) onto the I2C bus. I don't know Arduino well enough to know whether this also sends the terminating null.
so
Wire.write(distance);
Wire.write("Why No Work?");
Wire.write(distance);
needed to write at least 4 + 12 + 4 bytes onto the I2C bus, and probably only wrote 1 + 12 + 1.
On the Pi side,
read(file,buf, 3);
reads out only 3 bytes. That isn't enough to get the whole of distance, let alone the array of characters and the second write of distance. You need to read all of the data you wrote: at least 20 bytes.
In addition,
char buf[0];
defines an array of zero length. There isn't much you can do with it, as there is no space to store anything. It cannot hold 3 characters, let alone the 20 or 21 necessary. The read of 3 bytes wrote into invalid memory, and the program can no longer be counted on for sane results.
This means that at best
float distance= (int) buf[0];
dd= buf[1];
float dist=(int) buf[2];
got only one of the four bytes of distance, and it's dumb luck that the result was the same as expected. dd got exactly one character rather than the whole string, and that character turns out to be nonsense because of one of the preceding mistakes. dist is similarly garbage.
To successfully move data from one machine to another, you need to establish a communication protocol. You can't just write a long onto a wire: long doesn't have the same size on all platforms, nor does it always have the same encoding. You have to make absolutely certain that both sides agree on how the long is to be written (size and byte order) and read.
Exactly how you do this is up to you, but here are some pointers and a search term, serialization, to assist you in further research.
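To make that concrete, here is a minimal sketch of one possible convention, assuming both sides agree to encode a reading as exactly 4 bytes, little-endian. The pack_u32/unpack_u32 helper names are made up for this example; on the Arduino you would Wire.write() the 4 packed bytes, and on the Pi you would read() exactly 4 bytes into a real buffer and unpack them:

#include <cstdint>
#include <cstdio>

// Encode a 32-bit value as 4 bytes, little-endian (hypothetical helper).
void pack_u32(uint32_t v, unsigned char out[4])
{
    out[0] = v & 0xFF;
    out[1] = (v >> 8) & 0xFF;
    out[2] = (v >> 16) & 0xFF;
    out[3] = (v >> 24) & 0xFF;
}

// Decode 4 little-endian bytes back into a 32-bit value.
uint32_t unpack_u32(const unsigned char in[4])
{
    return (uint32_t)in[0] | ((uint32_t)in[1] << 8) |
           ((uint32_t)in[2] << 16) | ((uint32_t)in[3] << 24);
}

int main()
{
    unsigned char wire[4];            // stands in for the I2C transfer
    pack_u32(1500, wire);             // sender side: put 4 bytes on the bus
    printf("%u\n", unpack_u32(wire)); // receiver side: prints 1500
    return 0;
}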

loggedfs on mac with osxfuse stuck

I would like to log every syscall on a specified directory, and I've found this repository: https://github.com/rflament/loggedfs
It creates a virtual filesystem with FUSE and logs everything in it, just like I want.
I tried to port it to Mac, but it uses a "trick" that doesn't work on OS X: the lstat call gets stuck for 10 seconds and then crashes.
I would like to understand why.
This is the main part of my code:
// g++ -Wall main.cpp `pkg-config fuse --cflags --libs` -o hello
#define FUSE_USE_VERSION 26

#include <fuse.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static char *path;
static int savefd;

static int getattr(const char *path, struct stat *stbuf)
{
    int res;
    char rPath[1024];
    strcpy(rPath, ".");
    strcat(rPath, path);
    res = lstat(rPath, stbuf); // Mac stuck here
    return (res == -1 ? -errno : 0);
}

static void* loggedFS_init(struct fuse_conn_info* info)
{
    fchdir(savefd);
    close(savefd);
    return NULL;
}

int main(int argc, char *argv[])
{
    struct fuse_operations oper;
    bzero(&oper, sizeof(fuse_operations));
    oper.init = loggedFS_init;
    oper.getattr = getattr;
    path = strdup(argv[argc - 1]);
    printf("chdir to %s\n", path);
    chdir(path);
    savefd = open(".", 0);
    return fuse_main(argc, argv, &oper, NULL);
}
I had a very close look at LoggedFS and tested it for POSIX compliance using pjdfstest, which turned up 3 issues (or groups of issues). I ended up re-implementing it in Python, fully POSIX compliant. I have not tested it on OS X yet, so I'd be happy to receive some feedback ;)
The "trick" you are mentioning could be the root cause of your issue, although I am not entirely sure. It causes a fundamental problem by adding another character to the path, which leads to issues when the length of path gets close to PATH_MAX. libfuse already passes paths with a leading / into FUSE operations. The additional . plus the "misleading" / (root of the mounted filesystem, not the "global" root folder) are two characters "too many", effectively reducing the maximum allowed path length to PATH_MAX minus 2. I explored options of altering PATH_MAX and informing user land software about a smaller PATH_MAX, which turned out to be impossible.
There is a way around, however. Do not close the file descriptor savefd in the init routine. Keep it open and instead close it in the destroy routine, which will be called by FUSE when the filesystem is unmounted. You can actually use savefd for specifying paths relative to it. You can then use fstatat (Linux, OS X / BSD) instead of lstat. Its prototype looks like this:
int fstatat(int dirfd, const char *pathname, struct stat *buf, int flags);
You have to pass savefd into dirfd and remove the leading / from the content of path before passing it into pathname.
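Applied to the getattr above, that might look like this (a sketch of the described approach, untested on OS X; fstatat and AT_SYMLINK_NOFOLLOW come from <sys/stat.h> and <fcntl.h>, and AT_SYMLINK_NOFOLLOW keeps lstat's no-follow semantics):

static int getattr(const char *path, struct stat *stbuf)
{
    // libfuse hands us "/foo/bar"; skip the leading '/' so the lookup is
    // relative to savefd. The mount root arrives as "/", and an empty
    // string is not a valid path, so map it to "." instead.
    const char *rel = (path[1] == '\0') ? "." : path + 1;
    int res = fstatat(savefd, rel, stbuf, AT_SYMLINK_NOFOLLOW);
    return (res == -1 ? -errno : 0);
}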

Need example of jpeglib-turbo that works in VS2013 x64

I'm trying to learn how to use the jpeg-turbo library, and I'm having a devil of a time getting started.
The example.c example in the doc folder, and every single example I find on the web, crashes in VS2013 when I try to read a .jpg file.
They compile fine, but when I run them they crash with an access violation error.
What I really need is a tiny working (beginner-friendly) example that is known to run properly in VS2013 x64, including the main(){} code block.
And is there anything special in the VS project properties that I might need to set, which could be causing the crashing?
I'm pulling my hair out just trying to get one simple example working.
Thanks for the help.
Edit: Here is a very small example.
I've also tried to get jpeglib to run with and without using Boost/GIL.
But it always crashes when loading the image: exception at 0x00000000774AE4B4 (ntdll.dll).
#include <stdio.h>
#include <assert.h>
#include <jpeglib.h>
#pragma warning(disable: 4996)

int main(int argc, char* argv[])
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    JSAMPARRAY buffer;
    int row_stride;

    // initialize error handling
    cinfo.err = jpeg_std_error(&jerr);

    FILE* infile;
    infile = fopen("source.jpg", "rb");
    assert(infile != NULL);

    // initialize the decompression
    jpeg_create_decompress(&cinfo);
    // specify the input
    jpeg_stdio_src(&cinfo, infile);
    // read headers
    (void)jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo); // <---- This guy seems to be the culprit
    printf("width: %d, height: %d\n", cinfo.output_width, cinfo.output_height);
    row_stride = cinfo.output_width * cinfo.output_components;
    buffer = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr)&cinfo, JPOOL_IMAGE, row_stride, 1);
    JSAMPLE firstRed, firstGreen, firstBlue; // first pixel of each row, recycled
    while (cinfo.output_scanline < cinfo.output_height)
    {
        (void)jpeg_read_scanlines(&cinfo, buffer, 1);
        firstRed = buffer[0][0];
        firstBlue = buffer[0][1];
        firstGreen = buffer[0][2];
        printf("R: %d, G: %d, B: %d\n", firstRed, firstBlue, firstGreen);
    }
    jpeg_finish_decompress(&cinfo);
    return 0;
}
I found the problem.
In my VS project's Linker > Input > Additional Dependencies, I changed it to use turbojpeg-static.lib (or jpeg-static.lib when I'm using the non-turbo libraries). The turbojpeg.lib or jpeg.lib crashes for some reason when reading the image. FYI, I am using the libjpeg-turbo-1.4.2-vc64.exe version with VS2013, and this is how I got it to work.
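As an aside, the same linker input can also be requested from source with an MSVC-specific pragma (just a convenience; the library name assumes the libjpeg-turbo 1.4.2 VC64 layout described above):

// MSVC only: equivalent to adding the library under
// Linker > Input > Additional Dependencies
#pragma comment(lib, "turbojpeg-static.lib")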
One more very important thing that I learned and would like to share.
When writing to a new .jpg image, if the new image size is different from the source image, it will typically crash, especially if the new size is larger than the source. I'm guessing this happens because it takes much longer to re-sample the color data to a different size, so this type of action might require its own thread to prevent crashing. I wasted a lot of time chasing code errors and compiler settings due to this one, so watch out for it.