I have multiple executable files. I want to write a program in C++ which will extract them to a specific location, like unpacking a zip archive, but I want it to be an executable and to work on any clean install of Windows (Vista or later). Is there a way to do it?
Edit: I know how to make a self-extracting zip file; I want to learn how to do it myself.
Most archivers have an option to create self-extracting executables, so you don't need to write your own program.
EDIT:
If you really want to do all of it on your own, you should use Windows resource files. (Their usage is described in another question.) I see two possible ways to do this.
Add every single file to your program as a resource. The program should remember the name of each file and save them to the target directory one by one.
Pack all the files into one. (You need to write a second program to pack the files.) An example format for the package could be as follows:
| 4 bytes: length of 1st filename | 1st filename | 4 bytes: size of 1st file | 1st file | 4 bytes: length of 2nd filename | ...
A package prepared in this way can be put into the program's resources. This is a more difficult method, but it is also more flexible: you can modify the list of files without changing anything in the program.
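A minimal sketch of the extraction side, combining both ideas: the packed blob is embedded as a single RCDATA resource and parsed using the layout above. IDR_PACKAGE and the little-endian 4-byte fields are just illustrative assumptions, not a fixed convention:

#include <windows.h>
#include <cstdint>
#include <fstream>
#include <string>

// Hypothetical resource ID of the packed blob, added to the .rc file as e.g.:
//   IDR_PACKAGE RCDATA "package.bin"
#define IDR_PACKAGE 101

// Read a 4-byte little-endian unsigned integer.
static uint32_t ReadU32(const unsigned char* p) {
    return uint32_t(p[0]) | (uint32_t(p[1]) << 8) |
           (uint32_t(p[2]) << 16) | (uint32_t(p[3]) << 24);
}

bool ExtractPackage(const std::string& outDir) {
    HRSRC res = FindResource(nullptr, MAKEINTRESOURCE(IDR_PACKAGE), RT_RCDATA);
    if (!res) return false;
    HGLOBAL handle = LoadResource(nullptr, res);
    if (!handle) return false;

    const unsigned char* p = static_cast<const unsigned char*>(LockResource(handle));
    const unsigned char* end = p + SizeofResource(nullptr, res);

    // Walk the layout: | name length | name | file size | file data | ...
    while (p + 4 <= end) {
        uint32_t nameLen = ReadU32(p);
        p += 4;
        std::string name(reinterpret_cast<const char*>(p), nameLen);
        p += nameLen;
        uint32_t fileSize = ReadU32(p);
        p += 4;

        std::ofstream out(outDir + "\\" + name, std::ios::binary);
        out.write(reinterpret_cast<const char*>(p), fileSize);
        p += fileSize;
    }
    return true;
}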
I'm trying to develop a file diff format for multiple files nested recursively in folders. Consider a source directory containing patched files and a destination directory containing original files. The goal is to write a size-minimal diff file which expresses the difference between all files in the source and destination directories and which can be applied to the original files in order to transform them into the patched files.
For this purpose I found the dtl library. Which algorithm or feature of the library should I use to write a file diff to the disk which I can then later read back and apply in order to patch the file? Any example code for this? I tried writing the result of the shortest edit script (SES) to the disk but I realized that I needed to specify the character and operation for every single byte. This of course makes the output file bigger than the entire comparison file, making this diff format entirely redundant since storing the entire target file instead would've saved more storage.
As another reference, this is very similar to how version control systems like git or svn operate but I don't want to use those since I'm mainly dealing with binary files and the simple requirement of creating and applying patches.
After doing some more searching, I found the HDiffPatch project.
It apparently works fine, but it seems to take a long time on bigger folder comparisons:
diff usage: hdiffz [options] oldPath newPath outDiffFile
patch usage: hpatchz [options] oldPath diffFile outNewPath
EDIT:
Another good option is open-vcdiff, but it only supports individual files.
Use HDiffPatch: you can run hdiffz with "-s-48" for more speed, or try "-s-32", "-s-1k", "-s-128k", etc.
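For example, on a pair of folders (the paths are just placeholders):
hdiffz -s-48 oldFolder newFolder out.hdiff
hpatchz oldFolder out.hdiff newFolderRestored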
I've been scouring the web for hours looking for an approach to solving this problem, and I just can't find one. Hopefully someone can fast-track me. I'd like to achieve the following behaviour:
When ember s is running and a file with a certain extension is changed, I'd like to analyze the contents of that file and write to several other files in the same directory.
To give a specific example, let's assume I have a file called app/dashboard/dashboard.ember. dashboard.ember consists of 3 concatenated files: app/dashboard/controller.js, .../route.js, and .../template.hbs, with a reasonable delimiter between the files. When dashboard.ember is saved, I'd like to call a function (inside an addon, I assume) that reads the file, splits it at the delimiter, and writes the corresponding split files. ember-cli should then pick up the changed source (.js, .hbs, etc.) files that it knows how to handle, ignoring the .ember file.
I could write this as a standalone application, of course, but I feel like it should be integrated with the ember-cli build environment; I just can't figure out what concoction of hooks and tools I should use to achieve this.
I'm doing some programming problems from a previous year's competition, and the problem text contains only one test case, which is simple enough that I can just retype it when testing.
Now, I also have a folder with a bunch of .in and .out files, in the format '01.in, 01.out, 02.in, 02.out', etc.
Is there a way to somehow take one of those .in files and automatically use all of its lines as input, without making changes inside my program, but rather by doing it directly from the command line?
Thanks
Assuming Linux:
cat *.in | yourprogram
On Windows you'd use type instead of cat.
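A single test case can also be fed with plain input redirection, which works in both shells:
yourprogram < 01.in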
I assume your program already takes in and processes the arguments (argv[]) passed to it. If this is the case, one way could be to write a simple wrapper program (in Python, for example) which opens the required .in files, reads the lines in them, and invokes your C++ program, passing the required lines as input to it.
Then you can execute this Python program or modify it as required.
I have a text file (>50k lines) of ASCII numbers, with string identifiers, that can be thought of as a collection of data vectors. Based on user input, the application only needs one of these data vectors at runtime.
As far as I can see, I have 3 options for getting the information from this text file:
Keep it as a text file and extract the required vector at run-time. I believe the downside is that you can't have a relative path in the code, so the user would have to point to the file's correct location (?). Alternatively, get the configure script to inject the absolute path as a macro.
Convert it to a static unsigned char array using xxd (as explained here) and then include the resulting file. The downside is that a 5MB file turns into a 25MB include file. Am I correct in thinking that this 25MB is loaded into memory for the duration of the program's run?
Convert it to an object and link using objcopy as explained here. This seems to keep the file size about the same -- are there other trade-offs?
Is there a standard/recommended method for doing this? I can use C or C++ if that makes a difference.
Thanks.
(Running on Linux with GCC)
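For reference, option 2 would be consumed roughly like this; this assumes the header was generated with xxd -i data.txt > data.h, which emits data_txt and data_txt_len (the symbol names depend on the actual filename):

#include <string>

#include "data.h"  // generated by: xxd -i data.txt > data.h

int main() {
    // xxd -i emits: unsigned char data_txt[] = {...}; unsigned int data_txt_len = ...;
    std::string contents(reinterpret_cast<const char*>(data_txt), data_txt_len);
    // ... parse 'contents' and pick out the requested vector ...
    return 0;
}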
I would go with number 1 and pass the file path into the program as an argument. There's nothing wrong with doing that, and it is simple and straightforward.
You should have a look at the answers here:
Directory of running program
The top-voted answer gives you a clue how to handle your data file. But instead of the home folder, I would suggest saving it under /usr/share, as explained in the link.
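A minimal sketch of that approach; the one-vector-per-line layout ("identifier num num ...") is only an assumption about the file format:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Usage: ./app /usr/share/myapp/data.txt someIdentifier
int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::cerr << "usage: " << argv[0] << " <datafile> <identifier>\n";
        return 1;
    }
    std::ifstream file(argv[1]);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream iss(line);
        std::string id;
        iss >> id;
        if (id != argv[2]) continue;   // not the vector the user asked for

        std::vector<double> values;
        double v;
        while (iss >> v) values.push_back(v);
        std::cout << "loaded " << values.size() << " values for " << id << "\n";
        return 0;
    }
    std::cerr << "identifier not found\n";
    return 1;
}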
I'd prefer to use zlib (and both ways are possible: a side file, or an include with the compressed data).
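A rough sketch of the compression side with zlib (the decompression side would use uncompress(), which also needs the original size stored alongside the compressed bytes):

#include <zlib.h>   // link with -lz
#include <vector>

// Compress a buffer with zlib; the result could be written to a side file
// or embedded in the program, and expanded again at runtime.
std::vector<Bytef> CompressBuffer(const std::vector<Bytef>& input) {
    uLongf destLen = compressBound(input.size());
    std::vector<Bytef> out(destLen);
    if (compress(out.data(), &destLen, input.data(), input.size()) != Z_OK)
        return {};             // compression failed
    out.resize(destLen);       // shrink to the actual compressed size
    return out;
}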
I'm making a simple game with SFML 1.6 in C++. Of course, I have a lot of picture, level, and data files. Problem is, I don't want these files visible. Right now they're just plain picture files in a res/ subdirectory, and I want to either conceal them or encrypt them. Is it possible to put the raw data from the files into a resource file or something? Any solution is okay to me, I just don't want the files exposed to the user.
EDIT
Cross-platform solutions are best, but if they don't exist, that's okay; I'm working on Windows. But I don't really want to use a library if it's not needed.
Most environments come with a resource compiler that converts images/icons/etc into string data and includes them in the source.
Another common technique is to copy them onto the end of the final .exe as the last part of the build process. Then at run time, open the .exe as a file and read the data back from some determined offset; see Embedding a filesystem in an executable?
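A rough sketch of the read-back side, assuming the build step appended the payload followed by a 4-byte little-endian size as the very last bytes of the file (that footer layout is just one possible convention):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// 'exePath' would come from argv[0], or GetModuleFileName on Windows.
std::vector<char> ReadAppendedPayload(const std::string& exePath) {
    std::ifstream exe(exePath, std::ios::binary);

    exe.seekg(-4, std::ios::end);                 // footer: payload size
    unsigned char b[4];
    exe.read(reinterpret_cast<char*>(b), 4);
    uint32_t size = uint32_t(b[0]) | (uint32_t(b[1]) << 8) |
                    (uint32_t(b[2]) << 16) | (uint32_t(b[3]) << 24);

    exe.seekg(-(std::streamoff(size) + 4), std::ios::end);  // start of payload
    std::vector<char> payload(size);
    exe.read(payload.data(), size);
    return payload;
}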
The ideal way to do this is to create your own archive format, which would contain all of your files' data along with the extra info needed to locate each file distinctly within it.