Are there limitations on .pyc files? - python-2.7

I was recently trying to make my own module when I realised a copy of my module had been made, but instead of ending in .py like the original, it ended in .pyc. When I opened it, I could not understand a thing. I was importing the module into a game I'm making with pygame, and the fact that the .pyc file is full of question marks and weird symbols seemed like it might help against hackers if I ever make a game good enough for release (which probably won't happen). I just want to know a few things about these files:
Can other computers that download the game still read the module if I delete the original and leave only the weirder .pyc file?
Are they readable by humans, and can they actually prevent hacks on the downloaded game? (It's not online; I just don't want the game to be easy to pick apart for people who know Python.)
Should I get rid of them for what I am doing? (I saw other questions asking how to delete them, but the answers said the files were actually helpful.)
Last but not least, will it work for .txt files (will they show up as a bunch of symbols too)?
Thanks!

The .pyc files are not meant to be readable by humans: the Python interpreter compiles the source code to bytecode and caches it in these files, and it is this bytecode that the Python virtual machine executes. You can delete them safely; the next time you run the .py file, a new .pyc will be created. In Python 2.7 a module can also be imported from its .pyc alone, so the game will still run if you ship only the .pyc files. Bear in mind, though, that bytecode can be decompiled back into fairly readable source, so this is obfuscation, not real protection.
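As a quick illustration of that lifecycle, here is a minimal sketch for Python 2.7; the module name mymodule is a made-up example, not anything from the question:

import py_compile
import os

# Compile the source file to bytecode; this writes mymodule.pyc
# next to mymodule.py.
py_compile.compile('mymodule.py')

# Remove the source. The import below now finds only mymodule.pyc,
# which Python 2.7 loads just as it would the .py file.
os.remove('mymodule.py')

import mymodule

This is also why shipping only .pyc files works for distribution: the interpreter never needs the .py as long as matching bytecode is present.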

Related

How do you create a new file in OCaml and where does it store it?

I can't seem to find the answer: how do you create a new file in OCaml? Do you edit your file in the terminal? Where does the source code appear?
I think you're asking how to write code in OCaml, i.e., how to create an OCaml source file. (This isn't completely clear. You could be asking how to write OCaml code that creates a file.)
The details of creating OCaml source depend on your development environment, not on the language itself. So there is no one answer.
The general answer is that you can use any tool you like that knows how to create a text file. If you like working from the command line (as I do) you can work in a terminal environment and run some kind of vintage text editor from the last millennium (as I do). If you like a GUI environment, you can run some kind of "programmer's editor" from the current millennium, or really any kind of editor that creates basic utf-8 files (or even ASCII files).
Generally the editor will have to be told where to store the files that you edit. You would probably want to make some kind of folder for the project and make sure you store the text files in there.
I hope this helps! If you have any programmers nearby, they can probably get you started a lot faster than asking on StackOverflow.

Making updates and mods to C++ apps

I am pretty new to C++, so this is going to be a VERY big rookie question.
Let's say I want to make an update to my app, like adding a new function to a class, adding a new class, or just changing some code to improve the app. The IDE I use (Visual Studio) builds my code into a single .exe file. But when I want to make an update to the code, I have to rebuild the whole application. This is bad because when I distribute my app and then want to update it, I have to send everyone an updated version of the entire .exe file, and that will use up a lot of bandwidth. Is there any way for my .exe file to update without users having to download the whole thing, or a specific way I should distribute it so it is easy to update and mod?
If you wish to update the .exe you'll need to rebuild it and send clients the new version; only the compiler can merge code safely at the binary level. But even several tens of thousands of lines of code don't take up much space, so I assume you have resources (images, sounds, and so on) that account for most of it. Split those into a .dll library and ship it once; your .exe will be faster and smaller. If you really do have a lot of code and only part of it needs updating, you can move functions/classes into another .dll as well: mark them as extern "C", call them from your program, and replace just that .dll when needed.
Writing some kind of auto-updater isn't a bad idea either; eventually you'll get tired of sending files manually each time.
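The auto-updater idea boils down to a simple version check. Here is a minimal sketch in Python 2.7 just to show the logic; the URLs, version string, and file names are all made up, and a real updater would also verify a checksum before swapping files:

import urllib2

LOCAL_VERSION = '1.0.2'
VERSION_URL = 'http://example.com/myapp/latest-version.txt'  # hypothetical
EXE_URL = 'http://example.com/myapp/myapp.exe'               # hypothetical

# Ask the server for the newest version string.
latest = urllib2.urlopen(VERSION_URL).read().strip()

if latest != LOCAL_VERSION:
    # Download the new executable next to the old one.
    data = urllib2.urlopen(EXE_URL).read()
    with open('myapp-%s.exe' % latest, 'wb') as f:
        f.write(data)
    print 'Updated to version', latest
else:
    print 'Already up to date.'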

What's wrong with this user ignore file for Mercurial?

A little retrospective now that I've settled into Mercurial. Forget forget-files combined with hg remove. It's crazy and bass-ackwards. You can use hg remove once you've established that something in a forget-file isn't being forgotten because the item in question was tracked before the original repo was created. Note that hg remove effectively clears tracked status, but it also schedules the file for deletion in anything that gets changes from your repo. If the file is ignored, the tracking deactivation still happens, but that delete-me changeset won't ever reach another repo and for some reason will never delete in yours, which IMO is counter-intuitive. It is a very sure sign that somebody, and I don't know these guys, is UNWILLING TO COMPROMISE ON DUH DESIGN PROBLEMS. The important thing to understand is that you don't determine what's important, Mercurial does. Except when you're merging on a pull, of course. It's entirely reasonable then. But I digress...
Ignore-file/remove is a good combo for already-tracked but very specific files you want forgotten, but if you're dealing with a larger quantity of built files matched by broader patterns, it's not worth the risk. Just go with a double repo: pull -u from the remote repo into your syncing repo, then pull -u commits from your working repo and merge in a repo whose sole purpose is to merge changes and pass them on, a place where your not-quite-tracked or untracked files (the behavior is different when pulling rather than pushing, of course, because hey, why be consistent?) won't cause frustration. Trust me. The idea that you need two repos just to get 'er done offends for good reason, AND THAT SO MANY OF US ARE DOING IT should suggest a serious !##$ing design problem, but it's much less painful than all the other awful things that will make you regret seeking a sensible alternative.
And use hg help. It's actually Mercurial's best feature, and often better than the internet (which I don't fault for the confusion on all things hg) for getting answers to everything that is confusing and counter-intuitive in this VCS.
/retrospective
# switch to regexp syntax.
syntax: regexp
#Config Files
#.Net
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp[\\/]app\.config
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp_test[\\/]App\.config
#and more of the same following
And in my mercurial.ini, at the root of my user directory:
[ui]
username = ereppen
merge = bcomp
ignore = C:\<path to user ignore file>\.hgignore-config
Context:
I wrote an auto-config utility in node. I just want the changes it makes to those files to be ignored. We have two teams, and they aren't on the same page about making this a universal thing, so it needs to be user-specific for now.
The config file is in place and pointed at by my ini file. I clone, I run the config utility to change the files, and hg stat reveals a list of every single file with an M next to it. I thought it was the utf-8 thing and explicitly set the file to utf-16 little-endian. And I don't think I'm doing anything with the regex that any modern flavor worth actually calling regex wouldn't support.
The .hgignore file has no effect on files that are tracked. Its function is to stop you from seeing files you want ignored listed as "untracked". If you're seeing "M" then the files are already tracked (you got them with the clone), so .hgignore does nothing for them.
The usual way config files that differ from machine to machine are handled is to put an app.config.sample in source control, list app.config in .hgignore, and have people make a copy when doing their local config edits.
Alternately, if your config file format allows includes and overrides, end the tracked config with include app-local.config and put any overridden settings in app-local.config, which you don't add to source control and do list in .hgignore.
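Putting the sample-file pattern together, a minimal sketch might look like this (app.config is just the example name from above; adjust paths to taste):

# in .hgignore, using glob syntax
syntax: glob
app.config

# one-time setup in the repo
hg forget app.config                 # untrack the real config if it was tracked
hg add app.config.sample
hg commit -m "track sample config, ignore the real one"

# each developer then copies the sample and edits locally
copy app.config.sample app.config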

finding my py modules in sub folders, from the main application working dir

This question might have been asked before, but I could not find it.
I am on a Linux box. I have a py app that runs from a folder called /avt (example).
I did not write this code, and it has about 12 modules that go with it. I was the lucky engineer to inherit this mess.
This app imports other modules that live under the dir /avt/bin.
I want to be able to find my modules in the /avt/bin dir no matter what the current working dir is. Sometimes the app changes dir to some other sub-folder to perform some file I/O and should then return, but it seems like sometimes it does not make it back, because the code errors out with a "no such file or directory" error. So I want to test the working dir each time before I do any file I/O against the /avt/bin dir.
As an example, I want to create files in /avt/bin, and then later open those files and read data from them. How can I test to make sure my current working dir is always /avt, and if it is not, chdir to it? Note: it also has to be portable code, meaning it must run with any directory structure on any Linux machine.
I tried this code, but it is not very clean, I think. Python is not my main language. Is this coding proper, and will it work for this? Forgive me, I don't know how to format it for this forum.
import os
import sys
import inspect

Avtfolder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]))
if Avtfolder not in sys.path:
    sys.path.insert(0, Avtfolder)
if Avtfolder.__contains__('/avt'):
    modfilespath = Avtfolder + '/bin'
    print 'bin dir is ' + modfilespath
else:
    print 'directory lost...'
    # write some code here that changes to the root /avt dir
I have a few notes.
First, I'm afraid you are mixing up two problems (or I couldn't tell from the question which one you're facing). These problems are:
I/O to files that can reside in different directories on different machines
Importing Python modules used by your app that can also be in slightly different locations.
The title of the question and some of the text suggests you're dealing with problem 2, whereas references to I/O and "no such file or directory" error point to problem 1.
Those are, however, separate problems, and they are treated separately. I won't be able to give exact recipes for both, but here are some suggestions:
For problem 1: I don't think it's a good idea to do I/O, create files, etc. in the folder where the user installs the Python libraries. It's a folder for Python modules, not data. Also, if the library is installed via setup.py, using pip or easy_install (and even if that isn't the case now, it may change in the future), then the program will probably have insufficient permissions to write there unless invoked as root. And that is how it should be. Create files somewhere else.
As to "how to track the directory changes" part: I must confess I don't quite understand what you mean. Why do you even using the concept of "current directory"? In my mind you should just have some variable such as write_path, data_path, etc. and the code would be
import os  # needed for os.path.join

data = open(os.path.join(data_path, 'data.foo'))
dump = open(os.path.join(write_path, 'dump.bar'), 'w')

etc.
Why do you even care where your libraries are located? I don't think that's right; I'd change it. This inspect.currentframe() stuff smells like you really need to rethink the design of the library.
Now, what the location of the libraries matters for is Problem 2. But again, the absolute path shouldn't matter (if it does, change that!). You only need all the modules to be inside one folder (or its subfolders). If they are in the same folder, you're good: import foo will just work. If some are in subfolders, those subfolders should have a file named __init__.py in them; then they will be seen as packages by the Python interpreter, and you'll be able to do from foo import bar, where foo is a subfolder with __init__.py and bar.py in it.
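For instance, a layout like the following (avt and bin come from the question; the helper name fileutils.py is made up for illustration):

avt/
    main.py
    bin/
        __init__.py      # empty file; marks bin as a package
        fileutils.py     # hypothetical helper module

lets main.py simply do

from bin import fileutils

as long as avt/ is the script's directory, which Python 2.7 puts on sys.path automatically.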
So, try to rewrite it so that you don't depend on where the .py files are; you really shouldn't need to use inspect there at all. On another note, don't use special methods like __contains__ directly unless you really need to: writing if '/avt' in Avtfolder does the same thing.
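And if you do still want the directory of the current file, the usual idiom (a sketch, not from the original answer) uses __file__ rather than the inspect gymnastics:

import os
import sys

# Directory containing this .py file; no inspect needed.
Avtfolder = os.path.dirname(os.path.abspath(__file__))

if Avtfolder not in sys.path:
    sys.path.insert(0, Avtfolder)

modfilespath = os.path.join(Avtfolder, 'bin')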

C++ Directory Restructuring

I have a source code of about 500 files in about 10 directories. I need to refactor the directory structure - this includes changing the directory hierarchy or renaming some directories.
I am using svn for version control. There are two ways to refactor: one preserving svn history (using the svn move command) and the other without preserving it. I think refactoring while preserving svn history is a lot easier using Eclipse CDT and the SVN plugin (Visual Studio does not fit at all for directory restructuring).
But right now since the code is not released, we have the option to not preserve history.
Still, there remains the task of changing the include directives for header files wherever they are included. I am thinking of writing a small script in Python that receives a map from current filename to new filename and makes the rename wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
If you're having to rewrite the #includes to do this, you did it wrong. Change all your #includes to use a very simple directory structure, at most two levels deep, and only use a second level to organize around architecture or OS dependencies (like sys/types.h).
Then change your make files to use -I include paths.
Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong.
As far as the history part, I personally find it easier to make a clean start when doing this sort of thing; archive the old one, make a new repository v2, go from there. The counterargument is when there is a whole lot of history of changes, or lots of open issues against the existing code.
Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong.
The argument against preserving the history can be made in the following cases:
your code might have embarrassing things, like profanity and fighting among developers
you don't care about the commit history of your code, because it's not going to change or be maintained in the future
I did some directory refactoring like this last year on our code base. If your code is reasonably structured at the beginning, you can do about 75-90% of the work using scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory to a series of nested directories based on namespaces. So, a file that declared the class protocols::serialization::SerializerBase moved to src/protocols/serialization/SerializerBase. The mapping from the old name to the new name was trivial, so doing a find-and-replace on the #includes in every source file in the tree was also trivial, although it was a big change. There were a couple of weird edge cases that we had to fix by hand, but that seemed a lot better than either doing everything by hand or writing our own C++ parser.
Hacking up a shell script to do the svn moves is trivial. In tcsh it's foreach F ( $FILES ) ... end to adjust a set of files. Perl & Python offer better utility.
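As a minimal sketch of the rename-map idea from the question (the mapping entry and the src/ tree are hypothetical; adapt both), a Python 2 script along these lines rewrites matching #include directives in place:

import os
import re

# Hypothetical map from old include paths to new ones.
RENAMES = {
    'SerializerBase.h': 'protocols/serialization/SerializerBase.h',
}

include_re = re.compile(r'^\s*#\s*include\s*["<]([^">]+)[">]')

def rewrite_line(line):
    # Replace the path inside #include "..." or #include <...> if mapped.
    m = include_re.match(line)
    if m and m.group(1) in RENAMES:
        start, end = m.span(1)
        return line[:start] + RENAMES[m.group(1)] + line[end:]
    return line

for dirpath, dirnames, filenames in os.walk('src'):
    for name in filenames:
        if not name.endswith(('.h', '.cpp', '.cc')):
            continue
        path = os.path.join(dirpath, name)
        with open(path) as f:
            lines = f.readlines()
        new_lines = [rewrite_line(l) for l in lines]
        if new_lines != lines:
            with open(path, 'w') as f:
                f.writelines(new_lines)

Run it from the project root before (or after) doing the svn moves and spot-check the diff; edge cases like conditional includes still deserve a manual pass.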
It really is worth saving the history. Especially when trying to track down some exotic bug. Those who do not learn from history are doomed to repeat it, or some such junk...
As for altering all the files... There was a similar question just the other day over at:
https://stackoverflow.com/questions/573430/c-include-header-path-change-windows-to-linux/573531#573531