It's my first time developing a project with other developers (there are only two of us right now). Our idea is to edit the files directly on the server, using FTP/SFTP software such as FileZilla. We want any file that is opened for editing to become locked, so the other user cannot edit it at the same time. Is this possible? If not with FileZilla, perhaps with other software? I've looked at Git, Codiad and other similar solutions, but they are either too complicated (merging concurrent edits on GitHub is not trivial) or have bugs (Codiad is not saving the files). We think file locking is primitive but good enough for us (we're in the same room). The question is: how do we implement it?
Don't go that way.
First of all, it's risky to have only one copy of your code and that's what will happen if you edit files on the server directly.
Secondly, I really don't understand why people find Git overkill.
For a small team with no actual delivery pipeline, you don't need to master the full power of Git. You don't even have to work with branches and merges.
All you need to start working with git is:
git clone URL_OF_REPOSITORY - to download a copy of the repository and start working with it
git add PATH_TO_EDITED_FILE_OR_FOLDER - to mark files and/or folders as staged for commit
git commit -m 'DESCRIPTION OF CHANGES' - to commit/accept the changes in the files and folders marked in the previous step and save information about those changes locally
git push - to upload committed changes to the remote repository
git pull - to download changes committed and pushed by others from the remote repository
As you can see, the commands don't expect you to provide any sophisticated information: just the URL of the remote repo, the files you consider ready to be shared with others, and a description of the changes made. It's not hard, really. And you should really start getting familiar with version control systems, as this is one of the core skills in software development these days.
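To make that concrete, here is a typical working session (the repository URL and file name are just placeholders):

git clone https://example.com/team/project.git   # hypothetical URL of your shared repository
cd project
git pull                                  # pick up your colleague's latest changes
# ...edit index.php in your favourite editor...
git add index.php
git commit -m 'Fix login redirect'
git push                                  # publish your commit to the shared repository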
If you are the owner of the machine, you can install Koding for your team (max 4 users for free) and use its web IDE in collaborative mode.
Being a small team, locking is OK (it shouldn't be used on big projects), but you should look into Git. I have found it only gets messy if you both work on the same bit of code at the same time. To remove some of the complexity, look into a Git GUI; the one I use is SourceTree, but there are many others. They just help you visualise what you are doing and what changes are happening.
Now, to answer your question: I don't know of any standalone software that will do what you are after, but Dreamweaver did have a way of locking files for itself. It did this by creating a file called filename.lock, a small text file with details of who locked it. You could do something like this manually: when you want to lock a file you create the lock file, to unlock it you delete it, and you both promise not to edit files that have a lock file.
It's not the best or the neatest solution, but depending on how you work, it could do the job.
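Purely as a sketch of that convention (host and file names are made up, and nothing enforces it; it only works if everyone honours the lock files), it could be as simple as a few one-liners over SSH:

# lock index.php: record who locked it and when
ssh user@yourserver "echo \"locked by $(whoami) at $(date)\" > /var/www/site/index.php.lock"
# before editing, check whether someone else holds the lock
ssh user@yourserver "cat /var/www/site/index.php.lock" 2>/dev/null
# unlock when you are done
ssh user@yourserver "rm -f /var/www/site/index.php.lock"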
There's no file locking in FTP.
There's locking in SFTP, but it is not generally supported, see SFTP file lock mechanism for details.
There's locking in WebDAV, but as far as I know common implementations like mod_dav only lock a file within the WebDAV "world"; the actual underlying file is not locked in any way.
So I'm not aware of any generally available mechanism for locking files remotely.
The problem is that the source code distribution is not exactly the code that runs after installation. The installer, which runs when the site is accessed for the first time, generates a lot of code. Also, a running system stores some data in php source code (e.g. user profiles - under the /user_privileges directory) rather than in the database. So, I have the following unsatisfactory possibilities.
(1) Put the original source code under VC and edit it. In this case I have to do a fresh install and run the installer every time to see how my changes are working.
(2) Put the installed source code (after the installer has run) under VC, and edit it. In this case I have immediate feedback, but I can't use that code for new installations. I also have to exclude from VC everything that the running system writes into the source tree.
Any suggestions?
I am working with Vtiger CRM version 6.0Beta, but any tips relevant to version 5 would help.
Thanks.
Choice 1 is appropriate. VC must always track the source code, not the products of any interpreter or processing. I feel your pain. It is so easy to tweak that Vtiger source code, and VC tends to be left by the wayside.
Get familiar with Git. Really, it is what you want. Look, I already did it (a rough sketch follows these steps):
Copy the original code into one branch.
Copy the modified code into another branch.
Make a diff, or better, run git format-patch.
Install (check out) your new version.
Check the patches and apply them, if necessary.
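A rough sketch of those steps (paths and branch names are only examples, and copying the trees over is simplified):

git init vtiger && cd vtiger
# branch holding the pristine source
git checkout -b original
cp -r /path/to/original/source/. .
git add -A && git commit -m 'Original Vtiger source'
# branch holding your customised code
git checkout -b modified
cp -r /path/to/customised/source/. .
git add -A && git commit -m 'Our customisations'
# produce patch files describing the difference
git format-patch original..modified -o patches/
# later, inside a freshly installed (and git init-ed) copy, re-apply them:
git am /path/to/patches/*.patch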
Bonuses
Have a private and a public remote for your repo, so that you can keep track of the messy files in user_privileges and friends privately, but still share code with others.
Have an absolutely beautiful backup with daily rollback by just setting up a branch, a remote and a cronjob.
Be able to replicate the live situation within minutes for local development.
Pain-free updates!
I know, this is no easy task, but once done, it will make your life much easier.
I've been looking into using Mercurial for version management and wanted to try it out. I'm planning on doing a project on my own to help me learn C++ coming from Java, and I thought it would be a good idea to try version management with it. It would probably be a game or some sort of simple application; I'm not sure yet whether it will have a GUI or not.
The problem is I don't know much about version management. When in a project is a good time to make the repository? Should I just do it right away? How often should I make commits?
I'll be using NetBeans for my development since it seems to be pretty good and also has Mercurial controls built in, but what should I put in the .hgignore for my project? I assume the repository would be the whole folder NetBeans creates, but what should I make it ignore?
Bonus questions: I'll be using Bitbucket for hosting/backup, should I make the project public? Why/why not? Also would that make it open source? Should I bother with attaching a license? Because I have no idea how the licenses work and such. Then again, I sincerely doubt I'll be making anything anyone will want to copy too badly.
On repos:
You should use a VCS repository as soon as possible. Not only does it allow for off-site backups, it also lets you track changes and find out when a bug was introduced.
Having a VCS, particularly a distributed VCS like Mercurial, will change the way you code. You won't have to be so careful about breaking things, because you always have the old version that you can revert to. If, after a few weeks or months, you decide that a particular course of development was a bad idea, you can rewind all the way back to some prior point.
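For instance (the revision numbers below are made up), Mercurial lets you browse history, jump back to an older state, and even hunt down the commit that introduced a bug:

hg log                      # browse the change history
hg update -r 42             # check out revision 42 to see how things looked then
hg update                   # ...and come back to the newest revision
hg bisect --reset           # binary-search for the commit that broke something
hg bisect --bad             # the current revision is broken
hg bisect --good 30         # revision 30 was still fine; hg now proposes revisions to test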
You should generally commit either every day, or every time you finish a task. Like, you might have a 4-hour task of "write this class and get it working". You commit after that. When you're done for the day, you commit. I wouldn't suggest committing in a non-compiling state, though, so you should try to stop for the day only when everything still builds.
As for ignoring, you should ignore things you don't intend to commit: things that you wouldn't want in the repo, such as generated files, temporary files, and other stuff that you wouldn't need.
What should be in the repo is 100% of the information necessary to build the project from scratch (minus external dependencies).
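As a concrete starting point (assuming a NetBeans C++ project; the exact directory names may differ in your setup), a minimal .hgignore created from the project root might look like this:

cat > .hgignore <<'EOF'
syntax: glob
build/
dist/
*.o
*.obj
*.exe
*~
nbproject/private/
EOF
hg add .hgignore            # track the ignore file itself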
On licenses:
I would say that it is very rude to put up a public repository without some kind of license attached. Without a license, anyone even pulling from the repo (which is what you're inviting by making it public) is violating your copyright unless they have direct, explicit permission from you.
So either look at how copyright works and pick a license to release your stuff under, or keep your repos private.
The instant you conceive of it, usually. There is no penalty for putting everything you have into a VCS right away, and who knows, you may want that super-duper prototype version later on down the road. Unless you're going to be checking in large media files (don't), my answer is: right now.
As for how often to commit, that's a personal preference. I use git, and as far as I'm concerned there is no problem with checking in every time you make some unit of significant change. The more snapshots along the way you have, the easier it becomes to track down bugs later on.
Your .hgignore is up to you. You need to decide what to version and what not to. As a rule of thumb, any files generated during compilation probably shouldn't be checked in.
Also, if you release it to the public, please attach a license. It doesn't matter what it is, but it makes all the bosses and lawyers in the world happy when they know what they can and can't do with your code. Finally, please don't release something to the public unless you think someone else will benefit from it. The internet already suffers from information overload.
Commit early, and commit often.
With distributed VCS like hg or git there's really no reason not to start a repo before you even write any code. If you decide to scrap your project, you can always delete your repo before you post it to whatever hosting site you're using. The more often you commit, the easier it will be to hone in on where you messed up later on (you will).
Stuff to put in your ignore file would be build files and anything that your IDE generates periodically, like user preferences and tag caches. You don't want to foist your preferences on someone else and you don't want to have to create a commit every time you change the position of a window in the IDE. What's in the repo should be the minimum that someone would need to be able to work on your project.
If you're just doing a small project to learn, then I wouldn't worry about licenses or having a private repo. You can always add a license later. Alternatively, just pick a permissive license and go with it.
Based on my experience with git, I would recommend the following:
Create the repository along with the project.
Commit as often as you can, with detailed comments.
When you want to test a new idea, create a branch. If the idea is successful, merge your code into the master branch. If the idea is not successful, you can just discard the branch. This is probably the most important point (see the sketch after this list).
Put the names of object files, executables and editor backup files in the ignore list.
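A minimal sketch of that branch-per-idea workflow (the branch name is just an example):

git checkout -b crazy-idea      # start an experiment on its own branch
# ...hack away, committing as often as you like...
git checkout master
git merge crazy-idea            # the idea worked: bring it into master
# or, if it turned out to be a dead end:
git branch -D crazy-idea        # throw the experiment away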
This is a newbie question.
I have a server where I've uploaded my whole working directory. It's a small project in Django.
I want to work both locally and on the remote server, but I want the two directories to always be synchronized: when I sit down to work on my computer, the working directory should be up to date, and vice versa.
Someone suggested I use sshfs, rsync, or git.
What are your recommendations? Which one should I use?
You should be using git (or another version control system) anyway, to ensure that you always have a record of the changes you make to your work. Synchronising across systems is an added benefit that you will get if you set up the git remotes properly, and always ensure that you pull from the remote when you start and push back when you finish.
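As a sketch (host names and paths are placeholders), one common setup is a bare repository that both your laptop and the server pull from and push to:

# one-time setup: a bare repository acting as the central copy
ssh user@server "git init --bare ~/repos/mysite.git"

# on your laptop (assuming your project is already a git repo with at least one commit)
git remote add origin user@server:repos/mysite.git
git push -u origin master

# working copy on the server (the code Django actually runs)
ssh user@server "git clone ~/repos/mysite.git ~/mysite"

# daily routine on either machine: pull before you start, push when you stop
git pull
# ...work, git add, git commit...
git push
# a plain 'git pull' on the other machine then brings it up to date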
Easiest answer for complete synchronization between them... Dropbox :)
But git might serve you well. Especially if you want to expand on this someday to have a QA vs. Production server that you stage changes across. That topic is a big one that's been answered a few times here. Do a search for "git production deploy" to get a list of questions around that topic.
I would love to hear ideas on how to best move code from development server to production server.
A list of gotchas, a "don't do this" list, would be helpful.
Are there any tools to help automate the following steps?
Make backups of existing code, given these list of files
Record the Deployment of these files from dev to production
Allow easier rollback if deployment or app fails in any way...
I have never worked at a company that had a deployment process, other than a very manual "FTP the files from dev to production".
What have you done in your companies, departments, etc?
Thank you...
Yes, I am a ColdFusion programmer, but files are files, and this should be a language-agnostic question.
OK, I'll bite. There's the technology aspect of this problem, which other answers have already covered. But the real issue is a process problem. The real focus should be on ensuring a meaningful software development life cycle (SDLC): planning, development, validation, and deployment. I'll cover each in turn. What you want is a repeatable activity at each phase.
Planning
This means articulating and recording what's to be delivered. Often tickets or user stories are enough. Sometimes you do more, like a written requirements document that a customer signs off on and that gets translated into various artifacts such as written use cases. Ultimately, what you want is something recorded in an electronic system where you can associate code changes with it. Which leads me to...
Development
Remember that electronic system? Good. Now when you make changes to code (you're committing to source control, right?) you associate those changes with something in this electronic system - typically tickets. I like Trac, but have also heard good things about Atlassian's suite. This gives you traceability, so you can assert what's been done and how. Then you can use this system and source control to create a build - all the bits needed for whatever's changed - and tag that build in source control; that's your list of what's changed. Even better, have a build contain everything, so that it's a standalone entity that can easily be deployed on its own. The build is then delivered for...
Validation
Perhaps the most important step, and one that many shops ignore - at their own peril. Defects found in production are exponentially more expensive to fix than when they're discovered earlier in the process. And validation is often the only step where this discovery happens in many shops - so make sure yours does it.
This should not be done by the programmer! That's like the fox watching the hen house. And whoever is doing it should be following some sort of plan. We use TestLink. This means each build is validated the same way, so you can identify regression bugs. And this build should be deployed in the same way as you would deploy into production.
If all goes well (we usually need a minimum of 3 builds) the build is validated. And this goes to...
Deployment
This should be a non-event, because you're taking a validated build through the same steps as you did in testing. It could be that it first hits a staging server, where there's an automated copying process, but the point is that it shouldn't be an issue at this stage, because you validated with the same process.
Conclusion
In terms of knowing what's where, what you really want is a logical way to group changes together. This is where the idea of a build comes in. It's really the unit that should segue between steps in the SDLC. If you already have that, then the ability to understand the state of a given system becomes trivial.
Check out Ant or Maven - these are build and deployment tools used in the Java world which can help you copy / ftp files, backup and even check out code from SVN.
You can automate your deployment steps using these tools; for example, Ant will allow you to declare a set of tasks as part of your deployment. So you could, for example (a rough shell equivalent of these steps is sketched after the list):
Check out a revision using SVNAnt or similar to a directory
Copy (and perhaps zip first) these files to a backup directory
FTP all the files to your web server(s)
Create a report to email to the team illustrating the deployment
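If you want to see the shape of those steps before writing any Ant XML, here they are as plain shell commands - only a sketch, with placeholder repository URL, host, credentials and paths, and assuming lftp and a configured mail command are available:

# 1. check out a clean copy of the revision to deploy
svn checkout -r HEAD svn://svnserver/project/trunk build

# 2. archive it as a backup before anything touches the web server
mkdir -p backups
tar czf backups/deploy-$(date +%Y%m%d-%H%M).tar.gz build

# 3. mirror the files to the web server over FTP
lftp -u deployuser,secret ftp://www.example.com -e "mirror -R build /htdocs; quit"

# 4. mail a short report to the team
echo "Deployed r$(svn info build | awk '/^Revision/ {print $2}')" | mail -s "Deployment report" team@example.com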
Really, you can do almost anything you wish to put time into using Ant. Maven is a little more structured (and newer), and you can see a discussion of the differences here.
Hope that helps!
In a nutshell...
You should start with some source control solution - probably Subversion or Git. Once that's in place you can create a script that generates a clean build of your source code and deploys it to your production server(s).
You could do this with a simple batch script or use something like Ant for more control. Here is a simple example of a batch file using Subversion:
REM Tag the current trunk so this release can be reproduced later
svn copy svn://path/to/your/project/trunk -r HEAD svn://path/to/your/project/tags/%version% -m "Tagging release %version%"
REM Check out a clean copy to deploy from
svn checkout svn://path/to/your/project/trunk -r HEAD //path/to/target/directory
Ant makes it easy to do things like automatically run unit tests and sync directories. For example:
<sync todir="//path/to/target/directory" includeEmptyDirs="true" overwrite="true">
<fileset dir="${basedir}">
<exclude name="**/.svn/**"/>
<exclude name="**/test/"/>
</fileset>
</sync>
This is really just a starting point. A next step might be a continuous integration solution like Hudson. I would also recommend reading "Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications".
One ColdFusion specific gotcha is to make sure you clear the Application scope when required (to update any singleton components). A common approach here is to use a URL parameter that causes onRequestStart() to call onApplicationStart(). You may also have to clear the trusted cache.
We use a system called AnthillPro: http://www.anthillpro.com
It's commercial software, but it allows us to completely automate our deployment process across multiple servers and operating systems. (We currently use it for both ColdFusion and Java, but it can be used for most languages.) It has a ton of 3rd-party integrations:
http://www.anthillpro.com/html/products/anthillpro/tool-integrations.html
OK, so I have a Django project. I was wondering whether I'm supposed to put each app in its own git repository, whether it's better to just put the whole project into one git repository, or whether I should have a git repo for each app and also a git repo for the project?
Thanks.
It really depends on whether those reusable apps are actually going to be reused outside of the project or not.
The only reason you might need an app in a different repo is if other projects with separate repos might need to use it. But you can always split it out later. Making a git repo is cheap, so it's one of those things you can do later if it becomes necessary. Making things complicated up front is just going to frustrate you later, so feel free to wait till you know it's necessary.
YAGNI: it applies to more than just code.
I like Tom's and Alex's answers, except that they lack the real rationale behind them ("easier to have one repository per development team"? "substantial numbers of people might be interested in pulling out (or watching changes to) separate apps"? Why?).
First of all, whether to have one or several repositories is a server-side administration decision (in terms of server resources).
SVN easily sets up several repositories behind one server, whereas ClearCase will have its own "vob_server" process per VOB, meaning you do not want more than a hundred VOBs per (Unix) server (or more than 20-30 on a Windows server).
Git is particular: setting up a repository is cheap, and accessing it can be easy (through a simple shared path) or can involve a process (a git daemon). The latter solution means not too many repositories should be accessed directly from outside (they can be accessed indirectly through submodules referenced by a super-project).
Then there is the client-side administration: how complex will the configuration of a workspace be when one or several repositories are involved? How can the client reference the right configuration (the list of labels needed to reference the correct files)?
SVN would use externals, Git would use submodules, and in both cases that adds complexity.
That is why Tom's answer can be correct.
Finally, there is the configuration management aspect (referenced in Alex's answer). Can you tag part of a repo or not?
If you can (like in SVN, where you actually make an svn copy of part of a repo), that means you can have a component approach, where several groups of files have their own life cycle and their own tags (set at their own individual pace).
But in Git, that is not possible: a tag references a commit which always concerns the whole repository.
That means a "system-based" approach, where you only want the whole project anyway (as opposed to the "watching separate apps - I've never observed it in real life" bit from Alex's answer). If that is the case (if you want the whole system anyway), that limitation is not important.
But for those of us who think in terms of "independent groups of files", it means a Git repository actually represents an individual group of files (with its own rhythm in terms of evolution and tagging), with potentially a super-project referencing those repositories as submodules.
That is not your everyday setup, so for independent projects I would recommend only a few Git repos, or just one.
But for a more complex, interdependent set of projects... you need to realize that by the very way Git has been conceived, a "repository" represents a coherent set of files supposed to evolve at the same pace as a whole. And not everything can always fit in one set of files (if "everything" is complex enough). Hence several repositories would be required in this case.
And a complex set of inter-dependent projects does happen in real life too ;)
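For completeness, the super-project/submodule setup mentioned above looks roughly like this (the URLs and paths are only placeholders):

# inside the super-project repository
git submodule add https://example.com/team/app-accounts.git apps/accounts
git submodule add https://example.com/team/app-billing.git apps/billing
git commit -m 'Reference the two apps as submodules'

# someone cloning the super-project then needs to fetch the submodules too
git clone https://example.com/team/superproject.git
cd superproject
git submodule update --init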
It is almost always easier to have one repository per development team, no matter how many products you have. Eventually you will want to share code between projects (even several separate Django websites!) and it is much easier to do with only one repository.
By all means set up a nice folder structure WITHIN that repository. However, a single checkout from git (probably from a subfolder) should give you all the files you need for your test website (Python, templates, CSS, JPEGs...), and then you just copy httpd.conf and so on to complete the installation.
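For example (the names are only illustrative), a layout within that single repository could look like this:

team-repo/
    website_a/
        manage.py
        settings.py
        templates/
        static/
    website_b/
        ...
    shared_apps/
        common_utils/

After cloning the repository, everything the test site needs then lives under website_a/ (plus whatever it imports from shared_apps/).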
I'm no git expert, but I don't believe the appropriate strategy would be any different from one for hg, or even, in this case, svn, so I'm going to give my 2 cents' worth anyway;-).
A repo per app makes any sense only if substantial numbers of people might be interested in pulling out (or watching changes to) separate apps; this could theoretically be the case but I've never observed it in real life -- mostly, people who are interested in the project are interested in it as a whole. Therefore, I would avoid complications and make the whole proj into one repo.
Do those individual apps utilize shared code and libraries? If so, they really belong in the same repository, which enables you to see the impact of new changes when running a single test suite.
If they are completely separate and agnostic, it really doesn't matter. Just for sanity, ease of building and ease of packaging, I prefer to keep a whole project in a combined repository. However, we typically work in very small teams. So, if no shared code is involved, it's really just a question of which method is most convenient and efficient for all concerned.
If you have reusable apps, put them in a separate repo.
If you are worried about the number of private repositories, have a look at Bitbucket (if you decide to make them open source, I advise GitHub).
There is a nice way of including your own apps into the project, even making use of version tags:
I recently found out a way to do this with buildout and git tags
(I used svn:externals to include an app, but switched from svn to git recently).
I tried mr.developer first, but since I couldn't get that to work, I looked for an alternative to mr.developer:
I found gp.vcsdevelop very easy to use for this purpose.
see https://pypi.python.org/pypi/gp.vcsdevelop
I ended up putting this in my buildout file and got it working at once (I had to add a pip requirements.txt to my app to get it working, but that's a good thing after all):
vcs-update = True
extensions =
gp.vcsdevelop
buildout-versions
develop-dir=./local_checkouts
vcs-extend-develop=git+git@bitbucket.org:<my bitbucket username>/<the app I want to include>.git#0.1.38#egg=<the appname in django>
develop = .
In this case it checks out my app at version tag 0.1.38 and clones it into the project, in the subdirectory ./local_checkouts/,
and develops it when running bin/buildout.
Edit: remark, 26 August 2013
While using this solution and editing in the local checkout of the app used in a project, I found out the following:
you will get this warning when trying the normal 'git push origin master' command:
To prevent you from losing history, non-fast-forward updates were rejected
Merge the remote changes before pushing again. See the 'Note about
fast-forwards' section of 'git push --help' for details.
EDIT, 28 August 2013
About working in the local_checkouts for shared apps included by gp.vcsdevelop:
(and handling the warning discussed in the remark above)
git push origin +master seems to screw up the commit history for the shared code.
So the way to work in the local_checkout dir is like this:
Go into the local checkout (after a bin/buildout, so the checkout is at master):
cd local_checkouts/<shared appname>
Create a new branch and switch to it like this:
use git checkout -b issue_nr1 (here the branch is named after the issue you are working on)
and after you are done working in this branch, use (after the usual git add and git commit):
git push origin issue_nr1
When tested and completed, merge the branch back into master.
First check out master:
git checkout master
Update (probably only needed when there have been other commits in the meantime):
git pull
and merge into master (which you are on right now):
git merge issue_nr1
and finally push this merge:
git push origin master
(with special thanks to this simplified guide to git: http://rogerdudler.github.io/git-guide/)
And after a while, to clean up the branches, you might want to delete this branch:
git branch -d issue_nr1