How to work locally and remotely and have the work always synchronized - django

This is a newbie question.
I have a server where I've uploaded my whole work directory. It's a small project in Django.
I want to work both locally and on the remote server, and I want the two directories always synchronized: when I sit down to work on my computer, I want the work directory to be up to date, and vice versa.
Someone told me to use sshfs, rsync, or git.
What are your recommendations? Which one should I use?

You should be using git (or another version control system) anyway, to ensure that you always have a record of the changes you make to your work. Synchronising across systems is an added benefit that you will get if you set up the git remotes properly, and always ensure that you pull from the remote when you start and push back when you finish.
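For instance, a minimal setup might look like this (the hostname and paths are placeholders; any SSH-reachable server with git installed will do):
# One-time setup: a bare repository on the server, and a local clone pointing at it.
ssh user@myserver.example.com 'git init --bare ~/repos/myproject.git'
cd ~/work/myproject
git init
git add .
git commit -m "Initial import"
git remote add origin user@myserver.example.com:repos/myproject.git
git push -u origin master
# Daily routine, on whichever machine you sit down at:
git pull    # start of session: fetch what was pushed from the other machine
# ...work...
git push    # end of session: publish your changes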

Easiest answer for complete synchronization between them... Dropbox :)
But git might serve you well. Especially if you want to expand on this someday to have a QA vs. Production server that you stage changes across. That topic is a big one that's been answered a few times here. Do a search for "git production deploy" to get a list of questions around that topic.

Related

How to lock a file on an Ubuntu server when editing

It's my first time developing a project with other developers (only two, right now). Our idea is to edit the files right on the server, using FTP/SFTP software such as FileZilla. We want any file opened for editing to become locked, so the other user cannot edit it at the same time. Is this possible? If not with FileZilla, perhaps with other software? I've looked at Git, Codiad and other similar solutions, but they are too complicated (merging concurrent edits in GitHub is not trivial) or have bugs (Codiad is not saving the files). We thought that file locking is primitive but good enough for us (we're in the same room). The question is: how do we implement it?
Don't go that way.
First of all, it's risky to have only one copy of your code and that's what will happen if you edit files on the server directly.
Secondly, I really don't understand why people find git overkill.
For a small team with no actual delivery pipeline, you don't need to master the whole of git's powers. You don't even have to work with branches and merges.
All you need to start working with git is:
git clone URL_OF_REPOSITORY - to download a copy of the repository and start working with it
git add PATH_TO_EDITED_FILE_OR_FOLDER - to mark files and/or folders as staged for commit
git commit -m 'DESCRIPTION OF CHANGES' - to commit/accept the changes in the files and folders marked in the previous step and save information about those changes locally
git push - to upload committed changes to the remote repository
git pull - to download changes committed and pushed by others from the remote repository
As you can see, the commands don't expect you to provide any sophisticated information. Just the URL of the remote repo, the files you consider ready to be shared with others, and the desired description of the changes made. It's not hard, really. And one should really start getting familiar with version control systems, as this is one of the core skills in software development these days.
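A concrete first session, assuming the repository already lives at a placeholder URL, might look like this:
git clone https://example.com/ourproject.git
cd ourproject
# ...edit templates/home.html...
git add templates/home.html
git commit -m 'Reword the home page intro'
git push
# Next morning, pick up your colleague's changes:
git pull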
If you are the owner of the machine, you can install Koding for a team (max 4 for free) and use its web IDE in collaborative mode.
Being a small team, locking is OK (it shouldn't be used on big projects), but you should look into git; I have found it only gets messy if you both work on the same bit of code at the same time. To remove some of the complexity, look into a Git GUI. The one I use is SourceTree, but there are many others; they just help you visualise what you are doing and what changes are happening.
Now, to answer your question: I don't know of any standalone software that will do what you are after, but Dreamweaver did have a way of locking files for itself. This was done by creating a file called filename.lock, a text file with details of who locked it. You could do something like this manually: when you want to lock a file you create the lock file, to unlock it you delete it, and you both promise not to edit files that have a lock file.
It's not the best or neatest solution, but depending on how you work, it could work.
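If you do go down that road, the filename.lock convention is simple enough to script. A minimal sketch (the script name and the .lock naming are just the convention described above, not an existing tool):
#!/bin/sh
# lock.sh -- claim a file by creating filename.lock next to it.
# Usage: ./lock.sh path/to/file   (delete the .lock file when done)
f="$1"
if [ -e "$f.lock" ]; then
    echo "Already locked by: $(cat "$f.lock")" >&2
    exit 1
fi
echo "$(whoami) at $(date)" > "$f.lock"
echo "Locked $f -- remember to delete $f.lock when you are done."
Note this relies entirely on both of you honouring the convention; nothing actually prevents an edit.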
There's no file locking in FTP.
There's locking in SFTP, but it is not generally supported, see SFTP file lock mechanism for details.
There's locking in WebDAV, but AFAIK common implementations like mod_dav only lock a file within the WebDAV "world"; the underlying file itself is not locked in any way.
So I'm not aware of any generally available mechanism for locking files remotely.

How to run LAMP in an isolated environment

TL;DR
How do I encapsulate my Apache/PHP/MySQL/WordPress installation in an environment or sandbox?
In my absence the designers-that-be have decreed that we use WordPress for our newest project, as opposed to the Django-type applications I am familiar with. Another developer has already started making a custom theme, but now he's gone, I'm supposed to take over his work, and I have no experience whatsoever with LAMP or WordPress.
I thought it would be a good start to encapsulate the project into a proper standalone repository that I can copy and share between machines and people, put on GitHub, etc. Most importantly, it needs to run in a virtualenv instance (or equivalent), so it doesn't mess up anybody's other projects, we can run multiple versions, etc.
Can anybody help me with that? At the moment it looks like the required files are spread out over my whole computer, I keep having to touch stuff in my root directory, and I need sudo simply to start and stop the *##+ing server. It's complete lunacy.
Thank you!
Take a look at virtPHP, phpenv and php-build for creating isolated environments. As for typing sudo every time you need to start or stop the server, this is the right way of doing things.
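A rough sketch of what that looks like with phpenv and its php-build plugin (the version number is a placeholder, and the exact commands may differ between versions, so check the projects' READMEs):
# Build a PHP version into an isolated prefix, then pin it per project.
phpenv install 5.6.30
cd ~/projects/wordpress-site
phpenv local 5.6.30     # writes a .php-version file for this directory
php -v                  # now resolves to the sandboxed interpreter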

Best Practices for Code/Web Application Deployment?

I would love to hear ideas on how to best move code from development server to production server.
A list of gotchas and a "don't do this" list would be helpful.
Any tools to help automate steps such as:
Make backups of existing code, given these list of files
Record the Deployment of these files from dev to production
Allow easier rollback if deployment or app fails in any way...
I have never worked at a company that had a deployment process beyond a very manual one: FTP the files from dev to production.
What have you done in your companies, departments, etc?
Thank you...
Yes, I am a ColdFusion programmer, but files are files, and this should be a language-agnostic question.
OK, I'll bite. There's the technology aspect of this problem, which other answers have already covered. But the real issue is a process problem: the real focus should be on ensuring a meaningful software development life cycle (SDLC) - planning, development, validation, and deployment. I'll cover each in turn. What you want is a repeatable activity at each phase.
Planning
Articulating and recording what's to be delivered. Often tickets or user stories are enough. Sometimes you do more, like a written requirements document that a customer signs off on and that's translated into various artifacts such as written use cases. Ultimately, what you want is something recorded in an electronic system where you can associate changes to code with it. Which leads me to...
Development
Remember that electronic system? Good. Now when you make changes to code (you're committing to source control, right?) you associate those changes with something in this electronic system - typically tickets. I like Trac, but have also heard good things about Atlassian's suite. This gives you traceability, so you can assert what's been done and how. Then you can use this system and source control to create a build - all the bits needed for whatever's changed - and tag that build in source control; that's your list of what's changed. Even better, have a build contain everything, so that it's a standalone entity that can easily be deployed on its own.
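With git, for example, the ticket association and the build tag might look like this (the ticket and build numbers are placeholders):
# Reference the ticket in the commit message so the tracker can link code to it.
git commit -m "Fix login redirect loop (refs #231)"
# When the build is cut, tag it so its exact contents are reproducible later.
git tag -a build-142 -m "Build 142: tickets #231, #245"
git push origin build-142
The build is then delivered for...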
Validation
Perhaps the most important step that many shops ignore - at their own peril. Defects found in production are exponentially more expensive to fix than when they're discovered earlier in the process. And validation is often the only step where this occurs in many shops - so make sure yours does it.
This should not be done by the programmer! That's like the fox watching the hen house. And whoever is doing it should be following some sort of plan. We use TestLink. This means each build is validated the same way, so you can identify regression bugs. And this build should be deployed in the same way as you would deploy into production.
If all goes well (we usually need a minimum of 3 builds) the build is validated. And this goes to...
Deployment
This should be a non-event, because you're taking a validated build and following the same steps as you did in testing. It might first hit a staging server with an automated copying process, but the point is that it shouldn't be an issue at this stage, because you validated with the same process.
Conclusion
In terms of knowing what's where, what you really want is a logical way to group changes together. This is where the idea of a build comes in. It's really the unit that should segue between steps in the SDLC. If you already have that, then the ability to understand the state of a given system becomes trivial.
Check out Ant or Maven - these are build and deployment tools used in the Java world which can help you copy/FTP files, make backups, and even check out code from SVN.
You can automate your deployment steps using these tools, for example Ant will allow you declare a set of tasks as part of your deployment. So you could, for example:
Check out a revision using SVNAnt or similar to a directory
Copy (and perhaps zip first) these files to a backup directory
FTP all the files to your web server(s)
Create a report to email to the team illustrating the deployment
Really, you can do almost anything you wish to put time into using Ant. Maven is a little more structured (and newer); you can see a discussion of the differences here.
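If you would rather prototype those four steps in a plain shell script before writing the Ant build, a rough sketch might be (the repository URL, server, and addresses are placeholders; lftp and mailx are assumed to be available, and the plain svn client stands in for SVNAnt):
#!/bin/sh
set -e
rev=$(date +%Y%m%d-%H%M)
# 1. Check out (export) a clean copy of the code to deploy.
svn export svn://example.com/project/trunk "deploy-$rev"
# 2. Zip it into a backup directory before touching the server.
zip -rq "backups/deploy-$rev.zip" "deploy-$rev"
# 3. Upload all the files to the web server over FTP.
lftp -e "mirror -R deploy-$rev /var/www/site; quit" ftp://deploy@www.example.com
# 4. Email the team a short deployment report.
echo "Deployed $rev; rollback archive: backups/deploy-$rev.zip" | mail -s "Deployment $rev" team@example.com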
Hope that helps!
In a nutshell...
You should start with some source control solution - probably Subversion or Git. Once that's in place you can create a script that generates a clean build of your source code and deploys it to your production server(s).
You could do this with a simple batch script or use something like Ant for more control. Here is a simple example of a batch file using Subversion:
svn copy svn://path/to/your/project/trunk -r HEAD svn://path/to/your/project/tags/%version% -m "Tagging release %version%"
svn checkout svn://path/to/your/project/trunk -r HEAD //path/to/target/directory
Ant makes it easy to do things like automatically run unit tests and sync directories. For example:
<sync todir="//path/to/target/directory" includeEmptyDirs="true" overwrite="true">
  <fileset dir="${basedir}">
    <exclude name="**/*.svn"/>
    <exclude name="**/test/"/>
  </fileset>
</sync>
This is really just a starting point. A next step might be a continuous integration solution like Hudson. I would also recommend reading "Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications".
One ColdFusion specific gotcha is to make sure you clear the Application scope when required (to update any singleton components). A common approach here is to use a URL parameter that causes onRequestStart() to call onApplicationStart(). You may also have to clear the trusted cache.
We use a system called AnthillPro: http://www.anthillpro.com
It's commercial software, but it allows us to completely automate our deployment process across multiple servers and operating systems (we currently use it for both ColdFusion and Java, but it can be used for most languages). It has a ton of 3rd-party integrations:
http://www.anthillpro.com/html/products/anthillpro/tool-integrations.html

Change ClickOnce cache directory

We have been using ClickOnce deployment for some time now and all has been fine until recently. One of our clients is now deleting their clients' Documents and Settings directories, which in turn totally erases our ClickOnce cache. From what I have seen, there is no way of setting an alternate location for this, but many of my references online were from 2005.
I was hoping someone may be able to provide a definitive answer as to whether this has changed and there is a way to change the installation directory; if not, do you have any recommendation as to where I might find a solution to this problem?
In the end, we would like the same ClickOnce functionality regarding auto-updates, but with a way of letting the user choose where they want to install their files. Any info would be great! Thanks!!
Dan
I found a post that seems to ask the same question as you do, and according to the answers it received, it is not possible to set the destination folder of a ClickOnce application.
Anyway, I think it's a reasonable assumption to make when developing an application for a client that the application data folder will not be deleted on an ad-hoc basis (unless this is a condition that has been known during the requirement gathering of the project).
If this client of yours doesn't have a very specific (and good) reason to remove the app data folders, I think you should just explain that "no, that's not going to work with our solution".

Do you version control the individual apps or the whole project or both?

OK, so I have a Django project. I was wondering whether I'm supposed to put each app in its own git repository, whether it's better to just put the whole project into one git repository, or whether I should have a git repo for each app and also one for the project?
Thanks.
It really depends on whether those reusable apps are actually going to be reused outside of the project or not.
The only reason you might need an app in a different repo is if other projects with separate repos might need to use it. But you can always split it out later. Making a git repo is cheap, so it's one of those things you can do later if it becomes necessary. Making things complicated up front is just going to frustrate you later, so feel free to wait till you know it's necessary.
YAGNI: it's for more than just code.
I like Tom's and Alex's answers, except they lack the real rationale behind them ("easier to have one repository per development team"? "substantial numbers of people might be interested in pulling out (or watching changes to) separate apps"? Why?).
First of all, one or several repositories is a server-side administration decision (in terms of server resources).
SVN easily sets up several repositories behind one server, whereas ClearCase will have its own "vob_server" process per VOB, meaning you do not want more than a hundred VOBs per (Unix) server (or more than 20-30 on a Windows server).
Git is particular: setting up a repository is cheap, and accessing it can be easy (through a simple shared path) or can involve a process (a git daemon). The latter solution means: not too many repositories accessed directly from the outside (they can be accessed indirectly through submodules referenced by a super-project).
Then there is the client-side administration: how complex will the configuration of a workspace be when one or several repositories are involved? How can the client reference the right configuration (the list of labels needed to reference the correct files)?
SVN would use externals, git submodules, and in both cases, that adds complexity.
That is why Tom's answer can be correct.
Finally, there is the configuration management aspect (referenced in Alex's answer). Can you tag part of a repo or not?
If you can (as in SVN, where you actually make an svn copy of part of a repo), that means you can have a component approach, where several groups of files have their own life cycle and their own tags (set at their own individual pace).
But in Git that is not possible: a tag references a commit, which always concerns the whole repository.
That means a "system-based" approach, where you only want the whole project anyway (as opposed to the "watching separate apps - I've never observed it in real life" bit from Alex's answer). If that is the case (if you want the whole system anyway), this is not important.
But for those of us who think in terms of "independent groups of files", it means a git repository actually represents an individual group of files (with its own rhythm in terms of evolution and tagging), with potentially a super-project referencing those repositories as submodules.
That is not your everyday setup, so for independent projects, I would recommend only one or a few Git repos.
But for a more complex, interdependent set of projects... you need to realize that by the very way Git has been conceived, a "repository" represents a coherent set of files supposed to evolve at the same pace as a whole. And not every "all" can fit in one set of files (if the "all" is complex enough). Hence, several repositories would be required in this case.
And a complex set of inter-dependent projects does happen in real life too ;)
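To make the super-project/submodule idea concrete, here is a minimal sketch (all repository URLs and app names are placeholders):
# Create a super-project that pins two independently-evolving app repos.
git init superproject && cd superproject
git submodule add git@example.com:apps/blog.git apps/blog
git submodule add git@example.com:apps/shop.git apps/shop
git commit -m "Reference blog and shop apps as submodules"

# Elsewhere, a fresh clone must fetch the submodules explicitly:
git clone git@example.com:superproject.git
cd superproject
git submodule update --init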
It is almost always easier to have one repository per development team, no matter how many products you have. Eventually you will want to share code between projects (even several separate Django websites!) and it is much easier to do with only one repository.
By all means set up a nice folder structure WITHIN that repository. However, a single checkout from git (probably of a subfolder) should give you all the files you need for your test website (python, templates, css, jpegs...), and then you just copy httpd.conf and so on to complete the installation.
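For instance, a single-repository layout along these lines (all names are illustrative):
myproject/
    website1/
        settings.py
        urls.py
        apps/
            blog/
            shop/
        templates/
        static/          # css, jpegs, ...
    website2/
        ...
    deploy/
        httpd.conf       # copied into place to complete the installation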
I'm no git expert, but I don't believe the appropriate strategy would be any different from one for hg, or even, in this case, svn, so I'm going to give my 2 cents' worth anyway ;-).
A repo per app makes sense only if substantial numbers of people might be interested in pulling out (or watching changes to) separate apps; this could theoretically be the case, but I've never observed it in real life - mostly, people who are interested in the project are interested in it as a whole. Therefore, I would avoid complications and make the whole project into one repo.
Do those individual apps use shared code and libraries? If so, they really belong in the same repository, which enables you to see the impact of new changes when running a single test suite.
If they are completely separate and agnostic, it really doesn't matter. Just for sanity, ease of building and ease of packaging, I prefer to keep a whole project in a combined repository. However, we typically work in very small teams. So, if no shared code is involved, it's really just a question of which method is most convenient and efficient for all concerned.
If you have reusable apps, put them in a separate repo.
If you are worried about the number of private repositories, have a look at Bitbucket (if you decide to make them open source, I advise GitHub).
There is a nice way of including your own apps in the project, even making use of version tags:
I recently found out a way to do this with buildout and git tags
(I used svn:externals to include an app, but switched from svn to git recently).
I tried mr.developer first, but while not getting it to work I found an alternative for mr.developer:
I found gp.vcsdevelop very easy to use for this purpose.
See https://pypi.python.org/pypi/gp.vcsdevelop
I ended up putting this in my buildout file and got it working at once (I had to add a pip requirements.txt to my app to get it working, but that's a good thing after all):
[buildout]
vcs-update = True
extensions =
    gp.vcsdevelop
    buildout-versions
develop-dir = ./local_checkouts
vcs-extend-develop = git+git@bitbucket.org:<my bitbucket username>/<the app I want to include>.git@0.1.38#egg=<the appname in django>
develop = .
In this case it checks out my app at version tag 0.1.38 into the project, in the subdirectory ./local_checkouts/,
and develops it when running bin/buildout.
Edit: remark, 26 August 2013
While using this solution and editing in this local checkout of the app used in a project, I found out the following:
you will get this warning when trying the normal 'git push origin master' command:
To prevent you from losing history, non-fast-forward updates were rejected
Merge the remote changes before pushing again. See the 'Note about
fast-forwards' section of 'git push --help' for details.
Edit: 28 August 2013
About working in the local_checkouts for shared apps included by gp.vcsdevelop
(and handling the warning discussed in the remark above):
git push origin +master seems to screw up the commit history for the shared code.
So the way to work in the local_checkouts dir is like this:
Go into the local checkout (after a bin/buildout, so the checkout is at master):
cd local_checkouts/<shared appname>
Create a new branch and switch to it like this:
git checkout -b issue_nr1 (e.g. the branch is named here after the issue you are working on)
After you are done working on this branch, use (after the usual git add and git commit):
git push origin issue_nr1
When tested and complete, merge the branch back into master:
First check out master:
git checkout master
Update (probably only needed when there were other commits in the meantime):
git pull
Then merge into master (where you are right now):
git merge issue_nr1
And finally push this merge:
git push origin master
(with special thanks to this simplified guide to git: http://rogerdudler.github.io/git-guide/)
And after a while, to clean up the branches, you might want to delete this branch:
git branch -d issue_nr1
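To summarize, the whole cycle in one place (issue_nr1 is the placeholder branch name from above):
# Start a topic branch for the issue you are working on.
git checkout -b issue_nr1
# ...edit, then the usual git add and git commit...
git push origin issue_nr1
# When tested and complete, merge it back into master:
git checkout master
git pull                  # pick up commits made in the meantime
git merge issue_nr1
git push origin master
# Optional cleanup once the merge is published:
git branch -d issue_nr1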