I am new to Software Configuration Management systems, but am now interested in using Fossil. I have been reviewing the documentation on-and-off for a few days, and have played with the program a little, but I am still unsure how to most appropriately use it to meet my needs, so I would appreciate any advice anyone would like to offer on the following use scenario.
I am working exclusively in Windows environments. I am a sole developer, often working on a number of relatively small projects at a time. For the time being at least, I do not expect to make much use of forking and branching capabilities – I like to think my code development generally progresses fairly linearly. But I regularly need to access and update my code at a number of usually standalone PCs - that is, they are never networked to each other and often do not even have internet access.
I am hoping that Fossil will assist me in two ways: keeping track of milestones in my codebases, including the ability to easily restore a previous version for testing purposes, and making it as simple as possible to ensure I always have every version of the code for every project accessible to me when I sit down to work at any particular PC.
To achieve the second objective, I expect to make a point of always carrying a USB Flash Drive with me as I move from PC to PC. This Flash Drive should contain a number of repository files, one for each project I am concerned with. When I sit down at any particular PC, I should be able to extract from this Flash Drive whichever version of whichever project I need to access. Similarly, when I “finish” working at that PC, if I wish to retain any changes I have made, I expect I should “commit” them back to the relevant repository on the Flash Drive in some way. But the most appropriate way to do all this is unclear to me.
I understand Fossil is generally intended to work with a local copy of a project’s repository on each machine’s local hard disk, with a master repository accessed remotely when required via a network or internet connection. In my case, it seems the master repository would be the relevant repository file on my Flash Drive, but when my Flash Drive is plugged into the machine I am working on, the files on it are effectively local, not remote. So, when I sit down to work at a PC, should I copy the repository file for the project I need to work on onto the PC’s local hard drive and open the version of the code I need from that copy, or should I just open the project repository directly from my Flash Drive? Additionally, if I should copy the repository onto the local hard disk, should I simply copy the repository file using the operating system, or should I use Fossil to clone it to the local hard disk (I do not really understand the difference here)? Then, when I finish working at the PC, if I wish to incorporate any changes I have made back into the repository on my Flash Drive, should I update the repository on my Flash Drive directly, or a copy of the repository on the PC’s local hard disk? If the latter, should I then simply copy the updated repository file onto my Flash Drive (overwriting the previous repository file), or should I “pull” or “push” the changes into the repository file on the Flash Drive – and can I even do this, when the hard-disk-based repository and the Flash-Drive-based repository are effectively both local files on the same PC? I guess I'm getting a bit confused here…
A possible additional complicating factor in the “right” way to do all this is that typically, when I finish working at a PC, I will not want to leave a copy of the source code or the repository on the PC (i.e., the customer’s hardware). I understand deleting the local copies of the repositories undermines the redundancy and backup benefits of using a distributed SCM system, but I guess I will address this by keeping copies of the repositories on my own PCs and ensuring I back up the repository files on the Flash Drive itself reliably.
So any thoughts, experience or advice on the most appropriate way to use Fossil in the above scenario would be most welcome, thank you.
Hope this is still relevant :)
I would suggest the following process:
On your USB drive:
mkdir fossil - to keep your fossil repo files
mkdir src - to keep your project files.
Go to the fossil folder and create repos for your projects A and B
cd fossil
fossil init a.fossil
fossil init b.fossil
Use the .fossil extension, as this will simplify working with the repositories later.
Create a fossil_server.cmd batch file to start Fossil as a server:
SET REPO_PATH=X:\fossil
SET FOSSIL_CMD=Path_to_fossil_exe/fossil.exe
start %FOSSIL_CMD% server %REPO_PATH% --repolist --localhost --port 8089
Start fossil_server.cmd, open a browser, and go to http://localhost:8089
You will see a page listing your repositories, so you can configure them, write wiki pages/tickets, and so on.
Go to the src folder
mkdir a
mkdir b
cd a
fossil open ../../fossil/a.fossil
cd ../b
fossil open ../../fossil/b.fossil
You now have open checkouts for your projects in src/a and src/b.
Add your new files to projects A/B, then do:
cd src/a
fossil addremove
REM to add new files to the repository
fossil commit
REM to commit changes.
Now you can add/modify files in your projects, commit them, and roll back when needed.
Just use:
fossil commit --tag new_tag
to add an easy-to-understand tag to your commit.
More at https://fossil-scm.org/home/doc/trunk/www/quickstart.wiki
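On the original copy-versus-clone question: here is a minimal sketch of one way to work from a PC's local disk against the Flash Drive repository, assuming the drive is X: (as in the REPO_PATH above) and that C:\work is a scratch folder on the PC (both paths are placeholders). Cloning, rather than copying the file with Explorer, records the Flash Drive repository as the remote, so push and pull work even though both files are local to the same machine:
REM clone the Flash Drive repository onto the PC's local disk
fossil clone X:\fossil\a.fossil C:\work\a.fossil
REM open a checkout and work as usual
mkdir C:\work\a
cd C:\work\a
fossil open C:\work\a.fossil
REM ... edit files, fossil addremove, fossil commit ...
REM push any new commits back to the repository on the Flash Drive
fossil push
REM finally, remove the checkout and the local clone from the customer's PC
fossil close
cd C:\work
rmdir /s /q a
del a.fossil
With Fossil's autosync setting left at its default, each commit is also synced back to the clone source automatically, so the explicit push is mostly a safety net. A plain operating-system copy of the .fossil file back to X: can also work for a sole developer, but then the two files are just independent copies rather than repositories that know how to sync with each other.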
I'll give a bit of background as to the setup we have and why. Currently my friend and I want to collaborate on an Unreal Engine project. To do this I've set up an Amazon Lightsail instance running Windows Server. I've then installed Perforce onto this server and added two users. Both of us are able to connect to this server from our local machines (great, I thought!). Our goal was to attach two 'virtual' disks of 32GB to this server via Lightsail's storage option. I've formatted these disks and they are detected as disks D and E on the server. Our goal was to have two depots, one on disk E and one on disk D, the reason being that the C disk was only 20GB (12GB free after Windows).
I have tried multiple things (not much hair left after this) to map the depots I created to each disk, but have had little success and need your wisdom!
I've followed both the process indicated in this support guide (https://community.perforce.com/s/article/2559) via CMD as well as changing the depot storage location in P4Admin on the Server via RDP to the virtual disks D and E respectively.
An example change is from //UE_WIP/... to D:/UE_WIP/... (I have created folders UE_WIP and UE_LIVE on each disk).
When I open up P4V on my local machine I'm able to connect happily (as per the screenshot) and set the workspace to my local machine (it detects both depots). This is where we're getting stuck. I then open up a new Unreal Engine project and save it to the following local directory, E:/DELETE/Perforce/Test/, and open up source control (see image 04). This is great: it detects the workspace and everything connects to the server.
When I click submit to source control I get 'Failed Checking Source Control'. When I try adding via P4V by manually marking the new content folder for add, I get 'file(s) not in client view'.
All we want is the ability to send an Unreal Engine project up to either the WIP Drive Depot or the Live Drive Depot. To resolve this, does it require:
Two different workspaces (one set up for LIVE and one for WIP)?
Do we need to add some local folders to our directory? E:/DELETE/Perforce/UE_WIP & E:/DELETE/Perforce/UE_LIVE?
Do we need to tweak something on the Perforce Server?
Do we need to tweak something in Unreal Engine?
Any and all help would be massively appreciated.
Best,
Ben
https://imgur.com/a/aaMPTvI - Image gallery of issues
Your screenshots don't show how (or if?) you set up your local workspace (i.e. the thing that tells Perforce where the files are on your local workstation).
See: https://www.perforce.com/perforce/r13.1/manuals/p4v/Defining_a_client_view.html
The Perforce server acts as a layer of abstraction between the backend storage (i.e. the depots you've set up) and the client machines where you actually do your work. The location of the depot files doesn't matter at all to the client (any more than, say, the backend filesystem of a web server matters to your web browser); all that matters is how you set up the workspace, which is a simple matter of "here's where my local files are" (the Root) and "here's how my local paths map to depot paths" (the View).
You get the "file not in view" error if you try to add a local file to the depot and it's not in the View you've defined. The fix there is generally to simply fix the Root and/or View to accurately describe where your local files are. One View can easily map to multiple depots (as long as they're on a single server).
(edit)
Specifically, in your case, all of the files you're trying to add are under the path:
E:\DELETE\Perforce\Test\Saved\...
Since you've set up your workspace as:
Client: bsmith
Root: E:\DELETE\Perforce\bsmith
View:
//WIP/... //bsmith/WIP/...
//LIVE/... //bsmith/LIVE/...
then your bsmith workspace consists of these two local paths:
E:\DELETE\Perforce\bsmith\WIP\...
E:\DELETE\Perforce\bsmith\LIVE\...
The files you're trying to add aren't even under your Root, much less under either of the View mappings. That's what the "not in client view" error messages mean.
If you want to add the files where they are, modify your Root and View so that you define your workspace as being where your files are; if you want to have the files in one of the local directories above that you've already defined as being where your workspace lives, you'll have to move them there. If you put your files in bsmith\WIP, then when you add them they'll go to the WIP depot; if you put them in bsmith\LIVE, then they'll go to the LIVE depot, per your View.
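For the first option, a sketch of how the spec might look if you want everything under your existing Test folder to land in the WIP depot (paths taken from your description; adjust as needed):
Client: bsmith
Root: E:\DELETE\Perforce\Test
View:
//WIP/... //bsmith/...
With that Root and View, a local file such as E:\DELETE\Perforce\Test\Saved\foo.uasset maps to //WIP/Saved/foo.uasset in the depot; you would need a second workspace (or a different View layout) to also work against LIVE from the same machine.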
Either way, once they're in your workspace, you can add them to the depot. Simple as that!
I have a build pipeline that builds and tests changes before they are merged to the main line. Once that happens, it would be great if the Bazel actions from that build are available to developers. Unfortunately, the build pipeline runs in the cloud and uses an in-cloud cache, but the developers use an on-premises cache.
I am using https://github.com/buchgr/bazel-remote
Does anyone know if I can just rsync the artifacts from the data directory of the cloud cache to the developers' cache in order to give them access to the pre-built artifacts? Normally, I would just try it out, but I'm concerned about subtle issues that might poison the cache or negatively affect the hit rate, so I'm hoping to hear from someone who understands the code before I go digging.
You can rsync the cache directory contents and use them from another location, but this won't work with a running bazel-remote: the new items will be ignored until bazel-remote is restarted.
Another option would be to use the http_proxy configuration file setting to automatically put/get cache items to/from another bazel-remote instance. An example configuration file was recently added to README.md in the bazel-remote git repository.
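For illustration only, the proxy setup on the on-premises instance might look roughly like this in bazel-remote's YAML config file; the exact field names should be checked against the README, and the directory, size, and URL below are made-up placeholders:
# on-premises bazel-remote config (sketch)
dir: /var/cache/bazel-remote
max_size: 100
port: 8080
# on a local miss, fetch from (and put to) the in-cloud bazel-remote instance
http_proxy:
  url: https://cloud-cache.example.com:8080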
As per best practices, my development team does not store the application config file in a repo for security reasons (we use a config/application.yml file to store configs). However, when we actually develop and deploy, this causes some problems:
A developer needs to add a new external URL that is different depending on which environment the application is running in. Since there is no config file in the repo, he cannot update a single file that gets synced when another developer pulls the code. To make this happen, he updates his local config/application.yml file, then every other developer updates their local file, and then we have to add the new ENV variable to the server's config/application.yml. There has to be a better solution.
If we stored the config/application.yml file in the repo and shared it among everyone and the servers, this would solve the problem of sharing/updating global configs, BUT it opens up the possibility that a developer may accidentally start their local application in production mode and touch live data or spam real users with test emails (this has happened, which is why it's a concern).
Is there a standard best practice for solving these types of problems? It seems I have to sacrifice either productivity or security, but I can't really have both.
I've been thinking about creating a config/development.yml file in the repo that all developers share, which stores all environments EXCEPT production. That way they can share config/ENV items for development and sync them up. But in production, I would have a config/production.yml file that ONLY lives on the servers.
If the application is started in anything except production environment, it loads the development.yml file. If it is started in production, it loads the production.yml file. But since the production.yml file does NOT live in the repo (only on the servers), there's no chance that a developer can accidentally touch live data or spam real users, etc...
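For concreteness, a minimal sketch of how I picture those two files (the keys and URLs are made-up examples):
# config/development.yml - checked into the repo, every environment except production
development:
  external_service_url: https://sandbox.example.com
  send_real_email: false
test:
  external_service_url: http://localhost:4000
  send_real_email: false
# config/production.yml - lives only on the servers, never in the repo
production:
  external_service_url: https://api.example.com
  send_real_email: true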
Have any professional developers tried a scheme like this? I've done a lot of googling but really haven't found a satisfactory solution.
Check out the RailsConfig gem. This allows you to do exactly what you stated, but with the ease of a gem. It also allows you and your dev team to have local YAML files that override settings. The gem loads these files in order, with later files overriding earlier ones:
config/settings.yml
config/settings/#{environment}.yml
config/environments/#{environment}.yml
config/settings.local.yml
config/settings/#{environment}.local.yml
config/environments/#{environment}.local.yml
You would then just have config/settings/production.yml within your .gitignore so that it will not be checked into source control.
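For example, the ignore rules and a per-developer override file might look like the sketch below (file contents are invented placeholders):
# .gitignore
config/settings/production.yml
config/settings.local.yml
config/settings/*.local.yml
config/environments/*.local.yml
# config/settings.local.yml - per-developer overrides, never committed
external_service_url: http://localhost:4000
In application code the values are then read through the constant the gem defines (Settings by default), e.g. Settings.external_service_url.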
I am trying to work out a good way to run a staging server and a production server for hosting multiple Coldfusion sites. Each site is essentially a fork of a repo, with site specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling each site into an EAR file to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed and then an SVN update is run in the production server's working directory, which triggers a code copy from the working directory to the actual live code. This worked fine, but has many moving parts, and still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, and then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person mind you)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
I agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc. The idea being one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base and per-site overlays. Export core, then export the site-specific files over it. This ensures that any core updates not overridden by site-specific changes make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or, perhaps more flexibly, an Ant configuration file - per core & site combination. Track the version numbers of core and site as part of a given build.
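To make the overlay idea concrete, here is a rough sketch of what that step could look like in a build.xml; the target name, directories, and the zip at the end are all invented for illustration:
<project name="site-build" default="assemble">
  <!-- locations of the svn exports: core first, then the site-specific overlay -->
  <property name="core.dir" location="export/core"/>
  <property name="site.dir" location="export/site-a"/>
  <property name="build.dir" location="build/site-a"/>
  <target name="assemble">
    <!-- copy core into the build area -->
    <copy todir="${build.dir}">
      <fileset dir="${core.dir}"/>
    </copy>
    <!-- then lay the site-specific files over it, overwriting where they collide -->
    <copy todir="${build.dir}" overwrite="true">
      <fileset dir="${site.dir}"/>
    </copy>
    <!-- one unit to hand to QA -->
    <zip destfile="${build.dir}.zip" basedir="${build.dir}"/>
  </target>
</project>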
If your software is wrapped in an installer (NSIS, the Nullsoft installer, for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being one file that encompasses the whole build.
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See my answer for deployment on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks what server receives what build file and whatever credentials and connection information is necessary to make that happen. Most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research as to how it's possible to let one server run commands such as extraction or installation remotely.
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying files around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised with the things you can do with Ant.
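As a rough illustration of the FTP and cache-flush pieces (the ftp task needs the optional Apache Commons Net jar on Ant's path, and the host, credentials, and URLs here are placeholders):
<target name="deploy">
  <!-- push the assembled site to the target server over FTP -->
  <ftp server="staging.example.com" userid="deploy" password="${ftp.password}" remotedir="/wwwroot/site-a">
    <fileset dir="build/site-a"/>
  </ftp>
  <!-- then hit a URL, e.g. to flush the application or server cache -->
  <get src="http://staging.example.com/admin/flushcache.cfm" dest="flush-result.html"/>
</target>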
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.
I went to upload a new file to my web server only to get a message in return saying that my disk quota was full... I wasn't using up my allotted space but rather my allotted FILE QUANTITY. My host caps my total number of files at about 260,000.
Checking through my folders I believe I found the culprit...
I have a small DVD database application (Video dB By split Brain) that I have installed and hidden away on my web site for my own personal use. It apparently caches data from IMDB, and over the years has secretly amassed what is probably close to a MIRROR of IMDB at this point. I don't know for certain but I did have a 2nd (inactive) copy of the program on the host that I created a few years back that I was using for testing when I was modifying portions of it. The cache folder in this inactive copy had 40,000 files totalling 2.3GB in size. I was able to delete this folder over FTP but it took over an hour. Thankfully it also gave me some much needed breathing room.
...But now, as you can imagine, the cache folder for the active copy of this web app likely has closer to 150,000 files totalling about 7GB worth of data.
This is where my problem comes in... I use FlashFXP for my FTP client, and whenever I try to delete the cache folder, or even just view its contents, it will sit and try to load a file list for a good 5 minutes and then lose connection to the server...
My host has a web-based file browser and it crashes when trying to do this... as do free online FTP clients like net2ftp.com. I don't have SSH access on this server, so I can't log in directly to delete the files either.
Anyone have any idea how I can delete these files? Is there a different FTP program I can download that would have better success... or perhaps a small script I could run that would be able to take care of it?
Any help would be greatly appreciated.
Anyone have any idea how I can delete these files?
Submit a support request asking for them to delete it for you?
It sounds like it might be time for a command line FTP utility. One ships with just about every operating system. With that many files, I would write a script for my command-line FTP client that goes to the folder in question and performs a directory listing, redirecting the output to a file. Then, use magic (or perl or whatever) to process that file into a new FTP script that runs a delete command against all of the files. Yes, it will take a long time to run.
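As a sketch of that approach, assuming a Unix-like shell and the stock command-line ftp client (the host, login, and cache path are placeholders, and the exact sub-commands vary a bit between clients):
# 1. capture a bare file listing of the cache folder into a local file
ftp -n ftp.example.com <<'EOF'
user myuser mypassword
cd videodb/cache
nlist . cache_files.txt
quit
EOF
# 2. turn each name into a delete command
sed 's/^/delete /' cache_files.txt > delete_commands.txt
# 3. replay the generated commands in a second session (expect it to run for a long time)
{ echo "user myuser mypassword"; echo "cd videodb/cache"; cat delete_commands.txt; echo "quit"; } | ftp -n ftp.example.com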
If the server supports wildcards, do that instead and just delete *.*
If that all seems like too much work, open a support ticket with your hosting provider and ask them to clean it up on the server directly.
Having said all that, this isn't really a programming question and should probably be closed.
We had a question a while back where I ran an experiment to show that Firefox can browse a directory with 10,000 files no problem, via FTP. Presumably 150,000 will also be ok. Firefox won't help you delete, but it might be helpful in capturing the names of the files you need to delete.
But first I would just try the command-line client ncftp. It is well engineered and I have had good luck with it in the past. You can delete a large number of files at once using shell patterns. And it is available for Windows, MacOS, Linux, and many other platforms.
If that doesn't work, you sound like a long-term customer - could you beg your ISP for the privilege of a shell account for a week, so you can log in remotely with PuTTY or ssh and blow away the entire directory with a single rm -r command?
If your ISP provides SSH access, you can use a single rm command to remove the files.
If there is no command-line access, you can try a more capable FTP client like CrossFTP. It works on Windows, Mac, and Linux. When you select a huge number of files on your server for deletion, it queues up the delete operations so you don't need to reload the folder again. When you restart CrossFTP, the queue can also be restored and continued.