Perforce remote depot - build

Let me start by explaining the situation.
I have a server "A" where I submit official versions of my code, and a machine "M" where I generate daily builds. Sometimes I also generate customer-specific versions that I don't submit to server "A", so I created another server "B" where the programmers can submit those.
Now I want to gather both the customer-specific and the official versions on server "B" and have machine "M" get all of its source code from that server.
Any idea how to do that?
NB: some people have suggested using a remote depot, but I don't know what the impact of that would be or how to use it.

Here's a good place to get started learning about Remote Depot functionality: http://www.perforce.com/perforce/doc.current/manuals/p4sag/03_superuser.html#1065679

A remote depot is a one-way data pipe from one server to another. It sounds like it would be an effective way to move work from A to B.
If you want to keep all the data on one server, you can use Perforce's directory structure, branching, and access control to organize and manage all the code. That's a bit of a project to set up, but it might be worth it in the long run.


Qt cross platform safe way of storing data in an SQLite database?

I'm trying to figure out the safest way of storing chat history for my application on the client's computer. By "safe" I mean that my application is actually allowed to read/write the SQLite database. Clients will range across Windows, OS X and Linux users, so I need a way on each platform of determining where I'm allowed to create a SQLite database for storing the message history.
A problem I've run into in the past, for example, is people using terminal clients such as Citrix, where the user is not allowed to write to almost any directory and the drive is often a shared network drive.
Some ideas:
Include an empty database.db with prebuilt tables in my installer and store the database next to my executable. However, I'm almost certain that not all clients will be allowed to read/write there, for example Windows users who do not have admin rights.
Use QStandardPaths::writableLocation and create the database on first run (sketched below).
Locate the user's home dir and create the database on first run.
Any ideas for a really good solution to this problem?
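For what it's worth, here is a minimal sketch of the idea behind the QStandardPaths option, written in Python purely for illustration: resolve a per-user, writable data directory and create the database there on first run. In a Qt C++ application, QStandardPaths::writableLocation(QStandardPaths::AppDataLocation) returns the equivalent directory; the folder and file names below are hypothetical.

import os
import sqlite3
import sys
from pathlib import Path

APP_NAME = "MyChatApp"  # hypothetical application name

def user_data_dir():
    # Roughly what QStandardPaths::AppDataLocation resolves to on each platform.
    if sys.platform.startswith("win"):
        base = os.environ.get("APPDATA", str(Path.home()))
    elif sys.platform == "darwin":
        base = str(Path.home() / "Library" / "Application Support")
    else:
        base = os.environ.get("XDG_DATA_HOME", str(Path.home() / ".local" / "share"))
    return Path(base) / APP_NAME

def open_history_db():
    data_dir = user_data_dir()
    data_dir.mkdir(parents=True, exist_ok=True)          # first run: create the folder
    db = sqlite3.connect(str(data_dir / "history.db"))   # first run: creates the file
    db.execute("CREATE TABLE IF NOT EXISTS messages ("
               "id INTEGER PRIMARY KEY, sent_at TEXT, body TEXT)")
    return db

Per-user locations like these are writable even for non-admin users, which is exactly what rules out the install directory in option 1.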

Copying Files from Linux Servers

Is there any way to save a file from the Linux server to my desktop? In my college we are using Windows XP and use PuTTY to connect to the college Linux server. We have individual accounts on the server. I have created a lot of cpp files on it and now want to copy them to my pendrive so I can work with them on my home PC. Also, please mention a way to copy from the desktop to the server (i.e., to the home directory of my account on it).
Thank you for your help in advance. :) :D
WinSCP does this very nicely in either SFTP, SCP, FTPS or FTP.
Depending on your permissions and what is on the box, you can email the contents of files to yourself.
mail -s "Subject" myemail@somewhere.com < /home/me/file.txt
You can always test with something simple:
mail -s "Hi" myemail@somewhere.com
Set up an online account for a version control system (GIT, Mercurial, Bazaar, SVN), and store your files there. That way, you can just "clone", "pull" or "update" the files wherever you are that has a reasonable connection to the internet.
There are quite a few sites that have free online version control systems, so it's mostly a case of "pick a version control system", and type "free online vcs server" into your favourite search engine (replace vcs with your choice of version control system).
An added benefit is that you will have version control and thus be able to go back and forth between different versions. That is very useful when you realise that all the changes you made this morning were a bad route to follow (I do that sometimes, still, after over 30 years of programming - I just tend to notice sooner when I've messed up and go back to the original code) and you want to go back to where you were yesterday afternoon, before you started breaking it.

Business Process "Observer" application

My client is requesting to be notified any time one of their business processes fails for any reason. I had the idea of writing a separate application that will run as an "observer" and check various parts of the process.
An example would be a daily file that is generated and uploaded to an FTP location. The "Observer" might have the following "tests":
Connect to the FTP
Go to folder where file should exist
Find file with naming convention
Verify create date of file
Failure of any step will send an alert email and also log to a report (both, in case either the database or email is down).
My question is.... Are there any products out there that do something close to this? I'd rather buy if there is something robust out there. If not, this almost seems like a unit test platform... Anything out there for testing I could potentially repurpose?
As an FYI, we are a Microsoft/Windows based shop.
Thx in advance!
You could even use a Continuous Integration framework for this. They normally monitor source code repositories and build & test things, but they could be used for this as well.
For instance, Hudson, Jenkins and CruiseControl.NET are a few open source ones that are good and can easily be set up for something like this. Just change the monitoring of a repository to the filesystem or FTP, and write a small script which checks what you need. Everything else comes for free with the framework, i.e. email, a web interface for monitoring, and running things.
Just an idea.
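For illustration, here is a rough sketch of the kind of check script such a framework (or a plain scheduled task) could run. It follows the four "tests" from the question; the host, folder, file-naming convention and email addresses are hypothetical placeholders.

from datetime import datetime
from ftplib import FTP
import smtplib
from email.message import EmailMessage

def check_daily_file():
    # 1. Connect to the FTP (hypothetical host and credentials).
    ftp = FTP("ftp.example.com")
    ftp.login("observer", "secret")
    # 2. Go to the folder where the file should exist.
    ftp.cwd("/outgoing/daily")
    # 3. Find the file matching the naming convention, e.g. report_YYYYMMDD.csv.
    expected = "report_%s.csv" % datetime.utcnow().strftime("%Y%m%d")
    if expected not in ftp.nlst():
        raise RuntimeError("missing file: " + expected)
    # 4. Verify the date of the file (MDTM replies '213 YYYYMMDDHHMMSS').
    stamp = ftp.sendcmd("MDTM " + expected).split()[1]
    modified = datetime.strptime(stamp, "%Y%m%d%H%M%S")
    if modified.date() != datetime.utcnow().date():
        raise RuntimeError("stale file: " + expected)
    ftp.quit()

def alert(reason):
    # Write to a plain log file first, so the failure is still recorded
    # even if the mail server (or a database) happens to be down too.
    with open(r"C:\observer\report.log", "a") as log:
        log.write("%s FAILED: %s\n" % (datetime.utcnow().isoformat(), reason))
    msg = EmailMessage()
    msg["Subject"] = "Observer: daily FTP file check failed"
    msg["From"] = "observer@example.com"
    msg["To"] = "ops@example.com"
    msg.set_content(reason)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    try:
        check_daily_file()
    except Exception as exc:
        alert(str(exc))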

How to protect your software from being disabled

We have this client application running on Windows. Its core consists of 2 NT services. The users, mostly travelling laptop users, have admin rights, so they can, if they know what they are doing, disable the services and get around our software.
What is "standard" approach to solving this issue?
Any thoughts? I have a "hidden" application that runs at startup and checks the client status. If the services are disabled, it re-enables them, schedules itself to run again in an hour, and keeps doing the same thing continuously... If I can hide this application well enough, that should work... Not the prettiest approach...
Other ideas?
Thanks
Reza
Let them.
Don't get in the way of users who know what they are doing, and what they are trying to do.
Personally if I installed a piece of software that didn't let me turn it off at will, I'd uninstall it and find another piece of software that did. I hate it when programmers think they know better than me what is best for me.
EDIT:
I have reformatted my hard drive to get rid of such applications. For example, rootkits.
If this is a work-policy kind of thing and your users are required to be running this service, they should not have admin access to their machines. Admin users can do anything to the box.
(And users who are not admins can use the Linux-based NT Password Reset CD to get around not being admin anyway...)
What is "standard" approach to solving this issue?
The standard approach is NOT to do things behind the users back.
If your service should be on, then warn the user when they turn it off.
If you are persistent, warn them when the machine boots (and it is not on).
If you want to be annoying, warn them when they log in (and it is not on).
If you want your software crushed, warn more often or explicitly do stuff the user does not want you to do.
Now, if you are the IT department of your company:
Educate your users and tell them not to disable company software on the company laptop. Doing so should result in disciplinary action. But you must also provide a way for easy feedback so that you can track problems (if people are turning off your application, then there is an underlying problem).
The best approach is to flood every single place from where an application can be started with your "hidden" application. Even if your users can find some places, they will miss others. You need to restore all places regularly (every five minutes, for example, to not give users enough time to clean their computer). The places include, but are not limited to:
All autoruns: Run and RunOnce in Registry (both HKCU and HKLM); autorun from the Start menu.
Winlogon scripts.
Task scheduler.
Explorer extensions: shell extensions, toolbars etc.
Replace the command in HKCR\exefile\shell\open\command so that it first starts your application and then executes the original command. You can do this with .bat or .cmd files, etc.
A lot of other places. You can use Sysinternals Autoruns to get a list of the most common ones (be sure to check Options > Include empty locations).
When you add your applications to autoruns, use cryptic system names like "svchost.exe". Put your application into system folders. Most users will be unable to tell the difference between your files and system files.
You can try replacing the executable files of MS Word and other common applications with your own. When it is run, check that your main application is running, then run the original application (copy the originals before replacing them). Be sure to extract the icons from the applications you replace and use them.
You can use multiple applications/services. If one is stopped, another one notices and starts it again. So they protect each other.
With most standard services you can configure much of what you have described through the service recovery settings and by disabling the stop option.
So what makes you want stricter control over your service?
For example, you're making a (security?) 'service' that you want to be considered as important as Windows allowing the user to access a desktop or run a remote procedure.
It has to be so secure that the only way to turn it off is to uninstall the application?
If you were to stop this service, you would want Winlogon to reset and return to the login page, or reboot the whole PC.
See corporate desktop management tools (like Novell Xen)

How do I run one version of a web app while developing the next version?

I just finished a Django app that I want to get some outside user feedback on. I'd like to launch one version and then fork a private version so I can incorporate feedback and add more features. I'm planning to do lots of small iterations of this process. I'm new to web development; how do websites typically do this? Is it simply a matter of copying my Django project folder to another directory, launching the server there, and continuing my dev work in the original directory? Or would I want to use a version control system instead? My intuition is that it's the latter, but if so, it seems like a huge topic with many uses (e.g. collaboration, which doesn't apply here) and I don't really know where to start.
1) Separate URLs: www.yoursite.com vs test.yoursite.com. You can also do www.yoursite.com and www.yoursite.com/development, etc. You could also create a /beta or /staging.
2) Keep separate databases, one for production and one for development. Write a script that will copy your live database into a dev database (a rough sketch follows after this list). Keep one database for each type of site you create (you may want to create a beta or staging database for your testers). Do your own work in the dev database. If you change the database structure, save the changes as a .sql file that can be loaded and run on the live site's database when you turn those changes live.
3) Merge features into your different sites with version control. I am currently playing with a Subversion setup for web apps that has a stable branch (trunk), one for staging, and one for development. Development tags and branches get merged into staging, and then staging tags/branches get merged into stable. Version control will let you manage your source code in any way you want. You will have to find a methodology that works for you and use it.
4) Consider build automation. It will publish your site for you automatically. Take a look at http://ant.apache.org/. It can drive a lot of the work of automatically checking out your code and uploading it to each specific site as you need.
5) Toy of the month: there is a utility called cURL that you may find valuable. It does a lot from the command line. It might be a good fit if you don't want to use all or any of Ant.
Good luck!
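To make point 2 concrete, here is a rough sketch of the "copy live into dev" script, assuming MySQL and hypothetical database names and credentials; adapt it to whatever database your sites actually use (and keep real passwords out of scripts).

import subprocess

# Dump the live database and pipe it straight into the dev database.
DUMP = ["mysqldump", "--user=deploy", "--password=secret", "livesite_db"]
LOAD = ["mysql", "--user=deploy", "--password=secret", "devsite_db"]

dump = subprocess.Popen(DUMP, stdout=subprocess.PIPE)
load = subprocess.Popen(LOAD, stdin=dump.stdout)
dump.stdout.close()  # let mysqldump see a broken pipe if mysql exits early
load.communicate()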
You would typically use version control, and have two domains: your-site.com and test.your-site.com. Then your-site.com would always update to trunk, which is the latest shipping version. You would do your development in a branch of trunk, and test.your-site.com would update to that. Then you periodically merge changes from your development branch into trunk.
Jas Panesar has the best answer if you are asking this from a development standpoint, certainly. That is, if you're just asking how to easily keep your new developments separate from the site that is already running. However, if your question was actually asking how to run both versions simultaneously, then here's my two cents.
Your setup has a lot to do with this, but I always recommend running process-based web servers in the first place. That is, not to use threaded servers (less relevant to this question) and not embedding in the web server (that is, not using mod_python, which is the relevant part here). So, you have one or more processes getting HTTP requests from your web server (Apache, Nginx, Lighttpd, etc.). Now, when you want to try something out live, without affecting your normal running site, you can bring up a process serving requests that never gets the regular requests proxied to it like the others do. That is, normal users don't see it.
You can set up a subdomain that points to this process, and you can install middleware that redirects "special" users to the beta version. This allows you to roll out new features to some users, but not others.
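As an illustration, here is a minimal sketch of that kind of middleware (new-style Django middleware; the beta host name and the "beta-testers" group used to mark the special users are hypothetical):

from django.http import HttpResponseRedirect

BETA_HOST = "beta.your-site.com"  # hypothetical subdomain served by the new process

class BetaRedirectMiddleware:
    # Send opted-in users to the beta deployment; leave everyone else alone.

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        user = getattr(request, "user", None)
        opted_in = bool(user and user.is_authenticated
                        and user.groups.filter(name="beta-testers").exists())
        if opted_in and request.get_host() != BETA_HOST:
            return HttpResponseRedirect("https://%s%s" % (BETA_HOST, request.get_full_path()))
        return self.get_response(request)

Add it to the MIDDLEWARE list in settings.py (after the authentication middleware, so request.user is populated) and only the "special" users ever reach the beta process.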
Now, the biggest issues come with database changes. Schema migration is a big deal and something most of us never pay attention to. I think that running side-by-side is great, because it forces you to do schema migrations correctly. That is, you can't just shut everything down and run lengthy schema changes before bringing it back up. You'd never see any remotely important site doing that.
The key is those small steps. You need to always have two versions of your code able to access the same database, so changes you make for the new code need to not break the old code. This breaks down into a few steps you can always make:
You can add a column with a default value, or that is optional. The new code can use it, and the old code can ignore it.
You can update the live version with code that knows to use a new column, at which point you can make it required.
You can make the new version ignore a column, and when it becomes the main version, you can delete that column.
You can make these small steps to migrate between any schemas. You can iteratively add a new column that replaces an old one, roll out the new code, and remove the old column, all without interrupting service.
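For illustration, here are those steps as a toy example against SQLite, with hypothetical table and column names (the same sequence works on any database, each with its own ALTER TABLE dialect):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profile (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO profile (full_name) VALUES ('Ada Lovelace')")

# Step 1: add the new column as optional / with a default.
# New code can start using it; old code simply never mentions it.
conn.execute("ALTER TABLE profile ADD COLUMN display_name TEXT DEFAULT ''")

# Step 2: once the live code writes the new column, backfill the old rows.
conn.execute("UPDATE profile SET display_name = full_name WHERE display_name = ''")

# Step 3: when no running version reads the old column any more, drop it.
# (DROP COLUMN needs SQLite 3.35+; older versions require a table rebuild.)
conn.execute("ALTER TABLE profile DROP COLUMN full_name")
conn.commit()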
That said, it's your first web app? You can probably break it. You probably have few users :-) But it is fantastic that you're even asking this question. Many "professionals" fail to ever ask it, and even fewer answer it.
What I do is export a copy of my SVN repository and put the files on the live production server, then keep a virtual machine with a development working copy and commit the changes to the repo when I'm done.