We have a website with many pages based on PmWiki.
Now we have to move everything to a different server.
As the system administrators have already made our pages unavailable (without telling us!), we can only access our stuff via sftp.
What is the easiest way to move our PmWiki pages?
Thank You
PmWiki stores all the pages in the wiki.d directory, and you can simply copy those files. If you have any customisations (recipes), you will want to copy the cookbook, pub and local directories as well, though you may need to edit the local configuration to match the new server. If you have uploads, copy the uploads directory too. You may also choose to copy the PmWiki installation itself (unless you want to take the opportunity to upgrade).
The PmWiki Backup and Restore page covers the concepts; you can think of this as a backup from one server and a restore to the other.
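Since you only have sftp access, a minimal transfer could look like the following (a sketch only; /var/www/pmwiki is a placeholder for wherever the old installation actually lives):

sftp user@old-server
get -r /var/www/pmwiki/wiki.d
get -r /var/www/pmwiki/local
get -r /var/www/pmwiki/cookbook
get -r /var/www/pmwiki/pub
get -r /var/www/pmwiki/uploads

Upload those directories into a fresh PmWiki installation on the new server, then review local/config.php for anything server-specific (paths, $ScriptUrl, upload settings).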
I'll give a bit of background on the setup we have and why. Currently a friend and I want to collaborate on an Unreal Engine project. To do this I've set up an Amazon Lightsail instance running Windows Server. I've then installed Perforce onto this server and added two users. Both of us are able to connect to this server from our local machines (great, I thought!). Our goal was to attach two 'virtual' disks of 32GB to this server via Lightsail's storage option. I've formatted these disks and they are detected as disks D and E on the server. We wanted two depots, one on disk E and one on disk D, the reason being that the C drive is only 20GB (12GB free after Windows).
I have tried multiple things (not much hair left after this) to map the depots I created to each disk, but have had little success and need your wisdom!
I've followed the process indicated in this support guide (https://community.perforce.com/s/article/2559) via CMD, and I've also changed the depot storage location in P4Admin on the server (via RDP) to the virtual disks D and E respectively.
An example change is from //UE_WIP/... to D:/UE_WIP/... (I have created UE_WIP and UE_LIVE folders on the respective disks).
When I open up P4V on my local machine I'm able to connect happily (as per the screenshot) and set up a workspace on my local machine (it detects both depots). This is where we're getting stuck. I then create a new Unreal Engine project, save it to the following local directory, E:/DELETE/Perforce/Test/, and open up source control (see image 04). This is great, it detects the workspace and everything connects to the server.
When I click Submit to Source Control I get 'Failed Checking Source Control'. When I try adding via P4V by manually marking the new content folder for add, I get 'file(s) not in client view'.
All we want is the ability to send an Unreal Engine project up to either the WIP depot or the LIVE depot. To resolve this, do we need:
Two different workspaces (one set up for LIVE and one for WIP)?
Do we need to add some local folders to our directory? E:/DELETE/Perforce/UE_WIP & E:/DELETE/Perforce/UE_LIVE?
Do we need to tweak something on the Perforce Server?
Do we need to tweak something in Unreal Engine?
Any and all help would be massively appreciated.
Best,
Ben
https://imgur.com/a/aaMPTvI - Image gallery of issues
Your screenshots don't show how (or if?) you set up your local workspace (i.e. the thing that tells Perforce where the files are on your local workstation).
See: https://www.perforce.com/perforce/r13.1/manuals/p4v/Defining_a_client_view.html
The Perforce server acts as a layer of abstraction between the backend storage (i.e. the depots you've set up) and the client machines where you actually do your work. The location of the depot files doesn't matter at all to the client (any more than, say, the backend filesystem of a web server matters to your web browser); all that matters is how you set up the workspace, which is a simple matter of "here's where my local files are" (the Root) and "here's how my local paths map to depot paths" (the View).
You get the "file not in view" error if you try to add a local file to the depot and it's not in the View you've defined. The fix there is generally to simply fix the Root and/or View to accurately describe where your local files are. One View can easily map to multiple depots (as long as they're on a single server).
(edit)
Specifically, in your case, all of the files you're trying to add are under the path:
E:\DELETE\Perforce\Test\Saved\...
Since you've set up your workspace as:
Client: bsmith
Root: E:\DELETE\Perforce\bsmith
View:
//WIP/... //bsmith/WIP/...
//LIVE/... //bsmith/LIVE/...
then your bsmith workspace consists of these two local paths:
E:\DELETE\Perforce\bsmith\WIP\...
E:\DELETE\Perforce\bsmith\LIVE\...
The files you're trying to add aren't even under your Root, much less under either of the View mappings. That's what the "not in client view" error messages mean.
If you want to add the files where they are, modify your Root and View so that you define your workspace as being where your files are; if you want to have the files in one of the local directories above that you've already defined as being where your workspace lives, you'll have to move them there. If you put your files in bsmith\WIP, then when you add them they'll go to the WIP depot; if you put them in bsmith\LIVE, then they'll go to the LIVE depot, per your View.
Either way, once they're in your workspace, you can add them to the depot. Simple as that!
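For example, if you want to keep the files where they are and have everything under E:\DELETE\Perforce\Test go to the WIP depot, a workspace spec along these lines would do it (a sketch only; the depot and client names are taken from your setup above, so adjust as needed):

Client: bsmith
Root:   E:\DELETE\Perforce\Test
View:
    //WIP/... //bsmith/...

With that Root and View, E:\DELETE\Perforce\Test\Saved\... maps to //WIP/Saved/..., so the add succeeds. If you'd rather keep both depots in one workspace, keep your original two-line View instead and move the Unreal project into the bsmith\WIP or bsmith\LIVE folder.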
I have hosted an intranet website on a WAMP server which is working as expected. Now I would like to configure a backup site for it. I mean, if it goes down for any reason, how do I counter that?
My challenge is that I cannot change the URL, as it has already been distributed to many users in the past.
My URL is like
http://ipaddress/MyProject/Running/Index.html
I want to know how I can have a backup website running on the same URL to maintain high availability.
Since WAMP applications do not provide their own backup APIs, you need to stop all services if you want to take a full file-system-level backup; otherwise you'll get a lot of "file locked" errors and/or your backups will be in an inconsistent state.
So yes, you can just make a copy of your C:\wamp directory, but stop all your WAMP-related services before doing so (and remember to restart them afterwards).
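As a rough sketch, a scheduled batch file could do it (service names differ between WAMP versions, e.g. wampapache vs wampapache64, and D:\Backups\wamp is just a placeholder destination):

net stop wampapache64
net stop wampmysqld64
robocopy C:\wamp D:\Backups\wamp /MIR /NP /NDL
net start wampmysqld64
net start wampapache64

The /MIR option keeps the backup identical to C:\wamp, including removing files that no longer exist in the source.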
If you run into any problems, please leave a comment.
You could use a slave server with the existing one acting as the master. I'm not sure of the exact technology, but I know Windows allows two or more servers to hold the same information.
As per best practices, my development team does not store the application config file in a repo for security reasons (we use a config/application.yml file to store configs). However, when we actually develop and deploy, this causes some problems:
A developer needs to add a new external URL that is different depending on what environment the application is running in. Since there is no config file in the repo, he cannot update a single file that gets synced when another developer pulls the code. To make this happen, he updates his local config/application.yml file, then every other developer updates their local file, and then we have to add the new ENV variable to the server's config/application.yml. There has to be a better solution.
Storing the config/application.yml file in the repo and sharing it among everyone and the servers would solve the problem of sharing/updating global configs, BUT it opens up the possibility that a developer may accidentally start their local application in production mode and touch live data or spam real users with test emails (this has happened, which is why it's a concern).
Is there a standard best practice for solving these types of problems? It seems I have to sacrifice either productivity or security, and can't really have both.
I've been thinking about creating a config/development.yml file in the repo that all developers share, which stores all environments EXCEPT production. That way they can share config/ENV items for development and sync them up. But in production, I would have a config/production.yml file that ONLY lives on the servers.
If the application is started in anything except production environment, it loads the development.yml file. If it is started in production, it loads the production.yml file. But since the production.yml file does NOT live in the repo (only on the servers), there's no chance that a developer can accidentally touch live data or spam real users, etc...
Have any professional developers tried a scheme like this? I've done a lot of googling but really haven't found a satisfactory solution.
Check out the RailsConfig gem. This allows you to do exactly what you described, but with the ease of a gem. It also lets you and your dev team have local YAML files that override settings. By default it loads settings from the following files, in this order (later files override earlier ones):
config/settings.yml
config/settings/#{environment}.yml
config/environments/#{environment}.yml
config/settings.local.yml
config/settings/#{environment}.local.yml
config/environments/#{environment}.local.yml
You would then just have config/settings/production.yml within your .gitignore so that it will not be checked into source control.
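As a minimal sketch (the external_service_url key and the hostnames are made up for illustration):

# config/settings/development.yml -- checked into the repo, shared by all developers
external_service_url: http://sandbox.example.com/api

# config/settings/production.yml -- listed in .gitignore, lives only on the servers
external_service_url: https://api.example.com

# anywhere in the application
Settings.external_service_url

A developer who accidentally starts the app in production mode locally won't even have a production.yml, so the live endpoints simply aren't there to hit.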
I currently work with a web dev team and we have 100+ GitHub repos, each for a different e-commerce website that has an instance on AWS. The developers use the GitHub app to upload their changes to the servers, and do this multiple times a day.
I'm trying to find the easiest way for us to remove old, deleted files from our servers after we delete and sync GitHub locally.
To make it clear, say we have an index.html, page1.html and page2.html. We want to remove page1.html, so they delete page1.html and sync through the GitHub app. The file is no longer visible in the repo, but for us to completely remove the file I must also SSH into our AWS server, go to the www directory, find page1.html and remove it there too. Is there an easier way for the developers, who do not use SSH and the command line, to get rid of those files in terms of syncing with GitHub? It becomes a pain to have to SSH into many different servers and then determine which files were removed from the repo so that I can remove them there.
Thanks in advance
Something we do with our repo is use tags (releases), and then through automation (Chef in our case) we tell it to pull the new tag. It sounds like this wouldn't necessarily work for you, but what Chef actually does with the tag might be of interest.
It pulls the tag and then updates a symlink (and gracefully restarts Apache). This means there's zero downtime (the symlink updates instantly) and, because it's pulling a fresh copy, any deleted files are gone.
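Stripped of the Chef part, the pattern is roughly this (the repo URL, tag and paths are placeholders, and it assumes the web server's document root points at /var/www/current):

git clone --branch v1.2.3 --depth 1 https://github.com/yourorg/yoursite.git /var/www/releases/v1.2.3
ln -sfn /var/www/releases/v1.2.3 /var/www/current
apachectl graceful

Because each release is a fresh clone, anything deleted from the repo simply isn't present in the new release directory, so there's nothing left over to clean up by hand.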
I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to be able to manually sync files to their application folder. We have multiple servers for each instance that is randomly picked by BigIP. So just writing to the C:\ drive would only copy the file to the server the developer is currently accessing. So if the developer were to close out the browser and go right back in to make sure their changes worked, if they happen to get sent to a different server, they won't see their change.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local C:\ drive). To be able to do this, you have to change the user the ColdFusion 9 Application Server service runs as. By default, this service runs as the Local System account, which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
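If you'd rather script the change than use the Services console, something along these lines should work from an elevated command prompt (the exact service name varies by ColdFusion version and instance, so check it in services.msc first; the account and password are placeholders):

sc config "ColdFusion 9 Application Server" obj= "DOMAIN\cfservice" password= "S3cret!"
net stop "ColdFusion 9 Application Server"
net start "ColdFusion 9 Application Server"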
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck using this tool. You will need access to the cfexecute ColdFusion tag and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows. You can read some documentation here and here. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify username as \\server\username. NOTE I would suggest using a share that you create rather than the administrative share c$ but that is what you had in your example.
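For instance, with a dedicated share the first cfexecute's arguments attribute might end up as something like this (server, share, account and password are all placeholders):

arguments="use \\fileserver\cfsync S3cret! /user:fileserver\cfcopy"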
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above followed by the desired directory path. The [files_to_copy] is a list of files or wildcard (*.*) and the [options] are RoboCopy's options. See the links that I have included for the full list of options. It is extensive. To mirror a folder structure see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated. And the /XA:SH to exclude system and hidden files. And the /XO to not bother copying older files. You can exclude other files/directories specifically or by using wildcards.
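As an illustration, with the example paths above the arguments attribute could end up looking like this (values are illustrative only):

arguments="C:\folder1\folder2\folder3\folder4\folder5 \\server\c$\folder1\folder2\folder3\folder4\folder5 *.* /E /PURGE /XA:SH /XO /NDL /NP"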
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before I would highly recommend playing around with the options/functionality using the command line first. Then once you get it working to your liking just paste those options into the code above.
I ran into a similar issue and it had me scratching my head as well. We are using Active Directory along with a UNC path to \\SERVER\SHARE\webroot. The application was working fine with the exception of using CFFILE to create a directory. We were running our ColdFusion service as a domain account, and permissions were granted on the webroot folder (residing on the UNC server). This same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder but still had no luck.
Ultimately, what I found was causing the problem was that the Inetpub folder (the parent folder of our webroot) had sharing turned on, but that sharing did not include Read/Write access for our ColdFusion service domain account.
So while we had sharing enabled on Inetpub and stronger user permissions set on the Inetpub/webroot folder, the share permissions (or lack thereof) took precedence over the more granular webroot security permissions.
Hope this helps someone else.