I have hosted an intranet website on a WAMP server, and it is working as expected. Now I would like to configure a backup site for it. I mean, if it goes down for any reason, how do I handle that?
My challenge is that I cannot change the URL, as it has already been distributed to many users.
My URL is like
http://ipaddress/MyProject/Running/Index.html
I want to know how I can have a backup website running on the same URL to maintain high availability.
Since WAMP applications do not provide their own backup APIs, you need to stop all services if you want to take a full file-system level backup; otherwise you'll get a lot of "file locked" errors and/or your backups will be in an incoherent state.
So yes, you can just make a copy of your C:\wamp directory, but stop all your WAMP-related services beforehand (and remember to restart them afterwards).
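As a rough sketch of that approach in Python (assuming the default WAMP service names wampapache and wampmysqld and a backup target on D: -- both are assumptions, so adjust them to your install), a stop/copy/restart script might look like this:

import shutil
import subprocess
from datetime import datetime

# Typical WAMP service names; these are assumptions -- check services.msc
# on your machine and adjust accordingly.
SERVICES = ["wampapache", "wampmysqld"]
WAMP_DIR = r"C:\wamp"
BACKUP_DIR = r"D:\backups\wamp_" + datetime.now().strftime("%Y%m%d_%H%M%S")

def set_services(action):
    # Run "net start" or "net stop" for every WAMP-related service.
    for name in SERVICES:
        subprocess.run(["net", action, name], check=False)

set_services("stop")                       # stop Apache and MySQL first
try:
    shutil.copytree(WAMP_DIR, BACKUP_DIR)  # full file-system level copy
finally:
    set_services("start")                  # always bring the stack back up

You could run something like this from a scheduled task during off-hours so the brief outage is less noticeable.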
If you run into any problems, please leave a comment.
Use a secondary (slave) server with the existing one acting as the master. I'm not sure of the exact technology, but I know Windows allows two or more servers to keep the same content in sync.
I'm developing a tool using Django for internal use at my organization. It's used to search and tag documents (using Haystack and Solr) and will be employed on different projects. My team currently has a working prototype, and we want to deploy it 'in the wild.'
Our security environment is strict. Project documents are located in subfolders on a network drive, and access to these folders is restricted based on users' Windows credentials (we also have an MS SQL Server that uses the same credentials). A user can only access the projects they are involved in. Since we're an exclusively Microsoft shop, if we want to deploy our app on the company intranet, we'll need to use an IIS server to deal with these permissions. No one on the team has the requisite knowledge to work with IIS or Active Directory, and our IT department is already over-extended. In short, we're not web developers, and we don't have immediate access to anybody experienced.
My hacky solution is to forgo IIS entirely and have each end user run a lightweight server locally (namely, CherryPy) while each retains access to a common project-specific database (e.g. a SQLite DB living on the network drive, or a DB on the MS SQL server). In order to use the tool, they would just launch an all-in-one batch script and point their browser to 127.0.0.1:8000. I recognize how ugly this is, but I feel like it leverages the security measures already in place (note that we never expect more than 10 simultaneous users on a given project). Is this a terrible idea, and if so, what's a better solution?
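For concreteness, here's roughly what the launcher behind that batch script would do (just a sketch; "myproject.settings" is a placeholder for our actual settings module):

import os
import webbrowser

import cherrypy
from django.core.wsgi import get_wsgi_application

# "myproject.settings" is a placeholder for the real settings module.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

cherrypy.tree.graft(get_wsgi_application(), "/")   # serve the Django app at the root
cherrypy.config.update({
    "server.socket_host": "127.0.0.1",
    "server.socket_port": 8000,
    "engine.autoreload.on": False,
})

cherrypy.engine.start()
webbrowser.open_new("http://127.0.0.1:8000")       # spare users from typing the URL
cherrypy.engine.block()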
I've dealt with a similar situation (primary development was geared toward a normal deployment situation, but some users have a requirement to use the application on a standalone workstation). Rather than deploy web and db servers on a standalone workstation, I just run the app with the Django internal development server and a SQLite DB. I didn't use CherryPy, but hopefully this is somewhat useful to you.
My current solution gives users who aren't familiar with the command line (and who also have trouble remembering the URL to put in their browser) a nice executable, while remaining relatively easy to develop:
Use PyInstaller to package the Django app into a single executable. Once you figure this out, don't continue to do it by hand; add it to your continuous integration system (or at least write a script).
Modify the manage.py (a sketch follows this list) to:
Detect whether the app is frozen by PyInstaller and was run with no arguments (i.e. the user executed it by double-clicking it), and if so, run execute_from_command_line(..) with arguments to start the Django development server.
Right before running execute_from_command_line(..), spawn a thread that does a time.sleep(2) (to let the development server come up fully) and then calls webbrowser.open_new("http://127.0.0.1:8000").
Modify the app's settings.py to detect whether the app is frozen and change things around, such as the path to the DB server, enabling the development server, etc.
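Here's a minimal sketch of those manage.py changes (the settings module name is a placeholder; the port and two-second delay are just the values described above):

#!/usr/bin/env python
import os
import sys
import threading
import time
import webbrowser

from django.core.management import execute_from_command_line

def open_browser():
    time.sleep(2)  # give the development server a moment to come up fully
    webbrowser.open_new("http://127.0.0.1:8000")

if __name__ == "__main__":
    # "myproject.settings" is a placeholder for your settings module.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    frozen = getattr(sys, "frozen", False)  # PyInstaller sets sys.frozen
    if frozen and len(sys.argv) == 1:
        # Double-clicked executable: open the browser and start the dev server.
        threading.Thread(target=open_browser, daemon=True).start()
        execute_from_command_line([sys.argv[0], "runserver", "--noreload", "127.0.0.1:8000"])
    else:
        execute_from_command_line(sys.argv)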
A couple additional notes.
If you go with SQLite, Windows file locking on network shares may not be adequate if you have concurrent writes to the DB; concurrent readers should be fine. Additionally, since you'll have different DB files for different projects, you'll have to figure out a way for the user to indicate which file to use: maybe prompt in the app, or build the same app multiple times with different settings.py files (see the sketch after these notes). There are a variety of ways to hit this nail...
If you go with MSSQL (or any client/server DB), the app will have to know the DB credentials, which means they could be extracted by a knowledgeable user. This presents a security risk that may not be acceptable. Basically, don't make the app the user is executing the only layer of security. The DB credentials used by that app should grant only the access the user is allowed.
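To illustrate the per-project DB point, a settings.py fragment that picks the project database at startup might look like this (PROJECT_DB and the \\fileserver\projects share are made-up names for the example):

# settings.py (fragment)
import os
import sys

FROZEN = getattr(sys, "frozen", False)  # True when packaged by PyInstaller

# PROJECT_DB is a hypothetical environment variable the launcher could set;
# \\fileserver\projects is likewise just an example network share.
PROJECT_DB = os.environ.get("PROJECT_DB", "default_project.sqlite3")

if FROZEN:
    db_path = os.path.join(r"\\fileserver\projects", PROJECT_DB)
else:
    db_path = os.path.join(os.path.dirname(__file__), "dev.sqlite3")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": db_path,
    }
}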
Suppose I create a site in WordPress, which is running on Elastic Beanstalk. Now, on the running app I will create posts/pages, upload images, etc. That is, some data, videos, files, and records in a database will be added to the running application.
3 questions:
If WordPress is running on Elastic Beanstalk with multiple Amazon EC2 instances actually running my WordPress install, will those files propagate automatically to all running instances? And will this also happen if a new EC2 instance is fired up, for example to handle increased load?
From what I see in the AWS console, I can deploy different versions of an app. But in the scenario above, if I deploy a new version, won't I lose everything added directly to the running app (i.e. files and database records)? How do I keep those and at the same time deploy a new version of my app?
The WordPress team keeps issuing upgrades. Can I directly upgrade my running WordPress install through the web interface? Or do I have to first upgrade my local version of WordPress and then upload the new version of the app to Beanstalk? If I have to upgrade locally and then upload, then again I am back to point 1, i.e. changes made by users directly to the older version of the running app. How do I preserve those changes?
I've been working on this as well, and have learned a couple of things that are relevant here -- your question about uploads in particular has been on my mind:
(1) The best way to handle uploads, it seems to me, is the NFS/NAS route like you suggest, but one better than that is to use an Amazon S3 plugin for WordPress, so that any upload is automatically copied up to S3 and the URLs in your WordPress media library reflect the FQDN of your bucket and not your specific site (see the sketch after this list). That way you could have one or ten WP nodes in your Beanstalk, and media/images are independent of any one of those servers.
(2) You should absolutely be using RDS here. Few things are easier to work with and as stress-free as a Multi-AZ, reserved MySQL RDS instance. Either that or your own EC2 running MySQL that is independent of the Beanstalk, but why run that when RDS is so much easier?
(3) Yes, you definitely have to commit changes to your Git repository or local files first (new plugins, changes to themes, WP upgrades) and then upload/install them as a revision of the Beanstalk code. Otherwise, changes you make via the web interface to one node will never be in the new load for a new node -- in fact you'll have an upgraded database but an older set of code in the Beanstalk application, so it's likely to create errors of one kind or another.
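This isn't the plugin's code, just a hedged Python/boto3 sketch (for point 1 above) of why S3-backed uploads are node-independent: the file is copied to a bucket and the resulting URL points at the bucket's FQDN, not at any one EC2 instance. The bucket name, region, and file paths are placeholders.

import boto3

BUCKET = "my-wp-media-bucket"   # placeholder bucket name
REGION = "us-east-1"            # placeholder region

s3 = boto3.client("s3", region_name=REGION)

def push_upload(local_path, key):
    # Copy an uploaded file to S3 and return the bucket-based URL.
    s3.upload_file(local_path, BUCKET, key)
    return "https://%s.s3.%s.amazonaws.com/%s" % (BUCKET, REGION, key)

# Any WP node can serve the page; the image URL no longer depends on which
# node originally received the upload.
print(push_upload("wp-content/uploads/2024/01/logo.png", "uploads/2024/01/logo.png"))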
I took an AWS architecture course, and their advice for EC2 and the Beanstalk is to start to think about server instances as very disposable -- so you should try to think about easy ways for your boxes to provision themselves in the bootstrapping process and to take over work for one another without any precious resources on just one box. So losing an instance should never be a big deal. (This is definitely not how we thought in the world of physical servers, where we got everything tweaked 'just so'.)
Good luck!
Well, I'm no expert, but since no one has answered, I'll give it my best shot.
You are absolutely right--kind of. While each EC2 instance does have some local storage, it is destroyed and reset with each new instance. Because of this, Amazon has things like Elastic Block Storage and S3 for persistent files. I don't know how one would configure WP to utilize this, but that will likely be the solution.
I think this problem is solved by my answer to #1. As for the database, all of your EC2 instances should be pulling from the same RDS location. Again, while you could have MySQL running on each EC2 instance, in the interest of persistence, having a separate database makes more sense.
You, again, have most everything right. Local development should always precede live deployment. Upgrading locally then pushing to the live servers will make sure all of your instances remain identical.
Truth-be-told, I am still in the process of learning all of this too, as I said, I'm not an expert. Hopefully someone else will come along and give a more informed answer. However, the key conceptual hurdle here is the idea of elastic scalability--and the salient point of this idea is the separation of elements between what is elastic/scalable/disposable and what is persistent.
Hopefully that helps.
I have deployed a small Wordpress site on EB, S3 and RDS. S3 holds all static data, such as media uploads. This works through a plugin. RDS holds the database. EB holds the latest deployed application. The application is deployed from my dev environment, with a build script. This way, I just have to press one button and I redeploy.
I wrote an article about it here: http://www.cortexcode.com/wordpress-to-aws-code-example/
While it was at first annoying to work with, the speed of AWS is nice, and now it's easier than ever. It used to be that I had to upload a bunch of files over FTP; this is way more efficient. :-)
As an addition to all the great answers already given:
1) I can highly recommend EFS, but also S3, for media files, so they are served from high-availability regions in combination with CloudFront. For WordPress there is one plugin that really speeds this up (not affiliated with them, I just really like the plugin). There is also an assets plugin if you'd like to serve JS and CSS files from S3. For the EFS solution, take a look at the AWSlabs docs on git, and specifically this file on how they mount the uploads folder.
In general, EBS is really great for WordPress, but you'll need a different mindset compared to other hosting solutions (shared hosting, managed hosting).
OK, I researched this particular issue a lot, and this is what I learned:
(1) If a WordPress user uploads some files, they are uploaded only to the virtual machine that is actually serving his request at that time. For example, if the WordPress site is cloud-deployed and using 5 virtual machines, a user's request is directed to one virtual machine (the one with the lowest load at that point), and his uploads are stored only on that server. Current Platform-as-a-Service solutions (like Amazon Elastic Beanstalk and AppFog) do not have the ability to propagate the changes to all the running instances. Either propagate changes to all servers, or have all running instances use common storage -- these are the only two solutions to this problem. (An example of common storage would be all 5 running virtual machines using Network-Attached Storage (NAS).)
(2) With reference to the platforms currently available, like Amazon Elastic Beanstalk and AppFog: even if a user makes changes directly to the running app, these platforms rely on the local version of the code (which the admin initially deployed to the cloud), and there is no way to update that local version (on the admin's PC) with the changes a user made to the running app. Hence those changes, i.e. the files, are lost. Similarly, changes a user makes to the running app's database are also lost, unless the admin is using exactly the same database for his local app as the one deployed to the cloud.
(3) Any changes to running apps first have to be made to the local app on the admin's PC and then pushed to the cloud.
I am working on a cloud PaaS that addresses all these concerns: updates can be made to running apps, and code changes made to a running app are also pushed to the code repository accessible to the user. The proof of concept is ready; hopefully it will be as good as I hope it should be :) Currently the only thing that actually exists is the website (anyacloudpanel.com); design work is ongoing :)
If there is some rule that I should not mention my website (Anya Cloud Panel), then I am sorry -- please feel free to edit and remove the URL from my answer :)
Thanks,
Arvind.
Deploying WordPress to AWS Elastic Beanstalk does require some changes to the normal WordPress deployment, as mentioned here a few times. To answer your questions, here is a great tutorial explaining stateless applications and how to deploy to Elastic Beanstalk:
Deploying WordPress to Amazon Web Services AWS EC2 and RDS via ElasticBeanstalk
Be careful if you use a theme from ThemeForest, for example. Some of them are incompatible with the WordPress S3 plugin, and then you're stuck: you cannot deploy your WordPress site to the cloud.
I realize CloudFoundry is still in beta, and I'll admit to being moderately ignorant when it comes to this level of cloud computing, but here's my question: I create an app, everything works, and I upload it to CF. Now what? I want to launch my app in the wild. I don't want users to see a CF URL.
Here are some pieces I do know, but I'm not getting the entire picture.
I know I can map a URL to an app, so presumably that's just some DNS routing happening. But other than that, is it safe at this point to bet the farm on CF and, for example, launch a startup using it? At what point am I going to realize I need to move to something like Rackspace (or whatever), and is it possible to take my CF VM and just move it?
Overall, I just don't fully understand what we're getting with CF other than a quick way to deploy a demo application.
At this point, if you need a custom domain, you need to configure an external proxy and from there route the traffic to your CF.com URL. This is a good example.
But the advantage of CloudFoundry is that it is entirely open source. You can always move your app to a compatible service provider, for example AppFog, with not much more than a simple push.
You could even deploy your own CF instance/server on Rackspace.
It appears that there is still no support for external domain mapping on Cloud Foundry. Here is another example, which uses a Python reverse proxy running on Google App Engine; it works well: http://programming.mvergel.com/2011/11/cloud-foundry-and-custom-domain.html
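As a hedged illustration of that idea (not the code from the linked post), a bare-bones Python reverse proxy that forwards GET requests for your custom domain to the CF.com URL could look like this; the target host and port are placeholders:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

TARGET = "https://myapp.cloudfoundry.com"  # placeholder CF.com URL

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the requested path to the Cloud Foundry app and relay the reply.
        upstream = urlopen(Request(TARGET + self.path))
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Type", upstream.headers.get("Content-Type", "text/html"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Point the custom domain's DNS at the machine running this proxy.
HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()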
Right now, CloudFoundry.com doesn't offer domain mapping. You might expect that it will do so in a future fully-supported paid version, but as you note, right now it is still in beta.
For what it's worth, I am running a startup B2B product on CloudFoundry.
I have deployed the open-source version on our own infrastructure, though; I keep a close watch on changes and even review other people's commits.
That's a significant investment in terms of learning and time, but in my opinion it's worth it.
G'day
For security reasons, our CFAdmin (and accordingly RDS) is accessed via one domain, say cfadmin.ourdomain.com, and the site itself is accessed via a different domain: www.ourdomain.com.
By some miracle I have just been able to get both RDS and a server set up without RDS giving me "Could not initialize class com.adobe.rds.core.services.Messages" (this is a first), and it will allow me to launch a debugging session. However, it tries to hit the file I'm testing via cfadmin.ourdomain.com (and the actual website is not defined on that IIS site). I can understand why this happens, but I can't figure out how to tell the debugging config that the actual website is www.ourdomain.com.
It is not an option to have either the CFIDE accessible on www.ourdomain.com or the site accessible via cfadmin.ourdomain.com, so that cannot be part of a proposed solution.
Anyone have any ideas?
Oh: this is on CF9.0.1.
UPDATE:
Sorry, just to be absolutely clear: this is our dev environment. It is all running on my local PC. However, the local server (a VM running on my workstation) is configured the same as the prod environment (for obvious reasons), down to how CFAdmin is accessed.
This is not the answer you will like :) Your security folks were right in the first place. CF Builder debugging is a development tool designed to be used on a dev system, and it works best in a local dev environment where CF plus Apache or IIS are running on your local workstation.
On a production server, RDS itself is a weakness that you really don't want running in that environment (sorry). Having said that, here are my big ideas :)
Have your admin create a folder on cfadmin.ourdomain.com that points to your prod code. Depending on your code, you might get that to work.
Point cfadmin.ourdomain.com at your production code directly (no folder). After all, the CFIDE and other mappings used by RDS are aliases or virtual directories anyway.
That's all I've got, sorry...
Windows 2008 Server R2 64bit, ColdFusion8 Enterprise Edition (multi-server configuration), Plesk
I recently purchased dedicated hosting on HostGator and set up a website with help from a server guy. I'm doing a second site on my own this time. I've successfully created the new site in Plesk, pointed the domain, etc., but now I need to set up a new instance of ColdFusion for this new site, and I'm not sure how to go about it. From my googling, I'm guessing it's simply a case of setting up a new "instance", which can be done via the ColdFusion Administrator. Is this correct? Is there anything else I need to know? Any gotchas waiting to bite me?
Thanks in advance for any help.
Well, you don't need to set up a new CF instance for a new website: there's not a one-to-one relationship between the two ideas.
The minimum you really need to do is set up the new website in a directory within the CF application root (or within a CF-mapped directory if the dir is outside that root), and then run the web server connector (wsconfig.exe) to configure the website to pass requests for CF files to CF.
On a standard install, wsconfig.exe is located in the [coldfusion]/runtime/bin dir, and on a multiserver install it's in [JRun]/bin.
If you do add a new instance, you still need to run wsconfig to connect the website to the CF instance.
user460114, a new instance == a new "server", so when the first instance crashes for some reason, the second will still run, because it uses separate heap space and lives its own life.
You could use this for some other purpose, like a failover instance, a cluster, or a staging instance. It's not needed in your case.
You need to make a new directory in the webroot; here's an example:
C:\inetpub\www\old_site
C:\inetpub\www\new_site
Then set new_domain.com to point to ..\new_site, and leave old_site.com pointing to ..\old_site.
Cheers