I've searched for a couple hours and can't find an exact answer to the following question.
Is there an advantage to hosting web services on a sub-domain as opposed to having virtual directories point to the web services themselves.
Ex.: webservices.domain.com/login vs. domain.com/webservices/login
IIS relies on application pools, and assuming the web services are associated with a single application pool, what is the difference between hosting them as a segregated website (e.g., a sub-domain) versus adding a virtual directory to each website that uses those same web services?
domain.com/
-- WebServices/ <- Where the web services actually reside
---- login
-- WebSite1/
---- Login.aspx
-- WebSite2/
---- Login.aspx
WebSuite.domain.com
-- WebServices/ <- VIRTUAL DIRECTORY
-- WebSite3/
---- Login.aspx <- Calls "login" web service through virtual directory
Personally I always go for subdomains, as it's much more flexible/extensible than using vdirs, but either way works fine (particularly at small scale). My reasoning for subdomains is:
When you initially roll out your 2 DNS names you can point both to the same server at no cost (if anything, throughput improves, as browsers throttle requests per hostname).
At some point in the future you might want to split the front and back ends onto 2 servers. With subdomains this is simple: spin up your new server, then make a single DNS change. The two domains could be further upgraded with load balancers, allowing multiple web/API servers, etc., all pretty seamlessly. Those upgrades with vdirs are trickier and require more work (but are still totally doable).
Application pools enforce process isolation - so (in theory) app pools shouldn't adversely impact each other (performance/security).
The front and back ends have very different workloads and optimal configurations, and if they have no specific requirement to be in the same app pool, it's best practice to keep them separate.
(subjective) Operating at the website level is easier for scripting/automation/deployment.
So basically, if you're talking about a low-traffic site with limited growth potential, it really doesn't matter. If there's any possibility you will need to scale up, save yourself some pain and go the subdomain route from the start.
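To make the "either way works" point concrete, here is a minimal client sketch (Python with requests; the hostnames and the login payload are hypothetical). The layout you pick only shows up in the base URL, which is exactly why the later DNS-only split is painless:

# Minimal client sketch (hypothetical URLs and payload): the only difference
# between the two layouts is the base URL, so a later move of the services to
# a separate back-end server is a DNS/config change, not a code change.
import requests

# Either of these works; keep it in config, not scattered through the code.
BASE_URL = "https://webservices.domain.com"        # sub-domain layout
# BASE_URL = "https://domain.com/webservices"      # virtual-directory layout

def call_login(username, password):
    """Call the 'login' service relative to the configured base URL."""
    resp = requests.post(f"{BASE_URL}/login",
                         json={"username": username, "password": password},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(call_login("demo", "secret"))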
Right now the website is running locally and I'm still working on it.
While doing this I also have to make it visible to a specific group of users as I need their feedback in order to add/change features, etc.
I've tried to find free web hosting without any luck (see dependencies).
I was thinking to create a VPN but then I will have to use my PC as a host for a virtual machine which is by far not what I'm looking for.
Therefore, my questions are:
1. Which is the best way to achieve this (website visibility for TESTING) fast and easy?
2. If a dedicated web host is the best solution, please point me to an easy-to-use and cheap one. What I've tried so far: elastichosts, alwaysdata, stackable, 1FreeHosting and probably others I don't remember right now. For one reason or another I couldn't use any of the above.
Another aspect to be considered: I want this only for simple testing and I don't need a lot of server resources. Also the traffic will be very low as there are only 5 testers. That's why I wouldn't pay too much for it. I will probably need this temporary web hosting for 2-3 months.
Dependencies:
- as the website uses Mezzanine, for the moment I only need Mezzanine's dependencies.
Thanks in advance!
You can always just set up port forwarding on your router. This would allow your testers direct access to your app, though it might give your PC more exposure than you want.
Heroku has a free tier.
Among your non-free options, an instance at Linode costs $20/month but requires some setup. Rackspace has similar options in their Cloud Servers line. Both are no-contract servers.
My blog post covers gracefully deploying a Mezzanine site. The monthly hosting cost is nothing compared to the cost of a slow, painful deployment process.
An EC2 micro-instance currently costs as little as ~US$3.50/month. I create and destroy staging servers on EC2 for testing and sharing with others.
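For what it's worth, a rough sketch of that create-and-destroy staging workflow with boto3 (the AMI ID and key pair below are placeholders, not real values):

# Rough sketch of the create-and-destroy staging workflow with boto3.
# The AMI ID and key name are placeholders; pick whatever micro/small
# instance type fits your budget.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_staging():
    """Launch a single small instance to share with testers."""
    instances = ec2.create_instances(
        ImageId="ami-00000000",          # placeholder AMI
        InstanceType="t2.micro",
        KeyName="my-staging-key",        # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    instance = instances[0]
    instance.wait_until_running()
    instance.reload()
    return instance.id, instance.public_ip_address

def destroy_staging(instance_id):
    """Terminate the staging instance once feedback is collected."""
    ec2.Instance(instance_id).terminate()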
Is there an easy way to migrate a hosted LAMP site to Amazon Web Services? I have hobby sites and sites for family members where we're spending far too much per month compared to what we would be paying on AWS.
Typical el cheapo example of what I'd like to move over to AWS:
GoDaddy domain
site hosted at 1&1 or MochaHost
a handful of PHP files within a certain directory structure
a small MySQL database
.htaccess file for URL rewriting and the like
The tutorials I've found online necessitate PuTTY, Linux commands, etc. While these aren't the most cumbersome hurdles imaginable, it seems overly complicated. What's the easiest way to do this?
The ideal solution would be something like what you do to set up a web host: point GoDaddy to it, upload files, import database, done. (Bonus points for phpMyAdmin being already installed but certainly not necessary.)
It would seem the Amazon AWS Marketplace now has a solution for your problem:
https://aws.amazon.com/marketplace/pp/B0078UIFF2
Or from their own site
http://www.turnkeylinux.org/lampstack
A full LAMP stack including phpMyAdmin with no setup required.
As for the site and database migration itself (which should require no more than file copies and a database backup/restore), the only way to make it less cumbersome is to have someone else do it for you...
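If you do want to script the manual part yourself, a minimal sketch of the file copy plus dump/restore might look like this (all hosts, paths and credentials are placeholders for your own values):

# Sketch of the manual migration step: copy the PHP files and move the
# MySQL database with a dump/restore. Hosts, paths and credentials below
# are placeholders.
import subprocess

OLD_DB = {"host": "old-host.example.com", "user": "olduser", "name": "mydb"}
NEW_DB = {"host": "new-aws-host.example.com", "user": "newuser", "name": "mydb"}

# 1. Copy the site files (including .htaccess) to the new document root.
subprocess.run(
    ["rsync", "-avz", "olduser@old-host.example.com:/var/www/html/",
     "/var/www/html/"],
    check=True)

# 2. Dump the old database and load it into the new one.
with open("mydb.sql", "wb") as dump:
    subprocess.run(["mysqldump", "-h", OLD_DB["host"], "-u", OLD_DB["user"],
                    "-p", OLD_DB["name"]], stdout=dump, check=True)
with open("mydb.sql", "rb") as dump:
    subprocess.run(["mysql", "-h", NEW_DB["host"], "-u", NEW_DB["user"],
                    "-p", NEW_DB["name"]], stdin=dump, check=True)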
Dinah,
As a Web Development company I've experienced an unreal number of hosting companies. I've also been very closely involved with investigating cloud hosting solutions for sites in the LAMP and Windows stacks.
You've quoted GoDaddy, 1And1 and Mochahost for micro-sized Linux sites so I'm guessing you're using a benchmark of $2 - $4 per month, per site. It sounds like you have a "few" sites (5ish?) and need at least one database.
I've yet to see any tool that will move more than the most basic (i.e. file only, no db) websites into Cloud hosting. As most people are suggesting, there isn't much you can do to avoid the initial environment setup. (You should factor your time in too. If you spend 10 hours doing this, you could bill clients 10 x $hourly-rate and have just bought the hosting for your friends and family.)
When you look at AWS (or anyone) remember these things:
Compute cycles are only where it starts. When you buy hosting from traditional ISPs they are selling you cycles, disk space AND database hosting. Their default levels for allowed cycles, database size and traffic are also typically much higher before you are stopped or charged for "overage", or over-usage.
Factor in the cost of your 1 database, and consider how likely it will be that you need more. The database hosting charges can increase Cloud costs very quickly.
While you are likely going to need only a few CCs (compute cycles) for your basic sites, the free-tier hosting maximums are still pretty low. Anticipate breaking past the free hosting and being charged monthly.
Disk space is also billed. Factor in your costs of CCs, DB and HDD by using their pricing estimator: http://calculator.s3.amazonaws.com/calc5.html
If your friends and family want to have access to the system, they won't get it unless you use a hosting company that allows "white labeling" and provides a way to split your main account into smaller mini-hosting accounts. These can even be set up to give self-admin and direct billing options if you go with a host like www.rackspace.com. The problem is you don't sound like you want to bill anyone, and their minimum account is likely way too big for your needs.
Remember that GoDaddy (and others) frequently give away a year of hosting with even simple domain registrations. Before I got my own servers I used to take HUGE advantage of these. I've probably been given 40+ free hosting accounts, etc. in my lifetime as a client. (I still register a ton of domains through them. I also resell their hosting.)
If you aren't already, consider the use of CMS systems that support portaling (one instance, many websites under different domains). While I personally prefer DotNetNuke I'm sure that one of its LAMP stack competitors can do the same for you. This will keep you using only one database and simplify your needs further.
I hope this helps you make a well educated choice. I think it'll be a fine-line between benefits and costs. Only knowing the exact size of every site, every database and the typical traffic would allow this to be determined in advance. Database count and traffic will be your main "enemies". Optimize files to reduce disk-space needs AND your traffic levels in terms of data transferred.
Best of luck.
Actually, it depends on your server architecture whether you want to migrate the whole of your LAMP stack to Amazon EC2,
or use different Amazon web services for different server components, such as Amazon S3 for storage and Amazon RDS for the MySQL database, and so on.
If you are going with LAMP on EC2, this tutorial will at least give you a head start.
Either way, you still have to go through the essential steps of setting up the AMI and installing LAMP over SSH.
I have a web-based interface for handling invoices, customer records and other transaction records, which currently interacts with a database of all the aforementioned stored on the same machine. As you can imagine, this is quite a simple set-up consisting of a web app (PHP) and a database (MySQL). However, the ideal scenario is to keep the records on the machine they are currently on (easy) and move the web app to another server within the same network (again, easy), but in addition, provide facilities on a public-facing website for customers to manage their accounts and so forth. The problem is this: the public-facing web server is located in a completely separate location, as it is a dedicated server provided by a well-known ISP.
What would be the best way to make the records accessible from this other server whilst ensuring that all communications are secure? Speed is not a huge factor, although any outages on either side should be handled gracefully. Initially my thoughts went towards web services (XML-RPC/SOAP/Hessian), but these options seem to present difficulties (security being the main one, along with overcomplexity).
The web-app must remain PHP-based. The public-facing site is likely to be PHP-based as well, although Python (likely using Django) is another option. The introduction of any other technologies (Java etc) is not a problem, although it is preferred if they be Linux-friendly (so .NET would not be the best fit here).
Apologies if this question is somewhat verbose and vague. I am testing the water somewhat in regards to this kind of problem. Any advice or suggestions gratefully received.
I've done something similar. You can expose a web service to the internet that will do the database access, but requests to the service must match a strong hashed and salted password (which will be secured on the ISP's server in the DMZ.)
Either this or some sort of public/private key encryption scheme.
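As a rough illustration of the shared-secret idea (the URL, secret and payload are hypothetical, and HMAC-SHA256 is just one reasonable choice, not the only one):

# Sketch: the public-facing site signs each request with an HMAC, and the
# internal records service rejects anything whose signature doesn't match.
# The URL, secret and payload are hypothetical.
import hmac, hashlib, json, time
import requests

SHARED_SECRET = b"keep-this-only-on-the-two-servers"
SERVICE_URL = "https://internal.example.com/records"

def signed_request(payload):
    body = json.dumps(payload).encode("utf-8")
    timestamp = str(int(time.time()))
    # Sign body + timestamp so captured requests can't simply be replayed later.
    signature = hmac.new(SHARED_SECRET, body + timestamp.encode(),
                         hashlib.sha256).hexdigest()
    return requests.post(SERVICE_URL, data=body, timeout=10, headers={
        "Content-Type": "application/json",
        "X-Timestamp": timestamp,
        "X-Signature": signature,
    })

# On the receiving side, recompute the HMAC over body + timestamp and compare
# it with hmac.compare_digest() before touching the database.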
OK, this might seem a bit silly, but what if you just used MySQL replication?
Instead of using all sorts of fancy web services, just have a master MySQL server on one machine, then have it replicate to another server that holds the slave MySQL server as well as the web app.
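If you go that route, a small sanity-check sketch against the slave might look like this (assuming the master and slave are already configured with server-id, log-bin and CHANGE MASTER TO; connection details are placeholders):

# Sketch of a replication health check on the slave; connection details
# are placeholders and the master/slave setup is assumed to already exist.
import pymysql

conn = pymysql.connect(host="slave.example.com", user="monitor",
                       password="secret",
                       cursorclass=pymysql.cursors.DictCursor)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
        if not status:
            print("Replication is not configured on this server.")
        else:
            print("IO thread running:", status["Slave_IO_Running"])
            print("SQL thread running:", status["Slave_SQL_Running"])
            print("Seconds behind master:", status["Seconds_Behind_Master"])
finally:
    conn.close()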
I was wondering if anyone can compare/contrast the differences between frontend, backend, and middleware ("middle-end"?) succinctly.
Are there cases where they overlap?
Are there cases where they MUST overlap, and frontend/backend cannot be separated?
In terms of bottlenecks, which end is associated with which type of bottlenecks?
Here is one breakdown:
Front-end tier -> User Interface layer usually consisting of a mix of HTML, Javascript, CSS, Flash, and various server-side code like ASP.Net, classic ASP, PHP, etc. Think of this as being closest to the user in terms of code.
Middleware, middle tier -> One tier back, generally referred to as the "plumbing" part of a system. Java and C# are common languages for writing this part, which can be viewed as the glue between the UI and the data; it may consist of web services, WCF components, or other SOA components.
Back-end tier -> Databases and other data stores are generally at this level. Oracle, MS-SQL, MySQL, SAP, and various off-the-shelf pieces of software come to mind for this piece of software that is the final processing of the data.
Overlap can exist between any of these. You could have everything poured into one layer, like an ASP.Net website that uses the built-in AJAX functionality to generate Javascript while the code-behind contains database commands, making the code-behind span both the middle and back-end tiers. Alternatively, one could use VBScript with ADO objects to act as all the layers, merging all three tiers into one.
Similarly, middleware and either the front or back end can be combined in some cases.
Bottlenecks generally have a few different levels to them:
1) Database or back-end processing -> This can vary from payroll to sales to other tasks where the throughput to the database is bogging things down.
2) Middleware bottlenecks -> This would be where some web service is hitting capacity while the front and back ends still have bandwidth to handle more traffic. Alternatively, there may be some server in the system that isn't quite the UI and isn't quite the raw data, such as a BizTalk or MSMQ box, that becomes the bottleneck.
3) Front-end bottlenecks -> These could be client- or server-side issues. For example, if you took a low-end PC and had it load a web page that involved downloading a lot of data, the client could be where the bottleneck is. Similarly, the server could be queuing up requests if it is getting hammered, like what Amazon.com or other high-traffic websites may see at times.
Some of this is subject to interpretation, so it isn't perfect by any means and YMMV.
EDIT: Something to consider is that some systems can have multiple front-ends or back-ends. For example, a content management system will likely have a way for site visitors to view the content that is a front-end but what about how content editors are able to change the data on the site? The ability to pull up this data could be seen as front-end since it is a UI component or it could be seen as a back-end since it is used by internal users rather than the general public viewing the site. Thus, there is something to be said for context here.
Generally speaking, people refer to an application's presentation layer as its front end, its persistence layer (database, usually) as the back end, and anything between as middle tier. This set of ideas is often referred to as 3-tier architecture. They let you separate your application into more easily comprehensible (and testable!) chunks; you can also reuse lower-tier code more easily in higher tiers.
Which code is part of which tier is somewhat subjective; graphic designers tend to think of everything that isn't presentation as the back end, database people think of everything in front of the database as the front end, and so on.
Not all applications need to be separated out this way, though. It's certainly more work to have 3 separate sub-projects than it is to just open index.php and get cracking; depending on (1) how long you expect to have to maintain the app (2) how complex you expect the app to get, you may want to forgo the complexity.
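As a toy illustration of the separation (the names and the in-memory "database" are purely illustrative, not a prescription):

# Toy sketch of the three tiers kept in separate functions/modules.

# Back end: persistence only, no formatting, no request handling.
_USERS = {1: {"name": "Ada"}, 2: {"name": "Grace"}}

def load_user(user_id):
    return _USERS.get(user_id)

# Middle tier: business rules, reusable by any front end (web, API, batch).
def get_display_name(user_id):
    user = load_user(user_id)
    if user is None:
        raise LookupError(f"no user {user_id}")
    return user["name"].upper()

# Front end: presentation only; could be HTML templating, a CLI, an API, etc.
def render_profile_page(user_id):
    return f"<h1>{get_display_name(user_id)}</h1>"

if __name__ == "__main__":
    print(render_profile_page(1))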
There are in fact 3 questions in your question:
Define frontend, middle and back end
How and when do they overlap ?
Their associated usual bottlenecks.
What JB King has described is correct, but it is a particular, simple version, where in fact he mapped front, middle and back onto the MVC layers.
He mapped M to the back, V to the front, and C to the middle.
For many people, it is just fine, since they come from the ugly world where even MVC was not applied, and you could have direct DB calls in a view.
However in real, complex web applications, you indeed have two or three different layers, called front, middle and back. Each of them may have an associated database and a controller.
The front-end will be visible to the end user. It should not be confused with the front office, which is the UI for parameters and administration of the front. The front-end will usually be some kind of CMS or e-commerce platform (Magento, etc.).
The middle-end is not compulsory and is where the business logic lives. It will be based on a PIM, an MDM tool, or some kind of custom database where you enrich your products or your articles (for a CMS). It'll also be the place where you code business functions that need to be shared between different frontends (for instance between the PC frontend and the API-based mobile application). Sometimes an ESB or a tool like ActiveMQ will be your middle-end.
The back-end will be a 3rd layer, surrounding your source database or your ERP. It may be just the API writing to and reading from your ERP. It may be your supplier DB if you are doing e-commerce. In fact, it really depends on the web project, but it is always a central repository. It'll be accessed either through a DB call, through an API, a Hibernate layer, or a full-featured back-end application.
This description means that answering the other 2 questions is not possible in this thread, as bottlenecks really depend on what your 3 ends contain: what JB King wrote remains true for simple MVC architectures.
At the time the question was asked (5 years ago), maybe the MVC pattern was not yet so widely adopted. Now, there is absolutely no reason why the MVC pattern would not be followed and a view would be tied to DB calls.
If you read the question "Are there cases where they MUST overlap, and frontend/backend cannot be separated?" in a broader sense, with 3 different components, then there are times when the 3-layer architecture is useless, of course. Think of a simple personal blog: you'll not need to pull external data or poll RabbitMQ queues.
Here is a real world example which shows front/mid/back end.
General description:
Frontend is responsible for presenting data to the user. Please note the interesting quirk that you may have two different frontends associated with a single backend.
Backend provides business logic/data persistence.
Middleware (ActiveMQ in the picture) is responsible for system-to-system integration between backends. Usually it is installed as a separate application.
Overlapping:
It is possible to have overlap between frontend and backend. This usually leads to long-term issues with application maintenance and scalability. Fairly common in legacy applications.
Most modern technology stacks encourage developers to maintain strict separation. For example, in the picture you can see that the backend of the first system has a REST web service, which is a clear separation line.
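A minimal sketch of such a separation line, assuming a Flask-style REST endpoint and made-up data, might look like this:

# Minimal sketch: the backend exposes a small REST surface and the frontend
# only ever talks to it over HTTP. Route names and data are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# Pretend this is the business/persistence side.
ORDERS = {42: {"id": 42, "status": "shipped"}}

@app.route("/api/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=5000)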
Bottlenecks
Most bottlenecks in large systems are caused by the database or the network. Databases are located in the backend. As for network issues, every connection goes through the network, so every connection has the potential to be slow. With good application design these issues are avoidable to a large extent.
In terms of networking and security, the backend is by far the most secure node (or should be).
The middle-end portion, usually being a web server, will be somewhat in the wild and cut off in many respects from a company's network. The middle-end node is usually placed in the DMZ and segmented from the network with firewall settings. Most of the server-side code parsing of web pages is handled on the middle-end web server.
Getting to the backend means going through the middle-end, which has a carefully crafted set of rules allowing/disallowing access to the vital nummies which are stored on the database (backend) server.
Frontend refers to the client side, whereas backend refers to the server side of the application. Both are crucial to web development, but their roles, responsibilities and the environments they work in are totally different. The frontend is basically what users see, whereas the backend is how everything works.
Frontend -> This is the client side of a website, where a user can interact with the server through the user interface. It is generally built using HTML and CSS.
Middleware -> Middleware is the software or service responsible for enabling systems to communicate and manage data. It handles the communication between components and input/output.
Backend -> The backend is the server side of an application, consisting of all the functions and operations performed on data. This part is considered the most essential part of any application, and only the server admin has access to it. It mainly consists of databases and servers.
At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about all of the inter-relationships between the web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
Possibly tap into IIS and passively monitor the traffic coming in and out, and somehow figure out what is being called and where from. We are looking at enterprise tools that could help, but it would be a while before they are implemented (and they cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce and begin to build a map of the traffic. Very nice, but it costs a bundle.
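For the first approach, here is roughly the config-scanning direction I had in mind (assuming WCF client endpoints are declared in web.config/app.config files under <system.serviceModel><client>; the root path is a placeholder):

# Rough sketch: walk the deployment folders and pull client endpoint
# addresses out of web.config/app.config files. The root path is a placeholder.
import os
import xml.etree.ElementTree as ET

ROOT = r"D:\Deployments"  # placeholder: wherever the sites are deployed

def find_endpoints(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() in ("web.config", "app.config"):
                path = os.path.join(dirpath, name)
                try:
                    tree = ET.parse(path)
                except ET.ParseError:
                    continue
                for ep in tree.iter("endpoint"):
                    address = ep.get("address")
                    if address:
                        yield path, address

if __name__ == "__main__":
    for config, address in find_endpoints(ROOT):
        print(f"{config} -> {address}")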
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
The easiest way is to look through the logs, but if they don't include the referrer then you may also want to monitor what is going out from your web server to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution", and I use the term loosely, is to bind a specific web server to an app server and then run through a batch of requests to see what it hits on the app server. You could probably do this in a test environment to lessen the effects on the users of the site.
You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
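A rough sketch of that log-mining idea, assuming default W3C log fields and a placeholder internal address range:

# Sketch: read W3C-format IIS logs, keep requests whose client IP (c-ip) is
# one of your own servers, and count which URLs they hit. The log folder and
# internal address range are placeholders.
import glob
from collections import Counter
from ipaddress import ip_address, ip_network

LOG_GLOB = r"C:\inetpub\logs\LogFiles\W3SVC1\*.log"   # placeholder
INTERNAL = ip_network("10.0.0.0/8")                    # placeholder range

hits = Counter()
for log_file in glob.glob(LOG_GLOB):
    fields = []
    with open(log_file, errors="replace") as fh:
        for line in fh:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]   # e.g. date time cs-uri-stem c-ip ...
            elif line.startswith("#") or not fields:
                continue
            else:
                row = dict(zip(fields, line.split()))
                client = row.get("c-ip")
                if client and ip_address(client) in INTERNAL:
                    hits[(client, row.get("cs-uri-stem"))] += 1

for (client, uri), count in hits.most_common(20):
    print(f"{client} -> {uri}: {count}")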
You are right about AmberPoint. There are other tools that catalog service traffic and provide reports showing what is happening to your services. Systinet, SOA Software and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.