Which URL scheme would be better in this situation? - web-services

I'm building a web service which handles items for different countries. So far, a request looks something like example.com/item.php?country=us&id=1234
The idea is to rewrite these URIs using friendly names. I'm thinking about two possible choices:
us.example.com/1234
example.com/us/1234
My preference is for #1, but I was wondering what the pros and cons of each would be. Admin access to the server wouldn't be an issue, btw.
Thanks.

Your first solution (us.example.com/...) would require working with DNS entries and host headers. Potentially you would have to set up a separate site for each country (depending on your hosting platform).
Your second approach (example.com/us/...) is simpler, and probably requires less work and maintenance.
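For what it's worth, option #2 usually comes down to a couple of rewrite rules. A minimal sketch, assuming Apache with mod_rewrite (the patterns are assumptions; adjust them to your actual country codes and IDs):

# example.com/us/1234 -> item.php?country=us&id=1234
RewriteEngine On
RewriteRule ^([a-z]{2})/([0-9]+)$ item.php?country=$1&id=$2 [L,QSA]

# Option #1 (us.example.com/1234) additionally needs a wildcard DNS
# record (*.example.com) plus a host-header match:
RewriteCond %{HTTP_HOST} ^([a-z]{2})\.example\.com$ [NC]
RewriteRule ^([0-9]+)$ item.php?country=%1&id=$1 [L,QSA]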

Tracking users on the internet

We want to track users when they visit our competitors' websites.
Suppose the user visits a.com (our site) and then visits b.com and c.com (our competitors); we want to know that the user has visited those sites. Is there a way to track this without using plugins like Chrome extensions?
I know there are limitations with cookies. Is there a way to do this?
Thankfully, no.
From time to time, accidental mechanisms that leak such information become known (for example, at one time it was possible to inspect link styles to see whether a link had been visited). Such things are regarded as serious security breaches and are swiftly eliminated by browser vendors.
Basic web security principles isolate interactions with unrelated domains, unless both web pages willingly try to talk to each other. Furthermore, if a user visits your site first and then closes it, their interaction with you is finished; in no sane scenario would you be able to know what they do afterwards.
Not to mention the obvious fact that what you're asking for is plainly unethical.
Unless you can get your competitors to give you access to their websites, either directly through included scripts or indirectly if you manage to track users through ad networks or Facebook pings, it's not possible. You cannot access (and should not be able to access, in any case) cookies set by another domain.
Some browsers expose the previously visited URL through document.referrer, but that is not guaranteed to work, nor is it reliable. In short, there's no targeted way of doing it.
Not being able to easily track people is a good thing, but it makes constructions like the one you want a lot harder.

Is it feasible to have a single sign on page for multiple datasources?

I am in the beginning stages of planning a web application using ColdFusion and SQL Server 2012.
In researching the pros and cons of multiple databases (one per customer) versus one large database, I have decided that, for my purposes, multiple databases would be the best approach.
With this in mind, I am now wondering about the best way to proceed with logging clients in. I have two thoughts here:
I could use sub-domains, one per client, with the sub-domain also serving as the datasource name.
I could have a single sign on page with the datasource for this client stored in a universal users table.
I like the idea of option 2 best; however, I am wondering how this would work in the real world. Requiring each username to be unique across all clients would not be ideal (although I suppose I could key this off an email address instead of a username).
I was thinking of maybe adding something along the lines of a "company code" that would need to be entered along with the username and password.
I feel like this may be asking too much of clients though.
With all of this said, would you advise going with option 1 or option 2? Would also love to hear any thoughts or ideas that may differ.
Thanks!
If you are expecting to have a large amount of data per client, it may be a good idea to split each client into their own database.
You can create a global database that contains the client information, client datasource, settings, etc. for each client, and then set the client database in Application.cfc.
This also makes it easier down the road if a client requests their data or you would like to remove a client from the system.
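To illustrate the flow for option 2, here is a sketch in Python for brevity, since the idea is language-independent; in ColdFusion you'd do the lookup against the global database and set the datasource in Application.cfc, and all the names below are hypothetical:

# Hypothetical sketch: route a login to the right client database.
# A dict stands in for the global clients table here.
GLOBAL_CLIENTS = {
    "acme": "client_acme_db",        # company code -> datasource name
    "initech": "client_initech_db",
}

def resolve_datasource(company_code):
    # Look up which per-client database this login belongs to.
    try:
        return GLOBAL_CLIENTS[company_code]
    except KeyError:
        raise ValueError("Unknown company code: " + company_code)

# A single sign-on page would collect company code + username + password,
# resolve the datasource, then authenticate against that client database.
print(resolve_datasource("acme"))  # -> client_acme_db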

Naming a SOAP web service endpoint address

I am developing a SOAP interface and am having trouble deciding what to name the endpoint address.
Options:
- {soap,api,service,???}.foo.com.au
- www.foo.com.au/{soap,api,service,???}
What are the typical names that a SOAP service gets?
I would use www.foo.com.au/soap, mostly because it's an easy way to tell people that it's a SOAP service, and if you want to add a REST service later, you can use www.foo.com.au/rest.
Keep in mind that, in practice, all the solutions are technically equivalent. The benefits of one naming scheme over another come down to how easily humans can understand what the URLs are about, and to maintainability. So, if you are searching for a standard, at best we can say:
If you have a big company with lots of applications, go for the http://api.company.com/application/rest and/or http://api.company.com/application/soap approach.
Reason: you can separate, right from the start (network-wise), the web service servers (http://api.srv.com/app) from the human web-browsing servers (http://www.srv.com/app).
All applications then share one big root "meeting" point (the root URL api.company.com), so if anyone wonders what is available company-wide, they can just check http://api.company.com and it can list all the available services.
If your setup is not that big, it is probably not worth the trouble, so don't fear using www. But keep in mind it's best to use at least a distinct context, such as api/, so that anyone knows right off the bat that a URL is about a web service: http://www.company.com/application/api/rest / http://www.company.com/application/api/soap
Note: it's also common to use service, although api seems somewhat more descriptive (api.something.com leaves no doubt about what that page is about).
Some examples (as you can see, there is really no global standard):
Google's search API: http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=test
Twitter's search API: http://search.twitter.com/search.json?q=w00t
Facebook's Graph API: http://graph.facebook.com
Facebook's Dialog API: http://www.facebook.com/dialog (see, no standard even within Facebook!)
Weather Gov SOAP forecast: http://www.weather.gov/forecasts/xml/DWMLgen/wsdl/ndfdXML.wsdl
But many seem to keep the good ol' company-wide API "meeting points":
http://developers.google.com
http://developers.facebook.com
http://dev.twitter.com

Is it possible to use Django and Node.js?

I have a Django backend set up for user logins and user management, along with my entire set of templates, which are used to render the HTML pages shown to visitors. However, I am trying to add real-time functionality to my site, and I found a perfect Node.js library that allows two users to type in a text box and have the text appear on both their screens. Is it possible to merge the two backends?
It's absolutely possible (and sometimes extremely useful) to run multiple back-ends for different purposes. However, it opens up a few cans of worms, depending on what kind of rigour your system is expected to have, who's on your team, etc.:
State. You'll want session state to be shared between the different app servers. The easiest way to do this is to store session state externally in a framework-agnostic way. I'd suggest JSON objects in a key/value store, and you'll probably benefit from JSON Schema; see the sketch after this list.
Domains/routing. You'll need your login cookie to be available to both app servers, which means either a single domain routed by Apache/Nginx or separate subdomains routed via DNS. I'd suggest separate subdomains, for the following reason.
Websockets. I may be out of date, but to my knowledge neither Apache nor Nginx supports proxying of websockets, which means that if you want to use them you'll sacrifice the flexibility of using an HTTP server as an app proxy and instead expose Node directly via a subdomain.
Non-specified requirements. Things like monitoring, logging, error notification, build systems, testing, continuous integration/deployment, documentation, etc. all need to be extended to support a new type of component.
Skills. You'll have to pay in time or money for the skill sets required to manage a more complex application architecture.
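On the state point, a minimal sketch of framework-agnostic session storage, assuming a Redis instance on localhost and the redis-py client (the key naming and session fields are assumptions):

import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

def save_session(session_id, data):
    # Store the session as plain JSON so both Django and Node can read it.
    r.setex("session:" + session_id, 3600, json.dumps(data))  # 1-hour TTL

def load_session(session_id):
    raw = r.get("session:" + session_id)
    return json.loads(raw) if raw else {}

save_session("abc123", {"user_id": 42, "username": "alice"})
print(load_session("abc123"))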
So, my advice would be to think very carefully about whether you need this. There can be a lot of time and thought involved.
Update: There are actually companies springing up that specialise in adding real-time functionality to existing sites. I'm not going to name any names, but if you look for 'real-time' on the add-on marketplace of hosting platforms (e.g. Heroku), you'll find them.
Update 2: Nginx now has support for Websockets
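For reference, proxying websockets through Nginx looks roughly like this, assuming Node listens on port 3000 and the websocket endpoint lives under /socket/ (both assumptions):

# Requires Nginx 1.3.13 or later
location /socket/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}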
You can't merge them, but you can send messages from Django to Node.js through a queue system like Redis.
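For example, a Django view could publish events that the Node.js process subscribes to. A minimal sketch of the Python half, assuming Redis on localhost and the redis-py client (the channel name and message fields are assumptions):

import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

def broadcast_text(room, user, text):
    # Push the typed text onto a channel the Node.js process listens on
    # (on the Node side: client.subscribe('chat-events')).
    r.publish("chat-events", json.dumps({"room": room, "user": user, "text": text}))

broadcast_text("lobby", "alice", "hello")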
If you really want to use two backends, you could use a database that is supported by both backends.
Though I would not recommend it.

Web Service vs Form posting

I have two websites (www.mysite1.com and myweb2.com; both sites are ASP.NET with SQL Server as the backend) and I want to pass data from one site to the other. I am confused about whether to use a web service or a form post (from mysite1 to a page on myweb2).
Can anyone tell me the pros and cons of both?
By web service, I assume you mean a SOAP-based web service?
Anyway, the two are roughly equivalent, each with a few advantages. Posting is more lightweight, while SOAP is standardized (sort of). I would go with the more RESTful approach, because I think SOAP carries too much overhead for simple tasks while not giving much advantage.
Web services are SOAP messages (the SOAP protocol uses XML to pass messages back and forth), so your servers on both ends must understand SOAP and whatever extensions you want to talk about between them, and they should probably (but don't have to) be able to grok WSDL files (which "explain" the various service endpoints and the remote functionality available). Usually we call this the SOAP / WS-* stack, with emphasis on 'stack', as there are a few bits of software that need to be available, and the more complex the SOAP calls, the more of this stack needs to be available and maintained.
Using POST, on the other hand, is mostly associated with RESTful behaviours; for an example of such a protocol, look to HTTP itself. Inside the POST you can of course post complex XML, but people tend to use plain POST to simplify the calling, and use HTTP responses as replies. You probably don't need any extra software, as most if not all web toolkits have HTTP support. My own bias leans towards REST, in case you wonder. Through using HATEOAS you can create a really good infrastructure for self-aware systems that can modify themselves with load and availability in real time, as opposed to the SOAP way, and this lies at the centre of the argument for it; HTTP was designed with large distributed networks in mind, dealing with performance and stability. SOAP tends to be a one-stop, if-it-breaks-you're-stuffed kind of thing. (Again, remember my bias. I've written about this a lot on my blog, especially the architecture side and the impact of SOA vs. ROA. :)
There's a great debate as to which is "better", to which I can only say "it depends completely on what you want to do, how you prefer to do it, what you need it to do, your environment, your experience, the position of the sun and the moon(s), and the mood my cat is in." Eh, meaning, a lot.
I'm all for a healthy debate about this, but I tend to think SOAP is a reinvention: SOAP is an envelope with a header and a body, and if that sounds familiar, it is exactly how HTML was designed, a fact very few people seem to notice. HTTP as just a protocol for shifting stuff around is well understood and extremely well supported, and SOAP uses it to shift its XML envelopes around. Is there a real difference between shifting SOAP and HTML around? Well, yes: the big difference is that SOAP reinvents all the niceties of HTTP (caching, addressability, state, scaling), uses HTTP only for delivering the message and nothing else, and then makes the stack itself deal with those niceties. So a lot of the goodness of HTTP is ignored and recreated in another layer (hence you need a SOAP stack to deal with it), which to me seems wasteful, ignorant, and complexity-adding.
Next up is what you want to do. For really complex things, there's lots in the web services stack of standards (I think it's about 1200 pages combined these days) that could help you out, but if your needs are more modest (i.e. you're not that crazy about seriously complex security, for example), a simple POST (or GET) of a request and an envelope back with results might be good enough. Results in HTTP are, as you probably know, tagged with an HTTP content type, so lots is already supported, but you can create your own, for example application/xml+myformat (or, more correctly, application/x-xml+myformat if I remember correctly). Make the request, check that the response code is 200, and parse.
Both will work. One is heavy (the WS-* stack), depending on what your needs are; the other is more lightweight and already supported. The rest is glue, as they say.
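To make the lightweight option concrete, here is a minimal sketch of the plain-POST style, in Python purely for illustration (the endpoint URL and field names are assumptions, not part of either site):

import requests  # pip install requests

# Hypothetical receiving page on the second site.
resp = requests.post(
    "https://www.myweb2.com/receive.aspx",
    data={"user_id": 42, "comment": "Hello from site 1"},
)

# Plain HTTP does the envelope work: status code plus content type.
if resp.status_code == 200:
    print(resp.headers.get("Content-Type"))  # e.g. text/xml
    print(resp.text)  # parse the body however you like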
I would say the web service is definitely the best choice. A few pros:
- If in the future you need to add another website, your infrastructure (the web service) is already there.
- Cross-site form posting might give you problems when using cookies, or might trigger browser privacy restrictions.
- If you use form posting, you have to write the same code over and over again, while with the web service you write the code once and then use it in multiple locations. Easier to maintain, less code to write.
- Maintainability (related to the above point): of course, all the code relevant to exchanging data is in one location (your web service).
There's probably even more, like design-time support/code completion.
From my limited experience, I'd say that you'd be best off using a web service, since you can see the methods and structure of the service in your code, once you've created it at the receiving end, that is.
Also, using the form-posting method would mean you have to fake form submissions, which isn't as neat as making a web service call.
Your third way would be to get the databases talking, though I'm guessing they're disparate and can't 'see' each other?
I would suggest a web service (or WCF). As Beanie said, with a service you are able to see the methods and types of the service (that you expose), which would make for much easier and cleaner moving of data.
I agree with AlexanderJohannesen that it is debatable whether SOAP web services or RESTful APIs are better; however, if both sites are under your control and built with ASP.NET, definitely go with SOAP web services. The tools that Visual Studio provides for creating and consuming web services are just great; it won't take you more than a few minutes to create the link between the two sites.
In the site where you want to receive communication, create the web service by selecting Add Item in VS. Select Web Service and name it appropriately. Then just create a method with the logic you want to implement and add the [WebMethod] attribute, e.g.:
[WebMethod]  // exposes AddComment as a SOAP-callable operation
public void AddComment(int UserId, string Comment) {
    // validate the input and store the comment in the database here
}
Deploy this on your test server, say tst.myweb2.com.
Now on the consuming side (www.mysite1.com), select Add Web Reference, point the URL to the address of the web service we just created, give it a name, and click Add Reference. You now have a proxy class that you can call just like a local class. Easy as pie.
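And because the endpoint is plain SOAP with a published WSDL, it can be consumed from outside .NET too. A minimal sketch using Python's zeep client (the service URL and file name are hypothetical):

from zeep import Client  # pip install zeep

# .asmx services publish their WSDL at ?WSDL
client = Client("http://tst.myweb2.com/CommentService.asmx?WSDL")
client.service.AddComment(42, "Hello from another stack")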