If I have a multilingual site, what is the best way to pass information about language?
Right now the language is saved in a cookie. That's convenient, except that it might not be good for search engine optimization, since search bots don't use cookies.
The other option would be specifying the language in the address, like example.com/?lang=de, but then you probably need to add ?lang=xx to every link on the page.
Is there a right way?
A better way is to maintain this info in the session.
I would create a filter that parses each request, fetches the lang parameter and processes it accordingly.
Moreover, I would recommend you use the following URL pattern and get the lang from the filter:
yourapp.com/en/welcome/
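For illustration only, here is a minimal sketch of that filter idea written as WSGI middleware in Python (the answer above most likely means a servlet filter; the supported-language set, the default "en" and the environ key "app.lang" are assumptions):

    # Minimal sketch, not a drop-in implementation: pull the language out of a
    # URL like /en/welcome/, stash it for the application, and strip the prefix.
    # SUPPORTED_LANGS, the default "en" and the "app.lang" key are assumptions.
    SUPPORTED_LANGS = {"en", "de", "es"}

    class LanguageFilter:
        def __init__(self, app, default_lang="en"):
            self.app = app
            self.default_lang = default_lang

        def __call__(self, environ, start_response):
            parts = environ.get("PATH_INFO", "/").lstrip("/").split("/", 1)
            if parts and parts[0] in SUPPORTED_LANGS:
                environ["app.lang"] = parts[0]
                # Strip the language prefix so downstream routing stays unchanged.
                environ["PATH_INFO"] = "/" + (parts[1] if len(parts) > 1 else "")
            else:
                environ["app.lang"] = self.default_lang
            return self.app(environ, start_response)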
If you want all the content crawlable then you'd have to put it in the URL, either as part of the path (http://mydomain.com/en/english-content) or with separate sites/subdomains (http://english.mydomain.com/english-content).
I would use Wikipedia's approach: different URLs for different languages.
http://en.wikipedia.org for English
http://es.wikipedia.org for Spanish
I want to link to our OTRS (version 5) -- is it possible to create a URL in such a way that a special parametrized search is performed within OTRS?
I'd like to link from a webpage to something like:
https://otrs.charite.de?Ralf.Hildebrandt#charite.de
and that should display all tickets in the queue XYZ with customeruser == Ralf.Hildebrandt#charite.de
Unfortunately that's not possible out of the box - at least not in the way you probably intend to use it. All real search functions in OTRS (that I'm aware of) use HTTP POST. A parameterized POST request via a URL is possible in principle, but strongly discouraged, and it wouldn't really work if you intend to store those searches as bookmarks or something like that.
The good news - you can create a saved search and trigger that via a URL like this:
https://url.to.otrs.de/otrs/index.pl?Action=AgentTicketSearch;Subaction=Search;TakeLastSearch=1;SaveProfile=1;Profile=current%20Changes
In this case, Profile=current%20Changes would be replaced by the name of your search profile (special characters must be URL-encoded).
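As a small illustration, this is how such a URL could be assembled with the profile name URL-encoded (the base URL is the placeholder from the example above; the function name is made up):

    # Minimal sketch: build the saved-search URL for a given profile name.
    # The base URL is the placeholder from the answer above.
    from urllib.parse import quote

    BASE = ("https://url.to.otrs.de/otrs/index.pl"
            "?Action=AgentTicketSearch;Subaction=Search;"
            "TakeLastSearch=1;SaveProfile=1;Profile=")

    def saved_search_url(profile_name):
        return BASE + quote(profile_name)

    print(saved_search_url("current Changes"))
    # -> ...;Profile=current%20Changes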
I'm currently working on a more or less RESTful web service, a kind of content API for my company's articles. We currently have a resource for getting all the content of a specific article:
http://api.com/content/articles/{id}
which will return the full set of article data for the given article id.
Currently we control a lot of the article's business logic because we only serve a native app from the web service. This means we convert tags, links, images and so on in the body text of the article into a protocol the native app can understand. The same goes for a lot of the article's other attributes and data: we transform and modify its original (web) state into a state that the native app will understand.
For example, img tags will be converted from a normal <img src="http://source.com"/> into an <img src="inline-image//{imageId}"/> tag, and the same goes for anchor tags etc.
Now I have to implement a resource that can return the article data in a new representation.
I'm puzzled over how best to do this.
I could just implement a completely new resource at a different URL, like content/articles/web/{id}, and move the old one to content/articles/app/{id}.
I could also specify in the documentation of the resource that a client should always send a specific request header, maybe the Accept header, for the web service to determine which representation of the article to return.
I could also just keep the original URL and use a URL parameter like .../{id}/?version=app or .../{id}/?version=web.
What would you reckon is the best option? My personal preference leans towards option 1, simply because I think it's easier to understand for clients of the web service.
Regards, Martin.
EDIT:
I have chosen to go with option 1. Thanks for helping out and giving pros and cons. :)
I would choose #1. If you need to preserve the existing URLs you could add a new one, content/articles/{id}/native or content/native-articles/{id}/. Both are REST enough.
Working with paths makes content more easily cacheable than either the header or the parameter option. Using Content-Type for this overcomplicates the service, especially when both representations are returning JSON.
Use the HTTP concept of content negotiation: the Accept header with vendor media types.
Get the article in the native representation:
GET /api.com/content/articles/1234
Accept: application/vnd.com.example.article.native+json
Get the article in the original representation:
GET /api.com/content/articles/1234
Accept: application/vnd.com.example.article.orig+json
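Purely as an illustration of how a server might dispatch on those Accept values (shown as a Django-style view, since the question doesn't name a framework; the two serializer stubs are hypothetical stand-ins for the real transformation logic):

    # Minimal sketch: pick the representation from the Accept header.
    # The vendor media types mirror the example above.
    from django.http import JsonResponse

    NATIVE = "application/vnd.com.example.article.native+json"
    ORIG = "application/vnd.com.example.article.orig+json"

    def serialize_native(article_id):
        # Stand-in for the app-specific transformation (inline-image//{imageId} etc.)
        return {"id": article_id, "representation": "native"}

    def serialize_orig(article_id):
        # Stand-in for the untouched web representation.
        return {"id": article_id, "representation": "orig"}

    def article_view(request, article_id):
        accept = request.headers.get("Accept", "")
        if NATIVE in accept:
            return JsonResponse(serialize_native(article_id), content_type=NATIVE)
        return JsonResponse(serialize_orig(article_id), content_type=ORIG)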
Option 1 and Option 3
Both are perfectly good solutions. I like the way option 1 looks better, but that is just aesthetics; it doesn't really matter. If you choose one of these options, you should have requests to the old URL redirect to the new location using a 301.
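For illustration only (no framework is named in the question), such a permanent redirect could be wired up like this in Django, assuming the option 1 layout where the app representation moves to content/articles/app/{id}:

    # Minimal sketch of a 301 redirect from the old article URL to the new one.
    from django.urls import re_path
    from django.views.generic.base import RedirectView

    urlpatterns = [
        re_path(
            r"^content/articles/(?P<id>\d+)/?$",
            RedirectView.as_view(url="/content/articles/app/%(id)s", permanent=True),
        ),
    ]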
Option 2
This could work as well, but only if the two responses have a different Content-Type. From the description, I couldn't really tell if this was the case. I would not define a custom Content-Type in this case just so you could use Content Negotiation. If the media type is not different, I would not use this option.
Perhaps option 2 - with the header being a Content-Type?
That seems to be the way resources are served in differing formats, e.g. XML, JSON or some custom format.
I'm new to Django and I have a BIG problem. I don't like the "url pattern" philosophy of Django.
I don't want my pages to look like
http://domain.com/object/title-of-object
I want
http://domain.com/title-of-object
and of course I will have more than one type of object.
Is there an elegant way to achieve this with Django (not using hard-coded urls)?
Thanks!
Have you ever wondered whether, if what you want to do seems so hard to achieve, you're doing it wrong? What is so wrong with /foo/name-of-foo/?
I'm trying to imagine your use case and wondering if you need 'human' URLs for only a handful of pages. If so, you could go with the /foo/slug-for-foo/ approach and then use the django.contrib.redirects app to support hand-written URLs that redirect to the saner, more RESTful ones.
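For reference, enabling the redirects app is a small settings change (a settings.py excerpt; the redirects app relies on the sites framework and its fallback middleware):

    # Minimal sketch: enable django.contrib.redirects in settings.py.
    INSTALLED_APPS = [
        # ...
        "django.contrib.sites",
        "django.contrib.redirects",
    ]
    SITE_ID = 1

    MIDDLEWARE = [
        # ...
        "django.contrib.redirects.middleware.RedirectFallbackMiddleware",
    ]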
It is possible. You'll have to create one catch-all URL pattern, for which you'll create a view that will search all possible object types, find the matching one, and process and return that. Usually, this is a bad idea.
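A minimal sketch of that catch-all approach, for illustration (the Article and Product models, the slug field and the template names are assumptions):

    # urls.py: one catch-all pattern, kept at the very end of urlpatterns.
    from django.urls import re_path
    from . import views

    urlpatterns = [
        # ... more specific patterns first ...
        re_path(r"^(?P<slug>[-\w]+)/$", views.object_by_slug),
    ]

    # views.py: try every object type until one matches the slug.
    from django.http import Http404
    from django.shortcuts import render
    from .models import Article, Product  # hypothetical models with a slug field

    def object_by_slug(request, slug):
        for model, template in ((Article, "article.html"), (Product, "product.html")):
            try:
                obj = model.objects.get(slug=slug)
                return render(request, template, {"object": obj})
            except model.DoesNotExist:
                continue
        raise Http404(slug)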
Let's say I have a site where all URLs are username-specific.
For example /username1/points/list is a list of that user's points.
How do I grab the /username1/ portion of the URL from all URLs and add it as a kwarg for all views?
Alternatively, it would be great to grab the /username1/ portion and attach it to the request as request.view_user.
You might consider attacking this with middleware, specifically using process_request. This is called before the URL resolver, and you can do pretty much anything you want to the request (request.path in this case). You might strip out the username and store it in the request object. The specifics depend (obviously) on the conditions under which you do or do not want to remove the first path component.
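A minimal sketch of that middleware, purely for illustration (the attribute name request.view_user and the assumption that the first path component is always a username come from the question, not from any existing code):

    # Minimal sketch: pull the leading /username/ segment off the path and
    # stash it on the request before URL resolution happens.
    from django.utils.deprecation import MiddlewareMixin

    class UsernameMiddleware(MiddlewareMixin):
        def process_request(self, request):
            parts = request.path_info.lstrip("/").split("/", 1)
            if parts and parts[0]:
                request.view_user = parts[0]
                # Rewrite path_info so URL patterns don't need the username prefix.
                request.path_info = "/" + (parts[1] if len(parts) > 1 else "")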
Updated for comment:
Whichever way you go about it, when you call reverse() you have to give it the additional context info -- it can't just automagically figure it out for itself. Django doesn't play any man-behind-the-curtains games -- everything is straight Python and there isn't any global state floating around just off stage. I think this is a Good Thing™.
I have a resource at a URL that both humans and machines should be able to read:
http://example.com/foo-collection/foo001
What is the best way to distinguish between human browsers and machines, and return either HTML or a domain-specific XML response?
(1) The Accept type field in the request?
(2) An additional bit of URL? eg:
http://example.com/foo-collection/foo001 -> returns HTML
http://example.com/foo-collection/foo001?xml -> returns, er, XML
I do not wish to oblige machines reading the resource to parse HTML (or XHTML for that matter), though crawlers like the googlebot should still receive the HTML response.
It is reasonable to assume I control the machine readers.
If this is under your control, rather than adding a query parameter why not add a file extension:
http://example.com/foo-collection/foo001.html - return HTML
http://example.com/foo-collection/foo001.xml - return XML
Apart from anything else, that means if someone fetches it with wget or saves it from their browser, it'll have an appropriate filename without any fuss.
My preference is to make it a first-class part of the URI. This is debatable, since there are -- in a sense -- multiple URIs for the same resource. And is "format" really part of the URI?
http://example.com/foo-collection/html/foo001
http://example.com/foo-collection/xml/foo001
These are very easy to deal with in a web framework that has URI parsing to direct the request to the proper application.
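As a small illustration (any framework with URL routing would do; the view names here are hypothetical), dispatching on that path segment could look like this in Django:

    # Minimal sketch: the representation lives in the URI path and the router
    # dispatches on it. Paths follow the example above; the views are hypothetical.
    from django.urls import path
    from . import views

    urlpatterns = [
        path("foo-collection/html/<slug:foo_id>", views.foo_as_html),
        path("foo-collection/xml/<slug:foo_id>", views.foo_as_xml),
    ]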
If this is indeed the same resource with two different representations, HTTP invites you to use the Accept header as you suggest. This is probably a very reliable way to distinguish between the two scenarios. You can be plenty sure that user agents (including search engine spiders) send the Accept header properly.
About the machine agents you are going to give XML: are they under your control? In that case you can be doubly sure that Accept will work. If they do not set this header properly, you can serve XML as the default. User agents DO set the header properly.
I would try to use the Accept header for this, because this is exactly what the Accept header is there for.
The problem with having two different URLs is that it is not automatically apparent that they represent the same underlying resource. This can be bad if a user finds a URL in one program, which renders HTML, and pastes it into another, which needs XML. At that point a smart user could probably change the URL appropriately, but this is just a source of error that you don't need.
I would say adding a query string parameter is your best bet. The only way to automatically detect whether your client is a browser (human) or an application would be to read the User-Agent string from the HTTP request. But since this is easily set by any application to mimic a browser, you're not guaranteed that it will work.