Is there any way to modify the default protocol schemes recognized by the Rich Edit control when using EM_AUTOURLDETECT? - c++

I'm trying to determine if there is any way to modify the default set of protocol schemes that the Rich Edit control recognizes when using EM_AUTOURLDETECT, as described here:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb787991(v=vs.85).aspx
In other words, when the EM_AUTOURLDETECT message is sent to the control and LPARAM is NULL, the following set of protocol schemes are recognized and displayed as hyperlinks:
callto
file
ftp
gopher
http
https
mailto
news
notes
nntp
onenote
outlook
prospero
tel
telnet
wais
webcal
My question is whether it is possible to alter (add to) this list.
Is it hard-coded, or might it be stored in the registry somewhere?
I realize that it is possible to specify the list explicitly via LPARAM, but I am looking for a way to accomplish this without modifying the existing code (which is passing NULL for LPARAM).
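For reference, here is roughly what specifying the list explicitly looks like in code. This is only a sketch based on the MSDN page linked above: the colon-terminated scheme-string format and its availability depend on the Rich Edit version in use, and the "svn" scheme is just an illustrative addition.

    #include <windows.h>
    #include <richedit.h>

    // Sketch: replace the default scheme list with an explicit one.
    // Per MSDN, when wParam is AURL_ENABLEURL, lParam may point to a
    // null-terminated string of colon-terminated scheme names.
    void EnableCustomUrlSchemes(HWND hwndRichEdit)
    {
        const wchar_t schemes[] = L"http:https:ftp:mailto:svn:"; // "svn" is hypothetical
        SendMessageW(hwndRichEdit, EM_AUTOURLDETECT, AURL_ENABLEURL,
                     reinterpret_cast<LPARAM>(schemes));
    }

This is, of course, exactly the code change the question is trying to avoid; it is shown only to make clear what the LPARAM path does.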
More specifically, I'm trying to find a way to get TortoiseSVN's log view dialog to recognize a URL protocol which isn't on this list, without modifying the code. Here's a link to the related code:
http://code.google.com/searchframe#XJa9F1p-bAg/trunk/src/TortoiseProc/LogDialog/LogDlg.cpp&q=EM_AUTOURLDETECT%20package:http://tortoisesvn%5C.googlecode%5C.com&ct=rc&cd=1&sq=
This may be more of a SuperUser question, but since it relates to the way EM_AUTOURLDETECT works, I'm hoping someone here might know the answer.
Does anyone know if the source for the Rich Edit control is available anywhere?

REST service, resource with two different outputs - how would you do it?

I'm currently working on a more or less RESTful web service, a type of content API for my company's articles. We currently have a resource for getting all the content of a specific article:
http://api.com/content/articles/{id}
will return the full set of article data for the given article ID.
Currently we control a lot of the article's business logic because we only serve a native app from the web service. This means we convert tags, links, images, and so on in the body text of the article into a protocol the native app can understand. The same goes for a lot of different attributes and data on the article: we transform and modify their original (web) state into a state the native app will understand.
For example, img tags will be converted from a normal <img src="http://source.com"/> into an <img src="inline-image//{imageId}"/> tag; the same goes for anchor tags, etc.
Now I have to implement a resource that can return the article's data in a new representation.
I'm puzzled over how best to do this.
1. I could implement a completely new resource at a different URL, like content/articles/web/{id}, and move the old one to content/article/app/{id}.
2. I could specify in my documentation of the resource that a client should always send a specific request header, maybe the Accept header, for the web service to determine which representation of the article to return.
3. I could just keep the original URL and use a URL parameter like .../{id}/?version=app or .../{id}/?version=web.
What would you reckon would be the best option? My personal preference leans towards option 1, simply because I think it's easier for clients of the web service to understand.
Regards, Martin.
EDIT:
I have chosen to go with option 1. Thanks for helping out and giving pros and cons. :)
I would choose #1. If you need to preserve the existing URLs, you could add a new one: content/articles/{id}/native or content/native-articles/{id}/. Both are REST enough.
Working with paths makes content more easily cacheable than either the header or the parameter option. Using Content-Type overcomplicates the service, especially when both representations return JSON.
Use the HTTP concept of Content Negotiation. Use the Accept header with vendor types.
Get the articles in the native representation:
GET /api.com/content/articles/1234
Accept: application/vnd.com.example.article.native+json
Get the articles in the original representation:
GET /api.com/content/articles/1234
Accept: application/vnd.com.example.article.orig+json
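Server-side, the dispatch on that header is straightforward; a minimal C++ sketch (the vendor type string mirrors the examples above, everything else here is assumed):

    #include <string>

    enum class Representation { Native, Original };

    // Sketch: choose the representation from the raw Accept header value.
    // A real server would parse the header properly (lists, q-values).
    Representation pickRepresentation(const std::string& acceptHeader)
    {
        if (acceptHeader.find("application/vnd.com.example.article.native+json")
                != std::string::npos)
            return Representation::Native;
        return Representation::Original;
    }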
Option 1 and Option 3
Both are perfectly good solutions. I like the look of Option 1 better, but that is just aesthetics; it doesn't really matter. If you choose one of these options, you should have requests to the old URL redirect to the new location using a 301.
Option 2
This could work as well, but only if the two responses have different Content-Types. From the description, I couldn't really tell if this was the case. I would not define a custom Content-Type just so you could use Content Negotiation; if the media type is not different, I would not use this option.
Perhaps option 2 - with the header being a Content-Type?
That seems to be the way resources are served in differing formats, e.g. XML, JSON, or some custom format.

Read AICC Server response in cross domain implementation

I am currently trying to develop a web activity that a client would like to track via their Learning Management System. Their LMS uses the AICC standard (HACP binding), and they keep the actual learning objects on a separate content repository.
Right now I'm struggling with the types of communication between the LMS and the "course", given that they sit on two different servers. I'm able to retrieve the sessionId and the aicc_url from the URL string when the course launches, and I can successfully post values to the aicc_url on the LMS.
The difficulty is that I cannot read and parse the return response from the LMS (which is formatted as plain text). AICC stipulates that the course start by posting a "getParam" command to the aicc_url with the session ID in order to retrieve information like completion status, bookmarking information from previous sessions, user ID information, etc., all of which I need.
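For context, the HACP exchange involved here is just an HTTP form post and a plain-text reply, roughly like the following (field names per the AICC HACP spec as I understand it; values invented):

    POST <aicc_url> HTTP/1.1
    Content-Type: application/x-www-form-urlencoded

    command=GetParam&version=2.2&session_id=<sessionId from the launch URL>

and the LMS replies with something like:

    error=0
    error_text=Successful
    aicc_data=[Core]
    Student_ID=12345
    Lesson_Location=page5
    Lesson_Status=I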
I have tried three different approaches so far:
1 - I started with jQuery (1.7) and AJAX, which is how I would typically go about a same-server implementation. This returned a "no transport" error on the XMLHttpRequest. After some forum reading, I tried making sure that the AJAX call's crossDomain property was set to true, as well as a recommendation to insert $.support.cors = true above the AJAX call, neither of which helped.
2 & 3 - I tried using an old-school frameset with a form in a bottom frame, which would submit and refresh with the returned text from the LMS, and then reading that via JavaScript; and then a variation on that using an iframe as the target of an actual form, with an onload handler to read and parse the contents. Both of these approaches worked in a same-server environment, but fail in the cross-domain environment.
I'm told that all the other courses running off the content repository bookmark as well as track completion, so obviously it is possible to read the return values from the LMS somehow; AICC is pitched frequently as working in cross-server scenarios, so I'm thinking there must be a frequently used method of doing this within the AICC structure that I am overlooking. My forum searches so far haven't turned up anything that's gotten me much further, so if anyone has experience with cross-domain AICC implementations I could certainly use recommendations!
The only idea I have left is to set up a PHP "relay" on the same server as the course: the front-end page would send values to it, the PHP script would submit those to the LMS, and it would relay the return text from the LMS back to the front-end iframe or AJAX call so that everything appears to come from the same domain. I'm not sure if there's a way to solve the issue without going server-side; it seems likely there must be a common solution to this within AICC.
Thanks in advance!
Edits and updates:
For anyone encountering similar problems, I found a few resources that may help explain the problem as well as some alternate solutions.
The first is specific to Plateau, a big player in the LMS industry that was acquired by Successfactors. It's some documentation they provide on setting up a proxy to handle cross-domain content:
http://content.plateausystems.com/ContentIntegration/content/support_files/Cross-domain_Proxlet_Installation.pdf
The second I found was a slide presentation from Successfactors that highlights the challenge of cross-domain content and illustrates some back-end ideas for resolving it, including the use of reverse proxies. The relevant parts start around slides 21-22 (page 11 in the PDF).
http://www.successfactors.com/static/docs/successconnect/sf/successfactors-content-integration-turley.pdf
Hope that helps anyone else out there trying to resolve the same issues!
The answer in this post may lead you in the right direction:
Best Practice: Legitimate Cross-Site Scripting
I think you are on the right track with setting up a PHP "relay." This is similar to choice #1 in the answer from the other post, and it seems to make the most sense given what you described in your question.

Opening a carbon c++ program from a custom url on OSx

I have been able to set the plist for my project so that a given URL opens the project. However, I can't get it to pass params to the application (custom URLs are built based on the user).
Is there a way to pass the params as command line arguments?
The scheme is essentially
url:userid
I need to be able to get the user ID in the application.
Is there a way to do this? I know that with Cocoa you can create an app delegate to handle this, but I need a Carbon way to do it.
Thanks in advance!
Install an Apple Event handler to recognize URLs (both the suite and the event name have the same four-character code, 'GURL').
The event's direct object is a URL string. I would expect that string to contain the entire original URL, including any parameters that were encoded into it (e.g. if your custom scheme was xyz://some/data?param1=abc&param2=def, you should receive all of that).
Another important step is to register as a handler for that URL type in your Info.plist file. Read up on CFBundleURLTypes for more.
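A minimal Carbon sketch of the above (standard Apple Event API; extracting the user ID from the URL string is left as a stub):

    #include <Carbon/Carbon.h>

    // 'GURL' handler: the event's direct object is the full URL string.
    static OSErr HandleGetURL(const AppleEvent* event, AppleEvent* reply,
                              SRefCon refcon)
    {
        char url[1024];
        Size actualSize = 0;
        DescType actualType;
        OSErr err = AEGetParamPtr(event, keyDirectObject, typeChar, &actualType,
                                  url, sizeof(url) - 1, &actualSize);
        if (err != noErr)
            return err;
        url[actualSize] = '\0';
        // url now holds e.g. "url:userid" - parse the user ID out here.
        return noErr;
    }

    void InstallURLHandler(void)
    {
        // kInternetEventClass and kAEGetURL are both 'GURL'.
        AEInstallEventHandler(kInternetEventClass, kAEGetURL,
                              NewAEEventHandlerUPP(HandleGetURL), 0, false);
    }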

IE user agent in wxWidgets

I'm currently using IE as an ActiveX COM control in wxWidgets and want to know if there is an easy way to change the user agent that will always work.
At the moment I'm changing the header, but this only works when I manually load the link (i.e., call setUrl).
The only way that will "always work," so far as I've been able to find, is changing the user-agent string in the registry. That will, of course, affect every web browser instance running on that machine.
You might also try a Google search on DISPID_AMBIENT_USERAGENT. From this Microsoft page:
MSHTML will also ask for a new user agent via DISPID_AMBIENT_USERAGENT when navigating to clicked hyperlinks. This ambient property can be overridden, but it is not used when programmatically calling the Navigate method; it will also not cause the userAgent property of the DOM's navigator object or clientInformation behavior to be altered - this property will always reflect Internet Explorer's own UserAgent string.
I'm not familiar with the MSHTML component, so I'm not certain that's helpful.
I hope that at least gives you a place to start. :-)
I did a bit of googling today with the hint you provided, Head Geek, and I worked out how to do it.
wxWidgets uses an ActiveX wrapper class called FrameSite that handles the Invoke requests. What I did was make a new class that inherits from this, handles the DISPID_AMBIENT_USERAGENT event, and passes all others on. Now I can return a different user agent.
Thanks for the help.
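For anyone trying the same thing, the shape of that subclass is roughly as follows. This is a hypothetical sketch: FrameSite is internal to wxWidgets' ActiveX container code, so the exact base-class name and Invoke signature need to be checked against your wx version.

    #include <windows.h>
    #include <mshtmdid.h>  // DISPID_AMBIENT_USERAGENT

    // Sketch: intercept the user-agent ambient property, delegate the rest.
    class UserAgentFrameSite : public FrameSite  // FrameSite from wx's ActiveX code
    {
    public:
        HRESULT STDMETHODCALLTYPE Invoke(DISPID dispIdMember, REFIID riid,
            LCID lcid, WORD wFlags, DISPPARAMS* pDispParams, VARIANT* pVarResult,
            EXCEPINFO* pExcepInfo, UINT* puArgErr)
        {
            if (dispIdMember == DISPID_AMBIENT_USERAGENT && pVarResult != NULL)
            {
                V_VT(pVarResult) = VT_BSTR;
                V_BSTR(pVarResult) = SysAllocString(L"MyCustomAgent/1.0");  // hypothetical UA
                return S_OK;
            }
            return FrameSite::Invoke(dispIdMember, riid, lcid, wFlags,
                                     pDispParams, pVarResult, pExcepInfo, puArgErr);
        }
    };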
Head Geek already told you where in the registry IE will look by default.
This is just a default, though. If you implement IDocHostUIHandler::GetOptionKeyPath (http://msdn.microsoft.com/en-us/library/aa753258(VS.85).aspx) or IDocHostUIHandler2::GetOverrideKeyPath (http://msdn.microsoft.com/en-us/library/aa753274(VS.85).aspx), IE will use that registry entry instead.
You'll probably want to use Sysinternals' RegMon to debug this.

Best way to decide on XML or HTML response?

I have a resource at a URL that both humans and machines should be able to read:
http://example.com/foo-collection/foo001
What is the best way to distinguish between human browsers and machines, and return either HTML or a domain-specific XML response?
(1) The Accept type field in the request?
(2) An additional bit of URL? eg:
http://example.com/foo-collection/foo001 -> returns HTML
http://example.com/foo-collection/foo001?xml -> returns, er, XML
I do not wish to oblige machines reading the resource to parse HTML (or XHTML, for that matter). Machines like the Googlebot should receive the HTML response.
It is reasonable to assume I control the machine readers.
If this is under your control, rather than adding a query parameter why not add a file extension:
http://example.com/foo-collection/foo001.html - return HTML
http://example.com/foo-collection/foo001.xml - return XML
Apart from anything else, that means if someone fetches it with wget or saves it from their browser, it'll have an appropriate filename without any fuss.
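If you go the extension route, the server-side mapping is trivial; a hedged sketch (names are mine, not from the answer):

    #include <string>

    // Sketch: map a trailing ".xml" to the XML media type, default to HTML.
    std::string contentTypeFor(const std::string& path)
    {
        const std::string::size_type dot = path.rfind('.');
        if (dot != std::string::npos && path.substr(dot) == ".xml")
            return "application/xml";
        return "text/html";
    }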
My preference is to make it a first-class part of the URI. This is debatable, since there are -- in a sense -- multiple URIs for the same resource. And is "format" really part of the URI?
http://example.com/foo-collection/html/foo001
http://example.com/foo-collection/xml/foo001
These are very easy to deal with in a web framework that has URI parsing to direct the request to the proper application.
If this is indeed the same resource with two different representations, HTTP invites you to use the Accept header, as you suggest. This is probably a very reliable way to distinguish between the two different scenarios. You can be plenty sure that user agents (including search engine spiders) send the Accept header properly.
About the machine agents you are going to give XML to: are they under your control? In that case you can be doubly sure that Accept will work. If they do not set this header properly, you can serve XML as the default. User agents DO set the header properly.
I would try to use the Accept header for this, because that is exactly what the Accept header is there for.
The problem with having two different URLs is that it is not automatically apparent that the two represent the same underlying resource. This can be bad if a user finds a URL in one program, which renders HTML, and pastes it into the other, which needs XML. At this point a smart user could probably change the URL appropriately, but this is just a source of error that you don't need.
I would say adding a query string parameter is your best bet. The only way to automatically detect whether your client is a browser (human) or an application would be to read the User-Agent string from the HTTP request. But since this is easily set by any application to mimic a browser, you're not guaranteed that this will work.