Autonomy developer info sources [closed]

Does anyone know a site where I can find information about Autonomy?
I'm looking for code samples, architecture posts, and things like that, about both
the Autonomy IDOL search engine
the Autonomy Interwoven content management server
Side note:
I cannot understand why there are so many barriers to accessing these products' developer resources. I thought HP would change Autonomy's policy on this, but it is still the same: there's absolutely NO access to libraries, code samples, etc.; you're forced to have a partner account.
If I could, I'd move to more open alternatives, but it's not completely in my hands. ;-(

There is little public information available about Autonomy's products.
The best way forward is to build your own network of people who know the product and have had experience with implementations.
The information that ships with the product can also help. Specifically regarding the Autonomy IDOL server and the calls you can make, some resources (a short sketch of sending these actions over HTTP follows the list):
The IDOL Administrator manual: Probably the most complete document available. It will help you understand the components which make up an IDOL architecture. However, it will not go into too much detail on complex architectures.
The Online Help: (http://<component host>:<ACI port>/a=help) Most components have online help that documents all the calls and parameters.
The GRL: (http://<component host>:<ACI port>/a=grl) Gives you the most recent commands sent to a component. This is the best way to 'reverse engineer' how the components are interacting with each other.
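To make that concrete, here is a rough Python sketch of sending ACI actions such as help and grl to a component over plain HTTP. The host, port, and query parameters are placeholders, not values from any real installation; point them at the component you want to inspect.

    # Rough sketch: send ACI actions (a=...) to an IDOL component over HTTP.
    # ACI_HOST and ACI_PORT are placeholders -- each component listens on its
    # own ACI port, so adjust them for the component you are inspecting.
    import urllib.parse
    import urllib.request

    ACI_HOST = "idol.example.local"   # placeholder
    ACI_PORT = 9000                   # placeholder

    def aci(action, **params):
        """Send a single ACI action and return the raw XML response as text."""
        url = "http://%s:%d/a=%s" % (ACI_HOST, ACI_PORT, action)
        if params:
            url += "&" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    if __name__ == "__main__":
        print(aci("getstatus")[:400])   # quick health check of the component
        print(aci("help")[:400])        # the online help mentioned above
        print(aci("grl")[:400])         # recent commands received by the component
        # Example query against a Content engine (parameters are illustrative only):
        print(aci("query", text="test", maxresults=5)[:400])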
I found that the most active discussions regarding Autonomy's product suite can be found here.

Related

Any commercial grade web site built using emweb Wt C++ toolkit? [closed]

The Wt C++ web toolkit (https://www.webtoolkit.eu/wt) has been in existence for a decade. I want to know whether there is any commercial-grade website, apart from their homepage, in existence today that I can access to ascertain its capability.
I am seeking to know about the usage of Wt in a commercial-grade website, preferably in the financial domain. Earlier there was a MusicPlyr website which was supposedly based on Wt (the only one I know of from published information); it was from a for-profit company, but that site is now down. I want to know if there are other similar sites (apart from, of course, the Wt homepage). There are some webpages listed on the Wt site, but they are at best simple webpages, not full-blown websites.
Further, G-WAN (http://gwan.com/), a C-based web framework, provides performance benchmarks under heavy-load conditions. I have searched for similar performance benchmark numbers for Wt but so far have not found any. Please point me to sites where I can find performance benchmarks of Wt (with its built-in web server).
I am looking for a convincing case for using this framework to design and code a full-blown commercial financial website. Please help me find the above information.
Regards
Rathnadhar K V
From their GitHub:
Demos, examples
The homepage is itself a Wt application and also contains various examples.

Communication method for data exchange between a server and several clients for 10+ years [closed]

We're running an experiment which will involve collecting data from multiple stations around the world. Each station will provide HDF5 files with magnetic field measurements at a rate of 1 kHz, plus some auxiliary data, in real time. The latency is going to be a few minutes.
I'm assigned to design this program (in C++, with a client/server model, the server on Linux and the clients cross-platform), and apparently I'll be designing it from scratch. My first concern is to avoid really doing everything from scratch, because that would be more error-prone and plain wrong, so my question is: what information/file transfer protocols/libraries should I use so that
The program can live for 10+ years with minimal maintenance
I can have very good support from the community for when I need help.
Since we need something relatively secure, my first thought was libssh (the only cross-platform open-source library available out there for SSH), but after discussing with some pros I realized that the support isn't so wonderful, because only a few people work with libssh. The pros hesitated to suggest OpenSSL, but with OpenSSL I'll have to write my own authentication (apparently; I'm not an expert, and that's why I'm asking).
What would you suggest? Please share your view on whether I should go for OpenSSL, libssh, or something else.
PS: Please, if you're going to start off by saying this question is off-topic, move on and ignore it. Consider being helpful rather than critical.
If you require any additional information, please ask.
I think that OpenSSL might be a good choice.
No, you do not have to "write your own authentication"; you just need to generate certificates and keys and put them in the right places. That is all.
I would suggest looking at the examples in <openssl-source-dir>/demos and <openssl-source-dir>/apps to get started. Reading a book about OpenSSL would also be a good idea, for many other reasons (sometimes not directly related to SSL/TLS).
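The question is tagged C++, but since Python's standard ssl module is a thin wrapper over OpenSSL, a short Python sketch may make the certificate flow concrete; the C equivalents (SSL_CTX_load_verify_locations, SSL_CTX_set_verify, and friends) are what you will find in the demos directory. The host, port, and file names below are placeholders, not part of any real deployment.

    # Conceptual sketch of a TLS client that authenticates the server against a
    # project-specific CA. File names, host, and port are placeholders.
    import socket
    import ssl

    CA_FILE = "ca.pem"                       # CA certificate every station trusts
    SERVER_HOST = "data-server.example.org"  # placeholder
    SERVER_PORT = 8443                       # placeholder

    # Trust only our own CA instead of the system certificate store.
    context = ssl.create_default_context(cafile=CA_FILE)
    # For mutual (client) authentication, additionally call
    # context.load_cert_chain("client.crt", "client.key") here and have the
    # server require and verify client certificates.

    with socket.create_connection((SERVER_HOST, SERVER_PORT), timeout=30) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=SERVER_HOST) as tls:
            print("negotiated", tls.version())
            print("server certificate subject:", tls.getpeercert().get("subject"))
            # The application protocol is up to you; this only shows the secured channel.
            tls.sendall(b"station-01: ready to upload next HDF5 chunk\n")
            print("server replied:", tls.recv(4096).decode(errors="replace"))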
I hope that helps.

Django-openauth-id documentation and installation guides [closed]

In a few questions this package was recommended for providing Django with OpenID capabilities.
I'm new to Django, and as one of my first projects I'm trying to replicate Stack Overflow's login and registration mechanisms. The only two documents that relate to the usage and installation of the package are the README and openid.txt files. Edit: I forgot to mention the example in their code base.
I implemented what the files and example show, but so far I still feel lost in terms of actually understanding how the mechanism works and how to build a site with django-openid-auth integration.
The questions I have involve:
Best-practice way to include multiple OpenID providers
Proper way to connect to the Django User model
Handling any security, privacy, etc. issues that may arise
I have put up an example on GitHub of using django-openid-auth with openid-selector (http://code.google.com/p/openid-selector/) for a nice UI. See if this helps:
https://github.com/rajasaur/openid_userprofiles
If something is not clear from the examples, please ask and I'd be more than happy to help.
Imho:
You need to include each ID provider in a separate authentication backend.
Best practice is also to use the built-in User model.
For an example, look at a plugin that provides multiple authentication providers:
django-social-auth on GitHub.
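As a rough sketch of that advice (not taken from either package's documentation, so double-check the exact module paths and setting names against the versions you install), the wiring mostly happens in settings.py:

    # settings.py sketch: several providers, each behind its own authentication
    # backend, while keeping Django's built-in User model. Backend paths are
    # illustrative -- confirm them against the django-openid-auth /
    # django-social-auth releases you actually use.
    AUTHENTICATION_BACKENDS = (
        'django_openid_auth.auth.OpenIDBackend',        # OpenID providers
        'social_auth.backends.twitter.TwitterBackend',  # example OAuth provider
        'django.contrib.auth.backends.ModelBackend',    # keep normal password login
    )

    LOGIN_URL = '/openid/login/'

    # django-openid-auth options (check its README/openid.txt for the full list):
    OPENID_CREATE_USERS = True                 # create a User on first successful login
    OPENID_UPDATE_DETAILS_FROM_SREG = True     # fill name/email from simple registration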
Hope that will help...

Wiki software for documenting APIs [closed]

What's an advisable way of documenting and sharing APIs (e.g. HTTP web-services)?
The requirements are:
A Wiki type system in which anyone can edit any page.
An easy way to write an API spec so that the styling/formatting is applied automatically, rather than having to manually add the styling for each individual page.
I would use WordPress, except that it's not really a wiki system; it's more of a blog engine. I want a nice, clean, structured hierarchy of pages, and the ability to click and edit instantly.
I tried Google Sites, but this also seems to be unsuitable, because it doesn't allow me to create a consistent style for APIs. The only control I have over styling is "themes", which change the look & feel of the whole site and aren't specific enough.
I found a hosted solution here, but at $499 per year I'm sure we can do better.
Any suggestions?
Many projects use Trac. Here is an example of a project that uses it: http://djangobb.org/wiki
Trac integrates a wiki, issue tracking, and source control.
You might consider using something like Doxygen to generate an initial snapshot and then just wikify that.
A similar question was also posted here: Wiki solution for APIs documentations?
There I suggested using MindTouch.
@jonathan, I just saw your comment about Trac adding too much complexity. You'll likely find the same with MindTouch, but that's because you're asking for a solution to a specific problem, and the suitable tools available offer many more capabilities (i.e., complexities).

Is there a list of known web crawlers? [closed]

I'm trying to get accurate download numbers for some files on a web server. I look at the user agents; some are clearly bots or web crawlers, but for many I'm not sure whether they are web crawlers or not, and they account for many downloads, so it's important for me to know.
Is there a list somewhere of known web crawlers, with documentation such as user agent, IPs, behavior, etc.?
I'm not interested in the official ones, like Google's, Yahoo's, or Microsoft's. Those are generally well behaved and self-identified.
I usually use http://www.user-agents.org/ as a reference; I hope this helps you out.
You can also try http://www.robotstxt.org/db.html or http://www.botsvsbrowsers.com.
I'm maintaining a list of crawler user-agent patterns at https://github.com/monperrus/crawler-user-agents/.
It's collaborative; you can contribute to it with pull requests.
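For example, here is a rough sketch of matching logged user agents against those patterns. The file name and its "pattern" field follow the format that repository publishes, so verify them against the copy you download; the log parsing is deliberately naive and assumes a combined-format access log.

    # Rough sketch: flag user agents that match a known crawler pattern from the
    # crawler-user-agents JSON list, then count the remaining downloads.
    import json
    import re

    with open("crawler-user-agents.json", encoding="utf-8") as f:
        CRAWLER_PATTERNS = [re.compile(entry["pattern"]) for entry in json.load(f)]

    def is_known_crawler(user_agent):
        return any(p.search(user_agent) for p in CRAWLER_PATTERNS)

    human_downloads = 0
    with open("access.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            if line.count('"') < 2:
                continue
            user_agent = line.rsplit('"', 2)[-2]      # UA is the last quoted field
            if not is_known_crawler(user_agent):
                human_downloads += 1

    print("downloads not attributed to known crawlers:", human_downloads)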
http://www.robotstxt.org/db.html is a good place to start. They have an automatable raw feed if you need that too. http://www.botsvsbrowsers.com/ is also helpful.
Unfortunately, we've found that bot activity is too numerous and varied to be able to filter it accurately. If you want accurate download counts, your best bet is to require JavaScript to trigger the download. That's basically the only thing that will reliably filter out the bots, and it's also why all site traffic analytics engines these days are JavaScript-based.
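A minimal sketch of that idea, with placeholder paths and a dummy payload: the counted URL is only ever requested from a small script, so clients that never execute JavaScript never inflate the counter.

    # Minimal sketch of JavaScript-gated downloads. Paths, port, and the payload
    # are placeholders; a real setup would stream the actual file and persist the
    # counter somewhere durable.
    import http.server
    import socketserver

    PAGE = b"""<!doctype html>
    <a href="#" id="dl">Download dataset</a>
    <script>
      document.getElementById('dl').addEventListener('click', function (e) {
        e.preventDefault();
        window.location = '/counted/dataset.zip';  // only JS-capable clients get here
      });
    </script>"""

    download_count = 0

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            global download_count
            if self.path == "/":
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.end_headers()
                self.wfile.write(PAGE)
            elif self.path == "/counted/dataset.zip":
                download_count += 1                  # the number you actually report
                self.send_response(200)
                self.send_header("Content-Type", "application/zip")
                self.end_headers()
                self.wfile.write(b"placeholder bytes; stream the real file here")
            else:
                self.send_error(404)

    if __name__ == "__main__":
        with socketserver.TCPServer(("", 8000), Handler) as httpd:
            httpd.serve_forever()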