ELMAH: not all errors appear in the interface

Even though I have all the errors in MongoDB, I am not able to see them all in the list.
I am, though, able to access a specific error by ID (localhost/elmah.axd/detail?id=...)
The message at the top of the page, "Errors 1 to 15 of total ...", is also correct.
The only thing I think may not be OK is that the time and date on the Mongo server are not the same as on the web server; I can see that the web server's time and date are displayed in the errors interface, and the errors are also being sorted by this date and time.
I couldn't find anything anywhere on how ELMAH builds the Mongo queries to extract the list of errors, or how it converts the time stored in the DB to the time shown on the web server that displays the data.
Thanks a lot!

Are multiple instances of the application writing to ELMAH? Do you have, for example, a web app and an API app that both write to it? ELMAH will only display errors for a specific application; you can specify the name in the web.config.
In the same question, a user references how the default application name is determined (using the AppDomain GUID). That's another thing that could differ when multiple servers are involved.
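For reference, a minimal sketch of pinning the name in web.config; the errorLog type shown is only illustrative, since the exact type depends on which MongoDB provider for ELMAH you are using:

<!-- Sketch only: the type attribute below is illustrative; use the one from your
     MongoDB error log provider. Setting applicationName explicitly makes every
     instance write to, and read from, the same logical application log. -->
<elmah>
  <errorLog type="Elmah.MongoDb.MongoDbErrorLog, Elmah.MongoDb"
            connectionStringName="elmah-mongodb"
            applicationName="MyWebApp" />
</elmah>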

Sitecore returns incorrect items in multi server farm

I have a Sitecore website deployed in a multi-server environment. When I make changes to Sitecore items, sometimes they are shown correctly, but sometimes old data is shown.
I understand that Sitecore caches items, but it sometimes shows the wrong data and sometimes it's fine. If it were just caching, it should at least always show the same data.
For example:
Sitecore.Globalization.Translate.TextByDomain("MyDictionary", "Category");
Sometimes it returns the correct data, sometimes the wrong data, i.e. the value from before I changed the item.
I am using Sitecore 8.0
Items get cached in memory on the individual servers, and these caches are not cleared unless you activate event queues. In addition, rendered content might be cached in the output (HTML) cache, which needs to be cleared after you publish.
Here is a guide on how to activate event queues, and here is also a good description.
Here is how to make your sites clear the output cache after a publish.
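For reference, a hedged sketch of the config involved, as an include/patch file. The setting name and handler type are standard Sitecore, but the site name "website" and the exact patch layout are assumptions you will need to adapt to your own solution:

<!-- Sketch of a Sitecore include/patch file; adjust site names and merge details for your solution. -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- Let the delivery servers process the event queue so remote publishes clear their item caches -->
      <setting name="EnableEventQueues" value="true" />
    </settings>
    <events>
      <!-- Clear the HTML (output) cache of these sites when a remote publish finishes -->
      <event name="publish:end:remote">
        <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
          <sites hint="list">
            <site>website</site>
          </sites>
        </handler>
      </event>
    </events>
  </sitecore>
</configuration>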
Thanks Jens for your help. The links really helped my understanding of a Sitecore farm.
But the issue turned out to be rather silly. For some reason, on one content delivery server the Application Pool account didn't have permission on the virtual directory.

How to create an API and then dynamically retrieve data from and add new data to it?

To start off, I am extremely sorry if my question is not clear but I have very little knowledge about web services in general and the vast nature of varying available information has driven me crazy over the past few weeks. So please do bear with me.
Summary: I want to create a live score update app for Android. (I haven't added android as a tag because I do know how to retrieve data from, say, Twitter's JSON API.) However, like the Twitter JSON API, I want to be able to add (POST, maybe?) data to the Apache 7.0 service that I have running. I then want the app to be able to retrieve this data that I have posted.
I had asked a more generic question earlier and was told that I should look up some APIs. I did that, but I have still not been able to make a breakthrough.
So my questions are:
Is setting up an API on my local web service the correct way to do this?
If so, how can I set up an API that will return JSON objects to the Android app? Also, I would need to be able to constantly update this API with new data.
Additionally, would I also need to setup a database for all this?
Any links to well explained matter would be appreciated too.
Note: I would like to carry this out using a RESTful Web Service through Jersey and use JSON Objects during retrieval.
Again, I am sorry about my terrible knowledge of web services in general, despite trying my best to research a lot. The best I could do was get my RESTful web service to respond to a GET with some pre-defined text that I had set in Eclipse.
Thanks.
If I understand you correctly, what you are trying to do is something like this:
There will be a match, or multiple matches, of some sort. Whenever a team/player scores, someone (i.e. you) will use the app to update the score. People who previously subscribed to the match will be notified and see the updated score.
Even though I'm not familiar with Java-based backends, the implementation should be fairly similar to other programming languages.
First of all, a few words about REST in general. REST is generally needed when you have to share information between multiple devices and/or users, which seems to be the case here. To implement REST you are going to need an API of some sort. On the web, APIs are implemented by web servers answering certain predefined HTTP requests.
Thus setting up an API on a web server is the correct way.
Next, a few words on databases. A database is generally needed if you want to store information persistently. This might, or might not, be what you are planning to do. If there are only going to be a few matches at the same time and you don't care about persistence of the data, you can use Java to store a collection of match objects in memory. I'm just saying it is possible, not that it is a good idea: once your server crashes or you run out of memory for whatever reason, the data is lost. (Of course, within the actual implementation you will want to cache data for current matches in some way, and keeping objects in memory is one way to do so.)
I'd recommend to use a database.
Within the database, you can then store and access information about the matches like the score, which users subscribed, who played, etc.
JSON is just a way to represent the data/objects that will be shared between the server and the client. You can use JSON to encode request and response data/bodies.
The user has to be informed about the updated score. There are two basic ways to do so: push or pull. With pull, the client checks for updated scores at fixed intervals or after certain actions. With push, the server notifies the client about changed scores, which causes it to update the information. Since you are planning a live application and using Java anyway, push seems to be the better way to go.
Last but not least, let's have a look at a possible implementation using:
Webserver (API endpoints + database)
Administrator (keeps score updated)
User (receives updates)
We assume that the server will respond to HTTP requests (e.g. POST # /api/my-endpoint) with JSON objects.
Possible flow
1)
First the administrator creates a match
REQUEST
POST # /api/matches
body: team1=someteam&team2=someotherteam
The server now will create a match object and store it in the database. The response will contain information about the object and whether the action was successful.
2)
The user asks for a list of matches
REQUEST
GET # /api/matches/current
The response will be a JSON object containing a list of current matches.
RESPONSE
{
  matches: [
    {id: 1, teams: ...}, ...
  ]
}
3)
(If push)
A user subscribes to a match
REQUEST
GET # /api/matches/SOME_MATCH_ID/observe
The user will now be added as an observer for the match. Again, the response contains information about whether the action was successful or not.
4)
The administrator updates a score
REQUEST
PUT # /api/matches/SOME_MATCH_ID
body: team1scored...
The score now gets updated on the server (in memory/database) and the user is notified about the updated score.
5)
The user gets the updated score
REQUEST
GET # /api/matches/SOME_MATCH_ID
RESPONSE
... (Updated score in some way)
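Since you mentioned Jersey, here is a minimal, hedged sketch of what such an API could look like as a JAX-RS resource. All class, field and parameter names are made up for illustration, matches are kept in memory instead of a database, and the push/subscribe part (step 3) is left out; you would also need a JSON provider such as Jackson registered so the objects are serialized automatically.

// Sketch only: in-memory storage instead of a database, no push notifications.
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;

@Path("/matches")
@Produces(MediaType.APPLICATION_JSON)
public class MatchResource {

    // Plain data holder; with a JSON provider (e.g. Jackson) registered,
    // the public fields are serialized to JSON automatically.
    public static class Match {
        public long id;
        public String team1;
        public String team2;
        public int score1;
        public int score2;
    }

    // In-memory "storage" for the sketch; a real implementation would use a database.
    private static final Map<Long, Match> MATCHES = new ConcurrentHashMap<Long, Match>();
    private static final AtomicLong IDS = new AtomicLong();

    // 1) Administrator creates a match: POST /api/matches
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Match create(@FormParam("team1") String team1,
                        @FormParam("team2") String team2) {
        Match m = new Match();
        m.id = IDS.incrementAndGet();
        m.team1 = team1;
        m.team2 = team2;
        MATCHES.put(m.id, m);
        return m;
    }

    // 2) User asks for the list of current matches: GET /api/matches/current
    @GET
    @Path("/current")
    public Collection<Match> current() {
        return MATCHES.values();
    }

    // 4) Administrator updates a score: PUT /api/matches/{id}
    @PUT
    @Path("/{id}")
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Match updateScore(@PathParam("id") long id,
                             @FormParam("score1") int score1,
                             @FormParam("score2") int score2) {
        Match m = find(id);
        m.score1 = score1;
        m.score2 = score2;
        // With push, this is also where observers of the match would be notified.
        return m;
    }

    // 5) User fetches the (updated) score: GET /api/matches/{id}
    @GET
    @Path("/{id}")
    public Match get(@PathParam("id") long id) {
        return find(id);
    }

    private static Match find(long id) {
        Match m = MATCHES.get(id);
        if (m == null) {
            throw new WebApplicationException(404);   // unknown match id
        }
        return m;
    }
}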

Standard way of implementing remote source jquery autocomplete

I have around 1M string elements against which I have to provide autocomplete. I certainly can't transfer the whole list to the client, so I have to do remote-source autocomplete.
Now, what is the standard way (data structure / algorithm) of implementing the server side of remote-source autocomplete? How should I handle the AJAX request sent by the client for autocomplete? Should I store the list in a database, or keep it in RAM in some specific data structure, etc.?
I feel that keeping it in a database would slow things down too much, but keeping it in RAM would run into the limited-RAM issue.
It is possible to get very acceptable response times for autocomplete from a remote database. For example, geonames.org has about 6 million location names, and most people, myself included, get sub-second response times for displaying the autocomplete dropdown selection.
Here is a link to a tutorial about how to build the database, the server manager (in several languages), and the Autocomplete code.
http://www.jensbits.com/2010/03/29/jquery-ui-autocomplete-widget-with-php-and-mysql/
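The question doesn't say what the server side is written in, so purely as an illustration, here is a hedged sketch of the server half in Java (JAX-RS + JDBC): an indexed prefix query against the database, returning at most a handful of rows per keystroke. The table name "terms" and column "name" are made up, and how the DataSource is obtained (JNDI, a connection pool, etc.) depends on your setup.

// Sketch only: assumes a table "terms" with an indexed column "name"; all names are illustrative.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/autocomplete")
public class AutocompleteResource {

    private final DataSource dataSource; // obtained however your app provides it (JNDI, pool, ...)

    public AutocompleteResource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // jQuery UI's remote source sends the typed text as ?term=...
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> suggest(@QueryParam("term") String term) throws Exception {
        List<String> suggestions = new ArrayList<String>();
        if (term == null || term.length() < 2) {
            return suggestions;            // don't hit the DB for very short prefixes
        }
        // Prefix match ("term%") so an index on the column can be used;
        // a leading wildcard ("%term%") would force a scan of ~1M rows.
        // LIMIT syntax varies by database (TOP / FETCH FIRST elsewhere).
        String sql = "SELECT name FROM terms WHERE name LIKE ? ORDER BY name LIMIT 10";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, term + "%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    suggestions.add(rs.getString(1));
                }
            }
        }
        return suggestions;                // serialized to a JSON array by the JSON provider
    }
}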

Tracing requests of users by logging their actions to DB in django

I want to trace user's actions in my web site by logging their requests to database as plain text in Django.
I am considering writing a custom decorator and placing it on every view that I want to trace.
However, I have some troubles in my design.
First of all, is such a logging mechanism reasonable, or will it cause performance problems because the log table will grow rapidly?
Secondly, how should my log table be designed?
I want to keep the keywords if the user calls the search view, or the item's ID if the user calls the item detail view.
Besides, users' IP addresses should be kept, but how can I separate users if they connect via a single IP address, as in many companies?
I am glad to explain in detail if you think my question is unclear.
Thanks
I wouldn't do that. If this is a production service then you've got a proper web server running in front of it, right? Apache, or nginx, or something. That can do logging, and can do it well, writing in a form that won't bloat your database, and there is a wealth of analytical tools for log analysis.
You are going to have to duplicate a lot of that functionality in your decorator, such as when you want to switch it on or off, or change the log level. The only thing you'll get by doing it all in django is the possibility of ultra-fine control, such as only logging views of blog posts with id numbers greater than X or something. But generally you'd not want that level of detail, and you'd log everything and do any stripping at the analysis phase. You've not given any reason currently why you need to do it from Django.
If you really want it in a RDBMS, reading an apache log file into Postgres or MySQL or one of those expensive ones is fairly trivial.
One thing you should keep in mind is that SQL databases don't offer very good write performance (in comparison with reads), so if you are expecting heavy load you should probably look for an in-memory solution (e.g. a key-value store like Redis).
But keep in mind, especially if you use a non-SQL solution, that you should be clear about what you want to do with the collected data (just display something like a 'log', or do more in-depth searching/querying on the data).
If you want to identify different users behind the same IP address, you should probably look for a cookie-based solution. If you are using Django's session framework, sessions are by default identified through a cookie, so you could simply use sessions. Another option is doing the logging 'asynchronously' via JavaScript after the page has loaded in the browser, which could give you more possibilities for identifying the user and avoids additional load when generating the page.

Can I make an unbuffered query in ColdFusion?

I'm in the process of porting a Java desktop application to a ColdFusion web app. This desktop app made queries with very large result sets (thousands of text records) that, while being all right on the database side, could take a lot of memory on the client side if they were buffered. For this reason, the app explicitly tells the database driver to not buffer results too much.
Now that I'm working on the ColdFusion port, I'm being hit by the buffering problem. The ColdFusion page times out during the <cfquery> call, and I'm fairly sure this is because it tries to buffer everything.
Can I make an unbuffered query in ColdFusion?
If pagination is not an option (i.e. you're writing out a report, for example), then you'll have to get low level with the Java, using setFetchSize(). See this answer. Note that the code in the answer uses the DataSourceService, which, with the latest security patches from Adobe, is no longer available on CF8. You'll have to figure out how to get a connection via the adminapi, or create a connection outside of ColdFusion. Or you could transition your datasource to use JNDI, and then you can look up the resource yourself without using the CF APIs.
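For what it's worth, here is a hedged sketch of what the low-level Java side looks like once you have a java.sql.Connection in hand (however you obtained it: adminapi, JNDI, or a driver you loaded yourself). The table, columns and fetch size are just example values.

// Sketch only: assumes an existing java.sql.Connection; names and fetch size are illustrative.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StreamingQuery {

    public static void stream(Connection con) throws Exception {
        // Some drivers only stream with autocommit off (e.g. PostgreSQL), and some
        // need a special fetch size (e.g. Integer.MIN_VALUE on older MySQL drivers).
        con.setAutoCommit(false);

        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, text_col FROM big_table",      // illustrative table/columns
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY)) {

            ps.setFetchSize(500);   // hint: fetch rows in batches instead of buffering them all

            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process each row as it arrives instead of holding the
                    // whole result set in memory
                    process(rs.getLong(1), rs.getString(2));
                }
            }
        }
    }

    private static void process(long id, String text) {
        // placeholder for whatever the report/page actually does with the row
    }
}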
I'm almost certain that ColdFusion does not provide such a mechanism. As a language, it was meant to abstract the developer away from things like that.
I'd suggest that you look into a few options:
Re-work your query to use pagination, and run it in a loop.
Use the timeout attribute on the <cfquery> to prevent timeouts from happening
Use the CreateObject() syntax to instantiate a JDBC database connection.
With the last option, what you'd actually do is access the underlying Java classes to do the querying and get the results. Take a look at this article for a quick look at the CreateObject() function.
You can also look at the Adobe Livedocs for the function, but they don't really seem helpful.
I haven't tried to use CreateObject() to do querying with the Java database access classes, but I imagine that you can probably get it to work.