ELMAH, JSON and XML Export? - elmah

I noticed recently that ELMAH has support for exporting details of an exception via JSON and XML. Out of sheer curiosity, why would anyone use this?
If I were storing my data in a SQL DB, why not retrieve the values from there? Additionally, the errors are stored in a pseudo-XML format already... why export something that's already in XML to XML?
Just wondering...

The JSON and XML export features were added to enable and encourage anyone to develop a client for ELMAH using plain HTTP for access, rather than relying on the choice of back-end storage. A client can, for example, be written to provide alternative and richer views (dashboards or Ajax-based UIs) in addition to the built-in ones, perform analytics, full-text search, and more.
A basic client would need to take one or more “home” URLs of ELMAH deployments and build a TOC of the log. This is easily done by simply downloading the CSV off the home URL. Each record in the CSV provides a URL to the detailed entry, which in turn can be used to get the full details of an error in XML or JSON.
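For illustration, a minimal HTTP-only client along those lines could look like the Python sketch below. The deployment URL is hypothetical, and the CSV column name and the "/json" suffix are assumptions to check against your own log's output:

    import csv
    import io
    import json
    import urllib.request

    ELMAH_HOME = "http://example.com/elmah.axd"  # hypothetical ELMAH home URL

    # 1. Build a table of contents by downloading the CSV off the home URL.
    with urllib.request.urlopen(ELMAH_HOME + "/download") as resp:
        toc = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

    # 2. Each record carries a URL to its detailed entry, which can also be
    #    requested as JSON or XML. The "Url" column name and the "/json" suffix
    #    are assumptions; the exact shape depends on the ELMAH version.
    detail_url = toc[0]["Url"]
    with urllib.request.urlopen(detail_url + "/json") as resp:
        error = json.load(resp)
    print(error)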

Related

Postman: How to Export/Download API Documentation from Postman?

I have developed a collection in Postman with a bunch of API endpoints. I can add team members to my Postman workspace and can also share the documentation link publicly online.
What I was looking for is a download link to download the documentation as a folder, so that I could add it to my project.
Is there anything I failed to find in Postman?
You can export the collection as JSON, as shown in the other answer, and then run a tool to convert it into an HTML document that you can host wherever you want.
I created a simple Python executable to do just that:
https://github.com/karthiks3000/postman-doc-gen
Hi @Siddiqui, currently this feature is not available. I do it by going to my collection documentation and getting it to print; when the print prompt is shown, I save the document as a PDF before finalizing the print options. Once I have it as a PDF, I have all sorts of options to do what I want. This is the closest I have come to downloading my collection documentation.
I have redacted information for privacy.
Hope this helps or leave a comment if I can be of any further assistance.
Postman-generated API documentation is meant to be shared and consumed via the workspace and URL, to help ensure it is kept up to date and does not go stagnant. Because the documentation will most likely be regularly updated with examples, new endpoints, and other elements, anything downloaded will quickly be out of date. I know that a PDF-generated version has been discussed as part of future releases, but keeping API documentation up to date is the priority.
A simple solution to this is to print the page to PDF from the web browser. It's not perfect but it is usable.
Use https://learning.postman.com/docs/getting-started/importing-and-exporting-data/ to export the collection to JSON,
and then run the script by @karthiks3000 (https://github.com/karthiks3000/postman-doc-gen).

React-Rest app, where to fetch data from database

I have an app composed of a back-end (Python with Django and Django REST Framework) and a front-end built with React.
Right now I have Excel files with data, which I import with Python in JSON format into the back-end, so they are available for a fetch in the front-end via a REST URL.
I am now moving my data into a web-based database to be queried from my app.
But I have questions regarding the structure of my app with this change.
I have URL-based queries for my new database.
Should I continue to import the query results into the back-end REST framework and, from there, pass them to React?
Or should I use the URL-based queries directly inside React, replacing the REST URL calls?
You can get an idea by referring to this URL:
https://www.andreasreiterer.at/connect-react-app-rest-api/
It describes how to bind data using REST APIs in React.
I have found some sources that presented me with two ways of solving the problem.
Case 1:
Fetch the JSON query on the server side, in your back-end, and pass that data on through your API (REST in my case). A sketch of this approach follows after the pros and cons below.
Basic source: https://www.valentinog.com/blog/tutorial-api-django-rest-react/
Pros:
No need to change the structure of the rest of my application. The data layer stays the same; before I was working with an Excel file, and now I just switch to a JSON query.
The server-client connection remains straightforward.
A credential system can be applied more easily, since the data is served through my own API.
Cons:
Harder to implement
Fetching the URL-based queries from Python needs its own configuration (the queries are usually browser-oriented, and some of them can't be performed from Python).
Harder to debug
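Here is a rough sketch of Case 1 using Django REST Framework and the requests library; the external query URL is a placeholder, and any filtering or re-shaping of the payload would happen in this view:

    import requests
    from rest_framework.decorators import api_view
    from rest_framework.response import Response

    EXTERNAL_QUERY_URL = "https://example.com/data?format=json"  # placeholder

    @api_view(["GET"])
    def dataset(request):
        # The back-end performs the URL-based query...
        upstream = requests.get(EXTERNAL_QUERY_URL, timeout=10)
        upstream.raise_for_status()
        # ...and re-exposes the data through your own API for React to fetch.
        return Response(upstream.json())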
Case 2: Query the data with the native JavaScript fetch method and handle it on the client side.
Basic sources: https://www.robinwieruch.de/react-fetching-data/
https://blog.hellojs.org/fetching-api-data-with-react-js-460fe8bbf8f2
Pros:
Faster and easier to implement
Easier to debug
JavaScript handles these queries in a simpler way than Python
Cons:
A credential system can't be applied
Less secure/robust approach
Two connections between client and server (client-to-queries and client-to-API), because the API would still be maintained to store local information.

How to parse XML feed and create Django objects from it

Is there a good package to parse XML/JSON data and map it to a Django model's fields? The ideal solution would be one that supports object updates. What can you recommend?
UPDATE:
I'll tell a bit about my requirements. There are affiliate programs that supply partners with content via XML feeds. What I need is to parse the feed from time to time, get the content out of it, and update already existing content (objects).
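A minimal approach with the standard library and the ORM might look like the sketch below; the FeedItem model, the feed URL, and the tag names are hypothetical and would have to match the affiliate program's actual schema:

    import urllib.request
    import xml.etree.ElementTree as ET

    from myapp.models import FeedItem  # hypothetical model

    def sync_feed(url="https://example.com/partner-feed.xml"):  # hypothetical URL
        with urllib.request.urlopen(url) as resp:
            root = ET.parse(resp).getroot()

        for node in root.findall(".//item"):  # assumed element name
            FeedItem.objects.update_or_create(
                external_id=node.findtext("id"),  # assumed fields
                defaults={
                    "title": node.findtext("title"),
                    "description": node.findtext("description"),
                },
            )

update_or_create handles both the initial import and later updates of already existing objects.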

Data mining to gather a website's details and put in CSV or SQL

I don't know if it's called data mining or something else.
Let's say I have a worldwide business listing site that lists all the shops. And I saw this website ABC that also lists shops, but only in Australia. They are listed page by page, with no ID.
How do I start writing a program that will crawl their pages and put the selected information from each page into a CSV, which I can then import into my website?
At least, where can I learn this? Thank you.
What you are attempting to do is known as "web scraping"; here's a good starting point for information, including the legal issues:
http://en.wikipedia.org/wiki/Web_scraping
One common framework for writing crawlers like this is Scrapy: http://scrapy.org/
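As a rough illustration, a Scrapy spider for this kind of crawl could look like the sketch below; the domain, listing URL, and CSS selectors are placeholders that would have to match the real site's markup:

    import scrapy

    class ShopSpider(scrapy.Spider):
        name = "shops"
        start_urls = ["https://example.com/shops?page=1"]  # hypothetical listing URL

        def parse(self, response):
            # Extract one record per shop listing on the page.
            for shop in response.css("div.shop-listing"):  # placeholder selector
                yield {
                    "name": shop.css("h2::text").get(),
                    "address": shop.css(".address::text").get(),
                    "phone": shop.css(".phone::text").get(),
                }
            # Follow the "next page" link until there isn't one.
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Running it with "scrapy runspider shop_spider.py -o shops.csv" writes the scraped records straight to a CSV file that you can then import into your own site.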
Yes, this process is called web scraping. If you are familiar with Java, the most useful tools here are HtmlUnit and WebDriver. You should use a headless browser to go through your pages and extract the important information using selectors (mostly XPath, or regular expressions on the HTML).

best way to deal with JSON in django

I am getting a JSON feed from a server; currently I convert it to a Python object and render it through a Django view. We are now making an update to our site, whereby:
the browser client should parse the JSON using jQuery
we will also have an Adobe AIR app which will consume the JSON directly
However, I am not keen on exposing my back-end server directly to the browser/AIR clients. What is the best way to do this via Django? Is there an existing Django app for it?
regards
django-newbie
You can use certain built-in elements of Django but I've always found that SimpleJSON makes things so much easier.
Why? With straight serialisation, you don't want to show everything. So with the built-in methods, you have to cut a lot out. With SimpleJSON, you build a dict, fill it with only what you want shown, and pump it through the SimpleJSON lib. I find inclusion a lot more secure than exclusion when it comes to exposing APIs.
It's also a lot more versatile for consuming data, as your client isn't going to be a Django site; it's an AIR app with its own ideas about how to format data (even within a spec like JSON there can and probably will be differences).
Oh and remember that there isn't a date type in JSON. (I only mention it because it caused me pain in the past)
Edit: (Thanks Cide) Django ships SimpleJSON in django.utils.simplejson, but it might not be there forever. Regardless, you can download it separately from PyPI.
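As a minimal sketch of that dict-building approach (the Article model and its fields are hypothetical):

    import json  # or, on old Django versions: from django.utils import simplejson as json

    from django.http import HttpResponse

    from myapp.models import Article  # hypothetical model

    def article_list(request):
        # Build a list of dicts containing only the fields you want to expose.
        payload = [
            {
                "id": a.id,
                "title": a.title,
                # JSON has no date type, so dates need explicit formatting.
                "published": a.published_at.isoformat(),
            }
            for a in Article.objects.all()[:20]
        ]
        return HttpResponse(json.dumps(payload), content_type="application/json")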