Markdown table in Postman documentation not rendered properly in newman-htmlextra report

I am using newman with htmlextra to run a Postman collection and generate an HTML report.
I have used a Markdown table in the Postman documentation section, something like this:
| CODE | STATUS | RESULT |
| --- | --- | --- |
| 200 | success | |
| 404 | no_data_found | No saved report found for the logged-in user |
It renders properly in Postman.
However, when I open the htmlextra report, it looks something like this:
[screenshot: htmlextra report]
Has anyone come across this issue before?
Any ideas would be helpful.
Thanks in advance.

Related

Couldn’t generate collection: Invalid schema supplied

I'm using the Windows Postman v7.13.0 app and it throws a cryptic message when trying to generate a collection from an OpenAPI spec.
Is there any way to get more detail on what the issue is? Any validator or debug option in Postman?
Couldn’t generate collection
Invalid schema supplied
NOTE: The same openapi: 3.0.2 spec works in Swagger UI and with the OpenAPI Generator.
I also checked the Postman console, and no logs appear that describe the issue further.
I had this very same issue and fixed it by validating my definition, as indicated in the previous comment, using editor.swagger.io.
I got a similar error in Postman:
Couldn’t generate collection
Could not convert the given schema
Then I tried to upload swagger.json to https://editor.swagger.io/ as proposed above. It showed many errors.
I opened swagger.json in Notepad and, in the first line, replaced "swagger" with "openapi". Then I saved the file and imported it into Postman as usual (File > Import..., Upload Files).
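For anyone who wants to script that edit rather than use Notepad, here is a minimal sketch (hypothetical, not an official tool); it assumes the spec is a JSON file named swagger.json and simply automates the same key rename the answer describes:

```python
# Hypothetical script mirroring the manual edit described above: rename the
# top-level "swagger" key to "openapi" before importing into Postman.
# (JSON key order does not matter, so the renamed key ending up last is fine.)
import json

with open("swagger.json") as f:
    spec = json.load(f)

if "swagger" in spec:
    spec["openapi"] = spec.pop("swagger")  # key rename only; value untouched

with open("swagger.json", "w") as f:
    json.dump(spec, f, indent=2)
```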

Postman: How to Export/Download API Documentation from Postman?

I have developed a collection in Postman with a bunch of API endpoints. I can add team members to my Postman workspace and can also share the documentation link publicly online.
What I was looking for is a download link, so that I could download the documentation as a folder and add it to my project.
Is there anything I failed to find in Postman?
You can export the collection as JSON, as shown in the other answer, and then run a tool to convert it into an HTML document that you can host wherever you want.
I created a simple Python executable to do just that:
https://github.com/karthiks3000/postman-doc-gen
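For illustration, here is a minimal sketch of that convert-to-HTML idea (this is not the postman-doc-gen tool itself); it assumes a Collection v2.1 export saved as collection.json and flattens folder nesting into headings:

```python
# Minimal sketch: turn an exported Postman Collection v2.1 JSON file into a
# bare-bones HTML page listing folders, request names, methods, and URLs.
import html
import json

def walk(items, out, depth=2):
    for item in items:
        name = html.escape(item.get("name", ""))
        if "item" in item:  # a folder: emit a heading and recurse
            out.append(f"<h{depth}>{name}</h{depth}>")
            walk(item["item"], out, min(depth + 1, 6))
        else:  # a request: emit its name, method, and URL
            req = item.get("request", {})
            if isinstance(req, str):  # a request can also be a bare URL string
                req = {"url": req}
            url = req.get("url", "")
            raw = url.get("raw", "") if isinstance(url, dict) else str(url)
            out.append(f"<h{depth}>{name}</h{depth}>")
            out.append(f"<p><code>{html.escape(req.get('method', ''))} "
                       f"{html.escape(raw)}</code></p>")

with open("collection.json") as f:
    collection = json.load(f)

parts = [f"<h1>{html.escape(collection.get('info', {}).get('name', 'API'))}</h1>"]
walk(collection.get("item", []), parts)

with open("docs.html", "w", encoding="utf-8") as f:
    f.write("\n".join(parts))
```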
Hi @Siddiqui, currently this feature is not available. I do it by going to my collection documentation and printing it; when the print prompt is shown, I save the document as a PDF before finalizing the print options. Once I have it as a PDF, I have all sorts of options to do with it as I want. This is the closest I have come to downloading my collection documentation.
I have redacted information for privacy.
Hope this helps or leave a comment if I can be of any further assistance.
Postman-generated API documentation is meant to be shared and consumed via a workspace and URL, to help ensure it is kept up to date and does not go stagnant. Because documentation will most likely be updated regularly with examples, new endpoints, and other elements, anything downloaded will quickly be out of date. I know that a PDF version has been discussed as part of future releases, but keeping API documentation up to date is the priority.
A simple solution to this is to print the page to PDF from the web browser. It's not perfect but it is usable.
Use https://learning.postman.com/docs/getting-started/importing-and-exporting-data/ to export the documentation to JSON, and then run the script by @karthiks3000 (https://github.com/karthiks3000/postman-doc-gen).

Access Webpage With Credentials and Cookies From Command Line

I am trying to access a proprietary website which provides access to a large database. The database is quite large (many billions of entries). Each entry in the database is a link to a webpage that is essentially a flat file containing the information that I need.
I have about 2000 entries from the database and their corresponding webpages. There are two related issues that I am trying to resolve:
1. How to get wget (or any other similar program) to read cookie data. I downloaded my cookies from Google Chrome (using https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg?hl=en), but for some reason the HTML downloaded by wget still cannot be rendered as a webpage. Similarly, I have not been able to get Google Chrome to read cookies from the command line. These cookies are needed to access the database, since they contain my credentials.
2. In my context it would be OK if the webpage were downloaded as a PDF, but I cannot figure out how to download a webpage as a PDF using wget or similar tools. I tried automate-save-page-as (https://github.com/abiyani/automate-save-page-as) but I continuously get an error about the browser not being in my PATH.
I solved both of these issues:
Problem 1: I switched away from wget, curl, and Python's requests to simply using the Selenium WebDriver in Python. With Selenium, I did not have to deal with issues such as passing cookies, headers, POST and GET, since it actually opens a browser. This also has the plus that, as I was writing the Selenium script, I could inspect the page and see what it was doing as it was doing it.
Problem 2: Selenium exposes page_source, which returns the HTML of the current page. When I tested it, the HTML rendered correctly.
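A minimal sketch of that Selenium flow (the URLs are placeholders, not the real site; it assumes selenium and a matching Chrome driver are installed):

```python
# Sketch of the approach described above: let a real browser handle cookies
# and authentication, then save the rendered HTML via page_source.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder login page
# ... authenticate here (manually, or with driver.find_element(...) calls);
# the browser session then carries the credential cookies for you ...

driver.get("https://example.com/db/entry/12345")  # placeholder entry page
html_source = driver.page_source  # rendered HTML of the current page

with open("entry.html", "w", encoding="utf-8") as f:
    f.write(html_source)

driver.quit()
```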

Web crawler: web content does not show up in html code

I am doing basic web crawler work on this web page (just for study purposes, and I have their permission):
http://www.seattle.gov/council/calendar#/?i=0
What I want to do is get every event's "Time", "Description", and "Location" from that form. I tried Python regular expressions, but it looks like this information does not show up in the HTML source of this page. I am using Selenium instead, but I still don't know where to find this information.
Sometimes, things are in front of you but you don't see them.
You can fetch/extract that data from their RSS Feed. It's here: http://www.trumba.com/calendars/seattle-city-council.rss
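For illustration, a minimal standard-library sketch of pulling those events from the feed (the tag names assume ordinary RSS 2.0 items, so adjust if the feed differs):

```python
# Fetch the RSS feed and print each event; time and location details are
# typically embedded in the <description> HTML of each <item>.
import urllib.request
import xml.etree.ElementTree as ET

URL = "http://www.trumba.com/calendars/seattle-city-council.rss"

with urllib.request.urlopen(URL) as resp:
    root = ET.parse(resp).getroot()

for item in root.iter("item"):
    title = item.findtext("title", default="")
    description = item.findtext("description", default="")
    print(title)
    print(description[:200])
    print("-" * 40)
```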
Hope this helps.

Sitecore WFFM Form Report is not showing form data

I have installed WFFM on Sitecore 8.0 Update-3 and created a demo form with a few fields. After submitting the form I get a success message, and I did not find any errors in the log files either.
But when I check reports with the Form Reports button (Sitecore functionality), no data is shown.
I can see data in reporting database WFFM tables.
Does anyone know how we can show form data on Form Reports Page?
-Yogesh
Make sure you have run the WFFM_Analytics.sql script on your reporting database. It can be found under /Data/WFFM_Analytics.sql
Also check your error logs for an aggregation error. If you are getting that, you need to follow this post: http://sitecorefootsteps.blogspot.co.uk/2015/06/sitecore-8-wffm-data-aggregation-error.html
Make sure that you have included the Visitor Identification in the head. For MVC this would be @Html.Sitecore().VisitorIdentification().
Finally, remember that the data will not be written until the session ends, so it might be worth setting the session timeout to 2 minutes when testing. Then it won't take so long for the data to be stored.
EDIT
Are you using an IoC container in your project? I had an issue with SimpleInjector blocking the AJAX calls to the form reports data. Check your browser console for JavaScript errors, specifically calls to /api/sitecore/FormReports/GetFormFieldsStatistics returning error 500.
If you are getting those, check this post on a way to fix it with SimpleInjector - other IoC containers may have similar issues. http://www.sitecorenutsbolts.net/2015/07/27/Simple-Injector-and-WFFM-Controller-Injection-Woes/
-Richard
We recently faced the same issue. Data was being stored in the SQL database correctly, but the form reports were coming up blank. As @Richard pointed out, we saw 404 errors in the console on the reports page.
I was able to solve our issue by updating our custom 404 logic to ignore paths beginning with "/api".