Why does exporting Postman collections export deleted requests?

I am trying to separate my collections. To do so, I backed up all of my collections, then deleted all but one collection and exported that. The resulting JSON file still has ALL of my collections. I even started over after going to the Postman site and emptying the Trash. I deleted all but one collection, closed Postman, went to the Postman site, emptied the Trash, reopened Postman (it had only the one collection), exported, and again the JSON file has ALL of my collections.

Related

Where can I get the Postman collection data in Windows 10?

To clarify, I want to find where Postman saves collection files by default when online syncing is disabled.
I've looked in %LocalAppData%, My Documents, and Program Files, but I don't see where Postman saves its collection data.
C:\Users\your username\AppData\Roaming\Postman\IndexedDB\
You have to get the details from Roaming\Postman, not from LocalAppData.
Postman collection data can be found in several places in Windows 10. Here are a few possible options:
In the Postman application: If you have Postman installed on your Windows 10 machine, you can find your collection data by opening the application and going to the "Collections" tab in the left-side navigation panel.
In the Postman data folder: Postman stores its data, including collections, in a specific folder on your computer. On Windows 10, the default location for this folder is %APPDATA%\Postman. You can access this folder by opening the File Explorer and pasting the above path in the address bar.
In the Postman cloud: If you have a Postman account and have synced your collections to the cloud, you can access your collection data by logging into your Postman account at https://app.getpostman.com/.
Regardless of where you find your collection data, you can export it from Postman by clicking the "Export" button in the top right corner of the Collections tab. This will allow you to save the collection data as a JSON file that can be imported into Postman or other tools.

Places where a website stores its data

I have just started with Python web scraping using Requests. This could be a broad question; I will try to make it as brief as possible.
I have come across situations where an entire page source can be downloaded with r.content (where r is the response object of a requests get call).
Sometimes part of the data is stored in JSON format, in files that can be found by closely observing the GET and POST calls the page makes.
However, I have even found websites where the entire content is in the DOM, but part of it is neither in the page source nor in any JSON file.
I am wondering: how many such places can a website store data in?
(Just the names; I am not looking for how to get there.)
For this last type of website, I have observed almost every request call made, but couldn't find where the data is.
So, are there any other places besides the two mentioned above? Or are those the only two, indicating that I am simply not observing the request calls carefully enough?
You may answer in brief bullet points, and I can take my study from there.
Thanks in advance.
Let's assume we are talking only about HTML data. A web server could serve you data in many other formats (JSON/XML, etc.).
Please note that what I describe here is a generalisation, and like most generalisations, you can find exceptions that do not fit it.
Broadly, we can divide the type of data displayed (to the end user) into two categories:
Pre render
Post render
Pre render
The entire HTML page is constructed server-side and sent across to the client. Here, the JS side is concerned with user interaction, not with the structure of the data.
We are slowly moving away from this type of structure, but currently a large majority of web pages still use it.
Web scraping is relatively easy here, as we can programmatically pull the HTML page and not bother about the JavaScript code that accompanies it.
A combination of requests and beautifulsoup should work in almost all cases (assuming you can identify the general structure of the document).
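For the pre-render case, a minimal sketch with requests and beautifulsoup; the URL and the tag being selected are placeholders:

import requests
from bs4 import BeautifulSoup

# fetch the fully-rendered HTML that the server sends
response = requests.get("https://example.com/some-page")
response.raise_for_status()

# parse it and pull out whatever structure you identified
soup = BeautifulSoup(response.text, "html.parser")
for heading in soup.find_all("h2"):
    print(heading.get_text(strip=True))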
Post render
Here, the HTML page returned from the server is just a "skeleton", with placeholders for the actual data. The data is rendered by the accompanying JS code.
In such cases, if you fetch the source file via, e.g., requests, you will get an empty shell with no data in it.
If you inspect the calls a browser makes while rendering (Chrome's Network tab, Firefox's inspector, or the once-popular Firebug), you will most likely see AJAX requests that bring back the actual data from the server.
Depending on how those requests are made, you could hit that AJAX endpoint directly and get the data as JSON.
You could then use the response.json() method to extract it into Python dicts.
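For instance, a sketch of hitting such an endpoint directly; the endpoint URL and header are hypothetical stand-ins for whatever the browser's network tab shows:

import requests

# the AJAX endpoint spotted in the network tab (hypothetical)
response = requests.get(
    "https://example.com/api/items?page=1",
    headers={"X-Requested-With": "XMLHttpRequest"},
)
data = response.json()  # parsed straight into Python dicts/lists
print(data)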
In certain (rare) cases, there is no AJAX call, but the HTML served is still a shell. The actual data is part of the file served, stored inside the JS code itself. This can be done for a variety of reasons, for example to feed dynamic data into static JS files, or just to deter simple attempts at scraping the page.
One approach to scraping such pages is to 'render' the page in a headless browser, which executes the JS code and returns HTML that can then be parsed by parsers like beautifulsoup.
beautifulsoup can work with many parsers, one of which is html5lib, which parses markup the way a browser does; note, though, that no parser executes JS, so a parser alone will not fill in the shell.
You could also look at selenium or mechanize (see the sketch below).
Or you could try parsing the JS code yourself, which might be faster.
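As an illustration of the headless-browser route, a sketch using selenium (4.x) with headless Chrome; the URL and selector are placeholders:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # plain "--headless" on older Chrome
driver = webdriver.Chrome(options=options)
try:
    # the browser executes the JS and fills in the "shell"
    driver.get("https://example.com/js-rendered-page")
    soup = BeautifulSoup(driver.page_source, "html.parser")
    print(soup.select("div.data"))  # placeholder selector
finally:
    driver.quit()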
Deciding what to use requires careful inspection of how the page is rendered in a browser. Even if you don't see an AJAX request, the HTML the server sends need not be what the browser displays.
A good way to start is by looking at the bare HTML being served, either by downloading the page via curl or requests.get, or simply by rendering it in your browser with JavaScript disabled.
Good luck.

How to send a POST request via Excel to a RESTful web service without using XML?

Here's the deal: I have an Excel sheet that populates a MySQL table. I have already made a server-side procedure that receives the sheet, reads it, and puts it into the database. Sadly, the sheet and the data table don't have the same structure, so I need a PHP object/script on the server side to manipulate it. I have an interface to upload the Excel file so the PHP program can read it...
...but my boss's job isn't to make my life easier, is it? NO! He says it's a lot of work to upload every Excel file through the web interface. So he asked me to put a button in the sheet that he can click after his "job" is done. That would replace the web interface.
But the system itself is an interface that is meant to be sold one day (well, that's the plan!), so I can't just do away with the web interface.
WHAT I'M ASKING IS: Is there a way to send a file (the sheet itself) in a POST request straight from a VBA macro, without using XML, naming each piece of data that I'm sending, like a form post?
So far I've found some tutorials and even some SO posts that got me somewhere, but all of them were about XML, and I already have a method that receives an HTTP POST (from a form) and works; I'm aiming to reuse that same method. From my VBA script I'm already able to make the request (not a big deal) and post it. But the server-side script expects a POST coming from a form, so it looks fields up by name, and I don't seem to be able to do that from a VBA post. =/
Here's the answer... the first two functions/methods show how to send a file to a web service. You only need the file path and the service URL. It answered even more than I expected. :D
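For reference, the kind of request such a PHP endpoint expects, a multipart form POST with named fields, looks like this when expressed with Python's requests (the URL and field names are hypothetical); the VBA request has to put the same parts on the wire:

import requests

with open("report.xlsx", "rb") as f:
    response = requests.post(
        "https://example.com/upload.php",
        data={"user": "boss"},                # ordinary named form fields
        files={"sheet": ("report.xlsx", f)},  # the file, also sent under a field name
    )
print(response.status_code)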

Tracking number of downloads in Django

I have a website where I allow people to download files, such as PowerPoint templates that they can reuse.
I added a column called 'Downloads' to the model that stores the information and location of these files, and I'd like to use it as a counter to track how many times each file has been downloaded.
How would I go about this? Would it be a GET request on the page? Basically, I just provide a link to the file and the download starts automatically, so I'm not sure how to relay the fact that the link was clicked back to the model.
Many thanks :-)
If you create a view that handles the GET request, you can put the updating code in that view. If your Django app is not serving the files itself, you can create a view that simply redirects to the download link after updating the counter in the database.
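A minimal sketch of that redirect-and-count view; the model and field names are placeholders for whatever your 'Downloads' column lives on:

from django.db.models import F
from django.shortcuts import get_object_or_404, redirect

from .models import Template  # hypothetical model with `file` and `downloads` fields

def download(request, pk):
    item = get_object_or_404(Template, pk=pk)
    # F() makes the increment atomic at the database level
    Template.objects.filter(pk=pk).update(downloads=F("downloads") + 1)
    return redirect(item.file.url)

Point your download links at this view's URL instead of at the file itself, and the counter updates on every click.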
You have several ways to do this:
create a custom view, that "proxies" all files, while keeping track of downloads
create a middleware that does pretty much the same as above, filtering which requests to record
...but none of the above will be applicable if you want to count downloads of collected static files that are served directly by your HTTP server, without passing through Django. In that case, I'd retrieve the download count from the web server logs: check whether your web server allows storing logs to a database; otherwise, I'd create a cron-run script that parses the logfiles and stores the counts in a db, accessible from your Django application.
As redShadow said, you can create a proxy view. This view could serve the files via mod_xsendfile (if you are using Apache as your web server) and increment a counter for the downloads.
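A sketch of that variant; it assumes Apache with mod_xsendfile enabled and the same hypothetical model as above:

import os

from django.db.models import F
from django.http import HttpResponse
from django.shortcuts import get_object_or_404

from .models import Template  # hypothetical

def download(request, pk):
    item = get_object_or_404(Template, pk=pk)
    Template.objects.filter(pk=pk).update(downloads=F("downloads") + 1)
    response = HttpResponse()
    # Apache consumes this header and serves the file itself,
    # so Django never streams the bytes
    response["X-Sendfile"] = item.file.path
    response["Content-Disposition"] = (
        "attachment; filename=%s" % os.path.basename(item.file.path)
    )
    return response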

Need help setting up django-filetransfers

My setup is: Django 1.3/Python 2.7.2/Win Server 2008 R2/IIS 7.5/MS SQL Server 2008 R2. I am developing an application whose main function is to analyze uploaded files and produce a report.
Reading over the documentation for django-filetransfers, I believe this is a solution to a problem I've been trying to solve for a while (i.e. form-based file uploads completely block all Django responses until the file transfer finishes... horror for even moderate-sized files).
The documentation talks about piping uploads to S3 or Blobstore, and that might be what I end up doing eventually, but during development I thought maybe I could just set up my own "poor-man's S3" on a server that I control. This would basically just be another Django instance (or possibly a simple ASP.NET app) whose sole purpose is to receive uploaded files. This sounds like it should be possible with django-filetransfers and would solve the problem of Django responsiveness (???).
But I am missing some bits of understanding how this works in general, as well as some specifics. Maybe an example will help: let's say I have MyMainDjangoServer and MyFileUploadServer. MyMainDjangoServer will serve the views, including the upload form. MyFileUploadServer will "catch" the uploaded files. My questions/confusion are as follows:
My upload form will contain additional fields beyond just the file(s)...do I understand correctly that MyMainDjangoServer will somehow still get that form data, minus the file data (basically: request.POST), and the file data gets shunted over to MyFileUploadServer? How does this work? Will MyMainDjangoServer still block during the upload to MyFileUploadServer?
I assume that what I would need to do on MyFileUploadServer is have a view/URL that handles the form request and sucks out the request.FILES data. What else needs to happen? What happens to the rest of the form data?
How would I set up my settings.py for this scenario? The django-filetransfers examples seem to assume either S3 or GAE/Blobstore but maybe I am missing some basics.
Any advice/answers appreciated...this is a confusing and frustrating area of Django for me.
"MyMainDjangoServer will somehow still get that form data, minus the file data (basically: request.POST), and the file data gets shunted over to MyFileUploadServer? How does this work? Will MyMainDjangoServer still block during the upload to MyFileUploadServer?"
I know the GAE Blobstore (and presumably S3 as well) handles this by requiring you to give it a success_url. In your case, that would be the URL on MyMainDjangoServer to which the file-receiving view on MyFileUploadServer would re-post the non-file form data once the upload is complete.
Have a look at the create_upload_url method here: https://developers.google.com/appengine/docs/python/blobstore/functions
You need to recreate this functionality in some form (see below).
"How would I set up my settings.py for this scenario?"
You'd need to create your own filetransfers backend which would be a file with a prepare_upload function in it.
You can see the App Engine one here:
https://github.com/django-nonrel/djangoappengine/blob/develop/storage.py
The prepare_upload method just wraps the GAE create_upload_url method mentioned above.
So in your settings.py you'd have something like:
PREPARE_UPLOAD_BACKEND = 'myapp.filetransfers_backend.prepare_upload'
(i.e. the import path to your prepare_upload function)
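A minimal sketch of such a backend; the upload-server URL is hypothetical, and the exact contract (arguments and return value) should be double-checked against the filetransfers source, but the bundled backends follow this shape of taking the request plus the form's target URL and returning the URL to post to along with extra POST data:

# myapp/filetransfers_backend.py
UPLOAD_SERVER = "https://myfileuploadserver.example.com/receive/"  # hypothetical

def prepare_upload(request, url, private=False, **kwargs):
    # `url` is the success URL on MyMainDjangoServer; pass it along so
    # MyFileUploadServer knows where to re-post the non-file form data
    upload_url = "%s?success_url=%s" % (UPLOAD_SERVER, url)
    return upload_url, {}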
For the rest you can start with the ones provided by filetransfers already:
SERVE_FILE_BACKEND = 'filetransfers.backends.url.serve_file'
# if you need it:
PUBLIC_DOWNLOAD_URL_BACKEND = 'filetransfers.backends.url.public_download_url'
These rely on file_field.url being set (see the Django docs), and since your files will be on a separate server, you will probably need to look into writing a custom storage backend for Django too. (The S3 and GAE cases assume you're using the custom Django storage backends from here.)
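A bare-bones storage sketch for files living on MyFileUploadServer; the host URL is hypothetical, and _open/_save are left as stubs since they depend on how MyFileUploadServer exposes its files:

from django.core.files.storage import Storage

class UploadServerStorage(Storage):
    base_url = "https://myfileuploadserver.example.com/files/"  # hypothetical

    def url(self, name):
        # this is what makes file_field.url work for serve_file /
        # public_download_url above
        return self.base_url + name

    def exists(self, name):
        # pretend names never collide; adjust for real use
        return False

    def _open(self, name, mode="rb"):
        raise NotImplementedError("fetch the file from MyFileUploadServer")

    def _save(self, name, content):
        raise NotImplementedError("push the file to MyFileUploadServer")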