In Postman, once you have created a request or collection and have fine-tuned it, is there any way to lock it and make it read-only so that it can't be accidentally altered?
Obviously I would need something to toggle it back to editable again!
Thanks in advance
I don't think there is an option to make a collection read-only for the admin (the creator of the collection). A few ways of avoiding unnecessary changes are:
If you can edit the rights of other users within the workspace, make it view-only for selected users inside the workspace.
Create a fork of the collection so that you can revert back.
Create a copy of the collection.
Download the collection as a JSON file.
Personally, I prefer downloading the collection as JSON, as this keeps the workspace clean and tidy (a scripted way to do this is sketched at the end of this answer).
Otherwise:
I prefer creating an in-progress workspace and a final workspace, sharing each completed collection to the final workspace, and deleting it from the in-progress workspace every time I finish something.
If changes are required, I work in the in-progress workspace on a copy.
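If you want to script the JSON backup rather than using the Export button, one option is the Postman API. A minimal sketch, assuming you have generated a Postman API key and looked up the collection's UID (both placeholders below are illustrative):

curl -H "X-Api-Key: $POSTMAN_API_KEY" https://api.getpostman.com/collections/YOUR_COLLECTION_UID -o collection-backup.json

The response contains the collection as JSON, which you can keep under version control and re-import later if an accidental edit slips through.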
I have a general understanding question. I am building a Flutter app that relies on a content library containing text files, LaTeX equations, images, PDFs, videos, etc.
The content lives on an AWS Amplify backend. Depending on the user's navigation in the app, the corresponding data is fetched and displayed.
I am not sure about the correct way of fetching the data. The current method (which works) is that the data is stored in an S3 bucket. When data is requested, the data is downloaded to a temporary directory and then opened and processed in the app. This is actually not slow, but I feel that it is not the way it should be done.
When data is downloaded, a file transfer notification pops up, which bothers me because it is shown all the time. I would also like to read the data directly with something like a GET request, without downloading the file first (especially for text files, which I would like to read directly into a String). But here I don't know how it works, because I don't see a way to store data in a file system with the other Amplify services like DataStore or the REST API. Also, an S3 bucket is an intuitive way of storing data that is easy for the content creators of my company to use, so to me S3 seems like the way to go. However, with S3 I have only figured out the download method for fetching data.
Could someone give me a hint on what is the correct approach for this use case? Thank you very much!
This may be a basic question, but I cannot figure out the answer. I have a simple Postman collection that is run through newman:
newman run testPostman.json -r htmlextra
That generates a nice dynamic HTML report of the test run.
How can I then share that with someone else, e.g. via email? The HTML report is created at a local URL, and I can't figure out how to save it so that it stays in its dynamic state. Right-clicking and choosing Save As .html saves the file, but you lose the ability to click around in it.
I realize that I can change the export path so it saves to some shared drive somewhere, but aside from that is there any other way?
It has already been saved to newman/ in the current working directory; there is no need to "Save As" one more time. You can zip it and send it via email.
If you want to change the location of the generated report, you can set the reporter's export path.
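For reference, newman reporters accept a --reporter-<name>-export option for the output path, so with htmlextra something like this should work (the path here is illustrative):

newman run testPostman.json -r htmlextra --reporter-htmlextra-export ./reports/report.html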
I have multiple collections in the same workspace in Postman. Unfortunately, things are starting to get messy: too many collections from different projects in the same place.
How can I move some Collections/APIs to new Workspaces?
Share a collection
Under To Workspace, select yours, and make sure you check "Share collection and remove from this workspace".
Voila!
Note: Worked on version 7.34.0.
Just export the collection and import it from another workspace.
There are several ways to do this; some of them are as follows.
There is a workaround. Tested with Postman v6.1.4
Export your collection. Switch workspaces. (Re-)import the saved .postman_collection.json file. Postman will even ask whether you want to copy the collection (into the new workspace) or overwrite it (meaning move it to the new workspace).
OR
Delete the needed collections from workspace A > sync > go to trash > and restore the collections to workspace B.
Hopefully, the trash feature is enabled by default.
OR
Sharing collections in another workspace
In the workspaces dashboard, select a collection and then click the Share button. The collection will then be visible in your target workspace.
I think the simplest and fastest way to do this is to fork your collection and select the new workspace you want to fork it into.
If you "move" your collection by sharing it, then once you delete the collection from one workspace it is also deleted from the other.
Moving a collection by deleting and restoring, or by exporting and importing, forces you to leave the app.
So just forking it is fast and easy.
In Postman v9.10.0
Click the three dots on the right of the collection name and select the action Move.
In the next screen select the workspace you want to move your collection to and click the Move Collection button.
That's it.
One way that was quick and easy was using the browser (instead of the app).
First, go to Postman in the browser and sign in. Next, create a new workspace, Test-1. It also gives you the option to choose the type of workspace: Personal or Team.
After creation, you now have at least two workspaces: the default workspace, My Workspace, and Test-1. Postman creates the new workspace and puts you inside it. Using the Collections or Environments tab, click Add Collections/Environments to this Workspace. You are then asked to pick a source workspace, choose all or as many collections and environments as needed, and finally Add to this workspace.
Whichever collections/environments you selected will now be duplicated to the new workspace.
Currently, there's no option to delete a collection or environment from a source workspace while copying it over. You will have to manually navigate inside a workspace to delete duplicates.
I am trying to load my data using a separate query to the server after the records get dirty in the store. The updated values are sent to the server and relevant actions are performed using a custom ajax call and handled at the server side to update all the related records. But when the data is loaded again I get the above mentioned error.
The possible reason could be that, since the records in the store are dirty and I am trying to load the data again without committing the store, it gives me the error. So, I tried doing an "Application.defaultTransaction.rollback()". It removes those records from the updated bucket, but the "key" in the updated bucket (the object type) still exists and I still get the error. Can anyone help me with this?
In short: is there a way to force-clean the store, or move all the objects in the created/updated/inflight buckets to the clean bucket?
Application.store.get('defaultTransaction').rollback() will remove any dirty objects in the store and take it to the initial state.
There is an open issue for store.rollback() too, which might be an alternative once merged to master:
https://github.com/emberjs/data/pull/350#issuecomment-9578563
I'm developing a Django project where I need to serve temporary images, which are generated online. The sessions should be anonymous; anyone should be able to use the service. The images should be destroyed when the session expires or closes.
I don't know, however, what the best approach is. For instance, I could use file-based sessions and have the images generated in the session folder, where they would (or at least should) be destroyed with the session. I suppose I could do something similar with database sessions, maybe saving the images in the database or just removing them when the session ends; however, the file-based solution sounds more reliable to me.
Is it a good solution, or are there more solid alternatives?
I'd name the temporary images based on a hash of the session key and then create a management command that:
makes a list containing potential temp filename hashes for all the current sessions.
grabs a list of all the current filenames in your temporary directory
deletes filenames which don't have a matching entry in the hash list
Since there's no failsafe way to know whether a session has "closed", you should run the session cleanup management command first: either before this one, or implicitly as part of this new command by using the call_command() function. A minimal sketch follows.
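This sketch assumes database-backed sessions and a hypothetical TEMP_IMAGE_DIR setting pointing at the temporary image directory; the SHA-1 filename scheme is also an assumption:

import hashlib
import os

from django.conf import settings
from django.contrib.sessions.models import Session
from django.core.management import call_command
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Delete temporary images whose session no longer exists."

    def handle(self, *args, **options):
        # Purge expired sessions first ("cleanup" in older Django versions,
        # "clearsessions" in newer ones) so only live sessions remain.
        call_command("clearsessions")

        # Potential temp filename hashes for all current sessions.
        valid_hashes = {
            hashlib.sha1(key.encode("utf-8")).hexdigest()
            for key in Session.objects.values_list("session_key", flat=True)
        }

        # Delete any file in the temp directory without a matching hash.
        temp_dir = settings.TEMP_IMAGE_DIR  # hypothetical setting
        for filename in os.listdir(temp_dir):
            name, _ext = os.path.splitext(filename)
            if name not in valid_hashes:
                os.remove(os.path.join(temp_dir, filename))
                self.stdout.write("Deleted %s" % filename)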