Access Postman Examples from tests

I am developing an API and have a Postman collection with extensive tests on the endpoints. I also use Examples to mock a good chunk of the API.
However, whenever a planned change affects a response, I have to edit the tests and the examples in almost the same way. In some cases it would make sense for a test to depend directly on the example response, for part of it or even the whole thing.
It would be much easier if those tests could reference the example value:
const expectedStatusCode = pm.examples[0].response.code; // ??? (no such API exists)
pm.test(`Status code is ${expectedStatusCode}`, () => {
    pm.response.to.have.status(expectedStatusCode);
});
The Postman API reference shows how to access a lot of things, but there seems to be no way either to access the Examples data or to read a file (otherwise I could parse the exported collection JSON and fetch the value directly).

I found confirmation that the requested feature does not exist, and that feature requests have already been filed with Postman for both accessing examples directly and reading external files.
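In the meantime, one workaround is to export the collection and extract the example data with a small script outside Postman, then feed the values back in (e.g. as environment or collection variables). This is a sketch under the assumption that the export uses Collection Format v2.1, where each saved example sits in the item's `response` array; the file name and request names below are made up:

```javascript
// Sketch: pull saved example status codes out of an exported Postman
// collection (Collection Format v2.1). Names and paths are hypothetical.
function exampleStatusCodes(collection) {
  const codes = {};
  for (const item of collection.item) {
    // each saved example carries its status code in "code"
    codes[item.name] = (item.response || []).map((ex) => ex.code);
  }
  return codes;
}

// In real use you would load the export instead:
// const collection = JSON.parse(require('fs').readFileSync('api.postman_collection.json', 'utf8'));
const collection = {
  item: [
    { name: 'Get user', response: [{ name: 'OK', code: 200, body: '{"id":1}' }] },
    { name: 'Delete user', response: [{ name: 'No content', code: 204, body: '' }] },
  ],
};

console.log(exampleStatusCodes(collection));
```

The extracted values could then be written into an environment file, so the tests read them with `pm.environment.get(...)` instead of duplicating them.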

Related

How to run an endpoint test with different variable values in Postman?

This would seem to be a simple question, but I am not sure how best to set it up.
I have a few test cases for the same endpoint.
I want to just pass different values for the various {{variables}}.
I know I can use pm.globals.set('..') and other ways to modify the environment during testing, but I don't want to essentially code up my tests in JS, or use Newman. I also want to be able to share the tests easily.
I'm assuming there must be somewhere in the UI (maybe the test runner?) to say: run the same test and endpoint, but swap out these values, and expect different results. E.g.
/login
userId = “{{returningUser}}” => expect success
userId = “{{bannedUserId}}” => expect fail
userId = “{{unknownId}}” => expect fail
etc
Maybe I could script that up, but then I'd also have to use code to "call" the API to re-load the request. It seems like just writing Jest tests in a clunky UI at that point.
I see two possibilities for testing the same endpoint with different data:
Either you use the collection runner with a data file, or
you use the newman command-line runner with different environment files.
For sharing tests, you can either use Postman's integrated cloud features (I've never tried that), or export collections and environments and put them into a (git) repository. We're doing the latter. Yes, it's a bit cumbersome, but it works.
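To make both options concrete (all file names here are made up): the collection runner takes a data file whose column names match the {{variables}} in the request, and newman takes the same data file, or exported environment files, on the command line:

```
# users.csv -- column names match the {{userId}} variable in the request
userId,expected
returningUser123,success
bannedUser456,fail
unknownUser789,fail

# newman, once the collection and environments are exported:
newman run Login.postman_collection.json -d users.csv
newman run Login.postman_collection.json -e staging.postman_environment.json
```

Inside the test script, the current row is also available via `pm.iterationData.get('expected')`, so a single script can assert a different outcome per row.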

Create a simple HTTP request from Chromium code

I want to collect some statistics about my Chromium build, and for that purpose I want to send some very simple data (for example, just a string) with a simple GET or POST request, for example during browser startup in the Launch browser function (link to code) and in other similar places.
There is a way to write this in bare C++ with sockets and bytes, but to be honest I think that's a bad approach.
I tried to read the documentation, but without examples it's quite difficult to write code. Looking for an example, I found some usages in tests (link to the code), but it's still not easy to extract the necessary part for a very simple purpose: just sending "hello world" to a server.
Can anyone help with a simple example of how to use the Chromium API to make this simple request?
How to initialize QuickStreamRequest (or I need other Request)?
Where to get QuickStreamFactory?
Which parameters to put into Request method?
How to put request data ("hello world") into request?
A very simple example would be cool :)
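A sketch of what this usually looks like in the browser process: the classes guessed at in the question ("QuickStreamRequest"/"QuickStreamFactory") do not exist; the usual helper for one-off requests is network::SimpleURLLoader. This only compiles inside the Chromium tree, the exact headers and factory plumbing shift between Chromium versions, and the endpoint URL and annotation text below are placeholders:

```cpp
// Sketch only: builds only inside the Chromium tree; URL and annotation
// contents are made up.
#include "base/functional/bind.h"
#include "content/public/browser/browser_context.h"
#include "content/public/browser/storage_partition.h"
#include "net/traffic_annotation/network_traffic_annotation.h"
#include "services/network/public/cpp/resource_request.h"
#include "services/network/public/cpp/simple_url_loader.h"
#include "url/gurl.h"

void SendHelloWorld(content::BrowserContext* context) {
  auto request = std::make_unique<network::ResourceRequest>();
  request->url = GURL("https://example.com/stats");  // hypothetical endpoint
  request->method = "POST";

  // Every Chromium network request needs a traffic annotation describing
  // why it is made; the proto text is elided here.
  net::NetworkTrafficAnnotationTag annotation =
      net::DefineNetworkTrafficAnnotation("my_stats_upload", "...");

  auto loader =
      network::SimpleURLLoader::Create(std::move(request), annotation);

  // This is how the request data ("hello world") goes into the request.
  loader->AttachStringForUpload("hello world", "text/plain");

  // The URLLoaderFactory comes from the profile's storage partition.
  auto factory = context->GetDefaultStoragePartition()
                     ->GetURLLoaderFactoryForBrowserProcess();

  // Keep the loader alive until the response arrives by moving it into
  // the callback's bound arguments.
  auto* raw_loader = loader.get();
  raw_loader->DownloadToString(
      factory.get(),
      base::BindOnce(
          [](std::unique_ptr<network::SimpleURLLoader> /*loader*/,
             std::unique_ptr<std::string> body) {
            // body is the response, or null on error.
          },
          std::move(loader)),
      1024 * 1024 /* max body size */);
}
```

For a GET, drop the AttachStringForUpload call and put the data in the URL's query string instead.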

Should I test the whole return of a function or only a sample?

I want to test the return of a script that queries a list of URLs (more than 1000), extracts some data from each response, and returns it as an array of objects (dicts) with certain attributes.
Is it safe to test only a sample from the returned list?
My concern is mainly that exhaustive testing would be time consuming.
P.S. I am hoping that random sampling would help catch errors, since the response bodies of the URLs my script queries may be inconsistent.
I understand your question to mean that you actually access the URLs in the list. In unit testing, you would normally take a different approach (though not in integration testing; see the bottom of my answer). You would not actually access those URLs, but instead find some way to "simulate" the URL access. As part of this simulated URL access, your tests can also define what the responses look like.
This way, you can test all aspects of your code that handles the responses. You can simulate all kinds of valid, but also, as you mention, inconsistent responses, because you have full control from the tests.
There are several ways to let your tests "simulate" that URL access. One option is to separate, within your code, the part that does the URL access from the part that processes the response. In pseudo-code:
response = accessUrl(url);
handleResponse(response);
Then, in unit testing you would focus on testing the function handleResponse, and test the rest of the code in integration testing.
A second option is to mock the function/method that performs the URL access. This makes sense if it is difficult to change the code to achieve the separation shown in the pseudo-code. There is a lot of information about mocking available on the web.
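For illustration, a minimal JavaScript sketch of the first option (all names are made up): the tests call handleResponse directly with fabricated responses, valid or inconsistent, and no URL is ever accessed:

```javascript
// Sketch: URL access and response handling are separated, so the handler
// can be unit-tested with fabricated responses. All names are made up.

// Production code would do: const response = await accessUrl(url);
function handleResponse(response) {
  // Tolerate inconsistent bodies: fields of the wrong type become null.
  const body = response.body || {};
  return {
    title: typeof body.title === 'string' ? body.title : null,
    price: typeof body.price === 'number' ? body.price : null,
  };
}

// The tests fully control the "response"; no network involved.
const ok = handleResponse({ body: { title: 'Widget', price: 9.99 } });
const broken = handleResponse({ body: { title: 42 } }); // inconsistent body
```

Because the fabricated responses live in the tests, every inconsistency you know about can be covered deterministically, instead of hoping a random sample hits it.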
In any case, this way of testing allows you to test the functionality of your code more systematically. You can test all scenarios you are aware of and will be sure that these were really covered because you have full control.
The testing approach you have described is more on the level of integration testing, and it also makes sense after you have fully unit-tested your code: after all, you may still have missed some real-world scenarios that your code should handle.

How to unit test a .NET Core API controller with injected dependencies which themselves have dependencies, using Moq or fakes?

I implemented an API controller that will be used by part of another system, rather than by users directly. However, I want to provide a unit test for it. I started looking at Moq, and then I realized my particular case is a little more complex. The code works, but, as I said, I'm trying to write a test for it without (ideally) writing any data to the DB.
The structure of the classes look like this
api controller
|__MyCustomClass (injected via startup along with configuration)
|__UtilityClass (method: ImportSomeDataFromaFolder)
|__MydataRepositoryClass
|__CustomDerivedDbContext
(overrides SaveChanges etc. so as to capture EF errors)
Note:
- The return value of the API method is a complex JSON object.
- I'd like to have a test that avoids actually writing to the DB.
- I am creating a custom DbContext (CustomDerivedDbContext) and overriding SaveChanges, so as to capture the EF entities that change in a list, e.g. List<EntityEntry>.
- The method ImportSomeDataFromaFolder, after parsing the data into POCO objects and sending them to the repository for persisting to the DB, moves the file to a different folder. When testing, I'd rather this didn't happen; it should instead just load a file to parse.
There are 3 primary things to test:
(1) Does the data in the file get loaded into POCO objects
(2) Do the POCO objects get translated correctly to EF model entities
(3) Does the api return a JSON object that contains the expected results
Or am I making things more complicated than they should be for a unit test? I want to write a test against the API controller, but it's the CustomDerivedDbContext that I seem to want to fake here, since I could then remove the step that actually calls the underlying DbContext SaveChanges.
It sounds like you have tight coupling to implementation concerns, which makes unit testing your subject in isolation difficult.
That should be seen as a code smell and an indication that the current design choices need to be reviewed and refactored where possible.
If unit testing the API controller, then ideally all you should need is a mock of the explicitly injected abstractions.
The API controller need not know anything about the dependencies of its dependencies if proper separation of concerns is followed.
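As a hedged C# sketch (every type and member name except ImportSomeDataFromaFolder is hypothetical): if the controller receives only an interface, the test substitutes a fake or a Moq mock and never reaches UtilityClass, the repository, or the DbContext:

```csharp
// Sketch: the controller depends only on an abstraction, so the unit
// test never touches the DbContext chain behind it.
public interface IImportService
{
    ImportResult ImportSomeDataFromaFolder(string folder);
}

public class ImportController : ControllerBase
{
    private readonly IImportService _service;
    public ImportController(IImportService service) => _service = service;

    [HttpPost("import")]
    public IActionResult Import(string folder) =>
        Ok(_service.ImportSomeDataFromaFolder(folder));
}

// Test, using Moq:
// var mock = new Mock<IImportService>();
// mock.Setup(s => s.ImportSomeDataFromaFolder("in")).Returns(expected);
// var result = new ImportController(mock.Object).Import("in");
// Assert.IsType<OkObjectResult>(result);
```

The three things you list would then be three separate unit tests: file parsing against UtilityClass, POCO-to-entity translation against the repository (with the derived DbContext capturing changes), and JSON shape against the controller with a mocked service.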

Testing third-party APIs

I would like to test a third-party API such as forecast.io, but I am not quite sure how to accomplish what I want to achieve.
I have read all over the internet that I should use mock objects. However, mock objects are not what I need, since I do not want to test my parsing implementation; I want to test the network call itself.
I want to test, for example, whether the URL is still working, whether my API key is still valid, and whether the response is still in the expected format so that GSON does not crash, and other things directly related to the network call itself.
Is there any good way to do this?
TLDR; I don't need mock objects!
I am going to answer this question as generally as it was asked. I understand the OP wants to avoid testing each type of response, but if you are relying on this data for continuity of users and/or revenue, you may consider creating an API caller so you can look at each part of the API response(s) as well as test the URL, the API key, etc. I would use an OO language, but I'm sure there are other ways.
In general:
- create a process/library/software that can call the API
- serialize/store the data you are expecting (GSON in the OP's case)
- unit test it with xUnit or NUnit
- automate it to run every x time period and generate an email with a success/change/fail message
This is no small project if you are counting on it; you need to tighten all the screws and bolts. Each step deserves its own question (and may already have one).
[I can add some sample code in C# here if that will help]
How to automate this to run and email you is a completely different question, but hopefully this gives you an idea of how an object-oriented library can help you test every piece of data your own software plans to use. Not every API host will notify you in a timely manner when changes take place, and you may even know before they do when something breaks.
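The "serialize/store the data you are expecting, then unit test it" steps can be sketched as follows. The shape-checking is generic; the field names are made up for illustration, not forecast.io's real schema, and in real use `good`/`drifted` would be live responses fetched on a schedule:

```javascript
// Sketch: assert that an API response still matches the shape your
// deserializer (e.g. GSON) expects. Expected shape and samples are made up.
function checkShape(expected, actual) {
  const errors = [];
  for (const [key, type] of Object.entries(expected)) {
    if (!(key in actual)) {
      errors.push(`missing field: ${key}`);
    } else if (typeof actual[key] !== type) {
      errors.push(`wrong type at ${key}: expected ${type}, got ${typeof actual[key]}`);
    }
  }
  return errors; // empty array means the contract still holds
}

const expectedShape = { latitude: 'number', longitude: 'number', timezone: 'string' };

// A response captured earlier and stored alongside the tests:
const good = { latitude: 52.5, longitude: 13.4, timezone: 'Europe/Berlin' };
// What a silent API change might look like:
const drifted = { latitude: '52.5', longitude: 13.4 };

console.log(checkShape(expectedShape, good));
console.log(checkShape(expectedShape, drifted));
```

A scheduled run that fetches the live endpoint, pipes the body through a check like this, and emails on a non-empty error list covers the URL, the API key (an auth failure produces a very different body), and the format in one pass.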