I am working on a Postman collection. Say I have two separate Postman environments, each having URL variables, let's say domain1 & domain2. In the initial script in the Pre-request Script tab I want to get a list of all the environments available so I can switch between them when I need to. How do I get the list of environments?
Thanks,
Thanks, Christian Bauman. I was able to accomplish this by doing the following
in the Postman Pre-request Script tab. The response will contain an environments array of objects with id, name, owner, and uid properties. You can then call the API by id to get further details of an environment.
let options = {
    method: 'GET',
    url: 'https://api.getpostman.com/environments',
    header: {
        'x-api-key': 'PMAK-your own key goes here'
    },
    json: true
};
let envs = [];
pm.sendRequest(options, function (err, response) {
    if (!err) {
        let data = response.json();
        _.forEach(data.environments, function (item) {
            envs.push(item);
        });
        console.log(envs);
    } else {
        console.log(err);
    }
});
It is not possible to select an environment from scripts. The closest one can get is to read the name of the currently active environment via pm.environment.name.
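For example, a pre-request script can at least branch on the active environment (a minimal sketch; the environment names domain1 and domain2 are the ones from the question):

if (pm.environment.name === 'domain1') {
    // running against domain1; adjust behaviour as needed
    console.log('active environment:', pm.environment.name);
}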
I am trying to run a few automated tests using the Postman tool. For regular scenarios, I understand how to write pre-request and test scripts. What I do not know (and am trying to understand) is how to write scripts for checking a 409 error (let us call it a duplicate resource check).
I want to run a create-resource API like the one below, then run it again and ensure that the 2nd invocation really returns a 409 error.
POST /myservice/books
Is there a way to run the same API twice and check the return value of the 2nd invocation? If yes, how do I do that? One crude way of achieving this could be to create a dependency between two tests, where the first one creates a resource, and the second one uses the same payload once again to create the same resource. I am looking for a single test to do end-to-end testing.
Postman doesn't really provide a standard way, but it is still flexible. I realized that we have to write JavaScript code in the Pre-request Script tab to make our own HTTP request (using the pm.sendRequest method) and store the resulting data in environment variables for use by the main API call.
Here is a sample:
var phone = pm.variables.replaceIn("{{$randomPhoneNumber}}");
console.log("phone:", phone)
var baseURL = pm.variables.replaceIn("{{ROG_SERVER}}:{{ROG_PORT}}{{ROG_BASE_URL}}")
var usersURL = pm.variables.replaceIn("{{ROG_SERVICE}}/users")
var otpURL = `${baseURL}/${phone}/_otp_x`
// Payload for partner creation
const payload = {
    "name": pm.variables.replaceIn("{{username}}"),
    "phone": phone,
    "password": pm.variables.replaceIn("{{$randomPassword}}"),
}
console.log("user payload:", payload)
function getOTP(a, callback) {
    // Get an OTP
    pm.sendRequest(otpURL, function (err, response) {
        if (err) throw err
        var jsonData = response.json()
        pm.expect(jsonData).to.haveOwnProperty('otp')
        pm.environment.set("otp", jsonData.otp)
        pm.environment.set("phone", phone);
        // resolve the dynamic variable before storing it
        pm.environment.set("username", pm.variables.replaceIn("{{$randomUserName}}"))
        if (callback) callback(jsonData.otp)
    })
}
// Get an OTP
getOTP("a", otp => {
    console.log("OTP received:", otp)
    payload.partnerRef = pm.variables.replaceIn("{{$randomPassword}}")
    payload.otp = otp
    // Create a partner user with the OTP.
    let reqOpts = {
        url: usersURL,
        method: 'POST',
        header: { 'Content-Type': 'application/json' },
        // sendRequest expects a RequestBody definition, not a bare string
        body: { mode: 'raw', raw: JSON.stringify(payload) }
    }
    pm.sendRequest(reqOpts, (err, response) => {
        console.log("response?", response)
        pm.expect(response).to.have.property('code', 201)
    })
    // Get a new OTP for the main request to be executed.
    getOTP()
})
I did it in my test block. Create your normal request as you would send it; then, in your tests, validate that the original works, and then you can send the second command and validate the response.
You can also use pre- and post-scripting to do something similar, or have one test after the other in the file (they run sequentially) to do the same testing.
For instance, I sent an API call here to create records. As I need the Key_ to delete them, I can make a call to GET /foo at my API:
pm.test("Response should be 200", function () {
pm.response.to.be.ok;
pm.response.to.have.status(200);
});
pm.test("Parse Key_ values and send DELETE from original request response", function () {
var jsonData = JSON.parse(responseBody);
jsonData.forEach(function (TimeEntryRecord) {
console.log(TimeEntryRecord.Key_);
const DeleteURL = pm.variables.get('APIHost') + '/bar/' + TimeEntryRecord.Key_;
pm.sendRequest({
url: DeleteURL,
method: 'DELETE',
header: { 'Content-Type': 'application/json' },
body: { TimeEntryRecord }
}, function (err, res) {
console.log("Sent Delete: " + DeleteURL );
});
});
});
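The same pattern covers the 409 scenario from the original question: in the Tests tab of the create request, re-send the identical payload and assert the duplicate error. A minimal sketch, assuming a {{baseUrl}} variable and a raw JSON request body (both assumptions, not from the original code):

pm.sendRequest({
    url: pm.variables.replaceIn("{{baseUrl}}") + "/myservice/books",
    method: "POST",
    header: { "Content-Type": "application/json" },
    // re-use the body of the request that just ran
    body: { mode: "raw", raw: pm.request.body.raw }
}, function (err, res) {
    pm.test("Second create returns 409", function () {
        pm.expect(err).to.be.null;
        pm.expect(res).to.have.property("code", 409);
    });
});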
I'm working with Next.js server-side rendering and AWS Amplify to get data. However, I've hit a roadblock: I'm getting an error saying that there's no current user.
My question is: why does the app need to have a user if the data is supposed to be readable by the public?
What I'm trying to do is show data to the public when they go to a user's profile page. They don't have to be signed in to the app.
My current folder structure is:
/pages/[user]/index.js with getStaticProps and getStaticPaths:
// imports assumed; adjust the queries path to your project layout
import { withSSRContext } from 'aws-amplify';
import { listUsers, postsByUsername } from '../../graphql/queries';

export async function getStaticPaths() {
  const SSR = withSSRContext();
  const { data } = await SSR.API.graphql({ query: listUsers });
  const paths = data.listUsers.items.map((user) => ({
    params: { user: user.username },
  }));
  return {
    fallback: true,
    paths,
  };
}

export async function getStaticProps({ params }) {
  const SSR = withSSRContext();
  const { data } = await SSR.API.graphql({
    query: postsByUsername,
    variables: {
      username: params.user, // the route param is [user], not [username]
    },
  });
  return {
    props: {
      posts: data.postsByUsername.items,
    },
  };
}
Finally figured it out. A lot of tutorials use the authMode: 'AMAZON_COGNITO_USER_POOLS' (or 'AWS_IAM') parameter in their GraphQL query, for example in https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/
// Creating a post is restricted to IAM
const createdTodo = await API.graphql({
  query: queries.createTodo,
  variables: { input: todoDetails },
  authMode: 'AWS_IAM'
});
But you rarely come across people who use authMode: 'API_KEY'.
So I guess, if you want the public to read without authentication, you would just need to set authMode: 'API_KEY'...
Make sure you configure your Amplify API to have a public API key as well.
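Applied to the getStaticProps from the question, that would look roughly like this (a sketch; the query and variables are the ones from the question):

const { data } = await SSR.API.graphql({
  query: postsByUsername,
  variables: { username: params.user },
  authMode: 'API_KEY', // public, unauthenticated read
});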
I'm using Newman to run API tests after the build in Travis.
I'm trying to limit the duplication of pre-request scripts, so I checked out some workarounds for having pre-request scripts at the collection level.
My problem is that I don't want to run them on every request, only on the ones where I need them.
Example: I'm trying to run a login script so I can use the returned token on private endpoints.
My code looks like:
Collection-level pre-request script definition:
Object.prototype.login = function () {
    const request = {
        url: 'somthing',
        method: 'GET',
        header: 'Content-Type:application/json',
        body: {
            mode: 'application/json',
            raw: JSON.stringify({
                email: pm.environment.get('someenv'),
                password: pm.environment.get('someenv')
            })
        }
    };
    pm.sendRequest(request, function (err, res) {
        var response = res.json();
        pm.environment.set("token", response.token);
    });
}
Request-level pre-request script definition:
_.login();
Can someone help me out with why I can't run pm.sendRequest in this scope?
pm.environment.get('someenv') works like a charm, so I'm not sure what to do here.
It runs fine when called from the collection-level pre-request script without using the Object, but if I just put the whole request there, it will run before every request, which is what I want to avoid in the first place.
I have tried to log some stuff out using console.log(), but it seems that the callback in pm.sendRequest() never runs.
So I have found a workaround for the issue; I hope it's going to help someone out in the future :)
It's easy to set up a collection-level pre-request script that runs before every single request.
But to optimize this a little, because you don't need to run every script for every request you make in a collection, you can use my solution here. :)
The issue, I think, is caused by this: the pm object used in a different scope is not going to affect the pm object in the global scope, so you should first pass the global pm object as a parameter to the function call.
The collection-level pre-request script should look like this:
login = function (pm) {
    const request = {
        url: pm.environment.get('base_url') + '/login',
        method: 'POST',
        header: {
            'Content-Type': 'application/json',
        },
        body: {
            mode: 'application/json',
            raw: JSON.stringify({
                email: pm.environment.get('email'),
                password: pm.environment.get('passwd')
            })
        }
    };
    pm.sendRequest(request, (err, res) => {
        var response = res.json();
        pm.expect(err).to.be.a('null');
        pm.expect(response).to.have.property('token')
            .and.to.not.be.empty;
        pm.globals.set("token", response.token);
    });
};
And for the exact request where you want to authenticate first and use the token in the request call:
login(pm);
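The stored token can then be referenced in the private request itself, for example in an Authorization header (assuming a Bearer scheme; adjust to whatever your API expects):

Authorization: Bearer {{token}}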
I have a fairly simple Node app using AWS AppSync. I am able to run queries and mutations successfully, but I've recently found that if I run a query twice I get the same response, even when I know that the back-end data has changed. In this particular case the query is backed by a Lambda, and in digging into it I've discovered that the query doesn't seem to be sent out on the network, because the Lambda is not triggered each time the query runs, just the first time. If I use the console to simulate my query then everything runs fine. If I restart my app then the first time a query runs it works fine, but successive queries again just return the same value each time.
Here are some parts of my code:
client.query({
  query: gql`
    query GetAbc($cId: String!) {
      getAbc(cId: $cId) {
        id
        name
        cs
      }
    }`,
  options: {
    fetchPolicy: 'no-cache'
  },
  variables: {
    cid: event.cid
  }
})
.then((data) => {
  // same data every time
})
Edit: trying other fetch policies like network-only makes no visible difference.
Here is how I set up the client, not super clean but it seems to work:
const makeAWSAppSyncClient = (credentials) => {
  return Promise.resolve(
    new AWSAppSyncClient({
      url: 'lalala',
      region: 'us-west-2',
      auth: {
        type: 'AWS_IAM',
        credentials: () => {
          return credentials
        }
      },
      disableOffline: true
    })
  )
}
getRemoteCredentials()
  .then((credentials) => {
    return makeAWSAppSyncClient(credentials)
  })
  .then((client) => {
    return client.hydrated()
  })
  .then((client) => {
    // client is good to use
  })
getRemoteCredentials is a method to turn an IoT authentication into normal IAM credentials which can be used with other AWS SDKs. This is working (because I wouldn't get as far as I do otherwise).
My issue seems very similar to this one GraphQL Query Runs Sucessfully One Time and Fails To Run Again using Apollo and AWS AppSync; I'm running in a node environment (rather than react) but it is essentially the same issue.
I don't think this is relevant but for completeness I should mention I have tried both with and without the setup code from the docs. This appears to make no difference (except annoying logging, see below) but here it is:
global.WebSocket = require('ws')
global.window = global.window || {
  setTimeout: setTimeout,
  clearTimeout: clearTimeout,
  WebSocket: global.WebSocket,
  ArrayBuffer: global.ArrayBuffer,
  addEventListener: function () { },
  navigator: { onLine: true }
}
global.localStorage = {
  store: {},
  getItem: function (key) {
    return this.store[key]
  },
  setItem: function (key, value) {
    this.store[key] = value
  },
  removeItem: function (key) {
    delete this.store[key]
  }
};
require('es6-promise').polyfill()
require('isomorphic-fetch')
This is taken from: https://docs.aws.amazon.com/appsync/latest/devguide/building-a-client-app-javascript.html
With this code, and without disableOffline: true in the client setup, I see this line spewed continuously on the console:
redux-persist asyncLocalStorage requires a global localStorage object.
Either use a different storage backend or if this is a universal redux
application you probably should conditionally persist like so:
https://gist.github.com/rt2zz/ac9eb396793f95ff3c3b
This makes no apparent difference to this issue however.
Update: my dependencies from package.json, I have upgraded these during testing so my yarn.lock contains more recent revisions than listed here. Nevertheless: https://gist.github.com/macbutch/a319a2a7059adc3f68b9f9627598a8ca
Update #2: I have also confirmed from CloudWatch logs that the query is only being run once; I have a mutation running regularly on a timer that is successfully invoked and visible in CloudWatch. That is working as I'd expect but the query is not.
Update #3: I have debugged into the AppSync/Apollo code and can see that my fetchPolicy is being changed to 'cache-first' in this code in apollo-client/core/QueryManager.js (comments mine):
QueryManager.prototype.fetchQuery = function (queryId, options, fetchType, fetchMoreForQueryId) {
    var _this = this;
    // Next line changes options.fetchPolicy to 'cache-first'
    var _a = options.variables, variables = _a === void 0 ? {} : _a, _b = options.metadata, metadata = _b === void 0 ? null : _b, _c = options.fetchPolicy, fetchPolicy = _c === void 0 ? 'cache-first' : _c;
    var cache = this.dataStore.getCache();
    var query = cache.transformDocument(options.query);
    var storeResult;
    var needToFetch = fetchPolicy === 'network-only' || fetchPolicy === 'no-cache';
    // needToFetch is false (because fetchPolicy is 'cache-first')
    if (fetchType !== FetchType.refetch &&
        fetchPolicy !== 'network-only' &&
        fetchPolicy !== 'no-cache') {
        // so we come through this branch
        var _d = this.dataStore.getCache().diff({
            query: query,
            variables: variables,
            returnPartialData: true,
            optimistic: false,
        }), complete = _d.complete, result = _d.result;
        // here complete is true, result is from the cache
        needToFetch = !complete || fetchPolicy === 'cache-and-network';
        // needToFetch is still false
        storeResult = result;
    }
    // skipping some stuff
    ...
    if (shouldFetch) { // shouldFetch is still false, so this doesn't execute
        var networkResult = this.fetchRequest({
            requestId: requestId,
            queryId: queryId,
            document: query,
            options: options,
            fetchMoreForQueryId: fetchMoreForQueryId,
        }
    // resolve with data from cache
    return Promise.resolve({ data: storeResult });
If I use my debugger to change the value of shouldFetch to true then at least I see a network request go out and my lambda executes. I guess I need to unpack what that line that changes my fetchPolicy is doing.
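For reference, that compiled line is just destructuring with a default: because my fetchPolicy was nested under an options key, options.fetchPolicy itself was undefined, so the fallback applied. A simplified equivalent:

// what the transpiled destructuring boils down to:
var fetchPolicy = options.fetchPolicy === void 0 ? 'cache-first' : options.fetchPolicy;
// options.fetchPolicy is undefined when the policy is nested under an
// 'options' key, so Apollo silently falls back to 'cache-first'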
OK I found the issue. Here's an abbreviated version of the code from my question:
client.query({
  query: gql`...`,
  options: {
    fetchPolicy: 'no-cache'
  },
  variables: { ... }
})
It's a little bit easier to see what is wrong here. This is what it should be:
client.query({
  query: gql`...`,
  fetchPolicy: 'network-only',
  variables: { ... }
})
Two issues in my original:
fetchPolicy: 'no-cache' does not seem to work here (I get an empty response)
putting the fetchPolicy in an options object is unnecessary
The graphql client specifies options differently and we were switching between the two.
Set the query fetch-policy to 'network-only' when running in an AWS Lambda function.
I recommend using the overrides for WebSocket, window, and localStorage, since these objects don't really apply within a Lambda function. The setup I typically use for Node.js apps in Lambda looks like the following.
'use strict';

// CONFIG
const AppSync = {
    "graphqlEndpoint": "...",
    "region": "...",
    "authenticationType": "...",
    // auth-specific keys
};

// POLYFILLS
global.WebSocket = require('ws');
global.window = global.window || {
    setTimeout: setTimeout,
    clearTimeout: clearTimeout,
    WebSocket: global.WebSocket,
    ArrayBuffer: global.ArrayBuffer,
    addEventListener: function () { },
    navigator: { onLine: true }
};
global.localStorage = {
    store: {},
    getItem: function (key) {
        return this.store[key]
    },
    setItem: function (key, value) {
        this.store[key] = value
    },
    removeItem: function (key) {
        delete this.store[key]
    }
};
require('es6-promise').polyfill();
require('isomorphic-fetch');

// Require AppSync module
const AUTH_TYPE = require('aws-appsync/lib/link/auth-link').AUTH_TYPE;
const AWSAppSyncClient = require('aws-appsync').default;

// INIT
// Set up AppSync client
const client = new AWSAppSyncClient({
    url: AppSync.graphqlEndpoint,
    region: AppSync.region,
    auth: {
        type: AppSync.authenticationType,
        apiKey: AppSync.apiKey
    }
});
There are two options to enable/disable caching with AWSAppSyncClient/ApolloClient: per query, and/or when initializing the client.
Client Config:
const client = new AWSAppSyncClient(
    {
        url: 'https://myurl/graphql',
        region: 'my-aws-region',
        auth: {
            type: AUTH_TYPE.AWS_MY_AUTH_TYPE,
            credentials: await getMyAWSCredentialsOrToken()
        },
        disableOffline: true
    },
    {
        cache: new InMemoryCache(),
        defaultOptions: {
            watchQuery: {
                fetchPolicy: 'no-cache', // <-- HERE: check the apollo fetch policy options
                errorPolicy: 'ignore'
            },
            query: {
                fetchPolicy: 'no-cache',
                errorPolicy: 'all'
            }
        }
    }
);
Alternative: Query Option:
export default graphql(gql`query { ... }`, {
    options: { fetchPolicy: 'cache-and-network' },
})(MyComponent);
Valid fetchPolicy values are:
cache-first: This is the default value where we always try reading data from your cache first. If all the data needed to fulfill your query is in the cache then that data will be returned. Apollo will only fetch from the network if a cached result is not available. This fetch policy aims to minimize the number of network requests sent when rendering your component.
cache-and-network: This fetch policy will have Apollo first trying to read data from your cache. If all the data needed to fulfill your query is in the cache then that data will be returned. However, regardless of whether or not the full data is in your cache this fetchPolicy will always execute query with the network interface unlike cache-first which will only execute your query if the query data is not in your cache. This fetch policy optimizes for users getting a quick response while also trying to keep cached data consistent with your server data at the cost of extra network requests.
network-only: This fetch policy will never return you initial data from the cache. Instead it will always make a request using your network interface to the server. This fetch policy optimizes for data consistency with the server, but at the cost of an instant response to the user when one is available.
cache-only: This fetch policy will never execute a query using your network interface. Instead it will always try reading from the cache. If the data for your query does not exist in the cache then an error will be thrown. This fetch policy allows you to only interact with data in your local client cache without making any network requests which keeps your component fast, but means your local data might not be consistent with what is on the server. If you are interested in only interacting with data in your Apollo Client cache also be sure to look at the readQuery() and readFragment() methods available to you on your ApolloClient instance.
no-cache: This fetch policy will never return your initial data from the cache. Instead it will always make a request using your network interface to the server. Unlike the network-only policy, it also will not write any data to the cache after the query completes.
Copied from: https://www.apollographql.com/docs/react/api/react-hoc/#graphql-options-for-queries
I have logged in to my server with fetch(); I want to know how I can get the cookies.
I know that I can use document.cookie to get the cookies in web browser development, but how do I do it in React Native development?
Thank you very much.
I just came across the same problem.
My first approach was to manually get the cookies from the response headers.
This became more difficult since Headers.prototype.getAll was removed (see this issue).
The details are shown further down below.
Getting and parsing cookies might be unnecessary
First, I want to mention that all the below cookie parsing turned out to be unnecessary because the implementation of fetch on React Native sends the cookies automatically (if the credentials key is set correctly).
So the session is kept (just like in the browser) and further fetches will work just fine.
Unfortunately, the React Native documentation on Networking does not explicitly tell you that it'll work out of the box. It only says: "React Native provides the Fetch API for your networking needs."
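In other words, something like the following keeps the session without any manual cookie handling (a sketch; the URLs and payload are placeholders):

fetch('https://example.com/login', {
    method: 'POST',
    credentials: 'include', // or 'same-origin' depending on CORS
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: 'a@b.c', password: 'secret' }),
})
    // the session cookie from the login response is re-sent automatically
    .then(() => fetch('https://example.com/me', { credentials: 'include' }))
    .then(response => response.json())
    .then(data => console.log(data))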
First approach
Thus, I wrote a helper function:
// 'headers' is iterable
const get_set_cookies = function (headers) {
    const set_cookies = []
    for (const [name, value] of headers) {
        if (name === "set-cookie") {
            set_cookies.push(value)
        }
    }
    return set_cookies
}
fetch(url, {
    method: "POST",
    credentials: "same-origin", // or 'include' depending on CORS
    // ...
})
    .then(response => {
        const set_cookies = get_set_cookies(response.headers)
    })
To parse the cookie strings into objects, I used set-cookie-parser.
This way I wanted to send the cookies back manually, like:
import SetCookieParser from "set-cookie-parser"

const cookies_to_send = set_cookies
    .map(cookie => {
        // parse() returns an array of cookie objects
        const parsed_cookie = SetCookieParser.parse(cookie)[0]
        return `${parsed_cookie.name}=${parsed_cookie.value}`
    })
    .join('; ')

fetch(url, {
    method: "POST",
    credentials: "same-origin", // or 'include' depending on CORS
    headers: {
        Cookie: cookies_to_send,
        // ...
    },
    // ...
})
Inspired by jneuendorf, I created a helper method that returns key/value pairs to easily look up the value of a cookie:
export const getCookies = function (response) {
    const cookies = {}
    for (const [name, values] of response.headers) {
        if (name === 'set-cookie') {
            // note: cookie attributes like Path or Expires end up in the map too
            for (const cookie of values.split(';')) {
                const [key, value] = cookie.trim().split('=')
                cookies[key] = value
            }
        }
    }
    return cookies
}
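Usage might look like this (the URL and cookie name are placeholders):

fetch('https://example.com/login', { method: 'POST', credentials: 'include' })
    .then(response => {
        const cookies = getCookies(response)
        console.log(cookies['sessionid']) // look up a cookie by name
    })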