OS: Windows 10 Pro
apollo-client: 2.6.3
apollo-boost: 0.1.16
Can anyone explain why I'm getting the following error message?
Found @client directives in a query but no ApolloClient resolvers were
specified. This means ApolloClient local resolver handling has been
disabled, and @client directives will be passed through to your link
chain.
when I've defined my ApolloClient as follows:
return new ApolloClient({
  uri: process.env.NODE_ENV === 'development' ? endpoint : prodEndpoint,
  request: operation => {
    operation.setContext({
      fetchOptions: {
        credentials: 'include',
      },
      headers: { cookie: headers && headers.cookie },
    });
  },
  // local data
  clientState: {
    resolvers: {
      Mutation: {
        toggleCart(_, variables, { cache }) {
          // Read the cartOpen value from the cache
          const { cartOpen } = cache.readQuery({
            query: LOCAL_STATE_QUERY,
          });
          // Write the cart state to the opposite value
          const data = {
            data: { cartOpen: !cartOpen },
          };
          cache.writeData(data);
          return data;
        },
      },
    },
    defaults: {
      cartOpen: false,
    },
  },
});
From the docs:
If you're interested in integrating local state handling capabilities with Apollo Client < 2.5, please refer to our (now deprecated) apollo-link-state project. As of Apollo Client 2.5, local state handling is baked into the core, which means it is no longer necessary to use apollo-link-state
The clientState config option was only used with apollo-link-state. You need to add the resolvers directly to the config as shown in the docs:
new ApolloClient({
  uri: '/graphql',
  resolvers: { ... },
})
Also note that there is no defaults option anymore -- the cache should be initialized by calling writeData directly on the cache instance (see here).
I would suggest going through the latest docs and avoiding any examples from external sources (like existing repos or tutorials) since these may be outdated.
Note: As of version 3.0, writeData was removed in favor of writeFragment and writeQuery.
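Putting those notes together, here's a minimal sketch of the migrated setup for the question's toggleCart resolver. This assumes Apollo Client 2.5-2.6 (or an apollo-boost version that accepts a top-level resolvers option) and reuses LOCAL_STATE_QUERY from the question:
const client = new ApolloClient({
  uri: endpoint,
  resolvers: {
    Mutation: {
      toggleCart(_, variables, { cache }) {
        // Same resolver as before, just moved out of clientState
        const { cartOpen } = cache.readQuery({ query: LOCAL_STATE_QUERY });
        const data = { data: { cartOpen: !cartOpen } };
        cache.writeData(data);
        return data;
      },
    },
  },
});

// No `defaults` option anymore: seed the initial local state directly.
// (writeData applies to 2.5-2.6; in 3.0 use writeQuery/writeFragment instead.)
client.writeData({ data: { cartOpen: false } });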
I have a fairly simple node app using AWS AppSync. I am able to run queries and mutations successfully but I've recently found that if I run a query twice I get the same response - even when I know that the back-end data has changed. In this particular case the query is backed by a lambda and in digging into it I've discovered that the query doesn't seem to be sent out on the network because the lambda is not triggered each time the query runs - just the first time. If I use the console to simulate my query then everything runs fine. If I restart my app then the first time a query runs it works fine but successive queries again just return the same value each time.
Here are some parts of my code:
client.query({
  query: gql`
    query GetAbc($cId: String!) {
      getAbc(cId: $cId) {
        id
        name
        cs
      }
    }`,
  options: {
    fetchPolicy: 'no-cache'
  },
  variables: {
    cid: event.cid
  }
})
.then((data) => {
  // same data every time
})
Edit: trying other fetch policies like network-only makes no visible difference.
Here is how I set up the client, not super clean but it seems to work:
const makeAWSAppSyncClient = (credentials) => {
  return Promise.resolve(
    new AWSAppSyncClient({
      url: 'lalala',
      region: 'us-west-2',
      auth: {
        type: 'AWS_IAM',
        credentials: () => {
          return credentials
        }
      },
      disableOffline: true
    })
  )
}
getRemoteCredentials()
  .then((credentials) => {
    return makeAWSAppSyncClient(credentials)
  })
  .then((client) => {
    return client.hydrated()
  })
  .then((client) => {
    // client is good to use
  })
getRemoteCredentials is a method to turn an IoT authentication into normal IAM credentials which can be used with other AWS SDKs. This is working (because I wouldn't get as far as I do if not).
My issue seems very similar to this one: GraphQL Query Runs Sucessfully One Time and Fails To Run Again using Apollo and AWS AppSync. I'm running in a Node environment (rather than React), but it is essentially the same issue.
I don't think this is relevant but for completeness I should mention I have tried both with and without the setup code from the docs. This appears to make no difference (except annoying logging, see below) but here it is:
global.WebSocket = require('ws')
global.window = global.window || {
  setTimeout: setTimeout,
  clearTimeout: clearTimeout,
  WebSocket: global.WebSocket,
  ArrayBuffer: global.ArrayBuffer,
  addEventListener: function () { },
  navigator: { onLine: true }
}
global.localStorage = {
  store: {},
  getItem: function (key) {
    return this.store[key]
  },
  setItem: function (key, value) {
    this.store[key] = value
  },
  removeItem: function (key) {
    delete this.store[key]
  }
};
require('es6-promise').polyfill()
require('isomorphic-fetch')
This is taken from: https://docs.aws.amazon.com/appsync/latest/devguide/building-a-client-app-javascript.html
With this code, and without disableOffline: true in the client setup, I see this message spewed continuously on the console:
redux-persist asyncLocalStorage requires a global localStorage object.
Either use a different storage backend or if this is a universal redux
application you probably should conditionally persist like so:
https://gist.github.com/rt2zz/ac9eb396793f95ff3c3b
This makes no apparent difference to this issue however.
Update: my dependencies from package.json, I have upgraded these during testing so my yarn.lock contains more recent revisions than listed here. Nevertheless: https://gist.github.com/macbutch/a319a2a7059adc3f68b9f9627598a8ca
Update #2: I have also confirmed from CloudWatch logs that the query is only being run once; I have a mutation running regularly on a timer that is successfully invoked and visible in CloudWatch. That is working as I'd expect but the query is not.
Update #3: I have debugged into the AppSync/Apollo code and can see that my fetchPolicy is being changed to 'cache-first' in this code in apollo-client/core/QueryManager.js (comments mine):
QueryManager.prototype.fetchQuery = function (queryId, options, fetchType, fetchMoreForQueryId) {
    var _this = this;
    // Next line changes options.fetchPolicy to 'cache-first'
    var _a = options.variables, variables = _a === void 0 ? {} : _a, _b = options.metadata, metadata = _b === void 0 ? null : _b, _c = options.fetchPolicy, fetchPolicy = _c === void 0 ? 'cache-first' : _c;
    var cache = this.dataStore.getCache();
    var query = cache.transformDocument(options.query);
    var storeResult;
    var needToFetch = fetchPolicy === 'network-only' || fetchPolicy === 'no-cache';
    // needToFetch is false (because fetchPolicy is 'cache-first')
    if (fetchType !== FetchType.refetch &&
        fetchPolicy !== 'network-only' &&
        fetchPolicy !== 'no-cache') {
        // so we come through this branch
        var _d = this.dataStore.getCache().diff({
            query: query,
            variables: variables,
            returnPartialData: true,
            optimistic: false,
        }), complete = _d.complete, result = _d.result;
        // here complete is true, result is from the cache
        needToFetch = !complete || fetchPolicy === 'cache-and-network';
        // needToFetch is still false
        storeResult = result;
    }
    // skipping some stuff
    ...
    if (shouldFetch) { // shouldFetch is still false so this doesn't execute
        var networkResult = this.fetchRequest({
            requestId: requestId,
            queryId: queryId,
            document: query,
            options: options,
            fetchMoreForQueryId: fetchMoreForQueryId,
        }
    // resolve with data from cache
    return Promise.resolve({ data: storeResult });
If I use my debugger to change the value of shouldFetch to true then at least I see a network request go out and my lambda executes. I guess I need to unpack what the line that is changing my fetchPolicy is doing.
OK I found the issue. Here's an abbreviated version of the code from my question:
client.query({
  query: gql`...`,
  options: {
    fetchPolicy: 'no-cache'
  },
  variables: { ... }
})
It's a little bit easier to see what is wrong here. This is what it should be:
client.query({
  query: gql`...`,
  fetchPolicy: 'network-only',
  variables: { ... }
})
Two issues in my original:
fetchPolicy: 'no-cache' does not seem to work here (I get an empty response)
putting the fetchPolicy in an options object is unnecessary
The graphql() higher-order component specifies its options differently (nested inside an options object), and we were switching between the two call signatures.
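For reference, a quick side-by-side of the two signatures (the query names here are placeholders):
// apollo-client's imperative API takes fetchPolicy at the top level:
client.query({ query: SOME_QUERY, variables: vars, fetchPolicy: 'network-only' });

// ...while the react-apollo graphql() HOC nests it inside an options object:
graphql(SOME_QUERY, { options: { fetchPolicy: 'network-only' } })(MyComponent);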
Set the query fetch-policy to 'network-only' when running in an AWS Lambda function.
I recommend using the overrides for WebSocket, window, and localStorage since these objects don't really apply within a Lambda function. The setup I typically use for NodeJS apps in Lambda looks like the following.
'use strict';

// CONFIG
const AppSync = {
  "graphqlEndpoint": "...",
  "region": "...",
  "authenticationType": "...",
  // auth-specific keys
};

// POLYFILLS
global.WebSocket = require('ws');
global.window = global.window || {
  setTimeout: setTimeout,
  clearTimeout: clearTimeout,
  WebSocket: global.WebSocket,
  ArrayBuffer: global.ArrayBuffer,
  addEventListener: function () { },
  navigator: { onLine: true }
};
global.localStorage = {
  store: {},
  getItem: function (key) {
    return this.store[key]
  },
  setItem: function (key, value) {
    this.store[key] = value
  },
  removeItem: function (key) {
    delete this.store[key]
  }
};
require('es6-promise').polyfill();
require('isomorphic-fetch');

// Require AppSync module
const AUTH_TYPE = require('aws-appsync/lib/link/auth-link').AUTH_TYPE;
const AWSAppSyncClient = require('aws-appsync').default;

// INIT
// Set up AppSync client
const client = new AWSAppSyncClient({
  url: AppSync.graphqlEndpoint,
  region: AppSync.region,
  auth: {
    type: AppSync.authenticationType,
    apiKey: AppSync.apiKey
  }
});
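To tie this back to the fetch policy advice at the top, a hypothetical handler using that client could look like the following (the query itself is illustrative, not part of the original setup):
const gql = require('graphql-tag');

exports.handler = async (event) => {
  // hydrated() resolves with the client once its store has been rehydrated
  const appsync = await client.hydrated();
  const result = await appsync.query({
    query: gql`query ListThings { listThings { id name } }`,
    fetchPolicy: 'network-only' // avoid stale cached results on warm invocations
  });
  return result.data;
};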
There are two places to enable/disable caching with AWSAppSyncClient/ApolloClient: per query, and/or when initializing the client.
Client Config:
client = new AWSAppSyncClient(
{
url: 'https://myurl/graphql',
region: 'my-aws-region',
auth: {
type: AUTH_TYPE.AWS_MY_AUTH_TYPE,
credentials: await getMyAWSCredentialsOrToken()
},
disableOffline: true
},
{
cache: new InMemoryCache(),
defaultOptions: {
watchQuery: {
fetchPolicy: 'no-cache', // <-- HERE: check the apollo fetch policy options
errorPolicy: 'ignore'
},
query: {
fetchPolicy: 'no-cache',
errorPolicy: 'all'
}
}
}
);
Alternative: Query Option:
export default graphql(gql`query { ... }`, {
  options: { fetchPolicy: 'cache-and-network' },
})(MyComponent);
Valid fetchPolicy values are:
cache-first: This is the default value where we always try reading data from your cache first. If all the data needed to fulfill your query is in the cache then that data will be returned. Apollo will only fetch from the network if a cached result is not available. This fetch policy aims to minimize the number of network requests sent when rendering your component.
cache-and-network: This fetch policy will have Apollo first trying to read data from your cache. If all the data needed to fulfill your query is in the cache then that data will be returned. However, regardless of whether or not the full data is in your cache this fetchPolicy will always execute query with the network interface unlike cache-first which will only execute your query if the query data is not in your cache. This fetch policy optimizes for users getting a quick response while also trying to keep cached data consistent with your server data at the cost of extra network requests.
network-only: This fetch policy will never return you initial data from the cache. Instead it will always make a request using your network interface to the server. This fetch policy optimizes for data consistency with the server, but at the cost of an instant response to the user when one is available.
cache-only: This fetch policy will never execute a query using your network interface. Instead it will always try reading from the cache. If the data for your query does not exist in the cache then an error will be thrown. This fetch policy allows you to only interact with data in your local client cache without making any network requests which keeps your component fast, but means your local data might not be consistent with what is on the server. If you are interested in only interacting with data in your Apollo Client cache also be sure to look at the readQuery() and readFragment() methods available to you on your ApolloClient instance.
no-cache: This fetch policy will never return your initial data from the cache. Instead it will always make a request using your network interface to the server. Unlike the network-only policy, it also will not write any data to the cache after the query completes.
Copied from: https://www.apollographql.com/docs/react/api/react-hoc/#graphql-options-for-queries
I've been experiencing some issues with AWS Kinesis: I have a stream set up, and I want to use a standard HTTP POST request to invoke a Kinesis PutRecord call on my stream. I'm doing this because the bundle size of my resulting JavaScript application matters, and I'd rather not import the aws-sdk to accomplish something that should (on paper) be possible.
Just so you know, I've looked at this other Stack Overflow question about the same thing, and it was... sort of informational.
Now, I already have a method to sigv4-sign a request using an access key, secret token, and session token, but when I finally get the result of signing the request and send it using the in-browser fetch API, the service tanks with an internal failure (or with a JSON object citing the same thing, depending on my Content-Type header, I guess) as the result.
Here's the code I'm working with
// There is a global function "sign" that does sigv4 signing
// ...
var payload = {
  Data: { task: "Get something working in kinesis" },
  PartitionKey: "1",
  StreamName: "MyKinesisStream"
}

var credentials = {
  "accessKeyId": "<access.key>",
  "secretAccessKey": "<secret.key>",
  "sessionToken": "<session.token>",
  "expiration": 1528922673000
}

function signer({ url, method, data }) {
  // Wrapping with URL for piecemeal picking of parsed pieces
  const parsed = new URL(url);
  const [ service, region ] = parsed.host.split(".");
  const signed = sign({
    method,
    service,
    region,
    url,
    // Hardcoded
    headers : {
      Host : parsed.host,
      "Content-Type" : "application/json; charset=UTF-8",
      "X-Amz-Target" : "Kinesis_20131202.PutRecord"
    },
    body : JSON.stringify(data),
  }, credentials);
  return signed;
}

// Specify method, url, data body
var signed = signer({
  method: "POST",
  url: "https://kinesis.us-west-2.amazonaws.com",
  data : JSON.stringify(payload)
});

var request = fetch(signed.url, signed);
When I look at the result of request, I get this:
{
  Output: {
    __type: "com.amazon.coral.service#InternalFailure"
  },
  Version: "1.0"
}
Now I'm unsure as to whether Kinesis is actually failing here, or if my input is malformed?
Here's what the signed request looks like:
{
  "method": "POST",
  "service": "kinesis",
  "region": "us-west-2",
  "url": "https://kinesis.us-west-2.amazonaws.com",
  "headers": {
    "Host": "kinesis.us-west-2.amazonaws.com",
    "Content-Type": "application/json; charset=UTF-8",
    "X-Amz-Target": "Kinesis_20131202.PutRecord",
    "X-Amz-Date": "20180613T203123Z",
    "X-Amz-Security-Token": "<session.token>",
    "Authorization": "AWS4-HMAC-SHA256 Credential=<access.key>/20180613/us-west-2/kinesis/aws4_request, SignedHeaders=content-type;host;x-amz-target, Signature=ba20abb21763e5c8e913527c95a0c7efba590cf5ff1df3b770d4d9b945a10481"
  },
  "body": "\"{\\\"Data\\\":{\\\"task\\\":\\\"Get something working in kinesis\\\"},\\\"PartitionKey\\\":\\\"1\\\",\\\"StreamName\\\":\\\"MyKinesisStream\\\"}\"",
  "test": {
    "canonical": "POST\n/\n\ncontent-type:application/json; charset=UTF-8\nhost:kinesis.us-west-2.amazonaws.com\nx-amz-target:Kinesis_20131202.PutRecord\n\ncontent-type;host;x-amz-target\n508d2454044bffc25250f554c7b4c8f2e0c87c2d194676c8787867662633652a",
    "sts": "AWS4-HMAC-SHA256\n20180613T203123Z\n20180613/us-west-2/kinesis/aws4_request\n46a252f4eef52991c4a0903ab63bca86ec1aba09d4275dd8f5eb6fcc8d761211",
    "auth": "AWS4-HMAC-SHA256 Credential=<access.key>/20180613/us-west-2/kinesis/aws4_request, SignedHeaders=content-type;host;x-amz-target, Signature=ba20abb21763e5c8e913527c95a0c7efba590cf5ff1df3b770d4d9b945a10481"
  }
}
(the test key is used by the library that generates the signature, so ignore that)
(Also there are probably extra slashes in the body because I pretty printed the response object using JSON.stringify).
My question: is there something I'm missing? Does Kinesis require headers a, b, and c while I'm only generating two of them? Or is this internal error an actual failure? I'm lost because the response suggests nothing I can do on my end.
I appreciate any help!
Edit: As a secondary question, am I using the X-Amz-Target header correctly? This is how you reference calling a service function so long as you're hitting that service endpoint, no?
Update: Following Michael's comments, I've gotten somewhere, but I still haven't solved the problem. Here's what I did:
I made sure that in my payload I'm only running JSON.stringify on the Data property.
I also modified the Content-Type header to be "Content-Type" : "application/x-amz-json-1.1" and as such, I'm getting slightly more useful error messages back.
Now, my payload is still mostly the same:
var payload = {
  Data: JSON.stringify({ task: "Get something working in kinesis" }),
  PartitionKey: "1",
  StreamName: "MyKinesisStream"
}
and my signer function body looks like this:
function signer({ url, method, data }) {
  // Wrapping with URL for piecemeal picking of parsed pieces
  const parsed = new URL(url);
  const [ service, region ] = parsed.host.split(".");
  const signed = sign({
    method,
    service,
    region,
    url,
    // Hardcoded
    headers : {
      Host : parsed.host,
      "Content-Type" : "application/json; charset=UTF-8",
      "X-Amz-Target" : "Kinesis_20131202.PutRecord"
    },
    body : data,
  }, credentials);
  return signed;
}
So I'm passing in an object that is partially serialized (at least Data is) and when I send this to the service, I get a response of:
{"__type":"SerializationException"}
which is at least marginally helpful because it tells me that my input is technically incorrect. However, I've done a few things in an attempt to correct this:
I've run JSON.stringify on the entire payload
I've changed my Data key to just be a string value to see if it would go through
I've tried running JSON.stringify on Data and then running btoa because I read on another post that that worked for someone.
But I'm still getting the same error. I feel like I'm so close. Can you spot anything I might be missing, or something I haven't tried? I've gotten sporadic UnknownOperationExceptions, but I think right now this SerializationException has me stumped.
Edit 2:
As it turns out, Kinesis will only accept a base64-encoded string for Data. This is probably a nicety that the aws-sdk provides, but essentially all it took was Data: btoa(JSON.stringify({ task: "data" })) in the payload to get it working.
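Putting the edits together, a sketch of the payload that finally worked (stream name and task text from the question; the headers follow the updates above):
var payload = {
  Data: btoa(JSON.stringify({ task: "Get something working in kinesis" })),
  PartitionKey: "1",
  StreamName: "MyKinesisStream"
};
// Signed and sent with "Content-Type": "application/x-amz-json-1.1" and
// "X-Amz-Target": "Kinesis_20131202.PutRecord", with the request body
// serialized exactly once: body: JSON.stringify(payload)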
While I'm not certain this is the only issue, it seems like you are sending a request body that contains an incorrectly serialized (double-encoded) payload.
var obj = { foo: 'bar'};
JSON.stringify(obj) returns a string...
'{"foo": "bar"}' // the ' are not part of the string, I'm using them to illustrate that this is a thing of type string.
...and when parsed with a JSON parser, this returns an object.
{ foo: 'bar' }
However, JSON.stringify(JSON.stringify(obj)) returns a different string...
'"{\"foo\": \"bar\"}"'
...but when parsed, this returns a string.
'{"foo": "bar"}'
The service endpoint expects to parse the body and get an object, not a string... so, parsing the request body (from the service's perspective) doesn't return the correct type. The error seems to be a failure of the service to parse your request at a very low level.
In your code, body: JSON.stringify(data) should just be body: data, because earlier you already created a JSON string with data: JSON.stringify(payload).
As written, you are effectively setting body to JSON.stringify(JSON.stringify(payload)).
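In other words, serialize exactly once. A minimal sketch of the corrected call, reusing the question's signer and payload:
// Caller keeps its single JSON.stringify...
var signed = signer({
  method: "POST",
  url: "https://kinesis.us-west-2.amazonaws.com",
  data: JSON.stringify(payload)
});
// ...and inside signer, the body passes through untouched:
//   body: data,   // instead of body: JSON.stringify(data)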
Not sure if you ever figured this out, but this question pops up on Google when searching for how to do this. The one piece I think you are missing is that the Record Data field must be base64 encoded. Here's a chunk of NodeJS code that will do this (using PutRecords).
And for anyone asking, why not just use the SDK? I currently must stream data from a cluster that cannot be updated to a NodeJS version that the SDK requires due to other dependencies. Yay.
const https = require('https')
const aws4 = require('aws4')

const request = function (o) {
  https.request(o, function (res) { res.pipe(process.stdout) }).end(o.body || '')
}

const _publish_kinesis = function (logs) {
  const kin_logs = logs.map(function (l) {
    let blob = JSON.stringify(l) + '\n'
    let buff = Buffer.from(blob, 'binary');
    let base64data = buff.toString('base64');
    return {
      Data: base64data,
      PartitionKey: '0000'
    }
  })

  while (kin_logs.length > 0) {
    let data = JSON.stringify({
      Records: kin_logs.splice(0, 250),
      StreamName: 'your-streamname'
    })

    let _request = aws4.sign({
      hostname: 'kinesis.us-west-2.amazonaws.com',
      method: 'POST',
      body: data,
      path: '/?Action=PutRecords',
      headers: {
        'Content-Type': 'application/x-amz-json-1.1',
        'X-Amz-Target': 'Kinesis_20131202.PutRecords'
      },
    }, {
      secretAccessKey: "****",
      accessKeyId: "****"
      // sessionToken: "<your-session-token>"
    })

    request(_request)
  }
}

var logs = [{
  'timeStamp': new Date().toISOString(),
  'value': 'test02',
}, {
  'timeStamp': new Date().toISOString(),
  'value': 'test01',
}]

_publish_kinesis(logs)
As part of my project, I want to write data entered into my web page form to a DynamoDB database. For this, I have written Node.js code in an AWS Lambda function to write items into a DynamoDB table, created a web page form with more than one entry for users to fill in the required information, and created an API Gateway to connect the Lambda function and the HTML web page. Below are the Lambda code, the API Gateway mapping template, and the HTML form. Please go through them.
My Lambda code:
"use strict";
// Note: the original snippet ended after `params`; the DocumentClient
// require and put() call below are an assumed completion so the handler
// actually writes the item.
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function (e, ctx, callback) {
  var params = {
    Item: {
      marchp: e.stepno,
      Prev_step_no: e.prevstepno,
      Next_step_no: e.nextstepno,
      Inputdata: e.inputdata,
      Acknowledgement: e.acknowledgement,
      Condition: e.condition,
    },
    TableName: 'MARCHPevents'
  };
  docClient.put(params, function (err, data) {
    if (err) callback(err);
    else callback(null, data);
  });
};
API Gateway Body Mapping Templates:
{
  "stepno": $input.json("$.stepno"),
  "prevstepno": $input.json("$.prevstepno"),
  "nextstepno": $input.json("$.nextstepno"),
  "inputdata": $input.json("$.inputdata"),
  "acknowledgement": $input.json("$.acknowledgement"),
  "condition": $input.json("$.condition")
}
HTML code passing data to the API Gateway:
// (The original snippet began mid-call; the $(document).ready and $.ajax
// wrappers here are an assumed reconstruction of the truncated opening.)
$(document).ready(function () {
  $.ajax({
    url: API_URL,
    success: function (data) {
      $('#entries').html('');
      data.Items.forEach(function (MARCHPreventsItem) {
        $('#entries').append('<p>' + MARCHPreventsItem.InputData + '</p>');
      })
    }
  });
});
$('#submitButton').on('click', function () {
  $.ajax({
    type: 'POST',
    url: API_URL,
    data: JSON.stringify({ "stepno": $('#s1').val() }),
    data: JSON.stringify({ "prevstepno": $('#p1').val() }),
    data: JSON.stringify({ "nextstepno": $('#n1').val() }),
    data: JSON.stringify({ "inputdata": $('#msg').val() }),
    data: JSON.stringify({ "acknowledgement": $('#ack').val() }),
    data: JSON.stringify({ "condition": $('#con').val() }),
    contentType: "application/json",
    success: function (data) {
      location.reload();
    }
  });
  return false;
});
If I pass just one value from the HTML web page form through API Gateway, it is passed perfectly to Lambda, which writes that one value to DynamoDB.
The problem I'm facing: when passing more than one value from the HTML web page form to API Gateway, there is an invocation error at Lambda.
Any help?
Your JavaScript looks incorrect: you are overwriting the data parameter. You need to set all the properties on a single data object, i.e.
var obj = {
  stepno: $('#s1').val(),
  prevstepno: $('#p1').val(),
  ...
};

$.ajax({
  type: 'POST',
  url: API_URL,
  data: JSON.stringify(obj), // serialize the single object once
  contentType: "application/json",
  success: function (data) {
    location.reload();
  }
});
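With that change, the POST body that reaches API Gateway is a single JSON document (values here are illustrative):
{"stepno":"1","prevstepno":"0","nextstepno":"2","inputdata":"hello","acknowledgement":"y","condition":"ok"}
which is exactly the shape the $input.json("$.stepno") expressions in the Body Mapping Template expect.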
I need to use a jQuery AJAX setup in Bloodhound's remote property since I have a server-side page that takes POST requests only. Everything works, but just once. Any subsequent change to the text in the typeahead input box calls the filter function, but does not fire a new server-side request to fetch new data; it just filters through the data that it got in the first request. I need it to make a new request as the user removes the text and types in something else.
I am new to typeahead and I am spending way too much time trying to figure this out. Here is my code.
var users = new Bloodhound({
  datumTokenizer: function (d) {
    return Bloodhound.tokenizers.whitespace(d.value);
  },
  queryTokenizer: Bloodhound.tokenizers.whitespace,
  remote: {
    url: 'fake.jsp',
    filter: function (users) {
      return $.map(users, function (user) {
        return {
          value: user.USER_ID,
          name: user.DISPLAYNAME
        };
      });
    },
    ajax: {
      type: 'POST',
      data: {
        param: function () {
          return $('#userid').val();
        }
      },
      context: this
    }
  }
});
users.initialize(true);

$('#userid').typeahead({
  minLength: 3,
  highlight: true
}, {
  name: 'userslist',
  displayKey: 'name',
  source: users.ttAdapter()
});
I had the same problem and discovered that jQuery's cache: false option does not work in this situation for whatever reason. Here is the solution I found:
remote: {
  url: ...,
  replace: function (url, query) {
    // Used to prevent the data from being cached. New requests aren't made
    // without this (the cache: false setting in the AJAX settings doesn't work).
    return url + "#" + query;
  }
}
Try this:
remote: {
  url: 'fake.jsp/?' + Math.random(),
  .
  .
  .
It's not really the solution, but at least the results will be fetched from the server every time the page is refreshed.