My Next.js SSR app on AWS Amplify is too slow.
The same app on my local PC, started with yarn build && yarn start, is fast enough, but on AWS:
First page load: 10.43 sec
Clicking a link on the page: 9.5 sec
getServerSideProps() isn't calling 999 APIs; it just retrieves the user data from Cognito:
import { withSSRContext } from "aws-amplify";
import { serverSideTranslations } from "next-i18next/serverSideTranslations";

export async function getServerSideProps(context) {
  const req = context.req;
  // Amplify SSR context scoped to this request
  const { Auth } = withSSRContext({ req });
  try {
    const user = await Auth.currentAuthenticatedUser();
    return {
      props: {
        param1: context.params.param1,
        param2: context.params.param2,
        user: user,
        ...(await serverSideTranslations(context.locale, ["common", "dashboard", "footer", "fs"])),
      },
    };
  } catch (err) {
    // Not authenticated: send the user to the sign-in page
    return {
      redirect: {
        permanent: false,
        destination: "/auth/signin",
      },
      props: {},
    };
  }
}
After the first page click, the same page (with other params) is faster (~2.1 sec)...
I tried enabling Amplify's "performance mode", which is supposed to lengthen the cache time, but there was no real improvement.
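The only page-level caching lever I'm aware of is setting a Cache-Control header from getServerSideProps, which (as far as I understand) the Amplify CDN can honor. A minimal sketch of what I mean, with purely illustrative header values, not a fix I've verified:

export async function getServerSideProps(context) {
  // Illustrative only: let the CDN reuse the SSR response for 60 seconds
  // and refresh it in the background afterwards.
  context.res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=300"
  );

  // ...same props logic as above
  return { props: {} };
}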
How can I fix that?
I have a Firestore database that I want to back up regularly using Google Cloud Scheduler alongside a Google Cloud Function. I am following this tutorial: https://firebase.google.com/docs/firestore/solutions/schedule-export#gcp-console.
I am using Node.js 16 with 2 GB of RAM with the code below:
const firestore = require('@google-cloud/firestore');
const client = new firestore.v1.FirestoreAdminClient();

const bucket = 'gs://BUCKET_NAME';

exports.scheduledFirestoreExport = (event, context) => {
  const databaseName = client.databasePath(
    process.env.GCLOUD_PROJECT,
    '(default)'
  );

  return client
    .exportDocuments({
      name: databaseName,
      outputUriPrefix: bucket,
      // Empty array = export all collections
      collectionIds: [],
    })
    .then(responses => {
      const response = responses[0];
      console.log(`Operation name: ${response.name}`);
      return response;
    })
    .catch(err => {
      // Surface the real error in the logs instead of swallowing it
      console.error(err);
      throw new Error('Export operation failed');
    });
};
When I test the function it just returns an Internal Server Error 500. Any ideas?
Also, here is a picture of the errors I'm getting from the logs:
I created 2 tables in RDS Aurora:
test-table, with columns id (PK), tasknumber, status, contactid (FK)
contact-table, in the same DB, with columns contactid (PK), email, phone
I created a trigger on 'test-table' so that whenever 'status' changes, the trigger calls an AWS Lambda ARN.
The Lambda function and Aurora have all the permissions and security cleared. Still, when testing from Lambda I get the error in the image below, and when updating the 'status' field manually in Aurora (via a Workbench SQL query) it shows:
Operation failed: There was an error while applying the SQL script to the database.
ERROR 2013: 2013: Lost connection to MySQL server during query
I have attached my Lambda Node.js code too.
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
const mysql = require('mysql');

// Connection is created once per container, outside the handler
var con = mysql.createConnection({
  host: 'correct value',
  user: 'root',
  password: 'correct value',
  port: correct value,
  database: 'correct value'
});

exports.handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  var fk = event.contact_id;
  console.log('FOREIGN KEY=', fk);

  con.connect(function(err) {
    if (err) throw err;
    // Look up the contact row for the foreign key passed in the event
    var sql = `SELECT * FROM db.contacts where contact_id=${fk}`;
    con.query(sql, function(err, result) {
      if (err) throw err;
      var email = result[0].email;
      console.log(email);

      // Publish the contact's email to an SNS topic
      var sns = new AWS.SNS();
      var params = {
        Message: email,
        Subject: "Test SNS From Lambda",
        TopicArn: "arn:correct value"
      };
      sns.publish(params, function(err, data) {
        if (err) console.log(err, err.stack);
        else {
          console.log(data);
          callback(null, 'Success');
        }
      });
    });
  });
};
(Image 1: MySQL Workbench error)
(Image 2: AWS Lambda error)
I also followed this for the Node.js package: https://github.com/isaacs
You have to close the connection if you want an instant response; otherwise the function will wait for the default connection timeout before it finishes. Please let me know if that makes sense.
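A minimal sketch of what I mean, reusing con and the sns.publish call from the Lambda above (callback-style mysql API):

sns.publish(params, function(err, data) {
  // Close the MySQL connection explicitly so the Lambda can return right away
  // instead of waiting for the idle connection to time out.
  con.end(function(endErr) {
    if (endErr) console.log('Error closing connection:', endErr);
    if (err) {
      console.log(err, err.stack);
      callback(err);
    } else {
      console.log(data);
      callback(null, 'Success');
    }
  });
});

con.end() lets any queued queries finish before closing (unlike con.destroy(), which drops the socket immediately).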
I am trying to use @google-cloud/logging-winston in a Cloud Function.
I want to use prefix & labels, but when I configure them based on the @google-cloud/logging-winston documentation, nothing works.
As can be seen in the picture, the label is added to the textPayload and the prefix is not used at all.
Any idea what's wrong and how to fix this?
/**
 * Responds to any HTTP request.
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */
exports.helloWorld = (req, res) => {
  const winston = require('winston');
  // Imports the Google Cloud client library for Winston
  const {LoggingWinston} = require('@google-cloud/logging-winston');

  const loggingWinston = new LoggingWinston({
    serviceContext: {
      service: 'winston-test',
      version: '1'
    },
    prefix: 'DataInflow'
  });

  // Create a Winston logger that streams to Stackdriver Logging
  // Logs will be written to: "projects/YOUR_PROJECT_ID/logs/winston_log"
  const logger = winston.createLogger({
    level: 'info',
    transports: [
      new winston.transports.Console(),
      // Add Stackdriver Logging
      loggingWinston,
    ],
  });

  // Writes some log entries
  //logger.debug('Testing labels', { labels: { module: 'Test Winston Logging' }});
  logger.error('Testing labels', { custom_metadata: 'yes', labels: { module: 'Test Winston Logging' }});

  let message = req.query.message || req.body.message || 'Hello World!';
  res.status(200).send(message);
};
Edit #1: If I specify keyFilename (the path to the service account JSON file) then it works.
const loggingWinston = new LoggingWinston({
  serviceContext: {
    service: 'winston-test',
    version: '1'
  },
  prefix: 'DataInflow',
  keyFilename: 'path-to-sa-json-file.json'
});
But this is strange. The Cloud Function & logging-winston library should use Application Default Credentials (ADC).
So if the service account I use with my Cloud Function has Stackdriver write access, it should be fine.
Is my understanding wrong? Does anyone have other insights?
Edit #2: This seems to be an issue with Cloud Functions on Node.js 10. It works fine with Node.js 8 without specifying the keyFilename... quite strange.
Finally I managed to sort out the issue. All Winston logs are stored in Stackdriver under a custom label, and that was the issue in my case.
With the code below I was able to log the error to Stackdriver and also to Error Reporting.
In Error Reporting I could find my errors under service -> winston-test, and in Stackdriver under All logs -> winston_log.
/**
 * Responds to any HTTP request.
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */
exports.helloWorld = (req, res) => {
  const winston = require('winston');
  // Imports the Google Cloud client library for Winston
  const {LoggingWinston} = require('@google-cloud/logging-winston');

  const metadata = {
    resource: {
      type: 'global',
      serviceContext: {
        service: 'winston-test',
        version: '1'
      },
      prefix: 'DataInflow'
    }
  };

  const resource = {
    // This example targets the "global" resource for simplicity
    type: 'global',
  };

  const loggingWinston = new LoggingWinston({ resource: resource });

  // Create a Winston logger that streams to Stackdriver Logging
  // Logs will be written to: "projects/YOUR_PROJECT_ID/logs/winston_log"
  const logger = winston.createLogger({
    level: 'info',
    transports: [
      new winston.transports.Console(),
      // Add Stackdriver Logging
      loggingWinston,
    ],
  });

  // Writes some log entries
  //logger.debug('Testing labels', { labels: { module: 'Test Winston Logging' }});
  logger.error(Error('Testing labels'));

  let message = req.query.message || req.body.message || 'Hello World!';
  res.status(200).send(message);
};
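If you still want labels on the entries (which was the original goal), my understanding is that recent versions of @google-cloud/logging-winston also accept a labels object in the transport options that is applied to every LogEntry; a small sketch under that assumption (verify against the version you deploy):

const {LoggingWinston} = require('@google-cloud/logging-winston');

// Sketch only: default labels configured on the transport itself
const loggingWinstonWithLabels = new LoggingWinston({
  resource: { type: 'global' },
  // Assumed option: attaches these labels to every entry written by this transport
  labels: { module: 'Test Winston Logging' }
});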
I have a fairly simple Node app using AWS AppSync. I am able to run queries and mutations successfully, but I've recently found that if I run a query twice I get the same response, even when I know that the back-end data has changed. In this particular case the query is backed by a Lambda, and in digging into it I've discovered that the query doesn't seem to be sent out on the network at all: the Lambda is not triggered each time the query runs, just the first time. If I use the console to simulate my query then everything runs fine. If I restart my app, the first query works fine, but successive queries again just return the same value each time.
Here are some parts of my code:
client.query({
  query: gql`
    query GetAbc($cId: String!) {
      getAbc(cId: $cId) {
        id
        name
        cs
      }
    }`,
  options: {
    fetchPolicy: 'no-cache'
  },
  variables: {
    cid: event.cid
  }
})
.then((data) => {
  // same data every time
})
Edit: trying other fetch policies like network-only makes no visible difference.
Here is how I set up the client; it's not super clean, but it seems to work:
const makeAWSAppSyncClient = (credentials) => {
  return Promise.resolve(
    new AWSAppSyncClient({
      url: 'lalala',
      region: 'us-west-2',
      auth: {
        type: 'AWS_IAM',
        credentials: () => {
          return credentials
        }
      },
      disableOffline: true
    })
  )
}

getRemoteCredentials()
  .then((credentials) => {
    return makeAWSAppSyncClient(credentials)
  })
  .then((client) => {
    return client.hydrated()
  })
  .then((client) => {
    // client is good to use
  })
getRemoteCredentials is a method that turns an IoT authentication into normal IAM credentials which can be used with other AWS SDKs. This is working (because I wouldn't get as far as I do if it weren't).
My issue seems very similar to this one: "GraphQL Query Runs Sucessfully One Time and Fails To Run Again using Apollo and AWS AppSync"; I'm running in a Node environment (rather than React) but it is essentially the same issue.
I don't think this is relevant, but for completeness I should mention that I have tried both with and without the setup code from the docs. It appears to make no difference (except for annoying logging, see below), but here it is:
global.WebSocket = require('ws')
global.window = global.window || {
  setTimeout: setTimeout,
  clearTimeout: clearTimeout,
  WebSocket: global.WebSocket,
  ArrayBuffer: global.ArrayBuffer,
  addEventListener: function () { },
  navigator: { onLine: true }
}
global.localStorage = {
  store: {},
  getItem: function (key) {
    return this.store[key]
  },
  setItem: function (key, value) {
    this.store[key] = value
  },
  removeItem: function (key) {
    delete this.store[key]
  }
};
require('es6-promise').polyfill()
require('isomorphic-fetch')
This is taken from: https://docs.aws.amazon.com/appsync/latest/devguide/building-a-client-app-javascript.html
With this code, and without disableOffline: true in the client setup, I see this message spewed continuously on the console:
redux-persist asyncLocalStorage requires a global localStorage object.
Either use a different storage backend or if this is a universal redux
application you probably should conditionally persist like so:
https://gist.github.com/rt2zz/ac9eb396793f95ff3c3b
This makes no apparent difference to this issue however.
Update: here are my dependencies from package.json. I have upgraded these during testing, so my yarn.lock contains more recent revisions than listed here. Nevertheless: https://gist.github.com/macbutch/a319a2a7059adc3f68b9f9627598a8ca
Update #2: I have also confirmed from CloudWatch logs that the query is only being run once; I have a mutation running regularly on a timer that is successfully invoked and visible in CloudWatch. That is working as I'd expect but the query is not.
Update #3: I have debugged into the AppSync/Apollo code and can see that my fetchPolicy is being changed to 'cache-first' in this code in apollo-client/core/QueryManager.js (comments mine):
QueryManager.prototype.fetchQuery = function (queryId, options, fetchType, fetchMoreForQueryId) {
  var _this = this;
  // Next line changes options.fetchPolicy to 'cache-first'
  var _a = options.variables, variables = _a === void 0 ? {} : _a, _b = options.metadata, metadata = _b === void 0 ? null : _b, _c = options.fetchPolicy, fetchPolicy = _c === void 0 ? 'cache-first' : _c;
  var cache = this.dataStore.getCache();
  var query = cache.transformDocument(options.query);
  var storeResult;
  var needToFetch = fetchPolicy === 'network-only' || fetchPolicy === 'no-cache';
  // needToFetch is false (because fetchPolicy is 'cache-first')
  if (fetchType !== FetchType.refetch &&
      fetchPolicy !== 'network-only' &&
      fetchPolicy !== 'no-cache') {
    // so we come through this branch
    var _d = this.dataStore.getCache().diff({
      query: query,
      variables: variables,
      returnPartialData: true,
      optimistic: false,
    }), complete = _d.complete, result = _d.result;
    // here complete is true, result is from the cache
    needToFetch = !complete || fetchPolicy === 'cache-and-network';
    // needToFetch is still false
    storeResult = result;
  }
  // skipping some stuff
  ...
  if (shouldFetch) { // shouldFetch is still false so this doesn't execute
    var networkResult = this.fetchRequest({
      requestId: requestId,
      queryId: queryId,
      document: query,
      options: options,
      fetchMoreForQueryId: fetchMoreForQueryId,
    });
  }
  // resolve with data from cache
  return Promise.resolve({ data: storeResult });
};
If I use my debugger to change the value of shouldFetch to true, then at least I see a network request go out and my Lambda executes. I guess I need to unpack what the line that is changing my fetchPolicy is doing.
OK I found the issue. Here's an abbreviated version of the code from my question:
client.query({
  query: gql`...`,
  options: {
    fetchPolicy: 'no-cache'
  },
  variables: { ... }
})
It's a little bit easier to see what is wrong here. This is what it should be:
client.query({
  query: gql`...`,
  fetchPolicy: 'network-only',
  variables: { ... }
})
Two issues in my original:
fetchPolicy: 'no-cache' does not seem to work here (I get an empty response)
putting the fetchPolicy in an options object is unnecessary
The graphql() higher-order client specifies options differently (it takes an options object), and we were mixing the two styles.
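To make the second point concrete: the destructuring shown in QueryManager.fetchQuery above only reads a top-level fetchPolicy, so a nested options.fetchPolicy is silently ignored and the 'cache-first' default wins. A tiny illustration (plain Node; the object shape mirrors my original call):

// fetchPolicy is only read from the top level of the object passed to client.query()
const queryOptions = {
  query: '...',                           // placeholder
  options: { fetchPolicy: 'no-cache' },   // nested, as in my original code: never read
  variables: {}
};

// Mirrors the default-value destructuring in apollo-client's QueryManager.fetchQuery
const { fetchPolicy = 'cache-first' } = queryOptions;
console.log(fetchPolicy); // 'cache-first' -- the nested value was ignored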
Set the query fetchPolicy to 'network-only' when running in an AWS Lambda function.
I recommend using the overrides for WebSocket, window, and localStorage, since these browser objects aren't really applicable within a Lambda function. The setup I typically use for Node.js apps in Lambda looks like the following.
'use strict';

// CONFIG
const AppSync = {
  "graphqlEndpoint": "...",
  "region": "...",
  "authenticationType": "...",
  // auth-specific keys
};

// POLYFILLS
global.WebSocket = require('ws');
global.window = global.window || {
  setTimeout: setTimeout,
  clearTimeout: clearTimeout,
  WebSocket: global.WebSocket,
  ArrayBuffer: global.ArrayBuffer,
  addEventListener: function () { },
  navigator: { onLine: true }
};
global.localStorage = {
  store: {},
  getItem: function (key) {
    return this.store[key]
  },
  setItem: function (key, value) {
    this.store[key] = value
  },
  removeItem: function (key) {
    delete this.store[key]
  }
};
require('es6-promise').polyfill();
require('isomorphic-fetch');

// Require AppSync module
const AUTH_TYPE = require('aws-appsync/lib/link/auth-link').AUTH_TYPE;
const AWSAppSyncClient = require('aws-appsync').default;

// INIT
// Set up AppSync client
const client = new AWSAppSyncClient({
  url: AppSync.graphqlEndpoint,
  region: AppSync.region,
  auth: {
    type: AppSync.authenticationType,
    apiKey: AppSync.apiKey
  }
});
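With the client above, each query can then force a round trip by passing fetchPolicy at the top level of the call. A minimal usage sketch, reusing the getAbc query shape from the question and assuming graphql-tag is installed:

const gql = require('graphql-tag');

// Force a network request on every call instead of serving a cached result
client.query({
  query: gql`
    query GetAbc($cId: String!) {
      getAbc(cId: $cId) {
        id
        name
      }
    }`,
  fetchPolicy: 'network-only',
  variables: { cId: 'some-id' }
})
  .then(result => console.log(result.data))
  .catch(err => console.error(err));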
There are two places to enable/disable caching with AWSAppSyncClient/ApolloClient: per query and/or when initializing the client.
Client Config:
client = new AWSAppSyncClient(
  {
    url: 'https://myurl/graphql',
    region: 'my-aws-region',
    auth: {
      type: AUTH_TYPE.AWS_MY_AUTH_TYPE,
      credentials: await getMyAWSCredentialsOrToken()
    },
    disableOffline: true
  },
  {
    cache: new InMemoryCache(),
    defaultOptions: {
      watchQuery: {
        fetchPolicy: 'no-cache', // <-- HERE: check the apollo fetch policy options
        errorPolicy: 'ignore'
      },
      query: {
        fetchPolicy: 'no-cache',
        errorPolicy: 'all'
      }
    }
  }
);
Alternative: Query Option:
export default graphql(gql`query { ... }`, {
  options: { fetchPolicy: 'cache-and-network' },
})(MyComponent);
Valid fetchPolicy values are:
cache-first: This is the default value where we always try reading data from your cache first. If all the data needed to fulfill your query is in the cache then that data will be returned. Apollo will only fetch from the network if a cached result is not available. This fetch policy aims to minimize the number of network requests sent when rendering your component.
cache-and-network: This fetch policy will have Apollo first trying to read data from your cache. If all the data needed to fulfill your query is in the cache then that data will be returned. However, regardless of whether or not the full data is in your cache this fetchPolicy will always execute query with the network interface unlike cache-first which will only execute your query if the query data is not in your cache. This fetch policy optimizes for users getting a quick response while also trying to keep cached data consistent with your server data at the cost of extra network requests.
network-only: This fetch policy will never return you initial data from the cache. Instead it will always make a request using your network interface to the server. This fetch policy optimizes for data consistency with the server, but at the cost of an instant response to the user when one is available.
cache-only: This fetch policy will never execute a query using your network interface. Instead it will always try reading from the cache. If the data for your query does not exist in the cache then an error will be thrown. This fetch policy allows you to only interact with data in your local client cache without making any network requests which keeps your component fast, but means your local data might not be consistent with what is on the server. If you are interested in only interacting with data in your Apollo Client cache also be sure to look at the readQuery() and readFragment() methods available to you on your ApolloClient instance.
no-cache: This fetch policy will never return your initial data from the cache. Instead it will always make a request using your network interface to the server. Unlike the network-only policy, it also will not write any data to the cache after the query completes.
Copied from: https://www.apollographql.com/docs/react/api/react-hoc/#graphql-options-for-queries