My Next.js SSR app on AWS Amplify is too slow:
The same app on my local PC, started with yarn build && yarn start, is fast enough, but on AWS:
First page load: 10.43 Sec
When click on a link in the page: 9.5 Sec
The getServerSideProps() doesn't call lots of APIs; it just retrieves the user data from Cognito:
export async function getServerSideProps(context) {
let req = context.req;
const { Auth } = withSSRContext({ req });
try {
let user = await Auth.currentAuthenticatedUser();
return {
props: {
param1: context.params.param1,
param2: context.params.param2,
user: user,
...(await serverSideTranslations(context.locale, ["common", "dashboard", "footer","fs"])),
},
};
} catch (err) {
return {
redirect: {
permanent: false,
destination: "/auth/signin",
},
props: {},
};
}
}
After the first click, the same page (with other params) is faster (~2.1 sec)...
I tried enabling "Amplify performance mode", which should lengthen the caching time, but there was no real improvement.
How can I fix that?
I am working on a website which uses Angular for the client and Django for the backend, and I want to include Stripe for payments. To configure Stripe on the backend I have a microservice which runs on Docker and accepts the following requests:
one for creating a Stripe customer
another one for creating a payment
I configured Stripe on the client using the provided form as follows:
export class PaymentComponent implements OnInit {
paymentHandler:any = null;
constructor(private checkoutService: CheckoutService) { }
ngOnInit() {
this.invokeStripe();
}
initializePayment(amount: number) {
const paymentHandler = (<any>window).StripeCheckout.configure({
key: 'pk_test_51MKU1wDo0NxQ0glB5HRAxUsR9MsY24POw3YHwIXnoMyFRyJ3cAV6FaErUeuEiWkGuWgAOoB3ILWXTgHA1CE9LTFr00WOT5U5vJ',
locale: 'auto',
token: function (stripeToken: any) {
console.log(stripeToken);
alert('Stripe token generated!');
paymentStripe(stripeToken);
}
});
const paymentStripe = (stripeToken: any) => {
this.checkoutService.makePayment(stripeToken).subscribe((data:any) => {
console.log(data)
})
}
paymentHandler.open({
name: 'Card Details',
description: 'Introduce the information from your card',
amount: amount * 100
});
}
invokeStripe() {
if(!window.document.getElementById('stripe-script')) {
const script = window.document.createElement("script");
script.id = "stripe-script";
script.type = "text/javascript";
script.src = "https://checkout.stripe.com/checkout.js";
script.onload = () => {
this.paymentHandler = (<any>window).StripeCheckout.configure({
key: 'pk_test_51MKU1wDo0NxQ0glB5HRAxUsR9MsY24POw3YHwIXnoMyFRyJ3cAV6FaErUeuEiWkGuWgAOoB3ILWXTgHA1CE9LTFr00WOT5U5vJ',
locale: 'auto',
token: function (stripeToken: any) {
console.log(stripeToken)
alert('Payment has been successful!');
}
});
}
window.document.body.appendChild(script);
}
}
}
The makePayment method makes a request to the Django server, sending the Stripe token.
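For reference, a minimal sketch of what that checkoutService.makePayment method might look like (the '/api/payment/' endpoint and the service shape are assumptions, not taken from the actual project):
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class CheckoutService {
  constructor(private http: HttpClient) { }

  // Sends the Stripe token to the Django backend, which forwards the request to the payment microservice.
  makePayment(stripeToken: any): Observable<any> {
    // '/api/payment/' is a placeholder endpoint, not the actual Django route.
    return this.http.post('/api/payment/', { token: stripeToken.id });
  }
}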
From the server I then need to make the above requests to the microservice.
My question is: why do I need the configuration on the server, as long as all I have to do to perform a payment is make a request to the microservice? And where should I use the token?
I have also read about webhooks, but I don't really understand the concept or how to use them in my situation.
Also, do I need to test everything with the Angular CLI? And why?
Thank you in advance!
Our app allows our clients to upload large files. Files are stored on AWS S3, and we use Uppy for the upload, dockerized so it can run under a Kubernetes deployment where we can scale up the number of instances.
It works well, but we noticed that all uploads > 5GB fail. I know Uppy has a plugin for AWS multipart uploads, but even when it is installed during the container image creation, the result is the same.
Here's our Dockerfile. Has someone ever succeeded in uploading > 5GB files to S3 via Uppy? Is there anything we're missing?
FROM node:alpine AS companion
RUN yarn global add @uppy/companion@3.0.1
RUN yarn global add @uppy/aws-s3-multipart
ARG UPPY_COMPANION_DOMAIN=[...redacted..]
ARG UPPY_AWS_BUCKET=[...redacted..]
ENV COMPANION_SECRET=[...redacted..]
ENV COMPANION_PREAUTH_SECRET=[...redacted..]
ENV COMPANION_DOMAIN=${UPPY_COMPANION_DOMAIN}
ENV COMPANION_PROTOCOL="https"
ENV COMPANION_DATADIR="COMPANION_DATA"
# ENV COMPANION_HIDE_WELCOME="true"
# ENV COMPANION_HIDE_METRICS="true"
ENV COMPANION_CLIENT_ORIGINS=[...redacted..]
ENV COMPANION_AWS_KEY=[...redacted..]
ENV COMPANION_AWS_SECRET=[...redacted..]
ENV COMPANION_AWS_BUCKET=${UPPY_AWS_BUCKET}
ENV COMPANION_AWS_REGION="us-east-2"
ENV COMPANION_AWS_USE_ACCELERATE_ENDPOINT="true"
ENV COMPANION_AWS_EXPIRES="3600"
ENV COMPANION_AWS_ACL="public-read"
# We don't need to store data for just S3 uploads, but Uppy throws unless this dir exists.
RUN mkdir COMPANION_DATA
CMD ["companion"]
EXPOSE 3020
EDIT:
I made sure I had:
uppy.use(AwsS3Multipart, {
limit: 5,
companionUrl: '<our uppy url>',
})
And it still doesn't work. I see all the chunks of the 9GB file sent in the network tab, but as soon as it hits 100%, Uppy throws a "cannot post" error (to our S3 URL) and that's it. Failure.
Has anyone ever encountered this? The upload goes fine until 100%, then the last chunk gets HTTP error 413, making the entire upload fail.
Thanks!
Here I'm adding some code samples from my repository that will help you understand the flow of using the busboy package to stream data to an S3 bucket. I'm also adding reference links so you can get the details of the packages I'm using:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/index.html
https://www.npmjs.com/package/busboy
import Busboy from 'busboy';
import { Request, Response } from 'express';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
// Shared S3 client and the response shape used below.
const S3 = new S3Client({ region: process.env.AWS_REGION });
interface ResponseDto { status: boolean; message: string; data?: any; error?: string; }
export const uploadStreamFile = async (req: Request, res: Response) => {
const busboy = new Busboy({ headers: req.headers });
const streamResponse = await busboyStream(busboy, req);
const uploadResponse = await s3FileUpload(streamResponse.data.buffer);
return res.send(uploadResponse);
};
const busboyStream = async (busboy: any, req: Request): Promise<any> => {
return new Promise((resolve, reject) => {
try {
const fileData: any[] = [];
let fileBuffer: Buffer;
let metaData: any; // captured per file and returned alongside the buffer below
busboy.on('file', async (fieldName: any, file: any, fileName: any, encoding: any, mimetype: any) => {
// ! File is missing in the request
if (!fileName)
return reject("File not found!");
metaData = { fieldName, fileName, encoding, mimetype };
let totalBytes: number = 0;
file.on('data', (chunk: any) => {
fileData.push(chunk);
// ! given code is only for logging purpose
// TODO will remove once project is live
totalBytes += chunk.length;
console.log('File [' + fieldName + '] got ' + chunk.length + ' bytes');
});
file.on('error', (err: any) => {
reject(err);
});
file.on('end', () => {
fileBuffer = Buffer.concat(fileData);
});
});
// ? Ha, finally file parsing went well
busboy.on('finish', () => {
const responseData: ResponseDto = {
status: true, message: "File parsing done", data: {
buffer: fileBuffer,
metaData
}
};
resolve(responseData)
console.log('Done parsing data! -> File uploaded');
});
req.pipe(busboy);
} catch (error) {
reject(error);
}
});
}
const s3FileUpload = async (fileData: any): Promise<ResponseDto> => {
try {
const params: any = {
Bucket: <BUCKET_NAME>,
Key: <path>,
Body: fileData,
ContentType: <content_type>,
ServerSideEncryption: "AES256",
};
const command = new PutObjectCommand(params);
const uploadResponse: any = await S3.send(command);
return { status: true, message: "File uploaded successfully", data: uploadResponse };
} catch (error) {
const responseData = { status: false, message: "Monitor connection failed, please contact tech support!", error: error.message };
return responseData;
}
}
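Note that the sample above buffers the whole file in memory and sends it with a single PutObject call, which S3 caps at 5 GB. If larger files have to go through the server, a possible variation (a sketch only, not part of the original answer) is to pipe the busboy file stream into a multipart upload via @aws-sdk/lib-storage:
import Busboy from 'busboy';
import { Request, Response } from 'express';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({ region: process.env.AWS_REGION });

// Pipes each incoming file part straight into an S3 multipart upload,
// so the whole file is never buffered in memory and the 5 GB single-PUT cap doesn't apply.
export const uploadLargeFile = (req: Request, res: Response) => {
  const busboy = new Busboy({ headers: req.headers });
  busboy.on('file', async (_fieldName: any, file: any, fileName: any) => {
    const upload = new Upload({
      client: s3,
      params: {
        Bucket: process.env.AWS_BUCKET as string, // assumption: bucket name comes from the environment
        Key: fileName,
        Body: file, // the busboy file stream is uploaded in parts as it arrives
      },
    });
    const result = await upload.done();
    res.send({ status: true, message: 'File uploaded successfully', data: result });
  });
  busboy.on('error', (err: any) => res.status(500).send({ status: false, error: String(err) }));
  req.pipe(busboy);
};
With this approach only a few parts are held in memory at a time, so the upload size is limited by S3's 5 TB multipart object cap rather than the 5 GB single-PUT cap.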
In Amazon S3, a single PUT operation can upload an object of at most 5 GB.
To upload > 5GB files to S3 you need to use the S3 multipart upload API, and also the AwsS3Multipart Uppy plugin.
Check your upload code to confirm that you are using AwsS3Multipart correctly, for example setting the limit properly; in this case a limit between 5 and 15 is recommended.
import AwsS3Multipart from '@uppy/aws-s3-multipart'
uppy.use(AwsS3Multipart, {
limit: 5,
companionUrl: 'https://uppy-companion.myapp.net/',
})
Also, check this issue on GitHub: "Uploading a large >5GB file to S3 errors out #1945".
If you're getting Error: request entity too large in your Companion server logs, I fixed this in my Companion Express server by increasing the body-parser limit:
app.use(bodyparser.json({ limit: '21GB', type: 'application/json' }))
This is a good working example of Uppy S3 multipart uploads (without this limit increased): https://github.com/jhanitesh10/uppy
I'm able to upload files up to a (self-imposed) limit of 20GB using this code.
I'm working with Next.js server-side rendering and AWS Amplify to fetch data. However, I've hit a roadblock: I'm getting an error saying that there's no current user.
My question is: why does the app need to have a user if the data is supposed to be readable by the public?
What I'm trying to do is show data to the public when they visit a user's profile page. They shouldn't have to be signed into the app.
My current folder structure is:
/pages/[user]/index.js with getStaticProps and getStaticPaths:
export async function getStaticPaths() {
const SSR = withSSRContext();
const { data } = await SSR.API.graphql({ query: listUsers });
const paths = data.listUsers.items.map((user) => ({
params: { user: user.username },
}));
return {
fallback: true,
paths,
};
}
export async function getStaticProps({ params }) {
const SSR = withSSRContext();
const { data } = await SSR.API.graphql({
query: postsByUsername,
variables: {
username: params.username,
},
});
return {
props: {
posts: data.postsByUsername.items,
},
};
}
Finally figured it out. A lot of tutorials use the authMode: 'AMAZON_COGNITO_USER_POOLS' (or 'AWS_IAM') parameter in their GraphQL queries, for example in https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/
// Creating a post is restricted to IAM
const createdTodo = await API.graphql({
query: queries.createTodo,
variables: {input: todoDetails},
authMode: 'AWS_IAM'
});
But you rarely come across people who use authMode: 'API_KEY'.
So I guess, if you want the public to read without authentication, you just need to set authMode: 'API_KEY'...
Make sure you configure your Amplify API to have a public API key as well.
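A minimal sketch of what that looks like in the getStaticProps above (assuming the model's public read rule is backed by the API key):
export async function getStaticProps({ params }) {
  const SSR = withSSRContext();
  // Public, unauthenticated read via the API key (sketch; assumes the schema authorizes public reads with the API key).
  const { data } = await SSR.API.graphql({
    query: postsByUsername,
    variables: { username: params.username },
    authMode: 'API_KEY',
  });
  return { props: { posts: data.postsByUsername.items } };
}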
I tried, but it didn't work. I got an error: Error when evaluating SSR module /node_modules/cross-fetch/dist/browser-ponyfill.js:
<script lang="ts">
import fetch from 'cross-fetch';
import { ApolloClient, InMemoryCache, HttpLink } from "#apollo/client";
const client = new ApolloClient({
ssrMode: true,
link: new HttpLink({ uri: '/graphql', fetch }),
uri: 'http://localhost:4000/graphql',
cache: new InMemoryCache()
});
</script>
With SvelteKit, the subject of CSR vs. SSR and where data fetching should happen is a bit deeper than with other somewhat "similar" solutions. The guide below should help you connect some of the dots, but a couple of things need to be stated first.
To define a server-side route, create a file with the .js extension anywhere in the src/routes directory tree. This .js file can have all the import statements required without the JS bundles that they reference being sent to the web browser.
The @apollo/client package is quite huge as it contains the React dependency. Instead, you might want to consider importing just @apollo/client/core, even if you're setting up the Apollo Client to be used only on the server side, as the demo below shows. Also, @apollo/client is not an ESM package; notice how it's imported below in order for the project to build successfully with the node adapter.
Try going through the following steps.
Create a new SvelteKit app and choose the 'SvelteKit demo app' in the first step of the SvelteKit setup wizard. Answer the "Use TypeScript?" question with N as well as all of the questions afterwards.
npm init svelte@next demo-app
cd demo-app
Modify the package.json accordingly. Optionally check for package updates with npx npm-check-updates -u
{
"name": "demo-app",
"version": "0.0.1",
"scripts": {
"dev": "svelte-kit dev",
"build": "svelte-kit build --verbose",
"preview": "svelte-kit preview"
},
"devDependencies": {
"#apollo/client": "^3.3.15",
"#sveltejs/adapter-node": "next",
"#sveltejs/kit": "next",
"graphql": "^15.5.0",
"node-fetch": "^2.6.1",
"svelte": "^3.37.0"
},
"type": "module",
"dependencies": {
"#fontsource/fira-mono": "^4.2.2",
"#lukeed/uuid": "^2.0.0",
"cookie": "^0.4.1"
}
}
Modify the svelte.config.js accordingly.
import node from '@sveltejs/adapter-node';
export default {
kit: {
// By default, `npm run build` will create a standard Node app.
// You can create optimized builds for different platforms by
// specifying a different adapter
adapter: node(),
// hydrate the <div id="svelte"> element in src/app.html
target: '#svelte'
}
};
Create the src/lib/Client.js file with the following contents. This is the Apollo Client setup file.
import fetch from 'node-fetch';
import { ApolloClient, HttpLink } from '@apollo/client/core/core.cjs.js';
import { InMemoryCache } from '@apollo/client/cache/cache.cjs.js';
class Client {
constructor() {
if (Client._instance) {
return Client._instance
}
Client._instance = this;
this.client = this.setupClient();
}
setupClient() {
const link = new HttpLink({
uri: 'http://localhost:4000/graphql',
fetch
});
const client = new ApolloClient({
link,
cache: new InMemoryCache()
});
return client;
}
}
export const client = (new Client()).client;
Create src/routes/qry/test.js with the following contents. This is the server-side route. If the GraphQL schema doesn't have the double function, specify a different query, input(s), and output.
import { client } from '$lib/Client.js';
import { gql } from '@apollo/client/core/core.cjs.js';
export const post = async request => {
const { num } = request.body;
try {
const query = gql`
query Doubled($x: Int) {
double(number: $x)
}
`;
const result = await client.query({
query,
variables: { x: num }
});
return {
status: 200,
body: {
nodes: result.data.double
}
}
} catch (err) {
return {
status: 500,
error: 'Error retrieving data'
}
}
}
Add the following to the load function of the routes/todos/index.svelte file, within the <script context="module">...</script> tag.
try {
const res = await fetch('/qry/test', {
method: 'POST',
credentials: 'same-origin',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
num: 19
})
});
const data = await res.json();
console.log(data);
} catch (err) {
console.error(err);
}
Finally, execute the npm install and npm run dev commands. Load the site in your web browser and see the server-side route being queried from the client whenever you hover over the TODOS link in the navbar. In the console's network tab, notice how much quicker the response from the test route is on every second and subsequent request, thanks to the Apollo Client instance being a singleton.
Two things to keep in mind when using phaleth's solution above: caching and authenticated requests.
Since the client is used in the endpoint /qry/test.js, the singleton pattern combined with the caching behavior makes your server stateful. So if A and then B make the same query, B could end up seeing some of A's data.
The same problem arises if you need authorization headers in your query. You would need to set this up in the setupClient method like so:
setupClient(sometoken) {
...
const authLink = setContext((_, { headers }) => {
return {
headers: {
...headers,
authorization: `Bearer ${sometoken}`
}
};
});
const client = new ApolloClient({
credentials: 'include',
link: authLink.concat(link),
cache: new InMemoryCache()
});
return client;
}
But then, with the singleton pattern, this becomes problematic if you have multiple users.
To keep your server stateless, a workaround is to avoid the singleton pattern and create a new Client(sometoken) in the endpoint.
This is not an optimal solution: it recreates the client on each request and basically erases the cache. But it solves the caching and authorization concerns when you have multiple users.
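A minimal sketch of that stateless variant, reusing the import paths from Client.js above (the createClient name is just an example, and setContext may need the .cjs.js entry file like the other imports, depending on your bundler):
import fetch from 'node-fetch';
import { setContext } from '@apollo/client/link/context'; // assumption: may need the .cjs.js entry like the imports above
import { ApolloClient, HttpLink } from '@apollo/client/core/core.cjs.js';
import { InMemoryCache } from '@apollo/client/cache/cache.cjs.js';

// Builds a fresh, request-scoped client so no cache entries or auth headers leak between users.
export const createClient = (sometoken) => {
  const link = new HttpLink({ uri: 'http://localhost:4000/graphql', fetch });
  const authLink = setContext((_, { headers }) => ({
    headers: { ...headers, authorization: `Bearer ${sometoken}` }
  }));
  return new ApolloClient({
    credentials: 'include',
    link: authLink.concat(link),
    cache: new InMemoryCache()
  });
};
An endpoint would then call createClient(sometoken) at the top of its handler instead of importing the singleton.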
I have a fairly simple Node app using AWS AppSync. I am able to run queries and mutations successfully, but I've recently found that if I run a query twice I get the same response, even when I know that the back-end data has changed. In this particular case the query is backed by a Lambda, and in digging into it I've discovered that the query doesn't seem to be sent out on the network, because the Lambda is not triggered each time the query runs, just the first time. If I use the console to simulate my query then everything runs fine. If I restart my app, then the first time a query runs it works fine, but successive queries again just return the same value each time.
Here are some part of my code:
client.query({
query: gql`
query GetAbc($cId: String!) {
getAbc(cId: $cId) {
id
name
cs
}
}`,
options: {
fetchPolicy: 'no-cache'
},
variables: {
cid: event.cid
}
})
.then((data) => {
// same data every time
})
Edit: trying other fetch policies like network-only makes no visible difference.
Here is how I set up the client, not super clean but it seems to work:
const makeAWSAppSyncClient = (credentials) => {
return Promise.resolve(
new AWSAppSyncClient({
url: 'lalala',
region: 'us-west-2',
auth: {
type: 'AWS_IAM',
credentials: () => {
return credentials
}
},
disableOffline: true
})
)
}
getRemoteCredentials()
.then((credentials) => {
return makeAWSAppSyncClient(credentials)
})
.then((client) => {
return client.hydrated()
})
.then((client) => {
// client is good to use
})
getRemoteCredentials is a method to turn an IoT authentication into normal IAM credentials which can be used with other AWS SDKs. This is working (because I wouldn't get as far as I do if not).
My issue seems very similar to this one: "GraphQL Query Runs Sucessfully One Time and Fails To Run Again using Apollo and AWS AppSync". I'm running in a Node environment (rather than React) but it is essentially the same issue.
I don't think this is relevant but for completeness I should mention I have tried both with and without the setup code from the docs. This appears to make no difference (except annoying logging, see below) but here it is:
global.WebSocket = require('ws')
global.window = global.window || {
setTimeout: setTimeout,
clearTimeout: clearTimeout,
WebSocket: global.WebSocket,
ArrayBuffer: global.ArrayBuffer,
addEventListener: function () { },
navigator: { onLine: true }
}
global.localStorage = {
store: {},
getItem: function (key) {
return this.store[key]
},
setItem: function (key, value) {
this.store[key] = value
},
removeItem: function (key) {
delete this.store[key]
}
};
require('es6-promise').polyfill()
require('isomorphic-fetch')
This is taken from: https://docs.aws.amazon.com/appsync/latest/devguide/building-a-client-app-javascript.html
With this code, and without disableOffline: true in the client setup, I see this line spewed continuously on the console:
redux-persist asyncLocalStorage requires a global localStorage object.
Either use a different storage backend or if this is a universal redux
application you probably should conditionally persist like so:
https://gist.github.com/rt2zz/ac9eb396793f95ff3c3b
This makes no apparent difference to this issue however.
Update: my dependencies from package.json, I have upgraded these during testing so my yarn.lock contains more recent revisions than listed here. Nevertheless: https://gist.github.com/macbutch/a319a2a7059adc3f68b9f9627598a8ca
Update #2: I have also confirmed from CloudWatch logs that the query is only being run once; I have a mutation running regularly on a timer that is successfully invoked and visible in CloudWatch. That is working as I'd expect but the query is not.
Update #3: I have debugged into the AppSync/Apollo code and can see that my fetchPolicy is being changed to 'cache-first' by this code in apollo-client/core/QueryManager.js (comments mine):
QueryManager.prototype.fetchQuery = function (queryId, options, fetchType, fetchMoreForQueryId) {
var _this = this;
// Next line changes options.fetchPolicy to 'cache-first'
var _a = options.variables, variables = _a === void 0 ? {} : _a, _b = options.metadata, metadata = _b === void 0 ? null : _b, _c = options.fetchPolicy, fetchPolicy = _c === void 0 ? 'cache-first' : _c;
var cache = this.dataStore.getCache();
var query = cache.transformDocument(options.query);
var storeResult;
var needToFetch = fetchPolicy === 'network-only' || fetchPolicy === 'no-cache';
// needToFetch is false (because fetchPolicy is 'cache-first')
if (fetchType !== FetchType.refetch &&
fetchPolicy !== 'network-only' &&
fetchPolicy !== 'no-cache') {
// so we come through this branch
var _d = this.dataStore.getCache().diff({
query: query,
variables: variables,
returnPartialData: true,
optimistic: false,
}), complete = _d.complete, result = _d.result;
// here complete is true, result is from the cache
needToFetch = !complete || fetchPolicy === 'cache-and-network';
// needToFetch is still false
storeResult = result;
}
// skipping some stuff
...
if (shouldFetch) { // shouldFetch is still false so this doesn't execute
var networkResult = this.fetchRequest({
requestId: requestId,
queryId: queryId,
document: query,
options: options,
fetchMoreForQueryId: fetchMoreForQueryId,
}
// resolve with data from cache
return Promise.resolve({ data: storeResult });
If I use my debugger to change the value of shouldFetch to true, then at least I see a network request go out and my Lambda executes. I guess I need to unpack what the line that is changing my fetchPolicy is doing.
OK I found the issue. Here's an abbreviated version of the code from my question:
client.query({
query: gql`...`,
options: {
fetchPolicy: 'no-cache'
},
variables: { ... }
})
It's a little bit easier to see what is wrong here. This is what it should be:
client.query({
query: gql`...`,
fetchPolicy: 'network-only',
variables: { ... }
})
Two issues in my original:
fetchPolicy: 'no-cache' does not seem to work here (I get an empty response)
putting the fetchPolicy inside an options object means it is silently ignored; it should be a top-level property of the query call
The graphql client specifies options differently and we were switching between the two.
Set the query fetch-policy to 'network-only' when running in an AWS Lambda function.
I recommend using the overrides for WebSocket, window, and localStorage, since these objects aren't available within a Lambda function. The setup I typically use for Node.js apps in Lambda looks like the following.
'use strict';
// CONFIG
const AppSync = {
"graphqlEndpoint": "...",
"region": "...",
"authenticationType": "...",
// auth-specific keys
};
// POLYFILLS
global.WebSocket = require('ws');
global.window = global.window || {
setTimeout: setTimeout,
clearTimeout: clearTimeout,
WebSocket: global.WebSocket,
ArrayBuffer: global.ArrayBuffer,
addEventListener: function () { },
navigator: { onLine: true }
};
global.localStorage = {
store: {},
getItem: function (key) {
return this.store[key]
},
setItem: function (key, value) {
this.store[key] = value
},
removeItem: function (key) {
delete this.store[key]
}
};
require('es6-promise').polyfill();
require('isomorphic-fetch');
// Require AppSync module
const AUTH_TYPE = require('aws-appsync/lib/link/auth-link').AUTH_TYPE;
const AWSAppSyncClient = require('aws-appsync').default;
// INIT
// Set up AppSync client
const client = new AWSAppSyncClient({
url: AppSync.graphqlEndpoint,
region: AppSync.region,
auth: {
type: AppSync.authenticationType,
apiKey: AppSync.apiKey
}
});
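With that client in place, a handler sketch that forces every invocation to hit the network instead of the in-memory cache (the query, field names, and event shape are placeholders borrowed from the question above):
const gql = require('graphql-tag');

exports.handler = async (event) => {
  // Wait for the AppSync client to finish its internal setup before querying.
  const appSyncClient = await client.hydrated();

  const result = await appSyncClient.query({
    query: gql`
      query GetAbc($cId: String!) {
        getAbc(cId: $cId) { id name cs }
      }
    `,
    variables: { cId: event.cId },
    fetchPolicy: 'network-only' // bypass the Apollo cache on every invocation
  });

  return result.data;
};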
There are two options to enable/disable caching with AppSyncClient/ApolloClient: per query and/or when initializing the client.
Client Config:
client = new AWSAppSyncClient(
{
url: 'https://myurl/graphql',
region: 'my-aws-region',
auth: {
type: AUTH_TYPE.AWS_MY_AUTH_TYPE,
credentials: await getMyAWSCredentialsOrToken()
},
disableOffline: true
},
{
cache: new InMemoryCache(),
defaultOptions: {
watchQuery: {
fetchPolicy: 'no-cache', // <-- HERE: check the apollo fetch policy options
errorPolicy: 'ignore'
},
query: {
fetchPolicy: 'no-cache',
errorPolicy: 'all'
}
}
}
);
Alternative: Query Option:
export default graphql(gql`query { ... }`, {
options: { fetchPolicy: 'cache-and-network' },
})(MyComponent);
Valid fetchPolicy values are:
cache-first: This is the default value where we always try reading data from your cache first. If all the data needed to fulfill your query is in the cache then that data will be returned. Apollo will only fetch from the network if a cached result is not available. This fetch policy aims to minimize the number of network requests sent when rendering your component.
cache-and-network: This fetch policy will have Apollo first trying to read data from your cache. If all the data needed to fulfill your query is in the cache then that data will be returned. However, regardless of whether or not the full data is in your cache this fetchPolicy will always execute query with the network interface unlike cache-first which will only execute your query if the query data is not in your cache. This fetch policy optimizes for users getting a quick response while also trying to keep cached data consistent with your server data at the cost of extra network requests.
network-only: This fetch policy will never return you initial data from the cache. Instead it will always make a request using your network interface to the server. This fetch policy optimizes for data consistency with the server, but at the cost of an instant response to the user when one is available.
cache-only: This fetch policy will never execute a query using your network interface. Instead it will always try reading from the cache. If the data for your query does not exist in the cache then an error will be thrown. This fetch policy allows you to only interact with data in your local client cache without making any network requests which keeps your component fast, but means your local data might not be consistent with what is on the server. If you are interested in only interacting with data in your Apollo Client cache also be sure to look at the readQuery() and readFragment() methods available to you on your ApolloClient instance.
no-cache: This fetch policy will never return your initial data from the cache. Instead it will always make a request using your network interface to the server. Unlike the network-only policy, it also will not write any data to the cache after the query completes.
Copied from: https://www.apollographql.com/docs/react/api/react-hoc/#graphql-options-for-queries