I have a Next.js app that runs on AWS Amplify. I built a basic REST API by adding a pages/api directory to my project. The endpoint returns a 200 and some test JSON data, and this works fine when I run the project locally. I deployed to AWS Amplify; the build detects that a Next API is present, so it provisions a Lambda and configures the CloudFront behavior for /api/* routes to point to the Lambda function. When I hit the API, CloudFront returns a 503 error:
503 ERROR
The request could not be satisfied.
The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront)
Request ID: YnQI9alkoBXwkmkkpXwa29zqXaOT06VCXiBZWJI6xQVkhQ8MElB2bQ==
It doesn't appear that CloudFront is even invoking the Lambda, as I am unable to see any logs in CloudWatch. I've tried to debug by building a test that passes a mock CloudFront event request to the Lambda, but I am unable to get my API to execute successfully.
It seems that Amplify/Next provides a lot of boilerplate code for supporting Next API routes, so I'm not sure where to focus my debugging efforts.
Has anyone run into this issue before? Any guidance or suggestions would be super helpful!
I had a similar problem, but I'm using Serverless with the @sls-next/serverless-component component, which at the moment has no support for Next.js 12.
In my package.json I pinned Next.js to the most recent 11.x release, in this case 11.1.4.
// package.json
{
  "name": "next-aws",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "^11.1.4",
    "react": "17.0.2",
    "react-dom": "17.0.2"
  },
  "devDependencies": {
    "@types/node": "^14.14.37",
    "@types/react": "^17.0.3",
    "typescript": "^4.2.3"
  }
}
Related
There is a monorepo for development with two folders: client and api. Developing on localhost works fine, but the problem, of course, is on AWS. My whole setup is in next.config.js, which is
// next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        source: '/api/:slug*',
        destination: `${process.env.API_URL}/api/:slug*`,
      },
    ]
  },
}
but this is not working on AWS Amplify. I suspect more settings are needed in Rewrites and redirects or in AWS CloudFront, but I don't have any clue. Do you have some experience with that?
Error:
403 Bad request. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner. If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
The problem was in the client folder, which had no pages/api directory with any file in it, so AWS CloudFront couldn't detect the directory for some internal settings. Before the fix, the CloudFront distribution had no 'api/*' path pattern; now it does. :)
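For reference, even a stub route appears to be enough for the build to detect API routes and provision the 'api/*' behavior. A minimal sketch, where pages/api/health.js and its contents are just a hypothetical placeholder:

// pages/api/health.js -- hypothetical stub so the Amplify build
// detects API routes and creates the 'api/*' CloudFront behavior
export default function handler(req, res) {
  res.status(200).json({ status: 'ok' });
}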
I can't use Apollo Studio after migrating from GraphQL Playground. When I try to run on localhost, it redirects me to the Apollo Studio Sandbox at https://studio.apollographql.com/sandbox?endpoint=http%3A%2F%2Flocalhost%3A5018%2Fgraphql, which reports: Unable to connect to localhost.
Please help me solve this.
Add CORS configuration options to control the server's CORS behavior:
const server = new ApolloServer({
  cors: {
    origin: 'https://studio.apollographql.com',
    credentials: true,
  },
  typeDefs,
  resolvers,
});
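Note that the original post mentions apollo-server-express; there the cors option is passed to applyMiddleware rather than to the ApolloServer constructor. A minimal sketch, assuming Apollo Server 3 (the trivial schema is just to make it self-contained):

const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');

// trivial schema so the sketch runs on its own
const typeDefs = gql`type Query { hello: String }`;
const resolvers = { Query: { hello: () => 'world' } };

async function start() {
  const app = express();
  const server = new ApolloServer({ typeDefs, resolvers });
  await server.start(); // required before applyMiddleware in Apollo Server 3

  // cors here applies only to the GraphQL endpoint
  server.applyMiddleware({
    app,
    cors: {
      origin: 'https://studio.apollographql.com',
      credentials: true,
    },
  });

  app.listen(4000, () =>
    console.log('GraphQL at http://localhost:4000/graphql')
  );
}

start();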
Update
I was able to solve my problem. I had added the helmet middleware to Express and just needed to update the contentSecurityPolicy setting.
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import config from './config'; // app-specific config (supplies graphqlPath)

export default async (app: express.Application) => {
  app.use(config.graphqlPath, express.json());
  app.use(cors());
  app.use(
    helmet({
      // disable CSP outside production so the hosted GraphQL tooling can load
      contentSecurityPolicy:
        process.env.NODE_ENV === 'production' ? undefined : false,
    })
  );
};
Not sure if that helps, since there weren't a lot of details about the environment in the original post, but maybe this can help someone else in the future.
Original Post
I'm having the same issue, only with Apollo Sandbox. I just get a page stating that I appear to be offline. I checked the console, and there are a number of CORS errors.
I also attempted to switch to the GraphQL Playground as a plugin. It displayed the initial loading screen, but never progressed past that point. I checked the console and also saw similar CORS errors.
I'm using apollo-server-express. I've created Apollo servers in the past and have never run into this while trying to run tools locally.
Apollo now supports an embedded version of Apollo Sandbox and Apollo Explorer that you can host on your Apollo Server endpoint URLs. This removes the need to whitelist the Apollo Studio endpoint in your CORS configuration to use our Explorer; you can use the Explorer right on your server endpoint (see the sketch after the two options below).
For local development endpoints, pass embed: true to the ApolloServerPluginLandingPageLocalDefault plugin in your Apollo Server config. See more details here.
For production endpoints, pass a graphRef and embed: true to the ApolloServerPluginLandingPageProductionDefault plugin in your Apollo Server config. See more details here.
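Putting the two together, a minimal sketch using the landing-page plugins from apollo-server-core; the graph ref and trivial schema are placeholders:

const { ApolloServer, gql } = require('apollo-server');
const {
  ApolloServerPluginLandingPageLocalDefault,
  ApolloServerPluginLandingPageProductionDefault,
} = require('apollo-server-core');

// trivial schema so the sketch runs on its own
const typeDefs = gql`type Query { hello: String }`;
const resolvers = { Query: { hello: () => 'world' } };

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    process.env.NODE_ENV === 'production'
      ? // embedded Explorer on the production endpoint
        ApolloServerPluginLandingPageProductionDefault({
          graphRef: 'my-graph@current', // placeholder: your own graph ref
          embed: true,
        })
      : // embedded Sandbox for local development
        ApolloServerPluginLandingPageLocalDefault({ embed: true }),
  ],
});

server.listen().then(({ url }) => console.log(`Server ready at ${url}`));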
I have a Django app deployed on Heroku which was working previously; however, after adding the Edge add-on, which serves your static files from Amazon CloudFront for caching, all of my POST requests are getting 403 errors.
I do pass the CSRF token correctly, as my POST requests still work when not served from CloudFront, but through CloudFront I get an error that CSRF verification failed.
It also mentions that the Referer header is missing; however, I can see Referer specified in the request headers in the network tab of the developer tools.
Any idea what I am missing here?
CloudFront removes the Referer header before sending the request to the origin. The following link specifies how each header is treated: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-custom-headers-behavior
Using the AWS CLI, I had to configure my CloudFront distribution to forward the Referer header as well as the csrftoken cookie, since cookies aren't forwarded by default either. Instructions can be found here:
https://devcenter.heroku.com/articles/edge#cloudfront-configuration
The ForwardedValues section of my config looked like this:
"ForwardedValues": {
"QueryString": false,
"Cookies": {
"Forward": "whitelist",
"WhitelistedNames": {
"Quantity": 2,
"Items": [
"_app_session",
"csrftoken",
]
}
},
"Headers": {
"Quantity": 2,
"Items": [
"Origin",
"Referer"
]
},
"QueryStringCacheKeys": {
"Quantity": 0
}
}
I also had to update my Django settings file to include
CSRF_TRUSTED_ORIGINS = [<CLOUDFRONT_URL>]
Note: Although the above instructions solved my issue in the original question, I did have to forward additional cookies such as "sessionid" and "messages" to get other features of the django app working.
The sendCommandToDevice endpoint seems to be unavailable. I tried sending the command directly from the Cloud Console on the device page. The notification at the bottom left said "Command sent to device", but the inspector in Chrome showed a 503 error. Time of error: 17:46:02 UTC, Saturday, 27 October 2018.
Request:
Request URL: https://cloudiot.clients6.google.com/v1/projects/<project-id>/locations/<location-name>/registries/<registry-name>/devices/<device-name>/:sendCommandToDevice?key=<removed>
Request Method: POST
Status Code: 503
Remote Address: 216.58.196.74:443
Referrer Policy: no-referrer-when-downgrade
Payload: {binaryData: "eyJ0ZXN0IjoxfQ==", subfolder: ""}
Response:
{
  "error": {
    "code": 503,
    "message": "The service is currently unavailable.",
    "status": "UNAVAILABLE"
  }
}
Also, an additional note: sendCommandToDevice is not available in the Node.js client library (34.0.0). I had to use API discovery to have the method available.
Are you still receiving the 503? I tested this just now and was able to receive messages successfully.
Also, regarding the lack of availability of the Commands features in the client library: Commands is still in beta, and those methods will be available in the client library once it is fully released.
This is probably due to one of the following:
- a bug in the Node library
- a bug in Google's RPC endpoint
- a lack of testing on Google's part

The problem is that subfolder MUST be specified, and MUST NOT be an empty string.
As I was using this in a Firebase function, I just use a 'firebase' subfolder for any commands being sent that do not have a specific subfolder:
const request = {
  name: `${registryName}/devices/${deviceId}`,
  binaryData: Buffer.from(JSON.stringify(commandMessage)).toString("base64"),
  // subfolder must be non-empty, or the call fails with a 503
  subfolder: 'firebase'
}
Here are the function's deps:
"dependencies": {
"firebase-admin": "^6.4.0",
"firebase-functions": "^2.1.0",
"fs-extra": "^7.0.0",
"googleapis": "^36.0.0",
},
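For completeness, a rough sketch of sending that request through the discovery-based googleapis client, assuming Application Default Credentials are configured and that registryName, deviceId, and commandMessage are as above:

const { google } = require('googleapis');

// Rough sketch: send the command via the discovery-based client,
// since the dedicated Node library lacked the method at the time.
async function sendCommand(registryName, deviceId, commandMessage) {
  // assumes Application Default Credentials are available
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const iot = google.cloudiot({ version: 'v1', auth });

  return iot.projects.locations.registries.devices.sendCommandToDevice({
    name: `${registryName}/devices/${deviceId}`,
    requestBody: {
      binaryData: Buffer.from(JSON.stringify(commandMessage)).toString('base64'),
      subfolder: 'firebase', // must be non-empty (see above)
    },
  });
}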
After switching from hosting my own Elasticsearch cluster to Amazon's Elasticsearch Service,
my Kibana dashboards (versions 4.0.2 and 4.1.2) won't load, and I'm receiving the following error in kibana.log:
{
  "name": "Kibana",
  "hostname": "logs.example.co",
  "pid": 8037,
  "level": 60,
  "err": {
    "message": "Not Found",
    "name": "Error",
    "stack": "Error: Not Found\n at respond (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:235:15)\n at checkRespForFailure (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:203:7)\n at HttpConnector.<anonymous> (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/connectors\/http.js:156:7)\n at IncomingMessage.bound (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/node_modules\/lodash-node\/modern\/internals\/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"
  },
  "msg": "",
  "time": "2015-10-14T20:48:40.169Z",
  "v": 0
}
Unfortunately, this error is not very helpful. I assume it's a wrapped HTTP 404, but a 404 for what?
How can I connect a Kibana install to Amazon's Elasticsearch Service?
Here are some things to keep in mind when using Amazon's Elasticsearch Service:
Modifications to the access policies take a non-deterministic amount of time to apply. After making policy changes, I've found it's good to wait at least 15 minutes after the status is no longer "processing".
It listens on port 80 for HTTP requests and not the standard port 9200. Be sure that your elasticsearch_url configuration directive reflects this, e.g.:
elasticsearch_url: "http://es.example.co:80"
It's very likely that your Kibana instance will not have the necessary permissions to create the index that it needs to show you a dashboard -- this is towards the root of the issue. Check out the indexes on the Elasticsearch domain and look for a line that matches the kibana_index config directive (e.g. via http://es.example.co/_cat/indices).
For instance, if the value of your kibana_index directive is .kibana-4 and you don't see a line like the following:
green open .kibana-4 1 1 3 2 30.3kb 17.2kb
then Kibana was not able to create the index it needs. If you go to the dashboard for the Elasticsearch service in Amazon and click on the Kibana link, it will likely create a .kibana-4 index for you.
You can then specify this index in your existing Kibana's configuration, which leads to the next point.
Your existing Kibana install will likely be required to authenticate via signed request headers; otherwise Amazon's Elasticsearch Service will reject its requests with an error like:
Kibana: Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header.
You can configure this in Kibana; see the general documentation on signing API requests for more help.
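As a rough sketch of what a signed request looks like from Node, here is an example using the third-party aws4 package; the host, region, and credential sources are placeholders for your own setup:

// Hypothetical example: signing a request to an Amazon ES domain with
// SigV4 using the third-party `aws4` package.
const https = require('https');
const aws4 = require('aws4');

const opts = {
  host: 'es.example.co',  // placeholder: your Amazon ES endpoint
  path: '/_cat/indices',
  service: 'es',
  region: 'us-east-1',    // placeholder: your domain's region
};

// aws4.sign mutates opts, adding Authorization, X-Amz-Date, etc.
aws4.sign(opts, {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

https.get(opts, (res) => res.pipe(process.stdout));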
It's worth noting that the error messaging is reportedly better in Kibana 4.2, but as that's in beta and Amazon's Elasticsearch Service was very recently released, the above should be helpful in debugging.