My Next.js application works perfectly fine locally, but after pushing the frontend to AWS I'm getting a runtime error, "No Credentials", in the callback.js API route.
There is no build-time error, so I assume the aws-exports.js file is generated correctly during the build, but I don't know whether it is generated with the required details, like the API key, on AWS.
I'm using API-key authentication by default. I'm not using amplify add auth because I have a requirement to use custom auth. I know Amplify auth is the recommended way, but I still need to use my custom method.
I have already tried multiple suggestions, like disabling analytics (Analytics: { disabled: true }) as suggested in a couple of other discussions, but none of them worked for me. I've also rebuilt my project multiple times from scratch, reinstalling all the dependencies, but no luck.
callback.js API
import { API, graphqlOperation } from 'aws-amplify';
import { getAuth } from "../../../../src/graphql/queries"
import { createAuth } from "../../../../src/graphql/mutations"

export default async function callback(req, res) {
  const record = await API.graphql(graphqlOperation(getAuth, { emailId: "abc@gmail.com" }))
  res.status(200).json({ record });
}
aws-exports.js
/* eslint-disable */
// WARNING: DO NOT EDIT. This file is automatically generated by AWS Amplify. It will be overwritten.
const awsmobile = {
"aws_project_region": "us-east-1",
"aws_appsync_graphqlEndpoint": "https://dummyUrl.appsync-api.us-east-1.amazonaws.com/graphql",
"aws_appsync_region": "us-east-1",
"aws_appsync_authenticationType": "API_KEY",
"aws_appsync_apiKey": "da2-************"
};
export default awsmobile;
_app.js
import {Amplify} from 'aws-amplify';
import config from "../aws-exports"
Amplify.configure(config)
function MyApp({ Component, pageProps: { session, ...pageProps } }) {
  // App logic
}

export default MyApp;
GraphQL Schema
type Auth @model @auth(rules: [{ allow: public }]) {
  emailId: ID! @primaryKey
  name: String
  screen_name: String
  profile_img: String
  userSession: String
  tokenType: String
  accessToken: String
  accessSecret: String
  refreshToken: String
  accessScope: String
}
package.json
"dependencies": {
  "@emoji-mart/data": "^1.0.6",
  "@emoji-mart/react": "^1.0.1",
  "aes256": "^1.1.0",
  "aws-amplify": "^4.3.37",
  "emoji-mart": "^5.2.2",
  "formidable": "^2.0.1",
  "js-cookie": "^3.0.1",
  "next": "12.3.1",
  "react": "18.2.0",
  "react-datepicker": "^4.8.0",
  "react-dom": "18.2.0"
},
amplify.yml
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
Edit:
I've found out how the server-side flow works with Amplify and GraphQL; please refer to this page. On the server side, you need to pass the API key explicitly into the GraphQL request, as that page describes.
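For anyone landing here, this is the shape of that server-side call: a minimal sketch using plain fetch against the AppSync endpoint (the x-api-key header is how AppSync accepts API-key auth). The relative import paths and the Node 18+ global fetch are assumptions, and getAuth is assumed to be the raw query string generated in src/graphql/queries:

// Sketch only: adjust the relative paths to your project layout.
import awsExports from '../../../../src/aws-exports';
import { getAuth } from '../../../../src/graphql/queries';

export default async function callback(req, res) {
  const response = await fetch(awsExports.aws_appsync_graphqlEndpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Pass the API key explicitly; on the server there are no ambient credentials.
      'x-api-key': awsExports.aws_appsync_apiKey,
    },
    body: JSON.stringify({ query: getAuth, variables: { emailId: 'abc@gmail.com' } }),
  });
  const { data } = await response.json();
  res.status(200).json({ record: data });
}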
==========
I'm using AppSync too, but with pure AppSync directives, so let me provide the reference below. Please confirm your auth rule follows this form:
{ allow: public, provider: apiKey }
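Applied to the schema from the question, that would read as follows (a sketch; it just makes the apiKey provider explicit, which is also the default provider for allow: public):

type Auth @model @auth(rules: [{ allow: public, provider: apiKey }]) {
  # ...fields unchanged
}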
Related
I'm building a CDK pipeline that will update another CDK template.
This CDK template is a static frontend React app.
The backend uses an AWS Lambda, API Gateway, and a CloudFront distribution to host the site.
I want to put the API URLs in the config.json file, as I normally would if I were building it manually one service at a time.
The problem seems to be in the CDK pipeline stack, which builds the static-frontend stack.
When you initialize a new pipeline, it wants you to add shell steps first (npm i, cd into the correct folder, npm run build, etc.), which creates the distribution folder I need, as well as turning the whole thing into a CloudFormation template.
Then you can drop that into the different stages you want, e.g., test and prod.
However, I won't receive CfnOutputs until the stages are built, and the CfnOutputs hold the API URLs and other info I need to put into the config.json file (which was already built first, with empty values).
There is even an envFromCfnOutputs param to add to the initial CodeBuild pipeline, but since the stages are initialized/created later, TypeScript yells at me for referencing them before they exist. I understand why that errors, but I can't figure out a clever way to fix this issue.
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import * as pipelines from "aws-cdk-lib/pipelines";
import * as codecommit from "aws-cdk-lib/aws-codecommit";
import { Stages } from "./stages";
import { Stack, Stage } from "aws-cdk-lib";
interface PipelineStackProps extends cdk.StackProps {
env: {
account: string;
region: string;
stage: string;
};
}
export class PipelineStack extends cdk.Stack {
constructor(scope: Construct, id: string, props: PipelineStackProps) {
super(scope, id, props);
/************ Grab Repo ************/
const source = codecommit.Repository.fromRepositoryName(
this,
"PreCallbackSMSSolution",
"PreCallbackSMSSolution"
);
/************ Define Pipeline & Build ShellStep (for Frontend) ************/
const Pipeline = new pipelines.CodePipeline(this, "Pipeline", {
pipelineName: `CodePipeline`,
selfMutation: true,
crossAccountKeys: true,
synthCodeBuildDefaults: {
rolePolicy: [
// @desc Policy to allow CodeBuild to use CodeArtifact
// @external https://docs.aws.amazon.com/codeartifact/latest/ug/using-npm-packages-in-codebuild.html
new cdk.aws_iam.PolicyStatement({
actions: [
"codeartifact:GetAuthorizationToken",
"codeartifact:GetRepositoryEndpoint",
"codeartifact:ReadFromRepository",
],
resources: ["*"],
}),
new cdk.aws_iam.PolicyStatement({
actions: ["sts:GetServiceBearerToken"],
resources: ["*"],
conditions: {
StringEquals: {
"sts:AWSServiceName": "codeartifact.amazonaws.com",
},
},
}),
],
},
synth: new pipelines.ShellStep("Synth", {
input: pipelines.CodePipelineSource.codeCommit(source, "master"),
installCommands: [
"cd $CODEBUILD_SRC_DIR/deployment",
"npm install -g typescript",
"npm run co:login",
"npm i",
],
env: {
stage: props.env.stage,
},
envFromCfnOutputs: {
// TODO: cfn outputs need to go here!
// CcpUrlOutput: TestStage.CcpUrlOutput,
// loginUrlOutput: TestStage.LoginUrlOutput,
// regionOutput: TestStage.RegionOutput,
// apiOutput: TestStage.ApiOutput
},
commands: [
"cd $CODEBUILD_SRC_DIR/frontend",
"pwd",
"apt-get install jq -y",
"chmod +x ./generate-config.sh",
"npm i",
"npm run build-prod",
"pwd",
"cat ./src/config-prod.json",
"cd ../deployment",
"npx cdk synth",
],
primaryOutputDirectory: "$CODEBUILD_SRC_DIR/deployment/cdk.out", // $CODEBUILD_SRC_DIR = starts root path
}),
});
/************ Initialize Test Stack & Add Stage************/
const TestStage = new Stages(this, "TestStage", {
env: { account: "***********", region: "us-east-1", stage: "test" },
}); // Aspen Sandbox
Pipeline.addStage(TestStage);
/************ Initialize Prod Stack & Add Stage ************/
const ProdStage = new Stages(this, "ProdStage", {
env: { account: "***********", region: "us-east-1", stage: "prod" },
}); // Aspen Sandbox
Pipeline.addStage(ProdStage);
/************ Build Pipeline ************/
Pipeline.buildPipeline();
/************ Manual Approve Stage ************/
const ApproveStage = Pipeline.pipeline.addStage({
stageName: "PromoteToProd",
placement: {
justAfter: Pipeline.pipeline.stage("TestStage"),
},
});
ApproveStage.addAction(
new cdk.aws_codepipeline_actions.ManualApprovalAction({
actionName: "Approve",
additionalInformation: "Approve this deployment for production.",
})
);
}
/****/
}
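For reference, envFromCfnOutputs expects actual CfnOutput objects, and the CDK Pipelines docs consume stage outputs from a step attached to the stage itself rather than from the synth step. A minimal sketch of that shape, assuming Stages exposed its output as a hypothetical public property apiOutput:

// Sketch only: `apiOutput` would be a `public readonly apiOutput: cdk.CfnOutput`
// field set inside the Stages construct. This replaces the plain
// Pipeline.addStage(TestStage) call above.
Pipeline.addStage(TestStage, {
  post: [
    new pipelines.ShellStep("GenerateConfig", {
      envFromCfnOutputs: { API_URL: TestStage.apiOutput },
      commands: ['echo "API_URL is $API_URL"'],
    }),
  ],
});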
I started a bare Expo app with expo init called MyVideoApp. Then I created an AWS account and in the terminal ran:
npm install -g @aws-amplify/cli
amplify configure
This signed me into the console; I went through the default steps and created an IAM user in region eu-west-2 with username amplify-user, pasted in the accessKeyId & secretAccessKey, and set the profile name to amplify-user-profile.
cd ~/Documents/MyVideoApp/ && amplify init
? Enter a name for the project MyVideoApp
? Enter a name for the environment dev
? Choose your default editor: IntelliJ IDEA
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react-native
? Source Directory Path: /
? Distribution Directory Path: /
? Build Command: npm run-script build
? Start Command: npm run-script start
Using default provider awscloudformation
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use amplify-user-profile
Adding backend environment dev to AWS Amplify Console app: d37chh30hholq6
amplify push
At this point I had an amplify folder in my project directory and an S3 bucket called amplify-myvideoapp-dev-50540-deployment. I uploaded an image, icon_1.png, into the bucket and tried to download it from the app via a button click.
import React from 'react';
import { StyleSheet, Text, View, SafeAreaView, Button } from 'react-native';
import Amplify, { Storage } from 'aws-amplify';
import awsmobile from "./aws-exports";
Amplify.configure(awsmobile);
async function getImage() {
  try {
    // Storage.get resolves to a pre-signed URL for the object by default
    let data = await Storage.get('icon_1.jpg')
  } catch (err) {
    console.log(err)
  }
}
export default function App() {
return (
<SafeAreaView style={styles.container}>
<Text>Hello, World!</Text>
<Button title={"Click to Download!"} onPress={getImage}/>
</SafeAreaView>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
},
});
Output:
No credentials
[WARN] 18:54.93 AWSS3Provider - ensure credentials error, No Cognito Identity pool provided for unauthenticated access
...
So I set up (but maybe not correctly?) a user pool (my_first_pool) and an identity pool (myvidapp). This didn't help. Furthermore, when I go into my bucket and click Permissions -> Bucket Policy, it's just empty ... not sure if that's okay when only the owner is trying to access the bucket and its contents.
I don't know what's wrong or what else to try. I essentially just want to authenticate my backend so that anyone who git-clones this code can run it and access the bucket.
Edit: aws-exports.js
/* eslint-disable */
// WARNING: DO NOT EDIT. This file is automatically generated by AWS Amplify. It will be overwritten.
const awsmobile = {
"aws_project_region": "eu-west-2"
};
export default awsmobile;
Since you've indicated that you're okay with all of the files in the S3 bucket being publicly accessible, I would suggest the following:
Select the bucket in the AWS console (console.aws.amazon.com).
Under "Permissions", select "Block Public Access" and edit the settings by un-checking all of the options under and including "Block all public access", then save and confirm.
Go to the bucket policy, and paste in the following (Note: replace [YOUR_BUCKET_NAME_HERE] with amplify-myvideoapp-dev-50540-deployment first):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::[YOUR_BUCKET_NAME_HERE]/*"
]
}
]
}
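Once the policy is saved, you can sanity-check public access without any credentials; for example, from a browser console or Node 18+ (bucket name from the question, eu-west-2 region, assuming the object key is icon_1.png):

// Public objects are reachable at the bucket's virtual-hosted S3 URL.
const url = 'https://amplify-myvideoapp-dev-50540-deployment.s3.eu-west-2.amazonaws.com/icon_1.png';
const response = await fetch(url); // should be 200 with the image bytes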
I need to upload some files to S3 from a Next.js application. Since it runs server-side, I am under the impression that simply setting environment variables should work, but it doesn't. I know there are alternatives, like assigning a role to EC2, but I want to use an accessKeyID and secretKey.
This is my next.config.js
module.exports = {
env: {
//..others
AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
},
serverRuntimeConfig: {
//..others
AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY
}
}
This is my config/index.js
export default {
  //...others
  awsClientID: process.env.AWS_ACCESS_KEY_ID,
  awsClientSecret: process.env.AWS_SECRET_ACCESS_KEY
}
This is how I use in my code:
import AWS from 'aws-sdk'
import config from '../config'
AWS.config.update({
accessKeyId: config.awsClientID,
secretAccessKey: config.awsClientSecret,
});
const S3 = new AWS.S3()
const params = {
Bucket: "bucketName",
Key: "some key",
Body: fileObject,
ContentType: fileObject.type,
ACL: 'public-read'
}
await S3.upload(params).promise()
I am getting this error:
Unhandled Rejection (CredentialsError): Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
If I hard code the credentials in code, it works fine.
How can I make it work correctly?
Looks like the Vercel docs are currently outdated (they show AWS SDK v2 instead of v3). You can pass a credentials object to the AWS service client when you instantiate it. Use environment variable names that are not reserved, for example by prefixing them with the name of your app.
.env.local
YOUR_APP_AWS_ACCESS_KEY_ID=[your key]
YOUR_APP_AWS_SECRET_ACCESS_KEY=[your secret]
Add these env variables to your Vercel deployment settings (or Netlify, etc) and pass them in when you start up your AWS service client.
import { S3Client } from '@aws-sdk/client-s3'
...
const s3 = new S3Client({
region: 'us-east-1',
credentials: {
accessKeyId: process.env.TRENDZY_AWS_ACCESS_KEY_ID ?? '',
secretAccessKey: process.env.TRENDZY_AWS_SECRET_ACCESS_KEY ?? '',
},
})
(Note: the ?? '' fallback is an undefined check so TypeScript stays happy.)
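From there the client is used as usual; for example, a hypothetical upload (bucket and key names are placeholders):

import { PutObjectCommand } from '@aws-sdk/client-s3'

// Sketch only: assumes the `s3` client from above is in scope.
await s3.send(
  new PutObjectCommand({
    Bucket: 'your-bucket-name', // placeholder
    Key: 'uploads/example.txt', // placeholder
    Body: 'hello world',
  })
)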
Are you possibly hosting this app via Vercel?
As per the Vercel docs, some env variables are reserved by Vercel:
https://vercel.com/docs/concepts/projects/environment-variables#reserved-environment-variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Maybe that's the reason why it is not picking up those env vars.
I was able to work around this by adding my own custom env variables to .env.local and then reading those variables:
AWS.config.update({
'region': 'us-east-1',
'credentials': {
'accessKeyId': process.env.MY_AWS_ACCESS_KEY,
'secretAccessKey': process.env.MY_AWS_SECRET_KEY
}
});
As a last step, you need to add these in the Vercel UI.
Obviously this is not an ideal solution, and it's not recommended by AWS:
https://vercel.com/support/articles/how-can-i-use-aws-sdk-environment-variables-on-vercel
If I'm not mistaken, you want to make AWS_ACCESS_KEY_ID a runtime variable as well. Currently it is a build-time variable, which won't be accessible in your Node application.
// replace this
env: {
//..others
AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
},
// with this
module.exports = {
serverRuntimeConfig: {
//..others
AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
}
}
Reference: https://nextjs.org/docs/api-reference/next.config.js/environment-variables
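One caveat worth noting: values placed in serverRuntimeConfig are not exposed through process.env at runtime; they are read via next/config. A sketch, assuming the config above:

import getConfig from 'next/config'
import AWS from 'aws-sdk'

// serverRuntimeConfig is only available in server-side code.
const { serverRuntimeConfig } = getConfig()

AWS.config.update({
  accessKeyId: serverRuntimeConfig.AWS_ACCESS_KEY_ID,
  secretAccessKey: serverRuntimeConfig.AWS_SECRET_ACCESS_KEY,
})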
I need to upload an executable file (wkhtmltopdf, to be exact) along with my function code in AWS Lambda. I'm using the Serverless Framework. I tried different ways, but the executable is not uploaded. The function works well when the code is zipped and uploaded via the AWS dashboard.
Given below is the directory structure of the function that needs to be uploaded:
node_modules
index.js
wkhtmltopdf
This is my serverless.yml
service: consult-payment-api
frameworkVersion: ">=1.1.0 <2.0.0"
package:
  individually: true
provider:
  name: aws
  region: us-west-2
  runtime: nodejs8.10
  stage: dev
  timeout: 300
functions:
  UserPackageCharge:
    handler: payment/module/chargePackage.create
    package:
      include:
        - packages/wkhtmltopdf
    events:
      - http:
          path: payment/module/package
          method: post
          cors:
            origin: '*'
            headers:
              - Content-Type
              - X-Amz-Date
              - Authorization
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
              - My-Custom-Header
This is my index.js (handler)
var wkhtmltopdf = require('wkhtmltopdf');
var MemoryStream = require('memorystream');
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];
exports.handler = function(event, context) {
  var memStream = new MemoryStream();
  // Buffer.from replaces the deprecated new Buffer(...) constructor
  var html_utf8 = Buffer.from(event.html_base64, 'base64').toString('utf8');
  wkhtmltopdf(html_utf8, event.options, function(code, signal) {
    context.done(null, { pdf_base64: memStream.read().toString('base64') });
  }).pipe(memStream);
};
But I still get the error: Error: /bin/bash: wkhtmltopdf: command not found
How do I get this working with Serverless?
I did get a version working.
Here's what I did:
1) Created a package.json and added:
"dependencies": {
"wkhtmltopdf": "^0.3.4",
"memorystream": "^0.3.1"
},
2) Ran npm install
3) Added the wkhtmltopdf binary to the project directory
4) Added this in serverless.yml:
package:
  include:
    - wkhtmltopdf
5) Added this in the lambda:
var wkhtmltopdf = require('wkhtmltopdf');
var MemoryStream = require('memorystream');
That's about it. Hope it helps.
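One extra thing worth checking if the binary is packaged but still reported as not found: the executable bit has to survive zipping, so set it before deploying (an assumption worth verifying in your case):
chmod +x wkhtmltopdf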
Well, I can suggest an approach for Python, as that's what I've implemented recently in my project. I keep all my Lambda scripts and dependent Python scripts in one zip and put those on my bastion server. To make them easier to execute and upload, I've implemented a cattle+click CLI which ensures the correct versions of the zips are picked up and uploaded to an S3 bucket location. When the Lambda is triggered by an S3 event, it looks for the required parameter file or input file in the repository (which is nothing but an S3 bucket).
I am unable to deploy my Ember application to Firebase. I can only see the welcome page of Firebase Hosting:
You're seeing this because you've successfully setup Firebase Hosting. Now it's time to go build something extraordinary!
I have installed the EmberFire add-on, as well as the Firebase CLI tools.
My config file looks like this:
module.exports = function(environment) {
var ENV = {
modulePrefix: 'sample',
environment: environment,
rootURL: '/',
locationType: 'auto',
firebase : {
apiKey: 'xxxxxx',
authDomain: 'xxxxx',
databaseURL: 'xxxx',
storageBucket: 'xxxxx',
messagingSenderId: 'xxxxx'
},
EmberENV: {
FEATURES: {
// Here you can enable experimental features on an ember canary build
// e.g. 'with-controller': true
}
},
APP: {
// Here you can pass flags/options to your application instance
// when it is created
}
};
if (environment === 'development') {
  // ENV.APP.LOG_RESOLVER = true;
  ENV.APP.LOG_ACTIVE_GENERATION = true;
  ENV.APP.LOG_TRANSITIONS = true;
  ENV.APP.LOG_TRANSITIONS_INTERNAL = true;
  ENV.APP.LOG_VIEW_LOOKUPS = true;
}

return ENV;
};
firebase.json:
{
"database": {
"rules": "database.rules.json"
},
"hosting": {
"public": "dist",
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
]
}
}
I built the app and deployed it using the following commands:
ember build --prod
firebase login
firebase init
firebase deploy
Thanks in advance :-)
When you initialise your Ember.js app with the firebase init command for the first time, you will be prompted:
? File dist/index.html already exists. Overwrite? (y/N)
Respond with No. Responding with Yes lets the default Firebase Hosting welcome page overwrite your Ember app's index.html file, which is why you are still greeted with the Firebase Hosting welcome page.
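If you already answered Yes at some point, re-running the build regenerates dist/index.html, after which you can deploy again (same commands as in the question):
ember build --prod
firebase deploy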