I'm trying to run Storybook on Amplify in a Next.js project.
Amplify sets the framework to Next.js - SSR; I went ahead and changed it to React, but the app platform was kept as it was, Web Compute.
I updated the amplify.yaml file as follows:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - nvm install
        - nvm use
        - yarn install --immutable
    build:
      commands:
        - npm run build-storybook
  artifacts:
    baseDirectory: storybook-static
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
But the build always fails with this error:
2023-02-15T09:45:08.672Z [WARNING]: info
2023-02-15T09:45:08.672Z [WARNING]: => Output directory: /codebuild/output/src160065809/src/dc-extension-get-started/storybook-static
2023-02-15T09:45:08.737Z [INFO]: # Completed phase: build
2023-02-15T09:45:08.739Z [INFO]: ## Build completed successfully
2023-02-15T09:45:08.740Z [INFO]: # Starting caching...
2023-02-15T09:45:08.750Z [INFO]: # Creating cache artifact...
2023-02-15T09:45:21.174Z [INFO]: # Created cache artifact
2023-02-15T09:45:21.296Z [INFO]: # Uploading cache artifact...
2023-02-15T09:45:24.843Z [INFO]: # Uploaded cache artifact
2023-02-15T09:45:24.919Z [INFO]: # Caching completed
2023-02-15T09:45:24.922Z [INFO]: Setting NEXT_PRIVATE_STANDALONE=true to produce .next/standalone directory
2023-02-15T09:45:24.926Z [INFO]: # No custom headers found.
2023-02-15T09:45:24.930Z [ERROR]: !!! CustomerError: Standalone directory not found in /codebuild/output/src160065809/src/dc-extension-get-started/storybook-static/standalone. Please enable output standalone on your next.config.js file or set NEXT_PRIVATE_STANDALONE=true. https://nextjs.org/docs/advanced-features/output-file-tracing#automatically-copying-traced-files
2023-02-15T09:45:24.930Z [INFO]: # Starting environment caching...
2023-02-15T09:45:24.930Z [INFO]: # Uploading environment cache artifact...
2023-02-15T09:45:24.982Z [INFO]: # Uploaded environment cache artifact
2023-02-15T09:45:24.982Z [INFO]: # Environment caching completed
Terminating logging...
It looks like Amplify still tries to treat this as a Next.js app. How do I get Storybook running for a Next.js app on Amplify?
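For context on the error itself: with the app platform set to Web Compute, Amplify's deploy step runs its Next.js packaging and looks for a .next/standalone directory in the artifacts regardless of the framework value, which is why the build phase succeeds but the deploy fails. A minimal sketch, assuming the goal is to serve the static Storybook output, is to switch the app platform back to static hosting with the AWS CLI (<APP_ID> is a placeholder):

```sh
# Assumption: storybook-static is a plain static site, so the app platform
# should be WEB (static hosting) rather than WEB_COMPUTE.
aws amplify update-app --app-id <APP_ID> --platform WEB
```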
I deploy the Next.js project by connecting the GitHub repo. Provisioning passes and the backend builds, but the frontend fails to build. Here is the log with the error:
# Starting phase: preBuild
# Executing command: yarn install
2021-12-13T06:55:51.568Z [INFO]: yarn install v1.22.0
2021-12-13T06:55:51.620Z [INFO]: [1/4] Resolving packages...
2021-12-13T06:55:51.815Z [INFO]: [2/4] Fetching packages...
2021-12-13T06:56:02.529Z [WARNING]: error next@12.0.7: The engine "node" is incompatible with this module. Expected version ">=12.22.0". Got "12.21.0"
2021-12-13T06:56:02.537Z [WARNING]: error Found incompatible module.
2021-12-13T06:56:02.538Z [INFO]: info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
2021-12-13T06:56:02.550Z [ERROR]: !!! Build failed
2021-12-13T06:56:02.552Z [ERROR]: !!! Non-Zero Exit Code detected
2021-12-13T06:56:02.552Z [INFO]: # Starting environment caching...
2021-12-13T06:56:02.552Z [INFO]: # Environment caching completed
Terminating logging...
The build settings:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
I really do not know how to fix this. Is it an AWS issue, or does my project have outdated packages? Any help would be appreciated 😄.
To fix this issue, you have to tell Amplify which version of Node.js to use.
Go to:
Build settings
At the bottom is Edit Build image settings
Click Add package override
Set your Node.js version
This worked for me.
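If you would rather pin the version in the repository than in the console, an alternative sketch is to select a Node version with nvm in the preBuild phase of the build spec; nvm ships in Amplify's default build image (Node 14 below is only an example, any version satisfying next's ">=12.22.0" engine range should do):

```yaml
frontend:
  phases:
    preBuild:
      commands:
        # Assumption: Node 14 meets the engine requirements of your
        # dependencies; nvm is preinstalled in the Amplify build image.
        - nvm install 14
        - nvm use 14
        - yarn install
```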
I'm trying to deploy my nuxt app via AWS Amplify. Here is my build config:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm install
    build:
      commands:
        - npm run generate
  artifacts:
    # IMPORTANT - Please verify your build output directory
    baseDirectory: dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
Once the deploy reaches the build phase, the console gives the following output:
2021-07-05T18:13:35.839Z [INFO]: # Completed phase: preBuild
# Starting phase: build
2021-07-05T18:13:35.840Z [INFO]: # Executing command: npm run generate
2021-07-05T18:13:36.003Z [INFO]: > eagle-nuxt@1.0.0 generate /codebuild/output/src824807188/src/eagle-nuxt
> nuxt generate
2021-07-05T18:14:03.863Z [WARNING]: [error] /articles
connect ECONNREFUSED 127.0.0.1:3000
I've deployed a Nuxt app via Amplify before, and this is my first time seeing this. Any help would be appreciated.
Dependencies:
"dependencies": {
  "@nuxtjs/axios": "^5.13.1",
  "core-js": "^3.15.2",
  "marked": "^2.0.7",
  "moment": "^2.29.1",
  "nuxt": "^2.15.7"
},
"devDependencies": {
  "@nuxtjs/tailwindcss": "^4.0.3",
  "autoprefixer": "^9.8.6",
  "postcss": "^7.0.35",
  "tailwindcss": "npm:@tailwindcss/postcss7-compat@^2.1.2"
}
Thanks in advance
I figured out my issue! While developing locally, I was using a .env file. I had to add the environment variable to my build through Amplify. Documentation: https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html#access-env-vars
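Note that variables added in the Amplify console are available to the build commands as environment variables but are not written to a .env file automatically; a common sketch (assuming the variable is named API_URL, adjust for your own) is to emit it during preBuild:

```yaml
frontend:
  phases:
    preBuild:
      commands:
        - npm install
        # Assumption: API_URL is set under App settings > Environment
        # variables; write it to .env so `nuxt generate` reads it the
        # same way it does locally.
        - echo "API_URL=$API_URL" >> .env
```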
tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so sorry if I'm providing irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want a CI/CD set up to automatically run tests and abort on failure or if a certain quality gate isn't passed. If everything went fine, then the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've gotten working properly.
I've yet to figure out how to deploy the API (even manually) so that I can send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyway, as it is, the CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      #####  media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And since you might also be interested in the build log related to the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the safe side. If you need any specific information, please let me know!)
I have an Elastic Beanstalk environment set up (screenshot of its properties omitted).
I also have an EC2 instance up and running (screenshot omitted).
I also use a VPC.
What I've tried
I tried adding a bunch of entries into the inbound rules of my EC2's Security Group:
I started with 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally All traffic. None of it worked, so the problem seems to be somewhere else.
I also tried changing some properties in the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to redirect where the SonarQube server is expected to be hosted (I thought maybe I was supposed to point it to the EC2's public IPv4 address or its attached public DNS). Somehow, the failing build log kept displaying the failure to connect to localhost:9000 when trying to reach the SonarQube server.
I've figured it out.
Somehow, SonarQube reports having started properly even when that is not true. Thus, when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. One has to go into the logs folder of the SonarQube installation and read the sonar.log file to discover that something actually went wrong and that the server was stopped.
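A quick way to surface that from within CodeBuild, sketched against the folder layout of this buildspec, is to cat the log files right after the start command (the `|| true` keeps a missing log file from failing the phase):

```yaml
# Hypothetical debugging step, run from <sonarqube dir>/bin/linux-x86-64,
# i.e. immediately after the start command below.
- sh ./sonar.sh start
- cat ../../logs/sonar.log || true   # wrapper/startup log
- cat ../../logs/es.log || true      # embedded Elasticsearch log
```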
In my case, it reported an error that JDK11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then, I had to figure out that a new log file was now available to read: es.log. Printing that file in the console revealed that the latest Elasticsearch version (which the latest SonarQube server uses) does not allow itself to be run by a root user. Thus, I had to create a new user group and edit a configuration file so the server runs as that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      #####  media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      ##### the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Cheers!
I am running my build scripts locally with NPM and they complete successfully. However, I am hosting my website with AWS Amplify and have accepted the default build settings per their recommendation, but the build always fails during the frontend build.
I've read through the documentation (https://aws.amazon.com/getting-started/tutorials/deploy-react-app-cicd-amplify/)
Here is my build script in package.json:
```
"scripts": {
  "start": "npm run watch:all",
  "test": "echo \"Error: no test specified\" && exit 1",
  "lite": "lite-server",
  "jshint": "jshint",
  "scss": "node-sass -o css/ css/",
  "watch:scss": "onchange \"css/*.scss\" -- npm run scss",
  "watch:all": "concurrently \"npm run watch:scss\" \"npm run lite\"",
  "clean": "rimraf dist",
  "copyfonts": "copyfiles -f node_modules/font-awesome/fonts/* dist/fonts",
  "imagemin": "imagemin img/* -o dist/img",
  "usemin": " usemin index.html -d dist --htmlmin -o dist/index.html && usemin photos.html -d dist --htmlmin -o dist/photos.html && usemin \"pico's picks\".html -d dist --htmlmin -o dist/\"pico's picks\".html && usemin about.html -d dist --htmlmin -o dist/about.html && usemin contact.html -d dist --htmlmin -o dist/contact.html",
  "build": "npm run clean && npm run copyfonts && npm run imagemin && npm run usemin"
}
```
Here is the default build file in AWS Amplify:
```
version: 0.1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    # IMPORTANT - Please verify your build output directory
    baseDirectory: /
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```
Here is the output of the Amplify FrontEnd build process:
```
# Starting phase: preBuild
# Executing command: npm ci
2019-05-12T10:10:02.664Z [INFO]: > pngquant-bin@3.1.1 postinstall /codebuild/output/src794671044/src/Project-Pico/node_modules/pngquant-bin
> node lib/install.js
2019-05-12T10:10:03.154Z [WARNING]: ✔ pngquant pre-build test passed successfully
2019-05-12T10:10:03.163Z [INFO]: > optipng-bin@3.1.4 postinstall /codebuild/output/src794671044/src/Project-Pico/node_modules/optipng-bin
> node lib/install.js
2019-05-12T10:10:03.514Z [WARNING]: ✔ optipng pre-build test passed successfully
2019-05-12T10:10:03.514Z [WARNING]:
2019-05-12T10:10:03.521Z [INFO]: > jpegtran-bin@3.2.0 postinstall /codebuild/output/src794671044/src/Project-Pico/node_modules/jpegtran-bin
> node lib/install.js
2019-05-12T10:10:03.882Z [WARNING]: ✔ jpegtran pre-build test passed successfully
2019-05-12T10:10:03.883Z [WARNING]:
2019-05-12T10:10:03.890Z [INFO]: > gifsicle@3.0.4 postinstall /codebuild/output/src794671044/src/Project-Pico/node_modules/gifsicle
> node lib/install.js
2019-05-12T10:10:04.265Z [WARNING]: ✔ gifsicle pre-build test passed successfully
2019-05-12T10:10:04.374Z [INFO]: > fsevents@1.2.9 install /codebuild/output/src794671044/src/Project-Pico/node_modules/fsevents
> node install
2019-05-12T10:10:04.504Z [INFO]: > node-sass@4.12.0 install /codebuild/output/src794671044/src/Project-Pico/node_modules/node-sass
> node scripts/install.js
2019-05-12T10:10:05.004Z [INFO]: Downloading binary from https://github.com/sass/node-sass/releases/download/v4.12.0/linux-x64-57_binding.node
2019-05-12T10:10:05.343Z [INFO]: Download complete
2019-05-12T10:10:05.346Z [INFO]: Binary saved to /codebuild/output/src794671044/src/Project-Pico/node_modules/node-sass/vendor/linux-x64-57/binding.node
2019-05-12T10:10:05.371Z [INFO]: Caching binary to /root/.npm/node-sass/4.12.0/linux-x64-57_binding.node
2019-05-12T10:10:05.396Z [INFO]: > node-sass@4.12.0 postinstall /codebuild/output/src794671044/src/Project-Pico/node_modules/node-sass
> node scripts/build.js
2019-05-12T10:10:05.526Z [INFO]: Binary found at /codebuild/output/src794671044/src/Project-Pico/node_modules/node-sass/vendor/linux-x64-57/binding.node
2019-05-12T10:10:05.527Z [INFO]: Testing binary
2019-05-12T10:10:05.623Z [INFO]: Binary is fine
2019-05-12T10:10:05.702Z [WARNING]: added 878 packages in 8.92s
2019-05-12T10:10:05.712Z [INFO]: # Completed phase: preBuild
# Starting phase: build
2019-05-12T10:10:05.713Z [INFO]: # Executing command: npm run build
2019-05-12T10:10:05.901Z [INFO]: > project-pico@1.0.0 build /codebuild/output/src794671044/src/Project-Pico
> npm run clean && npm run copyfonts && npm run imagemin && npm run usemin
2019-05-12T10:10:06.087Z [INFO]: > project-pico@1.0.0 clean /codebuild/output/src794671044/src/Project-Pico
> rimraf dist
2019-05-12T10:10:06.342Z [INFO]: > project-pico@1.0.0 copyfonts /codebuild/output/src794671044/src/Project-Pico
> copyfiles -f node_modules/font-awesome/fonts/* dist/fonts
2019-05-12T10:10:06.657Z [INFO]: > project-pico@1.0.0 imagemin /codebuild/output/src794671044/src/Project-Pico
> imagemin img/* -o dist/img
...
```
The AWS Amplify build should complete successfully, but it always fails.
I used the following steps to get my build working on AWS Amplify. Follow them in order, and if one doesn't help, try the next.
First of all, it's recommended to use the same package manager for your build that you use in your project. Anyway, try these:
try deleting and re-installing node_modules (a local repro sketch follows below).
try building the project with webpack version 3.1.0 / webpack-server version 1.
try deleting and re-installing react-scripts.
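As a hedged starting point for the first suggestion, this mirrors locally what Amplify runs in its preBuild and build phases, so the failure can be reproduced outside of Amplify:

```sh
# Mirror Amplify's default phases locally.
# Assumption: a package-lock.json exists, since `npm ci` requires one.
rm -rf node_modules   # drop the previous install
npm ci                # clean install, as in Amplify's preBuild phase
npm run build         # the same command Amplify runs in its build phase
```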