Currently I have an AWS EB stack set up using the default 64bit Amazon Linux 2016.03 v2.1.3 running Node.js AMI.
Our codebase is written in ES6 and we use Babel to transpile it to ES5. Our current deployment process is to run Babel locally to build the /dist directory, commit dist into our git repository, and use eb deploy to deploy the application to EB.
I would like to be able to remove the step of building the distribution locally and perform this operation on the EB server.
I have tried using .ebextensions with a command to execute npm run build, which yields the error Return code: 127 Output: /bin/sh: npm: command not found. I have also tried using files to drop a script into appdeploy/pre, which yields the same error. And I have tried using container_commands, which also fails with an error stating that npm is not available.
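For reference, the .ebextensions attempt described above would look roughly like this (the npm run build command is from the question; the file name and command key are assumptions):

    # .ebextensions/01_build.config (file name is an assumption)
    commands:
      01_run_build:
        command: npm run build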
Which of the available deployment hooks that AWS EB provides would be the correct place to use npm run build?
You can use npm hooks.
For example:
Here I have an npm start command that starts my server, but before it runs I want babel . -d ./dist to compile the files into the dist directory. So I have a task called prestart (using the npm pre hook naming convention) that does that.
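A minimal package.json scripts section along those lines (the server entry point is an assumption):

    {
      "scripts": {
        "prestart": "babel . -d ./dist",
        "start": "node ./dist/server.js"
      }
    }

npm runs prestart automatically every time npm start is invoked, so the build happens on the EB server without a separate build step.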
I have a Django app deployed via AWS Elastic Beanstalk that uses the CodePipeline service to build. As part of that pipeline, the CodeBuild service is used to build the app that gets deployed to the Elastic Beanstalk environment.
The build failed, sending the following error message:
django.core.exceptions.ImproperlyConfigured: SQLite 3.9.0 or later is required (found 3.7.17).
Per Amazon's own package version listing, I realize that is expected, given that the older version is what ships with the Amazon Linux 2 distro.
OK. I wrote a shell script to download the latest version of SQLite and build it from source, but I'm still getting the same issue.
In the buildspec.yaml file I have the following:
    ...
    post_build:
      commands:
        - echo "beginning post-build phase..."
        - bash get_sqlite.sh
        - echo "sqlite version -- $(sqlite3 --version)"
        - python manage.py makemigrations
    ...
In the CodeBuild logs, I can see the result of the echo command as follows:
sqlite version -- 3.40.1 2022-12-28
Yet, the build fails and the logs still show the following error:
django.core.exceptions.ImproperlyConfigured: SQLite 3.9.0 or later is required (found 3.7.17).
Any idea what further steps need to be taken for the updated version to be detected rather than the previous one?
I think you need to move your sqlite binary to the bin directory in Linux:
- mv sqlite /usr/local/bin/
Even though the package has been updated, Linux reads PATH (the environment variable used to locate commands) and is still finding the old version of the command. That is why you need to replace the old sqlite binary with the new version in the bin directory.
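A rough sketch of that fix as shell commands (the binary name and location are assumptions based on the question's get_sqlite.sh):

    # assuming get_sqlite.sh left the freshly built binary as ./sqlite3
    mv sqlite3 /usr/local/bin/
    hash -r             # clear the shell's cached lookup of sqlite3
    which sqlite3       # should now print /usr/local/bin/sqlite3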
I am getting the below error in my GCP Cloud Run service:
Error: Could not find or load main class com.sdas.demo.sd.Application
Caused by: java.lang.ClassNotFoundException: com.sdas.demo.sd.Application
What I was doing:
I have a Spring Boot application where I use jib-maven-plugin. In the BitBucket pipeline, I was executing the below command:
mvn clean compile com.google.cloud.tools:jib-maven-plugin:3.1.4:build -Dimage=eu.gcr.io/sdas-demo-dev/temp-service
After that, I deploy this GCR image to Cloud Run using a gcloud command from the BitBucket pipeline. This deployment failed with the 'Could not find or load main class' error.
But if I run the same mvn clean compile com.google.cloud.tools:jib-maven-plugin:3.1.4:build -Dimage=eu.gcr.io/sdas-demo-dev/temp-service from Git Bash on my computer for the same Spring Boot application code and then deploy it to Cloud Run (via gcloud command, the console, or the pipeline), it deploys successfully.
I set the 'mainClass' tag under jib-maven-plugin in pom.xml, but it is still unable to find or load the main class.
Can anyone help me identify the problem? Is this a classpath issue or an environment issue?
Issue sorted now.
Root cause:
'No resources found to compile' - I found this message in the build log. It hinted that something was wrong with the application package structure.
My system runs Windows 10 and my application directory starts with 'Java.com.demo.sdas' (capital J). Since Windows is case-insensitive, this caused no issue locally.
The BitBucket pipeline runs on a Linux server, which is case-sensitive, so it was unable to find the application directory starting with 'Java.com.demo.sdas'.
Solution: renamed the directory to 'java', after which everything worked as expected.
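If the sources are tracked in git (an assumption), note that a case-only rename is not picked up on a case-insensitive filesystem like Windows, so it has to go through a temporary name:

    git mv Java java-tmp        # step 1: rename away from the old casing
    git mv java-tmp java        # step 2: rename to the desired lowercase name
    git commit -m "Fix source directory casing for Linux builds"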
I am trying to run an AWS Lambda project locally on Ubuntu. When I run the project with AWS SAM Local it shows me this error: Error: Running AWS SAM projects locally requires Docker. Have you got it installed?
I had trouble installing it on Fedora.
When I followed the Docker postinstall instructions I managed to get past this issue.
https://docs.docker.com/install/linux/linux-postinstall/
I had to:
Delete the ~/.docker directory;
Create the "docker" group;
Add my user to the "docker" group;
Logout and back in again;
Restart the "docker" daemon.
I was then able to run the command:
sam local start-api
If you want to run sam-cli locally, you first have to install Docker from the official Docker website, then run sudo sam local start-api. Note that sudo is necessary to run the local development server with the needed privileges.
This error mostly arises from a lack of admin privileges to use Docker. Just add sudo to your command and it will work.
eg: sudo sam local start-api --region eu-west-3
We are working on Mac and were seeing the same message when using an older version of Docker (1.12.6). We have since updated to a newer (but not the latest) version, 17.12.0-ce-mac49, and it is now fine.
Another cause for this is this recent issue within Docker for Mac.
A quick workaround, as specified in the issue itself, is to run SAM with:
$ DOCKER_HOST=unix://$HOME/.docker/run/docker.sock sam local start-api
You don't need to run SAM as root.
I am using colima for Docker on a Mac with an Intel chip and faced this error. I was able to resolve it by adding DOCKER_HOST to my .zshrc file:
vi ~/.zshrc
Paste export DOCKER_HOST="unix://$HOME/.colima/docker.sock" into the file, save and quit (Esc, then :wq), and run source ~/.zshrc (or open a new shell) so the variable takes effect.
I'm trying to run unit tests against our Angular CLI project using our hosted VSTS build agents; however, it keeps running into trouble when it tries to run 'ng test'.
To resolve this I have tried to make the agent use the ng tool directly by providing the path to the tool. This hasn't worked, as it looks like it's trying to run 'ng test' where the tool is located rather than in the specified current working directory.
I've also tried adding it as an environment variable in Windows (we're using Windows Server 2012 to host the VSTS agent) and setting the tool in the VSTS agent as just ng; however, it doesn't appear to be finding the ng tool.
How can I get the VSTS agent to make use of the ng tool to run tests? We have got @angular/cli installed on the server hosting the agent.
The thing is that you won't get Angular CLI installed globally on VSTS, as its build servers don't support that. The good news is that you don't even need the CLI installed globally on your agent.
All you need is npm run ng build -- --prod - this way it will always run the local version. You also won't need to worry about updating a global package at all.
Use npm run ng test to run tests and npm run ng e2e to run Protractor. If you need to pass any more params to any of these, just use -- as in the build example above.
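For reference, this relies on the ng script that Angular CLI generates in the project's package.json; a minimal sketch:

    {
      "scripts": {
        "ng": "ng",
        "test": "ng test",
        "e2e": "ng e2e"
      }
    }

npm run resolves ng from the project's local node_modules/.bin, which is why no global install is needed.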
As mentioned by @Kuncevic, to use the Angular CLI without installing it globally, you will need to use the npm run command.
To run an Angular build using Azure DevOps:
Add an npm task to install dependencies (choose install for the command)
Add another npm task, but choose custom for the command. Then add your command and arguments:
run ng -- build --output-path=dist --configuration=prod
Note that npm is not part of the command and arguments, since it is provided by the task. Also note how -- separates the command to be run from the arguments passed to it.
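In YAML form, the equivalent steps might look like this (a sketch; the Npm@1 task is standard, and the custom command mirrors the example above):

    steps:
      - task: Npm@1
        inputs:
          command: 'install'
      - task: Npm@1
        inputs:
          command: 'custom'
          customCommand: 'run ng -- build --output-path=dist --configuration=prod'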
I'm having trouble installing the AWS Elastic Beanstalk command line tool and I don't understand why. I've downloaded the package from AWS and followed the instructions carefully. Here are the installation instructions:
== Installation
Once you have downloaded the CLI package:
1) Unzip this archive to a location of your choosing.
Eb is located in the "eb" directory. The complete CLI reference
for more advanced scenarios can be found in the "api" directory.
To add eb files to your path:
Linux/Mac OS X (Bash shell):
export PATH=$PATH:<path to eb>
Windows:
set PATH=<path to eb>;%PATH%
I'm using Mac OS X, so I've used export PATH=$PATH:<path to eb>. For the path to eb, I just dragged the file into the terminal, which resulted in export PATH=$PATH:/Users/lydia/Downloads/ElasticBeanstalk/eb/macosx/python2.7/eb. I'm not sure what I'm missing, and I can't deploy without getting the eb command line working first.
Remove the eb at the end so it's just
/Users/lydia/Downloads/ElasticBeanstalk/eb/macosx/python2.7/
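So the export line becomes (using the path from the question):

    export PATH=$PATH:/Users/lydia/Downloads/ElasticBeanstalk/eb/macosx/python2.7/
    which eb    # should now print the full path to the eb script

PATH entries must be directories containing the executable, not the executable file itself.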
This worked for me, although I can only get it to work if I export the CLI path inside the specific website folder I am working on; see my question here: https://askubuntu.com/questions/428417/aws-elastic-beanstalk-command-line-tool-setup
A fix that worked for me (if you installed Python using brew) is to remove Python via
brew uninstall --force python
and then install it again from https://www.python.org/downloads/.
Then just follow the instructions from AWS.
You only add directories to your $PATH. Is ~/Downloads/ElasticBeanstalk/eb/macosx/python2.7/eb a directory? Or is it the actual command?