Background
I currently have an npm script that looks something like
"dev":"yarn install && concurrently -k \"npm run webpack\" \"cd dist/ && node App.js\" \"npm run test\" \"npm run lint\""
Logically this installs dependencies, then runs webpack, starts the app, tests, and lints in parallel.
The npm run webpack script invoked there has --watch set.
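For reference, the webpack script it invokes is along these lines (the exact flags beyond --watch are an assumption):
"webpack": "webpack --watch --config webpack.config.js"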
Note: this is for dev.
The Problems
This tries to run the app before it has webpacked
This won't re-run the app when webpack repacks due to watch
The Goal
Run npm run webpack once
When it outputs (meaning the watch fired and finished), run the other three commands
When something crashes, inform me; don't waste time running stuff that won't work, but try again when I fix the file.
The Real Problem
I don't know what I don't know. I suspect the real answer is potentially in the webpack config itself, or that there's a better tool than concurrently/watch for my use case, or that the core idea behind my design is just crazy. Maybe I want to create a devServer.js that uses webpack-dev-middleware to serve these instead? How would that pull in linting and testing then?
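For reference, a minimal sketch of what that devServer.js idea might look like (the file names, port, and config path are assumptions):
// devServer.js - serve the webpack bundle from memory, rebuilding on change
const express = require('express');
const webpack = require('webpack');
const webpackDevMiddleware = require('webpack-dev-middleware');

const config = require('./webpack.config.js');
const compiler = webpack(config);

const app = express();
app.use(webpackDevMiddleware(compiler, { publicPath: config.output.publicPath }));
app.listen(3000, () => console.log('Dev server listening on http://localhost:3000'));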
I don't know what a beautiful version of this build would look like.
What I Really Need
A great tutorial/guide/blog post about how this 'Should' go.
Here's what I would do; perhaps there's a better way:
"scripts": {
"dev": "yarn install && concurrently -k \"npm run webpack\" \"npm run watch\"",
"watch": "onchange \"dist/**/" -- concurrently -k \"cd dist/ && node App.js\" \"npm run test\" \"npm run lint\""
}
This uses onchange. npm run dev starts webpack and onchange in parallel. onchange watches for file changes in dist/ and runs your tasks whenever anything there changes.
The limitation of this approach is that your tasks will not run until files change in dist. You can work around this by deleting dist/ before running webpack. (Use rimraf to do this in a cross-platform way.) Example:
"dev": "yarn install && rimraf dist && concurrently -k \"npm run webpack\" \"npm run watch\""
You can just use rm -rf dist if you don't care about Windows support.
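Alternatively, recent versions of onchange have an -i (--initial) flag that runs the command once on startup, which avoids the workaround entirely:
"watch": "onchange -i \"dist/**\" -- concurrently -k \"cd dist/ && node App.js\" \"npm run test\" \"npm run lint\""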
Related
I have created a script that runs my Python Django application, and I run this script using pm2.
I do pm2 start scripts.sh; it works properly, but after some time my application stops working and displays an error like this:
the runtime process for the instance running on port 37001 has unexpectedly quit
When I show the log using pm2 logs, it displays an error like this:
script.sh had too many unstable restarts (16). Stopped. "errored"
How can I resolve it? Can anyone help me?
I had the same issue and resolved it with:
pm2 kill
rm -rf node_modules
npm i
pm2 start index.js
sudo shutdown -r now
First, I created a configuration file using the following commands:
cd ~
pm2 init
sudo nano ecosystem.config.js
Then copy and paste the following code into the newly created ecosystem.config.js file (note: change the cwd entry to your project location):
module.exports = {
  apps: [
    {
      name: 'my-site',
      cwd: '/home/your-name/your-project-directory',
      script: 'npm',
      args: 'start',
      env: {
        NODE_PUBLIC_APP: 'NODE_PUBLIC_APP', // for example
      },
    },
    // optionally a second project
  ],
};
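For reference, a file like this is normally consumed with pm2's config-file invocation:
pm2 start ecosystem.config.js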
After that, before running the app with pm2, I did the following:
pm2 kill
sudo rm -rf node_modules
npm i
Then I ran the application in watch mode, so that I could see what was going on using pm2 logs later. To run the app in watch mode, do the following:
pm2 start npm --name "AnyAppName" -- run start --watch
And finally, I checked what caused the error by looking at the logs with the following command:
pm2 logs
Hope this helps someone in the future like me.
I deployed a create-react-app on GitHub Pages using a mix of these 2 tutorials (https://create-react-app.dev/docs/deployment & https://github.com/gitname/react-gh-pages). It's working just fine.
But this webpage is supposed to evolve regularly, I have some content to add.
Is there a way to modify the deployed app? How can I push the changes I would make in my local directory to gh-pages (production mode)? Do I need to run npm build again after my additions?
For the moment I have the development code on the master branch. I think the built version is on gh-pages, although I don't see any other branch.
Thank you for your help.
What I did to deploy:
Terminal
$ npm install --save gh-pages
$ git init
$ git remote add origin https://github.com/REMOTE
$ npm run deploy
Package.json
{
  "homepage": "https://USER-PAGE.github.io/",
  "scripts": {
    "predeploy": "npm run build",
    "deploy": "gh-pages -b master -d build"
  },
  "devDependencies": {
    "gh-pages": "^2.1.1"
  }
}
To modify the site, just make the changes, then git add, commit, and push to the required branch as you normally would.
Then run the command again:
npm run deploy
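Put together, a typical update cycle looks like this (the commit message and branch are just examples):
git add .
git commit -m "Add new content"
git push origin master
npm run deploy
There is no need to run npm run build yourself: the predeploy hook runs it automatically whenever you run npm run deploy.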
A Meteor app is running on the local machine. Then it gets built (appDir$ meteor build .) and the resultant myApp.tar.gz gets copied to the AWS cloud. Then a script runs on the cloud to put the app into a Docker container following some Dockerfile commands.
Every time a change needs to be made, the above is repeated. Is there any better way to reduce the effort of re-building/copying/dockerizing?
Is it possible by using volume and docker-compose and just sync the changes from the local development machine to the aws EC2 volume directory? How?
# Dockerfile on AWS EC2
FROM lambdalinux/baseimage-amzn:2016.09-000
RUN curl --silent --location https://rpm.nodesource.com/setup_4.x | bash -
RUN yum install -y tar nodejs
ADD ./myApp.tar.gz /opt/
EXPOSE 80
ENV ROOT_URL http://example.com
ENV MONGO_URL "mongodb://username:pass..."
ENV PORT 80
# Install nodejs modules
WORKDIR /opt/bundle/
RUN npm install fibers
RUN npm install underscore
RUN npm install source-map-support
RUN npm install semver
# Start the app
CMD node ./main.js
There is a command called rsync that will do a smart sync of a whole directory structure - if you unpacked the build locally you could then rsync it up to the server.
It can use either file dates or checksums to work out what has changed, which makes the process quicker. Minified files will probably change every time, but many assets certainly won't.
I would set it up with a mirror of your production directory, sync the files into there, do some (automated) sanity checks first, and then switch the new version into place. If it doesn't work you can switch the old version back. There is a little work required to get this set up, but it will make deployment faster and easier.
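A sketch of what that might look like (the host, user, and paths are assumptions):
# Sync the unpacked build into a staging directory on the server;
# --checksum compares file contents instead of dates, --delete removes stale files
rsync -az --checksum --delete ./bundle/ user@ec2-host:/opt/myApp-staging/
# once the sanity checks pass, switch the new version into place via symlink
ssh user@ec2-host 'ln -sfn /opt/myApp-staging /opt/myApp-current'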
As a starting point for making my own app that uses meanjs, I went to the meanjs website and used their Yeoman generator to create the template/sample app. Following the instructions, I got the sample application running out of the box on my local desktop machine within minutes. To complete the exercise I tried to deploy the sample app to an AWS/EC2 instance before making any changes to it. I have used the command line deployment tools in the past and liked them. It is also nice how you can now just select an EC2 Linux instance with node and npm already installed and ready.
After checking the sample into git, I run "git aws.push" to deploy the app.
The problem is in the package.json the line:
"postinstall": "bower install --config.interactive=false"
In the eb-activity.log:
npm WARN cannot run in wd meansample#0.0.1 bower install --config.interactive=false (wd=/tmp/deployment/application)
The result is that AngularJS ends up not getting installed in /public/lib.
The first thing I tried was giving the full path in the package.json file: node_modules/bower/bin/bower. This didn't help and resulted in the same error. Note that other commands like "grunt" don't need the full path specified in the package.json, and they work.
I don't understand enough of the black-box magic that aws.push does to understand why this error is happening. For example, what user does it run as? What permissions does that user have? What options, if any, does it use when it runs npm install?
I did figure out a work-around, but it adds a lot of extra steps that shouldn't be required if aws.push was able to run bower install directly. Basically I can manually run the bower install in the ssh client connected to my EC2 instance, set the owner/group on the installed files, and restart the server.
Work-around steps:
1) On the local command prompt, run git aws.push. Wait for the unsuccessful deployment to finish.
2) Connect an ssh client to the EC2 instance. From the command prompt:
cd /var/app/current
# NOTE: without sudo, the ec2-user I am logged in as does not have permission to create /public/lib, which AngularJS needs to be installed into
sudo node_modules/bower/bin/bower install --config.interactive=false --allow-root
# NOTE: just changing the owner and group to match the other files that aws.push deployed
sudo chown -R nodejs public/lib
sudo chgrp -R nodejs public/lib
3) From the AWS dashboard, select the correct EC2 instance and choose Action = Restart App Server(s).
Now AngularJS is installed and the sample app works.
How do I eliminate the extra steps and make it so aws.push can do the bower install successfully?
I experienced the same problem when trying to publish my Node.js app on a private server running CentOS as the root user. The same error is fired by "postinstall": "./node_modules/bower/bin/bower install" in my package.json file, so the only solution that worked for me was to use both options to avoid the error:
1: use the --allow-root option for the bower install command
"postinstall": "./node_modules/bower/bin/bower --allow-root install"
2: use the --unsafe-perm option for the npm install command
npm install --unsafe-perm
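Since you can't control how git aws.push invokes npm install, it may help to set the second option in an .npmrc file at the project root instead (this relies on npm's unsafe-perm config setting, which npm versions of this era honor):
unsafe-perm=true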
I am using Jenkins CI for my django project. For Django-Jenkins integration I am using the django-jenkins app. In the build step of Jenkins I create a fresh virtualenv and install all the dependencies for each build using requirements file. However, this makes build extremely slow because a fresh copy of all the dependencies must be downloaded from a PyPI mirror, even if nothing has changed in the dependencies since the last build. So I started using the local caching built-in to pip by setting the PIP_DOWNLOAD_CACHE environment variable. But the whole build process is still painfully slow and takes more than 10 minutes. Is there any way I could speed up the whole process? Maybe by caching the compiled dependencies or something else?
Only install a fresh virtualenv if your requirements.txt file changes. This can be done easily with some shell commands. We are doing something similar in one of our projects. In a Jenkins shell window we have (after svn up):
touch changed.txt                                   # make sure the state file exists on the first run
stat -c %Y project/requirements.txt > changed1.txt  # record the current mtime of requirements.txt
diff -q changed.txt changed1.txt || echo "DO YOUR PIP --upgrade HERE!"
cp changed1.txt changed.txt                         # remember this state for the next build
Why bother creating a fresh virtualenv each time you build? You should be able to create just one and simply activate it with . /path/to/venv/bin/activate as an 'Execute shell script' build step (assuming the use of Linux here). Then, if you need to install a new dependency, you can activate the venv on your own and pip install the new package.
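For example, the whole build step could look something like this (the venv path is an assumption, and python manage.py jenkins is the django-jenkins entry point):
# Jenkins 'Execute shell' build step
. /path/to/venv/bin/activate
pip install -r requirements.txt    # near no-op when nothing has changed
python manage.py jenkins           # run tests/lint via django-jenkins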