As a starting point for building my own app with MEAN.JS, I went to the MEAN.JS website and used their Yeoman generator to create the template/sample app. Following the instructions, I had the sample application running out of the box on my local desktop machine within minutes. To complete the exercise, I tried to deploy the sample app to an AWS/EC2 instance before making any changes to it. I have used the command line deployment tools in the past and liked them. It is also nice that you can now just select an EC2 Linux instance with node and npm already installed and ready.
After checking the sample into git, I ran "git aws.push" to deploy the app.
The problem is this line in package.json:
"postinstall": "bower install --config.interactive=false"
In the eb-activity.log:
npm WARN cannot run in wd meansample#0.0.1 bower install --config.interactive=false (wd=/tmp/deployment/application)
The result is that AngularJS ends up not getting installed in /public/lib.
The first thing I tried was giving the full path in package.json: node_modules/bower/bin/bower. This didn't help and resulted in the same error. Notably, other commands like "grunt" don't need the full path specified in package.json, and they work.
I don't understand enough of the black-box magic that aws.push does to see why this error is happening. For example, what user does it run as? What permissions does that user have? What options, if any, does it use when it runs npm install?
I did figure out a work-around, but it adds a lot of extra steps that shouldn't be required if aws.push were able to run bower install directly. Basically, I can run the bower install manually in an ssh session on my EC2 instance, set the owner/group on the installed files, and restart the server.
Work-around steps:
1) On the local command prompt, run git aws.push. Wait for the unsuccessful deployment to finish.
2) Connect an ssh client to the EC2 instance. From the command prompt:
cd /var/app/current
# NOTE: without sudo, the ec2-user I am logged in as lacks permission to create public/lib, which AngularJS installs into
sudo node_modules/bower/bin/bower install --config.interactive=false --allow-root
# NOTE: change the owner and group to match the other files that aws.push deployed
sudo chown -R nodejs public/lib
sudo chgrp -R nodejs public/lib
3) From the AWS dashboard, select the correct EC2 instance and choose Action = Restart App Server(s).
Now AngularJS is installed and the sample app works.
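For reference, here is the on-instance portion of the workaround as a single script (a sketch assembled from the steps above; run it over ssh after the failed deployment, then restart the app server from the dashboard as in step 3):
#!/bin/bash
# Run on the EC2 instance after the unsuccessful deployment finishes
cd /var/app/current
# sudo because ec2-user cannot create public/lib; --allow-root because bower then runs as root
sudo node_modules/bower/bin/bower install --config.interactive=false --allow-root
# match the owner/group of the other files that aws.push deployed
sudo chown -R nodejs:nodejs public/lib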
How do I eliminate the extra steps and make it so aws.push can do the bower install successfully?
I experienced the same problem when trying to publish my nodejs app on a private server running CentOS as the root user. The same error is fired by "postinstall": "./node_modules/bower/bin/bower install" in my package.json file, so the only solution that worked for me was to use both of the following options to avoid the error:
1: use the --allow-root option for the bower install command:
"postinstall": "./node_modules/bower/bin/bower --allow-root install"
2: use the --unsafe-perm option for the npm install command:
npm install --unsafe-perm
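Putting both together (a sketch; the paths mirror those above):
# in package.json:
#   "postinstall": "./node_modules/bower/bin/bower --allow-root install"
# then install dependencies without npm dropping root permissions for lifecycle scripts:
npm install --unsafe-perm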
I am trying to use a post-startup script to create a Vertex AI User Managed Notebook whose Jupyter Lab has a dedicated virtual environment and corresponding computing kernel when first launched. I have had success creating the instance and then, as a second manual step from within the Jupyter Lab > Terminal, running a bash script like so:
#!/bin/bash
# Create a dedicated virtual environment under /home/jupyter
cd /home/jupyter
mkdir -p env
cd env
python3 -m venv envName --system-site-packages
source envName/bin/activate
envName/bin/python3 -m pip install --upgrade pip
# Register the environment as a Jupyter kernel
python -m ipykernel install --user --name=envName
pip3 install geemap --user
pip3 install earthengine-api --user
pip3 install ipyleaflet --user
pip3 install folium --user
pip3 install voila --user
pip3 install jupyterlab_widgets
deactivate
# Install and enable the lab extensions, then rebuild Jupyter Lab
jupyter labextension install --no-build @jupyter-widgets/jupyterlab-manager jupyter-leaflet
jupyter lab build --dev-build=False --minimize=False
jupyter labextension enable @jupyter-widgets/jupyterlab-manager
However, I have not had luck using this code as a post-startup script (supplied through the console creation tools rather than the command line, thus far). When I open Jupyter Lab and look at the relevant structures, I find that there is no environment or kernel. Could someone please provide a working example that accomplishes my aim, or otherwise describe the order of build steps one would follow?
Post startup scripts run as root.
When you run:
python -m ipykernel install --user --name=envName
The notebook runs it as the current user, which is root, whereas the Terminal runs as the jupyter user.
Option 1) Have two scripts:
Script A. The contents specified in the original post. Example: gs://newsml-us-central1/so73649262.sh
Script B. Downloads Script A and executes it as jupyter. Example: gs://newsml-us-central1/so1.sh; use this one as the post-startup script.
#!/bin/bash
set -x
# Fetch Script A, hand it to the jupyter user, and run it as that user
gsutil cp gs://newsml-us-central1/so73649262.sh /home/jupyter
chown jupyter /home/jupyter/so73649262.sh
chmod a+x /home/jupyter/so73649262.sh
su -c '/home/jupyter/so73649262.sh' jupyter
Option 2) Create a single file in bash using a heredoc (EOF): write the contents into one file and execute it as mentioned above.
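A minimal sketch of Option 2 (file paths are illustrative; the heredoc body would contain the full script from the original post):
#!/bin/bash
set -x
# write the embedded script to a file, then run it as the jupyter user
cat << 'EOF' > /home/jupyter/install.sh
#!/bin/bash
cd /home/jupyter
mkdir -p env
# ... remaining steps from the script in the question ...
EOF
chown jupyter /home/jupyter/install.sh
chmod a+x /home/jupyter/install.sh
su -c '/home/jupyter/install.sh' jupyter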
This is being posted as support context for the accepted solution from @gogasca.
@gogasca's suggestion (I'm using Option 1) works great, if you are patient. Through many attempts, I discovered that the inconsistent behavior was based on timing of access. Using Option 1, the User Managed Notebook appears available for use in Vertex AI Workbench (green check and clickable "OPEN JUPYTERLAB" link) before the installation script(s) have finished.
If you open the Notebook too soon, you will find two things: (1) you will be prompted for a recommended Jupyter Lab build, for instance:
Build Recommended
JupyterLab build is suggested:
@jupyter-widgets/jupyterlab-manager changed from file:../extensions/jupyter-widgets-jupyterlab-manager-3.1.1.tgz to file:../extensions/jupyter-widgets-jupyterlab-manager-5.0.3.tgz
and (2) while the custom environment/kernel is present and accessible, if you try to use ipyleaflet or ipywidget tools you will see one of several JavaScript errors, depending on how quickly you try to use the kernel relative to the build that is (apparently) continuing in the background: Error displaying widget: model not found, and/or a broken-page icon with a JavaScript error that, if clicked, shows something like:
[Open Browser Console for more detailed log - Double click to close this message]
Failed to load model class 'LeafletMapModel' from module 'jupyter-leaflet'
Error: No version of module jupyter-leaflet is registered
at f.loadClass (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.bcbea9feb6e7c4da7530.js?v=bcbea9feb6e7c4da7530:1:74856)
at f.loadModelClass (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:10729)
at f._make_model (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:7517)
at f.new_model (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:5137)
at https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:6385
at Array.map ()
at f._loadFromKernel (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:6278)
at async f.restoreWidgets (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.bcbea9feb6e7c4da7530.js?v=bcbea9feb6e7c4da7530:1:77764)
The solution here is to keep waiting. In my demo script, I transfer a file at the end of the build process. If I wait long enough for this file to actually appear in the Instance directories, the recommendation for a rebuild is absent and the extensions work properly.
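One way to make that wait verifiable (a sketch; the marker filename is hypothetical) is to touch a marker file as the very last step of the install script and hold off until it exists:
# last line of the install script (Script A):
touch /home/jupyter/BUILD_COMPLETE
# from a Jupyter Lab terminal, wait for it before using the kernel:
while [ ! -f /home/jupyter/BUILD_COMPLETE ]; do sleep 10; done; echo ready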
I am using expo@43.0.3 (and expo-cli@5.0.3) to manage my React Native project, and I have to install an npm package from local source:
$ npm install /path/to/mypackage
In my package.json the package is successfully linked via
"dependencies": {
...
"myPackage": "file:../../mypackage",
...
}
I can also confirm the package works when installing it into a new plain node project (same node version 14.8.2).
Now when I start expo via expo start and navigate to the app, it does not throw any error, only a warning:
› Reloading apps
warn No apps connected. Sending "reload" to all React Native apps failed. Make sure your app is running in the simulator or on a phone connected via USB.
When using the package from the registry, however, everything builds.
I tried the private packages section from the expo docs, but it only describes how to use private packages from a registry, not local ones.
Anything I'm missing here?
edit:
After resetting the expo network adapters, it loads the bundle, but it now says it can't find the package:
Unable to resolve module myPackage from /home/user/path/to/myPackage/file.js: myPackage could not be found within the project or in these directories:
node_modules
If you are sure the module exists, try these steps:
1. Clear watchman watches: watchman watch-del-all
2. Delete node_modules and run yarn install
3. Reset Metro's cache: yarn start --reset-cache
4. Remove the cache: rm -rf /tmp/metro-*
However, I'm not using watchman, I'm not using yarn, and removing the metro-* folders from /tmp did not make a difference.
As it turned out in this issue on GitHub, it can be solved via npm pack:
run npm pack inside of your library and then npm install path/to/the/packed/file.tgz from your project
Which worked fine for the setup I described in the question.
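Concretely, for the layout in the question (a sketch; the generated .tgz name depends on the name and version fields in mypackage's package.json):
cd ../../mypackage
npm pack                          # produces e.g. mypackage-1.0.0.tgz
cd -                              # back to the expo project
npm install ../../mypackage/mypackage-1.0.0.tgz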
When running a bash script during CodeBuild, I get this error:
./scripts/test.sh: line 95: docker: command not found
However, I've made sure to install docker at the start of the script using:
curl -sSL https://get.docker.com/ | sh
apt-get install -y docker-ce docker-compose
But this results in the following error:
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'docker-ce' has no installation candidate
Any ideas on how to get docker working during CodeBuild?
There are a few different options for this in CodeBuild:
You can use CodeBuild-provided images, which already have docker installed on them. To use one of these images, select privileged mode when creating the CodeBuild project.
Alternatively, you can enable Docker in a custom image (an image not managed by CodeBuild, e.g. hosted in your ECR repo or on public DockerHub) when configuring the CodeBuild project. Select privileged mode in your project settings. Instructions here: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html
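If you manage the project from the CLI, the flag lives in the environment configuration (a sketch; the project name and image are placeholders):
# enable privileged mode on an existing CodeBuild project
aws codebuild update-project --name my-project \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true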
A Meteor app is running on the local machine. Then it gets built (appDir$ meteor build .) and the resulting myApp.tar.gz gets copied to the AWS cloud. Then a script runs on the cloud to put the app into a docker container, following some Dockerfile commands.
Every time a change needs to be made, all of the above gets repeated. Is there a better way to reduce the effort of re-building/copying/dockerizing?
Is it possible by using volume and docker-compose and just sync the changes from the local development machine to the aws EC2 volume directory? How?
# Dockerfile on AWS EC2
FROM lambdalinux/baseimage-amzn:2016.09-000
RUN curl --silent --location https://rpm.nodesource.com/setup_4.x | bash -
RUN yum install -y tar nodejs
ADD ./myApp.tar.gz /opt/
EXPOSE 80
ENV ROOT_URL http://example.com
ENV MONGO_URL "mongodb://username:pass..."
ENV PORT 80
# Install nodejs modules
WORKDIR /opt/bundle/
RUN npm install fibers
RUN npm install underscore
RUN npm install source-map-support
RUN npm install semver
# Start the app
CMD node ./main.js
There is a command called rsync that will do a smart sync of a whole directory structure; if you unpack the build locally, you can then rsync it up to the server.
It can use either file dates or checksums to work out what has changed, which makes the process quicker. Minified files will probably change every time, but many assets certainly won't.
I would set it up with a mirror of your production directory, sync the files into there, run some (automated) sanity checks first, and then switch the new version into place. If it doesn't work, you can switch the old version back. There is a little work required to get this set up, but it will make deployment faster and easier.
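A minimal sketch of such a sync (hostname and paths are hypothetical):
# -a preserves permissions and timestamps, -z compresses in transit,
# -c compares by checksum instead of date, --delete removes stale files
rsync -azc --delete ./bundle/ ec2-user@my-ec2-host:/var/www/myapp-staging/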
I'm using Elastic Beanstalk to deploy my application as a Single Docker Application.
My Dockerfile does composer install while deploying, but I get a Could not authenticate against github.com error.
I use these lines in my Dockerfile to install my dependencies:
WORKDIR /www
RUN ["composer", "install", "-o"]
How would I solve this issue?
I think you need to configure composer inside your container with your key or something like that; remember that inside your container you're basically on another OS, and you don't have your public keys, etc.
I'd try to install it from source rather than from git (as you don't have keys).
try this:
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
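If the authentication failure comes from composer cloning dependencies from GitHub over git, one thing to try (an assumption, not verified for this setup) is forcing dist archives, which are downloaded over HTTPS without needing your keys:
# prefer zip archives over git clones; skip interactive prompts during the build
RUN composer install -o --prefer-dist --no-interaction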