webpack-cli [Error: EROFS: read-only file system, mkdir '/resource'] - webpack-5

I want to copy static files in webpack 5.x, so I installed the copy plugin ("copy-webpack-plugin": "^10.2.1") and added a config like this:
new CopyPlugin({
  patterns: [
    { from: "src/manifest.json", to: "manifest.json" },
    { from: "src/resource/image", to: "/resource/image" },
  ],
}),
What I am trying to do is copy the image folder from the source folder to the dist folder while keeping the folder structure. But when I run the webpack command, it fails with this error:
➜ reddwaf-translate-plugin git:(main) ✗ npm run dev
> reddwaf-translate-plugin@1.0.0 dev
> rm -rf src/bundle && webpack --mode development --config src/resource/config/webpack.dev.config.js
[webpack-cli] [Error: EROFS: read-only file system, mkdir '/resource'] {
  errno: -30,
  code: 'EROFS',
  syscall: 'mkdir',
  path: '/resource'
}
I am sure the filesystem is not read-only, because I have created files and folders on it. Why did this happen, and what should I do to fix it? The operating system is macOS Monterey on an M1 Pro chip.

I finally figured out what happened: configuring the destination with a leading slash means the plugin tries to create a resource folder at the root of the macOS filesystem. That is why the error occurred; simply removing the leading slash fixes the problem.
new CopyPlugin({
  patterns: [
    { from: "src/manifest.json", to: "manifest.json" },
    { from: "src/resource/image", to: "resource/image" },
  ],
}),
With the relative path, it creates the folder structure inside the dist folder.
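For context, copy-webpack-plugin resolves a relative to path against webpack's output.path, which is why the absolute path tried to escape the build directory entirely. A minimal sketch of the relevant output config (the dist location here is an assumption; adjust it to your setup):

const path = require("path");

module.exports = {
  output: {
    // relative `to` values in CopyPlugin patterns are resolved against this path
    path: path.resolve(__dirname, "dist"),
  },
  // ...rest of the webpack config
};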

Related

How do I get vite to build entire project instead of just the index.html page?

I am new to vite and I can't figure out how to get it to build my entire project instead of just my index.html page. Every time I run "npm run build" it only builds index.html, but npm run dev works fine. I have all my files at the same level, as in the picture. How do I resolve this problem?
Create a vite.config.js file at the root of the project and put this in it:
const { defineConfig } = require('vite')

module.exports = defineConfig({
  build: {
    rollupOptions: {
      input: {
        main: './index.html',
        about: './about.html',
        shaderOne: './shaderOne.html',
        // ...
        // List all files you want in your build
      }
    }
  }
})
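The Vite documentation builds these entries with absolute paths via path.resolve, which makes them independent of the working directory; an equivalent sketch (the page names are carried over from the example above):

const { defineConfig } = require('vite')
const { resolve } = require('path')

module.exports = defineConfig({
  build: {
    rollupOptions: {
      input: {
        // resolve() anchors each entry to the config file's directory
        main: resolve(__dirname, 'index.html'),
        about: resolve(__dirname, 'about.html'),
        shaderOne: resolve(__dirname, 'shaderOne.html')
      }
    }
  }
})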
If you don't have vite installed locally yet, you will need to install it. You can do so with npm install vite
See the Vite documentation for details

How to add subresource integrity with Angular appShell build

I built an application with Angular CLI 9.
I patched the package.json file with:
{
  "scripts": {
    "build:prod": "ng build --prod --subresource-integrity",
    "prebuild:prod": "TS_NODE_COMPILER_OPTIONS='{\"module\": \"commonjs\"}' ts-node ./sitemap_generator.ts"
  }
}
So when I call npm run build:prod, my two commands are executed and the output files generated by the compiler contain SRI hashes.
Now I have added the app shell:
npm run ng generate appShell -- --client-project my-project
To run the build with the app shell, I have to use this command:
npm run ng run my-project:app-shell:production
MAIN QUESTION
But this command calls the my-project:build:production configuration from the angular.json file, and that configuration does not accept the --subresource-integrity argument :/
How can I patch this to get an app shell production build with SRI?
SECONDARY QUESTION for the brave
This app shell build creates a server/ folder in dist/ that just contains a main.js file. I suppose it is used internally with Node to build the app shell; can someone confirm that?
And can I also use Universal with this architecture to do some SSR for search engines?
Thanks!
Ok, I found a way by editing angular.json:
{
  "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
  "version": 1,
  "newProjectRoot": "my-project",
  "projects": {
    "oce-training": {
      "architect": {
        "build": {
          "configurations": {
            "production": {
              "subresourceIntegrity": true
            }
          }
        }
      }
    }
  }
}
So we cannot override this from package.json or via a CLI flag, but it's sufficient for my case.
Now I have in package.json:
{
  "scripts": {
    "build:prod": "ng run oce-training:app-shell:production",
    "prebuild:prod": "TS_NODE_COMPILER_OPTIONS='{\"module\": \"commonjs\"}' ts-node ./sitemap_generator.ts"
  }
}
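To double-check that the app shell build actually emits SRI, one quick option is to grep the generated index.html for integrity attributes (the dist path here is an assumption based on a default project layout):

grep -o 'integrity="sha[^"]*"' dist/oce-training/index.html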
My question about SSR still stands, but that could be another Stack Overflow post ;)

How do I deploy monorepo code to AWS Lambda using lerna?

I am attempting to make two AWS Lambda functions (written in TypeScript). Both functions share the same code for interacting with an API. To avoid copying the same code into two different Lambdas, I would like to move the shared code into a local module and have both Lambdas depend on that module.
My initial attempt at sharing code between the two lambdas was to use a monorepo and lerna. My current project structure looks like this:
- lerna.json
- package.json
- packages
  - api
    - package.json
  - lambdas
    - funcA
      - package.json
    - funcB
      - package.json
lerna.json:
{
  "packages": [
    "packages/api",
    "packages/lambdas/*"
  ],
  "version": "1.0.0"
}
In the package.json of each of my Lambda functions, I can include my local api module like this:
"dependencies": {
"#local/api": "*"
}
With this, I've been able to move the common code into its own module. However, I'm now not sure how to bundle my functions for deployment to AWS Lambda. Is there a way for lerna to create a bundle that can be deployed?
As cp -rL doesn't work on macOS, I had to come up with something similar.
Here is a workflow that works if all of your packages belong to one scope (@org):
In the package.json of your lerna repo:
"scripts": {
"deploy": "lerna exec \"rm -rf node_modules\" && lerna bootstrap -- --production && lerna run deploy && lerna bootstrap"
}
In the package that contains your lambda function:
"scripts":{
"deploy": "npm pack && tar zxvf packagename-version.tgz && rm -rf node_modules/#org && cp -r node_modules/* package/node_modules && cd package && npm dedupe"
}
Now replace "packagename-version" and "@org" with the respective values for your project. Also add all of the dependent packages to "bundledDependencies".
After running npm run deploy in the root of your lerna monorepo, you end up with a folder called "package" inside the package that contains your lambda function. It has all the dependencies needed to run your function. Take it from there.
I had hoped that using npm pack would let me utilize .npmignore files, but it seems that doesn't work. If anyone has an idea how to make it work, let me know.
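From there, turning the package folder into a deployable artifact can be as simple as zipping it and pushing it with the AWS CLI; a minimal sketch (the function name and zip file name are assumptions):

cd package
zip -r ../function.zip .
aws lambda update-function-code --function-name my-function --zip-file fileb://../function.zip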
I have struggled with this same problem for a while now, and I was finally forced to do something about it.
I was using a little package named slice-node-modules, as found here in this similar question, which was good enough for my purposes for a while. As I have consolidated more of my projects into monorepos and begun using shared dependencies which reside as siblings rather than being externally published, I ran into shortcomings with that approach.
I've created a new tool called lerna-to-lambda which was specifically tailored to my use case. I published it publicly with minimal documentation, hopefully enough to help others in similar situations. The gist of it is that you run l2l in your bundling step, after you've installed all of your dependencies, and it copies what is needed into an output directory which is then ready to deploy to Lambda using SAM or whatever.
For example, from the README, something like this might be in your Lambda function's package.json:
"scripts": {
...
"clean": "rimraf build lambda",
"compile": "tsc -p tsconfig.build.json",
"package": "l2l -i build -o lambda",
"build": "yarn run clean && yarn run compile && yarn run package"
},
In this case, the compile step is compiling TypeScript files from a source directory into JavaScript files in the build directory. Then the package step bundles up all the code from build along with all of the Lambda's dependencies (except aws-sdk) into the directory lambda, which is what you'd deploy to AWS. If someone were using plain JavaScript rather than TypeScript, they could just copy the necessary .js files into the build directory before packaging.
I realize this question is over 2 years old, and you've probably figured out your own solutions and/or workarounds since then. But since it is still relevant to me, I assume it's still relevant to someone out there, so I am sharing.
Running lerna bootstrap will create a node_modules folder in each "package". This will include all of your lerna managed dependencies as well as external dependencies for that particular package.
From then on, your deployment of each lambda will be agnostic of the fact that you're using lerna. The deployment package will need to include the code for that specific lambda and the node_modules folder for that lambda - you can zip these and upload them manually, or use something like SAM or CloudFormation.
Edit: as you rightly point out you'll end up with symlinks in your node_modules folder which make things awkward to package up. To get around this, you could run something like this prior to packaging for deployment:
cp -rL lambdas/funcA/node_modules lambdas/funcA/packaged/node_modules
The -L will force the symlinked directories to be copied into the folder, which you can then zip.
I have used a custom script to copy the dependencies during the install process. This allows me to develop and deploy the application with the same code.
Project structure
In the package.json file of lambda_a, I have the following:
"scripts": {
"install": "node ./install_libs.js #libs/library_a"
},
@libs/library_a can then be used by the lambda code with the following statement:
const library_a = require('@libs/library_a')
For SAM builds, I use the following command from the lambdas folder:
export SAM_BUILD=true && sam build
install_libs.js
console.log("Starting module installation")
var fs = require('fs');
var path = require('path');
var {exec} = require("child_process");
if (!fs.existsSync("node_modules")) {
fs.mkdirSync("node_modules");
}
if (!fs.existsSync("node_modules/#libs")) {
fs.mkdirSync("node_modules/#libs");
}
const sam_build = process.env.SAM_BUILD || false
libs_path = "../../"
if (sam_build) {
libs_path = "../../" + libs_path
}
process.argv.forEach(async function (val, index, array) {
if (index > 1) {
var currentLib = libs_path + val
console.log(`Building lib ${currentLib}`)
await exec(`cd ${currentLib} && npm install` , function (error, stdout, stderr){
if (error) {
console.log(`error: ${error.message}`);
return;
}
console.log(`stdout: ${stdout}`);
console.log('Importing module : ' + currentLib);
copyFolderRecursiveSync(currentLib, "node_modules/#libs")
});
}
});
function copyFolderRecursiveSync(source, target) {
var files = [];
// Check if folder needs to be created or integrated
var targetFolder = path.join(target, path.basename(source));
if (!fs.existsSync(targetFolder)) {
fs.mkdirSync(targetFolder);
}
// Copy
if (fs.lstatSync(source).isDirectory()) {
files = fs.readdirSync(source);
files.forEach(function (file) {
var curSource = path.join(source, file);
if (fs.lstatSync(curSource).isDirectory()) {
copyFolderRecursiveSync(curSource, targetFolder);
} else {
copyFileSync(curSource, targetFolder);
}
});
}
}
function copyFileSync(source, target) {
var targetFile = target;
// If target is a directory, a new file with the same name will be created
if (fs.existsSync(target)) {
if (fs.lstatSync(target).isDirectory()) {
targetFile = path.join(target, path.basename(source));
}
}
fs.writeFileSync(targetFile, fs.readFileSync(source));
}
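For completeness, day-to-day usage then looks something like this (the folder names are assumptions based on the structure above):

# local development: npm runs the "install" script above, which builds and copies @libs/library_a
cd lambdas/lambda_a && npm install
# SAM build, run from the lambdas folder so the extra path prefix applies
export SAM_BUILD=true && sam build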

How can I call Django's manage.py from GNOME Builder?

I have GNOME Builder 3.24.1 installed on Ubuntu 17.04. I have a functional Django project and an associated virtualenv. (Django 1.11, Python 3)
How can I configure Builder so that when I click Run it invokes manage.py runserver in the virtualenv? (Ideally I'd like to be able to run other manage.py functions too, like manage.py collectstatic.)
This is not really possible, as GNOME Builder is tightly integrated with Flatpak. As far as I know, the host-system build system only supports auto-detected run targets, and only one of those.
However, if you create a Flatpak JSON manifest, you can set the command to be run in the manifest's command variable - though that is probably not everything you want, since it means the application runs in a Flatpak sandbox.
Setup
To do that, you can create a new Python GNOME application in GNOME Builder called djangoproj. This generates a project that uses the Meson build system and an org.gnome.djangoproj.json manifest. The next step would be to remove the GNOME application parts - or you can just ignore them and add your Django dependencies.
Add the required modules before the native modules. For just Django this is:
[…]
"modules" : [
  {
    "name": "python3-Django",
    "buildsystem": "simple",
    "build-commands": [
      "pip3 install --no-index --find-links=\"file://${PWD}\" --prefix=${FLATPAK_DEST} Django"
    ],
    "sources": [
      {
        "type": "file",
        "url": "https://pypi.python.org/packages/1b/50/4cdc62fc0753595fc16c8f722a89740f487c6e5670c644eb8983946777be/pytz-2018.3.tar.gz",
        "sha256": "410bcd1d6409026fbaa65d9ed33bf6dd8b1e94a499e32168acfc7b332e4095c0"
      },
      {
        "type": "file",
        "url": "https://pypi.python.org/packages/54/59/4987ae4a4a8be8507af1b213e75a449c05939ab1e0f62b5e90ccea2b51c3/Django-2.0.3.tar.gz",
        "sha256": "769f212ffd5762f72c764fa648fca3b7f7dd4ec27407198b68e7c4abf4609fd0"
      }
    ]
  },
  {
    "name" : "djangoproj",
    "buildsystem" : "meson",
    […]
If you have additional dependencies there is a handy tool to generate the necessary json lines: https://github.com/flatpak/flatpak-builder-tools/tree/master/pip
Now you can add the Django project files using the host system.
django-admin startproject sample
Meson needs to know about the new files, so add subdir('sample') to the root meson.build and create new meson.build files in the subdirectories. The meson.build in the sample directory looks like this for me; for the sample/sample directory you'd need to adjust the moduledir and the djangoproj_sources.
pkgdatadir = join_paths(get_option('prefix'), get_option('datadir'), meson.project_name())
moduledir = join_paths(pkgdatadir, 'djangoproj')
python3 = import('python3')
conf = configuration_data()
conf.set('PYTHON', python3.find_python().path())
conf.set('VERSION', meson.project_version())
conf.set('localedir', join_paths(get_option('prefix'), get_option('localedir')))
conf.set('pkgdatadir', pkgdatadir)
subdir('sample')
djangoproj_sources = [
'manage.py',
]
install_data(djangoproj_sources, install_dir: moduledir)
Now you can set the command in org.gnome.Djangoproj.json to bash, and after pressing Run, the window that would otherwise show the program's logs contains an interactive shell. There you can explore your newly created Flatpak, with Django included in the /app/ directory. If you want to run the Django app you'd do:
$ python3 /app/share/djangoproj2/djangoproj2/manage.py runserver
You can also write this command in the command variable of the JSON file to launch it directly when pressing the "play" button.
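For illustration, the relevant fragment of the manifest could look like this (a minimal sketch with everything else elided; setting it to bash gives the interactive shell described above):

{
  "app-id" : "org.gnome.Djangoproj",
  […]
  "command" : "bash"
}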
All the other commands work too - however, keep in mind that the environment is a Flatpak that is recreated on every rebuild, so nothing that needs to persist can be saved in the Flatpak directory.

How to make mocha watch, compile and test coffeescript with dependencies on save

I'm working on a project that uses coffeescript for development and testing. I run the tests in node with mocha's --watch flag on so I can have the tests run automatically when I make changes.
While this works to some extent, only the ./test/test.*.coffee files are recompiled when something is saved. This is my directory structure:
/src/coffee
-- # Dev files go here
/test/
-- # Test files go here
The mocha watcher responds to file changes inside the /src and /test directories, but since only the files in the /test directory are recompiled, continuous testing is kind of borked. If I quit and restart the watcher process, the source files are also recompiled. How can I make mocha run the coffee compiler over the development files listed as dependencies inside the test files on each run?
Here is my answer using grunt.js.
You will have to install grunt and a few additional packages:
npm install grunt grunt-contrib-coffee grunt-simple-mocha grunt-contrib-watch
And write this grunt.js file:
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-contrib-coffee');
  grunt.loadNpmTasks('grunt-simple-mocha');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.initConfig({
    coffee: {
      // Compile dev sources from src/coffee into src/
      dev: {
        expand: true,
        cwd: 'src/coffee',
        src: ['*.coffee'],
        dest: 'src/',
        ext: '.js'
      },
      // Compile the test files in place
      test: {
        expand: true,
        cwd: 'test',
        src: ['test.*.coffee'],
        dest: 'test/',
        ext: '.js'
      }
    },
    simplemocha: {
      dev: {
        src: 'test/test.js',
        options: {
          reporter: 'spec',
          slow: 200,
          timeout: 1000
        }
      }
    },
    watch: {
      all: {
        files: ['src/coffee/*', 'test/*.coffee'],
        tasks: ['buildDev', 'buildTest', 'test']
      }
    }
  });

  grunt.registerTask('test', 'simplemocha:dev');
  grunt.registerTask('buildDev', 'coffee:dev');
  grunt.registerTask('buildTest', 'coffee:test');
  // Use a name other than 'watch' so we don't shadow grunt-contrib-watch's own task
  grunt.registerTask('dev', ['buildDev', 'buildTest', 'test', 'watch:all']);
};
Note: I didn't have all the details on how you build / run your tests, so you'll certainly have to adapt ;)
Then run the grunt dev task:
$> grunt dev
Using a Cakefile with flour:
flour = require 'flour'
cp = require 'child_process'

task 'build', ->
  bundle 'src/coffee/*.coffee', 'lib/project.js'

task 'watch', ->
  invoke 'build'
  watch 'src/coffee/', -> invoke 'build'

task 'test', ->
  invoke 'watch'
  # spawn takes the executable and its arguments separately
  cp.spawn 'mocha', ['--watch'], {stdio: 'inherit'}
Mocha already watches the test/ folder, so you only need to watch src/.
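As a side note, depending on your mocha and CoffeeScript versions you may also need to tell mocha how to compile .coffee test files on the fly; with the toolchain of this era that was typically done with the --compilers flag (the module name is an assumption about your setup):

mocha --watch --compilers coffee:coffee-script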