Why does Terser Webpack plugin generate different file names in development and production modes? - webpack-5

I have recently upgraded to Webpack 5, and had to drop Uglify in favour of the Terser plugin.
However now when I build my project I get different output files when I'm in different modes.
// mode: 'development'
vendors-node_modules_axios_index_js-node_modules_vue-loader_lib_runtime_componentNormalizer_j-66b5c5.js
vendors-node_modules_css-loader_dist_runtime_api_js-node_modules_css-loader_dist_runtime_cssW-d8fbbe.js
vendors-node_modules_fullstack-phone_client_index_js-node_modules_fullstack-phone_server_load-a7472a.js
vendors-node_modules_vuedraggable_dist_vuedraggable_umd_js.js
// mode: 'production'
284.js
328.js
730.js
This makes it hard to link the files in my templates and load them into my project without writing template logic to specifically pick the chunks I need, discover their file names, and load them.
How can I have Terser output the same file names in both development and production modes, but keep the right chunking?

If you use webpack 5, change chunkIds to 'named' in production mode; see the docs: https://webpack.js.org/configuration/optimization/#optimizationchunkids
optimization: {
  chunkIds: 'named'
},

Related

Do ES5 modules accelerate webpack build time compared with ES6 modules?

By convention, when we write an ES6 module, we put the source code in a src folder, compile it to ES5 using babel-loader and webpack into a lib or dist folder, set the main entry to the dist folder, and then publish to npm.
On the one hand, users can consume this module without webpack and the code still runs. On the other hand, when using webpack, ES5 code may reduce babel-loader time because it is already ES5.
What confuses me is the second point: when using webpack, does ES5 code in node_modules reduce babel-loader time and so speed up the webpack build?
The question is really about ES5 npm modules and webpack build performance. Although this convention is well established, I just want to understand its effect on build performance. Thanks!
Yes, generally public packages are distributed with sources that have already been transformed. The performance benefit, with regard to Webpack and babel-loader, is that you can consume these sources as-is without having to process them with babel-loader, so you'll commonly see:
{
  test: /\.js$/,
  loader: 'babel-loader',
  exclude: /node_modules/
}
So, I too am confused about this excerpt, specifically why one would want to parse ES5 code with Babel, since no transformation would eventually take place.
Either way, the sources are always parsed by Webpack, and not having to parse and transform them beforehand with babel-loader should improve performance.

TFS 2015 builds : Is it possible to use Variables in Repository mappings?

When creating a vNext build on TFS 2015 you can define variables, which are then used in build steps, and can also be used as environment variables in scripts the build runs.
The build I am working on runs scripts that pull files from mapped locations, so it would be great if I could define a variable and use it in a mapping. For example, if I update a reference in the project being built, I could simply update the variable with the new location and have the repository mappings and scripts all pull correctly from it, without having to make the change in multiple places.
I have tried doing this by setting up the variable and mapping as follows,
but this generates an error when I try to save the build, complaining that there are two '$' characters in the mapping. Is there a way to do this, or is this not currently possible?
This has been causing me havoc for quite a while as well.
For starters, there is a uservoice request for this feature. You can add your votes and input here to get Microsoft to allow this feature: https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/14131002-allow-variables-in-repository-variables-and-trigg
Second, we've developed a workaround that gets us most of the way there. It's not perfect, but it might be useful to you if you're comfortable with the tradeoffs or can work around the deficiencies.
Start by turning off the "Label Sources" option of the build and mapping the Server Path field to your base build location. You'll want to add a custom variable to the build definition to tell the build instance which TFS location to pull from. For example, we have a base project and then multiple branches from it, so our source is structured like this:
$/Team Project/Project1
$/Team Project/Project1Branch1
$/Team Project/Project1Branch2
$/Team Project/Project1Branch3
and we create a variable named "Branch" that we can set to "Branch1", "Branch2", and so forth.
When we want to build the base project, we leave the Branch variable blank when launching the build. For branch builds, we set it to the name of the branch we want to build.
Then our build steps look like this:
1. Remap Workspace Folder to Branch Folder
2. Get Files for Specified Branch (we have to do this manually after remapping our workspace)
3. Compile the Source in the Specified Branch
4. Publish Build Artifacts from the Specified Branch
5. Label the Code of the Specified Branch Manually
The Remap task runs the command
tf workfold "$/Team Project/Project1$(Branch)" "$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)"
The Manual Get task runs the following command
tf get /recursive /noprompt "$/Team Project/Project1$(Branch)"
The build uses the Branch variable to point to the correct location of the solution file for the specified branch
$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)\SolutionFile.sln
The Publish Artifacts task uses the Branch variable in both the Contents field and the Path field
Example in Contents
**\$(Build.DefinitionName)$(Branch)\bin
The Label Code task uses the following command
tf label "$(build.buildNumber)" "$/Team Project/Project1$(Branch)" /recursive
The downside of this setup is that you don't capture Associated Changes and Work Items to your subsidiary branches as the Server Path field is always set to the main location. This may not be an issue if you always merge from your branches to your main location prior to launching a build meant to go to production. What you can do to compensate for this really depends on your use case.
With some tweaking, you could use this same format to specify full paths as well if you needed to.
It's impossible, just as the error message says: there are two '$' characters in the mapping. This means your application's path can't vary from build to build.
Mappings on the Repository page are used to specify the source control folder that contains the projects to be built in the build definition. You can set it by clicking the Ellipsis (...) button; however, you can't include variables in the mapping path.
There is a similar question: Variables in TFS Mappings on Visual Studio Online Team Builds

OpenLayers 3 Build from master

I've cloned the OpenLayers 3 repo and merged the latest from master. There exists a recently merged pull request that I'm interested in exploring, but I'm not sure how to create a regular old comprehensive, non-minified build.
Does anyone know how to create a non-minified, kitchen sink (everything included) build for OpenLayers?
(similar to ol-debug.js).
You can use the ol-debug.json config to concatenate all sources for the library without any minification.
node tasks/build.js config/ol-debug.json ol-debug.js
Where the ol-debug.json looks like this:
{
"exports": ["*"],
"umd": true
}
The build.js task generates builds of the library given a JSON config file. The custom build tutorial describes how this can be used to create minified profiles of the library. For a debug build, you can simply omit the compile member of the build config. This is described in the task readme:
If the compile object is not provided, the build task will generate a "debug" build of the library without any variable naming or other minification. This is suitable for development or debugging purposes, but should not be used in production.

Custom Project Type Templates

When you create a new project in WebStorm, you are given the option to create a new directory structure prepopulated with files (libraries, stylesheets, etc.) for patterns like HTML5 boilerplate, Twitter boilerplate, etc.
How does one create one's own template for this? Is importing dummy projects the hack for it?
I suggest not using templates. I find it far easier and more maintainable to create "empty" projects (from existing projects, of course) in a git repo (Bitbucket, GitHub, ...), clone one, and start from there.
The .idea directory should be in the repo, but .idea/workspace.xml should be ignored, as per the documentation.
This gives you the opportunity to gradually refine your template, and share it easily with a team.
Use the LivePlugin plugin to create a project template:
<projectTemplate projectType="foo" templatePath="resources/bar.zip" category="true"/>
Use the Velocity Template Language (VTL) to create a file template:
File and code templates are written in the Velocity Template Language (VTL), so they may include:
Fixed text (markup, code, comments, etc.): in a file based on a template, the fixed text is used literally, as-is.
File template variables: when creating a file, the variables are replaced with their values.
#parse directives to include other templates defined in the Includes tab on the File and Code Templates page of the Settings dialog box.
Other VTL constructs.
References
Create project template extensions using "user defined" templates
Creation of Extension to applicationConfigurable
IdeaPlugin.xml
PlatformExtensionPoints.xml
LivePlugin: Plugin for writing IDE plugins
Webstorm Help: File and Code Templates
Configuring JetBrains WebStorm for UI5 development
Webstorm Project and IDE Settings
Idea NodeJS Plugin
Apache Velocity Engine VTL Reference

Generate ibm-webservices-ext.xmi and ibm-webservices-bnd.xmi without RAD

I'm working on webservices for WebSphere and I want to stop depending on the Rational Software Delivery Platform (aka RAD) IDE.
I'm asking if someone knows if it is possible to generate the following files:
ibm-webservices-ext.xmi
ibm-webservices-bnd.xmi
webservices.xml
without having to use RAD (e.g. via an Ant script or a WebSphere batch tool).
This is a really annoying lock-in.
I'm trying to port these webservices projects to a more controllable development process, using Maven, automated builds, and so on, but I have found it quite difficult.
Has someone solved similar issues?
If anyone is still looking for help with this, we took a slightly different approach by creating the RAD and WAS 8.5 specific files at project creation time.
For my current project, we have a fairly standard project structure and naming convention so we use a Maven archetype to create our projects and include those IBM specific files, ibm-webservices-bnd.xmi in particular, in the Maven archetype.
Easiest way to do this is to take an existing project that has those necessary files, and use the create-from-project archetype from your project folder:
mvn clean archetype:create-from-project -Dinteractive=true
Use interactive mode to give the archetype a sensible archetype.artifactId (but do not change the GAV of the project):
Define value for archetype.groupId: com.name.archgroup: : com.name.common.archetype
Define value for archetype.artifactId: MyService-archetype: : service-archetype-0.8
Define value for archetype.version: 1.0-SNAPSHOT: :
Define value for groupId: com.name.archgroup: :
Define value for artifactId: MyService: :
Define value for version: 1.0-SNAPSHOT: :
Define value for package: com.name: : com.name.common.archetype
This gets you most of the way, but the IBM files do not get processed by default. The trick then is to modify the generated target files in /MyService/target/generated-sources/archetype/target/classes/archetype-resources to also modify the IBM files. Replace instances of the old project name and package name with ${rootArtifactId} and ${groupId} keeping track of which files had the incorrect values.
Then modify the /MyService/target/generated-sources/archetype/target/classes/META-INF/maven/archetype-metadata.xml to include the files you had to manually change in the filtering. For instance, under my EJB module section, *.xmi was included but not filtered. Move the include to the filtered fileset:
<fileSet filtered="true" encoding="UTF-8">
<directory>src/main/resources</directory>
<includes>
<include>**/*.xml</include>
<include>**/*.properties</include>
<include>**/*.xmi</include>
</includes>
</fileSet>
You'll need to do this for everything that you modified to include a ${rootArtifactId} or ${groupId} so that velocity processes it in the next step:
cd target\generated-sources\archetype
mvn install
This packages up your changes and places the jar into your local repository so that you can test it out before publishing to your Maven repository server.
Once you are satisfied, add your Maven repositories to target/generated-sources/archetype/pom.xml and run
mvn deploy
And instruct developers to begin using the archetype to create your mavenized projects.
Note: our ibm-webservices-bnd.xmi files appear to include something like xmi:id="RouterModule_112345678901234".
We remove this value before running mvn install, as it appears to be project specific.