move output file after ember build is complete - ember.js

I'm using ember-cli 0.0.28, which depends on BroccoliJS to build the distributable source for my front-end application. The problem I'm having is that whenever I (re)build, I need the index.html file to be copied (or rather moved) to my back-end's template directory, from which I serve the application.
I can't figure out how to configure the Brocfile.js in the ember-cli project directory to do this after the build is complete.
I've used a symlink for the time being, which works but is a dead link until the front-end application is built with ember build. I think it's possible to use grunt-broccoli to run the build as a grunt task?! I don't know if this is the way forward though.
Using broccoli-file-mover is easy enough but it works with current trees, not future trees!
All help is appreciated.

ember-cli has progressed quite a bit, but this question is still, fundamentally, valid, and there are myriad ways to address it.
If the front-end build is to be bundled with the back-end assets, a symlink from the build/dist directory to the back-end's assets directory is adequate for most development phases.
Now, ember-cli also allows proxying to the back-end via the ember server command, which is useful if you're building an API-backed app.
ember-cli-deploy also appears to be an excellent way to deploy front-end apps, which can help with deploying to a development or production environment. It has many packs, but I've reverted to using the redis pack as it provides an easy way to check out a front-end revision with a small back-end tweak, like this:
defmodule PageController do
  # Assumes a standard Phoenix controller; "MyAppWeb" is a placeholder for your
  # app's web module and pulls in html/2 and the Plug.Conn helpers used below.
  use MyAppWeb, :controller

  def index(conn, %{"index_id" => sha}) do
    case _fetch_page_string(sha) do
      {:ok, output_string} -> html(conn, output_string)
      {:error, reason} -> conn |> send_resp(404, reason)
    end
  end

  defp _fetch_page_string(sha) do
    # some code to fetch page string (content)
    ...
  end
end
In the above index page handler, we attempt to match an index_id query param; if it's present, we look up the corresponding page string, which can be checked into e.g. a key/value store.

Is webpack code splitting good for saving AWS cost?

Our website is built with heavy node_modules.
Many heavy packages make node_modules heavy, which eventually drives up our AWS cost.
So we are trying to reduce total bundle size to reduce some cost.
What I found is that, by splitting code with webpack, we can have more but smaller bundles. Though the overall size doesn't change a lot.
What I'm wondering is: can splitting code with webpack save some AWS cost?
Sorry for the poor question, but this is all I can think of right now.
Any ideas would be very helpful. Thanks.
This is a complex topic and an equally open-ended question! To answer it, I will make a few assumptions:
By saving AWS cost, we mean reducing bundle size so that outgoing bandwidth cost is saved.
The application being built is a 100% SPA, i.e. fully client-side. Server-side optimization gets very complex quickly.
Out of the box, Webpack will bundle everything into one big file/bundle. It contains your own code as well as code from third-party libraries. The fundamental idea here is that third-party code rarely changes while your own code changes frequently.
So, we can use Webpack to split our code into two distinct chunks using SplitChunksPlugin: one for our own code and another for third-party code, i.e. code from the node_modules folder; let's call it the vendor bundle. Now, as long as your node_modules folder remains constant, i.e. your lock file (package-lock.json) is constant, it will always produce the same bundle with the exact same content for third-party code.
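A minimal configuration for this might look like the following (the cacheGroup name and output filename pattern are just common conventions, not requirements):
// webpack.config.js
module.exports = {
  output: {
    // content-based hashes so an unchanged vendor chunk keeps the same file name
    filename: '[name].[contenthash].js',
  },
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/, // everything imported from node_modules
          name: 'vendors',                // emitted as vendors.<contenthash>.js
          chunks: 'all',
        },
      },
    },
  },
};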
The next part of the idea is that you can simply take this vendor bundle, upload it to a CDN, and serve it from there. The CDN and browser will do their caching magic, and users will rarely need to download this file again. The CDN will use the ETag and/or Cache-Control headers to achieve this, and the browser will honor them.
However, the reality is different. When you have many dependencies and/or use Dependabot to update them, you will update your lock file often. This means a new vendor bundle gets generated on each build, even if the difference is a single byte; the hash generated by Webpack will be different. In another scenario, the way you import the dependencies may also change the generated bundle content, resulting in a different bundle.
So, architecturally, we do better vendor bundling by making use of CDNs. The first step is to distinguish between stable third-party modules and frequently updated third-party modules. For example, consider react, react-dom, rxjs, etc. These are not updated often. For these libraries, make use of a third-party CDN like Cloudflare, cdnjs or unpkg, and add them as CDN-based UMD packages.
For this, you will add these dependencies to your index.html file which is typically generated using html-webpack-plugin.
<!-- index.html -->
<script crossorigin src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>
Now, simply tell Webpack not to bundle these dependencies, which you have already made available via the CDN scripts. Use Webpack externals to do that:
// webpack.config.js
module.exports = {
  externals: {
    'react': 'React',
    'react-dom': 'ReactDOM',
  },
};
With this configuration, Webpack is not only going to exclude React from the bundle but will also speed up your build. Wherever you import from the react library, Webpack will substitute the global React object.
You can then extend this model to all the stable libraries you are using. Another important advantage of this approach is that other websites your users have already visited may have cached these exact files in the browser via the same CDN.
To automate your workflow, you can customize Webpack or any bundler script to inject these scripts with the exact version by reading the package.json file for your dependencies and then generating the <script> tags. That way you still have a single source of truth for your dependency versions. You can read the CDN documentation to understand how to construct CDN URLs for the dependencies.
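As a rough sketch of that automation (the helper below and the unpkg URL pattern are assumptions; check your CDN's docs for the exact URL format), you could derive the tags from package.json and pass them to html-webpack-plugin via templateParameters:
// webpack.config.js (sketch)
const HtmlWebpackPlugin = require('html-webpack-plugin');
const pkg = require('./package.json');

// Libraries to load from the CDN; keep this list in sync with your externals config.
const cdnLibs = ['react', 'react-dom'];

const cdnScripts = cdnLibs
  .map((name) => {
    // strip semver range prefixes like ^ or ~ to get a concrete version
    const version = pkg.dependencies[name].replace(/^[\^~]/, '');
    // unpkg-style URL pattern (assumed) - verify against the CDN you actually use
    return `<script crossorigin src="https://unpkg.com/${name}@${version}/umd/${name}.production.min.js"></script>`;
  })
  .join('\n');

module.exports = {
  plugins: [
    new HtmlWebpackPlugin({
      template: 'src/index.ejs',
      // available inside the template as <%= cdnScripts %>
      templateParameters: { cdnScripts },
    }),
  ],
};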

What is the difference between Puppeteer and Electron with respect to using them to test the client side of an app

I have been rushing to get my app, a mixture of a Node.js server driving an SQLite database and a LitElement-based client side providing the UI, into a usable state as a beta release. I achieved that a couple of days ago and now I am (belatedly, I know) thinking about how to put together a test framework. However, I am really struggling to understand how best to test the client side. I think it's because I am having difficulty understanding conceptually what the two main choices of framework are. Before I go into more detail, let me explain the structure of the app in top-level terms.
At the project root level there are three main directories: node_modules, which comprises all the modules I've pulled in (including lit-element and web-components-loader, which are client-side elements - but see below); server, which contains all the code for the server side of my application; and client, which consists of all the code for the client side of my app. I run rollup ONLY at module install time to "package" lit-element (and the directives I use) and the web-component-loader, effectively treeshaking and copying them to client/libs. As a result my client is coded to assume the modules are in the libs directory AND I DO NOT NEED OR HAVE any build stage. I guess the root of the client is index.html, which pulls in a service-worker.js and main-app.js. main-app is the root of a tree of lit-element based components that make up the entire client app. Nginx is the web server for all the static files in the client, but it also acts as a proxy to pass any URLs that start with /api to a standard node http web server (not even express, although I do use the router, body-parser and final-handler modules), and these get passed to various API handlers, each of which is a separate javascript file - although these can "require" a few common modules that I have written and those in the node_modules directory.
I plan on using jest as the test environment. For my server I think it is easy: for each api handler I want to test, I can build a test script that "requires" the javascript file I want to test. I am in two minds about whether to use an sqlite database for testing or mock something - I am leaning towards the former as I am using better-sqlite3 and it is totally synchronous and very fast. I already have scripts to create empty databases, so I have no worries about test isolation.
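Something along these lines is what I have in mind for the server tests (the handler path, its exported function and the schema are placeholders, not my real files):
// server/api/__tests__/users.test.js (sketch)
const Database = require('better-sqlite3');
const usersHandler = require('../users'); // the api handler under test (placeholder path)

let db;

beforeEach(() => {
  // better-sqlite3 is synchronous, so a fresh in-memory database per test is cheap
  db = new Database(':memory:');
  db.exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
});

afterEach(() => db.close());

test('lists the users in the database', () => {
  db.prepare('INSERT INTO users (name) VALUES (?)').run('Alice');
  const result = usersHandler.list(db); // assumes the handler accepts a db handle
  expect(result).toEqual([{ id: 1, name: 'Alice' }]);
});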
Client testing is where I get confused. I "think" that in essence jest can run tests the same way as for the server, one element at a time. BUT these elements and my test scripts are going to need a "web-platform" set of APIs - not least of which is the entire shadow dom and custom components stuff that lit-element uses. This is where, I think, puppeteer or electron come in, with their associated jest plugins which can put these platform APIs into the test environment. But, and this is the essence of my confusion, puppeteer instructions all start with something like
const browser = await puppeteer.launch({headless: true});
const page = await browser.newPage()
await page.goto(SOME URL);
What is this URL? Do I also have to run a server? I cannot relate this snippet to running a test controlled by jest. All the examples seem to use webpack and typescript, neither of which I know anything about.
The other module I have seen mentioned is electron, in particular this article, which everything else seems to point to (or the same text).
https://www.ninkovic.dev/blog/2020/testing-web-components-with-jest-and-lit-element
From the code snippets in this article it "seems" like it might be what I want, BUT ...
I cannot find many references to electron other than on its own web site. There it tells you to use electron as a tool to build a cross-platform application, but nowhere can I find what it actually is - it assumes you already know. I don't want a UI for my unit testing; I want it to be headless like in puppeteer.
Hence my confusion and why I am unsure how to achieve what I want. Can someone give me some pointers as to:
How can I set up puppeteer to run headless tests without needing a server? OR
What exactly is electron, how is it different from puppeteer, and can I use it to
a) run my tests, and
b) examine the dom elements I have created to check I have created the right ones - in other words, conduct headless tests of my client?
UPDATE
I've done some more digging and am beginning to understand the differences. Let me summarise what I think I have found.
Puppeteer is great for end-to-end testing of your site. You run the tests by launching the bundled headless browser at the home page of your site (or, more likely, a development test site) and programmatically pretend to be a user who can click on buttons etc. You can use various methods, including functions such as document.querySelector(), to check your UI has behaved how you think, or you can take screenshots and compare them with standardised versions. I could possibly use it for unit tests, but I would have to run a server, create a test fixture html page for every test and navigate to it. jest-puppeteer is a package with some of that built in.
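Digging a little further, it also looks like page.goto accepts a file:// URL (and there is page.setContent), so a unit-style test might not need a server at all. A rough sketch, where fixture.html is a hypothetical static page that loads the element under test:
// client/test/my-element.test.js (rough sketch)
const path = require('path');
const puppeteer = require('puppeteer');

let browser, page;

beforeAll(async () => {
  browser = await puppeteer.launch({ headless: true });
  page = await browser.newPage();
});

afterAll(async () => browser.close());

test('my-element renders into its shadow root', async () => {
  // fixture.html is a static page that imports libs/ and defines <my-element>
  await page.goto('file://' + path.resolve(__dirname, 'fixture.html'));
  const text = await page.evaluate(() => {
    return document.querySelector('my-element').shadowRoot.textContent;
  });
  expect(text).toContain('expected text');
});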
Electron is a platform for building apps. The article I was referencing was using jest-electron, a test runner built with electron. So worrying about electron itself is a red herring; I should be looking at jest-electron.
My main concern right now, I think, is that I need different jest configurations for my three scenarios:
unit tests on the server
unit tests on the client
end to end testing of the complete app.
Given I have only one package.json file and one set of node_modules, I need to figure out a way to have three different jest config files.
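One option I'm considering (file names and globs are just guesses at this point) is three config files selected with jest's --config flag from npm scripts:
// jest.server.config.js - similar files for the client and e2e scenarios
module.exports = {
  testEnvironment: 'node',
  testMatch: ['<rootDir>/server/**/*.test.js'],
};

// package.json (scripts section)
// "test:server": "jest --config jest.server.config.js",
// "test:client": "jest --config jest.client.config.js",
// "test:e2e":    "jest --config jest.e2e.config.js"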

How do those who are not using a backend framework (such as Rails/Symfony/Django) go about developing and deploying an Ember application's assets?

More specifically, when using a backend application framework I am generally afforded some level of asset management, which allows me to work with multiple uncompressed, unminified files in development; in production those files are automatically minified, compressed, and concatenated into a single file.
I am looking to create an Ember application that is a single page app that interfaces with a separate RESTful services layer. I simply do not need the weight of a framework behind the Ember app and am hoping to serve it as static html+css+js, so I am looking for any guidance on how to easily manage development and deployment of a client-side only app without adding much overhead.
Right now my biggest issue is with including JS (and to a lesser extent, CSS) files. My HTML is static and I have an Ember app comprised of many files, so I have many script tags to include them all. This is clearly not appropriate for production so I imagine some kind of build tool will be needed to assemble my Javascript files and overwrite the script tags in the HTML file. Are there tools out there right now that will do this? Is there another approach that I may be overlooking?
This is my first fully client-side application so it's very possible that I just need to make a paradigm shift, having done server-side applications for so long.
Agreed, this can be tricky without a backend framework. For sure, script tags are not the way to go and you will need some kind of build tool for production deployment.
Ember App Kit is a solution a few of us have been working on. It's still early days, but I've used it for a couple of projects so far and it's been much better than trying to roll my own with grunt. I would expect it to become the default starting point for ember apps in the near future; to try it now, just download it as a zip and then read the Getting Started Guide.
There are many other solid solutions out there, consider checking out:
ember-tools
brunch-with-ember-reloaded
brunch-with-hapmsters
charcoal
I use a combination of requirejs and Grunt, using these lovely functions and this one, which can compile your ember-handlebars templates into functions. (grunt-contrib includes the ability to watch for changes in your files and perform various build steps, which may differ depending on whether you are in development or production.) You can have separate grunt tasks which run different steps for production or development. Of course, for all of this you are going to need node!
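As a rough illustration only (the paths and task choices are placeholders, not taken from my actual setup), a Gruntfile that concatenates and minifies the scripts and rebuilds on change could look like this:
// Gruntfile.js (sketch using grunt-contrib-concat, grunt-contrib-uglify and grunt-contrib-watch)
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['app/**/*.js'],   // the many source files
        dest: 'dist/app.js',    // the single file referenced from index.html
      },
    },
    uglify: {
      dist: {
        files: { 'dist/app.min.js': ['dist/app.js'] },
      },
    },
    watch: {
      scripts: {
        files: ['app/**/*.js'],
        tasks: ['concat'],      // skip minification while developing
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('build', ['concat', 'uglify']);  // production build
  grunt.registerTask('default', ['concat', 'watch']); // development
};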

Django: pre-deployment

Question 1:
I am about to deploy my first Django website and I was wondering what tools are recommended for gathering all your Django files.
For example, I don't need my Sass and CoffeeScript files; I just want the compiled CSS and JS files. I also want to use the correct production settings file.
Question 2:
Do I put these files ready for deployment into their own version control repository? I guess the advantage is that you can easily roll back changes?
Question 3:
Do I run my tests before gathering the files or before deploying?
Shell scripts could be a solution, but maybe there is a better way? I looked at Jenkins/Hudson, but that seems more like a tool that sits on top of the tools I am looking for.
For questions one and two, I'd recommend using a version control system for this. I'm sure you're already using some sort of version control, so you can just say which branch of your repository you would like to deploy. And yes, this makes rollbacks incredibly easy. Probably the most popular method for Django deployment is to package your files using git, and then deploy these files and run any deployment scripts using fabric.
Using git, packaging your files using your local repository would look something like:
git archive --format=tar HEAD | gzip > my_repo.tar.gz
Alternatively, you can first push your changes to a GitHub repository, and then in your deployment script simply clone the repository on your production server.
For your third question, if you use this version control method for packaging your files, then just make sure when you are testing you have the deployment branch checked out.
I'll typically use Fabric for deploying most Django projects:
http://docs.fabfile.org/en/1.0.0/?redir
It has a decent API for communicating with remote servers and it's all in Python – bonus!
You don't need to store your concatenated media files in a separate repo; they're only needed for production. For that, I've found libraries like django-mediasync and django-compress useful. They both provide template tags/settings that can concatenate and cache your static files for you depending on the DEBUG setting/environment (production vs development).
You can run your tests whenever. Some people will run them as a version control hook to prevent broken code from being checked in or during deployment, stopping the deployment in case of test failure.

Is there an ideal way to move from Staging to Production for Coldfusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling each site into an EAR file to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed and an SVN update is run in the production server's working directory, which then triggers a code copy from the working directory to the actual live code. This worked fine, but it has many moving parts and still required some form of server access to each server to run the commits and updates. Plus, while this worked for an individual site, I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access and the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc... the idea being one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base and per-site overlays. Export the core, then lay the site-specific files over it. This ensures that any core updates not overridden by site-specific changes make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or perhaps more flexibly an ant configuration file - per core & site combination. Track version number of core and site as part of a given build.
If your software is stuffed inside an installer (the Nullsoft installer, NSIS, for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being one file that encompasses the whole build.
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See my answer for deployment on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it, meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, along with whatever credentials and connection information is necessary to make that happen - most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your subversion repository, transferring via FTP, compressing javascript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised with the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.