I would like to know what is involved in the process of building a web application with ember build. As far as I know, JavaScript is an interpreted language and does not require compilation (unlike Java or C++).
You are right that JavaScript is an interpreted language, not a compiled one. You also most probably know that in web development we use <script> tags to include JavaScript code in HTML pages. But at some point a web application grows so big that the developer needs to break the JS code into a few files. That's not a problem - we can have 2 or 3 or 5 script tags. However, for bigger apps, the kind we need frameworks like Ember for, we need to break the code into tens or even hundreds of files. And having tens of script tags on a page is a different thing from having 2-3: a large number of external resources slows down page load. That's the first problem, and it means developers need a tool that at least concatenates all the files into one.
Another problem: our JS source is nicely formatted, but formatting means a lot of extra bytes - spaces, tabs, new lines, long variable names - and all those extra bytes slow down page load too. To solve this problem other tools were invented - minifiers (or "uglifiers"), which take formatted code and strip out all those extra bytes.
Next, it's more convenient not to just concatenate all the JS files in the correct order, but to have a module system that isolates the code in each module and lets you require that module from any other - and then to concatenate all of it into one file and uglify it.
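To make that concrete, here is a minimal sketch of CommonJS-style modules (the file names and functions are made up for illustration):

// math.js - keeps its code isolated and exports only what it wants to share
module.exports = {
  add: function (a, b) { return a + b; }
};

// app.js - requires the module where it is needed
var math = require('./math');
console.log(math.add(2, 3)); // 5

The build tool resolves those require() calls, concatenates the modules in the correct order into one file, and then uglifies the result.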
Have you heard about ES6? It's a newer, better JavaScript standard, but it's still not supported by all browsers. To use its features you need a tool that converts (transpiles) ES6 code into ES5 (which is supported everywhere). The best-known such tool is called Babel.
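As a rough before/after illustration (hypothetical snippet), Babel takes ES6 like this:

// ES6 source
const square = (x) => x * x;

and emits ES5 that any browser understands, roughly:

// transpiled ES5 output
var square = function (x) {
  return x * x;
};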
Also, developers sometimes need to manage non-JS assets, like CSS or images: concatenating stylesheets, moving images around.
Building is the process of running all of these tasks to turn a lot of pretty source files into one ugly but efficient file. Ember-cli is a toolset that does all of those things, and ember build runs every task needed to build the web application. Other frameworks may have their own toolsets, and if you don't use any framework, there are framework-agnostic tools like gulp and webpack that help you create your own build process.
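To give a flavor of what such a build script looks like, here is a minimal gulpfile sketch (gulp 3-style API; it assumes the gulp-concat and gulp-uglify plugins are installed, and the paths are hypothetical):

var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

// Concatenate all source files into app.js, minify it, and write it to dist/
gulp.task('build', function () {
  return gulp.src('src/**/*.js')
    .pipe(concat('app.js'))
    .pipe(uglify())
    .pipe(gulp.dest('dist'));
});

Running gulp build then produces the single dist/app.js that the page includes with one script tag.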
I hope I answered your question. In fact, ember-cli does more work than I described; you can find more features and details at ember-cli.com.
We are currently using RequireJS/Backbone for development and Firebug for debugging. We are thinking of moving to Ember and using Ember App Kit.
I noticed that because of the new ES6 JavaScript modules, the application needs to be pre-compiled into a single JavaScript file, app.js.
I am concerned that this will make it difficult to debug problems, because you are dealing with a massive single file instead of the small ones we have at the moment and can easily find in Firebug.
Has this been an issue for people? Are there any good solutions?
As kingpin2k mentions, Ember App Kit has been effectively superseded by Ember-CLI. I would recommend looking into that. Depending on your needs and planning, Ember-CLI may or may not be suitable for your situation. Some people have successfully put Ember-CLI apps in production, but this is brand new technology, so caveat emptor.
Ember-CLI provides a build system based on Broccoli that will transpile ES6 modules, compact the output into a single JavaScript file, and lots more. Ember-CLI is still under heavy development, but is already shaping up quite nicely. In my opinion, the clean code organization and fast Broccoli build are really quite awesome.
Modern browsers such as Firefox and Chrome come with an integrated debugger that will show you the original source when source maps are supplied. This will eventually be provided to the browser in Ember-CLI projects as well when you run the development server, but the functionality is currently incomplete. It is possible to get some source map support in Ember-CLI now; have a look at this issue.
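For reference, source maps work through a special comment at the end of the generated bundle that points the debugger at a map file, something like this (the file name here is hypothetical):

// ...minified code above...
//# sourceMappingURL=app.js.map

When the browser's dev tools see that comment and can fetch the .map file, they show you the original source files instead of the combined output.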
In the meantime, there are other ways to debug code, of course, and I suspect that before proper source map support lands in Ember-CLI/Broccoli, liberal use of console logging and such may be sufficient. Running Ember-CLI's live-reload development server means that when you change and save a file in your project, the results show up almost instantaneously in the browser; the Broccoli build is blazing fast.
Keep in mind that minifying and combining all JavaScript code into a single output file is a common approach in single-page application frameworks such as Ember, Angular, and Backbone. Debugging these applications with breakpoints and such will happen more and more through the browser's debug tools in combination with source maps.
Update
By now the Ember core team actively recommends Ember-CLI. It is quite awesome.
I'm trying to optimize my company's application.
The architecture at this time is composed of different folders (inside a common folder) for different sections of the app (for example Managing Bills, Managing Canteen, Managing Events, etc.).
Every JS and CSS file is included in the first page of the application (login.html), because I'm using the simple page template of jQuery Mobile.
Now I'm considering adding some other components to make the app easier to maintain and maybe speed it up a bit.
What do you think about:
RequireJS to divide each section into a module, so I can load only the JavaScript of a particular module at run time instead of loading everything in login.html
Inline @imports for CSS files to produce a single composite stylesheet
Uglify.js to minimize file sizes
Handlebars.js to create reusable HTML fragments
Do you think this is a good way of working for an application that will keep growing as new sections are added?
Can you think of other tools?
Thank you
This is a very broad question. I think you're on the right track... I'll list some libraries that could be worth trying:
Require.js - will give you the ability to have "modules" and to dynamically determine and load dependencies. Alternatively, some people prefer patterns such as the Revealing Module Pattern, jQuery-plugin style, or CommonJS-style modules. For what it's worth, I recommend Require.js.
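To give a flavor, here is a minimal AMD-style sketch with Require.js (the module names are made up):

// greeter.js - defines a module with no dependencies
define([], function () {
  return {
    hello: function (name) { return 'Hello, ' + name; }
  };
});

// main.js - loads the module asynchronously and uses it
require(['greeter'], function (greeter) {
  console.log(greeter.hello('world'));
});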
Bower is a package manager; you can use it to bower install [package]. They have a lot of packages here and you can also link it to your own repo. This could be helpful for managing dependencies.
Uglify.js and Google Closure Compiler are both good for minifying your code. Remember that some minification configurations, such as advanced mode, can break your code. Run your tests against the minified version of your source code.
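For instance, Uglify.js can also be used programmatically; a minimal sketch with the uglify-js npm package (v3-style API, where minify() takes a code string - check your installed version, since the signature changed between major releases):

var UglifyJS = require('uglify-js');

var result = UglifyJS.minify('function add(first, second) { return first + second; }');
// result.code holds the minified source: whitespace stripped, names mangled
console.log(result.code);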
QUnit is good for doing JavaScript unit tests. There are many other alternatives like Jasmine, which is what the Cordova guys use.
Lodash is another (faster) implementation of underscore.js that provides a lot of utility methods for working with arrays, objects, functions, etc. It also includes templating support. There's a good talk by the author here.
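A couple of one-liners to show the flavor of those utilities (a sketch, with made-up data):

var _ = require('lodash');

_.map([1, 2, 3], function (n) { return n * 2; }); // [2, 4, 6]
_.template('Hi <%= name %>!')({ name: 'Bob' });   // "Hi Bob!"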
There are MV* JavaScript frameworks that could help more than jQuery (DOM + AJAX + animations) or jQuery Mobile (mostly UI), such as Dojo, AngularJS, Backbone, and Ember.js.
For UI you may want to check out Adobe's topcoat repository and website. There are also Twitter Bootstrap and Foundation, which allow you to do responsive design out of the box. If you're set on jQuery Mobile, I personally like this Flat theme.
JSDoc and YUIDoc are good alternatives for documenting your JavaScript code.
I have no idea how well those tools will interact inside Worklight applications. It should be fine, since Worklight doesn't impose a certain set of JavaScript libraries you have to use. However, I haven't personally tried most of them inside Worklight applications.
Ember doesn't seem to play well with RequireJS or with building via r.js without some hackery.
It's a bit hard to manage a large application, so I like to break my file structure down HMVC-style, which makes modules a bit easier to manage:
app
- blah
- modules
  - module1
    - controller
    - model
    - view
Has anyone come up with a build stack to automate the compilation into a single file, with dependency management?
Update: You should now be starting new projects using ember-cli, which provides all the build/dev tools plus many other useful features.
Original answer:
An example Ember project using grunt.js as a build system:
https://github.com/trek/ember-todos-with-build-tools-tests-and-other-modern-conveniences
I've used this as a starting point for some Ember apps and it works well.
Ember doesn't seem to play well with RequireJS or with building via r.js without some hackery.
Why do you want to use require.js in the first place?
As for making this all work, you'll need a build tool. I recommend brunch (Node) or iridium (Ruby). Brunch is simpler and supports many different module formats. Iridium is much more powerful; it uses Minispade for modules. Require.js/AMD is not needed because your JS is always delivered as a single concatenated file.
For stand-alone ember dev I use ember-skeleton as a starting point, which is itself based primarily on rake-pipeline.
https://github.com/interline/ember-skeleton
This provides the whole kit and caboodle for beginning an app, but the meat of it is the rake-p Assetfile, which describes how rake-pipeline will digest all the files, filter their contents, and ultimately combine them into the final handful of files. Additionally, the combination of loader.js and the minispade filter allows the use of require() statements for managing dependencies amongst your files.
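To give a rough idea of how that looks (a sketch from memory; check the loader.js/minispade docs for the exact API), each file gets wrapped into a named registration and is then pulled in by name:

// the build filter wraps each source file into a registration like this
minispade.register('app/models/user', function () {
  // ...original file contents...
});

// and other files pull it in with an ordinary-looking require
minispade.require('app/models/user');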
It's finally time (after years of postponing) to localize my app into a few languages other than English.
The first challenge is to design the integration into my C++ / MFC application that has dozens of dialogs and countless strings. I came across two possible alternative implementations:
Compile and deploy localized resource files as DLLs
Extract and replace all strings with the localized version. For each language there will be an XML (or simple text) file.
Personally I opt for the second alternative since it seems to me more flexible. The changes are many but not hard to make, and very importantly the XML files will be very easy to modify for the translators.
Any advice is greatly appreciated.
Regards,
Cosmin Unguru
http://www.batchphoto.com/
I did some long-lived MFC projects with different languages.
I strongly recommend the first approach with resource-only DLLs.
The reasons:
(1) If the user does an XCOPY install, he always has the default language (English) in the main executables.
(2) If you don't translate everything (e.g. you're late with your release or forget some strings), the Windows resource functions, if used properly, automatically return the resource in the default language - you don't have to implement that fallback on your own.
(3) My very personal opinion: (a) line breaks, tabs, and whitespace in XML files are a pain in your a**; (b) merging XML files is even worse...
(4) Don't forget the encoding. It's okay in XML but your translators might use an unsuitable editor and damage the file.
And now for the main reason:
(5) You will have to rearrange many of your dialogs, because many strings are longer in e.g. French or German than in English. And making all statics, buttons, ... larger "just in case" looks crappy.
Another hint: Spend some bucks and buy one of the translation tools which import your projects / binaries and build up a translation database. This will be amortized after the first release.
Another hint (2): If possible, make a release which doesn't contain any changes but only the multi-language feature. In the future, too, if possible: release your product in English, then do the translation in one single step (per language) and release the other languages.
A good and friendly suggestion from somebody who has worked a lot with localization:
Grab GNU Gettext,
Mark all your strings as _("English").
Extract all strings using the gettext tool xgettext and compile the dictionaries.
Translate the strings using great tools like Poedit.
Use gettext in your project and make your localization life simpler!
You can also use boost::locale for the same purpose - it uses GNU Gettext dictionaries and the same approach, but provides a different and more powerful runtime, and for Windows developers it has a very good addon: it supports the wide strings that MFC requires for normal Unicode support.
Don't use resources and other "translation" tools that are total crap from a linguistic point of view (and from a developer's point of view as well).
Further reading: http://cppcms.sourceforge.net/boost_locale/html/tutorial.html
Using a DLL resource library is a relatively straightforward operation, and allows you to manage not only strings, but other resources as well. And this is its main advantage, because i18n is not only about string translation.
However, depending on your needs, a text-based solution may be the better decision because of its easier handling - resource scripts are more complex than XML files, especially for the average translator.
I would suggest creating your own abstraction layer, something like "LoadLocalizedString", etc.; in this way, you can start implementing it just with text files, and then change to something more complex when and if required in a transparent way - all the effort for making your software i18n aware would still be valid.
In our case we had different dialogs per language. The resource file was the same, as the multiple languages were implemented at development time. You could basically append the different languages to existing resource files. I hope it helps you find your way.
The DLL option is commonly used for this since the resource loading procedure (e.g. LoadLibrary) is already written - meaning you don't have to write any parsing/loading code. While XML is easier to edit, DLLs have a bit more security (users won't be able to easily edit them) and will allow the developer (meaning you) more time to work on application logic instead of writing a language loading system.
// Load the resource-only DLL for the chosen language
HMODULE hLangDLL = LoadLibrary(TEXT("text_en.dll"));
// more stuff
TCHAR mybuffer[1024] = {0};
// Fetch the localized string from the DLL's string table
if (hLangDLL != NULL)
    LoadString(hLangDLL, IDS_MYSTRING, mybuffer, 1023);
If it is just the strings that are changing then I agree that XML is the way forward here for the exact reasons you outline. Easy for other people to edit, easy to change language at runtime, etc.
The only reason (in my eyes) that you'd choose option 1 is if things other than strings are being localized, such as needing different icons.
If it's just text? I say go with the XML.
I'm doing some i18n on a web-based app using Django, which uses gettext as its i18n foundation. It seems like an obvious idea that translations should be stored in the database, and not difficult to do, but po files on the filesystem are still being used. Why is this?
My current suspicion is that the benefits of developing a database backend are simply outweighed by the reliability and familiarity of gettext as a well-established package. Are there other significant reasons for continuing to store the translations on the filesystem?
Performance is the main reason. Gettext is not using a database because a database will always be considerably slower than a file. The load time of the dictionary is very important and for this reason almost everyone is using files for that.
Also, the compiled gettext files (.mo) are optimized for loading into memory, and for this reason they are more appropriate than plain-text files (like the uncompiled .po files).
You can always use a translation platform, probably one that uses a database backend, for doing the translation, and then export the results to text files. Examples: Pootle, Narro, Launchpad Rosetta, Transifex (hosted only).
Do not confuse your application's language files with the localization database. Your application should use file-based dictionaries, which are fast to load, while your localization system will probably have to use a database and, logically, be able to export data to files.
By the way, using gettext is probably the best technological decision you can make regarding localization. I have never seen any commercial or in-house solution able to compete with it on features, tools, or even support.
It's a very common way to do translations that has been around for a long time, allowing any issues to be ironed out over the years. I imagine that in writing something like gettext it would be all too easy to make incorrect generalisations about how languages work. Why should the Django development team spend time researching and developing that when it's already been done in a tried and tested system? Furthermore, professional translators probably know what to do with PO files, whereas a home-brew translation database may prevent them from working in the ways they're used to.
Why would you prefer translations in a database? I guess you might prefer it because you could build a translation interface on top of the database. If that's the case, have a look at Pootle: it's a powerful web-based translation interface that works directly with PO files and can even integrate with common version control systems. Add some post-commit hooks and you can have such a system with little work and without the overhead of a translations database.
Hope that helps.
This may seem like an obvious idea to you, but I don't think everybody will agree. AFAIK Django uses .po files for the following reasons:
Version control - you would have to create additional ".po to database" tools, because you still need to manage different people working on translations, and you can't get away from having .po files for that purpose.
gettext is a standard way of doing translations in the *nix world; there are many tools for working with it and it's simple to edit, diff, etc.
No need to hit the database to translate anything. Some views can work without any DB requests, so there's no need to tie them to the database just to get translations. (I may be wrong, but in the case of mod_wsgi, translations are loaded once and stored in memory for every thread.)
Btw, if you need different translations for fields, that's a slightly different question; you should check http://www.muhuk.com/2010/01/dynamic-translation-apps-for-django/ and choose the app that best fits your needs.