Can npm-shrinkwrap be added while the project is already started? - npm-shrinkwrap

I am currently halfway through my Node.js project and now I want to use npm-shrinkwrap for dependency management. So is it OK if I do it now, or should I hold off?

You can add a shrink-wrap to your project at any point, so yes, it's OK.
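For example (a minimal sketch, assuming your dependencies are already installed in node_modules):

npm shrinkwrap

This writes an npm-shrinkwrap.json that pins the exact versions currently installed, and npm will respect it on subsequent installs.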
As a side note, you may want to take a look at yarn, an alternative dependency management tool that does a much better job of locking down dependencies than npm-shrinkwrap.
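For comparison, yarn generates its lockfile automatically:

yarn install

This produces a yarn.lock pinning every transitive dependency, with no separate shrinkwrap step.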


I want to use JSQMessagesViewController without pod install! Can I do it?

I opened another project on my Mac and it won't build because I get an error for JSQMessagesViewController, so I think I need to install it. I want to add JSQMessagesViewController to my project, but I have a problem with pod install. Can I use another way, or is that the only way?
If I have to use that method, please tell me how to use it in the terminal.
In the end, when my project doesn't build, is it just because of the JSQMessagesViewController pod file or not? Thanks a lot.
I am a little confused about your situation and what some of your questions are asking for, but as to this one:
Can I use another way, or is that the only way?
You can clone the project and add the source directly to your Xcode project, without using pods. The drawback with this is updating: you will have to manually download the latest version and add it to your project every time you want new features that are released with this framework.
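For example (assuming the library's usual GitHub location; check the URL and tag for the version you need):

git clone https://github.com/jessesquires/JSQMessagesViewController.git

Then drag the library's source folder into your Xcode project so it builds with your app.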
Let me know if you need more clarification, or if I can help with your other questions; I would just need more details. 🖖🏽

Implementing single script build - non-portable dependencies

It seems that a build-system best practice is to have a single script that can build all the source and package the releases. See Joel Test #2.
How do you account for non-portable dependencies? For example, if you code for .NET 4, then you need .NET 4 installed on the box. The standard MS release of .NET 4 is not xcopy-deployable (unless I'm mistaken?). I can see a few avenues:
1. The dependencies are clearly stated in some resource (wiki, txt, whatever). When you call the build script, the build fails if you don't have the dependency installed. This is an acceptable outcome.
2. The build script is responsible for setting up the environment. So if you require .NET 4 and it's not on the box, it installs it for you.
3. A flavor of #2: instead of installing dependencies, the script spawns a pre-packaged image (virtual machine, Amazon EC2 AMI) that is set up with all dependencies.
4. ???
When implementing a build script you have to ask yourself how much work you want to (or can) spend on it. That leads to the question of how often you have to set up the build environment. I can see that #2 would be the perfect solution, but it would need a lot of work, since usually you have more than one non-portable dependency.
So we use #1. And it works quite well. The most important thing is that the build script starts with some sort of self-test: it looks for everything needed to build the whole software and gives an error if something is not found, with a clear error message, so that any new developer knows what to do to get it running. Of course, as with a lot of software, it is never really finished and gets extended as needed. The drawback that this test takes some seconds is insignificant when the whole build process takes minutes.
A wiki (or anything else) with the setup instructions was not a good solution for us, since after three months nobody remembered where it was, whereas the build script is used every day.
The build script itself is a collection of many different things, chosen as needed. It starts with a batch file (we are using Windows) which invokes a lot of other things: other batches, MSBuild, home-grown tools. Each step checks for its own dependencies, so the problem stays local and you can see a few lines later why that particular thing is needed.
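As a minimal sketch of that self-test idea (the tool names and paths below are illustrative placeholders, not our actual script):

@echo off
rem Self-test: verify every build dependency before doing any real work.
if not exist "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" (
    echo ERROR: .NET 4 / MSBuild not found. Install .NET Framework 4 first.
    exit /b 1
)
where nuget >nul 2>&1
if errorlevel 1 (
    echo ERROR: nuget.exe is not on the PATH. See the setup notes in the repository.
    exit /b 1
)
echo Self-test passed, starting build...

Each check fails fast with a message that tells a new developer exactly what to install.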
Number 2 states "Can you make a build in one step?" As described, this means that for a development team to be effective, the build process must be as simple as possible, to reduce errors and ensure consistency. This is especially important as a team gets larger. You want to make sure everyone is building the same thing. (What is done with that package should also be simple, but it is not as important IMHO.) MSBuild is great at this; it provides the facilities to set up a build server that accesses the source control system independently, so the developers' actions can't corrupt the build environment. I highly recommend setting up a build server using TFS -- many build issues will go away and you will have the one-click build Joel describes.
As for your points about what that package does for deployment -- you have many options with MS, but the more "one click" you can make it, the better. I believe this is slightly different from Joel's #2. In his example he describes changing what software he will use for the install, not because one performs with fewer steps, but because one can be incorporated into a one-step build.

What are best practices for making Erlang releases?

I've been checking out Faxien+Sinan and Rebar, and the basic philosophy of Erlang/OTP seems to be: install applications and releases on a single Erlang image instance. What are the best practices for keeping releases self-contained? Is there a way to package releases such that you don't have to modify the site for machines that you're deploying to? How about gathering all dependencies into the codebase for management?
Perhaps I'm going against the grain... I come from a Java background and the philosophy of "nothing pre-installed but the JVM" seems very different.
IMHO this can't be answered in a few sentences. You really should read parts of the included documentation, especially the "Erlang/OTP System Documentation" (otp-system-documentation-X.Y.Z.pdf, with X.Y.Z being the version number), or have a look at the book "Erlang and OTP in Action", because throughout that book there is one example of a "service" with different "parts", taken from the first steps through applying Erlang/OTP concepts to finally building a "release".
IMHO it is currently the best book around, because it not only introduces Erlang but also shows what OTP is and how OTP is used in a project. And it is not just a collection of loose samples; everything is built around a single project.
I'll describe the approach that currently works for me for regular (often daily) releases to a small number of instances on EC2:
I set up my project with rebar and check it into github.
All of my dependencies are listed in my rebar.config file (they too are on GitHub); a sketch of what the deps section looks like follows this list.
My Makefile looks similar to what I described here.
My EC2 image only has a regular build of erlang and no other libs installed by default.
To create a new node, I spin up an instance, clone my git repository, and run make. This will fetch my dependencies and build everything.
To update my code I do a git pull and a rebar update-deps. Depending on what changed I may restart the node or, quite often, I'll attach to the running node and reload the updated modules. It helps to have start and attach scripts as part of your project.
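As an illustration of the deps entry in rebar.config (the dependency shown here is just an example, not my actual list):

{deps, [
    {webmachine, ".*",
        {git, "git://github.com/webmachine/webmachine.git", {branch, "master"}}}
]}.

rebar get-deps (or a make target wrapping it) fetches these into deps/ and builds them along with the rest of the project.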
It may be helpful to look at how a project like webmachine is packaged.
I don't know much about the standard OTP release management system, other than it looks like a lot of work. Because this seems counter to rapid deployment, I never gave it a serious try - though I'm certain it makes sense for other projects.

What is the purpose of a dedicated "Build Server"? [closed]

I haven't worked for very large organizations and I've never worked for a company that had a "Build Server".
What is their purpose?
Why aren't the developers building the project on their local machines, or are they?
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
The only place I see a build server being useful is for continuous integration, with the build server constantly building what is committed to the repository. Is it that I have just not worked on projects large enough?
Someone, please enlighten me: What is the purpose of a build server?
The continuous-integration reason you give is actually a huge benefit. Builds that go to QA should only ever come from a system that builds only from the repository. This way build packages are reproducible and traceable. Developers manually building code for anything except their own testing is dangerous: too much risk of stuff not getting checked in, being out of date with other people's changes, etc.
Joel Spolsky on this matter.
Build servers are important for several reasons.
They isolate the environment. The local Code Monkey developer says "It compiles on my machine" when it won't compile on yours. This can mean out-of-sync check-ins or it could mean a dependent library is missing. JAR hell isn't nearly as bad as DLL hell; either way, using a build server is cheap insurance that your builds won't mysteriously fail or package the wrong libraries by mistake.
They focus the tasks associated with builds. This includes updating the build tag, creating any distribution packaging, running automated tests, creating and distributing build reports. Automation is the key.
They coordinate (distributed) development. The standard case is where multiple developers are working on the same code base. The version control system is the heart of this sort of distributed development, but depending on the tool, the developers may not interact with each other's code much. Instead of forcing developers to risk bad builds or worry about merging code overly aggressively, design the build process so that the automated build can see the appropriate code and process the build artifacts in a predictable way. That way, when a developer commits something with a problem, like not checking in a new file dependency, they can be notified quickly. Doing this in a staging area lets you flag the code that has built, so that developers don't pull code that would break their local build. PVCS did this quite well using the idea of promotion groups. ClearCase could do it too using labels, but would require more process administration than a lot of shops care to provide.
What is their purpose?
They take load off developer machines and provide a stable, reproducible environment for builds.
Why aren't the developers building the project on their local machines, or are they?
Because with complex software, amazingly many things can go wrong when just "compiling through". Problems I have actually encountered:
Incomplete dependency checks of different kinds, resulting in binaries not being updated.
Publish commands failing silently, with the error message in the log ignored.
Builds including local sources not yet committed to source control (fortunately, no "damn customers" message boxes yet...).
When trying to avoid the above problem by building from another folder, some files picked up from the wrong folder.
The target folder where binaries are aggregated containing additional stale developer files that should not be included in the release.
We have seen an amazing increase in stability since all public releases start with a get from source control onto an empty folder. Before, there were lots of "funny problems" that "went away when Joe gave me a new DLL".
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
What's "reasonable"? If I run a batch build on my local machine, there are many things I can't do. Rather than pay developers for builds to complete, pay IT to buy a real build machine already.
Is it I have just not worked on projects large enough?
Size is certainly one factor, but not the only one.
A build server is a distinct concept from a Continuous Integration server. The CI server exists to build your projects when changes are made. By contrast, a build server exists to build the project (typically a release, against a tagged revision) in a clean environment. It ensures that no developer hacks, tweaks, unapproved config/artifact versions, or uncommitted code makes it into the released code.
The build server is used to build everyone's code when it is checked in. Your code may compile locally, but you most likely won't have all the changes made by everyone else all the time.
To add on what has already been said :
An ex-colleague worked on the Microsoft Office team and told me a complete build sometimes took 9 hours. That would suck to do it on YOUR machine, wouldn't it?
It's necessary to have a "clean" environment free of artifacts of previous versions (and configuration changes) in order to ensure that builds and tests work and don't depend on the artifacts. An effective way to isolate is to create a separate build server.
I agree with the answers so far in regards to stability, traceability, and reproducibility. (Lots of 'ity's, right?) Having ONLY ever worked for large companies (health care, finance) with MANY build servers, I would add that it's also about security. Ever seen the movie Office Space? If a disgruntled developer builds a banking application on his local machine and no one else looks at it or tests it... BOOM. Superman III.
These machines are used for several reasons, all trying to help you provide a superior product.
One use is to simulate a typical end user configuration. The product might work on your computer, with all your development tools and libraries set up, but the end user most likely won't have the same configuration as you. For that matter, other developers won't have the exact same setup as you either. If you have a hardcoded path somewhere in your code, it will probably work on your machine, but when Dev El O'per tries to build the same code, it won't work.
Also, they can be used to monitor who broke the product last, with what update, and where the product regressed. Whenever new code is checked in, the build server builds it, and if it fails, it's clear that something is wrong and the user who committed last is at fault.
For consistent quality and to get the build 'off your machine' to spot environment errors and so that any files you forget to check in to source control also show up as build errors.
I also use it to create installers as these take a lot of time to do on the desktop with code signing etc.
We use one so that we know that the production/test boxes have the same libraries and versions of those libraries installed as what is available on the build server.
It's about management and testing for us. With a build server we always know that we can build our main "trunk" line from version control. We can create a master install with one-click and publish it to the web. We can run all of our unit tests each time code is checked in to make sure it works. By collecting all these tasks into a single machine it makes it easier to get it right repeatedly.
You are right that developers could build on their own machines.
But these are some of the things our build server buys us, and we're hardly sophisticated build makers:
Version control issues (some have been mentioned in earlier responses)
Efficiency. Devs don't have to stop to make builds locally. They can kick it off on the server and get on to the next task. If builds are large, then that is even more time the dev's machine is not occupied. For those doing continuous integration and automated testing, even better.
Centralization. Our build machine has scripts that make the build, distribute it to UAT environments, and even to production staging. Keeping them in one place reduces the hassle of keeping them in sync.
Security. We don't do much special here, but I'm sure a sysadmin can make it such that production migration tools can only be accessed on a build server by certain authorized entities.
Maybe I'm the only one...
I think everyone agrees that one should
use a file repository
do builds from the repository (and in a clean environment)
use a continuous testing server (e.g. CruiseControl) to see if anything is broken after your "fixes"
But no one cares about automatically built versions.
When something was broken in an automatic build, but it's not anymore - who cares? It's a work in progress. Someone fixed it.
When you want to do a release version, you run a build from the repository. And I'm pretty sure you want to tag the version in the repository at that time, not every six hours when the server does its work.
So, maybe "build server" is just a misnomer and it's actually a "continuous test server". Otherwise it sounds pretty much useless.
A build server gets you a sort of second opinion of your code. When you check it in, the code is checked. If it works, the code has a minimum quality.
Additionally, remember that low-level languages take much longer to compile than high-level languages. It's easy to think "Well look, my .NET project compiles in a couple of seconds! What's the big deal?" A while back I had to mess with some C code and I had forgotten how much longer it takes to compile.
A build server is used to schedule compile tasks (e.g. nightly builds) of usually large projects located in a repository that can sometimes take more than a couple of hours.
A build server also gives you a basis for escrow, being able to capture all the parts necessary to reproduce a build in the case that others may have rights to take ownership.

How-to: programmatic install on Windows?

Can anyone list the steps needed to programmatically install an application on Windows? Aside from copying the files where they need to be, what are the additional steps needed so that your app will be a first-class citizen in Windows (i.e. show up in the programs list, the uninstall list, etc.)?
I tried to Google this, but had no luck.
BTW: this is for an unmanaged C++ application (developed in Qt), so I'd rather not involve the .NET Framework if I don't have to.
I highly recommend NSIS. Open Source, very active development, and it's hard to match/beat its extensibility.
To add your program to the Add/Remove Programs (or Programs and Features) list, add the following reg keys:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\PROGRAM_NAME]
"DisplayName"="PROGRAM_NAME"
"Publisher"="COMPANY_NAME"
"UninstallString"="PATH_TO_UNINSTALL_PROGRAM"
"DisplayIcon"="PATH_TO_ICON_FILE"
"DisplayVersion"="VERSION"
"InstallLocation"="PATH_TO_INSTALLATION_LOCATION"
I think the theme of the answers you'll see here is that you should use an installation program, and that you should not write the installer yourself. Use one of the many installer makers, such as Inno Setup, InstallShield, or anything else someone recommends.
If you try to write the installer yourself, you'll probably do it wrong. This isn't a slight against you personally. It's just that there are a lot of little details that an installer should consider, and a lot of things that can go wrong, and if you want to write the installer yourself, you're just going to have to get all those things right. That means lots of research and lots of testing on your part. Save yourself the trouble.
Besides copying files, installation tasks vary quite a bit depending on what your program needs. Maybe you need to put an icon on the Start menu; an installer tool should have a way to make that happen very easily, automatically filling in the install location that the customer chose earlier in the installation, and maybe even choosing the right local language for the shortcut's label.
You might need to create registry entries, such as for file associations or licensing. Your installer tool should already have an easy way to specify what keys and values to create or modify.
You might need to register a COM server. That's a common enough action that your installer tool probably has a way of specifying that as part of the post-file-copy operation.
If there are some actions that your chosen installer tool doesn't already provide for, the tool will probably offer a way to add custom actions, perhaps through a scripting language, or perhaps through linking external code from a DLL you would write that gets included with your installer. Custom actions might include downloading an update from a specific Web site, sending e-mail, or taking an inventory of what other products from your company are already installed.
A couple of final things that an installer tool should provide are ways to apply upgrades to an existing installation, and a way to uninstall the program, undoing all those installation tasks (deleting files, restoring backups, unregistering COM servers, etc.).
I've used Inno Setup to package my software for C++. It's very simple compared to heavy-duty solutions such as InstallShield. Everything can be contained in a single setup.exe without creating all these crazy batch scripts and so on.
Check it out here: http://www.jrsoftware.org/isinfo.php
It sounds like you need to check out the Windows Installer system. If you need the nitty-gritty, see the official documentation. For news, read the installer team's blog. Finally, since you're a programmer, you probably want to build the installer as a programmer would. WiX 3.0 is my tool of choice - open source code, from Microsoft to boot. Start with this tutorial on WiX. It's good.
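To give a feel for WiX, here is a minimal sketch of a .wxs authoring file (the product name, GUIDs, and file names are placeholders you would replace):

<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="MyApp" Language="1033" Version="1.0.0.0"
           Manufacturer="MyCompany" UpgradeCode="PUT-A-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" />
    <Media Id="1" Cabinet="myapp.cab" EmbedCab="yes" />
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLDIR" Name="MyApp">
          <Component Id="MainExecutable" Guid="PUT-A-GUID-HERE">
            <File Id="MyAppExe" Source="MyApp.exe" KeyPath="yes" />
          </Component>
        </Directory>
      </Directory>
    </Directory>
    <Feature Id="Complete" Level="1">
      <ComponentRef Id="MainExecutable" />
    </Feature>
  </Product>
</Wix>

You compile this with candle.exe and link it with light.exe to produce an .msi; Windows Installer then gives you the Add/Remove Programs entry and uninstall behavior for free.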
The GUI for Inno Setup (highly recommended) is ISTool.
You can also use the MSI installer built into Visual Studio; it has a steeper learning curve (i.e. it is a pain) but is useful if you are installing software in a corporate environment.
To have your program show up in the Start menu, you would create a shortcut to the program you want to launch in the folder
C:\Documents and Settings\All Users\Start Menu\Programs
(If you want your application listed directly in the Start menu, or in the Programs submenu, you would put your shortcut in the respective directory.)
To programmatically create a shortcut you can use IShellLink (see the MSDN article).
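A minimal sketch of that approach in unmanaged C++ (error handling trimmed, link with ole32.lib; the paths are illustrative, and real code should resolve the Programs folder with SHGetFolderPath rather than hardcoding it):

// Creates a .lnk shortcut via the IShellLink COM interface.
#include <windows.h>
#include <shlobj.h>

HRESULT CreateShortcut(LPCWSTR targetExe, LPCWSTR shortcutPath)
{
    IShellLinkW* link = NULL;
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IShellLinkW, (void**)&link);
    if (SUCCEEDED(hr)) {
        link->SetPath(targetExe);                // what the shortcut launches
        IPersistFile* file = NULL;
        hr = link->QueryInterface(IID_IPersistFile, (void**)&file);
        if (SUCCEEDED(hr)) {
            hr = file->Save(shortcutPath, TRUE); // write the .lnk to disk
            file->Release();
        }
        link->Release();
    }
    return hr;
}

int main()
{
    CoInitialize(NULL);
    // Placeholder paths for illustration only.
    CreateShortcut(L"C:\\Program Files\\MyApp\\MyApp.exe",
                   L"C:\\Documents and Settings\\All Users\\Start Menu\\Programs\\MyApp.lnk");
    CoUninitialize();
    return 0;
}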
Since you want to uninstall, that gets a lot more involved because you don't want to simply go deleting DLLs or other common files without checking dependencies.
I would recommend using a setup/installation generator; especially nowadays, with Vista being so persnickety, it is getting rather complicated to roll your own installation if you need anything more than a single executable and a Start menu shortcut. I have been using the Paquet Builder setup generator for several years now (the registered version includes uninstall).
You've already got the main steps. One you left out is to install on the Start menu and provide an option to create a desktop and/or Quick Launch icon.
I would encourage you to look into using a setup program, as suggested by Jeremy.