Disable Gradle auto make in Android Studio

In Android Studio 0.2.0, whenever I type anything in my build.gradle files, Gradle decides it's time to rebuild. This takes a long time, generates noise, and kills my battery life. It also never stops, at least not until I finish editing the file… it keeps rebuilding as I type. Lucky me, I use Ubuntu with plenty of free memory.
So… I'd like to deactivate any option to auto make stuff. This is what I've tried so far:
Checking "File" | "Power Save Mode" in the menu.
Unchecking all options and all combinations between them in "Compiler" options, especially "Make Project Automatically".
Nothing works. I'd like a way to build only when I ask: a manual mode with a button, something like that.
I understand Android Studio and the whole build system is very new, with lots of rough edges, but I'm hoping it's just a matter of an obscure flag definition in a file somewhere.
Previous research: this question does not provide enough details or goals, so I asked my own. This G+ thread was a dead end as well. I'm still getting used to the new tooling and may be searching with the wrong keywords, so sorry in advance if this has already been answered somewhere.
Thank you.

Under Preferences > Gradle you can disable auto-import. With auto-import enabled, the IDE re-imports the Gradle project (which right now builds it first) every time you change the file.

Additionally, if you are using a Kotlin build script (Kotlin DSL), then after disabling auto-import as Xavier said, you need one more step: uncheck the Auto Reload checkbox for KotlinBuildScript under Preferences -> Languages & Frameworks -> Kotlin -> Kotlin Scripting.


I accidentally deleted part of my code in C++ Visual Studio

I deleted part of my code, and accidentally saved.
I tried looking at my history and my source explorer, but those were both greyed out.
Help?
Thanks
I'm not sure if you can pull your deleted code out of the void, but you should look into version control software, like Git, to keep track of each version of your programs. I know it doesn't solve your problem now, but it will help a ton in the future if you make changes often!
Go to "Edit" → "Undo" to undo recent changes.
Or, go into your version control system's log and revert to an older committed version.
Or, restore from one of your off-site backups.
The most important points have already been made. However, here are some relevant additions:
The Edit -> Undo (or Ctrl+Z) has to be done file by file if you have several files in your project.
Undo works even after a save, provided the focus is in the editor window (and not the Solution Explorer). However, once you've closed Visual Studio, the undo history is lost for good!
The AutoRecover setting in the Environment section of the Options dialog box ensures that a copy of your unsaved work is saved every couple of minutes and kept for a couple of days (in your Documents folder under Visual Studio 20xx\Backup Files...). Unfortunately, it is designed to protect you against a crash, so the files are removed when you save, and it won't help you here (I mention it only in case you were aware of the backup files and hoped to get a solution from them).
If you're working with Windows 10 and have activated File History backups, you may be lucky enough to find older versions in Explorer. This won't help if you made and deleted the changes without a backup in between (by default, File History only runs about once an hour).
You may not like this suggestion because the VS IDE is very comfortable once you're used to it, but some programming editors can be configured to make a backup of a file before saving it (for example Emacs or Atom). The purpose is exactly to prevent the kind of problem you've just described.
The best approach to avoid losing previous work is of course source code version control, with the corresponding discipline. It's easy to set up: right-click on your solution in the Solution Explorer to activate the feature for your project, then at each significant change, right-click on the solution again to commit the changes. With git, you don't even need to create a central repository if you're working alone on smaller projects. The local repository is sufficient to archive the successive versions of your code and find them again. But again, it's no magic: if you've made a lot of changes and didn't commit them, it won't retrieve them...
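If you prefer the command line to the IDE integration, a local repository takes only a few commands to set up; the folder path and commit message below are just placeholders (run these in Git Bash or any shell):
cd /path/to/MySolution            # placeholder: the folder containing your solution
git init                          # create a local repository; no server needed
git add .                         # stage all current files
git commit -m "Initial commit"    # record this version
After that, commit after each significant change; git log lists the recorded versions, and git checkout <commit> -- <file> brings back an older copy of a file.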

How do I add a new dependency to a Clojure project using emacs or lein?

I use emacs (to be more precise, Spacemacs), and so far I haven't seen any way to add a project dependency (say, ring or hiccup) to my project other than opening ./project.clj and adding a new vector to :dependencies. I'm not comfortable doing this because I need to remember the exact version of the package I want to add, and multiplied by the number of such packages, this amount of information is clearly not for a human head. Meanwhile, I have a strong feeling that it's possible to add a project dependency either via the CLI or in emacs directly (perhaps via Cider?). Is it possible, and how do I do it?
In Spacemacs you can use clj-refactor to help you with this. Navigate to your project.clj, run cider-jack-in with ,' and press ,rap (major mode, refactor, add, project dependency) to invoke cljr-add-project-dependency.
In the menu you can search for an artifact available on Clojars and select one of the available versions. When you press Enter, the dependency is added to the bottom of the :dependencies list.
Managing this by hand is not difficult. As you said you simply open your project.clj file in your editor and add dependencies.
You can find the current version either by checking the project's page or by searching for it on the Clojars or Maven website. If you know what you need, it only takes a few minutes, and if you're not writing throwaway code, a few minutes is nothing compared to the life of the project.
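For what it's worth, the hand-edited entry is only a line per library in project.clj's :dependencies vector; the project name and the version strings below are purely illustrative, so check Clojars for the current releases:
(defproject my-app "0.1.0-SNAPSHOT"            ; placeholder project name/version
  :dependencies [[org.clojure/clojure "1.10.1"]
                 [ring/ring-core "1.9.0"]      ; illustrative version, check Clojars
                 [hiccup "1.0.5"]])            ; illustrative version, check Clojars
After editing, lein deps (or simply the next lein repl/run) will fetch anything that is missing.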
To maintain dependencies, something like lein ancient is very helpful.
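If you haven't used it before: lein-ancient is a Leiningen plugin, typically added to ~/.lein/profiles.clj (the version string below is a placeholder, check Clojars for the current release):
{:user {:plugins [[lein-ancient "0.7.0"]]}}    ; placeholder version
Running lein ancient inside the project directory then lists the entries in :dependencies that have newer releases available.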

How to make Xcode 4 build and run code automatically when changes are saved?

I am using Xcode for test-driven development in C++.
It occurred to me that I would save a lot of time if Xcode could automatically build and run my tests every time I save.
Is there any way to do this (by scripting Xcode or otherwise)? Google doesn't seem to have a clue.
I have seen this workflow when using interpreted languages and it really does increase productivity.
Let's assume that my machine is fast enough to build and run tests in a few seconds.
If you're targeting C++, then you're probably out of luck.
With Objective-C, there's a project called «Injection»:
http://injectionforxcode.com/
It tracks changes to your project files, and when a change occurs, it re-builds the files as categories, placed inside a bundle.
The bundle is then loaded dynamically into the running app, and the contents from the categories replace the running code.
But it's Objective-C. C++ does not have such a runtime or those capabilities.
Anyway, you may want to take a look at it... : )
Automatically? No. But you could write your own FSEvents monitor agent: when a change occurs that requires a rebuild, do something appropriate.
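For what that could look like, here is a rough sketch using the FSEvents C API; the watched path and the build/test command are placeholders for your own, real code would check return values, and you'd compile it with -framework CoreServices:
#include <CoreServices/CoreServices.h>
#include <cstdlib>

// Called whenever something under the watched path changes.
static void onChange(ConstFSEventStreamRef stream, void *info, size_t numEvents,
                     void *paths, const FSEventStreamEventFlags flags[],
                     const FSEventStreamEventId ids[])
{
    // Placeholder: replace with whatever builds and runs your test target.
    std::system("xcodebuild -project MyApp.xcodeproj -target UnitTests && ./build/Debug/UnitTests");
}

int main()
{
    CFStringRef path = CFSTR("/path/to/your/sources");             // placeholder path
    CFArrayRef paths = CFArrayCreate(NULL, (const void **)&path, 1, NULL);
    FSEventStreamRef stream = FSEventStreamCreate(
        NULL, &onChange, NULL, paths,
        kFSEventStreamEventIdSinceNow,
        1.0,                                    // latency: coalesce bursts of saves
        kFSEventStreamCreateFlagNone);
    FSEventStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
    FSEventStreamStart(stream);
    CFRunLoopRun();                             // runs until you Ctrl+C the agent
    return 0;
}
Run the resulting binary in a terminal while you work; the one-second latency lets it turn a burst of saves into a single rebuild.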
The easy way around this: you can configure Xcode to save when building. You don't need to save explicitly; just hit Run with this preference enabled. In that sense, hitting Run is as simple as hitting Save, and it performs the save, build, and run in the correct order. You may want an intermediate target or scheme for this.
Another option would be to use a version-control commit as the trigger for a build and run of your tests (saw your comment: use branches).
No, I don't think this can be done.
Most projects don't build and test in the fraction of a second it would require for it to be practical to do on every save anyway (i.e., whenever Xcode autosaves).
A lot of work has gone into the infrastructure for just getting Xcode's live errors and warnings. As long as your project isn't too weird those live errors ought to give a pretty good proxy for actually building it anyway.
For testing you might want to look into continuous integration if you don't already use it.
Grey-beards who grew up before autosave may have developed the habit of occasionally saving manually with a key command. Such users may be able to change that habit by substituting the key command that runs the tests for the one they currently use to save.

Why should one use a build system over that which is included as part of an IDE?

I've heard more than one person say that if your build process is clicking the build button, then your build process is broken. Frequently this is accompanied by advice to use things like make, cmake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?
EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.
EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about Continuous Integration: yes, I understand completely that when you have many developers on a project, having CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. They are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
Aside from continuous integration needs which everyone else has already addressed, you may also simply want to automate some other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, or running your unit tests, or resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
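As a concrete (and deliberately simplified) illustration, a small makefile can hang those extra steps off the plain compile; every name here (sources, the test runner, the version script) is a placeholder rather than a recommendation, and recipe lines must start with a tab:
CXX      := g++
CXXFLAGS := -O2 -Wall
SRCS     := $(wildcard src/*.cpp)
OBJS     := $(SRCS:.cpp=.o)
all: myapp
myapp: $(OBJS)               # link step; objects come from make's built-in .cpp -> .o rule
	$(CXX) $(CXXFLAGS) -o $@ $^
test: myapp                  # run the unit tests after a successful build
	./run_unit_tests         # placeholder test runner
release: test                # bump the version number only for release builds
	./bump_version.sh        # placeholder script
.PHONY: all test release
The same idea translates to MSBuild or CMake targets; the point is only that the extra steps live next to the compile step and run with one command.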
Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.
One reason for using a build system that I'm surprised nobody else has mentioned is flexibility. In the past, I also used my IDE's built-in build system to compile my code. I ran into a big problem, however, when the IDE I was using was discontinued. My ability to compile my code was tied to my IDE, so I was forced to re-do my entire build system. The second time around, though, I didn't make the same mistake. I implemented my build system via makefiles so that I could switch compilers and IDEs at will without needing to re-implement the build system yet again.
I encountered a similar problem at work. We had an in-house utility that was built as a Visual Studio project. It's a fairly simple utility and hasn't needed updating for years, but we recently found a rare bug that needed fixing. To our dismay, we found out that the utility was built using a version of Visual Studio that was 5-6 versions older than what we currently have. The new VS wouldn't read the old-version project file correctly, and we had to re-create the project from scratch. Even though we were still using the same IDE, version differences broke our build system.
When you use a separate build system, you are completely in control of it. Changing IDEs or versions of IDEs won't break anything. If your build system is based on an open-source tool like make, you also don't have to worry about your build tools being discontinued or abandoned because you can always re-build them from source (plus fix bugs) if needed. Relying on your IDE's build system introduces a single point of failure (especially on platforms like Visual Studio that also integrate the compiler), and in my mind that's been enough of a reason for me to separate my build system and IDE.
On a more philosophical level, I'm a firm believer that it's not a good thing to automate away something that you don't understand. It's good to use automation to make yourself more productive, but only if you have a firm understanding of what's going on under the hood (so that you're not stuck when the automation breaks, if for no other reason).
I used my IDE's built-in build system when I first started programming because it was easy and automatic. I later started to become more aware that I didn't really understand what was happening when I clicked the "compile" button. I did a little reading and started to put together a simple build script from scratch, comparing my output to that of the IDE's build system.
After a while I realized that I now had the power to do all sorts of things that were difficult or impossible through the IDE. Customizing the compiler's command-line options beyond what the IDE provided, I was able to produce a smaller, slightly faster output. More importantly, I became a better programmer by having real knowledge of the entire development process, from writing code all the way down through the generation of machine language. Understanding and controlling the entire end-to-end process allows me to optimize and customize all of it to the needs of whatever project I'm currently working on.
If you have a hands-off, continuous integration build process, it's going to be driven by an Ant or make-style script. Your CI process will check the code out of version control onto a separate build machine when changes are detected, then compile, test, package, deploy, and create a summary report.
Let's say you have 5 people working on the same set of code, each of them making updates to the same set of files. You may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's changes and try. That's easy every once in a while, but it quickly becomes tiresome to do over and over again.
With a build server that does it automatically, the code is checked for everyone, all the time. Everyone always knows if something is wrong with the build and what the problem is, and no one has to do any work to figure it out. Small things add up: it may take only a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially if multiple people are doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.
Here's another cool thing: our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. It reloads snapshots of all the databases it needs to apply patches to and runs them to make sure that they all work and run in the order they are supposed to. The build server is also smart enough to run all the unit tests/automation tests and return the results. Making sure the code compiles is fine, but an automation server can handle many, many steps that would take a person maybe an hour to do.
Taking this a step further, if you have an automated deployment process along with the build server, deployment is automatic: anyone who can press a button to run the process can move code to QA or production. This means a programmer doesn't have to spend time doing it manually, which is error-prone. When we didn't have the process, it was always a crap shoot as to whether or not everything would be installed correctly, and generally a network admin or a programmer had to do it, because they had to know how to configure IIS and move the files. Now even our most junior QA person can refresh the server, because all they need to know is which button to push.
The IDE build systems I've used are all usable from automated-build/CI tools, so there is no need to have a separate build script as such.
However on top of that build system you need to automate testing, versioning, source control tagging, and deployment (and anything else you need to release your product).
So you create scripts that extend your IDE build and do the extras.
One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate changes made by other developers (i.e. merge).
If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. It may be using a text-based format like XML, but XML is notoriously hard to handle with standard diff/merge tools. Just the fact that people are using a GUI to make edits makes it more likely that you end up with unnecessary changes in the project files.
With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.

How-to: programmatic install on Windows?

Can anyone list the steps needed to programmatically install an application on Windows? Aside from copying the files where they need to be, what are the additional steps needed so that your app will be a first-class citizen in Windows (i.e. show up in the programs list, the uninstall list, etc.)?
I tried to google this, but had no luck.
BTW: This is for an unmanaged C++ application (developed in Qt), so I'd rather not involve the .NET Framework if I don't have to.
I highly recommend NSIS. Open Source, very active development, and it's hard to match/beat its extensibility.
To add your program to the Add/Remove Programs (or Programs and Features) list, add the following reg keys:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\PROGRAM_NAME]
"DisplayName"="PROGRAM_NAME"
"Publisher"="COMPANY_NAME"
"UninstallString"="PATH_TO_UNINSTALL_PROGRAM"
"DisplayIcon"="PATH_TO_ICON_FILE"
"DisplayVersion"="VERSION"
"InstallLocation"="PATH_TO_INSTALLATION_LOCATION"
I think the theme of the answers you'll see here is that you should use an installation program and that you should not write the installer yourself. Use one of the many installer-makers, such as Inno Setup, InstallShield, or anything else someone recommends.
If you try to write the installer yourself, you'll probably do it wrong. This isn't a slight against you personally. It's just that there are a lot of little details that an installer should consider, and a lot of things that can go wrong, and if you want to write the installer yourself, you're just going to have to get all those things right. That means lots of research and lots of testing on your part. Save yourself the trouble.
Besides copying files, installation tasks vary quite a bit depending on what your program needs. Maybe you need to put an icon on the Start menu; an installer tool should have a way to make that happen very easily, automatically filling in the install location that the customer chose earlier in the installation, and maybe even choosing the right local language for the shortcut's label.
You might need to create registry entries, such as for file associations or licensing. Your installer tool should already have an easy way to specify what keys and values to create or modify.
You might need to register a COM server. That's a common enough action that your installer tool probably has a way of specifying that as part of the post-file-copy operation.
If there are some actions that your chosen installer tool doesn't already provide for, the tool will probably offer a way to add custom actions, perhaps through a scripting language, or perhaps through linking external code from a DLL you would write that gets included with your installer. Custom actions might include downloading an update from a specific Web site, sending e-mail, or taking an inventory of what other products from your company are already installed.
A couple of final things that an installer tool should provide are ways to apply upgrades to an existing installation, and a way to uninstall the program, undoing all those installation tasks (deleting files, restoring backups, unregistering COM servers, etc.).
I've used Inno Setup to package my software for C++. It's very simple compared to heavy-duty solutions such as InstallShield. Everything can be contained in a single setup.exe without creating all these crazy batch scripts and so on.
Check it out here: http://www.jrsoftware.org/isinfo.php
It sounds like you need to check out the Windows Installer system. If you need the nitty-gritty, see the official documentation. For news, read the installer team's blog. Finally, since you're a programmer, you probably want to build the installer as a programmer would. WiX 3.0 is my tool of choice - open source code, from Microsoft to boot. Start with this tutorial on WiX. It's good.
The GUI for Inno Setup (highly recommended) is ISTool.
You can also use the MSI installer built into Visual Studio; it has a steeper learning curve (i.e. it is a pain) but is useful if you are installing software in a corporate environment.
To have your program show up in the Start menu, you would need to create a folder under
C:\Documents and Settings\All Users\Start Menu\Programs
and add a shortcut to the program you want to launch. (If you want your application to be listed directly in the Start menu, or in the Programs submenu, put the shortcut in the respective directory.)
To programmatically create a shortcut you can use IShellLink (see the MSDN article).
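Here is a minimal sketch of that approach with IShellLink and IPersistFile; the target executable, description, and .lnk path are placeholders, COM must already be initialized (CoInitialize/CoInitializeEx), and error handling is abbreviated:
#include <windows.h>
#include <shlobj.h>     // IShellLinkW, CLSID_ShellLink
#include <objbase.h>    // CoCreateInstance; link with ole32.lib and uuid.lib

HRESULT CreateStartMenuShortcut(LPCWSTR targetExe, LPCWSTR shortcutLnkPath)
{
    IShellLinkW *link = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_IShellLinkW, reinterpret_cast<void **>(&link));
    if (FAILED(hr)) return hr;
    link->SetPath(targetExe);                    // e.g. L"C:\\Program Files\\MyApp\\MyApp.exe"
    link->SetDescription(L"My Application");     // placeholder tooltip text
    IPersistFile *file = nullptr;
    hr = link->QueryInterface(IID_IPersistFile, reinterpret_cast<void **>(&file));
    if (SUCCEEDED(hr)) {
        // The .lnk path would live under the Start Menu\Programs folder mentioned above.
        hr = file->Save(shortcutLnkPath, TRUE);
        file->Release();
    }
    link->Release();
    return hr;
}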
Since you also want to uninstall, things get a lot more involved, because you don't want to simply go deleting DLLs or other shared files without checking dependencies.
I would recommend using a setup/installation generator, especially nowadays with Vista being so persnickety; it is getting rather complicated to roll your own installation if you need anything more than a single executable and a Start menu shortcut.
I have been using Paquet Builder setup generator for several years now.
(The registered version includes uninstall).
You've already got the main steps. One step you left out is installing to the Start menu and providing an option to create a desktop and/or Quick Launch icon.
I would encourage you to look into using a setup program, as suggested by Jeremy.