How to prevent a WiX custom bootstrapper from uninstalling missing packages when upgrading

I have a WiX custom Bootstrapper that installs several MSI packages, let's say packages A, B, C, D and E. Now I want to distribute a new Bootstrapper that upgrades packages A and B but no longer installs packages C, D and E. The problem is that I want to leave packages C, D and E on the machine (if they are already there).
When upgrading, the Bootstrapper will install/upgrade packages A and B, then uninstall the old Bootstrapper to clean up. That will uninstall packages C, D and E because they are no longer part of the product.
Question: How can I prevent packages C, D and E from being uninstalled?

@Shique: If the packages (C, D and E) are no longer included in the Bootstrapper, you will never hit OnPlanPackageBegin for those packages and therefore cannot set the State property to RequestState.None.
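For reference, this is roughly how a managed bootstrapper application would normally force a package's state during planning -- a sketch against the WiX v3 managed BA API, with an illustrative package ID -- and it shows why that route is closed here: the event is only raised for packages still present in the chain.

// Handler wired up in the BA, e.g.: this.PlanPackageBegin += OnPlanPackageBegin;
// Types come from Microsoft.Tools.WindowsInstallerXml.Bootstrapper (WiX v3).
private void OnPlanPackageBegin(object sender, PlanPackageBeginEventArgs e)
{
    // Only raised for packages that are still part of this bundle,
    // so packages C, D and E never reach this handler.
    if (e.PackageId == "PackageC") // illustrative ID
    {
        e.State = RequestState.None;
    }
}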
This is not a direct answer to the question, but a way to remove packages that you no longer want to distribute from your bootstrapper without uninstalling them.
We created a second Bootstrapper containing packages C, D and E. In our original Bootstrapper, now only containing packages A and B, we added the second Bootstrapper as an ExePackage with the Permanent attribute set.
When running the original Bootstrapper, it will upgrade packages A and B and run the second Bootstrapper, which will increment the reference count on packages C, D and E. When the original Bootstrapper comes to the cleanup, it will of course leave packages A and B, but also packages C, D and E, due to the reference held by the second Bootstrapper.
If packages C, D and E are not embedded (compressed) in the second Bootstrapper, remember to add them as Payload elements to the ExePackage.
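For illustration, the chain in the original bundle might then look something like this (a sketch in WiX v3 authoring; the file names are made up):

<Chain>
  <MsiPackage SourceFile="PackageA.msi" />
  <MsiPackage SourceFile="PackageB.msi" />
  <!-- Permanent="yes" keeps this inner bundle, and with it the
       reference count on C, D and E, when the outer bundle is removed. -->
  <ExePackage SourceFile="SecondBootstrapper.exe" Permanent="yes" />
</Chain>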
In this way we still have a handle to packages C, D and E through Apps & features, and the user can choose when to uninstall them.

Related

Does a component imported from an Ember package use its parent's library version?

I tried to use a component from an imported package inside a class in ember.
I expected the component to use its own version of ember-intl.
It resulted in the imported package using the parent's version of ember-intl.
I have ember-intl 5.7.0 in the parent and 4.3.0 used in the imported package.
There is a component called <Calendar> from the imported package used in a parent class which has a string that looks like:
"{Date} was selected for <span class='exampleName'".
4.3.0 will handle this string fine, but 5.7.0 will fail, as the major version change made apostrophes into escape characters :[
node_modules shows that the child package resolves to 4.3.0, but at runtime it fails due to the apostrophe.
The imported component uses a service by injecting it:
import { inject as service } from '@ember/service';
// inside the component definition:
intl: service()
I've added logging to both versions to see which is used and it is the parent version.
I would prefer not to downgrade the parent or alter the child library.
Can anyone explain why this is happening?
If any more info is needed, let me know, thanks.
You can only ever have one copy of a package at any given time -- allowing duplicates would explode your bundle size very rapidly (exponentially, even).
In best case scenarios, the "app version" will win, but in worst case scenarios, you duplicate dependencies in your app -- this can be linted against with ember-cli-dependency-lint
Because ember-intl relies on app-wide state, though, it's not possible to get into duplicate dependencies in your app -- because you can only ever have one copy of a service.
I recommend upgrading the package you're using, since ember-intl is nearly on v6 right now, and v4 is very old.

What is the difference between packages.dhall and spago.dhall files?

spago docs state:
packages.dhall: this file is meant to contain the totality of the packages available to your project (that is, any package you might want to import).
In practice it pulls in the official package-set as a base, and you are then able to add any package that might not be in the package set, or override existing ones.
spago.dhall: this is your project configuration. It includes the above package set, the list of your dependencies, the source paths that will be used to build, and any other project-wide setting that spago will use. (my emphasis)
Why do both files have the notion/concept of dependencies? Example: packages.dhall and spago.dhall from the ebook.
The dependencies from spago.dhall can be found in the project's .spago folder, but I cannot locate the ones from packages.dhall. Some are common to both, like aff. A different perspective:
[...] what you choose is a "snapshot", which is a collection of certain versions of all available packages that are guaranteed to compile and work together.
The snapshot is defined in your packages.dhall file, and then you specify the specific packages that you want to use in spago.dhall. The version for each package comes from the snapshot.
That sounds like spago.dhall is an excerpt of packages from packages.dhall. The note about versions is a bit confusing, as there are no version specifiers in either file.
So, why two files? What is the mental model for someone coming from npm ecosystem with package.json (which might be present as well)?
The mental model is that of a Haskell developer, which is what most PureScript developers used to be, and many still are. :-)
But more seriously, the mental model is having multiple "projects" in a "solution", which is the model of Haskell's de-facto standard package manager, Stack. In Haskell this situation is very common, in PureScript - much less so, but still not unheard of.
In a situation like this it's usually beneficial to have all the "projects" to share a common set of packages, which are all guaranteed to be "compatible" with each other, which simply means that they all compile together and their tests pass. In Haskell Stack this common set of packages is defined in stack.yaml. In Spago - it's packages.dhall.
Once you have this common base set of packages established, each individual project may pick and choose the particular packages that it uses. In Haskell Stack this is specified either in package.yaml or in <project-name>.cabal (the latter being phased out). In Spago - it's spago.dhall.
But of course, when you have just the one project, having both packages.dhall to establish the "base set" of packages and then, separately, spago.dhall to pick some particular packages from that set - may seem a bit redundant. And indeed, it's possible to do without the packages.dhall file completely: just specify the URL of the package set directly in spago.dhall, as the value of the packages property:
{ name = "my-project"
, dependencies = [ ... ]
, license = "..."
, packages = https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201223/packages.dhall
, repository = "..."
, sources = [ "src/**/*.purs" ]
}
This will work, but there is one important caveat: hashing. When the URL of the package set is specified in packages.dhall, running spago install will compute a hash of that package set and put it inside packages.dhall, right next to the URL. Here's what mine looks like:
let upstream =
https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201222/packages.dhall sha256:620d0e4090cf1216b3bcbe7dd070b981a9f5578c38e810bbd71ece1794bfe13b
Then, if maintainers of the package set become evil and change the contents of that file, Spago will be able to notice that, recompute the hash, and reinstall the packages.
If you put the URL directly in spago.dhall, this doesn't happen, and you're left with the slight possibility of your dependencies getting out of sync.
Now to address this point separately:
Why do both files have the notion/concept of dependencies? Example: packages.dhall and spago.dhall from the ebook.
If you look closer at the examples you linked, you'll see that these are not the same dependencies. The ones in spago.dhall are dependencies of your package - the one where spago.dhall lives.
But dependencies in packages.dhall are dependencies of the test-unit package, which is being added to the package set as an override, presumably because we want to use the special version stackless-default, which isn't present in the official package set. When you override a package like this, you can override any fields specified in that package's own spago.dhall, and in this case we're overriding dependencies, repo, and version.
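For a concrete picture, such an override in packages.dhall might look roughly like this (a sketch: the dependency list and repo URL are made up; only the version field matches the example):

let upstream =
      https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201222/packages.dhall
        sha256:620d0e4090cf1216b3bcbe7dd070b981a9f5578c38e810bbd71ece1794bfe13b

let overrides =
      { test-unit =
          { dependencies = [ "aff", "effect", "free" ] -- illustrative
          , repo = "https://github.com/example/purescript-test-unit.git" -- hypothetical
          , version = "stackless-default"
          }
      }

in  upstream // overrides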

CMake and multi-stage build pipelines (reuse)

I have a C++ CMake project whose core components take a while to compile.
Let's say
Component A (Takes 2 hours to compile)
Component B (Takes 1 hour to compile)
Component C needs to statically link in Component A and B (takes 5 minutes)
We mostly change only Component C.
But we want our PR build gates to rebuild A and B if they have to, but not necessarily if there weren't any changes.
I'd like to build component A and B once per night.
Then during the day, allow our PR gates to download the intermediates from them, and then do an incremental build, only rebuilding the parts that have changed.
However, cmake seems to embed a lot of folder path information into the intermediates/cache stamp files, and our build machines have slightly different paths each time (e.g. c:\repo###\sourceDir\, where the ### changes each time).
Is there any easy way to do this? Or do I have to modify all the relevant .txt, .tlog, cache text, etc. files, and also reset the timestamps to their originals after rewriting the paths to match the current machine?
I tried just copying over the /intermediates/ folder with all the .obj and other things, but I think cmake is picky about the folder paths.
And I don't want to consume the prebuilt .lib or .dll from component A or B, because we might have made a change to them, so it should do incremental in that case.
Edit/answers:
I haven't looked into ccache yet, hadn't heard of it.
We invoke it via cmake.exe (configure/build/install) with a CMakeLists.txt. Each component has its own CMakeLists.txt in its top-level folder:
src/componentA/CMakeLists.txt (controls everything in itself, has no knowledge of B or C)
src/componentB/CMakeLists.txt (controls everything in itself, has no knowledge of A or C)
src/componentC/CMakeLists.txt (controls everything in itself + includes from A and B)
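For what it's worth, ccache (mentioned above as not yet explored) is usually wired into a CMake build like this; a minimal sketch assuming ccache is on the PATH -- note that its MSVC support on Windows is more limited:

# In the top-level CMakeLists.txt: route compiles through ccache when available.
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
  set(CMAKE_C_COMPILER_LAUNCHER   "${CCACHE_PROGRAM}")
  set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}")
endif()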

Tup -- manually inserting a generated node

Say I have a project A that depends on project B and that B takes a long time to build.
Both project A and project B use tup.
I already have B built in its separate directory.
Can I now copy B into A, or create a submodule that points to B, add the build products (cp -a), and convince tup that the build products are fine?

Moving from sourceCpp to a package w/Rcpp

I currently have a .cpp file that I can compile using sourceCpp(). The corresponding R function is created and the code works as expected.
Here it is:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector exampleOne(NumericVector vectorOne, NumericVector vectorTwo) {
    NumericVector outputVector = vectorOne + vectorTwo;
    return outputVector;
}
I am now converting my project over to a package using Rcpp. So I created the skeleton with RStudio and started looking at how to convert things over.
In Hadley's excellent primer on Cpp, he says in section "Using Rcpp in a Package":
If your package uses the Rcpp::export attribute then one additional step in the package build process is required. The compileAttributes function scans the source files within a package for Rcpp::export attributes and generates the code required to export the functions to R.
You should re-run compileAttributes whenever functions are added, removed, or have their signatures changed. Note that if you build your package using RStudio or devtools then this step occurs automatically.
So it looks like the code that compiled with sourceCpp() should work pretty much as is in a package.
I created the corresponding R file.
exampleOne <- function(vectorOne, vectorTwo){
    outToR <- .Call("exampleOne", vectorOne, vectorTwo, PACKAGE = "testPackage")
    outToR
}
Then I (re)built the package and I get this error:
Error in .Call("exampleOne", vectorOne, vectorTwo, PACKAGE = "voteR") :
C symbol name "exampleOne" not in DLL for package "testPackage"
Does anyone have an idea as to what else I need to do when taking code that compiles with sourceCpp() and then using it in a package?
I should note that I have read: "Writing a package that uses Rcpp" http://cran.rstudio.com/web/packages/Rcpp/vignettes/Rcpp-package.pdf and understand the basic structure presented there. However, after looking at the RcppExamples source code, it appears that the structure in the vignettes is not exactly the same as that used in the example package. For example there are no .h files used. Also neither the vignette nor the source code use the [[Rcpp::export]] attribute. This all makes it difficult to track down exactly where my error is.
Here is my "walk through" of how to go from using sourceCpp() to a package that uses Rcpp. If there is an error please feel free to edit this or let me know and I will edit it.
[NOTE: I HIGHLY recommend using RStudio for this process.]
So you have the sourceCpp() thing down pat and now you need to build a package. This is not hard, but it can be a bit tricky, because the information out there about building packages with Rcpp ranges from the exhaustive, thorough documentation you want with any R package (but that is above your head as a newbie) to newbie-sensitive introductions (that may leave out a detail you happen to need).
Here I use oneCpp.cpp and twoCpp.cpp as the names of two .cpp files you will use in your package.
Here is what I suggest:
A. First I assume you have a version of theCppFile.cpp that compiles with sourceCpp() and works as you expect it to. This is not a must, but if you are new to Rcpp OR packages, it is nice to make sure your code works in this simple situation before you move to the more complicated case below.
B. Now build your package using Rcpp.package.skeleton() or use the Project>Create Project>Package w/Rcpp wizard in RStudio (HIGHLY recommended). You can find details about using Rcpp.package.skeleton() in hadley/devtools or Rcpp Attributes Vignette. The full documentation for writing packages with Rcpp is in Writing a package that uses Rcpp, however this one assumes you know your way around C++ fairly well, and does not use the new "Attributes" way of doing Rcpp. It will be invaluable though if you move toward making more complex packages.
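For instance, the skeleton call might look like this (a sketch; the package and file names simply match the examples in this walkthrough):

library(Rcpp)
# attributes = TRUE generates the RcppExports glue for [[Rcpp::export]];
# cpp_files copies your existing sources into src/.
Rcpp.package.skeleton("yourPackageName", attributes = TRUE,
                      example_code = FALSE,
                      cpp_files = c("oneCpp.cpp", "twoCpp.cpp"))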
You should now have a directory structure for your package that looks something like this:
yourPackageName
- DESCRIPTION
- NAMESPACE
- \R\
- RcppExports.R
- Read-and-delete-me
- \man\
- yourPackageName-package.Rd
- \src\
- Makevars
- Makevars.win
- oneCpp.cpp
- twoCpp.cpp
- RcppExports.cpp
Once everything is set up, do a "Build & Reload" if using RStudio, or compileAttributes() if you are not in RStudio.
C. You should now see in your \R directory a file called RcppExports.R. Open it and check it out. In RcppExports.R you should see the R wrapper functions for all the .cpp files you have in your \src directory. Pretty sweet, eh?
D. Try out the R function that corresponds to the function you wrote in theCppFile.cpp. Does it work? If so, move on.
E. You can now just add new .cpp files like otherCpp.cpp to the \src directory as you create them. Then you just have to rebuild the package, and the R wrappers will be generated and added to RcppExports.R for you. In RStudio this is just "Build & Reload" in the Build menu. If you are not using RStudio, you should run compileAttributes().
In short, the trick is to call compileAttributes() from within the root of the package. So for instance for package foo
$ cd /path/to/foo
$ ls
DESCRIPTION man NAMESPACE R src
$ R
R> compileAttributes()
This command will generate the RcppExports.cpp and RcppExports.R that were missing.
You are missing the forest for the trees.
sourceCpp() is a recent function; it is part of what we call "Rcpp attributes" which has its own vignette (with the same title, in the package, on my website, and on CRAN) which you may want to read. It among other things details how to turn something you compiled and run using sourceCpp() into a package. Which is what you want.
Randomly jumping between pieces of documentation won't help you; in the end, the genuine source documentation by the package authors may be preferable. Or to put a different spin on it: you are using a new feature but old documentation that doesn't reflect it. Try to write a basic package with Rcpp, i.e. come at it from the other end as well.
Lastly, there is a mailing list...