I'm trying to compile a Dart project using the command
pub build
However, this generates random identifiers for all objects, which ruins the logging output whenever an object's name is printed.
For example, "Object = K1" is printed instead of "Object = Mirror".
Is there a way to keep all the Dart names when compiling to JavaScript?
I also tried:
pub build --mode=debug
...but the downside is that debug mode is not acceptable for production builds.
Is there a production-build approach to this issue?
You can disable minification in pubspec.yaml:
transformers:
- $dart2js:
    minify: false
See the pub documentation for more details about the $dart2js transformer.
Related
This issue describes the concept: https://github.com/dart-lang/source_gen/issues/272
To summarize:
I am using source_gen to generate some dart code.
I am using json_serializable on the generated dart code.
I wish to output all of the results to a source directory adjacent to or below my target source.
The desired directory structure:
src
  feature_a
    model.dart
    gen
      model.g.dart
      model.g.g.dart
  feature_b
    ...
I have considered building to cache; however, it seems json_serializable doesn't support this, and even if it did, I don't know whether it's even possible to run a builder on files in the cache.
I've also considered an aggregate builder, as mentioned here:
Generate one file for a list of parsed files using source_gen in dart
But json_serializable is still an issue there, and the source_gen version in that post is very old and the solution isn't described well.
This is not possible with build_runner. The issue to follow is https://github.com/dart-lang/build/issues/1689
Note that this doesn't help much with builders that you don't author, and wouldn't work with things like SharedPartBuilder.
I am working on a uni project with multiple tasks, and I am trying to debug a simple program; however, I am getting the error "LNK2005: main already defined in task 1".
I realise this is because I have defined int main() in both tasks (I have code for task 1 and code for task 2). I do not want to create a new project folder for every task. Is there a way around this?
While it is generally advisable to have a project for each executable you build, you can get away with a single project for multiple executables if you manage to get rid of the undesired duplicate mains somehow. You have quite a few options available to you:
Have only one main. Have it test its own executable name, and take specific action depending on the name it finds. In the post-build rules, set up rules for creating each (specifically named) executable from your base executable. This allows you to build all your executables at the same time in a fairly efficient manner.
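A rough sketch of this first option (main_task1 and main_task2 are hypothetical entry points, i.e. your existing mains renamed):

#include <cstring>

int main_task1(); // hypothetical: your task 1 code, renamed
int main_task2(); // hypothetical: your task 2 code, renamed

int main(int argc, char* argv[])
{
    // Dispatch on the executable's own name (argv[0]); the post-build
    // rules copy the base executable to task1.exe, task2.exe, etc.
    if (argc > 0 && std::strstr(argv[0], "task2"))
        return main_task2();
    return main_task1();
}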
Have multiple mains, but hide them using #ifdefs. Add a #define to the project settings or just somewhere above main(), and compile as needed. This is ok if you don't want to build all your executables all the time.
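A minimal sketch of the #ifdef approach (BUILD_TASK1 and BUILD_TASK2 are made-up symbols; define exactly one of them in the project settings or at the top of the file):

// Compile with /DBUILD_TASK1 or /DBUILD_TASK2 so that
// only one main survives preprocessing.
#ifdef BUILD_TASK1
int main()
{
    // ... task 1 code ...
    return 0;
}
#endif

#ifdef BUILD_TASK2
int main()
{
    // ... task 2 code ...
    return 0;
}
#endif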
Just bite the bullet and set up multiple projects.
Whatever you do, bear in mind that being able to build everything you have in a single step is a highly desirable trait of build systems, and is usually high on the list of features of a properly engineered development process.
I'd like to create a new Gradle project without any sources. I'm going to put there some configuration files and I want to generate a zip file when I build.
With Maven I'd use the assembly plugin. I'm looking for the easiest and lightest way to do this with Gradle. I wonder if I need to apply the java plugin even though I don't have any sources here, just because it provides some basic and useful tasks like clean, assemble and so on. Generating a zip is pretty straightforward, I know how to do that, but I don't know where and how to fit the zip generation into the Gradle world.
I've done it manually until now. In other words, for projects where all I want to do is create some kind of distro and I need the basic lifecycle tasks like assemble and clean, I've simply created those tasks along with the needed dependencies.
But there is the 'base' plugin (mentioned under "Base plugins" in the "Standard Gradle Plugins" section of the user guide) that seems to fit the bill nicely for this. Note, though, that the user guide mentions that this and the other base plugins are not yet considered part of the Gradle API and are not really documented.
The results are pretty much identical to yours, the only difference being that there are no confusing java specific tasks that always remain UP-TO-DATE.
apply plugin: 'base'

task dist(type: Zip) {
    from('solr')
    into('solr')
}

assemble.dependsOn(dist)
Sample run:
$ gradle clean assemble
:clean
:dist
:assemble
BUILD SUCCESSFUL
Total time: 2.562 secs
It might sound strange, but as far as I understood it, I need to apply the java plugin in order to create a zip file. Furthermore, it's handy to have some common tasks available, like for example clean. The following is my build.gradle:
apply plugin: 'java'

task('dist', type: Zip) {
    from('solr')
    into('solr')
}

assemble.dependsOn dist
I applied the java plugin and defined my dist task, which creates a zip file containing a solr directory with the content of the solr directory within my project. The last line is handy to have the task executed when I run the common gradle build or gradle assemble commands, since I don't want to call the dist task explicitly.
This way if I work with multiple projects I just need to execute gradle build on the parent to generate all the artifacts, including the configuration zip.
Please let me know if you have better solutions and add your own answer!
You could just use the groovy plugin and use Ant. I did something like the following (I also like javanna's answer):
task jars(dependsOn: ['dev_jars']) {
    doLast {
        // Jar up each non-dev database directory into build/dist.
        def fromDir = file('/database-files/non_dev').listFiles().sort()
        File dist = new File("${project.buildDir}/dist")
        dist.mkdir()
        fromDir.each { File dir ->
            File destFile = new File("${dist.absolutePath}/database-connection-${dir.name}.jar")
            println destFile.absolutePath
            ant.jar(destfile: destFile, update: false, basedir: dir)
        }
    }
}
In CppUnit, we run unit tests as part of the build, in a post-build step, and we will be running multiple tests there. If any test case fails, the post-build step should not stop; it should go ahead, run all the test cases, and report a summary of how many test cases passed and failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behaviour and class, and keep the test project separate from the tested code. Afterwards, just configure your XmlOutputter. You can find an excellent example of how to do this in the YoLinux tutorial: http://www.yolinux.com/TUTORIALS/CppUnit.html
We use this approach to compile the test projects for our main projects and check that everything is OK. After that, it's just a matter of maintaining your test code.
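As a minimal sketch of such a runner main, along the lines of the tutorial above (the file name test-results.xml is arbitrary):

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>
#include <fstream>

int main()
{
    // Pick up every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION.
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Report all results as XML; the runner executes every test
    // instead of stopping at the first failure.
    std::ofstream xml("test-results.xml");
    runner.setOutputter(new CppUnit::XmlOutputter(&runner.result(), xml));

    bool ok = runner.run("", false);
    return ok ? 0 : 1; // non-zero exit code tells the build that tests failed
}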
Your question is too vague for a precise answer. Usually, a unit test engine returns a code to signal failure (like a non-zero return code in the shell on Linux) or generates an output file with the results, and the calling system handles this. If you have written that system yourself (some home-made scripts), you have to add an option to continue test execution even if an error occurs. If you are using a tool such as a continuous integration server, you have to go through its documentation and find the option that lets it continue when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification...
Be more specific if you want more clues.
my2c
I would just write your tests differently: instead of using the CPPUNIT_ASSERT macros, write them in regular C++ with some way of logging errors.
You could use a macro for this too, of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and to log the expression together with __FILE__ and __LINE__ if it fails. You can also log exceptions, including expected ones that are not thrown, simply by handling them in your tests (with macros, if you want to log the expression that caused them together with __FILE__ and __LINE__).
If you are writing macros, I would advise you to limit the macro body to a call to an inline function with extra parameters.
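A minimal sketch of what that could look like (logAssert and the log format are made up for illustration):

#include <iostream>

// The macro body is just a call to this inline function,
// which logs the failed expression and where it happened.
inline bool logAssert(bool ok, const char* expr, const char* file, int line)
{
    if (!ok)
        std::cerr << "FAILED: " << expr << " (" << file << ":" << line << ")\n";
    return ok;
}

#define LOGASSERT(expr) logAssert((expr), #expr, __FILE__, __LINE__)

A test would then call, for example, LOGASSERT(parse("42") == 42); execution continues past a failure instead of aborting, so all tests run and the log gives you the full picture.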
Day 1 with using Hudson for our CI build. Slowly but surely getting up to speed.
My question is about run parameters. I've seen that I can use them to reference a particular run of a particular project - that's all fine.
What I don't understand (and can't find any documentation on - there's nothing at Parameterized Build) is how I refer to anything in the run defined by the run parameter.
Essentially I want to reference the %BUILD_NUMBER% and %SVN_REVISION% of the run that is selected in the run parameter.
How can I do that?
Do you really need to add extra property values or extra parameters to your job?
Since BUILD_NUMBER and SVN_REVISION are already defined as environment variables (see Building a software project), you can use those in your job.
When a Hudson job executes, it sets some environment variables that you may use in your shell script, batch command, or Ant script
This illustrates that you already have those values at your disposal.
You can then use them to define other environment variables/properties within your shell or ant script.
When it comes to passing a variable value from one job to another, the Parameterized Trigger Plugin should do the trick:
The parameters section can contain a combination of one or more of the following:
- a set of predefined properties
- properties from a properties file read from the workspace of the triggering build
- the parameters of the current build
"Subversion revision": makes sure the triggered projects are built with the same revision(s) of the triggering build.
You still have to make sure those projects are actually configured to checkout the right Subversion URLs.
Note: there might be an issue with the Join Plugin, which might not work when the Parameterized Trigger is in action.