Adonis: How do I seed only a specific file? - adonis.js

Is there a way to seed only a specific file with the adonis seed command? It would be useful especially in production when we add a new seeder.

Looks like I should have run adonis seed --help. Here is how to do it in production.
node ace db:seed --files "./database/seeders/User.ts"
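If you are still on AdonisJS 4 (the adonis CLI the question refers to), adonis seed has an equivalent flag. This is a sketch from memory and the seeder file name is made up, so confirm it against adonis seed --help:
adonis seed --files "UserSeeder.js"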

Related

Randomize unit-tests using cmake/ctest

I manage a quite large open-source project with many unit-tests (~200 files) and passing all tests is quite time consuming for the continuous integration. We use cmake/ctest/Catch2 for the unit-test framework.
Is there a way to tell cmake/ctest to only build and run a random subset of the unit tests (e.g. just 30%) ?
When iterating with several commits on the code for a given feature, the probability that all tests have been checked tends to one, while each individual commit builds and runs much faster.
Obviously, this ratio would be set to 100% when preparing a PR or a release.
ALTERNATIVE
Finally, I came up with a CMake-only solution: a wrapper around add_test() that registers a test only when a random draw falls below a threshold:
function(my_add_test test_file) # wrapper that may skip add_test()
  # draw a random two-digit number in [00, 99]
  string(RANDOM LENGTH 2 ALPHABET "0123456789" _random)
  # only register the test when the draw falls below the threshold
  if (${_random} LESS ${THRESHOLD_RANDOM_TESTING})
    add_executable(${test_file} ${test_file}.cpp)
    add_test(${test_file} ${test_file})
  endif()
endfunction()
In my top-level CMakeLists.txt I have the global variable (which can be set from the cmake CLI/GUI):
# STRING rather than INTERNAL (which implies FORCE and hides the entry), so a -D override from the command line sticks
set(THRESHOLD_RANDOM_TESTING "100" CACHE STRING "~% of unit tests to build and run.")
Each time I regenerate the project, a new random selection is constructed.
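Putting it together, the threshold can be overridden when (re)generating the build tree; the build directory layout here is just an example:
cmake -DTHRESHOLD_RANDOM_TESTING=30 ..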
"Is there a way to tell cmake/ctest..."
Yes. Get all the tests, shuffle them, take 30% of them and pass them back to ctest.
Looks like fun; in a Linux shell that would be just:
ctest -N | sed -n 's/ Test #[0-9]\+: //p' | { tmp=$(cat); cnt=$(wc -l <<<"$tmp"); shuf -n "$((cnt * 30 / 100))" <<<"$tmp"; }
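If you want to actually hand that selection back to ctest rather than just print it, one way (a bash sketch, assuming the test names contain no regex metacharacters) is to turn the picked names into a -R alternation:
tests=$(ctest -N | sed -n 's/ *Test #[0-9]\+: //p')
picked=$(shuf -n "$(( $(wc -l <<<"$tests") * 30 / 100 ))" <<<"$tests")
ctest -R "^($(paste -sd'|' - <<<"$picked"))$"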
I meant an automatic way to do that.
No, there is not.

Is there any Newman option to attach a test/pre-request script before each call of a folder in Postman collection

The need here is to run a pre-request script before each call in a folder of a Postman collection, optionally, when running the collection with Newman.
For example, if running a test suite of 10 calls in one folder, the call would usually be:
newman run <collectionPath> --folder <folderPath>
Is there any option to pass something like
newman run <collectionPath> --folder <folderPath> --pre-request_script someScript.js --test_script someTest.js
?
The reasons the obvious alternative (putting the scripts into the Postman collection itself) is not being used are:
1. (the main reason) a huge number of collections has already been written, and it would be difficult to go into each one of them and add this code; it is far more convenient to govern this behaviour from the command line;
2. the test / pre-request script may vary across different Newman runs, and these parameters would remove the need for complex conditional code within the pre-request / test scripts.
Is there any other alternative or solution for the same?
Since the latest version you can add pre-request scripts, tests and variables directly to the collection, or different ones on each sub-folder. Those collections can then be run in the normal way via newman; it might solve your problem.
http://blog.getpostman.com/2017/12/13/keep-it-dry-with-collection-and-folder-elements/
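For reference, a folder-level pre-request script in an exported v2.1 collection looks roughly like the sketch below (the folder name and the script body are made up); Newman then runs it before every request inside that folder, with no extra CLI flag needed:
{
  "name": "my-folder",
  "event": [
    {
      "listen": "prerequest",
      "script": {
        "type": "text/javascript",
        "exec": [
          "// runs before every request in this folder",
          "pm.environment.set('runTimestamp', Date.now());"
        ]
      }
    }
  ],
  "item": []
}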

lein uberjar - No configuration setting found for key 'akka.version'

I can create an uberjar that is composed of lots of class files, originally Scala, Java, Clojure. The problem I have is that when I run java -jar my-server.jar it crashes with:
No configuration setting found for key 'akka.version'
This is to be expected and has a Maven solution. The yellow warning on the accepted answer there is basically Akka saying "you shouldn't build uberjars with Akka jars in them, because then Akka won't be able to find its .conf files."
I am trying this as a lein solution:
:pom-plugins [[org.apache.maven.plugins/maven-shade-plugin 2.2]]
I have a local Maven repository (by this I mean not the ~/.m2 one, but a local repository used to introduce non-Clojars jars into the lein build). Maybe I need to run lein deploy localrepo1 for the Akka jars again so they pick up this new setting - nope, that didn't help.
Here's some of the stack trace to make it clear where the problem comes from:
Exception in thread "main" com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'akka.version'
at com.typesafe.config.impl.SimpleConfig.findKey(SimpleConfig.java:124)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:145)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:151)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:159)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:164)
at com.typesafe.config.impl.SimpleConfig.getString(SimpleConfig.java:206)
at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:169)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:505)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:109)
at com.seasoft.comms.MyPLCActorHolder.createRefToLocalActor(MyPLCActorHolder.scala:39)
Edit: I've now looked inside the jar files. There are two Akka jar files that both have a reference.conf in them. These files are not correctly merged because (unsurprisingly) lein uberjar doesn't understand the nesting of the property key/values within them.
Specifically the reference.conf in akka-actor_2.11-2.3.9.jar has akka.version = "2.3.9", but this entry has not made it to the merged reference.conf. I altered the uberjar and that fixed the problem, of course giving me the next merge problem. So the fix here is to manually do the merge myself.
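For illustration, this is the entry (in HOCON form, as it ships in akka-actor_2.11-2.3.9.jar) that must survive in the merged reference.conf, since it is exactly the key the stack trace complains about:
akka {
  version = "2.3.9"
}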
And the better fix would be to write a little merging program (with two functions: a predicate and merge) and get it into akka so that people who write build tools can just use it...
Manually merging reference.conf from the two jars and then manually altering the uberjar (overwriting the existing reference.conf) did the trick. The merged file is here.
I believe this problem will occur anytime there are property files that come from different jars that have the same name. For instance log4j.properties also needed to be overwritten.
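As a possible way to automate that manual merge (untested here, so treat it as an assumption to verify), Leiningen has an :uberjar-merge-with entry for project.clj; plain concatenation is enough for HOCON, because Typesafe Config keeps every key and lets later duplicates win:
;; sketch, not verified against these particular Akka jars:
;; concatenate every reference.conf instead of letting the last jar win outright
:uberjar-merge-with {#"reference\.conf$" [slurp str spit]}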
I changed the Spark version to avoid this kind of problem: Spark 2.1.0 does not depend on Akka, so it does not have the Akka reference.conf problem.

Using Fabric to search for and run a file

I'm working on a website using Django, and I also have Fabric, which is very useful for scripting chunks of code that I and other developers use. I'm pretty new to all of these (and Linux in general, to be honest), so I have ideas but I don't know how (or whether) they are possible. Specifically, I wanted to write a script to start the server on a specific port that we use for testing. Manually, I would just run
python ~/project/manage.py runserver 0.0.0.0:8080
but that gets old. To manually implement that specific command, I have the following code in my fabfile:
def start8080():
    local("python ~/project/manage.py runserver 0.0.0.0:8080")
which works, but I'm not the only one using the port for testing, and ~/project/ is not the only project that would need a similar script. Is there a way to search down the tree from the directory you are working in for the first manage.py, and then run the same command from there?
Fabric functions allow you to use arguments:

@task  # not bad to have once your fabfile gets big and uses helper functions
def runserver(project_path, port=8000):
    run("python %s/manage.py runserver 0.0.0.0:%s" % (project_path, port))
and you would use it like this:
fab runserver:/home/project,8080
you could also simplify it by creating a task that selects a port per project, although all available projects and their paths would have to be defined there. Then it could be as easy as:
fab runserver:myprojectname
Of course you could additionally implement @morgan's answer, making the script check whether the port is open and automatically assign a free one.
You could use the socket module, as shown here, and have the OS figure out your port; Fabric would then just let you know which one it chose.
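Putting the pieces together with the "search down the tree" part of the question, a fabfile task could look like the sketch below; it assumes Fabric 1.x (fabric.api.local) and the helper names are made up:
import os
import socket

from fabric.api import local, task


def find_manage_py(root="."):
    # walk down from root and return the first manage.py found, or None
    for dirpath, dirnames, filenames in os.walk(root):
        if "manage.py" in filenames:
            return os.path.join(dirpath, "manage.py")
    return None


def free_port():
    # bind to port 0 so the OS picks an unused port, then release it
    s = socket.socket()
    s.bind(("", 0))
    port = s.getsockname()[1]
    s.close()
    return port


@task
def runserver(port=None):
    manage = find_manage_py()
    if manage is None:
        raise RuntimeError("no manage.py found below the current directory")
    local("python %s runserver 0.0.0.0:%s" % (manage, port or free_port()))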

Gradle project with only configuration and no sources

I'd like to create a new Gradle project without any sources. I'm going to put there some configuration files and I want to generate a zip file when I build.
With maven I'd use the assembly plugin. I'm looking for the easiest and lightest way to do this with Gradle. I wonder if I need to apply the java plugin even if I don't have any sources here, just because it provides some basic and useful tasks like clean, assemble and so on. Generating a zip is pretty straightforward, I know how to do that, but I don't know where and how to put the zip generation within the gradle world.
I've done it manually until now. In other words, for projects where all I want to do is create some kind of distro and I need the basic lifecycle tasks like assemble and clean, I've simply created those tasks along with the needed dependencies.
But there is the 'base' plugin (mentioned under "Base plugins" of the "Standard Gradle Plugins" in the user's guide) that seems to fit the bill nicely for this functionality. Note though that the user guide mentions that this and the other base plugins are not yet considered part of the Gradle API and are not really documented.
The results are pretty much identical to yours, the only difference being that there are no confusing java specific tasks that always remain UP-TO-DATE.
apply plugin: 'base'

task dist(type: Zip) {
    from('solr')
    into('solr')
}

assemble.dependsOn(dist)
Sample run:
$ gradle clean assemble
:clean
:dist
:assemble
BUILD SUCCESSFUL
Total time: 2.562 secs
As far as I understood, it might sound strange, but it looks like I need to apply the java plugin in order to create a zip file. Furthermore, it's handy to have some common tasks available, for example clean. The following is my build.gradle:
apply plugin: 'java'

task('dist', type: Zip) {
    from('solr')
    into('solr')
}

assemble.dependsOn dist
I applied the java plugin and defined my dist task which creates a zip file containing a solr directory with the content of the solr directory within my project. The last line is handy to have the task executed when I run the common gradle build or gradle assemble, since I don't want to explicitly call the dist task.
This way if I work with multiple projects I just need to execute gradle build on the parent to generate all the artifacts, including the configuration zip.
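For the multi-project case this only requires the parent's settings.gradle to list the configuration project next to the code projects; the names below are made up:
// settings.gradle in the parent project
include 'web-app', 'solr-config'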
Please let me know if you have better solutions and add your own answer!
You could just use the groovy plugin and use Ant. I did something like this. I also like javanna's answer.
task jars(dependsOn: ['dev_jars']) << {
    // collect the per-environment directories to package
    def fromDir = file('/database-files/non_dev').listFiles().sort()
    File dist = new File("${project.buildDir}/dist")
    dist.mkdir()
    fromDir.each { File dir ->
        // one jar per directory, named database-connection-<dirname>.jar
        File destFile = new File("${dist.absolutePath}/database-connection-" + dir.name + ".jar")
        println destFile.getAbsolutePath()
        ant.jar(destfile: destFile, update: false, baseDir: dir)
    }
}