I have a Fabric fabfile.py with a long (and growing) list of commands. When I run fab -l I can't see the top of the command list. Grouping the commands under headers wouldn't make the list any shorter, but it would make skimming the list easier - rather like the output of Django's ./manage.py help command. Has anyone solved this problem?
Using Fabric's "new style" tasks, you can take advantage of namespaces. When you list your tasks, you can pass the -F (--list-format) argument with nested as its value, which lists the available tasks nested by namespace; the appearance of that output is shown in the Fabric documentation.
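As a minimal sketch (the module names deploy.py and db.py are made-up examples): with new-style tasks, any module you import into your fabfile becomes a namespace, and its tasks are listed under that prefix.

# fabfile.py
from fabric.api import task

import deploy   # tasks defined with @task in deploy.py are listed as deploy.<name>
import db       # tasks defined with @task in db.py are listed as db.<name>

@task
def status():
    # top-level task, listed outside any namespace
    pass

Running fab -F nested -l (or fab --list-format=nested --list) then indents the deploy and db tasks under their module names.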
Fabric's "nested" task listing isn't quite as neat as Django's management command output, which groups commands by app very cleanly, but it's a start.
I ask my question in such a specific way because I am afraid that a more generic form could lead to overly theoretical discussions of how things should best be done (like a question about pre- and post-process actions in SCons).
Incorporating WPP actually requires running an additional command (or commands) before a file is compiled, and only when the build process decides the file needs compiling anyway, regardless of WPP.
I would note that this is easily achieved with a few lines of definitions in a shared Visual Studio property page file, making it work for multiple files in multiple projects, folders, etc., in a way that is completely transparent to developers.
So I am wondering whether this can be done in a similarly simple way with SCons. I do not have deep knowledge of either the SCons or MSBuild frameworks; I use them for simple practical purposes, so I would truly appreciate practical and useful advice.
Here's what I'd suggest.
SCons builds command lines from Environment() variables.
For example, the compile command line for building a C++ shared object is stored in SHCXXCOM (and what is displayed to the user when the command runs defaults to the value of SHCXXCOM, but can be changed by setting SHCXXCOMSTR).
Back to the problem at hand.
Assuming you have a limited number of build steps you want to wrap, you can do something like:
env['SHCXXCOM'] = ['WPP PRE COMMAND LINE', env['SHCXXCOM'], 'WPP POST COMMAND LINE']
You'll have to figure out which variables you need to do this with; the manpage will help:
https://scons.org/doc/production/HTML/scons-man.html
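For instance, an untested sketch of that idea in a SConstruct (run_wpp.py is a made-up placeholder for whatever command WPP would actually need):

env = Environment()

original_compile = env['SHCXXCOM']   # the stock C++ shared-object compile command line

# Turn the single compile command into a pre-step plus the original command.
env['SHCXXCOM'] = [
    'python run_wpp.py $SOURCES',    # placeholder pre-compile command
    original_compile,
]

# Optional: show a short message instead of echoing the full wrapped command line.
env['SHCXXCOMSTR'] = 'Compiling (with WPP pre-step) $SOURCE'

env.SharedLibrary('example', ['example.cpp'])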
p.s. I've not tried this, but in theory it should work. Let us know if not.
I want to use the following file structure and file naming:
/app
/components
/some-component
component.some-component.template.hbs
component.some-component.class.js
/routes
/some-route
route.some-route.handler.js
route.some-route.template.hbs
route.some-route.controller.js
/models
model.some-model.js
It's kind of similar to the pods structure, but not exactly, since the file naming is totally different.
Any hints on how I can achieve this? I tried overriding Resolver methods as suggested here, but that only works when you use pods and simply want to group all of your route folders under one parent folder.
I would suggest NOT adopting your own resolver, and instead encourage waiting for the module unification work to be completed (https://emberjs.com/statusboard).
Straying from the community-adopted best practices here is a good way to run into snags down the road. I spent some time helping a client who had gone down this path move away from their own custom resolver, and I wouldn't suggest doing so 😉
Is there a "code-free" way to get Solr/Lucene (or something similar) pointed at a set of Word docs to make them quickly searchable by a user?
I am prototyping a system to search through some homegrown news articles, to see if there is value in it. Before I stand up code to handle search string input and document indexing, I wanted to check whether it is even worth trying to figure it all out.
Thanks,
Judd
Using the bin/post tool of Solr and the Tika handler (named the ExtractingRequestHandler), you should be able to get something up and running for prototyping rather quickly.
See the introduction of Uploading Data with Solr Cell using Apache Tika. Tika is used to process a wide range of different document types.
You can give the Solr post tool a directory or a list of files to submit to the index.
For example, to automatically detect content types in a folder and recursively scan it for documents to index into the gettingstarted collection:
bin/post -c gettingstarted afolder/
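For the "searchable by a user" half of the prototype, a quick way to poke at the index is Solr's standard /select endpoint. Here is a minimal sketch using the Python requests library (the gettingstarted core name and localhost:8983 are Solr's defaults; the query text is just an example):

import requests

# Run a simple keyword query against the gettingstarted core and print hits.
resp = requests.get(
    "http://localhost:8983/solr/gettingstarted/select",
    params={"q": "homegrown news", "rows": 10, "wt": "json"},
)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))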
I'm attempting to build a large legacy code base that has trouble building under a new toolchain. In order to speed up fixing problems, I run
make -k
to build everything that can be built, so that I can later focus on unbuildable stuff. But even then a single make takes a minute to figure out the next problem to work on (this code base uses a tangled mess of Makefiles which take ages to parse).
Is there any way to list all targets that failed during a single make -k run?
I'd redirect the make -k output to a file and then look for the error patterns in it. I use vim and I'm typically looking for these:
make:\ \*\*\*
\*\*\*\ \[
A (custom) log parser can also be written if needed; a minimal sketch follows.
When debugging, it is also worth looking out for output synchronization irregularities, where part of a stderr message may be missing!
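Here is a minimal sketch of such a parser, assuming the output was saved with something like make -k 2>&1 | tee build.log (the build.log name is just a placeholder):

import re
import sys

# GNU make failure lines look like:
#   make[2]: *** [src/foo.o] Error 1
# or, with newer make versions:
#   make[2]: *** [Makefile:42: src/foo.o] Error 1
pattern = re.compile(r"\*\*\* \[(?:[^:\]]+:\d+: )?([^\]]+)\] Error \d+")

failed = []
with open(sys.argv[1] if len(sys.argv) > 1 else "build.log") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failed.append(match.group(1))

# Print each failed target once, in order of first appearance.
print("\n".join(dict.fromkeys(failed)))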
I'm working on a website using Django, and I have Fabric as well, which is very useful for scripting some chunks of code that other developers and I use. I'm pretty new to all of these (and Linux in general, tbh), so I have ideas, but I don't know how (or if) they are possible. Specifically, I wanted to write a script to start the server on a specific port that we use for testing. Manually, I would just run
python ~/project/manage.py runserver 0.0.0.0:8080
but that gets old. To manually implement that specific command, I have the following code in my fabfile:
def start8080():
    local("python ~/project/manage.py runserver 0.0.0.0:8080")
which works, but I'm not the only one using that port for testing, and ~/project/ is not the only project that would need a similar script. Is there a way to search down the tree from the directory you are working in for the first manage.py, and then run the same command from there?
Fabric functions allow you to use arguments:

from fabric.api import task, local

@task  # not a bad idea once your fabfile gets big and relies on helper functions
def runserver(project_path, port=8000):
    local("python %s/manage.py runserver 0.0.0.0:%s" % (project_path, port))
and you would use it like this:
fab runserver:/home/project,8080
You could also simplify it by creating a task that selects a port per project, although all available projects and their paths would have to be defined there. Then it could be as easy as:
fab runserver:myprojectname
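A hedged sketch of that per-project variant (the project names, paths, and ports in PROJECTS are made-up placeholders):

from fabric.api import task, local

# Map each project name to its path and the port it should run on.
PROJECTS = {
    "myprojectname": ("~/project", 8080),
    "otherproject": ("~/other_project", 8081),
}

@task
def runserver(name):
    path, port = PROJECTS[name]
    local("python %s/manage.py runserver 0.0.0.0:%s" % (path, port))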
Of course, you could additionally implement @morgan's answer, making the script check whether the port is open and automatically assign a free one.
You could use the socket module, as shown here, and have the OS figure out your port, and then have Fabric just let you know which one it chose.
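A minimal sketch of that idea (untested glue code; binding to port 0 asks the OS for any free port):

import socket
from fabric.api import task, local

def pick_free_port():
    # Bind to port 0 so the OS picks a free port, then release it for runserver.
    # There is a small window where another process could grab it, which is
    # usually acceptable for a dev/testing server.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))
    port = s.getsockname()[1]
    s.close()
    return port

@task
def runserver(project_path):
    port = pick_free_port()
    print("Starting dev server on port %d" % port)
    local("python %s/manage.py runserver 0.0.0.0:%d" % (project_path, port))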