Less files - django usage

I bought a template for my project which has 'less' files along with other static files and templates.
Any suggestions where to put them?
Also, there are three Django packages for handling 'less' files: django-static-compiler, django-compressor and django-css.
Which one should I use, and why are these packages needed? Thanks!
Edit - Found out that django-css is dead. Please help me choose between the other two packages.

In most cases you don't need to use less in your frontend directly; what you need is the result of compiling your less files - ordinary css files. Look at the css files in your template, which are already compiled from the less sources. The less files are provided by the template developer for extra customization purposes.
So the question is how to compile your less. You can do it in various ways: with a Python implementation of the less compiler, with the less npm package for node.js, with some kind of GUI less compiler, or even with one of the many online less compilers.
Generally, compilation is pretty easy: you specify your input less files and an output css directory, and the compiler does the rest.
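If you do want Django to compile the less files itself, django-compressor can delegate to an external compiler through its precompiler hook. A minimal sketch of the settings, assuming the lessc binary (from the less npm package) is installed and on your PATH:

```python
# settings.py fragment (sketch): django-compressor pipes text/less
# stylesheets through the external lessc binary, producing css that is
# then served/collected like any other static file.
COMPRESS_ENABLED = True
COMPRESS_PRECOMPILERS = (
    ("text/less", "lessc {infile} {outfile}"),
)
```

With this in place you can link the .less files directly inside a {% compress css %} block in your templates and let the compressor handle the rest.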

Related

How to manage your own TypeScript declaration files across projects

I have an assets/CDN project which contains my own and third party libs like jQuery. My own scripts are written in TypeScript. All these files get referenced in about five different projects.
If I want to write some TypeScript in one of those five projects that references some TypeScript in the assets project I manually generate and copy across a declaration file.
What would be your recommendation for a better way to manage this process?
I'm using Grunt to run the TypeScript compiler. I can run it with the 'declarations' option but this won't let me put all the declarations in a separate folder ready to be 'collected' by the other projects.
I've looked at the TS Project and DTS Bundle but these don't seem very flexible and I'm worried I'll have to switch to something else later as TypeScript moves in another direction. Plus I'm not keen on the single file approach. But I'm not ruling them out entirely.
I wrote typings, which has the express goal of solving definition files in TypeScript (https://github.com/typings/typings). It's hard to tell if it'll work for what you're asking, though, since I don't know how you consume your other projects. If you want to be able to use dependencies and package them as dependencies of your other projects, all without a separate "declaration" generation step and without conflicts, it might be worth trying out.
The only overhead of it right now is having to convert DefinitelyTyped into external module declarations, since they're incompatible.
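For what it's worth, later TypeScript releases added a declarationDir compiler option, which covers the "separate folder for declarations" part of this directly. A sketch of a tsconfig.json using it (directory names are made up):

```json
{
  "compilerOptions": {
    "declaration": true,
    "declarationDir": "./dist/typings",
    "outDir": "./dist/js"
  }
}
```

The generated .d.ts files then land in dist/typings, ready to be collected by the other projects, while the compiled JavaScript goes to dist/js.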

Vim/Syntastic C/C++ syntax checking: possible to configure to build precompiled headers?

I'm using syntastic in vim to provide syntax checking for C/C++ files. Most of the time its current configuration is fast enough, but sometimes, if a header file pulls in a large number of other header files (in this case I'm including Rcpp, a library for building R packages), the syntax checking step takes a noticeably long time - a few seconds. I save frequently while I work, so this starts to become a bit cumbersome. I'd rather not disable it, because it's a great help, especially since some of the in-house libraries I'm using have inconsistent naming conventions (which I often forget), so it saves quite a bit of time.
Is there a way to configure syntastic to build precompiled headers when first run on a file? That should speed up the compilation without too much inconvenience (to me :-).
Alternatively I could disable syntax checking altogether and only run it manually, but I'd rather avoid that if possible.
Syntastic has nothing to do with compilation of sources.
I had similar issues with Syntastic and a few of the C++ standard headers, and I fixed it by adding the following configuration to my ~/.vimrc file:
let g:syntastic_cpp_errorformat = "errorformat"
Hope this helps people avoid wasting hours on this issue like I did.

How should I manage dependencies in C or C++ open source projects?

I've got a few open source applications. These depend on a few third party components, notably Crypto++ and Boost. There are a couple of options to take:
Put third party code into version control, and include it with distributions of my code. On one hand, this is the easiest for people to use because they can compile directly out of my source control repository. On the other, they might be wasting bandwidth downloading source they already have, or end up having to fight with my library in order to remove the third party bits. Furthermore, source control tools often have trouble dealing with massive libraries like Boost.
Don't include third party code at all. This forces people to go out of their way to be able to use my library. On the other hand it means my source control repository and distributions will be small.
Something I have not yet anticipated.
What should I do?
Note: I'm not working in an environment where reliance on a dependency manager like aptitude, apt-get, or yum is acceptable.
Option 3: Don't include it in your code distribution but instead include a (link to a) download, which should match the most recent version you support.
Also, explicitly list your dependencies and the most recent versions you support.
This allows users to do whatever they want. Want your code with dependencies? Load both from one source. Want only part of the dependencies because you have the others? Load part of them. Want only your code? Load it separately.
Option 4: Offer two versions, one including the dependencies and one without, combined with option 3 above.
I suggest Autoconf, which was designed to abstract these worries away from you.
I doubt you can be expected to maintain build scripts for all of your dependencies across all platforms. Another problem with keeping 3rd party code in your svn is that now you also need to track down their dependencies and so on.
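As a sketch of what that looks like in practice, a configure.ac that checks for a dependency at configure time and fails early with a clear message (project name and the specific Boost header are just examples):

```
AC_INIT([myproject], [1.0])
AC_PROG_CXX
AC_LANG([C++])
AC_CHECK_HEADER([boost/shared_ptr.hpp], [],
    [AC_MSG_ERROR([Boost headers not found; please install Boost first])])
AC_OUTPUT
```

Users who already have Boost sail straight through; users who don't get told exactly what is missing instead of a wall of compile errors.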
I think it's a good idea to have the dependencies in the SVN. That way developers can simply check out and compile. You also avoid the problem of different, incompatible versions of your dependencies.
If you put the dependencies in a separate folder, developers can choose not to check out your dependencies if they already have them...
If you have a good package manager, then I would definitely not include dependencies in your repository. If you list the dependencies, it should be pretty easy for someone compiling your code to get them from the repos.
If you wanted to, you could offer all of the dependencies as an additional download. But mixing them in with the code you're working on is generally not a good idea.

C++: how to build my own utility library?

I am starting to be proficient enough with C++ that I can write my own C++-based scripts (to replace the bash and PHP scripts I used to write).
I find that I am starting to have a very small collection of utility functions and sub-routines that I'd like to use in several, otherwise unrelated C++ scripts.
I know I am not supposed to reinvent the wheel and that I could use external libraries for some of the utilities I'm creating for myself. However, it's fun to create my own utility functions, they are perfectly tailored to the job I have in mind, and it's for me a large part of the learning process. I'll see about using more polished external libraries when I am proficient enough to work on more serious, long term projects.
So, the question is: how do I manage my personal utility library in a way that the functions can be easily included in my various scripts?
I am using linux/Kubuntu, vim, g++, etc. and mostly coding CLI scripts.
Don't assume too much in terms of experience! ;) Links to tutorials or places where relevant topics are properly documented are welcome.
"Shared objects for the object disoriented!"
"Dissecting shared libraries"
Just stick your hpp and cpp files in separate directories somewhere. That way it's easy to add the directory containing the C++ files to any new project, and easy to add the headers to the include path.
If you find compile time starts to suffer, then you might want to consider putting these files in a static library.
If you are compiling by hand you will want to create a makefile to remove the tedium of compiling your libraries. This tutorial helped me when I was learning to do what you are doing, and it has additional links on the site for more detailed tutorials on the makefile.
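For reference, a minimal makefile along those lines might look like this (tool and file names are placeholders; note that recipe lines must start with a tab):

```make
# Build a small tool against a shared directory of utility sources.
CXX      = g++
CXXFLAGS = -Wall -I../myutils

mytool: mytool.o ../myutils/strutil.o
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@
```

Running make then only recompiles what changed, which is most of the tedium gone.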
Unless it's very large, you should probably just keep your utility library in a .h file (for the declarations) and a .cpp file (for the implementation).
Just copy both files into your project folders and use #include "MyLibrary.h", or set the appropriate directory settings so you can use #include <MyLibrary.h> without copying the files each time you want to use them.
If the library gains substantial size, you might consider looking into static libraries.

Finding unused files in a project

We are migrating our work repository, so I want to cull all the unreferenced files in the source tree before moving it into the nice fresh (empty) repository.
So far I have gone through by hand and found all the unreferenced files I know about, but I want to find out whether I have caught them all. One way would be to move the project to a new folder file by file and see what sticks when compiling. That would take all week, so I need an automated tool.
What do people suggest?
Clarifications:
1) It is C++.
2) The files are mixed. I am looking for files that have been superseded by others but have been left to rot in the repository - for instance, file_iter.h is not referenced by any other file in the program but remains in the repository just in case someone wants to compile a version from 1996! Now that we are moving to a fresh repository, we can safely junk all the files that are no longer used.
3) Lint only finds unused includes - not unused files (I have the 7.5 manual in front of me).
You've tagged this post with c++, so I'm assuming that's the language in question. If that's the only thing that's in the repository then it shouldn't be too hard to grep all files in the repository for each filename to give you a good starting point. If the repository contains other files (metadata, support files, resources, etc) then you're probably going to need to do it manually.
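A rough sketch of that grep pass: flag every header whose name never appears in an #include anywhere in the tree. It's a starting point for manual review, not proof of deadness (headers pulled in by generated code or build scripts would be false positives). The mini source tree here is fabricated for the demo, reusing the file_iter.h example from the question:

```shell
# Fabricate a tiny tree: one used header, one orphan, one source file.
mkdir -p src
cat > src/used.h <<'EOF'
int f();
EOF
cat > src/file_iter.h <<'EOF'
int g();
EOF
cat > src/main.cpp <<'EOF'
#include "used.h"
int main() { return 0; }
EOF

# For each header, search the whole tree for an #include of its name;
# headers nobody includes are candidates for removal.
for f in $(find src -name '*.h'); do
    name=$(basename "$f")
    if ! grep -rq "#include *[\"<]$name[\">]" src; then
        echo "possibly unused: $f"
    fi
done > unused.txt
cat unused.txt
```

In this toy tree only file_iter.h gets flagged; used.h survives because main.cpp includes it.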
I can't offer an existing tool for it, but I would expect that you can get a lot of this information from your build tools (with some effort, probably). Typically you can at least have the build tool print the commands it would run without actually running them (e.g. the -n option of make and bjam does this). From that you should be able to extract at least the used source files.
With the -MM option of g++ you can get all the non-system header files for given source files. The output is in the form of a make rule, but with some filtering this shouldn't be a problem.
I don't know if this helps; it's just what I would try in your situation.
You can actually do this indirectly with Lint by running a "whole project analysis" (in which all files are analysed together rather than individually).
Configure it to ignore everything but unreferenced variable/enum/function etc warnings and it should give you a reasonable indicator of where the deadwood lies without those issues being obscured by any others in the codebase.
A static source code analysis tool like lint might do the job. It will tell you if a piece of code will never be called.
Have you taken a look at Source-Navigator? It can be used as an IDE, but I found it to be very good at analyzing source code structure. For example, it can find out where, and whether, a certain method is used in your source code.
I don't know if it's scriptable, but it might be a good starting point for you.