Ring comes with a 'reload' middleware (https://github.com/ring-clojure/ring/blob/master/ring-devel/src/ring/middleware/reload.clj). It's based on ns-tracker (https://github.com/weavejester/ns-tracker), which scans source directories for source files that begin with an ns form and builds a dependency graph from the information contained in those ns forms. This works perfectly well, but only for dependencies explicitly declared in the ns form.
There's an idiom in Clojure where a namespace is broken into several files. A single source file defines the namespace (with an ns form); it can contain any number of top-level forms, but notably it includes load calls, normally at the top level (though not necessarily, I suppose). The loaded files begin with in-ns forms. This is not as obscure a technique as you might think... clojure.core uses it.
The contents of these loaded files do not in themselves constitute modules, nor can they necessarily be coerced into being modules (circular dependencies etc).
ns-tracker does not scan source files for load expressions, nor does it look for in-ns forms. The reasons are clear enough, but it really breaks my workflow, since changes to the loaded files obviously will not trigger a reload of the namespace.
Does anyone know if there's a library that deals with explicitly loaded source files? If there isn't, I'll hack something together (probably by writing some ugly macro around load) and make it available publicly.
Okay, answering my own question... I have extended ns-tracker and submitted a pull request. It's less hacky than I expected; actually reasonably reasonable.
My fork is at: https://github.com/hutch/ns-tracker
The fork includes a number of changes to ns-tracker. Specific to my question, it supports the use of load/in-ns in the way clojure.core uses them.
You can use this fork in your projects by using the leiningen 'checkouts' mechanism.
Qt 5.15 introduces (at least I believe it is new to 5.15) .ts files, to allow for properly handling multi-locale text in an application. I'm updating an iOS Qt app from 5.X, where X knows nothing about .ts files. On startup, I'm getting a warning that indicates that there is an app-specific translation set (which is true), but that there is no translation for Qt's own text (things like warning text and dialog prompts). The documentation says that these translations are in the Qt5 source directory (currently usually /tqtc-qt5) in the qttranslations folder. Thus sayeth this doc: https://doc.qt.io/qt-5/ios-platform-notes.html#application-assets. Examples show only app-unique text translations, not Qt's "built-in" text. So, real quick, I'm going to list my assumptions so that they can be corrected or confirmed.
Qt has always had embedded text of its own, but 5.15 introduces a way to ensure that your multi-locale ready app has all the correctly translated "built-in" text available.
Only the app-writer knows what modules they are using, so it is the app-writer's responsibility to specify which set of translations are added to the app's resources for handling different locales. (per this document - https://doc.qt.io/qt-5/qmake-variable-reference.html#translations)
According to the above two docs, for example, if I use basic Qt functionality and QML, AND I have a single app with tier-one language support for, say, English, German, and French, it appears that my app's .pro file should include something like
TRANSLATIONS += <path_to_qt5>/qttranslations/qtbase_en.ts
TRANSLATIONS += <path_to_qt5>/qttranslations/qtbase_de.ts
TRANSLATIONS += <path_to_qt5>/qttranslations/qtbase_fr.ts
TRANSLATIONS += <path_to_qt5>/qttranslations/qtdeclarative_en.ts
TRANSLATIONS += <path_to_qt5>/qttranslations/qtdeclarative_de.ts
TRANSLATIONS += <path_to_qt5>/qttranslations/qtdeclarative_fr.ts
CONFIG += lrelease embed_translations
I've traced through the Qt source in the debugger to the point where it is complaining about the missing translations. It is looking for qt*_*.qm, where the first wildcard is your module ("base", "declarative", etc.) and the second is your two-letter language code. So, should I be explicitly adding .qm files as resources in my iOS bundle, or is TRANSLATIONS += foo_ln.ts implicitly doing this embedding in response to CONFIG += lrelease embed_translations? One thing is certain: right now, my naive porting of an older .pro file does nothing with respect to the TRANSLATIONS property, AND Qt is cranky about the missing .qm files in my bundle. It's a warning, not a critical failure, so I assume in a pinch it would put up US English text, which seems to be the baseline for translations and is embedded in the source (not the bundle) by default.

Do my example additions to the .pro file in point 3 above seem sufficient, or is there more to do? Or is there less to do? Is there a .pro directive that I've missed in the docs that says "do the right thing with international translations of Qt-inherent strings"? There is, in addition to the listed .ts files, a "qt_*.ts" file set. Is this just everything, whether I need it or not, for lazy people who don't care about lugging around a few extra strings?

Finally, there is also EXTRA_TRANSLATIONS, which is like TRANSLATIONS, only it does not go through lupdate during the build. Now I'm pretty unclear on the function of lupdate relative to lrelease, but is it the case that one is for "stock" Qt strings and the other for app-specific translations (because they may be "updated" due to changes during development)? The semantics just don't make sense to me right now, nor do my responsibilities to handle these matters in the "right" way.
I'd like to answer my own question, because it turns out many of the above assumptions were just wrong, and it took deeper dives into the source code and the debugger to get my head straight, but now I have Wisdom. The documents I was reading were about how to include raw translation files (.ts files) in an app being built with qmake. As is typical with Qt, building with qmake or Qt Creator solves most of your problems automatically, with your part being to provide only the small amount of specification needed for your app. The app I'm working on, however, is monstrously huge, old, and overly complicated, so I spend a lot of time hunting down the most obscure Qt and CMake features to cobble this Frankenstein's monster of an app together.

The problem I was trying to solve is simply providing to Qt its own text localization files for a multi-locale app. The app-specific ones were fine, but Qt was wagging its finger at me and complaining that it couldn't find its own. So the .ts files, it turns out, are not the thing for this kind of task; they are simply part of the pipeline for getting data from a translation house into a consumable form. During Qt's own configure and build (yes, we build Qt from source), it handles compiling the raw .ts files into .qm files, suitable for use by the framework's translator objects. Those are all just sitting in qtbase/translations, ready to be integrated into the app installation.

The problem was that we (the group who developed this app long ago and which I am now reviving) didn't handle the iOS case. This is a special case that requires some choices. You can put translation files in the app's bundle at a specific path, you can home-brew some exotic URL resolution whereby you feed the translator the contents of the .qm files on demand, or you can compile them all into a .qrc file and position them in your resource tree. That last is what we did with the app-specific translations, so I mimicked it for Qt's, filtering the total set of .qm files down to the modules we use and the locales in our tier-one localization target languages. This involved a complicated Python script and a few extra lines of CMake, including one very elusive one (QINIT_RESOURCES) that gated everything until I realized it was needed for this platform. I'm wrapping this up only so that, if anyone else is confused enough to find this question, there are some general comments to set things straight.
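For anyone landing here via search, a minimal sketch of the runtime side (not our actual production code), assuming the filtered Qt .qm files were compiled into a resource file: install a QTranslator per Qt module before creating any UI. The resource base name qt_translations and the :/i18n prefix below are placeholders I made up, not Qt requirements.

#include <QGuiApplication>
#include <QLocale>
#include <QTranslator>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Needed when the .qrc ends up in a statically linked library (the iOS case);
    // the argument is the base name of the hypothetical .qrc file.
    Q_INIT_RESOURCE(qt_translations);

    const QLocale locale;               // the user's locale, e.g. de_DE or fr_FR
    QTranslator qtBase, qtDeclarative;  // must outlive app.exec()

    // QTranslator::load(locale, filename, prefix, directory) resolves the best match,
    // e.g. ":/i18n/qtbase_de.qm" for a German locale.
    if (qtBase.load(locale, "qtbase", "_", ":/i18n"))
        app.installTranslator(&qtBase);
    if (qtDeclarative.load(locale, "qtdeclarative", "_", ":/i18n"))
        app.installTranslator(&qtDeclarative);

    // ... install the app-specific translator(s) the same way, then build the UI ...
    return app.exec();
}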
I batch-generate documentation from my code using Doxygen. However, I lost the code but not the documentation. I want to convert the code back from the documents. Is there any option in Doxygen to do this? Thank you very much.
By the way, the documents are all HTML files.
Doxygen is a documentation generator; its job is to go from code to documentation. As such, it has no functionality for reversing this process, especially since the generated HTML can change from version to version.
Converting code to documentation is also an inherently lossy process. Unless you output all of your source code into the documentation, you're not going to be able to reconstruct everything. The best you might do is rebuild most aspects of some headers, but even then, anything that goes undocumented (like which headers a file includes, and such) won't be in the HTML.
We're creating a very complex embedded system, and the "sources" include several Visual C++, IAR, and Code Composer Studio projects, plus Altium Designer schematics and PCBs. All of these may exist in several versions.
So, what practices would you advise for arranging all that stuff?
Thank you
I have the same setup as you.
I use Altium Designer for the hardware schematics and PCB design. But I also have Firmware source files and related utilities. And I have mechanical design files.
Here's how I do it:
Project Name
    Firmware
        MainCpu
            trunk
            tags
            branches
        IoCpu
            trunk
            tags
            branches
    Hardware
        MainPcb
            trunk
            tags
            branches
        IoPcb
            trunk
            tags
            branches
        PowerPcb
            trunk
            tags
            branches
    Mechanical
        Chassis
            trunk
            tags
            branches
        Other
            trunk
            tags
            branches
This way all the project files are stored together in the SVN repository. The only downside I've found is that you can't just check out the project and get the latest FW/HW/MEK files; you have to check out the head of each FW/HW/MEK module separately.
The reason for the separate sub-modules for FW/HW/MEK is that they will get separate version tags.
Everything that you consider as sources should be under a Source Control System, like SVN. This is the best way to handle versions, revisions, branches and tags. SVN can handle binary files, so you won't have problems with non-text files.
If your C++ source files are numerous and span multiple directories, then the effort put into grokking Large Scale C++ Software Design by John Lakos may be well worth it. The main theme of the book is how the physical layout of the software, that is, the arrangement of source code files in directories, limits or extends your ability to modify the software.
I like to have a directory structure that at the top level reflects each of the programmable parts (i.e. microcontroller, DSP1, FPGA1, FPGA2, ...).
I also like to have a subdirectory (or subdirectories) for all generated files, so it is easy to keep the source tree clean. Also make it easy to do a clean build straight from the source code configuration tool (i.e. get and build from source to binary image(s) in as few steps as possible).
Also give each programmable part its own version number, plus one version number that reflects the combination of the sub-component version numbers.
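For illustration only (the names and numbers below are invented, not tied to any particular toolchain), that versioning scheme might boil down to a generated header along these lines:

// Hypothetical generated version header: each programmable part carries its own
// version, and one combined identifier describes the release shipped together.
#pragma once

#define MCU_FW_VERSION    "1.4.2"   // microcontroller firmware
#define DSP1_FW_VERSION   "0.9.7"   // DSP1 image
#define FPGA1_VERSION     "2.1.0"   // FPGA1 bitstream
#define FPGA2_VERSION     "1.0.3"   // FPGA2 bitstream

// Single release number reflecting this exact combination of sub-components.
#define SYSTEM_RELEASE_VERSION "3.2.0"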
Definitely use source control. If the program itself doesn't support it, just keep the parent folder you use under source control. SVN is my current fav.
As far as how to arrange your files, I noticed you had Altium Designer on your list. That program will a) play nicely with source control, and b) arrange your files in an orderly manner, assuming you use their whole 'project' file structure. Look into using their 'PCB' (if that's what you're doing) or 'embedded' projects; when you create one, it creates buckets for you to store all your different types of files in.
Even if you don't want to actually use Altium for your files, create a project and look at their directory structure to get an idea about all the files you'll need to keep track of.
(Aside from trivial helper classes) put one class in each cpp/h file, and name the cpp/h files the same as the class.
Group related class files into folders. You can optionally use a hierarchy of namespaces that matches the folder structure; the .NET approach here is to use a CompanyName.ProductName namespace, with your files stored in a ProductName project/subfolder of your solution. So, for example, you might group your Math, I/O, and Drawing classes into separate "subsystem" folders.
Ideally, make these separate sections into re-usable libraries (MyCompany.Math). You'll be glad of this later when you want to develop a new product that will share some of the code. In that case, the top-level "folders" become separate projects in their own right, and you can start to work on minimising dependencies between them to realise and then enforce a much better overall framework design in your code base.
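As a small illustrative sketch (all names here are invented), the folder/namespace mirroring might look like this in C++: the file MyCompany/Math/Vector3.h holds exactly one class, and the namespace path matches the directory path, so #include "MyCompany/Math/Vector3.h" tells you exactly where the class lives.

// MyCompany/Math/Vector3.h -- one class per header, namespace mirrors the folder.
#pragma once

namespace MyCompany { namespace Math {

class Vector3
{
public:
    Vector3(double x, double y, double z) : m_x(x), m_y(y), m_z(z) {}

    // Kept inline here for brevity; larger methods belong in the matching Vector3.cpp.
    double lengthSquared() const { return m_x * m_x + m_y * m_y + m_z * m_z; }

private:
    double m_x, m_y, m_z;
};

} } // namespace MyCompany::Math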
The ideal within folders is to find a good balance between clutter and sparseness: try to balance the folders so that they have between 5 and 15 files each. If fewer, consider merging folders; if more, consider adding sub-category folders to break down the complexity.
As long as your classes/files and namespaces/folders have good descriptive names, and your folders are logically structured, you can make an extremely large project very easy to navigate.
At the risk of starting a religious war, I prefer to put the headers and their source files in the same folder, so that when you are editing a .cpp the .h is easily accessible, rather than having to move up and down by a folder all the time.
Reduce the complexity!
My first engineering professor had a famous first lecture. It consisted of a single equation written on the blackboard:
Perfection = Simplicity
The problem with Source Control Systems is that they manage complexity but also promote it.
IMPORTANT: The accepted answer was accepted post-bounty, not necessarily because I felt it was the best answer.
I find myself doing things over and over when starting new projects. I create a folder, with sub-folders and then copy over some standard items like a css reset file, famfamfam icons, jquery, etc.
This got me thinking about what the ideal starting template would be. The reason I'm asking is that I'm going through this once again and am wondering what I should include in my template so that I don't have to go back in the future and do this all over again with every new site I start.
What I currently have follows:
Project Template Folder/
    index.html -- XHTML 1.0 Strict doctype. Meta tags. CSS/JS files referenced.
    css/
        default.css -- Empty. Reserved for user styles.
        960/ -- 960 Grid System for CSS layouts.
            960.css
            reset.css
            text.css
    js/
        default.js -- Empty. Reserved for user scripts.
        jQuery/ -- Lightweight JavaScript framework.
            jquery-1.3.1.min.js
    img/
        famfamfam/ -- Excellent collection of PNG icons.
            icons/
                accept.png
                add.png
                ...etc
I have a similar structure and naming convention, but for CSS I use Blueprint, which I find more extensible. I also prefer jQuery, having recently switched from Prototype. In addition, I have a common.js file that extends jQuery with custom functions.
A /db/ folder with .sql files containing schema definitions. A /lib/ folder for common middle-tier libraries.
I will also have a /src/ folder which will sometimes have raw files such as Photoshop templates, readmes, todo lists, etc.
If you have a lot of projects with a lot of static content in common (e.g. jquery, css framework, etc) make yourself a media server to serve all these. Then, instead of creating a bunch of folder structure from a "template" all you do is include the right files in your project's html. If you really want a template, your template becomes one html file instead of a directory structure.
This also gives you an easy way to update the static media for your sites (e.g. moving to the next version of 960): you only have to do it in one place. Of course, you still have to make sure that your updates don't break existing sites! :)
You can make the scheme a bit more complicated if certain projects have overlapping needs but are different from others. Just have a directory at the top level of the server for each setup and to each setup corresponds one html "template". The main idea is to have to deal with only one copy of everything that is common.
You can certainly do this on a small VM (e.g. Linode) for $20/mo, or as a virtual web server on your current web server. You don't really need a separate server, for that matter; you just need a folder. However, I think you can get some significant performance gains by having a dedicated media server. I'd recommend using a fine-tuned Apache or nginx for this purpose.
As for site-specific static files, it is also a good idea that they live on the media server and the directory structure would probably be exactly what you have, but they would/should be empty directories.
My web development framework sits in a git repository. Common code, such as general purpose PHP classes gets developed in the master branch. All work for a particular website gets done on a branch, and then changes that will help in future work get merged back into master.
This approach works well for me because I have full revision control of all the websites, and if I happen to fix a bug or implement a new feature while working on a branch I can do the merge, and then everything benefits.
Here's what my template looks like:
/
|- .htaccess        // mod_rewrite skeleton
|- admin/           // custom admin frontend to the CMS
|- classes/         // common PHP classes
|- dwoo/            // template system
|- config/          // configuration files (database, etc.)
|- controllers/     // PHP scripts that handle particular URLs
|- javascript/
|    |- tinyMCE/
|    |- jquery/
|- modules/         // these are modules for our custom CMS
|    |- news/
|    |- mailing_list/
|    |- others
|- private/         // this contains files that won't be uploaded (.fla, .psd, etc.)
|    |- .htaccess   // just in case it gets uploaded, deny all
|- templates/       // template source files for dwoo
I use a similar layout, but with one major exception: all of these directories live under a top-level media/ directory. This is for a few reasons:
This directory is rsync'd to two other servers which handle all of the static media requests.
Having multiple hosts allows some browsers to make more parallel requests for support files.
The media/ directory has its own .htaccess file which strips off a pseudo-directory from the path, which is the date-time last modified of the image (or whatever).
A custom template tag (I have used this with 2 Django projects, but you could do it in PHP, etc.) generates URLs which a) semi-randomly choose one of the media servers, b) add the time-based pseudo-directory to the path, and c) give the object an Expires time of now + 10 years.
I think the structure is good. The addition of a few other folders depends on what type of work you are completing.
For freelancing and the like, adding folders for PSDs and client comments would be nice.
A very MS skewed view, but my SOP right now is along the lines of:
documentation/
    architecture/       (what you might call code documentation)
    communications/     (important client docs)
    spec/
    whitepapers/
graphics/
    *.psd
source/
    com.mycompany.projectname.solutionA/
    com.mycompany.projectname.solutionB/
    com.mycompany.projectname.solutionC/
    com.mycompany.projectname.solutionX/    (project in the business sense here)
        businesslogic/
            *.cs                (or whatever)
        (further projects - in the Visual Studio sense)
        site/
            handlers/           (rarely do I use actual .html these days)
            modules/
            resources/
                img/            (pngs, jpegs, gifs, whatever)
                    skin/
                    icons/
                    backgrounds/
                js/             (compressed when published)
                    library/    (standard code)
                    common/     (app-specific code)
                    *.js        (app-specific code, hopefully nil)
                css/
                    skinX/      (even if there is only "default")
                        extension.css
                        base.css
                transforms/     (always hidden from public by config or build process)
                    *.xslt
        unittests/
            mocks/
            testmain.cs         (or whatever)
thirdparty/
    dependencies
I definitely love the idea of having a skeleton template folder like this, but if you use a few different technologies, definitely pay close attention to the structure. My VB.net folder structure has a totally different setup compared to PHP. It sounds like common sense, but I have seen people approach both the same way.
At work we use Code Igniter as a PHP framework for our web applications and have created a new project template which does exactly that: a simple directory structure, Blueprint CSS, jQuery, and the Code Igniter application folder, filled with a couple of commonly used libraries (authentication, some special models for often-used databases, ...).
The main motto here is: It's always easier to delete components than to add them. So fill your template up.
(And when I'm starting a new project in my spare time I sorely miss that template...)
I think what you have here is great. What you've listed is of course all about the public front end of your app. My only addition to this is to keep all your backend code and source out of the public web space if possible, as the fewer things you have in the public space, the more secure your app is.
So I'd suggest you take your entire tree, and put it in:
httpdocs/(all you had in your project template folder)
then put all your backend code (e.g. php libraries, sql files, etc) in adjacent subdirectories:
httpdocs/(all you had in your project template folder)
phplibs/
sql/
etc.
And, even for your front end stuff, make sure you don't copy in any example files that may come with your front end libraries, as the examples themselves may have security problems that would allow people to XSS or otherwise compromise your site.
I have been using the following setup for a while now with great results:
/site: This is where my actual working website will live. I'll install my CMS or platform in this directory after the templates are created.
    .htaccess (basic tweaks I usually find myself enabling anyway)
    robots.txt (so I don't forget to disallow items like /admin later)
/source: Contains any comps, notes, documents, specifications, etc.
/templates: Start here! Create all static templates that will eventually need to be ported into the CMS or framework of /site.
    /behavior
        global.js (site-specific code; may be broken out into multiple files as needed)
    /media: Images, downloadable files, etc. Organized as necessary.
    /style: I prefer modular CSS development, so I normally end up with many stylesheets for each unique section of the website. This is cleaned up greatly with Blender - I highly recommend this tool!
        behavior.css (any styling that requires a JS-enabled browser)
        print.css (this eventually gets blended, so use @media print)
        reset.css (Eric Meyer's)
        screen.css (for @media screen, handheld)
    /vendor: all 3rd-party code (jQuery, Shadowbox, etc.)
    Blendfile.yaml (for Blender; see above)
    template.html (basic starting template; can be copied and renamed for each unique template)
I like the OP's as a default starting point. Your standard template should err on the side of simplicity, with the ability to add complexity only if it's needed.
One addition:
/robots.txt
I have a substantial body of source code (OOFILE) which I'm finally putting up on Sourceforge. I need to decide if I should go with a monolithic include directory or keep the header files with the source tree.
I want to make this decision before pushing to the svn repo on SourceForge. I expect a lot of people who use it after that move will keep a working copy checked out directly from SF so won't want to change their structure.
The full source tree has about 262 files in 25 folders. There are a lot more classes than that suggests because, due to conforming to 8.3 character names (yes, it dates back to Win3.1), many classes share one file. As I used to develop with ObjectMaster, that never bothered me, but I will be splitting it up to conform to more recent trends and minimise the number of classes per file. From a quick skim of the class list, there are about 600 classes.
OOFILE is a cross-platform product expected to be built on Mac, Windows and assorted Unix platforms. As it started life on Mac, with compilers that point to include trees rather than flat include dirs, headers were kept with the source.
Later, mainly to keep some Visual Studio users happy, a build was reorganised with a single include directory. I'm trying to choose between those models.
The entire OOFILE product covers quite a few domains:
database front-end
range of database backends
simple 2D graphing engine for Mac and Windows
simple character-mode report-writer for trivial html and text listing
very rich banding report-writer with Mac and Windows Preview and Printing and cross-platform generation of text, RTF, HTML and XML reports
forms integration engine for easy CRUD forms binding to the database, with implementations on PowerPlant and MFC
cross-platform utility classes
file and directory manipulation
strings
arrays
XML and tag generation
Many people only want to use it on a single platform and some of those code areas are pure legacy (eg: PowerPlant UI framework on classic Mac). It therefore seems people would appreciate not having headers from those unwanted areas dumped in their monolithic include directory.
I started thinking about having an include directory split up into a few of the domains above and then realised that was sounding more like the original structure.
In summary, the choices seem to be:
1. Keep the original model, all headers adjacent to source: max flexibility at the cost of some complex includes in projects.
2. One include directory with everything inside.
3. Split includes by domain, so there may be about 6 directories for someone using the lot, but a pure database user would probably have a single directory.
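To make the difference concrete, here is a rough sketch of what client code might include under options 2 and 3; all header and directory names below are invented for illustration and are not OOFILE's actual files.

// Option 2 (single include directory): every header, wanted or not, sits in one
// flat directory, so client code includes by bare file name:
//     #include "dbTable.h"
//     #include "reportBand.h"

// Option 3 (includes split by domain): a pure database user adds only the
// database and utility directories to the include path.
#include "oofile/database/dbTable.h"
#include "oofile/utils/ooString.h"

int main()
{
    // Nothing to run; this file only illustrates the two include styles.
    return 0;
}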
From a Unix build aspect, the recommended structure has been 2. My situation is complicated by needing to keep Visual Studio and XCode users happy (sniff, CodeWarrior, how I doth miss thee!).
Edit - the chosen solution:
I went with four subdirectories in include. I started trying to divide them up further by platform but it just got very noisy very quickly.
Personally I would go with 2, or 3 if really pushed.
But whichever you choose, please make it crystal clear in the build instructions how to set up the include paths. Nothing dooms an open source project more than it being really difficult to build - developers want a quick out-of-the-box experience and if it involves faffing around with many undocumented environment variables (or whatever) most will simply go away.