I've been using Redmine for almost a year to manage my startup. I have all issues stored in one project, with two subprojects for areas that I had to outsource and didn't want to give the contractor access to the main project issues. My problem is that I have ended up with hundreds of issues which vary greatly in the time required to implement them. Some are small, e.g. 'Fix bug in controller' or 'Add telephone number to contact us page'; some require much more effort, e.g. 'Create a new Q&A area' or 'Migrate server to nginx'; and some are more abstract, e.g. 'Investigate new SEO opportunities' or 'Consider implementing a reseller control panel'.
I feel like I must be using Redmine incorrectly, as having these all mixed together is a bit confusing. Any ideas on how I could organize this better would be greatly appreciated. If supplementing with other tools might be a better idea, I'd love to hear suggestions.
I don't think there is a problem having all the issues you mentioned mixed together in a project as long as they're all related to the project.
The most important point when using Redmine with projects that have lots of issues is to make use of custom queries. This is a great feature, but in order to be able to use it, you must also use and fill in other fields:
Tracker: Make use of different trackers (the default of bugs, features and tasks works for me)
Category: Can be a specific part of your software, or other aspects of your business (administration, IT/server, ...)
Version: Use the version to group different issues; it's usually used for a release, but it can also hold ideas or unplanned work
Priority and Due Date: I often use them for ordering, but you could also create a custom query for issues due in the next two weeks
Assignee: usually the most important field if there is more than one user - first of all you'll want to see the issues assigned to you, as well as the issues created by you (in order to follow up)
You can always add custom fields in case you have other information which may be used to filter your issues.
Once a set of custom queries is in place, you'll hardly ever consult all your open issues at once anymore.
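As a side note, saved custom queries are also accessible outside the web UI through Redmine's REST API (if it is enabled under Administration -> Settings), which is handy for dashboards or scripts. A minimal sketch with Python and the requests library; the URL, API key and query id are placeholders, and the saved query must be visible to the API user:

```python
import requests

# Placeholders: point these at your own Redmine instance.
REDMINE_URL = "https://redmine.example.com"
API_KEY = "your-api-key"   # found under "My account" in Redmine
QUERY_ID = 42              # id of a saved custom query

# Ask the issues endpoint to apply the saved custom query.
resp = requests.get(
    f"{REDMINE_URL}/issues.json",
    params={"query_id": QUERY_ID},
    headers={"X-Redmine-API-Key": API_KEY},
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["id"], issue["subject"], issue.get("due_date"))
```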
Two little-used features for Redmine newbies are categories and custom fields.
Categories are usually used for modules in your project ("Database", "Front End", "Administration Panel", etc.), and you can use custom fields for anything else you find useful - e.g. create a "Time Consumer (Estimated)" custom field as a list with "Whale (weeks)", "Elephant (days)", "Tiger (hours)", "Monkey (about an hour)", "Mouse (minutes)".
We're maintaining some Qt applications which run on Linux and Windows desktops. Now we want to make the applications more attractive by adding customized forms and reports for each customer, or at least for groups of customers. There may be 10 or more different versions needed.
As we come from Qt, we are wondering how to manage so many configurations and whether there already is a framework/development system that would help us here. We have been looking at QML/Qt Quick, the Wt toolkit, or even NCReport for the reporting part.
Managing configurations and deriving different versions from a common base does not seem to be a feature that is widely discussed or promoted.
There should be a clean distinction between display and application logic (Model/View)
A textual GUI description would be nice, enabling us to release changes to forms or reports without having to reinstall the whole application (QML seems to offer this)
Also nice would be some kind of report generator that helps create forms and reports for new customers without the need to code them (freeing our core developers from boring work)
Does anybody have experience with this kind of customer-based configuration? It would be nice to have a hint about the best way to do this in the Qt world.
I know comparisons like http://qt-project.org/doc/qt-5.1/qtdoc/topics-ui.html#comparison, but the specific questions I have are not addressed there.
I guess you need to differentiate the applications along these aspects:
1. Appearance - if the applications only differ in button colors, icon images and background themes, Qt's style sheets are light and convenient: you can deploy different QSS files and load the appropriate one without recoding (see the style-sheet sketch after this list). If the variance among customers concerns layouts or available widgets (some have buttons, some use combo boxes, etc.), style sheets cannot meet the requirement; QML seems promising in that case.
2. Business logic - I'm not sure how "generating reports" differs between customers; if the reports need to be printed or saved as documents, I don't think Qt provides a good toolkit for that (QXXXDocument is not suitable for generating/displaying large amounts of document content) - HTML, maybe. And I agree with @hyde that loading different plugins or dynamic libraries can solve this (a rough sketch of the plugin idea also follows below).
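To make the style-sheet option concrete, here is a minimal sketch in Python with PySide6 (the question is about C++ Qt, but QApplication.setStyleSheet is the same call there); the per-customer file name is made up:

```python
import sys
from pathlib import Path
from PySide6.QtWidgets import QApplication, QPushButton

app = QApplication(sys.argv)

# Hypothetical per-customer style sheet shipped alongside the binary;
# replacing this file restyles the application without recompiling.
qss_file = Path("styles/customer_a.qss")
if qss_file.exists():
    app.setStyleSheet(qss_file.read_text())

button = QPushButton("Hello")
button.show()
sys.exit(app.exec())
```

Shipping a new .qss file is then enough to restyle one customer's build, with no recompilation.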
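And to sketch the plugin idea: in C++/Qt this would typically be done with QPluginLoader, but the shape of the approach is language-independent - resolve a customer name to a module and load it at runtime. A rough Python illustration (module and function names are invented):

```python
import importlib

def load_customer_logic(customer: str):
    """Load the business-logic/report module for one customer.

    Module names are invented for the example; falls back to a
    default implementation when no customer-specific module exists.
    """
    try:
        return importlib.import_module(f"reports.{customer}")
    except ModuleNotFoundError:
        return importlib.import_module("reports.default")

# Every customer module exposes the same entry point, so the core
# application doesn't need to know which variant it is running.
logic = load_customer_logic("acme")
logic.generate_report()
```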
What I learned from 8 months of Qt:
The Model/View architecture is there, for example a tree view that we fill with voyage data. The data is gathered from several DB tables, so we have a good logical distinction.
We didn't have the time to work our way into QML, so we stuck with Qt Designer. It's quite easy, so we're fine with that. Delivering changes to customer forms without recompiling will be a feature of a bigger future rework.
Same with report generators...
I've been wondering if anybody knows why the Presentation Details (stored in the Renderings field) are shared across all languages and versions by default?
I have confirmed this is the intended behaviour of a 'shared' field with these links:
This SO post:
In Sitecore, when adding a field to a template, there's a checkbox called "shared". What's it for?
And this SDN resource:
http://sdn.sitecore.net/sdn5/reference/sitecore%205,-d-,3/field%20reference/field%20properties/data%20properties.aspx
The situation
As an author, I have created a new page and pushed it through workflow approval. Everything is great, and the page is published. The next day, I want to make some changes so I open up Page Editor, a new version is created, and I start adding and removing components on the page.
The problem
As soon as I hit save, the approved and published version of my page is also affected. The history of my previous layout is gone. As soon as a Site Publish is executed by somebody (or a scheduled PublishAgent executes) my page in the Web database will be updated.
Sure, the datasources of the new components I added may not be published yet, but what if I added an existing datasource that was already approved? My removals are also immediate.
The desired goal
I'd like to be able to version these changes, and changing the field to no longer be shared seems like the right way to go. In my case, with a unilingual site, this won't impact the multilingual aspect of it.
Does anybody know why this field is shared across versions? If I unshare it, am I completely breaking the upgrade path?
I've just been "having a chat" with Sitecore support on this very issue. The consensus seems to be - paraphrasing what they said a bit - "We think it'll be fine if you change it. You should test it thoroughly: rendering deltas, Page Editor work and so on".
I can add a few comments of my own: unchecking "shared" on __renderings does appear to work, at least at first glance. I've heard of this being done in solutions before, and I've never heard of any ill effects coming from it.
And yet, whenever you mention it, you get a LOT of nervous responses and comments like "you really shouldn't be messing with Sitecore's standard setup". And while that is a valid point indeed, I'd like to add a point of my own to this debate:
Given that, from an API perspective, there are very few things that differ when reading a field value from a "shared" field as opposed to a versioned one, I also believe there are very few potential cases where "unsharing" it would pose a problem.
Or in other words: I consider it low risk. But I've never had a real-life solution running in a live environment with this setting changed either :-)
I'm sorry, but I don't have a direct answer to your question - WHY Sitecore set it up like this I believe to be part of Sitecore's heritage: the idea that multiple language versions of a site should be just "layered" versions of the exact same pages, and that presentation details might therefore just as well be shared - presumably for some performance gain. I am not entirely convinced this vision still holds today, where editors daily "page edit up" new components on new versions, and set up special sale banners and related content weeks in advance.
I completely agree with, and am thankful for, Mark Cassidy's March 3, 2014 answer to this. Since then, Sitecore 8.0 has added "Versioned Layouts".
See:
https://dev.sitecore.net/en/Downloads/Sitecore%20Experience%20Platform/8%200/Sitecore%20Experience%20Platform%208%200/Release%20Notes - "Versioned layouts – a different presentation set on different versions of different languages for the same item".
A nice post on this: http://jockstothecore.com/sitecore-8-versioned-layouts-mixed-feelings/
This is the default behavior of Sitecore, as you mentioned in the post. It's not always good practice to change it. This topic has been discussed earlier, which might help you:
Setting __Renderings field not shared in Sitecore consequences?
Here is a blog post about considerations for doing this:
Unsharing the Layout field in Sitecore - a multi-language strategy
That said, I've worked on a project where our client went in and did this themselves. It caused problems. As I recall, they unshared the __renderings field and all prior versions lost their presentation settings. Also, languages other than the selected one lost their settings. We had to do a DB restore to get things back, and told them never to do that again. If you are considering this, read the blog post above, and do some isolated dry runs, as it may expose issues you weren't aware of (e.g. impacting other languages, old versions, etc.).
I am currently faced with the task of importing around 200K items from a custom CMS implementation into Sitecore. I have created a simple import page which connects to an external SQL database using Entity Framework and I have created all the required data templates.
During a test import of about 5K items I realized that I needed to find a way to make the import run a lot faster, so I set about finding some information about optimizing Sitecore for this purpose. I have concluded that there is not much specific information out there, so I'd like to share what I've found and open the floor for others to contribute further optimizations. My aim is to create some kind of maintenance mode for Sitecore that can be used when importing large volumes of data.
The most useful information I found was in Mark Cassidy's blog post http://intothecore.cassidy.dk/2009/04/migrating-data-into-sitecore.html. At the bottom of this post he provides a few tips for when you are running an import.
If migrating large quantities of data, try and disable as many Sitecore event handlers and whatever else you can get away with.
Use BulkUpdateContext()
Don't forget your target language
If you can, make the fields shared and unversioned. This should help migration execution speed.
The first thing I noticed on this list was the BulkUpdateContext class, as I had never heard of it. I quickly understood why, as a search on the SDN forum and in the PDF documentation returned no hits. So imagine my surprise when I actually tested it out and found that it improves item creation/deletion by at least tenfold!
The next thing I looked at was the first point, where he basically suggests creating a version of web.config that has only the bare essentials needed to perform the import. So far I have removed all events related to creating, saving and deleting items and versions. I have also removed the history engine and system index declarations from the master database element in web.config, as well as any custom events, schedules and search configurations. I expect there are a lot of other things I could look at removing or disabling in order to increase performance. Pipelines? Schedules?
What optimization tips do you have?
Incidentally, BulkUpdateContext() is a very misleading name, as it really improves item creation speed, not item updating speed. But as you also point out, it improves your import speed massively :-)
Since I wrote that post, I've added a few new things to my normal routines when doing imports.
Regularly shrink your databases. They tend to grow large and bulky. To do this, first go to the Sitecore Control Panel -> Database and select "Clean Up Database". After this, do a regular database shrink on your SQL Server.
Disable indexes, especially if importing into the "master" database. For reference, see http://intothecore.cassidy.dk/2010/09/disabling-lucene-indexes.html
Try not to import into "master", however; you will usually find that imports into "web" are a lot faster, mostly because this database isn't (by default) connected to the HistoryManager or other gadgets.
And if you're really adventurous, there's a thing you could try that I'd been considering trying out myself but never got around to. It might work, but I can't guarantee that it will :-)
Try removing all your field types from App_Config/FieldTypes.config. The theory here is that this should essentially disable all of Sitecore's special handling of the content of these fields (like updating the LinkDatabase and so on). You would need to manually trigger a rebuild of the LinkDatabase when done with the import, but that's a relatively small price to pay.
Hope this helps a bit :-)
I'm guessing you've already hit on this, but putting the code inside a SecurityDisabler() block may also speed things up.
I'd be a lot more worried about how Sitecore performs with this much data... Assuming you only do the import once, who cares how long that process takes? Is this going to be a regular occurrence?
I work for a university, and in the past year we finally broke away from our static HTML site of several thousand pages and moved to a Drupal site. This obviously entails massive amounts of data entry.
What if you're already using a CMS and are switching to another one that better suits your needs? How do you minimize the mountain of data entry during such a huge change? Are there tools built for this, or some best practices one should follow?
The Migrate module for Drupal would be a big help. The Economist.com data migration to Drupal will give you an overview of the process.
The video from the Migration: not just for the birds presentation at Drupalcon DC 2009 is probably somewhat out-of-date, but also gives a good introduction.
Expect to have to both pre-process and post-process your data manually, whatever happens. Accept early on that your data is likely to be in a worse state than you think it is: fields will be misused; record-to-record references (foreign keys) might not be implemented properly, or at all; content is likely to need weeding and will occasionally be just bad or incorrect.
Check your database encoding. Older databases won't be in Unicode encodings, and get grumpy if you have to export data dumps and import them elsewhere. Even then, assume there'll be some wacky non-printable characters in your data: programs like Word seem to somehow inject them everywhere, and I've seen... codepoints... you people wouldn't believe. Consider sweeping your data for these characters before you even start (or even sweeping a database dump). Decide whether to junk them or to try to convert them, in the case of e.g. Word "smart" punctuation characters.
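As a concrete example of such a sweep, a small Python pass over exported text can normalise Word's smart punctuation and drop remaining control characters before the import ever sees them; the mapping below is just a starting point, not a complete list:

```python
import unicodedata

# Word's "smart" punctuation, mapped back to plain equivalents.
SMART_PUNCTUATION = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en and em dashes
    "\u00a0": " ",                  # non-breaking space
}

def sweep(text: str) -> str:
    """Convert smart punctuation, then drop other control characters."""
    for smart, plain in SMART_PUNCTUATION.items():
        text = text.replace(smart, plain)
    # Keep newlines and tabs; drop everything else in the Unicode
    # "Other" categories (control, format, unassigned...).
    return "".join(
        ch for ch in text
        if ch in "\n\t" or unicodedata.category(ch)[0] != "C"
    )
```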
It's very difficult to create explicit data structures from implied ones. If your incoming data has a separate date field, you can map it to a date field; if it has a date as part of a big lump of HTML, even if that date is in a tag with an id attribute, simple scripting won't work. You could use offline scripting with BeautifulSoup or (if your HTML is a bit nicer) the faster lxml to pre-process your data set, extract those implicit fields, and save them into an explicit format. Consider creating an intermediate database where these revisions will go.
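A minimal sketch of that pre-processing step with BeautifulSoup; the id attribute and function are invented for the example and would need adjusting to your actual markup:

```python
from bs4 import BeautifulSoup

def extract_date(html: str):
    """Pull an implicit date field out of a lump of legacy HTML.

    Assumes the old templates marked the date with id="pub-date";
    adjust the lookup to whatever your markup actually uses.
    """
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find(id="pub-date")
    return tag.get_text(strip=True) if tag else None

# The extracted value would then be written to a column in the
# intermediate database, giving the real import an explicit field.
```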
The Migrate module is excellent, but to get really good data fidelity and play cleverer tricks you might need to learn about its hook system (Drupal's terminology for functions following a particular naming scheme) and the basics of writing a module to put these hooks in (a module is broadly just a PHP file in which all the functions begin with the same prefix, the name of the module file).
All imported content should be flagged for at least a cursory check. You can do this by importing it with status=0, i.e. unpublished, and then creating a view with the Views module to go through the content and open it in other tabs for checking. Views Bulk Operations gives you a set of checkboxes alongside your view items, so you can approve many nodes at once.
Expect to run and re-run and re-run the import, fixing new things every time. Check ten or twenty items as early as possible. If there are any problems, check ten or twenty more. Fix and repeat the import.
Gauge how long a single import run is likely to take. Be pessimistic: we had an import we expected to take ten hours encounter exponential slowdown when we introduced the full data set; until we finally fixed some slow queries, it was projected to take two weeks.
If in doubt, or if you think the technical aspects of the above will take more time than the work itself, then just hire temps to do the data entry. But you still need decent quality controls, as early as possible during their work. Drupal developers are also for hire: try your country's relevant IRC channel, or post a note in a relevant groups.drupal.org group. They're more expensive than temps, but they usually write better PHP...! Consider hiring an agency too: that's a shameless plug, as I work for one, but sometimes it's best to get experts in for these specific jobs.
Really good imports are always hard, harder than you expect. Don't let it get you down!
Migrate + Table Wizard (plus Schema and Views) is the way to go. With Table Wizard you can expose any table to Drupal and map fields accordingly using Migrate.
Look here for a detailed walkthrough:
http://www.lullabot.com/articles/drupal-data-imports-migrate-and-table-wizard
You'll want to have access to the existing data from Django. This helped me a lot with migrations: http://docs.djangoproject.com/en/1.2/howto/legacy-databases/ . With correct model definitions you'll have the full power of Django, including the admin. In fact, I'm using Django just as an admin backend for several legacy PHP projects - Django's admin can easily out-achieve a lot of custom hand-written admin scripts.
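Concretely, the workflow from that howto is to let inspectdb generate model stubs from the legacy schema and then clean them up by hand. A sketch of the kind of model it produces (the table and field names here are invented):

```python
# First, let Django write model stubs from the legacy schema:
#   python manage.py inspectdb > legacy/models.py
# Then prune and correct the result by hand. It ends up looking
# roughly like this:

from django.db import models

class LegacyArticle(models.Model):
    title = models.CharField(max_length=255)
    body = models.TextField()
    created = models.DateTimeField()

    class Meta:
        managed = False              # Django never creates/alters this table
        db_table = "legacy_article"  # the existing table's real name
```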
Authentication should remain the same. Users should be able to log in with their existing credentials, but it is hard to write a migration script for auth data, because password hashing schemes may differ and there is no way to convert between them without knowing the plain passwords. Django provides a way to support different auth sources, so you can write a Drupal auth backend: http://docs.djangoproject.com/en/1.2/topics/auth/#writing-an-authentication-backend
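A minimal sketch of such a backend, assuming the passwords came from Drupal 6, which stored unsalted MD5 hashes (Drupal 7 moved to salted phpass-style hashes, so the check there is more involved); where the imported hash lives (user.profile.drupal_pass below) is purely illustrative:

```python
import hashlib
from django.contrib.auth.models import User

class DrupalBackend:
    """Authenticate against MD5 password hashes imported from Drupal 6."""

    def authenticate(self, request=None, username=None, password=None):
        try:
            user = User.objects.get(username=username)
        except User.DoesNotExist:
            return None
        # Illustrative: assumes the old Drupal hash was copied into
        # a profile field during the data migration.
        drupal_hash = user.profile.drupal_pass
        if hashlib.md5(password.encode("utf-8")).hexdigest() == drupal_hash:
            return user
        return None

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```

Once a user has authenticated this way, you could also re-hash the now-known plain password into Django's own format, so the legacy field can eventually be dropped.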
There is no need to do a full rewrite. If some parts are working fine, they can still be powered by Drupal. New code can be written in Django with the same UI. Routing between old and new parts can be handled by web-server URL rewriting. Both the Django and Drupal parts can be powered by the same DB.
I am new to Redmine and I'd like to see if there is a good way to relate requirements (as stated by a product manager) to issues in Redmine. To me it seems that a low-impact way to do it would be to define a requirement tracker and then add a custom field with a list of links to feature tickets.
I have tried doing this but cannot figure out how to add a link within a custom field text box.
So I guess I have a general question and a specific question:
General) Is there a recognised recipe in the Redmine community to achieve a linkage from a requirement to a list of features or issues?
Specific) Can I create a link to another issue within an issue field?
I think the answer to both questions is to use the built-in mechanism for linking issues - it's called related issues.
Once an issue is created, you can add a link to another issue and indicate the type of relation (related to, blocks, precedes, etc.)
Separating requirements and features by means of different trackers seems good to me, especially if you'd like to apply different permissions or workflows.
See also the Redmine manual about related issues, and an example of an issue with related issues: http://www.redmine.org/issues/337
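If you later need to create such relations in bulk (say, when importing requirements from elsewhere), issue relations are also exposed through Redmine's REST API. A hedged sketch using Python and the requests library; the URL, API key and issue ids are placeholders:

```python
import requests

REDMINE_URL = "https://redmine.example.com"   # placeholder
API_KEY = "your-api-key"                      # from "My account"

def relate(requirement_id: int, feature_id: int) -> None:
    """Add a 'related to' link between two existing issues."""
    resp = requests.post(
        f"{REDMINE_URL}/issues/{requirement_id}/relations.json",
        json={"relation": {"issue_to_id": feature_id,
                           "relation_type": "relates"}},
        headers={"X-Redmine-API-Key": API_KEY},
    )
    resp.raise_for_status()

relate(101, 337)  # issue ids are made up for the example
```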
EDIT: More recently, subtasks have been added to Redmine. They may be interesting in a scenario where a feature (issue) is implemented by means of different steps (= subtasks, like design, programming, documentation, ...) and/or by different people (for example designer, programmer, ...).