I'm using TemplaVoila 1.5.5 and TYPO3 4.5.5. I tried to switch a TYPO3 database from latin1_swedish_ci (ISO-8859-1) to utf8_general_ci, so I wrote a PHP script which converted everything to binary and then everything to utf8_general_ci. Everything seemed to work except TemplaVoila (all other TYPO3 settings were already prepared for UTF-8, but not the database). I got the following message when opening the TYPO3 page:
Couldn't find a Data Structure set for table/row "pages:x". Please
select a Data Structure and Template Object first.
When I looked at a template mapping, I got a further message saying that no mapping is available. In the table tx_templavoila_tmplobj, the mapping is stored as a BLOB in the templatemapping column, and after the conversion to UTF-8 it is gone. Because it's binary, I can't access and convert it in an easy way.
How can I keep the mapping? I don't want to map everything again from scratch. What can I do?
Two solutions are proposed here, but I want to know if there are better ones. With the solution from Michael, do I also have to map everything again?
What is the fastest way to restore the mapping?
I can't say if you'll be able to recover the data now that it's been converted, but if you're willing to run your conversion script I have had some success with the following approach:
1. Unserialize the data in the templatemapping field of the tx_templavoila_tmplobj table.
2. Convert the unserialized data array to your target encoding (there is a helper method t3lib_cs::convArray which you might be able to use for this purpose).
3. Serialize the converted data and save it back to the templatemapping field (a sketch of these steps follows below).
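For illustration, here is a rough Python sketch of those three steps, using the third-party phpserialize package to handle PHP's serialization format. This is an assumption on my part; inside TYPO3 itself you would do the same thing in PHP with t3lib_cs::convArray instead.

import phpserialize

def latin1_to_utf8(value):
    # Recursively re-encode every string in the unserialized mapping,
    # assuming the blob currently holds Latin-1 encoded strings.
    if isinstance(value, bytes):
        return value.decode('latin-1').encode('utf-8')
    if isinstance(value, dict):
        return {latin1_to_utf8(k): latin1_to_utf8(v)
                for k, v in value.items()}
    return value

def convert_mapping(blob):
    # blob is the raw contents of the templatemapping BLOB column.
    data = phpserialize.loads(blob)
    return phpserialize.dumps(latin1_to_utf8(data))

The return value of convert_mapping() would then be written back to the templatemapping field.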
Fastest way: just change the field type manually back to mediumtext. All mappings should be fine again. I know it's quick and dirty, but it worked for me...
How do I create a model dynamically upon uploading a CSV file? I have done the part where it reads the CSV file.
This doc explains very well how to dynamically create models at runtime in Django. It also links to an example of doing so.
However, as you will see after looking at the document, it is quite complex and cumbersome to do this. I would not recommend it; it is quite likely you can determine a model ahead of time that is flexible enough to handle the CSV. That would be much better practice, since dynamically changing the schema of your database while your application is running is a recipe for a ton of bugs in your code.
I understand that you want to create new schemas on the fly based on the fields in a CSV. While that's a valid use case and could be the absolute right call, I doubt it: it lends itself to a data model for a single-tenant SaaS application that could have goofy performance and migration issues.
I'd try using Mongo or some other NoSQL solution, as others have mentioned. But a simpler approach may be a modified star schema implemented in SQL. In this case you create a dimension table that stores each header, then create an instance of each data element that has a foreign key to its dimension and records the value of that dimension.
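In Django terms, the two tables might look something like this (the model and field names are hypothetical, chosen to match the pseudocode below):

from django.db import models

class Dimension(models.Model):
    # One row per distinct CSV column header.
    name = models.CharField(max_length=255, unique=True)

class DimensionRecord(models.Model):
    # One row per cell value, pointing at the header it belongs to.
    dimension = models.ForeignKey(Dimension, on_delete=models.CASCADE)
    value = models.TextField()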
If you read the CSV, the pseudocode would look something like this:
import csv

with open(path) as f:  # path to the uploaded CSV file
    for row in csv.DictReader(f):
        for k, v in row.items():
            try:
                dim = Dimension.objects.get(name=k)
            except Dimension.DoesNotExist:
                dim = Dimension(name=k)
                dim.save()
            DimensionRecord.objects.create(dimension=dim, value=v)
Obviously you could handle reading the headers and checking for already-existing dimensions more carefully, but this is an example of how you could dynamically load CSVs with variable headers into a SQL database.
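For instance, with the hypothetical models above, all values recorded under a given header can then be pulled back with a single filter across the relation:

price_records = DimensionRecord.objects.filter(dimension__name='price')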
I have what I believe to be a common but complicated problem to model. I've got a product configurator that has a series of buttons. Every time the user clicks on a button (corresponding to a change in the product configuration), the URL will change, essentially creating a bookmarkable state for that configuration. The big caveat: I do not get to know what the configuration options or values are until after app initialization.
I'm modeling this using EmberCLI. After much research, I don't think it's a wise idea to try to fold these directly into the path component, and I'm looking into using the new Ember query string additions. That should work for allowing bookmarkability, but I still have the problem of not knowing what those query parameters are until after initialization.
What I need is a way to allow my Ember app to query the server initially for a list of parameters it should accept. On the link above, the documentation uses the parameter 'filteredArticles' for a computed property. Within the associated function, they've hard-coded the value that the computed property should filter by. Is it a good idea to try to extend this somehow to be generalizable, with arguments? Can I even add query parameters on the fly? I was hoping for an assessment of the validity of this approach before I get stuck down the rabbit hole with it.
I dealt with a similar issue when generating a preview popup of a user's changes. The previewed model had a dynamic set of properties that could not be predetermined. The solution I came up with was to base64 encode a set of data and use that as the query param.
Your URL would then contain something like this: ?filter=ICLkvaDlpb0iLAogICJtc2dfa3
The query param is bound to a two-way computed property that takes in a base64 string and outputs a JSON object,
JSON.parse(atob(serializedPreview));
as well as doing the reverse: taking in a JSON object and outputting a base64 string.
serializedPreview = btoa(JSON.stringify(filterParams));
You'll need some logic to prevent empty JSON objects from being serialized; in that case, you should just set the query param to null and remove it from your URL.
Using this pattern, you can store just about anything you want in your query params and still keep the URL shareable. The downside is that your URL's query params are obfuscated from your users, but I imagine that most users don't really read or edit query params by hand.
I'm completely baffled by a problem I found today: I have a PostgreSQL database with tables which are not managed by Django, and completely normal queries via QuerySet on these tables. However, I've started getting Unicode exceptions and when I went digging, I found that my QuerySets are returning non-Unicode strings!
Example code:
d = Document.objects.get(id=45787)
print repr(d.title), type(d.title)
The output of the above statement is a plain byte string (without the u prefix), followed by <type 'str'>. What's more, this string contains UTF-8 data, as expected, in raw byte form! If I call d.title.decode('utf-8'), I get a valid Unicode string!
Even more puzzling, some of the fields work correctly. The same table/model contains another field, html_filename, of the same type (TextField), which is returned correctly as a Unicode string!
I have no special options, the database data is correctly encoded, and I don't even know where to begin searching for a solution. This is Django 1.6.2.
Update:
Database Server encoding is UTF8, as usual, and the data is correctly encoded. This is on PostgreSQL 9.1 on Ubuntu.
Update 2:
I think I may have found the cause, but I don't know why it behaves this way: I thought the database fields were defined with the text type, as usual, but they are actually defined as citext (http://www.postgresql.org/docs/9.1/static/citext.html). Since the Django model is unmanaged, it looks like Django doesn't recognize the field type as one it should convert to Unicode. Any ideas how to force Django to do this?
Apparently, Django does not treat fields of type citext as textual, and so does not return them as Unicode strings.
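One possible workaround at the database-driver level, sketched under the assumption that psycopg2 is the adapter: look up the OID that PostgreSQL assigned to citext and register it with psycopg2's built-in UNICODE caster, so citext columns come back as Unicode regardless of what Django makes of the field. The helper below, and the idea of calling it once at startup, are illustrative rather than taken from the original answer.

import psycopg2.extensions
from django.db import connection

def register_citext_as_unicode():
    # citext's OID varies per database, so it has to be queried at runtime.
    cursor = connection.cursor()
    cursor.execute("SELECT oid FROM pg_type WHERE typname = 'citext'")
    row = cursor.fetchone()
    if row:
        citext = psycopg2.extensions.new_type(
            (row[0],), 'CITEXT', psycopg2.extensions.UNICODE)
        psycopg2.extensions.register_type(citext)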
I would like to have a "realtime"-like map.
My main question is:
How do I use django-olwidget with OpenLayers.Strategy.Refresh?
Do I need to start over "from scratch" and use OpenLayers manually?
With django-olwidget, the data is embedded in the web page, so the usual arguments defining a data source and protocol do not apply.
My "second" question is about which format should I choose...
geoJSON? kml? other?
Can those formats contain OpenLayers point-specific "style" specifications like the following?
{'graphic_name': 'square', 'point_radius': 10, 'fill_color': '#ABBAAB', 'stroke_color': '#BAABBA'}
I have already overridden the default map template olwidget/multi_layer_map.html to access my map object in JS. I think it should be rather simple to apply a JS function to each data layer before passing it to the map.
Thanks in advance.
PS: I'm a French speaker.
PS2: I asked this question as a feature request on GitHub: https://github.com/yourcelf/olwidget/issues/89
If you're going to use regularly-refreshing data (without refreshing the page) and serialization formats like geoJSON and KML, django-olwidget won't help you very much out of the box. You might find it easier just to use OpenLayers from scratch.
But if you really wanted to use django-olwidget, here's what I would do:
Subclass olwidget.InfoLayer to create a new vector layer type that uses a network-native format like geoJSON or KML to acquire its data.
Add a corresponding python subclass to be able to use it with Django forms or whatever the use case is. You'll probably need to specify things like the URL from which the map will poll its data.
This is a lot of work compared to writing for OpenLayers directly. The advantage would be easy Django form integration with the same map.
As to which serialization format to use: I'm partial to JSON flavors over XML flavors such as KML, but it really doesn't matter much -- Django and OpenLayers speak both fluently.
About the styling, you should take a look at StyleMap [1], where you can set style properties according to feature attributes.
For the main question, I’m sorry I don’t know django-olwidget…
[1] http://openlayers.org/dev/examples/stylemap.html
I've found myself unsatisfied with Django's ability to render JSON data. If I use the built-in serializers, foreign key relationships are not included in the data (only the keys are). Also, it seems to be impossible to include custom data in the JSON feed that isn't part of the model being serialized.
As a test I implemented a template that rendered some JSON for the resultset of a particular model. I was able to include/exclude whatever parts of the model I wanted and was able to include custom data as well.
The test seemed to work well and wasn't slower than the recommended serialization methods.
Are there any pitfalls to using this method of serialization?
While it's hard to say definitively whether this method has any pitfalls, it's the method we use in production, as you control everything that is serialized even if the underlying model is changed. We've been running a high-traffic application this way for almost two years.
Hope this helps.
One problem might be escaping metacharacters like ". Django's template system automatically escapes dangerous characters, but it's set up to do that for HTML. You should look up exactly what the template escaping does, and compare that to what's dangerous in JSON. Otherwise, you could cause XSS problems.
You could think about constructing a data structure of dicts and lists, and then running a JSON serializer on that, rather than directly on your database model.
I don't understand why you see the choice as being either 'use Django serializers' or 'write JSON in templates'. The middle way, which to my mind is much more robust and fits your use case well, is to build up your data as Python lists/dictionaries and then simply use simplejson.dumps() to convert it to a JSON string.
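As a minimal sketch of that middle way (the Document model, its author relation, and the view name are hypothetical; json here is the standard-library equivalent of simplejson):

import json
from django.http import HttpResponse

def document_list(request):
    # Build plain lists/dicts, following foreign keys and mixing in
    # custom data, then serialize everything in one step.
    payload = [
        {
            'id': doc.id,
            'title': doc.title,
            'author': doc.author.name,      # FK followed explicitly
            'url': doc.get_absolute_url(),  # custom, non-field data
        }
        for doc in Document.objects.select_related('author')
    ]
    return HttpResponse(json.dumps(payload),
                        content_type='application/json')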
We use this method to get a custom JSON format consumed by datatables.net.
It was the easiest method we found to accomplish this task, and it has worked fine with no problems so far.
You can find details here: http://datatables.net/development/server-side/django
So far, generating JSON from templates, we've run into the need to escape newlines. We're looking at using simplejson.dumps() next.