Generate SQL INSERT statements for SQLite with Django

I want to generate a complete SQL file with Django that can be downloaded and executed to create a SQLite DB.
The problem is escaping the strings to insert them into the file to download. This is what I got so far:
name = MySQLdb.escape_string(self.name.encode("utf-8")).decode("utf-8")
return "INSERT INTO names VALUES(%d,'%s');" % (self.id, name)
But unfortunately MySQL escapes single quotes with a backslash (\), which SQLite does not accept. I would prefer to use something built into Django to replace MySQLdb.escape_string(). I'm also not sure whether there are other incompatibilities between MySQL-style escaping and SQLite, so it would be best to avoid MySQLdb.escape_string() entirely.
As a last recourse I would do this before returning:
name = name.replace("\\\'","\'\'")
Any thoughts?

SQLite has the quote() SQL function for this.
It's also used by the sqlite3 command-line tool when you .dump an entire database.
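For illustration, here is a minimal sketch of how you could call quote() from Python's built-in sqlite3 module instead of MySQLdb.escape_string(), so SQLite itself produces the escaping it expects (the table name and the id/name values are taken from the question; the function name is made up):

import sqlite3

def insert_statement(row_id, name):
    """Build an INSERT statement, letting SQLite's quote() do the escaping."""
    con = sqlite3.connect(":memory:")
    # quote() doubles any embedded single quotes and wraps the value
    # in single quotes, which is exactly the form SQLite expects.
    quoted = con.execute("SELECT quote(?)", (name,)).fetchone()[0]
    con.close()
    return "INSERT INTO names VALUES(%d,%s);" % (row_id, quoted)

print(insert_statement(1, "O'Brien"))
# INSERT INTO names VALUES(1,'O''Brien');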

Related

Python in Knime: Downloading files and dynamically pressing them into workflow

I'm using KNIME 3.1.2 on OS X and Linux for OpenMS analysis (mass spectrometry).
Currently, it uses static filename.mzML files manually put in a directory. It usually has more than one file pressed in at a time ('Input FileS' module not 'Input File' module) using a ZipLoopStart.
I want these files to be downloaded dynamically and then pressed into the workflow...but I'm not sure the best way to do that.
Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??).
It can also download them to a directory...which maybe can then be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the Python script is run.
I also could have the Python script run as a separate entity (outside of KNIME) and then, once the directory is populated, call KNIME...HOWEVER there will always be a different number of files (maybe one, maybe three)...and I don't know how to make the 'Input Files' KNIME node handle an unknown number of input files.
I hope this makes sense.
Thanks!
Thanks to Gábor for getting me on the right track, although I ended up taking a slightly different route after much experimentation.
===
Being new to KNIME, I don't know if this is an efficient use of KNIME or a complete kludge...but it does work.
So, part of the problem is some of the KNIME-specific objects, one of which is called URIDataValue.
A Python Pandas dataframe is, apparently, interchangeable with KNIME tables. However, I don't know if there's a way to import one of these URIDataValue objects into Python. So here's what I did...
1. I wrote a Python script that creates a Pandas DataFrame and populates it with one column. Everything is a string, including the column header:
from pandas import DataFrame

# Build a one-column table of file URIs (everything is a plain string)
T = DataFrame(
    [
        ['file:///Users/.../copy/lfq_spikein_dilution_1.mzML'],
        ['file:///Users/.../copy/lfq_spikein_dilution_2.mzML'],
    ],
)
T.columns = ['URIDataValue']
# print T
output_table = T
That creates a dataframe with a single 'URIDataValue' column and the two URI strings as rows.
Note: The column name and values are just strings. But it is (apparently) important that the column header be 'URIDataValue'...even though here it's just text. If the column name is not 'URIDataValue', the next node doesn't know what to do.
NEXT, the 'output_table' from the 'Python Source' node is patched to a 'String to URI' node, which (apparently and magically) knows to change the entire column's string values to URIDataValues (presumably based on the name of the first column...I don't know that for sure).
Finally, the NEW table, with the correct data objects, goes to a 'URI to PORT' node...since apparently 'Port' objects and 'URI' objects are different.
This then matches the needed input to the ZipLoop...which is normally the output from a static (hard-coded) 'Input Files' node.
Now, to actually solve the question above, I just have to add the code to my 'Python Source' node to download and unzip the S3 files, then annotate the dataframe with their locations, and go (a sketch of that step follows below).
I have no idea what I'm doing, but it worked.
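For reference, here is a minimal sketch of what that download-and-annotate step could look like inside the 'Python Source' node. It is not from the original workflow: boto3, the bucket name, the object keys, and the temp paths are all assumptions/placeholders:

import gzip
import shutil

import boto3  # assumed AWS SDK client; not part of the original workflow
from pandas import DataFrame

s3 = boto3.client('s3')
bucket = 'my-bucket'                      # hypothetical bucket name
keys = ['run1.mzML.gz', 'run2.mzML.gz']   # hypothetical object keys

uris = []
for key in keys:
    gz_path = '/tmp/' + key
    mzml_path = gz_path[:-3]  # strip the trailing '.gz'
    s3.download_file(bucket, key, gz_path)
    # Decompress to a plain .mzML file next to the download
    with gzip.open(gz_path, 'rb') as src, open(mzml_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)
    uris.append(['file://' + mzml_path])

# Same shape as the hand-built table above
output_table = DataFrame(uris, columns=['URIDataValue'])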
There are multiple options to make this work:
1. Convert the files in-memory to Binary Object cells using Python; later you can use those in KNIME. (I am not sure this one is supported, but as I remember it was demoed at one of the recent KNIME gatherings.)
2. Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node via a flow variable connection to a file reader node in KNIME (which should work in a loop: List Files, check the Iterate List of Files metanode).
3. Maybe there is already S3 Remote File Handling support in KNIME, so you can do the downloading and unzipping within KNIME. (Not that I know of, but it would be nice.)
I would go with option 2, but I am not so familiar with Python, so for you option 1 is probably the best. (In case option 3 is supported, that is the best in my opinion.)

Django character latin1 mysql

I created a Django application on top of an existing MySQL database. The problem is that the database encoding is latin1_general_c, so UTF-8 characters are saved like this: ñ => ñ, ó => ó. I need to present the information on the page correctly, but Django shows the database content like this:
recepción, 4 oficinas, 2 baños
I need it shown like this:
recepción, 4 oficinas, 2 baños
For many reasons I can't change the database to utf8.
What can I do to show the information correctly?
Django seems to explicitly require that your database use UTF-8:
Django assumes that all databases use UTF-8 encoding. Using other encodings may result in unexpected behavior.
That said, it should be possible to use the custom OPTIONS setting to pass the desired character set to the database driver. See this answer, in which setting charset solved the problem. But you should check the documentation for the specific MySQL driver and database that you're using, since these options are not used by Django itself.
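For illustration, a hedged sketch of what that could look like in settings.py; the database name and credentials are placeholders, and both the exact option name and the charset value you need depend on your MySQL driver and on how the data was written, so check the driver's documentation first:

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',        # hypothetical database name
        'USER': 'myuser',      # hypothetical credentials
        'PASSWORD': 'secret',
        'OPTIONS': {
            # Passed through to the driver; 'charset' is the option
            # used with the MySQLdb driver in the linked answer.
            # Which value to use depends on how the data was stored.
            'charset': 'latin1',
        },
    }
}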
As suggested by the quote above, even if this works to correctly translate strings between the database and unicode, parts of Django still won't work correctly. For example, to correctly validate the length of a CharField Django has to know the encoding, and it will always assume UTF-8.

Copy in Postgresql: Absolute Path Interpreted as Relative Path

I am running this statement in a Django app:
from django.db import connections

c = connections['default'].cursor()
query = "copy (select * from analysis.\"{0}\") to STDOUT DELIMITER ',' CSV HEADER;".format(view_name)
with open(csvFile, 'w') as f:
    c.copy_expert(query, f)
# no explicit f.close() needed; the with block already closes the file
It does not create the correct CSV file: some of the values appear in the wrong columns. I am trying to test the SQL statement by running it in PostgreSQL directly:
copy (select * from analysis."S03_2005_activity_140807_153431_with_geom") to 'C:/djangoProjects/web_output/csvfiles/S03_2005_activity_140807_153431_with_geom.csv' DELIMITER ',' CSV HEADER;
It gives me: "ERROR: relative path not allowed for COPY to file". I have looked into the issue, and it typically comes down to one of two things: (1) confusing '\' and '/' (my slashes should be correct), or (2) the server being on a different computer. I thought this might be my issue, as the database is located on an external computer, but I have the connection set up in my PostgreSQL client. It also runs from Django, so I'm not sure why it isn't working from pgAdmin.
If you want to store data / get data from your local machine and communicate with a Postgres server on a different, remote machine, you cannot simply use COPY.
Try the meta-command \copy in psql. It's a wrapper for the SQL COPY command and uses local files.
Your filename should work as-is on a Windows machine, but Postgres interprets it as a filename local to the server, which is probably a Unix derivative, and there the filename would have to start with '/'.
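If you are driving this from Python rather than from psql, the client-side equivalent of \copy is exactly the COPY ... TO STDOUT pattern from the question, which streams the data over the connection and writes a local file. A minimal sketch with psycopg2 (the DSN, view name, and output path are placeholders):

import psycopg2

conn = psycopg2.connect('dbname=mydb')  # hypothetical DSN
with conn.cursor() as cur, open('out.csv', 'w') as f:
    # COPY ... TO STDOUT runs on the server but streams to this client,
    # so no server-side file path is needed.
    cur.copy_expert(
        'COPY (SELECT * FROM analysis."myview") TO STDOUT '
        "DELIMITER ',' CSV HEADER;",
        f,
    )
conn.close()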

How do I display a field name containing the substring OMIT in ApEx?

One of the fields in my database table is named DATEOFDISCHARGEFROMITU. In any report output, this displays as DATEOFDISCHARGEFRU. I've figured out that the missing characters form the word 'OMIT', which makes me think it's related to this old problem in a previous version of ApEx (I'm using version 4.1.)
Is there a way to display the whole field name in the report header when the field name contains the string 'OMIT'?
Note: Using HTML character codes will allow the field name to display properly, but when the report is exported to CSV the character codes are of course shown instead of the full field name. I need a solution that works for exports as well as for displaying onscreen.
Platforms (tested): Oracle Application Express (APEX), Version 4.0.2
Note: I am not sure how the linked OTN post is relevant to your problem aside from the coincidence that their file export contains the word "OMIT" and your column title contains the word "OMIT".
It's safe to say that "OMIT" isn't an APEX or Oracle reserved word that is sabotaging your output. If, on the other hand, you were talking about a scrap of SQL that attempted to create a table named "SELECT" or "WHERE", as in
SELECT * FROM "SELECT" WHERE...
then you'd be blocked by the RDBMS from proceeding. :)
I tried an export with a query that contained a column header labeled "OMIT" (on the far right in my test). In the .csv file, as interpreted by Microsoft Excel, the header came through intact.
I wrote up a separate Q&A post about creating dynamic APEX report headers to answer your follow-on question about a suitable solution: providing clean, HTML-code-free output when a report is eventually exported to a text, comma separated (or other delimited) format.
In summary, the linked post suggests setting up a dynamic PL/SQL function within a page item. The page item can then be referenced directly in the report column header definition.
The linked post has more details on the APEX design tasks that lead to this final product.
Onward.
I solved this by using this solution for exporting to CSV without an enclosing quote character, as that was another challenge I faced in the particular application I was developing. By manually creating the export file I was also able to define the column headings exactly, and the "OMIT" issue did not occur.
Technically that's not a solution for displaying a report with the required headings that can also be exported (Richard's response does that), but it does what I need and solves the immediate problem of the DATEOFDISCHARGEFROMITU column heading.

Full Text Search with SQLite3 C library with character "-" not working

I have an SQLite3 database that uses FTS3. It works well in the SQLite3 command-line tool, but when using the C library (via wxSQLite3, though that should not make a difference, I guess), it does not work with queries containing the "-" character, something like
SELECT * FROM Table WHERE columnx MATCH 'text1 -text2'. However, this works fine in the command-line version.
I have no idea why it does not work. All other FTS MATCH conditions I have tried work fine.
Note: I have added wxWidgets to tags instead of wxSQLite3 as I cannot create new tags
Apparently, your two SQLite builds are configured differently regarding the standard/enhanced query syntax; try WHERE columnx MATCH 'text1 NOT text2'.
To enable the enhanced query syntax, compile with the SQLITE_ENABLE_FTS3_PARENTHESIS macro.
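To see which syntax a given build uses, you can probe it with both query forms. A small sketch using Python's bundled sqlite3, assuming that build ships with FTS3 compiled in:

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE VIRTUAL TABLE docs USING fts3(columnx)')
con.executemany('INSERT INTO docs VALUES (?)',
                [('text1 alone',), ('text1 with text2',)])

# '-text2' is exclusion under the standard query syntax only;
# 'NOT text2' is exclusion under the enhanced syntax only
# (enabled by SQLITE_ENABLE_FTS3_PARENTHESIS at compile time).
for q in ('text1 -text2', 'text1 NOT text2'):
    try:
        rows = con.execute(
            'SELECT columnx FROM docs WHERE columnx MATCH ?', (q,)
        ).fetchall()
        print(repr(q), '->', rows)
    except sqlite3.OperationalError as e:
        print(repr(q), '-> error:', e)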