I have a ColdFusion (8) application that hooks into a MySQL DB. It uses a DSN connection.
I was wondering if there is a way to create a backup of the DB? It's fairly hefty at around 10 GB, so I was wondering if there are any extra precautions I'd need to take to ensure it's successful, e.g. preventing timeouts.
Thanks,
JJ
I'm not able to test this currently, but theoretically you can use <cfexecute> to call the mysqldump utility on the database server. <cfexecute> takes a timeout= attribute, so you can leave a nice long timeout on it.
The documentation for mysqldump is here: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html. I've not used it myself, so I can't advise on the correct parameters (which you would pass in via the arguments= attribute of <cfexecute>).
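An untested sketch of what that might look like, writing the dump straight to a file so ColdFusion never has to hold it in memory; the paths, credentials, and database name below are placeholders:

    <!--- Untested: --single-transaction avoids locking InnoDB tables
          for the duration of a 10 GB dump; timeout is in seconds --->
    <cfexecute name="C:\mysql\bin\mysqldump.exe"
               arguments="--user=backup_user --password=secret --single-transaction mydatabase"
               outputFile="C:\backups\mydatabase.sql"
               timeout="3600">
    </cfexecute>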
I have a directory structure of individual databases (around 500) that are accessed via individual connections. However, processing queries can be quite slow. Profiling has given me the hint that the reason is that every connection (set up via sqlite3_open_v2) uses the default VFS, which, after enough connections, has 500 entries, and every SQLite function that searches through this list takes some time.
Now my question:
Would it be possible to speed up the process by creating an individual vfs for each connection since I never access more than one table from a connection? If yes, how can this be achieved?
Regards
I also contacted SQLite support. The problem is known, and a fix will go into a future release. Thanks anyway!
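For anyone who hits this before that fix ships: registering a uniquely named copy of the default VFS per connection would look roughly like the sketch below. It's untested, the names are made up, and whether it actually avoids the list scan depends on SQLite's internals:

    #include <sqlite3.h>
    #include <string>

    // Clone the default VFS under a unique name and open the connection
    // through the clone. The clone shares all the default method pointers;
    // only the name differs. Both allocations are deliberately kept alive
    // for the lifetime of the process.
    sqlite3 *open_with_private_vfs(const std::string &path, int index) {
        sqlite3_vfs *def = sqlite3_vfs_find(nullptr);        // default VFS
        sqlite3_vfs *copy = new sqlite3_vfs(*def);
        std::string *name = new std::string("private_vfs_" + std::to_string(index));
        copy->zName = name->c_str();

        if (sqlite3_vfs_register(copy, 0) != SQLITE_OK)      // 0 = not the default
            return nullptr;

        sqlite3 *db = nullptr;
        if (sqlite3_open_v2(path.c_str(), &db,
                            SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                            copy->zName) != SQLITE_OK) {
            sqlite3_close(db);
            return nullptr;
        }
        return db;
    }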
We're looking into implementing audit logs in our application and we're not sure how to do it correctly.
I know that django-reversion works and works well but there's a cost of using it.
The web server will have to make two round trips to the database when saving a record, even if the save is in the same transaction, because (at least in Postgres) the changes are first written to the database and committing the transaction is what makes them visible.
So this will block the web server until the revision is saved to the database, at least while we're not using async I/O, which is currently the case. Even if we did use async I/O, generating the revision's data takes CPU time, which again blocks the web server from handling other requests.
We can use database triggers instead but our DBA claims that offloading this sort of work to the database will use resources that are meant for handling more transactions.
Is using database triggers for this sort of work a bad idea?
We can scale both the web servers using a load balancer and the database using read/write replicas.
Are there any tradeoffs we're missing here?
What would help us decide?
You need to think about the pattern of DB usage in your website.
It may be unique to you; however, most web apps read from the DB much more often than they write to it. In fact, it's fairly common to see optimisations done to help scale a web app that trade off more complicated 'save' operations for faster reads. An example would be denormalisation, where some data from related records is copied to the parent record on each save, so as to avoid repeatedly doing complicated aggregate/join queries.
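A minimal sketch of that denormalisation idea, with models made up for illustration:

    from django.db import models

    class Post(models.Model):
        title = models.CharField(max_length=200)
        comment_count = models.IntegerField(default=0)  # denormalised copy

    class Comment(models.Model):
        post = models.ForeignKey(Post, on_delete=models.CASCADE)
        body = models.TextField()

        def save(self, *args, **kwargs):
            super().save(*args, **kwargs)
            # Extra work on the (rarer) write...
            self.post.comment_count = self.post.comment_set.count()
            # ...so the (frequent) reads never need a COUNT(*) aggregate.
            self.post.save(update_fields=["comment_count"])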
This is just an example, but unless you know your specific situation is different I'd say don't worry about doing a bit of extra work on save.
One caveat would be to consider excluding some models from the revisioning system. For example if you are using Django db-backed sessions, the session records are saved on every request. You'd want to avoid doing unnecessary work there.
As for doing it via triggers vs Django app... I think the main considerations here are not to do with performance:
The Django app solution is more 'obvious' and 'maintainable': the app will be in your pip requirements file and in Django's INSTALLED_APPS, so it's obvious to other developers that it's there and working, and it doesn't need someone to remember to run the custom SQL on the db server when you move to a new server.
With a db trigger solution you can be certain it will run whenever a record is changed by any means, whereas with a Django app, anyone changing records via a psql console will bypass it. Even in the Django ORM, certain bulk operations bypass the model save method/save signals. Sometimes this is desirable, however.
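To make that bypass point concrete, here is roughly what the app-level approach looks like; this is a sketch with made-up model names, not django-reversion's actual implementation. Anything that skips save(), such as bulk_create() or raw SQL in psql, will also skip this signal:

    from django.db import models
    from django.db.models.signals import post_save
    from django.dispatch import receiver
    from django.forms.models import model_to_dict

    class AuditLog(models.Model):
        model_name = models.CharField(max_length=100)
        object_pk = models.TextField()
        # JSONField needs Django 3.1+; older versions could use a TextField
        data = models.JSONField()
        created_at = models.DateTimeField(auto_now_add=True)

    @receiver(post_save)
    def write_audit_entry(sender, instance, **kwargs):
        if sender is AuditLog:  # don't audit our own writes
            return
        # This extra INSERT is the second round trip discussed in the question.
        AuditLog.objects.create(
            model_name=sender.__name__,
            object_pk=str(instance.pk),
            # model_to_dict is a rough serialisation; real code needs more care
            data=model_to_dict(instance),
        )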
Another thing I'd point out is that your production web server will be multi-process/multi-threaded, so although, yes, a lengthy db write will block the web server, it will only block the current process. Your web server will have other processes which are able to serve other requests concurrently, so it won't block the whole web server.
So again, unless you have a pattern of usage where you anticipate a high frequency of concurrent writes to the db, I'd say probably don't worry about it.
I have a question regarding SQL. Is there any way to connect to a database in SQL without a server (no localhost or anything)? I want to use SQL for resource management in C++. I found the API, but I need to know if that's possible, so I can use it like that.
Try SQLite.
SQLite is a SQL database engine that saves the database to a single file on the file system, as opposed to requiring a full server. It does not have all the SQL performance optimisations of full servers such as Postgres or MySQL; however, it does not carry the overhead of a server either.
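A minimal, self-contained sketch of using it from C++; the file and table names here are made up:

    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        // Opens the file if it exists, creates it otherwise; no server involved.
        sqlite3 *db = nullptr;
        if (sqlite3_open("resources.db", &db) != SQLITE_OK) {
            std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        char *err = nullptr;
        int rc = sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS resources(name TEXT, count INTEGER);",
            nullptr, nullptr, &err);
        if (rc != SQLITE_OK) {
            std::fprintf(stderr, "exec failed: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return rc == SQLITE_OK ? 0 : 1;
    }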
There are many options; one I know of is SQL Server Compact.
Here's a link to Microsoft's API for accessing SQL Server Compact via C++.
I'm in the process of porting a Java desktop application to a ColdFusion web app. This desktop app made queries with very large result sets (thousands of text records) that, while being all right on the database side, could take a lot of memory on the client side if they were buffered. For this reason, the app explicitly tells the database driver to not buffer results too much.
Now that I'm working on the ColdFusion port, I'm being hit by the buffering problem. The ColdFusion page times out during the <cfquery> call, and I'm fairly sure this is because it tries to buffer everything.
Can I make an unbuffered query in ColdFusion?
If pagination is not an option (e.g. you're writing out a report), then you'll have to get low-level with the Java, using setFetchSize(). See this answer. Note that the code in that answer uses the DataSourceService, which, with the latest security patches from Adobe, is no longer available on CF8. You'll have to figure out how to get a connection via the adminapi, or create a connection outside of ColdFusion. Or you could transition your datasource to use JNDI, and then you can look up the resource yourself without using CF APIs.
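If you do go the JNDI route, the shape of it would be something like the following. This is untested; the JNDI name, table, and column names are placeholders, and the Integer.MIN_VALUE fetch size is a MySQL Connector/J convention that tells the driver to stream rows instead of buffering the whole result:

    <!--- Untested sketch: look up the datasource in JNDI and stream rows --->
    <cfset ctx = createObject("java", "javax.naming.InitialContext").init()>
    <cfset ds = ctx.lookup("java:comp/env/jdbc/myDSN")>
    <cfset conn = ds.getConnection()>
    <cfset stmt = conn.createStatement()>
    <!--- Integer.MIN_VALUE (-2147483648) = "stream, don't buffer" on MySQL --->
    <cfset stmt.setFetchSize(-2147483648)>
    <cfset rs = stmt.executeQuery("SELECT id, payload FROM big_table")>
    <cfloop condition="rs.next()">
        <!--- handle one row at a time, e.g. rs.getString("payload") --->
    </cfloop>
    <cfset rs.close()>
    <cfset stmt.close()>
    <cfset conn.close()>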
I'm almost certain that ColdFusion does not provide such a mechanism. As a language, it was meant to abstract the developer away from things like that.
I'd suggest that you look into a few options:
Re-work your query to use pagination, and run it in a loop (there's a sketch of this at the end of this answer).
Use the timeout attribute on the <cfquery> to prevent timeouts from happening
Use the CreateObject() syntax to instantiate a JDBC database connection.
With the last option, what you'd actually do is access the underlying Java classes to do the querying and get the results. Take a look at this article for a quick look at the CreateObject() function.
You can also look at the Adobe Livedocs for the function, but they don't really seem helpful.
I haven't tried to use CreateObject() to do querying with the Java database access classes, but I imagine that you can probably get it to work.
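For the pagination option, a rough sketch (untested; the datasource, table, and column names are placeholders):

    <cfset pageSize = 1000>
    <cfset offset = 0>
    <cfset keepGoing = true>
    <cfloop condition="keepGoing">
        <!--- Each chunk is small enough for ColdFusion to buffer safely --->
        <cfquery name="chunk" datasource="myDSN" timeout="300">
            SELECT id, payload
            FROM big_table
            ORDER BY id
            LIMIT #pageSize# OFFSET #offset#
        </cfquery>
        <!--- process chunk.id / chunk.payload here --->
        <cfset offset = offset + pageSize>
        <cfset keepGoing = (chunk.recordCount EQ pageSize)>
    </cfloop>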
So I am creating a C++ server that emulates HTTP over TCP. It will have a simple authentication service, written in C++, and it will have sessions. I wonder what form I should give them: real files on the server, rows in the SQLite DB I use for my server, or should I just keep them in RAM? Which way is better for performance / safety?
It all depends on what you want to do:
Keeping them in SQLite is safer than files: you're sure each session is either written or not, with no half-written state. Moreover, it's easier to fetch a session with a query. In that sense, it's safer.
Keeping them in RAM will be better in terms of performance, but all sessions will be lost when you restart your server.
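If you go the SQLite route, storing a session comes down to a parameterised insert, roughly like this (a sketch; the table and column names are made up):

    #include <sqlite3.h>
    #include <string>

    // Persist a session token; INSERT OR REPLACE makes re-saving idempotent.
    bool save_session(sqlite3 *db, const std::string &id, const std::string &user) {
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS sessions(id TEXT PRIMARY KEY, user TEXT);",
            nullptr, nullptr, nullptr);

        sqlite3_stmt *stmt = nullptr;
        if (sqlite3_prepare_v2(db,
                "INSERT OR REPLACE INTO sessions(id, user) VALUES(?, ?);",
                -1, &stmt, nullptr) != SQLITE_OK)
            return false;
        sqlite3_bind_text(stmt, 1, id.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, user.c_str(), -1, SQLITE_TRANSIENT);
        bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
        sqlite3_finalize(stmt);
        return ok;
    }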