What do I need to do to run Datomic with Caribou framework, both for dev and prod servers?
In other words, how can I hack Caribou to make it happen?
Hope it makes sense! Thank you!
I'm one of the Caribou devs.
We use a db protocol to abstract over the differences between databases, and I have a long-term plan to expand that protocol so we can use storage that isn't SQL, Datomic in particular (as well as Neo4j). We avoid SQL in the model namespace itself, so most of the changes would live in the db adapter protocol: it would need to be expanded, and some existing operations would need to be swapped out for protocol calls.
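To illustrate what that adapter protocol amounts to conceptually (Caribou's real protocol is a Clojure protocol and its operation names differ; the Python below is only a sketch with made-up method names):

```python
from abc import ABC, abstractmethod

# Conceptual sketch only -- not Caribou's actual API. The idea is that the
# model layer only talks to this interface, never to SQL directly.
class StorageAdapter(ABC):
    @abstractmethod
    def fetch(self, model, conditions):
        """Return records of `model` matching `conditions`."""

    @abstractmethod
    def insert(self, model, values):
        """Persist a new record and return it."""

    @abstractmethod
    def update(self, model, conditions, values):
        """Modify existing records."""

class PostgresAdapter(StorageAdapter):
    def fetch(self, model, conditions):
        ...  # build and run SQL

class DatomicAdapter(StorageAdapter):
    def fetch(self, model, conditions):
        ...  # build and run a Datalog query against Datomic
```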
If you want to contribute to this, I would be happy to provide some guidance, but the above is a rough outline of what would be needed.
I'm not a Caribou expert, but from what I've seen browsing the source code, I don't think it's currently designed for Datomic plug-and-play.
Most of the critical model-querying functions are straight-up SQL, and the same goes for model creation.
So you could try rewriting the whole of model.clj with the same API, which would be difficult, or you could try using model hooks, but that would be a real hack.
I'm not a Caribou maintainer, but I don't think it's currently designed with Datomic or any other NoSQL database in mind, as you can see from the currently supported database adapters.
We would like some recommendations for creating RESTful web services. We went through many articles and answers, but most of them are specific to a single framework. Can someone please point us to a comparison article that would help us understand the different frameworks?
Please also explain how to handle login and how to use the web services.
There really isn't a good way to answer this other than "it depends". If you're talking open source, the standard for a long time was Linux, Apache, MySQL, and PHP (a.k.a. LAMP), but some folks prefer Postgres, or a NoSQL solution like MongoDB or CouchDB.
Given that, you need to decide whether you want to build on top of a framework (or several), and choose a language direction. If you want Java, Spring and Hibernate have pretty good support and are fairly mature.
Most shops have a set of developers with certain skills that you can leverage, and typically, that's how the decision is made. You don't want to do something completely new and have to retrain everyone.
Without knowing what your goal is, or anything about your situation, it's going to be tough to suggest a reasonable path forward. Sometimes you need to look at how you're going to host your site and find vendors that support your stack. A little research will help you figure out where you need to go.
Sometimes it's worth abandoning the open source path and going with something like IIS and ASP.NET.
We need reasonable insert and query speed over huge tables, so I considered using a NoSQL adapter with Django. Unfortunately:
Django does not provide official support for NoSQL databases.
In our original schema, some of the big tables are related to other big tables, which makes data duplication unacceptable.
Project deadlines are enemies of hot stuff like this.
So, as far as I can see, PostgreSQL should be the way to go for this scenario, right?!
Please let me know any other detail that may be relevant to this question!
Bonus points to anyone who can point out some useful database techniques, like database sharding...
Well, there is a fork of the Django project that uses MongoDB as the backend. You can read about it here. The code on GitHub is here. To give you some heads up, MongoDB is a NoSQL db that does support sharding and replication, so I think this might be something you are looking for.
This is more of an architectural question than a technological one per se.
I am currently building a business website/social network that needs to store large volumes of data and use that data to draw analytics (consumer behavior).
I am using Django and a PostgreSQL database.
Now my question is: I want to expand this architecture to include a data warehouse. The ideal would be: the operational DB would be the current Django PostgreSQL database, and the data warehouse would be something additional, preferably in a multidimensional model.
We are still in a very early phase, we are going to test with 50 users, so something primitive such as a one-column table for starters would be enough.
I would like to know if somebody has experience with this situation and could recommend a framework for creating a data warehouse, while keeping the operational DB on the Django models for ease of use (if possible).
Thank you in advance!
Here are some cool Open Source tools I used recently:
Kettle - a great ETL tool; you can use it to extract the data from your operational database into your warehouse. It supports any database with a JDBC driver and makes it very easy to build e.g. a star schema (a rough sketch follows below).
Saiku - a nice Web 2.0 frontend built on Pentaho Mondrian (an MDX implementation). It allows your users to easily build complex aggregation queries (think pivot tables in Excel), and the Mondrian layer provides caching etc. to keep things fast. Try the demo here.
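To make the star schema idea concrete, here is roughly what the warehouse side could look like if you expressed it with Django models (table and field names are made up; in practice the warehouse schema often lives outside Django entirely):

```python
# Hypothetical star schema: two dimension tables plus a fact table of
# page-view events. Purely illustrative field names.
from django.db import models

class DateDim(models.Model):
    date = models.DateField(unique=True)
    month = models.IntegerField()
    year = models.IntegerField()

class UserDim(models.Model):
    user_id = models.IntegerField(unique=True)
    segment = models.CharField(max_length=50)

class PageViewFact(models.Model):
    date = models.ForeignKey(DateDim, on_delete=models.PROTECT)
    user = models.ForeignKey(UserDim, on_delete=models.PROTECT)
    views = models.IntegerField(default=0)
    seconds_on_page = models.IntegerField(default=0)
```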
My answer does not necessarily apply to data warehousing. In your case I see the possibility of implementing a NoSQL database solution alongside your OLTP relational storage, which in this case is PostgreSQL.
Why consider NoSQL? In addition to the obvious scalability benefits, NoSQL databases offer a number of advantages that will probably apply to your scenario, for instance the flexibility of having records with different sets of fields, and key-based access.
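For example, with a document store like MongoDB (using pymongo here just to illustrate; the database, collection, and key layout are made up), records in the same collection can carry different sets of fields and are fetched directly by key:

```python
# Illustrative only: two documents with different fields in one collection,
# retrieved by key. Assumes a local MongoDB and the pymongo driver.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client.analytics.events

events.insert_one({"_id": "u1:signup", "user": "u1", "plan": "free"})
events.insert_one({"_id": "u2:purchase", "user": "u2", "amount": 19.99,
                   "items": ["sku-1", "sku-2"]})   # different fields, same collection

doc = events.find_one({"_id": "u2:purchase"})      # key-based access
print(doc["amount"])
```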
Since you're still in the "trial" stage, you might find it easiest to pick a NoSQL solution based on your hosting provider. For instance, AWS has SimpleDB, Google App Engine provides its own Datastore, etc. There are also plenty of other NoSQL solutions with nice Python bindings.
I am relatively new to Django and this is a more general 'concept' question.
For a client I need to construct an expansive database holding data returned from a series of questionnaires, as well as some basic biological data. The idea is to move away from the traditional tools (i.e. Microsoft Access) and manage the data in a MySQL database through a basic CRUD interface. Initially the project doesn't need to live on the web, but the next phase will be to have a centralized db with a login and an admin page.
I have started building the db with Django models which is great, and I want to use the Django admin for the management of the data.
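To give an idea of the kind of thing I have so far (field names simplified, not the real schema):

```python
# models.py -- simplified example of the questionnaire/biological data
from django.db import models

class Participant(models.Model):
    name = models.CharField(max_length=100)
    date_of_birth = models.DateField()
    height_cm = models.FloatField(null=True, blank=True)

class QuestionnaireResponse(models.Model):
    participant = models.ForeignKey(Participant, on_delete=models.CASCADE)
    question_code = models.CharField(max_length=20)
    answer = models.TextField()
    answered_at = models.DateTimeField(auto_now_add=True)

# admin.py -- registering the models gives a basic CRUD interface for free
from django.contrib import admin
admin.site.register(Participant)
admin.site.register(QuestionnaireResponse)
```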
My questions are: Is this a good use of Django? Is there anything I should consider before relying on Django for the whole process? And is it advisable to use the Django runserver for db admin on a client's local machine (before we get to the web phase)?
Any advice would be much appreciated.
Actually, your description sounds exactly like the sort of thing for which Django is an ideal solution. It sounds more complex and customized than a CMS, and if it's as straightforward as your description then the ORM is definitely a good tool for this. Then again, this sounds exactly like an appserver-ready problem, so Rails, Express for Node.js, or even ChicagoBoss (if you're brave) would be good platforms for this kind of application.
And sure, Django is solid enough that you can run it with the test server for local clients before you go whole-hog and run the thing on the web. For that, though, I recommend Apache/mod_wsgi, and if you need fault tolerance there are diamond architectures (one front-end proxy with monitoring failover, two or more appserver machines, one database with a hot spare) and more complex architectural layouts (see: sharding) you can approach later.
If you're going to run it in a client's local setting, and you're not running Windows, I recommend looking into the screen program. It will let you detach the running job into the background while keeping diagnostics accessible on an ongoing basis.
I have my first app, not that big, but it is the first step (the next big one is on the way).
Now if I want to put it on my own Linode VPS, I have to configure mod_python or mod_wsgi, as well as memcached, Nginx, MySQL or PostgreSQL, etc. to make it work. If I put it on GAE, all I have to do is convert the models to use GAE's API.
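By "convert the models" I mean something roughly like this, if I'm reading the GAE docs right (Article is just a stand-in for my real models):

```python
# What I currently have with the Django ORM (in the existing codebase):
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

# Roughly the same model rewritten against GAE's datastore API
# (would live in the GAE codebase instead; property names may not be exact):
from google.appengine.ext import db

class Article(db.Model):
    title = db.StringProperty(required=True)
    body = db.TextProperty()
    published = db.DateTimeProperty(auto_now_add=True)
```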
What I like about GAE is scaling. (if they can really do it)
Then I'd only worry about developing my apps and doing SEO work on them instead of worrying about load share/balance, cache, db / IO redundancy, etc.
I don't want to do any porting later on. (I have to decide now and stick with it)
So, if you have any experience on this, what do you recommend:
1- Use VPS(s) for everything
2- Use VPS(s) plus Amazon S3
3- Use VPS(s) plus Amazon S3 & SimpleDB
4- Use GAE
Also: would I be able to get away with not having JOINs when using BigTable?
Note: I don't have any spatial need now, but for a location table I might need that later on.
I'd like to know what you think!
There's business risk and technical risk.
The business risk is that you might have to move hosts later for some external reason. VPSs, EC2, etc. require more upfront investment, but keep you independent. Tools like Chef can help with the configuration effort.
The technical risk is that your application may not be easily implemented on the platform. Since most VPS options allow you to install arbitrary software, they minimize this risk, again at the cost of more configuration effort on your part. AFAIK, the largest constraint GAE imposes on you is that it's difficult to do long-running background tasks. (Working without JOINs and other aspects of denormalized data requires a different way of thinking, but this approach is fairly common in web applications no matter where they run, once the SQL database is larger than a single host can support.)
If you can live with both these risks, GAE would appear to save you a substantial amount of effort. If you cannot live with these risks, you should tailor your own environment.
As an aside, I find S3 to be worth it no matter your environment. It's far simpler than ensuring your local server's static file storage is reliably backed up, and you never have to worry about capacity. It works best for data that is uploaded but rarely overwritten or deleted (think Facebook photo albums).
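For example, pushing an uploaded file to S3 instead of local disk is only a few lines (a sketch using boto3; the bucket name and key layout are made up):

```python
# Minimal sketch: store an uploaded file in S3 instead of on local disk.
import boto3

s3 = boto3.client("s3")

def store_upload(local_path, user_id, filename):
    key = f"uploads/{user_id}/{filename}"
    s3.upload_file(local_path, "my-app-user-uploads", key)  # hypothetical bucket
    return key
```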
I don't want to do any porting later on. (I have to decide now and stick with it)
If that's the case, wouldn't you prefer to control deployment from the outset? It could be a great pain to port back from GAE later down the line if you hit its limits (whether they be technological limits or simply business decisions by Google that run counter to your plans for the future of your app).
Also, configuring mod_wsgi, installing Postgres, etc. isn't particularly difficult, and you won't have to worry about things like load balancing and db redundancy for a while yet.
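For what it's worth, the Python side of a mod_wsgi deployment is tiny; the rest is a few lines of Apache config ("myproject" below is a placeholder for your actual settings module):

```python
# wsgi.py -- the whole Python side of a mod_wsgi deployment for a Django app.
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_wsgi_application()
```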
If it were me, I'd prefer the long-term certainty of a traditional server over the quick win of GAE. It all depends on your vision for the app, however.
I may be biased, but if you can live with GAE's limitations it really saves you a lot of work and worry about system administration issues (and to some extent scaling) -- plus, it's free as long as your resource consumption is low (basically meaning your traffic is low).
Can you do without joins? I don't know, as I don't know your app -- I'm a SQL fanatic myself, yet for simple enough needs I haven't found it too hard to adapt. As I see it, the main limitation of non-relational DBs is that they're nowhere near as nice as relational ones for "ad hoc" queries... you typically have to write a lot of procedural code instead of a nice SELECT or two :-(. But that's more of a "data mining later" issue than one connected with serving your web app -- probably best solved by regularly bulk-downloading data from the web app's online storage into a "data warehouse" kind of setup anyway, even if that storage was relational in the first place ;-).
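To illustrate the "procedural code instead of a SELECT" point: a query that would be a single JOIN in SQL turns into fetch-then-loop code against the datastore. A rough sketch using the google.appengine.ext.db API (model names are made up):

```python
# SQL: SELECT b.title, a.name FROM book b JOIN author a ON b.author_id = a.id
# Datastore version: fetch one side, then resolve the other side by hand.
from google.appengine.ext import db

class Author(db.Model):
    name = db.StringProperty()

class Book(db.Model):
    author = db.ReferenceProperty(Author)
    title = db.StringProperty()

def books_with_author_names(limit=50):
    rows = []
    for book in Book.all().fetch(limit):
        author = book.author          # dereference costs an extra datastore get
        rows.append((book.title, author.name if author else None))
    return rows
```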
Before deciding, it might be worth a quick prototype adaptation of your app to GAE. You might run into showstoppers that force the decision. Possible showstoppers include:
Your schema doesn't make the transition to BigTable
You're depending on some C-based library that GAE doesn't support
You have a few long-running requests that exceed the thresholds that GAE imposes
The answer depends on the complexity and nature of your model layer, really. If it's complex or tightly bound to the rest of your code, porting is likely to be a significant effort. If it's fairly straightforward, or easy to tear out and replace, I would say go for it.
These days, I mostly write new code for GAE, but the fact that I can simply deploy with a single command has really lowered the barrier I feel towards writing cool new apps. Not having to worry about deployment and hosting is quite liberating.
All I have to do is convert the models to use GAE's API.
I am sorry, you are totally mistaken.
You also need to rewrite all the view code that uses the ORM. There are no joins, so you have to write a lot of procedural code instead of the nifty SQL that gives you whatever you want.
Querying is slow. You need to override the save method of each model to store additional information about that model that might take a lot of time to compute when needed. You also need to work with memcache to make queries fast enough.
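For example (a hypothetical Django-style model; on App Engine the cache backend would be its memcache service), you precompute the expensive value at write time and keep the cache warm so reads stay cheap:

```python
# Hypothetical example: store a derived value at save time instead of
# recomputing it on every request, and refresh memcache for hot reads.
from django.core.cache import cache
from django.db import models

class Answer(models.Model):
    text = models.TextField()
    word_count = models.IntegerField(default=0)   # denormalized, set on save

    def save(self, *args, **kwargs):
        # do the (potentially expensive) computation once, at write time
        self.word_count = len(self.text.split())
        super(Answer, self).save(*args, **kwargs)
        # keep the cache warm so hot reads skip the datastore entirely
        cache.set("answer-words-%s" % self.pk, self.word_count, 60 * 5)
```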
And then, Guido has said that Django 1.1 is going to be included in a future version of App Engine. I am hoping they will have an out-of-the-box generic ORM-to-BigTable mapper.
That said, if your app is simple and doesn't need many joins, you could use the app-engine-patch project to run the current version of Django on App Engine. Here is how.