I am creating a degree verification process using a blockchain approach that involves six main entities. By entities I mean that the consensus mechanism will revolve around these six entities, so I need to build a distributed database. Two approaches came to mind.
One approach is to build everything from scratch: a separate SQLite database for each node, with the nodes connected by some kind of query mechanism.
Another approach is to use the BigchainDB server, a distributed database server based on blockchain concepts.
Now my question: which approach is feasible? I don't know whether the BigchainDB server is compatible with Django, since the docs don't mention anything about it.
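From what I can tell, BigchainDB does ship a Python driver (bigchaindb_driver) that talks to the server over HTTP, so I imagine a Django view could call it roughly like this (an untested sketch, assuming a node on localhost:9984 and made-up asset fields), but I'd like confirmation from someone who has used it:

    # Untested sketch: BigchainDB is reached over plain HTTP, so any
    # Django view/service code should be able to call it. Assumes a node
    # on localhost:9984 and `pip install bigchaindb-driver`.
    from bigchaindb_driver import BigchainDB
    from bigchaindb_driver.crypto import generate_keypair

    bdb = BigchainDB('http://localhost:9984')
    university = generate_keypair()  # hypothetical issuing entity

    # A degree record as a CREATE transaction (asset fields made up).
    degree = {'data': {'degree': {'student_id': '12345', 'title': 'BSc CS'}}}
    prepared = bdb.transactions.prepare(
        operation='CREATE', signers=university.public_key, asset=degree)
    fulfilled = bdb.transactions.fulfill(
        prepared, private_keys=university.private_key)
    bdb.transactions.send_commit(fulfilled)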
If anyone has used BigchainDB, please help me out. I am really confused about which approach to follow.
I've read articles and posts about what a project and an app are in Django, and they basically end up using the typical example of Polls and Users. However, a real program generally uses a complex relational database, and its design gravitates around that RDB; so the eternal conflict arises once again: which parts should be considered applications, and which ones components of those applications?
Let's take this RDB as an example (courtesy of Visual Paradigm):
I could consider the whole set as one application, or consider every entity as its own application; the outlook looks gray. The only thing I'm sure about is this:
$ django-admin startproject movie_rental
So I wish to learn from your collective expertise: what approach (not necessarily one of those mentioned above) would you use to create applications based on this RDB for a Django project?
Thanks in advance.
PS1: MORE DETAILS ABOUT MY REQUEST
When programming something I follow these steps:
Understand the context of what you are going to program,
Identify the main actors and objects in this context,
If needed, make a UML diagram,
Design a solid relational-database diagram (solid = constraints, triggers, procedures, etc.),
Create the relational database,
Start coding... suffer and enjoy
When I learn something new, I hope the material follows these same steps, so I can understand where each action is heading.
When reading articles and posts (and watching videos), almost all of them omit steps 1 to 5 (because they choose simple demo apps), and when programming they take the easy route, without showing other situations or the many touted features Django offers (reusability, pluggability, etc.).
With this request, I want to know what criteria experienced Django programmers use to determine which applications to create, based on this sample RDB diagram.
With the answers obtained so far, an "application" for...
brandonris1 is about features/services
Jeff Hui is about implementing entities of a DB
James Bennett is about every action on an object; he likes creating a lot of apps
Conclusion so far: what makes a Django application is a personal creed.
My initial request was about creating applications, but since models have been mentioned, I have another question: with a legacy relational database (as shown in the picture), is it possible to create a Django project with multiple apps? I ask because every Django demo project shows each app having models with its own tables, giving the impression that tables do not interact with those of other applications.
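For instance, I imagine something like the following might be possible (a sketch with hypothetical films and rentals apps, and table names taken from the diagram), but I don't know whether it is idiomatic:

    # films/models.py -- maps onto an existing table from the legacy DB
    from django.db import models

    class Film(models.Model):
        title = models.CharField(max_length=255)

        class Meta:
            db_table = 'film'   # reuse the legacy table name
            managed = False     # Django won't create, alter or drop it

    # rentals/models.py -- a second app referencing the first app's model
    from django.db import models
    from films.models import Film

    class Rental(models.Model):
        film = models.ForeignKey(Film, on_delete=models.DO_NOTHING)

        class Meta:
            db_table = 'rental'
            managed = False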
I hope my request is more clear. Thanks again for your help.
It seems you are trying to decide between building a single monolithic application vs microservices. Both approaches have their pros and cons.
For example, a single monolithic application is a good solution if you have a small amount of support resources and do not need to develop new features in fast sprints across the different areas of the application (e.g. film management features vs. staff management features).
One major downside to large monolithic applications is that their feature sets eventually grow too large, and each new feature requires a significant amount of regression testing to ensure there are no negative repercussions in other areas of the application.
Your other option is to go with a microservice strategy. In this case, you would divide these entities among a series of smaller services and give each of them a way to integrate/communicate with the others (APIs).
Example:
- Film Service
- Customer Service
- Staff Service
The benefit of this approach is that it allows you to separate capabilities and features by specific service area, reducing risk and regression testing across the application when new features are deployed or there is a catastrophic issue (e.g. a DB goes down).
The downside to this approach is that under a true microservice architecture, all resources are separated, so you need unique resources (i.e. databases, servers) for each service, which increases your operating cost.
Either option can be a good one, but the choice depends entirely on your support model and expected volumes. Hope this helps.
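To make the integration point concrete, here is a rough sketch of how one service might consume another over HTTP; the service URL, endpoint, and field names are all made up for the example:

    # Illustrative only: the Staff Service asking the hypothetical Film
    # Service for a film, over the Film Service's HTTP API.
    import requests

    FILM_SERVICE_URL = 'http://film-service.internal/api/films/'

    def get_film_title(film_id):
        # Each service owns its own data; others reach it only via the API.
        resp = requests.get(f'{FILM_SERVICE_URL}{film_id}/', timeout=5)
        resp.raise_for_status()
        return resp.json()['title']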
ADDITIONAL DETAIL:
After reading through your additional details: since this DB already exists, and my assumption is that you cannot migrate it, you still have the same choice between a monolithic application and a microservices architecture.
For both approaches, you would need to connect your Django web app to the specific DB you are already using. I can't speak for every connector out there, but I know the MySQL connector allows Django to read from the pre-existing DB to systematically generate the models.py file for the application. As part of that, there is a model option that lets you define whether or not Django is responsible for actually managing the DB tables themselves.
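For reference, the generation step is Django's built-in inspectdb management command, and the option mentioned above is the managed flag on each generated model's Meta; a sketch, with an illustrative table name:

    # Generate model classes from the pre-existing database:
    #   $ python manage.py inspectdb > movies/models.py
    # Each generated model carries a Meta similar to this:
    from django.db import models

    class Customer(models.Model):
        first_name = models.CharField(max_length=45)

        class Meta:
            managed = False       # Django reads the table, never migrates it
            db_table = 'customer'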
The only real architectural question this raises is: how many times do you want to code this connection?
If you only want to do it once and fully comply with DRY, you can build a monolithic application, knowing that as new features are required, application-wide regression testing will be an absolute requirement.
If you want ultimate flexibility for future changes to this collection of features, and don't mind recoding that connection across multiple apps in exchange for less application-wide regression testing as new features are required, a microservice architecture strategy is more appropriate.
I want to implement a small blockchain-based solution that could serve as a Patient Management System. The system should be able to track patients and their medical records/reports. Of course, this system would not be deployed anywhere; it's just a university project.
So far, I've tried to do it with Ethereum but didn't find a solution that way. Then I tried OrbitDB, because I saw it on the Developer Resources page of Ethereum's site; after doing a small POC with OrbitDB, I learned that it doesn't claim to be a "blockchain database" but rather a choice for decentralized apps. Then someone suggested BigchainDB, but after reading about it and trying a small project with it, I found it wouldn't fit my needs either. I have also read about Fluree, but haven't tried it yet, as I have already spent three months experimenting with the others and didn't want to waste more time.
So, could you recommend a blockchain-based database that could serve my needs? Some sample code, preferably in Node.js, would also be a great help.
Please excuse me if I have written something wrong or if my understanding is off; I am new to blockchain.
Thanks
You can try IPFS, developed at Protocol Labs.
The InterPlanetary File System (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting all computing devices.
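To be clear, IPFS on its own is content-addressed storage, not a blockchain; a common pattern is to keep the documents in IPFS and record only their hashes on a chain. A rough sketch (in Python rather than Node.js), assuming a local IPFS daemon and the ipfshttpclient package, with made-up record fields:

    # Assumes a local IPFS daemon on the default API port and
    # `pip install ipfshttpclient`.
    import ipfshttpclient

    client = ipfshttpclient.connect()  # /dns/localhost/tcp/5001/http

    record = {'patient_id': 'P-001', 'report': 'blood test results ...'}
    cid = client.add_json(record)    # content hash identifying this record
    print(cid)                       # store this hash on-chain, not the data
    fetched = client.get_json(cid)   # anyone holding the hash can fetch it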
Here is a guide to understanding IPFS in more depth.
Here is a simple dapp using IPFS with Ethereum.
You can use the Emercoin public blockchain and its NVS (Name-Value Storage) subsystem. It allows you to upload your data with the name_new command, update a value with name_update, and see the history of changes with the name_history command.
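As a hedged sketch of driving this from code: NVS commands are exposed through the wallet's JSON-RPC interface, so something along these lines might work. The port, credentials, and exact argument lists here are assumptions; check your wallet's help output:

    # Assumed setup: a local Emercoin wallet with JSON-RPC enabled
    # (rpcuser/rpcpassword set in emercoin.conf); the port and argument
    # order are assumptions, not verified against the docs.
    import requests

    RPC_URL = 'http://127.0.0.1:6662'
    AUTH = ('rpcuser', 'rpcpassword')

    def emc_rpc(method, params):
        resp = requests.post(RPC_URL, auth=AUTH,
                             json={'method': method, 'params': params, 'id': 1})
        resp.raise_for_status()
        return resp.json()['result']

    # Create a record, update it, then read its full change history.
    emc_rpc('name_new', ['med:patient:P-001', 'initial report', 365])
    emc_rpc('name_update', ['med:patient:P-001', 'updated report', 365])
    history = emc_rpc('name_history', ['med:patient:P-001'])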
First, let me introduce my use case: I am working on a Django application (a GraphQL API using Graphene) that runs in the cloud but also has local instances in customers' local networks.
For example: one application in the cloud and three instances (each a local Django app instance with a PostgreSQL server with BDR enabled) on local networks. When there is a network connection, we use the bi-directional replication to keep data fresh; when there is no connectivity, we use the local instances. Here is the simplified infrastructure diagram for illustration.
So, to use BDR I can't do DELETE and UPDATE operations through the ORM. I have to generate UUIDs for my entities, and every change is just a new record with updated data for the same UUID; the latest record for a given UUID is the valid one, and removal is just another flag. Up to this point everything seems fine; the problem starts when I want to use, for example, a many-to-many relationship. The relationship relies on the database primary keys, and I have to handle removal somehow. Can you please help me find the best way to solve this issue? I have a few ideas, but I do not want to make a bad decision:
I can try to override ManyToManyField to work with my UUIDs and the special removal flag. It looks like a nice idea, because everything should work as before (Graphene will find the relations, etc.), but I am afraid of "invisible" consequences.
Create my own models to simulate the many-to-many relationship (see the sketch below). It's much more work, but it should work just fine.
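A minimal sketch of what I mean by the second idea (model and field names are hypothetical; the distinct() call relies on PostgreSQL's DISTINCT ON):

    # Hypothetical explicit join model following the same append-only rules:
    # every change is a new row, the newest row per link_uuid wins, and
    # removal is just a flag.
    import uuid
    from django.db import models

    class BookAuthorLink(models.Model):
        id = models.UUIDField(primary_key=True, default=uuid.uuid4)
        link_uuid = models.UUIDField()    # stable identity of the relation
        book_uuid = models.UUIDField()    # logical FKs by UUID, not by pk
        author_uuid = models.UUIDField()
        removed = models.BooleanField(default=False)
        created_at = models.DateTimeField(auto_now_add=True)

        @staticmethod
        def current_links(book_uuid):
            # Latest row per link_uuid; DISTINCT ON requires PostgreSQL.
            rows = (BookAuthorLink.objects
                    .filter(book_uuid=book_uuid)
                    .order_by('link_uuid', '-created_at')
                    .distinct('link_uuid'))
            return [row for row in rows if not row.removed]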
Have you had to solve a similar issue before? Is there some kind of good practice, or is this just building a highway to hell (AC/DC is pretty cool)?
Or, if you think there is a better way to build the service architecture, I would love to hear your ideas.
Thanks in advance.
I've developed a web app in Django and used MongoDB for the backend.
I'm not sure how to do an automatic failover for the database.
My requirement is that when the MongoDB primary node is down, Django should automatically connect to a secondary node.
How can this be achieved?
I found this library, https://github.com/brianjaystanley/django-failover, but it is for Django 1.3 and I want something for Django 1.5.
What settings do I need to change, or is there any library available to the rescue? Any solutions on the floor?
Thanks
You should not need to set up anything in your application to handle this, and the library you linked is not appropriate for use with MongoDB, as it is a solution for relational backends.
The first question here is: do you actually have a replica set configured for MongoDB? I can only answer presuming that you do, but the linked documentation is worth reading, as your question suggests you do not yet have a core understanding of MongoDB replication concepts.
What is explained there is that there is no fixed "secondary" for your application to fail over to; what actually happens is that the replica set itself elects, from among its members, which node will become the primary.
Going on with the answer: you configure your application to handle the failover by setting up the connection string for the driver. Read through that documentation and you will find that, among other useful things, you are basically providing a list of hostnames that are members of the replica set. You don't need all the members, just enough to act as a seed list so that the other nodes can be discovered. That discovery happens anyway with the correct options, but it is good practice to have more than one host to contact even to get that information. Here's a sample:
mongodb://<Primary>,<Secondary>/<database>
You may also want to take a look at MongoEngine: considering you have experience with Django, it uses modelling concepts you will be familiar with, while still allowing access to MongoDB features. From memory, there is documentation there on setting up replica set connections.
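A minimal sketch of such a connection, with placeholder hostnames and replica set name:

    # Placeholder hosts and replica set name; substitute your own members.
    from mongoengine import connect

    connect(
        'mydatabase',
        host='mongodb://host1:27017,host2:27017/mydatabase?replicaSet=rs0',
    )
    # The driver discovers the remaining members from the seed list and,
    # after an election, transparently reconnects to the new primary.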
Since I have very little knowledge of how ESBs work in tandem with databases, I'm asking how communication can take place between the two, hoping I'll at least be pointed in the right direction to search in!
SITUATION: We have two systems (one of them the client's) on different networks, each with its own database. We are required to do a regular real-time exchange of all the data points present in our database with the other system. We are also required to provide a way to import data into our system. This exchange has to follow SOA functionality over the customer-provided BizTalk ESB, and we are supposed to implement the exchange using ODBC.
QUESTION: Is it possible to integrate the databases with the ESB as endpoints, without using web services or extra interfaces, and send the data over the ESB via a push-pull transfer mechanism?
I have tried searching the net for this situation but have not come up with many straightforward answers. Could someone please point me in the right direction?
The ESB Toolkit in BizTalk is not an ESB! It is just a small additional tool for some special cases.
Let's stop talking about the ESB; we need to solve the technical problem, right?
As I understand it, you have two SQL databases and want to integrate them.
The easiest way to do so with BizTalk is to use the WCF-SQL ports/adapters.
You start the wizard for this adapter and choose the tables/stored procedures that should provide or consume data; the wizard will generate all the needed XML schemas for you.
Then you use the BizTalk Mapper to create the XSLT maps that transform one SQL data format into the other.
Then you create a pair of ports: one consumes data from one SQL database, the second inserts data into the other SQL database. One of these ports will use the XSLT map mentioned above.
If you need more processing, you can create an orchestration to manage additional processing, sophisticated error handling, etc.
I would recommend using MSMQ. There's a fairly detailed description of it here.