I am creating a few nodes in Neo4j using Spring Data, and then I am also accessing them via findByPropertyValue(prop, val).
Everything works correctly when I am reading/writing to the embedded DB using Spring Data.
Now, as per Michael Hunger's book Good Relationships, I opened Neoclipse with a read-only connection to the Neo4j store that my Java application is currently using.
But it somehow still says that the Neo4j kernel is actively being used by some other program.
Question 1: What am I doing wrong here?
Also, I have created a few nodes and persisted them. Whenever I restart the embedded Neo4j DB, I can view all my nodes when I do findAll().
Question 2: When I try to visualize all my nodes in Neoclipse (assuming the DB is accessible), I can only see one single node, which is empty and has no properties associated with it, whereas I have a name property defined.
I started my Java app, persisted a few nodes, traversed them, and got the output in the Java console. Then I shut down the application, started Neoclipse, connected to my DB, and found that no nodes are present (the problem from Question 2).
After trying again, I went back to my Java app and ran it, and surprisingly I got a Lucene file-corruption error (unrecognized file format). I had made no code changes and had not deleted anything, but I still got this error.
Question 3: I am not sure what I am doing wrong, but since I found this discussion about my bug (Lucene / concurrent DB access), I would like to know whether this is a bug or the result of a programming error on my part. (Does it have anything to do with Eclipse Juno?)
Any reply would be highly appreciated.
Make sure you are properly committing the transactions.
Neo4j does not immediately flush data to disk, so you might not see the nodes in Neoclipse right away. I always restart (i.e. cleanly shut down) the application that is using Neo4j in embedded mode, so that the data is flushed to disk, and only then open Neoclipse.
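For example, with the embedded Java API (Neo4j 1.x style) a write needs to look roughly like the sketch below; the node and property names are only placeholders. With Spring Data Neo4j the same thing is usually achieved by running the write inside a method annotated with @Transactional.

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Transaction;

    // graphDb is the embedded GraphDatabaseService the application already created
    Transaction tx = graphDb.beginTx();
    try {
        Node node = graphDb.createNode();
        node.setProperty("name", "example");   // placeholder property
        tx.success();                          // mark the transaction for commit
    } finally {
        tx.finish();                           // actually commits (or rolls back)
    }
    // a clean shutdown flushes everything to disk before Neoclipse opens the store
    graphDb.shutdown();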
Posting your code would help us to check for any issues.
Related
I am using Django with Google Cloud Datastore, i.e. Djangae (https://djangae.org/).
I am new to these tech stacks and am currently facing one strange issue.
When I persist data by calling Model.save(commit=True), the data gets saved into Cloud Datastore, but after 4-5 minutes it gets reverted.
To test it further, I tried to change the value directly in the database, but it also got reverted after some time.
I am kind of confused, as there is no error or exception that I can see. I am using an atomic transaction and have wrapped my code with try/except to catch any exception, but no luck.
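For reference, the save path I'm describing looks roughly like this (simplified; the form/model names are placeholders):

    # Simplified sketch of the save path described above; names are placeholders.
    from django.db import transaction

    def save_record(form):
        try:
            with transaction.atomic():
                obj = form.save(commit=True)   # persists through Djangae to Cloud Datastore
            return obj
        except Exception as exc:
            # never reached in practice: no exception is raised, yet the data
            # still reverts a few minutes later
            print("save failed: %s" % exc)
            raise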
Could someone please advise me on how to debug further here?
I got a lead here. I was pointing at the Datastore with multiple versions of my code, and a few of them were in an infinite loop hitting the same Kind. Killing all the stale versions made the DB consistent with my changes. I wanted to post an update so that others can get an idea of what to check if something similar happens.
I'm making a C++/Qt application. It connects to a small online database to look up different information. I need to make sure that the application also works offline, so I would like to do the following.
On startup of the application:
- Check if an internet connection is available
- if available, connect to the online database and download it to a local copy (for the next time no internet is available)
- if not available, connect to the locally stored version of the database
My problem is that I can't find a simple solution for how to "download" the database. The user will not update the database, so there is no need for syncing when online again, just the ability to download the newest version of the database whenever online. The application talks to an MS SQL server.
My only idea for a solution is to have an SQLite DB in the application and then write a script that clears the SQLite database and puts everything from the online server into it, but this requires writing a script that goes through the whole database. There must be a better solution. I'm also not sure how this solution should work if the database structure changes. A fix for that could simply be to ship an application update containing a new SQLite DB with the new structure whenever the structure changes.
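To make it concrete, the script I have in mind would be something like this (QtSql, with QODBC for the server and QSQLITE for the local file; connection setup is omitted, table names are placeholders, and the local tables are assumed to already exist with the same columns):

    #include <QtSql>
    #include <QStringList>

    // Copy one table from the remote MS SQL connection into the local SQLite copy.
    bool mirrorTable(QSqlDatabase &remote, QSqlDatabase &local, const QString &table)
    {
        QSqlQuery src(remote);
        if (!src.exec("SELECT * FROM " + table))
            return false;

        const QSqlRecord rec = src.record();
        QStringList cols, marks;
        for (int i = 0; i < rec.count(); ++i) {
            cols << rec.fieldName(i);
            marks << "?";
        }

        local.transaction();
        QSqlQuery dst(local);
        dst.exec("DELETE FROM " + table);   // clear the old local copy
        dst.prepare("INSERT INTO " + table + " (" + cols.join(", ") + ") VALUES ("
                    + marks.join(", ") + ")");
        while (src.next()) {
            for (int i = 0; i < rec.count(); ++i)
                dst.bindValue(i, src.value(i));
            if (!dst.exec()) {
                local.rollback();
                return false;
            }
        }
        return local.commit();
    }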
I tried searching for a solution, but I could not find anything simple. Since I don't need syncing back and forth, I thought there must be a simple solution. Any help pointing me in the right direction is appreciated.
I have a Flask app running on Elastic Beanstalk, using flask-security and plain sqlalchemy. The app's data is stored on an RDS instance, living outside of EB. I am following the flask-security Quick Start with a session, which doesn't use flask-sqlalchemy. Everything is free tier.
Everything is up and running well, but I've encountered the following:
- After a DB insert of a certain type, a view which reads all objects of that type gives alternating good/bad results. Literally, on odd refreshes I get a read that includes the newly inserted object, and on even refreshes the newly inserted object is missing. This persists for as long as I've tried refreshing (dozens of times... several minutes).
- After going away for an afternoon and coming back, the app's DB connection is broken. I feel like my free-tier resources are going to sleep, and the example code is not recovering well.
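For reference, my database setup follows the quickstart and is essentially this (simplified; the connection string is a placeholder for the real RDS one):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine("postgresql://user:password@my-rds-host:5432/mydb")
    db_session = scoped_session(sessionmaker(autocommit=False,
                                             autoflush=False,
                                             bind=engine))

    # Note: nothing ever calls db_session.remove() at the end of a request, so each
    # worker thread keeps reusing its own long-lived session -- my current suspicion
    # for both the alternating reads and the overnight connection loss.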
I'm looking for pointers on how to debug, for a more robust starting code example, or for suggestions on what else to try.
I may try to switch to flask-sqlalchemy (perhaps getting better session handling), or drop flask-security for flask-login (a downgrade in functionality... sniff).
My problem is peculiar; please assist me in any way you can!
I have more than 1300 Hibernate entity files, which by default are loaded with lazy initialization. I deployed them on Tomcat and was able to run web services on them with CXF; the application runs successfully. With the same entity files I made a bundle in Fuse; the services get deployed, but while running the application it gives an error saying "failed to lazily initialize a collection of role", together with the entity names.
Now for this I came up with one solution: in place of @ManyToMany(fetch = FetchType.LAZY, mappedBy = "prProductLines"), I changed the FetchType from LAZY to EAGER, and the problem got resolved.
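Concretely, the change on the entity looks like this (the collection field and element type names are placeholders, not the real ones):

    // before (lazy loading, as in the original mapping)
    @ManyToMany(fetch = FetchType.LAZY, mappedBy = "prProductLines")
    private Set<PrProduct> prProducts;

    // after the change that makes the Fuse error go away
    @ManyToMany(fetch = FetchType.EAGER, mappedBy = "prProductLines")
    private Set<PrProduct> prProducts;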
But after changing the fetch type to EAGER in all places, this modification raised another problem: the queries run very slowly, taking far too long, and finally SQL Server 2008 reports the error "There is insufficient system memory in resource pool 'internal' to run this query." while the console shows "org.hibernate.exception.SQLGrammarException: could not load an entity".
Please suggest a solution: if I were able to lazily initialize the collections in Fuse, I hope that would solve my problem. I am not able to figure out the exact cause. How can I move ahead?
Thank you
I have a small Python script that pushes data to my Django Postgres DB.
It imports the relevant model from a Django project and uses the .save() method to save the data to the DB without issue.
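The script is essentially this shape (simplified; the project, app, and model names are placeholders):

    import os
    import django

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    django.setup()

    from myapp.models import Reading   # the relevant model from the Django project

    def push(value):
        r = Reading(value=value)
        r.save()   # with Django's default autocommit this should commit immediately
        return r.pk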
Yesterday the system was running fine. I started and stopped both my Django project and the Python script many times over the course of the day, but never rebooted or powered off my computer, until the end of the day.
Today I have discovered that the data is no longer in the db!
This seems silly, as I have probably forgotten to do something obvious, but I thought that when the save() method is called on a model instance, the data is committed to the DB.
So this answer is "where to start troubleshooting problems like this" since the question is quite vague and we don't have enough info to troubleshoot effectively.
If this ever happens again, the first thing to do is to turn on statement logging for PostgreSQL and look at the statements as they come in. This should show you BEGIN and COMMIT statements as well as the queries. It's virtually impossible to troubleshoot this sort of problem without access to the queries. Things to look for include missing COMMITs and missing statements.
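For example, something along these lines in postgresql.conf (picked up with a plain config reload, no restart needed) logs every statement together with the transaction it belongs to, which makes a missing COMMIT easy to spot:

    # postgresql.conf -- temporary settings for troubleshooting
    log_statement = 'all'            # log every statement, including BEGIN/COMMIT
    log_line_prefix = '%m [%p] %x '  # timestamp, backend pid, transaction id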
After that, the next thing to do is to look at the circumstances under which your computer rebooted. Is it possible it did so before an expected commit? Or did it lose power and not have the transaction log flushed to disk in time?
Those two should rule out just about all possible causes on the db side in a development environment. In a production environment for old versions of PostgreSQL you do want to verify that the system has autovacuum running properly and that you aren't getting warnings about xid wraparound. In newer versions this is not a problem because PostgreSQL will refuse to accept queries when approaching xid wraparound.