Google Cloud Datastore rolls back a previously saved row - Django

I am using Django with Google Cloud Datastore via Djangae (https://djangae.org/).
I am new to this tech stack and am currently facing a strange issue.
When I persist data by calling Model.save(commit=True), the data gets saved into Cloud Datastore, but after 4-5 minutes it gets reverted.
To test further, I tried changing the value directly in the database, but that was also reverted after some time.
I am confused because I see no error or exception. I am using an atomic transaction and have wrapped my code in try/except to catch any exception, but no luck.
Could someone please advise me on how to debug this further?

I have an update. I had multiple versions of my code pointed at the same Datastore, and a few of them were stuck in an infinite loop hitting the same Kind. Killing all the stale versions made the DB consistent with my changes. I wanted to post this update so that others get an idea if something similar happens to them.

Related

Unable to delete dataset because it's included in a published app - but it isn't

I am trying to delete a dataset from one of our premium workspaces and am getting an error saying it's included in the published app. However, as you can see below, the dataset in question (Construction Daily Report) is not included in the app and no reports reference it. I also tried deleting it using PowerShell but that didn't work either. Has anyone run into this same issue?
I have sometimes experienced a significant lag between removing content from an app and republishing it, and actually being allowed to remove the dataset from the workspace.
If you unpublished it very recently, simply try waiting a bit until all systems are fully up to date with the currently published app contents. If a significant amount of time has passed, perhaps contact Microsoft directly.

Superset loading examples don't seem to disable

I am running Superset in Docker. At first the example datasets, charts, and so on were loading. After some time I decided to disable the examples.
I changed the configuration to SUPERSET_LOAD_EXAMPLES=no in the .env file. I also tried deleting the key from .env altogether. However, the examples don't seem to disappear. How can they be deleted completely?
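For reference, this is the flag as it goes in the .env file read by Superset's docker-compose setup. Note that it only prevents future example loads; it does not remove examples that were already loaded into the metadata database:

```
# .env used by Superset's docker-compose setup
SUPERSET_LOAD_EXAMPLES=no
```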
If you've run Superset once with SUPERSET_LOAD_EXAMPLES=yes, that will populate the example data and charts/dashboards into your metadata database alongside any actual charts and data. My understanding from searching the codebase is that there's no way to undo this.
If you really want those examples gone, you can start fresh from a new metadata db, but that would mean discarding any content you've created. Or you can try deleting the examples manually, either in the UI or in the database backend.
If you can't start fresh, my personal advice is to just ignore them. Eventually they'll get pushed to the bottom by your new content, and it's nice to have them in case you need to file a reproducible bug report example. I tried deleting some of them from the database backend and all I did was corrupt them so I get 500 errors when I try to load the examples.

Inconsistent RDS query results using Elastic Beanstalk + flask-security + sqlalchemy

I have a Flask app running using Elastic Beanstalk, using flask-security and plain sqlalchemy. The app's data is stored on a RDS instance, living outside of EB. I am following the flask-security Quick Start with a session, which doesn't use flask-sqlalchemy. Everything is free tier.
Everything is up and running well, but I've encountered two problems:
1. After a db insert of a certain type, a view that reads all objects of that type gives alternating good/bad results. Literally, on odd refreshes I get a read that includes the newly inserted object, and on even refreshes the newly inserted object is missing. This persists for as long as I've tried refreshing (dozens of times... several minutes).
2. After going away for an afternoon and coming back, the app's db connection is broken. I suspect my free tier resources are going to sleep, and the example code is not recovering well.
I'm looking for pointers on how to debug, for a more robust starting code example, or for suggestions on what else to try.
I may try switching to flask-sqlalchemy (perhaps getting better session handling), or dropping flask-security for flask-login (a downgrade in functionality... sniff).
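One common cause of alternating reads like those described above is each worker process holding a long-lived session with an open transaction, so some workers serve a stale snapshot. A hedged, self-contained sketch of the usual fix pattern follows; SQLite stands in for RDS here so the snippet runs anywhere, and all names are illustrative:

```python
# Demonstrates the per-request session reset that avoids stale reads, plus
# pool_pre_ping so the pool recovers after connections are dropped while idle.
import os
import tempfile

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
engine = create_engine(
    f"sqlite:///{db_path}",
    pool_pre_ping=True,  # test connections before use; recovers after idle drops
)
Session = scoped_session(sessionmaker(bind=engine))
Base.metadata.create_all(engine)

# One "worker" reads before the insert; another inserts and commits.
reader = Session()
before = reader.query(Item).count()

writer = sessionmaker(bind=engine)()
writer.add(Item(name="new"))
writer.commit()
writer.close()

# Ending the reader's transaction gives it a fresh view. In a Flask app this
# is what calling Session.remove() at the end of every request (for example
# from a teardown_appcontext handler) accomplishes.
reader.rollback()
after = reader.query(Item).count()
Session.remove()
```

With the per-request `Session.remove()` in place, no worker keeps an old snapshot alive between requests, and `pool_pre_ping` addresses connections dying while the free-tier instance sits idle.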

Neo4j-Neoclipse Concurrent access issue

I am creating a few nodes in neo4j using spring data, and then I am also accessing them via findByPropertyValue(prop, val).
Everything works correctly when I am reading/writing to the embedded DB using spring data.
Now, as per Michael Hunger's book Good Relationships, I opened Neoclipse with a read-only connection to the Neo4j store that my Java app is actively using.
But it somehow still says that the Neo4j kernel is actively being used by some other program.
Question 1: What am I doing wrong here?
Also, I have created a few nodes and persisted them. Whenever I restart the embedded Neo4j db, I can view all my nodes when I do findAll().
Question 2: When I try to visualize all my nodes in Neoclipse (when the db is accessible), I can only see one single node, which is empty and has no properties associated with it, whereas I have a name property defined.
I started my Java app, persisted a few nodes, traversed them, and got the output in the Java console. Then I shut down the application, started the Neoclipse IDE, connected to my DB, and found that no nodes were present (the problem from Question 2).
Trying again, I went back to my Java app and ran it, and surprisingly got a Lucene file corruption error (unrecognized file format). I had made no code changes and had not deleted anything, but still got this error.
Question 3: I am not sure what I am doing wrong, but since I found a discussion of this bug (Lucene/concurrent db access), I would like to know whether this is a Neo4j bug or a programming error on my part. (Does it have something to do with Eclipse Juno?)
Any reply would be highly appreciated.
Make sure you are properly committing the transactions.
Data is not immediately flushed to disk by Neo4j, so you might not see the nodes immediately in Neoclipse. I always restart the application that is using Neo4j in embedded mode so that data is flushed to disk, and then open Neoclipse.
Posting your code would help us check for any issues.

postgres + GeoDjango - data doesn't seem to save

I have a small python script that pushes data to my django postgres db.
It imports the relevant model from a Django project and uses the .save() function to save the data to the db without issue.
Yesterday the system was running fine. I started and stopped both my django project and the python script many times over the course of the day, but never rebooted or powered off my computer, until the end of the day.
Today I have discovered that the data is no longer in the db!
This seems silly, as I have probably forgotten to do something obvious, but I thought that when save() is called on a model instance, the data is committed to the db.
So this answer is about where to start troubleshooting problems like this, since the question is quite vague and we don't have enough info to troubleshoot effectively.
If this ever happens again, the first thing to do is to turn on statement logging for PostgreSQL and look at the statements as they come in. This should show you BEGIN and COMMIT statements as well as the queries. It's virtually impossible to troubleshoot this sort of problem without access to the queries. Things to look for include missing COMMITs and missing statements.
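For concreteness, statement logging can be switched on with settings like these in postgresql.conf (both settings are standard PostgreSQL; the prefix format is one illustrative choice):

```
# postgresql.conf
log_statement = 'all'          # log every statement, including BEGIN/COMMIT
log_line_prefix = '%m [%p] '   # timestamp and backend pid on each log line
```

Neither setting requires a restart; reload the configuration (for example with `SELECT pg_reload_conf();`) and then watch the server log while reproducing the save.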
After that, the next thing to do is to look at the circumstances under which your computer rebooted. Is it possible it did so before an expected commit? Or did it lose power without the transaction log being flushed to disk in time?
Those two should rule out just about all possible causes on the db side in a development environment. In a production environment for old versions of PostgreSQL you do want to verify that the system has autovacuum running properly and that you aren't getting warnings about xid wraparound. In newer versions this is not a problem because PostgreSQL will refuse to accept queries when approaching xid wraparound.