I have a Clojure application using Neo4j in embedded mode. I call (new GraphDatabaseFactory).newEmbeddedDatabase with a new path. How can I tell if the resulting GraphDatabaseService was newly created / empty?
Bonus question: if it wasn't newly created, how can I read metadata in the database to tell what version of my system it was built with? If someone accidentally passes in a path to a valid Neo4j database that wasn't created with my application, I want to throw an Exception.
Full disclosure: I don't know Clojure.
Programmatically, I don't see a way to determine this through the API, but you can work around it by checking whether the directory exists (or is empty) before making the call to the GraphDatabaseFactory.
As for the metadata, you can get at it if you cast the GraphDatabaseService to an InternalAbstractGraphDatabase and call getConfig() on it. That gives you access to the Config class, which holds a property map whose entries include the version, among other things.
For a project, I am trying to create a web app that, among other things, allows training of machine learning agents using Python libraries such as Dedupe or TensorFlow. In cases such as Dedupe, I need to provide an interface for active learning, which I currently implement through jQuery-based AJAX calls to a view that receives and sends the necessary training data.
The problem is that I need this agent object to stay alive throughout multiple view calls and be accessible by each individual call. I have tried realizing this via the built-in cache system using Memcached, but the serialization does not seem to keep all the info intact, and while I am technically able to restore the object from the cache, this appears to break the training algorithm.
Essentially, I want to keep the object alive within the application itself (rather than an external memory store) and be able to access it from another view, but I am at a bit of a loss of how to realize this.
If someone knows the proper technique to achieve this, I would be very grateful.
Thanks in advance!
To follow up on this question: I have since realized that the behavior was apparently caused by using the result of a method call on the object loaded from cache directly in the return value of a view. Specifically, my code looked as follows:
# model is the object loaded from cache
# this returns the wrong object (the same object as on an earlier call)
return JsonResponse({"pairs": model.uncertain_pairs()})
and was changed to the following:
# model is the object loaded from cache
# this returns the correct object (calls and returns model.uncertain_pairs() properly)
uncertain = model.uncertain_pairs()
return JsonResponse({"pairs": uncertain})
I am unsure whether this is caused by something on the Dedupe or Django side, or by Python itself, but it has undoubtedly fixed the issue.
To return to the original question: Django does seem to be able to (de)serialize objects and their properties in the cache correctly, as long as the cache itself is set up properly (see "Apparent bug storing large keys in django memcached", which I also had to deal with).
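For anyone running into the same thing, here is a minimal sketch of the pattern that ended up working for me. The view names, the cache key, and the build_dedupe_model() factory are made up for illustration; the point is simply to park the trained object in Django's cache between requests and to call its methods only after pulling it back out:

from django.core.cache import cache
from django.http import JsonResponse

CACHE_KEY = "active_learning_model"  # hypothetical key name

def start_training(request):
    # build the learner once and keep it alive in the cache between requests
    model = build_dedupe_model()  # hypothetical factory for the Dedupe object
    cache.set(CACHE_KEY, model, timeout=None)  # timeout=None caches it indefinitely
    return JsonResponse({"status": "ready"})

def uncertain_pairs_view(request):
    # pull the object back out of the cache on a later request
    model = cache.get(CACHE_KEY)
    if model is None:
        return JsonResponse({"error": "no model in cache"}, status=404)
    # call the method first, then hand the plain result to JsonResponse
    uncertain = model.uncertain_pairs()
    return JsonResponse({"pairs": uncertain})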
I am working on a Django project that, along with having a database for its models and relations, writes to a log directory called activity_logs outside of the project directory to keep track of formatted user activities, one file for each user. This is an alternative, file-structure-based solution to having a database table carry this information, because it offloads some storage from the DB and makes such activities relatively easy to format and express. Perhaps some of you may recommend storing this kind of data in the database, which is fine, but I still believe there is a question here that I need help answering.
This Django project has multiple apps, each with an extensive test suite. Additionally, there is a logging.py file that encapsulates the logging functionality (writing/reading activities to/from log files), and so the test cases within the test suites as well as the view functions (and various other utility functions) all use these logging functions to store user activities and retrieve them based on model relationships, emulating a user notification system.

Since the logging module takes care of this logging, it needs to know where to write to, and so we have a directory structure called activity_logs to which it writes user log files, creating one for a new user and deleting one for a user removed from the database. One of the newest changes we would like to make in this project is to create a separate logging directory for testing this logging functionality, something like test_activity_logs, so that writes for test users never get confused with the regular activity log directory for real users.
My problem is this: at runtime, how can I tell the system, from whatever starting point of execution (a view function call through the Django test Client object, a test case, an actual HTTP request made via a URL, etc.), whether to look inside the activity_logs or the test_activity_logs directory? It depends solely on whether I am generating new information for a real user or a test user, but a User is a User in our system, and I'm having trouble telling the functions that call the logging functions to write to the test log directory versus the regular one. For example, one approach I am trying is to pass a keyword argument (kwarg) to the logging functions so that they know which directory to read from/write to, like so:
self.assertTrue(activity_has_been_logged(ACTIVITY_ACCOUNT_CREATED, user.get_profile(), use_test_activity_log_directory=True))
The kwarg use_test_activity_log_directory=True tells the logging function activity_has_been_logged to read from the test activity log directory. Unfortunately, apart from being a little inflexible (but tolerable), this doesn't solve the situation where the Django test client object sends a GET or POST request via a URL to a view function that writes activities to log files:
response = client.post(propose_match_url, post)  # Can't write to test_log_directory if by default it writes to the regular directory!
How do I let the client pass this kwarg on to those view functions? I think it should be entirely possible to do this, but I'm not sure whether fiddling with these kwargs is the best way, or whether I should instead create a global variable in the project settings file, though that might cause trouble with race conditions on a shared mutable variable.
Your help would be great. Thanks in advance!
So I just solved this problem. The logging module hosting all logging functionality is really the only place that needs to know where to look (either test_activity_logs or activity_logs), since all other components invoke functions from the logging module to read from and write to these directories. I added a boolean field called is_test to the UserProfile model class that determines whether to look in test_activity_logs (is_test=True) or activity_logs (is_test=False). That way, the logging module only needs to check this new field on the UserProfile it receives as an input parameter to determine where to perform its logging. Problem solved!
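For completeness, here is a rough sketch of what that looks like. Apart from the is_test field itself, the names (log_activity, the directory constants) are illustrative rather than the exact code from the project:

# models.py
from django.contrib.auth.models import User
from django.db import models

class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    # test users get their activity written to the test directory
    is_test = models.BooleanField(default=False)

# the project's activity-logging module (not the stdlib logging package)
import os

ACTIVITY_LOG_DIR = "activity_logs"
TEST_ACTIVITY_LOG_DIR = "test_activity_logs"

def _log_dir_for(profile):
    # the profile itself decides which directory the logging functions use
    return TEST_ACTIVITY_LOG_DIR if profile.is_test else ACTIVITY_LOG_DIR

def log_activity(activity, profile):
    log_dir = _log_dir_for(profile)
    if not os.path.isdir(log_dir):
        os.makedirs(log_dir)
    path = os.path.join(log_dir, "user_%s.log" % profile.pk)
    with open(path, "a") as f:
        f.write("%s\n" % activity)

Every caller (views, tests, utility functions) keeps passing the UserProfile it already has, so nothing outside the logging module needs to know which directory is in use.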
Check out daemontools if you're on a *nix box or launchd on OS X. Both can make sure your Django instance stays running in whatever mode you prefer (daemontools has a few more options for that) and can isolate a directory for logging stdout/stderr.
You can set environment variables for each instance so that log files and temporary files know where to be created; you then read them from os.environ, or simply use the current working directory as a base if you're using daemontools.
The directory is automatically created for you using daemontools.
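As a sketch of that approach (the variable name ACTIVITY_LOG_DIR is just an example, not something daemontools or Django defines), the run script for the test instance would export ACTIVITY_LOG_DIR=test_activity_logs, and the logging module would resolve its base directory once at import time:

import os

# fall back to the real directory when the environment doesn't say otherwise
LOG_DIR = os.environ.get("ACTIVITY_LOG_DIR", "activity_logs")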
Can anyone point me to a concrete example of attaching metadata to LLVM IR using the C++ API?
I've read this page http://llvm.org/docs/SourceLevelDebugging.html.
Thanks
The other answer here is not quite correct (or complete). You can also create metadata at the module level with just MDNode::get(...), which takes a context and an array of values to create metadata from. Named metadata is very heavyweight, and you should only use it as an anchor for top-level metadata values.
For attaching metadata to instructions you do want to use the setMetadata call on the Instruction; however, you'll want to make sure you're using the correct context, otherwise you could overwrite other metadata.
There are two things you can do:
1. Attach metadata nodes to instructions (like the !dbg nodes from the link you referenced). For that, there's the Instruction::setMetadata method.
2. Create named metadata nodes in a Module, not attached to any particular instruction. For that, use Module::getOrInsertNamedMetadata.
I'm using the Entity Framework.
In one of my unit tests I have a line like:
this.Set<T>().Add(entity);
On executing that line I get:
System.InvalidOperationException : The model backing the 'InvoiceNewDataContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data.
Well I've actually deleted the database and removed the connection string.
I'm surprised this error is happening on adding as I wouldn't expect it to happen until I saved the data and it discovered there was no database.
In previous projects/solutions I created during unit tests I have been able to add to the context for test purposes without actually calling SaveChanges.
Would anyone know why this would be happening in my latest projects/solutions?
Are you sure it really didn't use a database in your previous projects? If you do not specify any connection string, it will silently use a default one pointing to a SQL Express database with a local .mdf file, so make sure that isn't happening now.
I'm currently using IE as an ActiveX COM control in wxWidgets and want to know if there is any easy way to change the user agent that will always work.
At the moment I'm changing the header, but this only works when I manually load the link (i.e. call setUrl).
The only way that will "always work," so far as I've been able to find, is changing the user-agent string in the registry. That will, of course, affect every web browser instance running on that machine.
You might also try a Google search on DISPID_AMBIENT_USERAGENT. From this Microsoft page:
MSHTML will also ask for a new user agent via DISPID_AMBIENT_USERAGENT when navigating to clicked hyperlinks. This ambient property can be overridden, but it is not used when programmatically calling the Navigate method; it will also not cause the userAgent property of the DOM's navigator object or clientInformation behavior to be altered - this property will always reflect Internet Explorer's own UserAgent string.
I'm not familiar with the MSHTML component, so I'm not certain that's helpful.
I hope that at least gives you a place to start. :-)
I did a bit of googling today with the hint you provided, Head Geek, and I worked out how to do it.
wxWidgets uses an ActiveX wrapper class called FrameSite that handles the Invoke requests. What I did was make a new class that inherits from it, handles the DISPID_AMBIENT_USERAGENT event, and passes all others on. Now I can return a different user agent.
Thanks for the help.
Head Geek already told you where in the registry IE will look by default.
This is just a default, though. If you implement IDocHostUIHandler::GetOptionKeyPath (http://msdn.microsoft.com/en-us/library/aa753258(VS.85).aspx) or IDocHostUIHandler2::GetOverrideKeyPath (http://msdn.microsoft.com/en-us/library/aa753274(VS.85).aspx), IE will use that registry entry instead.
You'll probably want to use Sysinternals' RegMon to debug this.