I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008 and picked the tables I wanted to sync. I selected SQL Server Tracking. It modified the database for change tracking and created a local copy (SDF) of the data.
I need two-way syncing, so I created a partial class for the sync agent and added code in OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've stepped through with the debugger, and this code runs.
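The OnInitialized() code amounts to something like this (a sketch - "Customers" is a placeholder for the tables picked in the designer):

using Microsoft.Synchronization.Data;

public partial class FSEMobileCacheSyncAgent
{
    // OnInitialized() is declared as a partial method by the
    // designer-generated code and called from the agent's constructor.
    partial void OnInitialized()
    {
        // Repeat for each table that should sync both ways.
        this.Configuration.SyncTables["Customers"].SyncDirection =
            SyncDirection.Bidirectional;
    }
}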
Then I created another partial class for the cache server sync provider and, in OnInitialized(), added an event handler to hook the ApplyChangeFailed event. This code also works OK - my code runs when there is a conflict.
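The conflict hook looks roughly like this (the provider class name below is illustrative; the designer generates its own name):

using Microsoft.Synchronization.Data;

public partial class FSEMobileCacheServerSyncProvider
{
    partial void OnInitialized()
    {
        this.ApplyChangeFailed += HandleApplyChangeFailed;
    }

    private void HandleApplyChangeFailed(object sender, ApplyChangeFailedEventArgs e)
    {
        // Log the conflict type, keep the existing destination row,
        // and let the rest of the sync continue.
        System.Diagnostics.Debug.WriteLine(e.Conflict.ConflictType);
        e.Action = ApplyAction.Continue;
    }
}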
Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync:
var agent = new FSEMobileCacheSyncAgent();
var syncStats = agent.Synchronize();
syncStats seems to show the number of changes I made on the server and shows that they were applied. However, when I open the local SDF file, none of the changes are there.
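For reference, the statistics can be inspected like this (a sketch; TotalChangesDownloaded and TotalChangesUploaded are properties on the SyncStatistics object returned by Synchronize):

var syncStats = agent.Synchronize();
Console.WriteLine("Changes downloaded: " + syncStats.TotalChangesDownloaded);
Console.WriteLine("Changes uploaded:   " + syncStats.TotalChangesUploaded);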
I basically followed the instructions I found here:
http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx
and here:
http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx
It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something but I'm just not seeing it right now.
I thought this might be because I appeared to be using version 1 of Sync Services, so I removed the references to the Microsoft.Synchronization.* assemblies, installed Sync Framework 2.0, and added the new versions of the assemblies to the project. That hasn't made any difference.
Ideas?
Edit: I wanted to enable tracing to see if I could track this down, but the only way to do that is through a WinForms app, since it requires entries in the app.config file (my original project was a class library). I created a WinForms project, recreated everything, and suddenly everything is working. So apparently this requires a WinForms project for some reason?
This isn't really how I planned on using this - I had hoped to kick off syncing through another non-.NET application and provide the UI there, so the experience was a bit more seamless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.
You can load a dll's config file this way (in your class' constructor):
AppDomain.CurrentDomain.SetData("APP_CONFIG_FILE", System.IO.Path.Combine(Environment.CurrentDirectory, "<dll name>.config"));
System.Configuration.ConfigurationManager.RefreshSection("configuration");
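One caveat: this has to run before anything else in the process touches the configuration system, since the runtime caches the config file location on first access - hence placing it in the constructor (or even earlier) and refreshing the sections you need.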
The scenario you describe should work. I have a similar app that runs the sync inside a class, and that's OK. You do need to be able to set the trace properties, most easily through an app.config, but that's not your problem... No errors reported by the Synchronize method?
No conflict errors raised?
If you are still having difficulty, you can add the following to your application's config file, inside the <system.diagnostics> section:
<switches>
<add name="DefaultSwitch" value="Information" />
<!-- Sync Tracer Setting 0-off, 1-error, 2-warn, 3-info, 4-verbose. -->
<add name="SyncTracer" value="4" />
</switches>
<trace autoflush="true">
<listeners>
<add name="TestSyncListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\SyncTraceFile.txt" />
</listeners>
</trace>
Then have a look in the c:\SyncTraceFile.txt for any issues.
I am using three web services in my project, and everything was running correctly. But lately it crashes when creating the client, even though I haven't changed anything.
How can I solve it? Could you help me, please?
It's saying your service config file is not found. Are you referencing it correctly from the app.config?
It looks like you're using WPF or Silverlight so find your app.config file.
You cannot apply configSource= to <system.serviceModel>, since that is a config section group, not a simple config section, and the configSource attribute is only available on simple configuration sections.
You should, however, absolutely be able to apply the configSource attribute to any of the nodes inside <system.serviceModel> - I do this all the time, in production systems, and it just works: for behaviors, services, clients, bindings, etc., each in a separate .config file.
.ClientConfig is also a bad extension. All configuration files should have the .config extension.
In the worst-case scenario, if you can't configure the external config source for the settings, move them back to the app.config file of the application!
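For example, a minimal sketch (the file names are arbitrary) with each simple section externalized:

<system.serviceModel>
  <bindings configSource="bindings.config" />
  <client configSource="client.config" />
  <behaviors configSource="behaviors.config" />
</system.serviceModel>

Each external file must then have that section as its root element, e.g. bindings.config starts with <bindings> and contains exactly what would otherwise sit inline.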
I have an application written in Django that has now become quite sizeable. The application is in constant use throughout the day, and making changes 'on-the-fly' risks disruption.
I am fairly new to software development and am not sure of the best way to develop a Django application where I can issue updates grouped together as a 'version'. Instead of updating the main application as and when, in a live environment, I'd like to have a development server where I can develop and test any updates, and then roll these out once a month.
I could just copy the view/model files over and overwrite the old ones, but what is the best way to handle database changes? I assume I will have to write SQL to add/drop changed columns and overwrite the django_content_type table completely?
Any advice appreciated!
I'd go further than Ashish: you must use version control. You should not be "copying files over" and overwriting old ones. In 2014 there is no excuse for not using something like git (or Mercurial, or even SVN).
For the database changes, you should of course be using migrations. In version 1.7, due to be released very soon now, these are included in core Django. In previous versions, you will need to install the third-party library South.
As for the source files, you can use git. You test your changes on a dev (or test) server, and once you're satisfied, push the changes to git. On the prod server, you then pull the changes and restart the server.
The database changes can be accomplished using South and migrations. Again, you need to test your migrations on dev or test before pushing them to git. Once you're satisfied, you can then move the changes to prod.
The flow is ...
develop --> test --> commit changes to remote git --> pull changes on production --> migrate --> restart server
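In concrete commands, that might look like the following (a sketch; the app name, branch, and restart command are placeholders for your setup):

# on dev/test, once the changes pass your tests
python manage.py schemamigration myapp --auto   # South (Django < 1.7)
git commit -am "monthly release"
git push origin master

# on production
git pull origin master
python manage.py migrate myapp
sudo service apache2 restart                    # or however you restart your server

On Django 1.7+, the first step becomes python manage.py makemigrations myapp instead.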
Look here for important links
git --> http://git-scm.com
github --> http://github.com
south --> http://south.aeracode.org/
We have a peculiar situation here that is causing our automated tests to fail on a newly created lab environment, using TFS 2012.
We've always had a bunch of 'unit' tests that test our DAL code, which in turn uses the Enterprise Library Data Access Application Block to perform operations on the database. This was set up quite a few years ago to enable our clients to choose either SQL Server or Oracle databases alongside our product, taking advantage of the DatabaseFactory class and all the supporting generic interfaces and classes in entlib.data. I mention 'unit' like this because these are actually not pure unit tests but integration ones, seeing as they require a real database to work.
To test the same SQL code against both databases, we maintain two separate .config files inside a 'Resources' folder in our TFS project branch, pointing to our test databases:
Resources\SqlServer\ConnectionStrings.config (SqlServer specific connection strings)
Resources\Oracle\ConnectionStrings.config (Oracle specific connection strings)
In the root Resources folder, there are two accompanying .testsettings files, responsible for deploying files specific to each database:
Resources\SqlServer.testsettings (which deploys the SqlServer\ConnectionStrings.config file)
Resources\Oracle.testsettings (which deploys the Oracle\ConnectionStrings.config file)
Since the whole structure is in source control, the .testsettings files are able to find the .config files using relative paths, allowing us to test everything without having to set up parameters manually.
On devs machines, we always select the SqlServer.testsettings file when running the tests, so that they don't need to have the whole oracle environment installed to validate their changes before checking in the code. The Oracle side of the validation always occurred in our build process, where we actually test every method twice: first using the same SqlServer.testsettings used by the developers, and then using the Oracle.testsettings.
This way, we can setup our test assemblies' app.configs to redirect the connectionStrings node to an external file, like this:
<configuration>
<connectionStrings configSource="ConnectionStrings.config"/>
...
When the tests are run, MSTest copies the appropriate ConnectionStrings.config file to the test's working folder, based on which .testsettings was used to initiate the run.
This was working fine until today, when I discovered that tests started through Microsoft Test Manager ignore the Visual Studio .testsettings files. Now I'm trying to run these same tests in our lab environment but the ConnectionStrings.config files are not deployed (understandably) and the tests fail.
How can we achieve this without using .testsettings files? After having huge headaches trying to setup oracle correctly in our new x64 build server, we disabled Oracle tests in the build definition. Now that we started setting up our lab environment, we thought about having one of the machines in it configured with our whole system using Oracle, enabling us to again run these 'unit tests' with oracle-specific connection strings to validate our queries. At the same time, we want to keep testing everything locally and on the build server using SqlServer also.
I think using [DeploymentItem] in this case is impossible, since it is meant for static files rather than selectable, dynamic ones like in our current setup.
Is there any equivalent to the .testsettings deployment process that we could use with TestCases inside MTM/Lab Env? On the Properties tab for our TestPlan, I can see the Automated Runs -> Test Settings option, but that only seems to allow deployment by specifying absolute paths (which will actually be resolved on the target machines). Is there a way to specify a relative path there, pointing to our ConnectionStrings.config files checked in on TFS? Maybe yet another alternative exists that I'm missing, perhaps using multiple build configurations?
Create separate build configurations for each of the server types by going into Configuration Manager and clicking New under Active solution configurations. Then edit the project file to add something like this:
<PropertyGroup Condition="'$(Configuration)' == 'Oracle'">
  <AppConfig>App.Oracle.config</AppConfig>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'SQL'">
  <AppConfig>App.SQL.config</AppConfig>
</PropertyGroup>
Then ensure you have the correct connection strings in each of the config files. You can then configure TFS to build using those build configurations.
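You can then drive each configuration explicitly from the command line or the build definition's MSBuild arguments, e.g. (the solution name is a placeholder):

msbuild MySolution.sln /p:Configuration=Oracle
msbuild MySolution.sln /p:Configuration=SQL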
More info is available in the documentation on PropertyGroup and Condition, MSBuild configurations, and MSBuild project properties.
Has anyone here successfully connected to neo4j using ColdFusion?
I was able to connect to Neo4j 1.6.1 using this guide as a starting point: http://ghostednotes.com/2010/04/29/using-neo4j-graph-databases-with-coldfusion
However, it was a short-lived success. I have since uninstalled Neo4j 1.6.1 and installed 1.7.
I am now running Apache and CF 9.0.1 on Windows XP as a local dev box. I added ...\neo4j-community-1.7\lib to my CF class path, and the libraries are listed in the CF Server Java Class Path. Neo4j is running fine, as I can use its administration interface: http://localhost:7474/webadmin/# . CF and Apache are also running fine; I use them daily.
While the code below works, I'd really like to 'see' what's going on using the Neo4j web administration interface, so I can coordinate learning Neo4j with using the data in a CF application.
Code: (Works)
dbroot = "/tmp/neo4jtest1/";
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'var/myFirstGraphDB');
So I tried to connect to the Neo4j db graph.db. However, the code fails.
Code: (fails)
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'graph.db');
Error:
Object instantiation exception.
An exception occurred while instantiating a Java object. The class must not be an interface or an abstract class. Error: ''.
If I remove the "." in graph.db, it does create a "graphdb" in the Neo4j data folder and successfully connects to it. However, that db is not viewable with their admin :(
I'm a novice, so please dumb down your answer.
Ok, I think what you're trying to achieve is not possible. It is not possible to access Neo4J within CF (via Java) and have the admin interface working (caveat 1 applies).
If you have put all the jars of the Neo4J package into Adobe CF, then most likely the Neo4J admin interface is looking at its own Neo4J file system. When you create the embedded server, it is not connecting to the same database, because it simply can't.
Embedded Neo4J doesn't work like a standard database connection. One embedded Neo4J instance reads and writes to one directory location (key word: directory - it doesn't open a single file but a whole bunch of them). No two Neo4J instances can access the same directory location (caveat 2 applies).
Ok, the caveats:
1- it is possible, in theory, to manually start up the admin interface programmatically so that it uses the embedded server that you create via Java. The Java code looks simple enough (taken from Using the server (including web administration) with an embedded database):
// create your embedded graph db somewhere first
srv = CreateObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper")
    .init(graphDb);
srv.start();
// the server is now running
// until we stop it:
srv.stop();
I did not get this working, mostly because the admin server has a bunch of dependencies that were incompatible with the rest of my setup, so I can't advise on how well the above will work.
2- it is possible to have one read/write Neo4J instance accessing a location and then multiple read-only instances (EmbeddedReadOnlyGraphDatabase) reading the same location (but I've never tried it).
You do have the option of using the REST interface - either manually, or via the Neo4J Java REST Binding (kinda slow, though).
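To illustrate, hitting the REST interface manually from CF is just an HTTP call (a sketch, assuming the default port and the Neo4j 1.x REST root):

<cfhttp url="http://localhost:7474/db/data/" method="get" result="restResult">
<cfdump var="#deserializeJSON(restResult.fileContent)#">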
It might be worth reading the Deployment Scenarios documentation before getting too deep in this.
There is at least one CF/Neo4J bridge out there, but it's pretty incomplete. I have one that I worked on, but I need to figure out if I can open source it!
Just a small addition to otupman's comments. I can confirm his theory of connecting to the admin interface from CF. Adding the following jars to the CF class path seemed to be enough to get the basics up and running. You may need additional jars if you are using more advanced features. Note: I am using Tomcat, so the exact jars may differ slightly for your environment.
neo4j-community-1.7/lib/*.* (entire directory)
neo4j-community-1.7/system/lib: (ONLY the jars below)
asm-3.1.jar
asm-analysis-3.2.jar
asm-commons-3.2.jar
asm-tree-3.2.jar
asm-util-3.2.jar
commons-configuration-1.6.jar
jackson-core-asl-1.8.3.jar
jackson-jaxrs-1.8.3.jar
jackson-mapper-asl-1.8.3.jar
jersey-core-1.9.jar
jersey-multipart-1.9.jar
jersey-server-1.9.jar
jetty-6.1.25.jar
jetty-util-6.1.25.jar
neo4j-server-1.7-static-web.jar
neo4j-server-1.7.jar
rrd4j-2.0.7.jar
Then I started the server and database in onApplicationStart:
factory = createObject("java", "org.neo4j.graphdb.factory.GraphDatabaseFactory");
dbroot = ExpandPath("/neo4jtest/");
graphDb = factory.newEmbeddedDatabase(dbroot & 'myFirstGraphDB');
Bootstrapper = createObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper");
graphServer = Bootstrapper.init( graphDb );
graphServer.start();
application.graphServer = graphServer;
application.graphDb = graphDB;
And closed both in onApplicationEnd
application.graphDb.shutDown();
application.graphServer.stop();
Edit: After some further testing, I think it is better to load them once in onServerStart and then use a shutdown hook to close them. But since this is just for a local development box, it is less critical.
Trying out BizTalk with a web service call
The request/response is working fine on my own dev machine, but not on test ...
I exported the MSI over to my deployment test server (a separate virtual machine), created the application with Application/Import, and then tested it, only to find it not working...
Checking the event log, it shows an XLANG error with "Could not load file or assembly".
I checked the GAC, and the BizTalk assembly isn't there...
So I manually added it, and that seemed to do the trick.
Does the added web service reference mean the assembly has to be GAC'ed as part of a deployment?
And just to get me confused, I tried deploying to the real test server: imported the MSI, manually copied the DLL to the GAC ... and it fails with the XLANG error in the event log :-(
Any idea what's going on here?
What adapter are you using? I am assuming the SOAP adapter. If so, you can look at your bindings. Look here, near figure 6:
http://msdn.microsoft.com/en-us/magazine/cc163464.aspx
-Bryan
Just importing the MSI will, by default, not install the DLL. You need to run the MSI as well (from Windows Explorer, or via the option offered after the import). This physically moves the file onto the machine. This behavior is quite useful when scaling out. I don't think it has anything to do with the web reference specifically.
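If you need to check the GAC by hand while chasing this, gacutil from the Windows SDK is the usual tool (the assembly name below is a placeholder):

rem list matching assemblies to confirm whether it made it into the GAC
gacutil /l MyBizTalkProject.Orchestrations

rem install it manually if it is missing
gacutil /i MyBizTalkProject.Orchestrations.dll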