Using iCloud as database storage - icloud

I have copied my SQLite DB to iCloud and I want to run insert/update/delete operations against that iCloud copy. I wrote code for it, but it is not working across devices (I have two devices): when I perform a DB operation, it works on one device but the other device does not show the change. If I insert a record on the iPhone, it does not show up on the iPad. I am using the copyItemAtPath:, removeItemAtPath: and removeItemAtURL: methods of the NSFileManager class.
It's really urgent.
Thanks

Use Core Data + iCloud. Check out https://devforums.apple.com/thread/126670?tstart=0 (you need a developer account to access the forum).

Outsourcing a dashboard to others - how to keep the data private but still be able to fix bugs in the report?

I need your help.
I created a dashboard for another sector of our company. The data for the dashboard comes from Google Docs, and people from that sector edit it daily (sometimes renaming or removing columns), which forces me to check manually twice a week to make sure the dashboard is okay.
After the dashboard was created, that sector no longer wanted me to keep accessing their data. Is there any solution that: 1) allows me to check the dashboard when it has problems, and 2) minimizes my access to their private data?
No. If you want to be able to check the report, you will need access to the workspace. If you can't have access to the data, then a new report owner who does have access to it will have to take the report over from you.
The only other way would be to create a copy of the Google Docs with anonymised data, so you can still track column changes. You would base a report on that copy, change the connection settings, and then deploy it to the workspace. But if you can deploy it, you can technically access the live data in the workspace.

Unable to upload the Chinook DB to APEX

Is there a way to upload and import the Chinook DB into Oracle's APEX? I downloaded the DB file from the Chinook website (https://chinookdatabase.codeplex.com/) but I'm having trouble figuring out how to upload it.
I think you've got it wrong. The Chinook documentation page suggests that - once you download the file and extract the ZIP contents into some directory - you have to connect via SQL*Plus to perform the installation.
It says that you should create a database user (which means you need access to that database as a privileged user, such as SYS). Then you'd create the objects and insert the data into the tables.
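In SQL*Plus, that boils down to something like this (the user name and password are placeholders, and the exact script name depends on what your ZIP actually contains):
-- connect as a privileged user (e.g. SYS), then create the schema owner
CREATE USER chinook IDENTIFIED BY some_password;
GRANT CONNECT, RESOURCE TO chinook;
ALTER USER chinook QUOTA UNLIMITED ON users;
-- connect as the new user and run the supplied installation script
CONNECT chinook/some_password
@CreateChinook.sql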
Therefore, Apex comes far, far later (relative to where you currently are). Besides, I'd say that it is YOU who should create an Apex application based on the Chinook database; you won't get anything Apex-ish out of the Chinook installation itself.

SharePoint 2013 Admin Content DB is corrupted - Recover/Fix options?

After recovering from a recent hardware failure on our SharePoint server (single-server farm), all the SQL DBs were in suspect mode. To change them back to normal, we ran consistency checks on all DBs and successfully brought them back to normal mode. However, one particular database, SharePoint_AdminContent_, is still causing SQL crashes with messages like:
The Database ID 6, Page (1:11812), slot 22 for LOB data type node does not exist. This is usually caused by transactions that can read uncommitted data on a data page. Run DBCC CHECKTABLE.
DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS fails and does not complete successfully.
I have set the DB to single-user mode for now. Central admin works when I set it back to multi-user mode, but the SQL logs very quickly fill up the hard drive with crash dumps. I suspect the hardware failure has caused serious damage to the DB that cannot be repaired.
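In case it helps, the repair attempt looked roughly like this (the database name is shortened here; the real one carries the GUID suffix):
-- take the database to single-user mode so DBCC can get exclusive access
ALTER DATABASE [SharePoint_AdminContent] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- last-resort repair; may discard data to restore consistency
DBCC CHECKDB ('SharePoint_AdminContent', REPAIR_ALLOW_DATA_LOSS);
-- back to multi-user afterwards
ALTER DATABASE [SharePoint_AdminContent] SET MULTI_USER;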
I tried to move the central admin site to a new content DB using Move-SPSite, but it fails with the error given above.
In an attempt to repair central admin, I then unprovisioned it and tried to re-create it using both the Configuration Wizard GUI and PowerShell, one after the other, but both methods return the same error given above while trying to create the new central admin.
I have tried backing up the corrupted DB and restoring it to a new DB to see if that works, but it does not; the corruption transfers to the restored DB as well.
I have also tried detaching the corrupted AdminContent DB from SQL and then creating a new central admin site (hoping it would create a new admin content DB), but it complains that it cannot find the old admin content DB (I suppose the SharePoint_Config DB holds references to the old AdminContent DB). This method fails as well because the old DB is detached and no longer available.
Then I created a new content database under the central admin web application, unprovisioned the central admin site, removed the corrupted AdminContent DB (through central admin) and tried to create a new central admin site using psconfiggui. It did not open the site until I re-attached the corrupted admin content DB through PowerShell (Mount-SPContentDatabase).
I have a full farm backup taken with SharePoint's native tools through PowerShell. It includes a central admin backup, but that cannot be restored individually: I would have to restore the whole farm somewhere else just to see whether the restored admin content DB works. Even if it did work, how would I transfer it back to the original farm? It would have a new GUID, so how would I associate it with the original farm? I cannot restore into the original farm directly, because the backup is 3-4 days old and I can only restore AdminContent as part of a full farm restore, which would overwrite all the content as well.
Is there any way I can set up a new Admin Content DB and create a new central admin site using that DB? Or anything else I can do to fix this? Any help will be appreciated.
After 7 months you have probably fixed it; in that case, please share your approach with us. Otherwise, let me recommend having a look at:
http://www.sqlskills.com/blogs/paul/finding-table-name-page-id/
Particularly at this statement and how to read its output:
DBCC PAGE (6, 1, 11812, 0) WITH TABLERESULTS;
Note: The article's author is Paul Randal.
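As a rough sketch of how you'd use it (the ObjectId value below is a placeholder for whatever the page header actually shows):
DBCC TRACEON (3604); -- routes DBCC output to the client (not needed with TABLERESULTS, but harmless)
DBCC PAGE (6, 1, 11812, 0) WITH TABLERESULTS;
-- in the output, find the 'Metadata: ObjectId' field of the page header,
-- then resolve it to a table name within database ID 6:
SELECT OBJECT_NAME(<ObjectId>, 6);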

Making database schema changes using Microsoft Sync Framework without losing any tracking table data

I am using Microsoft Sync Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any database schema changes in the SQL Server database, we have to deprovision the tables; after making the schema changes, we re-provision them.
In this process, the tracking tables (i.e., the syncing information) get deleted.
I want the tracking table information to be restored after re-provisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
E.g., the application is at version 2.0 and syncing works fine. In the next version, 3.0, I want to make some DB changes. In the deprovision/re-provision process, the tracking info gets deleted, so all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking tables for you; you don't have to copy and reload them yourself.
Now, if you think the tracking table is where the framework stores what was previously synched, the answer is no.
The tracking table simply stores what was inserted/updated/deleted; it's used for change enumeration. The information on what was previously synched is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync again, it's as if the two replicas had never synched before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
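You can see the two kinds of metadata directly on the server; the tracking table name below just follows the <table>_tracking convention and is an example:
SELECT * FROM User_tracking; -- per-table change-enumeration data (inserts/updates/deletes)
SELECT * FROM scope_info;    -- sync knowledge: what each replica has already seen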
You can find information here on how to "hack" the Sync Fx-created objects to make some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table, "User", that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovision/re-provision process.
My workaround:
Before deprovisioning, I will run a script to copy all the "User_tracking" data into a temporary table, "User_tracking_1", so all the existing tracking info is preserved there. When I re-provision the table, a new tracking table "User_tracking" will be created.
After re-provisioning, I will copy the data from "User_tracking_1" back into "User_tracking" and then delete the contents of "User_tracking_1".
The tracking info will be restored.
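In T-SQL, the idea is roughly this (it assumes the schema change does not alter the tracking table's own columns):
-- before deprovisioning: keep a copy of the tracking data
SELECT * INTO User_tracking_1 FROM User_tracking;
-- ... deprovision, apply schema changes, re-provision ...
-- after re-provisioning: put the old tracking data back
INSERT INTO User_tracking SELECT * FROM User_tracking_1;
TRUNCATE TABLE User_tracking_1;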
Is this the right approach?

ColdFusion: Move data from one datasource to another

I need to move a series of tables from one datasource to another. Our hosting company doesn't allow shared logins across the databases, so I can't write a single SQL script to handle it.
The best option seems to be a little ColdFusion script that takes care of it.
Ordinarily I would do something like:
SELECT * INTO target_db.dbo.mytable FROM source_db.dbo.mytable
The only problem with this is that cfquery doesn't let you use two datasources in the same query.
I don't think I can use a query of queries (QoQ) either, because you can't tell it to use the second datasource; it has to have a dbtype of 'query'.
Can anyone think of an intelligent way to get this done? Or is the only option to loop over each row of the first query, adding the rows individually to the second?
My problem with that is that it would take much longer, and we have a lot of tables to move.
OK, so you don't have a shared password between the databases, but you do seem to have the passwords for each individual database (since you have datasources set up). So, can you create a linked server definition from database 1 to database 2? User credentials can be saved against the linked server, so they don't have to be the same as for the source DB. Once that's set up, you can definitely move data between the two DBs.
We use this all the time to sync data from our live database into our test environment. I can provide more specific SQL if this would work for you.
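For instance, the setup could look something like this (the server name, credentials and table names are all placeholders):
-- on the source server: define a linked server pointing at the target
EXEC sp_addlinkedserver
    @server = 'TARGET_LINK',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'target-host.example.com';
-- map logins: connect to the target with its own credentials
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'TARGET_LINK',
    @useself = 'FALSE',
    @rmtuser = 'target_user',
    @rmtpassword = 'target_password';
-- then copy data using four-part names
INSERT INTO TARGET_LINK.TargetDB.dbo.MyTable
SELECT * FROM dbo.MyTable;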
You CAN access two databases, but not two datasources in the same query.
I wrote something a few years ago called "DataSynch" for just this sort of thing.
http://www.bryantwebconsulting.com/blog/index.cfm/2006/9/20/database_synchronization
Everything you need for this to work is included in my free "com.sebtools" package:
http://sebtools.riaforge.org/
I haven't actually used this in a few years, but I can't think of any reason why it wouldn't still work.
Henry - why do any of this? Why not just use SQL Server Management Studio to move the selected tables using the "import data" function? (Right-click on your DB and choose "import", then use the native client and the permissions for the "other" database to specify the tables.) Your Management Studio will need access to both DBs, but the DB servers themselves do not need access to each other; Management Studio serves as the conduit.