I added Solr to my existing webapp. When I query the Solr data from the browser, it returns zero records (this is expected, since there is nothing there). But when I try to add documents using HttpSolrServer, no indexes are created and no exceptions are thrown. When I request the update URL from a browser, it looks fine.
Every time I run my program to create indexes, a write.lock file is created in my Solr data folder, but again no indexes are created.
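For reference, here is a minimal sketch of the same add-and-commit round trip against Solr's HTTP update handler (the URL, core name, and document fields are placeholders). One thing worth checking in a situation like this is the commit: documents that are added but never committed do not become searchable.

    import requests

    # Placeholder URL and core name; adjust to your setup.
    SOLR_UPDATE_URL = "http://localhost:8983/solr/mycore/update"

    docs = [
        {"id": "1", "title": "first test document"},
        {"id": "2", "title": "second test document"},
    ]

    # commit=true makes the documents searchable immediately; without a
    # commit (explicit, or autoCommit in solrconfig.xml) nothing shows up.
    resp = requests.post(SOLR_UPDATE_URL, params={"commit": "true"}, json=docs)
    resp.raise_for_status()
    print(resp.json())  # a responseHeader with status 0 indicates success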
Is there a way to upload and import the Chinook DB into Oracle's APEX? I downloaded the DB file from the Chinook website (https://chinookdatabase.codeplex.com/) but I'm having trouble figuring out how to upload it.
I think you've got it wrong. The Chinook documentation page says that, once you download the file and extract the ZIP contents into some directory, you have to connect with SQL*Plus to perform the installation.
It says that you should create a database user (which means you need access to that database as a privileged user, such as SYS). Then you'd create the objects and insert the data into the tables.
Therefore, APEX comes far, far later in the process (relative to where you are now). Besides, I'd say that it is YOU who should create an APEX application based on the Chinook database; you won't get anything APEX-ish during the Chinook installation.
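Purely to illustrate that privileged first step (the Chinook scripts themselves are meant to be run from SQL*Plus), here is a rough sketch using the python-oracledb driver; the credentials, DSN, and grants are all placeholders:

    import oracledb

    # Connect as a privileged user (SYS AS SYSDBA); placeholders throughout.
    conn = oracledb.connect(
        user="sys",
        password="change_me",
        dsn="localhost/XEPDB1",
        mode=oracledb.AUTH_MODE_SYSDBA,
    )
    cur = conn.cursor()

    # Create the schema that will own the Chinook objects, then grant it
    # enough privileges to create tables and insert data.
    cur.execute("CREATE USER chinook IDENTIFIED BY chinook")
    cur.execute("GRANT CONNECT, RESOURCE TO chinook")
    cur.execute("ALTER USER chinook QUOTA UNLIMITED ON users")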
I'm trying to use a regular Python script to add data to a table that was created via the standard Django process (startproject/startapp/create model, etc.).
In the same DB, I set up another table with the same columns as the Django table to test on, and wrote a script that successfully parsed data and inserted it into that table.
When I changed the table name so that the data would be written to the standard Django table, nothing was inserted and no error was thrown.
Is there something that prevents access to the Django tables that I'm unaware of?
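Django's tables are ordinary database tables, so nothing intercepts access to them at the database level. One thing worth trying is writing through the ORM itself from the standalone script; a minimal sketch, where "myproject" and "myapp" are placeholder names:

    import os
    import django

    # Point the script at the project's settings before touching the ORM.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    django.setup()

    from myapp.models import MyModel  # placeholder app and model names

    # Inserts through the ORM are committed automatically in Django's
    # default autocommit mode, so the row is visible as soon as
    # create() returns.
    MyModel.objects.create(name="parsed value")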
I am running a Django app with 2 processes (Apache + mod_wsgi).
When a certain view is called, the content of a folder is read and the process adds entries to my database based on what files are new/updated in the folder.
When 2 such views execute at the same time, both see the new file and both want to create a new entry. I cannot manage to have only one of them write the new entry.
I tried to use select_for_update, transaction.atomic(), and get_or_create, but without any success (maybe I used them wrongly?).
What is the proper way of locking to avoid writing an entry with the same content twice with get_or_create?
I ended up enforcing uniqueness at the database (model) level and catching the resulting IntegrityError in the code.
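A minimal sketch of that approach, with illustrative model and field names: the unique constraint makes the database the single arbiter between the racing processes, and the loser of the race simply catches the IntegrityError.

    from django.db import IntegrityError, models

    class FileEntry(models.Model):
        # The unique constraint is what actually prevents duplicates,
        # no matter how many processes race on the same file.
        path = models.CharField(max_length=255, unique=True)

    def record_file(path):
        try:
            FileEntry.objects.create(path=path)
        except IntegrityError:
            # Another process inserted this path first; nothing to do.
            pass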
After recovering from a recent hardware failure on our SharePoint server (single-server farm), all the SQL DBs were in suspect mode. To change the mode back to normal, we ran consistency checks on all DBs and successfully changed them back to normal mode. However, one particular database, SharePoint_AdminContent_, is still causing SQL crashes with messages like:
The Database ID 6, Page (1:11812), slot 22 for LOB data type node does not exist. This is usually caused by transactions that can read uncommitted data on a data page. Run DBCC CHECKTABLE.
DBCC CHECKDB WITH REPAIR_ALLOW_DATA_LOSS fails and does not complete successfully.
I have set the DB to single-user mode for now. Central Administration works when I set it to multi-user mode, but the SQL logs very quickly fill up the hard drive with crash dumps. I suspect the hardware failure has caused serious damage to the DB that cannot be repaired.
I tried to move the Central Administration site to a new content DB using Move-SPSite, but it fails with the error given above.
Then, in an attempt to repair Central Administration, I unprovisioned it and tried to re-create it using both the Configuration Wizard GUI and PowerShell, one by one, but both methods return the same error given above while trying to create the new Central Administration site.
I tried backing up the corrupted DB and restoring it to a new DB to see if that works, but it does not: the corruption transfers to the restored DB as well.
I also tried detaching the corrupted AdminContent DB from SQL and then creating a new Central Administration site (hoping it would create a new admin content DB), but it complains that it cannot find the old admin content DB (I suppose the SharePoint_Config DB holds references to the old AdminContent DB). In any case, this method fails as well because the old DB is detached and not available.
Then I created a new content database under the Central Administration web application, unprovisioned the Central Administration site, removed the corrupted AdminContent DB (through Central Administration), and tried to create a new Central Administration site using PSConfigGUI; it did not open the site until I attached the corrupted admin content DB through PowerShell (Mount-SPContentDatabase).
I have a full farm backup taken with SharePoint's native tools through PowerShell. It includes a Central Administration backup, but that cannot be restored individually; I would need to restore the whole farm somewhere just to see whether the restored admin content DB works. And even if it works, how would I transfer it back to the original farm? It will have a new GUID, so how would I associate it with the original farm? I cannot restore it to the original farm because the backup is 3-4 days old, and I can only restore AdminContent as part of a full farm restore, which would overwrite all the content as well.
Is there any way I can set up a new Admin Content DB and create a new Central Administration site using that DB? Or anything else I can do to fix this? Any help will be appreciated.
After 7 months you have probably fixed it; in that case, please share your approach with us. Otherwise, let me recommend that you have a look at:
http://www.sqlskills.com/blogs/paul/finding-table-name-page-id/
In particular, look at this command and how to read its output:
DBCC PAGE (6, 1, 11812, 0) WITH TABLERESULTS;
Note: The article's author is Paul Randal.
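The article shows how to locate the "Metadata: ObjectId" field in the TABLERESULTS output and resolve it to a table name with OBJECT_NAME(). If it helps, here is a rough sketch of the same lookup driven from Python with pyodbc (the connection string is a placeholder):

    import pyodbc

    # Placeholder connection string; point it at the affected instance.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=.;DATABASE=master;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # Database id 6, file 1, page 11812, print option 0, as in the
    # command above.
    cur.execute("DBCC PAGE (6, 1, 11812, 0) WITH TABLERESULTS;")

    # TABLERESULTS rows come back as (ParentObject, Object, Field, VALUE).
    for parent_object, obj, field, value in cur.fetchall():
        if field == "Metadata: ObjectId":
            print("Owning object id:", value)
            # Resolve the name inside the damaged DB with:
            #   SELECT OBJECT_NAME(<object id>)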
This is my scenario: I need to copy files to a SharePoint document library using its web services and set metadata on them. That's all possible with CopyIntoItems (from the Copy web service) except for Lookup fields. CopyIntoItems ignores them, so I need another way to set data on those fields.
I've tried creating a list item with the mandatory and Lookup field metadata and then, using the item ID (creating a FieldInformation field with the ID, as well as some other simple metadata), calling CopyIntoItems; instead of updating the item, SharePoint created a new one.
I can't do this in the reverse order because I have no way to get the ID of the item created by CopyIntoItems...
So, the question is: how can I upload a file to a SharePoint document library and set all of its metadata, including Lookup fields?
1. Use a regular PUT WebRequest to upload the document into the library (a sketch follows below).
2. Query the document library to find the ID of the item you just uploaded (based on its path).
3. Use the Lists.asmx web service to update the document metadata.
Helpful link: Uploading files to the SharePoint Document Library and updating any metadata columns
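A minimal sketch of step 1 in Python with requests and requests-ntlm (the site URL, library, file name, and credentials are all placeholders); steps 2 and 3 would then go through Lists.asmx, e.g. its GetListItems and UpdateListItems methods:

    import requests
    from requests_ntlm import HttpNtlmAuth  # pip install requests-ntlm

    # Placeholder site, library, and credentials.
    site = "http://sharepoint/sites/demo"
    target = site + "/Shared Documents/report.docx"

    with open("report.docx", "rb") as f:
        resp = requests.put(
            target,
            data=f,
            auth=HttpNtlmAuth("DOMAIN\\user", "password"),
        )

    # 200/201 means the file landed in the library; the metadata still
    # has to be set afterwards via Lists.asmx (steps 2 and 3 above).
    resp.raise_for_status()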
Keep in mind that if the destination folder's item count plus the ancestor folders' item counts exceeds the list view threshold, then you can't query the list for the ID (step 2 from Kit's answer).
Queries can be made more efficient by constraining them to a particular branch of the folder hierarchy. A workaround would be to modify the site settings, but the queries would still be sluggish, and it would make the solution less portable because the threshold can't be changed on Office 365 and BPOS.
This explains it much better: http://office.microsoft.com/en-us/office365-sharepoint-online-enterprise-help/create-or-delete-a-folder-in-a-list-or-library-HA102771961.aspx