SharePoint 2013 Query very slow

We set up a new SharePoint 2013 server to test how it would work as document storage.
The problem is that it is very slow and I don't know why.
I adapted the following from MSDN:
ClientContext _ctx;

private void btnConnect_Click(object sender, RoutedEventArgs e)
{
    try
    {
        _ctx = new ClientContext("http://testSP1");
        Web web = _ctx.Web;

        Stopwatch w = new Stopwatch();
        w.Start();

        List list = _ctx.Web.Lists.GetByTitle("Test");
        Debug.WriteLine(w.ElapsedMilliseconds); // 24 first time, 0 second time
        w.Restart();

        CamlQuery q = CamlQuery.CreateAllItemsQuery(10);
        ListItemCollection items = list.GetItems(q);
        _ctx.Load(items);
        _ctx.ExecuteQuery();
        Debug.WriteLine(w.ElapsedMilliseconds); // 1800 first time, 900 second time
    }
    catch (Exception)
    {
        throw;
    }
}
There aren't many documents in the Test list, just 3 folders and 1 Word file.
Any suggestions/ideas why it is this slow?
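One thing worth ruling out on the client side is how much data the CSOM call actually pulls back: without a restriction, the field values of every item are returned. Asking only for the fields you need keeps the payload and the server-side work small. A minimal sketch, continuing from the snippet above (the field names, and the idea that two fields are enough, are assumptions on my part):
// Same query as above, but only two fields per item are requested.
// Assumes the same _ctx and list objects as in the snippet above.
CamlQuery q = CamlQuery.CreateAllItemsQuery(10);
ListItemCollection items = list.GetItems(q);
_ctx.Load(items,
    col => col.Include(
        i => i["FileLeafRef"],   // file/folder name
        i => i["Modified"]));    // last modified date
_ctx.ExecuteQuery();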

Storing unstructured content (Word docs, PDFs, anything except metadata) in SharePoint's SQL content database is going to result in slower upload and retrieval than if the files are stored on the file system. That's why Microsoft created the Remote BLOB (Binary Large Object) Storage interface to enable files to be managed in SharePoint but live on the file system or in the cloud. The bigger the files, the greater the performance hit.
There are several third-party solutions that leverage this interface, including my company's offering, Metalogix StoragePoint. You can reach out to me at trossi#metalogix.com if you would like to learn more or visit http://www.metalogix.com/Products/StoragePoint/StoragePoint-BLOB-Offloading.aspx

Related

Node.js blockchain on private Ethereum

I have created a simple blockchain application using Node.js. The blockchain data file is stored on the local file system. There is no mining of blocks and no difficulty level involved in this blockchain.
Please suggest whether I can host this application on private Ethereum / Hyperledger, and what changes I would need to make for this. Below is the code I'm using for creating blocks.
Sample Genesis Block
[{"index":0,"previousHash":"0","timestamp":1465154705,"transaction":{"id":"0","transactionHash":"0","type":"","data":{"StudInfo":[{"id":"","studentId":"","parenterId":"","schemeId":"","batchId":"","instructorId":"","trainingId":"","skillId":""}]},"fromAddress":""},"hash":"816534932c2b7154836da6afc367695e6337db8a921823784c14378abed4f7d7"}]
Sample Code (Node.js)
var generateNextBlock = (blockData) => {
    var previousBlock = getLatestBlock();
    var nextIndex = previousBlock.index + 1;
    var nextTimestamp = new Date().getTime() / 1000;
    var nextHash = calculateHash(nextIndex, previousBlock.hash, nextTimestamp, blockData);
    return new Block(nextIndex, previousBlock.hash, nextTimestamp, blockData, nextHash);
};

var calculateHashForBlock = (block) => {
    return calculateHash(block.index, block.previousHash, block.timestamp, block.transaction);
};

var calculateHash = (index, previousHash, timestamp, transaction) => {
    return CryptoJS.SHA256(index + previousHash + timestamp + transaction).toString();
};

var addBlock = (newBlock) => {
    if (isValidNewBlock(newBlock, getLatestBlock())) {
        blockchain.push(newBlock);
        blocksDb.write(blockchain);
    }
};

var isValidNewBlock = (newBlock, previousBlock) => {
    if (previousBlock.index + 1 !== newBlock.index) {
        console.log('invalid index');
        return false;
    } else if (previousBlock.hash !== newBlock.previousHash) {
        console.log('invalid previoushash');
        return false;
    } else if (calculateHashForBlock(newBlock) !== newBlock.hash) {
        console.log(typeof (newBlock.hash) + ' ' + typeof calculateHashForBlock(newBlock));
        console.log('invalid hash: ' + calculateHashForBlock(newBlock) + ' ' + newBlock.hash);
        return false;
    }
    return true;
};
Congratulations, if you have gotten this far you have successfully set up Geth on AWS. Now we will go over how to configure an Ethereum node. Make sure you are in your home directory on your cloud server with the pwd command, then create a new folder, named whatever you want, to hold the genesis block of your Ethereum blockchain. You can do this with the following commands: the first creates the folder, the second changes directory into it, and the third creates a file called genesis.json.
mkdir mlg-ethchain
cd mlg-ethchain
nano genesis.json
To create a private blockchain you need to define the genesis block. Genesis blocks are usually embedded in the client, but with Ethereum you are able to configure a genesis block using a JSON object. Paste the following JSON object into your genesis.json file; each variable is explained in the following section.
{
    "nonce": "0xdeadbeefdeadbeef",
    "timestamp": "0x0",
    "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "extraData": "0x0",
    "gasLimit": "0x8000000",
    "difficulty": "0x400",
    "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "coinbase": "0x3333333333333333333333333333333333333333",
    "alloc": {
    }
}
The coinbase is the default address that mining rewards are paid to. Because you have not created a wallet yet, this can be set to whatever you want, provided that it is a valid Ethereum address. Difficulty is how hard it is for a miner to create a new block; for testing and development purposes it is recommended that you start with a low difficulty and then increase it. The parentHash is the hash of the parent block, which doesn't exist because this is the first block in the chain. The gasLimit is the maximum amount of gas that can be spent per block executing transactions or sending tokens from one account to another. The nonce is a random number for this block; it is the number that every miner has to guess to seal the block, but for the first block it must be hardcoded. You are able to provide any extra data in the extraData section. In the alloc section you can allocate a number of pre-mined tokens or ether to certain addresses at the beginning of the blockchain. We do not want to do this, so we will keep it blank.
After you have saved the file, you can check that it has been configured correctly with the cat command. From the same directory, input this command:
cat genesis.json
From what I understand, your JS code does some sort of low-level block generation (albeit in a centralised/standalone way). I'm not sure of the purpose of that code, but it's not an app that you could "port to" Ethereum or Hyperledger [Fabric], since the role of those blockchain engines is precisely to implement the block generation logic for you.
A JS app designed to work with Ethereum wouldn't do any of the block management. On the contrary, it would perform high-level, client-side interaction with a smart contract, which is basically a class (similar to Java classes) available on the whole network and whose methods are guaranteed by the network to do what they're meant to. Your JS app would essentially call the smart contract's methods as a client without caring about any block production issues.
In other, more vague but maybe more familiar terms:
Client: the JS app
Server/backend: the smart contract
Infrastructure/engine: Ethereum
On Ethereum, the way you get to the smart contract as a client is by sending RPC calls in JSON (hence the name JSON-RPC) to the contract's methods. The communication is done by sending that JSON over HTTP to an Ethereum node, preferably your own. In JavaScript, a few libraries such as web3 give you a higher-level abstraction so that you don't need to care about JSON-RPC and can think of your contract's methods as normal JavaScript functions.
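To make that concrete, here is a rough sketch of what one of those JSON-RPC calls looks like when made by hand, without web3 (shown in C# with HttpClient purely to illustrate the wire format; http://localhost:8545 is geth's usual HTTP-RPC endpoint and is an assumption about how your node is configured):
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class JsonRpcDemo
{
    static async Task Main()
    {
        // eth_blockNumber is one of the standard JSON-RPC methods every Ethereum
        // node exposes; the endpoint below assumes HTTP-RPC is enabled on your node.
        const string payload =
            "{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}";

        using (var http = new HttpClient())
        {
            var response = await http.PostAsync(
                "http://localhost:8545",
                new StringContent(payload, Encoding.UTF8, "application/json"));

            // The node replies with something like {"jsonrpc":"2.0","id":1,"result":"0x4b7"},
            // where "result" is the current block number in hex. Libraries like web3
            // wrap exactly this kind of exchange behind ordinary function calls.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}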
Also, since you're asking about private Ethereum: another consequence of that distribution of layers is that client code and smart contracts don't need to care about whether the Ethereum network is a public or a private one, i.e. what consensus protocol is in place. To make a bold analogy, it's similar to how SQL queries or schemas stay the same no matter how the database is persisted on disk.
Interaction with Hyperledger Fabric is similar in concept, except that you do plain REST calls to an HTTP endpoint exposed by the network. I'm not sure what client-level abstraction libraries are available.

SharePoint document library storing files on the filesystem

I'm in a bit of trouble here. Here is the context:
One of our customers asked us to develop an alternative to storing the documents of a document library in the content database, as their content database is growing too fast. They provided us with network storage so that the documents could be stored on the filesystem instead. After googling a bit, I found a feature called Remote Blob Storage (RBS), but as the references say, this is a per-content-database feature, which is not acceptable in this context. The other option I've come up with is the use of an SPItemEventReceiver, so that in the ItemAdded event I could save the SPFile associated with the ListItem of the SPItemEventProperties to the filesystem and then delete or truncate the SPFile object
public static void DeleteAssociatedFile(SPWeb web, SPListItem item)
{
    try
    {
        if (item == null) { throw new ArgumentNullException("item"); }
        if (item.FileSystemObjectType == SPFileSystemObjectType.File)
        {
            web.AllowUnsafeUpdates = true;
            using (var fileStream = item.File.OpenBinaryStream())
            {
                if (fileStream.CanWrite)
                {
                    fileStream.SetLength(0);
                }
            }
            item.File.Update();
        }
    }
    catch (Exception ex)
    {
        // log error message
        Logger.Unexpected("ListItemHelper.DeleteAssociatedFile", ex.Message);
        throw;
    }
    finally
    {
        web.AllowUnsafeUpdates = false;
    }
}
forcing it not to store its content in the content database. But it didn't work out. Every time I somehow manage to delete or truncate the SPFile associated with the ListItem, either the ListItem itself gets deleted from the document library or the file isn't affected by the change at all. So my question is: is there a solution to this problem? Any other thoughts that could help me in this quest?
Thanks in advance!
Since you have asked for other thoughts:
One thing that comes to mind is OneDrive for Business instead of network storage.
Another is to develop a custom file upload: upload the file directly to the network storage and, once uploaded, add an entry to a SharePoint list.
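For the second suggestion, the server-side piece could look roughly like this. This is only a sketch: the share path, the "Document Index" list and its "FilePath" column are all assumptions, not something from the question.
using System;
using System.IO;
using Microsoft.SharePoint;

public static class ExternalDocumentStore
{
    public static void StoreDocument(string webUrl, string fileName, Stream content)
    {
        // 1. Write the binary to the network storage instead of the content database.
        //    The UNC path here is a placeholder.
        string uncPath = Path.Combine(@"\\fileserver\sharepoint-docs", fileName);
        using (var target = File.Create(uncPath))
        {
            content.CopyTo(target);
        }

        // 2. Add a metadata entry to a regular (non-library) SharePoint list.
        using (var site = new SPSite(webUrl))
        using (var web = site.OpenWeb())
        {
            SPList list = web.Lists["Document Index"];   // assumed custom list
            SPListItem entry = list.Items.Add();
            entry["Title"] = fileName;
            entry["FilePath"] = uncPath;                 // assumed single-line text column
            entry.Update();
        }
    }
}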

Bulk content load and Page Creation in Sitecore

I have a process where I am getting an XML result set, from which I can process the data and programmatically create pages in Sitecore. This is simple if we only have a few pages, and only have to do it once.
Now my problem is that I have to create a minimum of 50k pages in Sitecore twice a day from XML, so loading that much data into Sitecore even once is a really slow process.
Is there an optimal way to create these pages in Sitecore?
I am using Sitecore 7.
Process for page creation:
using (new Sitecore.SecurityModel.SecurityDisabler())
{
    for (int i = 0; i < item.count; i++)
    {
        Item newCityItem = parentCityItem.Add("Page_" + i, template1);
        newCityItem.Editing.BeginEdit();
        try
        {
            newCityItem.Fields["html"].Value = mPages[i].ToString();
            newCityItem.Editing.EndEdit();
        }
        catch (System.Exception ex)
        {
            // The update failed, write a message to the log
            Sitecore.Diagnostics.Log.Error("Could not update item " + newCityItem.Paths.FullPath + ": " + ex.Message, this);
            // Cancel the edit (not really needed, as Sitecore automatically aborts
            // the transaction on exceptions, but it won't hurt your code)
            newCityItem.Editing.CancelEdit();
        }
    }
}
Any help ...
Wrap your loop in a BulkUpdateContext, which disables events, indexes, etc.:
using (new BulkUpdateContext())
{
    // code here
}
I don't think there is another way to create Sitecore items.
What I suggest is to disable indexing on the master database, because indexing slows things down a little when new items are created. If you need the index after creating your items, you can enable the indexes again and start reindexing.
If these items are being replaced twice a day in Sitecore, I assume that they're not being edited and you're using Sitecore for the presentation layer.
If so, you could map your XML as a Sitecore DataProvider. This way, the XML is used as the source of the items - they can still be read in Sitecore, and the Sitecore presentation layer sees them as regular Sitecore items.
There's a blog post explaining it at http://blog.horizontalintegration.com/2013/03/17/an-introduction-to-sitecore-data-providers/, as well as some documentation in the SDN.
Edit (thanks to jammykam)
I wouldn't map directly to an XML file - maybe put that into a database and then map that into Sitecore.
Every time you save an item, the statistics are updated (modified user, modified date, etc.) and all the events are fired (item saved, index building, etc.). You can disable both of these:
item.Editing.BeginEdit();
item["Title"] = "My new title";
item.Editing.EndEdit(false, true);
Depending on your requirements you may need to rebuild the index at the end of your import.
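Putting these answers together, the loop from the question might end up looking roughly like this (a sketch only: parentCityItem, template1 and mPages are the names from the question, and using mPages.Count as the loop bound is my assumption):
// Sketch: the same import loop, wrapped in a BulkUpdateContext and using
// EndEdit(false, true) to skip statistics updates and event firing.
using (new Sitecore.SecurityModel.SecurityDisabler())
using (new BulkUpdateContext())
{
    for (int i = 0; i < mPages.Count; i++)
    {
        Item newCityItem = parentCityItem.Add("Page_" + i, template1);
        newCityItem.Editing.BeginEdit();
        newCityItem.Fields["html"].Value = mPages[i].ToString();
        // false = don't update statistics, true = silent (don't fire events)
        newCityItem.Editing.EndEdit(false, true);
    }
}
// Rebuild the relevant search indexes afterwards if anything depends on them.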

MVC3/ASP.NET Best practice for ensuring a file argument is local and not trying to go outside of my application root directory?

Another developer maintains a large collection of crystal reports. I need to make these reports available with my ASP.NET MVC3 page without requiring a full Crystal Reports Server product.
The current reporting site is a classic ASP page with all of the arguments passed, e.g. Prompt0&Prompt1, etc.
To that end, I've created an aspx page that sits in my MVC app and serves these reports out of a directory in my app like so:
public partial class Report : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        iLogger logger = LoggingFactory.CreateLogger();
        ReportDocument rd = new ReportDocument();
        string fileName = Request.QueryString["reportfile"];
        if (!Regex.IsMatch(fileName, @"^[ 0-9a-zA-Z-_\\]+.rpt$"))
        {
            //log and throw
        }
        if (Path.IsPathRooted(fileName))
        {
            //log and throw
        }
        string rootPath = Server.MapPath("~/Reports/");
        string path = Path.Combine(rootPath, fileName);
        if (File.Exists(path))
        {
            rd.Load(path);
        }
        //get all keys starting with Prompt
        var prompts = Request.QueryString.AllKeys.Where(q => q.StartsWith("Prompt"));
        foreach (string promptKey in prompts)
        {
            //try to convert the rest of the string to an int
            //yes, this should probably not just be a replace here...
            string withoutPrompt = promptKey.Replace("Prompt", "");
            int promptVal;
            if (int.TryParse(withoutPrompt, out promptVal))
            {
                rd.SetParameterValue(promptVal, Request.QueryString[promptKey]);
            }
            //rd.SetParameterValue(promptKey, Request.QueryString[promptKey]);
        }
        CrystalReportViewer1.ReportSource = rd;
    }
}
This works surprisingly well for the amount of effort (the report designer just needs to change the links within the report/query pages from e.g. mywebserver.foo/Report1.rpt?Prompt... to mywebserver.foo/mymvcreport/report.aspx?Filename=report1.rpt&Prompt..., etc.).
So great, we can quickly move over to our MVC app and avoid having to have 10 sites go out and buy Crystal Server.
My obvious concern is that in the filename arg, someone could put just about anything in there, e.g. "C:\foo\bar" or "../bla/blah", etc. Is there a single best practice for escaping these filenames and ensuring that the path stays local to my app?
I'd like to be able to take a parameter of e.g. /Sales/Quarterly/Quarterly.rpt
My first thought is to just use a regex of e.g. [0-9a-zA-z-_\]+ to ensure no colon or dot characters can be used. Any suggestions on the most complete way to handle this?
Thanks!
EDIT:
Updated with preliminary checks I put in...
As long as you have assigned the right permissions to the application pool user, this is something you shouldn't worry about.
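That said, if you also want a defence-in-depth check in code, a common pattern is to canonicalise the combined path and verify it still sits under the reports root. A minimal sketch (the helper name is mine; rootPath would be the Server.MapPath("~/Reports/") value from the question):
// Sketch: resolves the requested file under the reports root and rejects
// anything whose canonical path escapes it (e.g. "..\..\web.config").
static string ResolveReportPath(string rootPath, string fileName)
{
    // Path.GetFullPath collapses "..", "." and mixed separators, so a
    // traversal attempt ends up outside rootPath and fails the check below.
    string fullPath = System.IO.Path.GetFullPath(System.IO.Path.Combine(rootPath, fileName));
    if (!fullPath.StartsWith(rootPath, StringComparison.OrdinalIgnoreCase))
    {
        // log and throw, as with the other checks in the question
        throw new System.Web.HttpException(404, "Report not found");
    }
    return fullPath;
}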

How to integration test an object with database queries

How can I write unit integration tests that talk to a database? For example:
public int GetAppLockCount(DbConnection connection)
{
    string query :=
        "SELECT"+CRLF+
        " tl.resource_type AS ResourceType,"+CRLF+
        " tl.resource_description AS ResourceName,"+CRLF+
        " tl.request_session_id AS spid"+CRLF+
        "FROM sys.dm_tran_locks tl"+CRLF+
        "WHERE tl.resource_type = 'APPLICATION'"+CRLF+
        "AND tl.resource_database_id = ("+CRLF+
        " SELECT dbid"+CRLF+
        " FROM master.dbo.sysprocesses"+CRLF+
        " WHERE spid = @@spid)";

    IRecordset rdr = Connection.Execute(query);
    int nCount = 0;
    while not rdr.EOF do
    {
        nCount := nCount+1;
        rdr.Next;
    }
    return nCount;
}
In this case I am trying to exorcise this code of bugs (the IRecordset returns an empty recordset).
[UnitTest]
void TestGetLockCountShouldAlwaysSucceed();
{
    DbConnection conn = GetConnectionForUnit_IMean_IntegrationTest();
    GetAppLockCount(conn);
    CheckTrue(True, "We should reach here, whether there are app locks or not");
}
Now all I need is a way to connect to some database when running a unit integration test.
Do people store connection strings somewhere for the test-runner to find? A .ini or .xml or .config file?
Note: Language/framework agnostic. The code intentionally contains elements from:
C#
Delphi
ADO.net
ADO
NUnit
DUnit
in order to drive that point home.
Now all I need is a way to connect to some database when running a unit integration test.
Either use an existing database or an in-memory database. I've tried both and currently use an existing database that is splatted and rebuilt using Liquibase scripts in an Ant file.
Advantages to in-memory - no dependencies on other applications.
Disadvantages - Not quite as real, can take time to start up.
Advantages to real database - Can be identical to the real world
Disadvantages - Requires access to a 3rd party machine. More work setting up a new user (i.e. create new database)
Do people store connection strings somewhere for the test-runner to find? A .ini or .xml or .config file?
Yeap. In C# I used a .config file, in Java a .props file. With in-memory you can throw this into version control, as it will be the same for each person; with a real database running somewhere, it will need to be different for each user.
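For the C# side, a minimal sketch of what that looks like with NUnit and a .config file (the connection-string name, the database and the test class are my assumptions, not something from the original answer):
// app.config of the test project is assumed to contain:
//   <connectionStrings>
//     <add name="IntegrationTest"
//          connectionString="Server=localhost;Database=MyAppTests;Integrated Security=true"
//          providerName="System.Data.SqlClient" />
//   </connectionStrings>
using System.Configuration;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class AppLockTests
{
    private SqlConnection _connection;

    [SetUp]
    public void OpenConnection()
    {
        // Each developer or CI agent points this entry at their own database.
        string cs = ConfigurationManager.ConnectionStrings["IntegrationTest"].ConnectionString;
        _connection = new SqlConnection(cs);
        _connection.Open();
    }

    [TearDown]
    public void CloseConnection()
    {
        _connection.Dispose();
    }

    [Test]
    public void GetAppLockCount_ShouldAlwaysSucceed()
    {
        // Reaching the end without an exception is the whole assertion,
        // mirroring the test in the question.
        Assert.DoesNotThrow(() => GetAppLockCount(_connection));
    }

    // Stand-in for the method under test from the question.
    private int GetAppLockCount(SqlConnection connection)
    {
        return 0;
    }
}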
You will also need to consider seed data. In Java I've used dbUnit in the past. Not the most readable, but works. Now I use a Ruby ActiveRecord task.
How do you start this? First, can you rebuild your database? You need to be able to automate this before going too far down this road.
Next you should build up a blank local database for your tests. I go with one-per-developer, some other teams share but don't commit. In a .NET/MS SQL world I think in memory would be quite easy to do.