IIS7 native module - setting request entity with file handle - C++

So I'm working on an IIS7 native module, part of whose job is to process some fairly large uploaded files. I'm looking for ways to reduce the module's memory footprint while doing this.
One thing I was able to do with the processed response data, which is nice, is pass open file handles to the underlying system instead of memory buffers, using the HttpDataChunkFromFileHandle chunk type. I'm trying to do the same thing with the request data, but so far no joy.
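For reference, the response-side version that works looks roughly like this (a sketch only; _responseFile is a placeholder name mirroring the _requestFile object used below):
// Sketch: respond straight from an open file handle so http.sys reads
// the file itself and the module never holds a large buffer.
HTTP_DATA_CHUNK chunk;
chunk.DataChunkType = HttpDataChunkFromFileHandle;
chunk.FromFileHandle.FileHandle = _responseFile.handle();
chunk.FromFileHandle.ByteRange.StartingOffset.QuadPart = 0;
chunk.FromFileHandle.ByteRange.Length.QuadPart = _responseFile.size();
DWORD bytesSent = 0;
HRESULT hr = _context->GetResponse()->WriteEntityChunks(
    &chunk, 1, FALSE /*async*/, FALSE /*more data*/, &bytesSent );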
What I am doing is first reading all of the request data and processing it, then setting the entity chunks on the raw HTTP_REQUEST like this:
// Swap the raw request's entity for a single file-handle chunk.
HTTP_REQUEST* rawRequest = _context->GetRequest()->GetRawHttpRequest();
rawRequest->EntityChunkCount = 1;
rawRequest->pEntityChunks = new HTTP_DATA_CHUNK[1];
rawRequest->pEntityChunks[0].DataChunkType = HttpDataChunkFromFileHandle;
rawRequest->pEntityChunks[0].FromFileHandle.FileHandle = _requestFile.handle();
rawRequest->pEntityChunks[0].FromFileHandle.ByteRange.StartingOffset.QuadPart = 0;
rawRequest->pEntityChunks[0].FromFileHandle.ByteRange.Length.QuadPart = _requestFile.size();
and returning RQ_NOTIFICATION_CONTINUE.
This results in a 403 response from the server.
If I use a memory chunk instead, it works correctly:
// Copy the processed file into request-allocated memory and hand it
// back as a single in-memory chunk.
char* bufferOut = static_cast<char*>(_context->AllocateRequestMemory( _requestFile.size() ));
std::memcpy( bufferOut, _requestFile.map( 0, _requestFile.size() ), _requestFile.size() );
HTTP_REQUEST* rawRequest = _context->GetRequest()->GetRawHttpRequest();
rawRequest->EntityChunkCount = 1;
rawRequest->pEntityChunks = new HTTP_DATA_CHUNK[1];
rawRequest->pEntityChunks[0].DataChunkType = HttpDataChunkFromMemory;
rawRequest->pEntityChunks[0].FromMemory.pBuffer = (PVOID)bufferOut;
rawRequest->pEntityChunks[0].FromMemory.BufferLength = _requestFile.size();
So... is HttpDataChunkFromFileHandle just not supported for request entities? Or is there something else I need to do for it to work?
Do I need to set any specific security permissions on the file?

Related

Postman - accessing the stored results in the leveldb database

So I have a set of results in Postman from a runner on a collection, using a data file for the iterations. I have the stored data from the runner in the Postman app on Linux, but I want to know how I can get hold of it. There seems to be a database hidden away in the ~/.config directory (/Desktop/file__0.indexeddb.leveldb) that looks like it holds the results.
Is there any way I can get hold of the raw data? I want to save the results from the database rather than faff around with running newman, or hacking up a server to post the results to and then save them; I already have 20,000 results in the collection. I want to get the responseData from each post and save it to a file. I will not execute the posts again, so I just need to work out a way to extract what's already stored.
I've tried KeyLord, FastNoSQL (this crashes), and levelDBViewer (Jar), but I'm not having any luck here.
Any suggestions?
Inline at line 25024 of runner.js, as a simple yet hacky workaround for small numbers of results, I can do the following:
RunnerResultsRequestListItem = __WEBPACK_IMPORTED_MODULE_2_pure_render_decorator___default()(_class = class RunnerResultsRequestListItem extends __WEBPACK_IMPORTED_MODULE_0_react__["Component"] {
  constructor(props) {
    super(props);
    // Build a text blob from the response body and trigger a download for it.
    var text = props.request.response.body,
        blob = new Blob([text], { type: 'text/plain' }),
        anchor = document.createElement('a');
    anchor.download = props.request.ref + ".txt";
    anchor.href = (window.webkitURL || window.URL).createObjectURL(blob);
    anchor.dataset.downloadurl = ['text/plain', anchor.download, anchor.href].join(':');
    anchor.click();
    // ... rest of the original constructor/class continues unchanged
It allows me to save, but obviously I have to confirm each save for now. Does anyone know how to automate the saving part? Please add something here!
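One idea (untested, and "results" here is a hypothetical stand-in for however you collect the runner's request objects; it is not a real Postman API): bundle all the response bodies into a single blob so there is only one download to confirm, using the same anchor trick as above.
// Hypothetical sketch: "results" is assumed to be an array of the
// runner's request objects, like props.request above.
function saveAllResponses(results) {
  var text = results
    .map(function (r) { return r.response.body; })
    .join('\n\n');
  var blob = new Blob([text], { type: 'text/plain' }),
      anchor = document.createElement('a');
  anchor.download = 'all-responses.txt';
  anchor.href = (window.webkitURL || window.URL).createObjectURL(blob);
  anchor.click(); // one download instead of one per result
}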

Multi-part form data upload with Akka HTTP

I'm trying to figure out how to create a multi-part form data request with Akka HTTP (client API), but I can't find a way to express form data.
Does anyone know how to create form data that would take a file or input stream?
I guess it's a bit late now, but this example has both a client and a server. I'll copy the relevant part:
def createEntity(file: File): Future[RequestEntity] = {
  require(file.exists())
  val formData =
    Multipart.FormData(
      Source.single(
        Multipart.FormData.BodyPart(
          "test",
          HttpEntity(MediaTypes.`application/octet-stream`, file.length(),
            SynchronousFileSource(file, chunkSize = 100000)), // the chunk size here is currently critical for performance
          Map("filename" -> file.getName))))
  Marshal(formData).to[RequestEntity]
}
def createRequest(target: Uri, file: File): Future[HttpRequest] =
  for {
    e ← createEntity(file)
  } yield HttpRequest(HttpMethods.POST, uri = target, entity = e)
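For completeness, actually firing the request might look roughly like this (a sketch only; it assumes an implicit ActorSystem, materializer, and ExecutionContext are in scope, and the target URI and file are placeholders):
// Minimal usage sketch; the dispatcher import provides the ExecutionContext.
import system.dispatcher
val response: Future[HttpResponse] =
  createRequest(Uri("http://localhost:8080/upload"), new File("/tmp/data.bin"))
    .flatMap(request => Http().singleRequest(request))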
The simplest way to achieve this would be:
val formData = Multipart.FormData.fromFile(
  "<FORM_DATA_KEY>",
  MediaTypes.`application/octet-stream`,
  file = file,
  chunkSize = 100000)
val httpRequest = HttpRequest(HttpMethods.POST, uri = target, entity = formData.toEntity)
In the first expression you can also use Multipart.FormData.fromPath, which accepts a file path (java.nio.file.Path) instead of the file object itself.
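For example (a sketch; the key and path are placeholders):
import java.nio.file.Paths
val formDataFromPath = Multipart.FormData.fromPath(
  "<FORM_DATA_KEY>",
  MediaTypes.`application/octet-stream`,
  Paths.get("/path/to/file"),
  chunkSize = 100000)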

How to get the name of an unwanted saved file and delete it?

I need to filter my raster image by a fixed threshold, so I use ILogicalOp functions. Whenever I use them, an output file is saved to the workspace, which is unwanted given my large database. The save happens exactly after rasOut[i] = RMath.LessThan(inputRas[i], cons01). How can I prevent this? Or how can I get the saved file's name and delete it? Any comments would be appreciated.
private IGeoDataset[] CalcColdThreshold(IGeoDataset[] inputRas)
{
    IGeoDataset[] rasOut = new IGeoDataset[inputRas.Length];
    ILogicalOp RMath = new RasterMathOpsClass();

    // Point the raster analysis environment at the output workspace.
    IRasterAnalysisEnvironment env = (IRasterAnalysisEnvironment)RMath;
    IWorkspaceFactory workspaceFactory = new RasterWorkspaceFactoryClass();
    IWorkspace workspace = workspaceFactory.OpenFromFile(System.IO.Path.GetFullPath(workSpace_save.Text), 0);
    env.OutWorkspace = workspace;

    // Build a constant raster to compare against.
    IRasterMakerOp Rmaker = new RasterMakerOpClass();
    Threshold_value = 15000;
    IGeoDataset cons01 = Rmaker.MakeConstant(Threshold_value, false);

    // The unwanted file appears in the workspace right after each LessThan call.
    for (int i = 0; i < inputRas.Length; i++)
    {
        rasOut[i] = RMath.LessThan(inputRas[i], cons01);
    }
    return rasOut;
}
(Disclaimer: I'm not actually a C# programmer, just trying to provide some pointers to get you going, since no one else seems to have any answers.) (converted from comment)
The IScratchWorkspaceFactory interface sounds like it will do what you want: instead of creating your workspace variable with IWorkspaceFactory.OpenFromFile, try creating a scratch workspace instead. According to the documentation, it is automatically cleaned up when your application exits.
Just remember to use a different workspace for your final output. :)
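A minimal sketch of that swap (untested; I'm going off the documentation here):
// Untested sketch: send the intermediate rasters to a scratch
// workspace so they are cleaned up automatically on exit.
IScratchWorkspaceFactory scratchFactory = new ScratchWorkspaceFactoryClass();
IWorkspace scratchWorkspace = scratchFactory.CreateNewScratchWorkspace();
env.OutWorkspace = scratchWorkspace; // instead of the OpenFromFile workspace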

Sitecore Clear Cache Programmatically

I am trying to publish programmatically in Sitecore. Publishing works fine, but doing so programmatically doesn't clear the Sitecore cache. What is the best way to clear the cache programmatically?
I am trying to use the web service that comes with the staging module, but I am getting a Bad Request exception (Exception: The remote server returned an unexpected response: (400) Bad Request). I tried increasing the service receiveTimeout and sendTimeout in the client-side config file, but that didn't fix the problem. Any pointers would be greatly appreciated.
I am using the following code:
CacheClearService.StagingWebServiceSoapClient client = new CacheClearService.StagingWebServiceSoapClient();
CacheClearService.StagingCredentials credentials = new CacheClearService.StagingCredentials();
credentials.Username = @"sitecore\adminuser"; // verbatim string so the backslash isn't treated as an escape
credentials.Password = "***********";
credentials.isEncrypted = false;
bool s = client.ClearCache(true, dt, credentials); // dt is a DateTime declared elsewhere
I am using the following code to do the publish:
Database master = Sitecore.Configuration.Factory.GetDatabase("master");
Database web = Sitecore.Configuration.Factory.GetDatabase("web");
string userName = @"default\adminuser"; // verbatim string so the backslash isn't treated as an escape
Sitecore.Security.Accounts.User user = Sitecore.Security.Accounts.User.FromName(userName, true);
user.RuntimeSettings.IsAdministrator = true;
using (new Sitecore.Security.Accounts.UserSwitcher(user))
{
    Sitecore.Publishing.PublishOptions options = new Sitecore.Publishing.PublishOptions(master, web,
        Sitecore.Publishing.PublishMode.Full, Sitecore.Data.Managers.LanguageManager.DefaultLanguage, DateTime.Now);
    options.RootItem = master.Items["/sitecore/content/"];
    options.Deep = true;
    options.CompareRevisions = true;
    options.RepublishAll = true;
    options.FromDate = DateTime.Now.AddMonths(-1);
    Sitecore.Publishing.Publisher publisher = new Sitecore.Publishing.Publisher(options);
    publisher.Publish();
}
In Sitecore 6, the CacheManager class has a static method that clears all caches (the older ClearAll() method is obsolete):
Sitecore.Caching.CacheManager.ClearAllCaches();
Just a quick note: as of Sitecore 6.3 this is no longer needed; caches are cleared automatically after a change happens on a remote server.
Also, if you are on previous releases, instead of clearing all caches, you can do partial cache clearing.
There is a free shared source component called Stager that does that.
http://trac.sitecore.net/SitecoreStager
If you need a custom solution, you can simply extract the source code from there.
I got this from Sitecore support. It clears all caches:
// Clear caches against the web database...
Sitecore.Context.Database = this.WebContext.Database;
Sitecore.Context.Database.Engines.TemplateEngine.Reset();
Sitecore.Context.ClientData.RemoveAll();
Sitecore.Caching.CacheManager.ClearAllCaches();
// ...then against the shell database.
Sitecore.Context.Database = this.ShellContext.Database;
Sitecore.Context.Database.Engines.TemplateEngine.Reset();
Sitecore.Caching.CacheManager.ClearAllCaches();
Sitecore.Context.ClientData.RemoveAll();
The out-of-the-box solution Sitecore provides to clear caches (ALL of them) is the following page: http://sitecore_instance_here/sitecore/admin/cache.aspx. Its code-behind looks like the following snippet:
foreach (var cache in Sitecore.Caching.CacheManager.GetAllCaches())
    cache.Clear();
Via the SDN:
HtmlCache cache = CacheManager.GetHtmlCache(Context.Site);
if (cache != null) {
cache.Clear();
}

Why does the WebSharingAppDemo-CEProviderEndToEnd sample still need a client DB connection after scope creation to perform a sync?

I'm researching a way to build an n-tiered sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasible; however, for some reason the app will only sync if the client has a live SQL DB connection. Can someone explain what I'm missing and how to sync without exposing SQL to the internet?
The problem I'm experiencing is that when I provide a relational sync provider that has an open SQL connection from the client, it works fine, but when I provide a relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF service stating that the server did not receive the batch file. So what am I doing wrong?
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = hostName;
builder.IntegratedSecurity = true;
builder.InitialCatalog = "mydbname";
builder.ConnectTimeout = 1;
provider.Connection = new SqlConnection(builder.ToString());
// provider.Connection.Open(); // NOTE: un-commenting this line makes the sync work

// Create a new scope description and add the appropriate tables to this scope.
DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName);

// Class to be used to provision the scope defined above.
SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning();
....
The error I get occurs in this part of the WCF code:
public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData)
{
    Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString);
    DbSyncContext dataRetriever = changeData as DbSyncContext;
    if (dataRetriever != null && dataRetriever.IsDataBatched)
    {
        string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString();

        // Data is batched. The client should have uploaded this file to us prior
        // to calling ApplyChanges, so look for it. The id is
        // DbSyncContext.BatchFileName, which is just the batch file name without
        // the complete path.
        string localBatchFileName = null;
        if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName))
        {
            // Service has not received this file. Throw exception.
            throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null));
        }
        dataRetriever.BatchFileName = localBatchFileName;
    }
Any ideas?
For the batch-file-not-available issue, remove the IsOneWay = true setting from IRelationalSyncContract.UploadBatchFile. When the batch file size is big, ApplyChanges can be called before the previous UploadBatchFile has fully completed.
// Replace
[OperationContract(IsOneWay = true)]
// with
[OperationContract]
void UploadBatchFile(string batchFileid, byte[] batchFile, string remotePeerId);
I suppose it's simply a bare-bones example: it demonstrates "some" technique but assumes you will arrange things in the proper order yourself.
http://msdn.microsoft.com/en-us/library/cc807255.aspx