I am using TestComplete 7. I am writing a test that downloads a file from the web and puts the downloaded file in Stores. I am using C++Script for this, but I am having a problem: I don't know how to download a file from the web using its URL in C++Script. Can somebody give me any suggestions?
function Test(){
// Specify the names of the source and destination files
var strFileURL = "http://www.automatedqa.com/file to get";
var strHDLocation = "c:\\temp\\filename";
// Download the file
var objHTTP = new ActiveXObject("MSXML2.XMLHTTP");
objHTTP.open("GET", strFileURL, false); // third argument false makes the request synchronous, so no polling loop is needed
objHTTP.send();
if (200 != objHTTP.Status) {
Log.Error("The " + strFileURL + " file was not found." + " The returned status is " + objHTTP.Status);
return;
}
var objADOStream = new ActiveXObject("ADODB.Stream");
objADOStream.Open();
objADOStream.Type = 1; //adTypeBinary
objADOStream.Write(objHTTP.ResponseBody);
objADOStream.Position = 0; //Set the stream position to the start
var objFSO = new ActiveXObject("Scripting.FileSystemObject");
if (objFSO.FileExists(strHDLocation)) objFSO.DeleteFile(strHDLocation);
objADOStream.SaveToFile(strHDLocation);
objADOStream.Close();
Files.Add(strHDLocation);
}
I was reading this post -> upload to google cloud storage signed url with javascript
and it reads the entire file into the reader, then seems to send the entire file. Is there a way instead to read a chunk and send a chunk with GCP Storage signed URLs? That way, we do not blow memory on a very large file, and we can show a progress bar as we upload.
We are fine with any javascript client as we do not currently use any right now.
thanks,
Dean
Resumable uploads work by sending multiple requests, each of which contains a portion of the object you're uploading.
When working with resumable uploads, you only create and use a signed URL for the POST request that initiates the upload. This initial request returns a session URI that you use in subsequent PUT requests to upload the data. Since the session URI acts as an authentication token, the PUT requests do not use any signed URLs.
Once you've initiated a resumable upload, there are two ways to upload the object's data:
In a single chunk: This approach is usually best, since it requires fewer requests and thus has better performance.
In multiple chunks: Use this approach if you need to reduce the amount of data transferred in any single request, such as when there is a fixed time limit for individual requests, or if you don't know the total size of the upload at the time the upload begins.
You can use the Cloud Storage Node.js library. Do note that when using a signed URL to start a resumable upload session, you will need to specify the x-goog-resumable header with the value start in the request, or else signature validation will fail. Refer to this documentation for additional samples, and guides for getting a signed URL to allow limited-time access to a bucket.
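As a rough browser-side sketch of that flow (the names contentRangeHeader and resumableUpload, the chunk size, and how signedUrl is obtained from your backend are all illustrative assumptions, not part of any official API):

```javascript
// Build the Content-Range header for one chunk of a resumable upload.
// GCS requires every chunk except the last to be a multiple of 256 KiB.
function contentRangeHeader(offset, chunkLength, totalSize) {
  return `bytes ${offset}-${offset + chunkLength - 1}/${totalSize}`;
}

async function resumableUpload(signedUrl, file, chunkSize = 32 * 256 * 1024) {
  // 1. Initiate the session; only this POST uses the signed URL,
  //    and it must carry the x-goog-resumable: start header.
  const initResponse = await fetch(signedUrl, {
    method: 'POST',
    headers: { 'x-goog-resumable': 'start' },
  });
  const sessionUri = initResponse.headers.get('Location');

  // 2. PUT the chunks against the session URI (no signed URL needed,
  //    since the session URI itself acts as the authentication token).
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    const chunk = file.slice(offset, Math.min(offset + chunkSize, file.size));
    await fetch(sessionUri, {
      method: 'PUT',
      headers: { 'Content-Range': contentRangeHeader(offset, chunk.size, file.size) },
      body: chunk,
    });
    // a progress bar could be updated here: offset + chunk.size of file.size bytes sent
  }
}
```

Only the final chunk may be shorter than the chunk size; every other chunk must be a multiple of 256 KiB or Cloud Storage will reject it.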
We are doing chunked uploads with composing - so we chunk the file and create a signed URL for every chunk. These chunks are then composed.
Here is a fully working C# example for chunked upload and download of a test file to a Google Cloud Storage bucket (it took me a long time to put my original solution together because I didn't find much online). To compile, you need to install from NuGet:
https://www.nuget.org/packages/MimeTypes
https://www.nuget.org/packages/Crc32.NET/1.2.0/
You also need to install the Google Cloud Storage API:
https://www.nuget.org/packages/Google.Cloud.Storage.V1/
Finally, it is assumed that you have a JSON file with credentials downloaded from the Google Cloud console (here it is called credentials.json).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;
using Google.Cloud.Storage.V1;
using Google.Apis.Storage.v1.Data;
using System.Net.Http;
using System.Net.Http.Headers;
using System.IO;
using System.Xml;
using System.Web;
using Google.Apis.Auth.OAuth2;
using System.Security.Cryptography;
using Force.Crc32;
namespace GoogleCloudPOC
{
class Program
{
static StorageClient storage;
static UrlSigner urlSigner;
static string bucketName = "ratiodata";
static void Main(string[] args)
{
var credential = GoogleCredential.FromFile("credentials.json");
storage = StorageClient.Create(credential);
urlSigner = UrlSigner.FromServiceAccountPath("credentials.json");
//create a dummy file
var arr = new byte[1000000];
var r = new Random();
for(int i = 0; i < arr.Length; i++)
{
arr[i] = (byte) r.Next(255);
}
//now upload this file in two chunks - we use two threads to illustrate that it is done in parallel
Console.WriteLine("Starting parallel upload ...");
string cloudFileName = "parallel_upload_test.dat";
var threadpool = new Thread[2];
int offset = 0;
int buflength = 100000;
int blockNumber = 0;
var blockList = new SortedDictionary<int, string>();
for(int t = 0; t < threadpool.Length; t++)
{
threadpool[t] = new Thread(delegate ()
{
while (true)
{
int currentOffset = -1;
int currentBlocknumber = -1;
lock (arr)
{
if (offset >= arr.Length) { break; }
currentOffset = offset;
currentBlocknumber = blockNumber;
offset += buflength;
blockNumber++;
}
int len = buflength;
if (currentOffset + len > arr.Length)
{
len = arr.Length - currentOffset;
}
//create signed url
var dict = new Dictionary<string, string>();
//calculate hash
var crcHash = Crc32CAlgorithm.Compute(arr, currentOffset, len);
var b = BitConverter.GetBytes(crcHash);
if (BitConverter.IsLittleEndian)
{
Array.Reverse(b);
}
string blockID = $"__TEMP__/{cloudFileName.Replace('/', '*')}.part_{currentBlocknumber}_{Convert.ToBase64String(b)}";
lock (blockList)
{
blockList.Add(currentBlocknumber, blockID);
}
dict.Add("x-goog-hash", $"crc32c={Convert.ToBase64String(b)}");
//add custom time
var dt = DateTimeOffset.UtcNow.AddHours(-23); //with a lifecycle rule of "1 day after custom time", the temp chunks become eligible for deletion about an hour after creation
var CustomTime = String.Format("{0:D4}-{1:D2}-{2:D2}T{3:D2}:{4:D2}:{5:D2}.{6:D2}Z", dt.Year, dt.Month, dt.Day, dt.Hour, dt.Minute, dt.Second, dt.Millisecond / 10);
dict.Add("x-goog-custom-time", CustomTime);
var signedUrl = getSignedUrl(blockID, 1, "upload", dict);
//now perform the actual upload with this URL - this part could run in the browser as well
using (var client = new HttpClient())
{
var content = new ByteArrayContent(arr, currentOffset, len);
content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/octet-stream");
foreach (var kvp in dict)
{
client.DefaultRequestHeaders.Add(kvp.Key, kvp.Value);
}
var response = client.PutAsync(signedUrl, content).Result;
if (!response.IsSuccessStatusCode)
{
throw new Exception("upload failed"); //this should be replaced with some sort of exponential backoff
}
}
}
});
threadpool[t].Start();
}
for (int t = 0; t < threadpool.Length; t++)
{
threadpool[t].Join();
}
//now we compose the chunks into a single file - we can do at most 32 at a time
BlobCombine(blockList.Values.ToArray(), cloudFileName);
Console.WriteLine("... parallel upload finished");
//now use chunked download
Console.WriteLine("Starting parallel download ...");
var downloadedArr = new byte[arr.Length];
threadpool = new Thread[2];
offset = 0;
buflength = 200000;
var downloadUrl = getSignedUrl(cloudFileName, 1, "download"); //single download URL is sufficient
for (int t = 0; t < threadpool.Length; t++)
{
threadpool[t] = new Thread(delegate ()
{
while (true)
{
int currentOffset = -1;
lock (downloadedArr)
{
if (offset >= arr.Length) { break; }
currentOffset = offset;
offset += buflength;
}
int len = buflength;
if (currentOffset + len > downloadedArr.Length)
{
len = downloadedArr.Length - currentOffset;
}
//now perform the actual download with this URL - this part could run in the browser as well
var tags = new Dictionary<string, string>();
tags.Add("Range", $"bytes={currentOffset}-{currentOffset + len - 1}");
using (var client = new HttpClient())
{
var request = new HttpRequestMessage { RequestUri = new Uri(downloadUrl) };
foreach (var kvp in tags)
{
client.DefaultRequestHeaders.Add(kvp.Key, kvp.Value);
}
var response = client.SendAsync(request).Result;
var buffer = new byte[len];
//Stream.Read may return fewer bytes than requested, so loop until the chunk is complete
var responseStream = response.Content.ReadAsStream();
int bytesRead = 0;
while (bytesRead < len)
{
int n = responseStream.Read(buffer, bytesRead, len - bytesRead);
if (n == 0) { throw new Exception("unexpected end of stream"); }
bytesRead += n;
}
lock (downloadedArr)
{
Array.Copy(buffer, 0, downloadedArr, currentOffset, len);
}
}
}
});
threadpool[t].Start();
}
for (int t = 0; t < threadpool.Length; t++)
{
threadpool[t].Join();
}
Console.WriteLine("... parallel download finished");
//compare original array and downloaded array
for(int i = 0; i < arr.Length; i++)
{
if (arr[i] != downloadedArr[i])
{
throw new Exception("download is different from original data");
}
}
Console.WriteLine("good job: original and downloaded data are the same!");
}
static string getSignedUrl(string cloudFileName, int hours, string capability, Dictionary<string, string> tags = null)
{
string url = null;
switch (capability)
{
case "download":
url = urlSigner.Sign(bucketName, cloudFileName, TimeSpan.FromHours(hours), HttpMethod.Get);
break;
case "upload":
var requestHeaders = new Dictionary<string, IEnumerable<string>>();
if (tags != null)
{
foreach (var kvp in tags)
{
requestHeaders.Add(kvp.Key, new[] { kvp.Value });
}
}
UrlSigner.Options options = UrlSigner.Options.FromDuration(TimeSpan.FromHours(hours));
UrlSigner.RequestTemplate template = UrlSigner.RequestTemplate
.FromBucket(bucketName)
.WithObjectName(cloudFileName).WithHttpMethod(HttpMethod.Put);
if (requestHeaders.Count > 0)
{
template = template.WithRequestHeaders(requestHeaders);
}
url = urlSigner.Sign(template, options);
break;
case "delete":
url = urlSigner.Sign(bucketName, cloudFileName, TimeSpan.FromHours(hours), HttpMethod.Delete);
break;
}
return url;
}
static bool BlobCombine(string[] inputFiles, string outputFile)
{
var sourceObjects = new List<ComposeRequest.SourceObjectsData>();
foreach (var fn in inputFiles)
{
sourceObjects.Add(new ComposeRequest.SourceObjectsData { Name = fn });
}
while (sourceObjects.Count > 32)
{
var prefix = sourceObjects.First().Name.Split('.').First();
var newSourceObjects = new List<ComposeRequest.SourceObjectsData>();
var currentSplit = new List<ComposeRequest.SourceObjectsData>();
var sb = new StringBuilder();
for (int i = 0; i < sourceObjects.Count; i++)
{
sb.Append(sourceObjects[i].Name.Split('.').Last());
currentSplit.Add(sourceObjects[i]);
if (currentSplit.Count == 32)
{
var targetName = $"{prefix}.{HashStringOne(sb.ToString())}";
if (!condense(currentSplit, targetName, false))
{
return false;
}
newSourceObjects.Add(new ComposeRequest.SourceObjectsData() { Name = targetName });
currentSplit = new List<ComposeRequest.SourceObjectsData>();
sb = new StringBuilder();
}
}
if (currentSplit.Count == 1)
{
newSourceObjects.Add(currentSplit[0]);
}
if (currentSplit.Count > 1)
{
var targetName = $"{prefix}.{HashStringOne(sb.ToString())}";
if (!condense(currentSplit, targetName, false))
{
return false;
}
newSourceObjects.Add(new ComposeRequest.SourceObjectsData() { Name = targetName });
}
sourceObjects = newSourceObjects;
}
return condense(sourceObjects, outputFile, true);
}
static ulong HashStringOne(string s)
{
ulong hash = 0;
for (int i = 0; i < s.Length; i++)
{
hash += (ulong)s[i];
hash += (hash << 10);
hash ^= (hash >> 6);
}
hash += (hash << 3);
hash ^= (hash >> 11);
hash += (hash << 15);
return hash;
}
static bool condense(List<ComposeRequest.SourceObjectsData> input, string targetName, bool lastRound)
{
try
{
storage.Service.Objects.Compose(new ComposeRequest
{
SourceObjects = input
}, bucketName, targetName).Execute();
if (!lastRound)
{
//set custom time
var file = storage.GetObject(bucketName, targetName);
file.CustomTime = DateTime.UtcNow.AddHours(-23);
file = storage.UpdateObject(file);
}
else
{
//try to set mime type based on file extensions
var file = storage.GetObject(bucketName, targetName);
file.ContentType = MimeTypes.GetMimeType(targetName);
file = storage.UpdateObject(file);
}
return true;
}
catch
{
return false;
}
}
}
}
The upload is performed in parallel using signed URLs. Even though this is a C# command-line program, you could easily put that code into an ASP.NET Core backend. There are a few lines of code where the actual upload/download happens using HttpClient - those could be done in JavaScript in the browser.
The only thing that has to run on the backend is creating the signed URLs - plus the composing of the chunks (this could probably be done from the browser, but it typically isn't a heavy operation, and Google recommends not doing these operations with signed URLs).
Note that you have to create a different signed URL for each upload chunk, but a single signed URL is sufficient for the download.
Also note that the composition code is a bit involved, because you can only combine up to 32 chunks into a new object on Cloud Storage - hence you might need a few rounds of composition (you can compose objects that are already composed).
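The batching logic those rounds follow can be sketched with a small helper (the function and the intermediate object names here are hypothetical; the real work is done by Cloud Storage's compose API, 32 source objects at a time):

```javascript
// Plan the compose rounds: group the chunk object names into batches of at
// most 32, pretend each batch becomes one intermediate object, and repeat
// until a single object remains. Returns the batches for each round.
function composePlan(objectNames, maxPerCompose = 32) {
  const rounds = [];
  let current = objectNames;
  while (current.length > 1) {
    const batches = [];
    for (let i = 0; i < current.length; i += maxPerCompose) {
      batches.push(current.slice(i, i + maxPerCompose));
    }
    rounds.push(batches);
    // each batch is composed into one intermediate object for the next round
    current = batches.map((_, i) => `intermediate_${rounds.length}_${i}`);
  }
  return rounds;
}
```

For example, 40 chunks need two rounds: a first round with batches of 32 and 8, then a second round composing those two intermediates into the final object.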
I am including CRC32C hashes in the upload to make sure each chunk is uploaded correctly. There should be some JavaScript library to perform this in the browser. If you run this in the browser, you need to send the hash to the backend when requesting a signed upload URL, because this header is embedded in the PUT request and has to be signed as part of the signed URL.
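CRC32C is small enough to implement directly on the browser side; here is a sketch matching what Crc32CAlgorithm computes in the C# code above (the function names are illustrative, and GCS expects the big-endian digest, base64-encoded, as the x-goog-hash header value):

```javascript
// 256-entry lookup table for CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
const CRC32C_TABLE = (() => {
  const table = new Uint32Array(256);
  for (let n = 0; n < 256; n++) {
    let c = n;
    for (let k = 0; k < 8; k++) {
      c = c & 1 ? 0x82f63b78 ^ (c >>> 1) : c >>> 1;
    }
    table[n] = c >>> 0;
  }
  return table;
})();

// Table-driven CRC-32C over a byte array (Uint8Array or Buffer).
function crc32c(bytes) {
  let crc = 0xffffffff;
  for (let i = 0; i < bytes.length; i++) {
    crc = CRC32C_TABLE[(crc ^ bytes[i]) & 0xff] ^ (crc >>> 8);
  }
  return (crc ^ 0xffffffff) >>> 0;
}

// x-goog-hash header value: big-endian digest bytes, base64-encoded.
function googHashHeader(bytes) {
  const crc = crc32c(bytes);
  const be = [(crc >>> 24) & 0xff, (crc >>> 16) & 0xff, (crc >>> 8) & 0xff, crc & 0xff];
  const b64 = typeof btoa === 'function'
    ? btoa(String.fromCharCode(...be)) // browser
    : Buffer.from(be).toString('base64'); // Node
  return `crc32c=${b64}`;
}
```

The standard check value can be used to sanity-test an implementation: crc32c of the ASCII string "123456789" must come out as 0xE3069283.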
The custom time is included and set to 23 hours before the current time, so that you can set a lifecycle rule on your bucket which deletes the temporary chunks one day after the custom time (effectively the chunks survive a few hours longer, even though they become eligible about an hour after creation). You can also delete the chunks manually, but I would use the custom-time approach anyway to make sure you are not gunking up your bucket with failed uploads.
The above approach is truly parallel upload/download. If you just care about chunking (for a progress bar, say) but don't need parallel threads doing the upload/download, then a resumable upload is an option (you would still use the same download approach as outlined above). Such an upload has to be initiated with a single POST call, and then you can upload the file chunk by chunk (similar to the way the download code works).
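The chunk-by-chunk download against a single signed URL could look roughly like this in browser JavaScript (byteRanges, chunkedDownload, and the way you obtain signedUrl are assumptions for illustration, mirroring the C# download loop above):

```javascript
// Split a total size into [start, end] byte ranges, with end inclusive,
// as the HTTP Range header expects.
function byteRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, totalSize) - 1]);
  }
  return ranges;
}

// Fetch each range separately and assemble the result in order.
async function chunkedDownload(signedUrl, totalSize, chunkSize) {
  const result = new Uint8Array(totalSize);
  for (const [start, end] of byteRanges(totalSize, chunkSize)) {
    const response = await fetch(signedUrl, {
      headers: { Range: `bytes=${start}-${end}` },
    });
    result.set(new Uint8Array(await response.arrayBuffer()), start);
    // a progress bar could be updated here with end + 1 of totalSize bytes done
  }
  return result;
}
```

The ranges could also be fetched concurrently (Promise.all) to match the parallel C# version, at the cost of holding more chunks in flight at once.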
The Aspose.BarCode reader is unable to read a barcode of type DecodeType.Code128.
Workflow steps:
1. Using Aspose.BarCode we create a barcode using DecodeType.Code128 and put it on a PDF page (our clients use this page as a separator sheet)
2. Our client then inserts this barcode page between several physical documents and scans them all, which creates one big single PDF
3. Our splitting process then loops through all pages, checks whether each page is a barcode page, and splits the big PDF into individual small PDFs
The issue is that sometimes the scan quality of the barcode is not that great, and in such cases Aspose.BarCode is unable to read the barcode.
I have attached a couple of barcode PDFs with low scan quality where Aspose is not able to read the barcodes. I have tried different combinations of RecognitionMode and ManualHints options without any luck.
Below is my code to identify the barcode page:
using (var fs = new FileStream(file, FileMode.Open))
{
var pdfDocument = new Document(fs);
foreach (Page page in pdfDocument.Pages)
{
var isSeparator = splitter.IsSeparator(page);
Assert.IsTrue(isSeparator);
}
}
public bool IsSeparator(Page page)
{
if (page.Resources.Images != null && page.Resources.Images.Count >= 1)
{
var img = page.Resources.Images[1];
using (MemoryStream barcodeImage = new MemoryStream())
{
img.Save(barcodeImage, ImageFormat.Jpeg);
barcodeImage.Seek(0L, SeekOrigin.Begin);
using (BarCodeReader barcodeReader = new BarCodeReader(barcodeImage, _barcodeDecodeType))
{
barcodeReader.RecognitionMode = RecognitionMode.MaxQuality;
while (barcodeReader.Read())
{
var barcodeText = barcodeReader.GetCodeText();
if (barcodeText.ToLower() == "eof")
{
return true;
}
}
}
}
}
return false;
}
I am unable to reproduce the issue at my end. I used the following sample code snippet to recognize the barcode, along with the latest version of the API. It is always recommended to use the latest version of the API, as it contains new features and improvements.
CODE:
Aspose.Pdf.License licensePdf = new Aspose.Pdf.License();
licensePdf.SetLicense(@"Aspose.Total.lic");
// bind the pdf document
Aspose.Pdf.Facades.PdfExtractor pdfExtractor = new Aspose.Pdf.Facades.PdfExtractor();
pdfExtractor.BindPdf(@"173483_2.pdf");
// extract the images
pdfExtractor.ExtractImage();
// save images to stream in a loop
while (pdfExtractor.HasNextImage())
{
// save image to stream
System.IO.MemoryStream imageStream = new System.IO.MemoryStream();
pdfExtractor.GetNextImage(imageStream);
imageStream.Position = 0;
Aspose.BarCode.BarCodeRecognition.BarCodeReader barcodeReader =
new Aspose.BarCode.BarCodeRecognition.BarCodeReader(imageStream);
while (barcodeReader.Read())
{
Console.WriteLine("Codetext found: " + barcodeReader.GetCodeText() + ", Symbology: " + barcodeReader.GetCodeType().ToString());
}
// close the reader
barcodeReader.Close();
}
Further to update you, the same query has been posted on the Aspose.BarCode support forum. You may please visit the link for details.
I work as developer evangelist at Aspose.
Can we cache dynamically created lists or views until the web services are called in the background? I want to achieve something like the Facebook app does. I know it's possible in native Android, but I wanted to try it in Titanium (Android and iOS).
To explain further:
Consider an app which has a list. When I open it for the first time, it will obviously hit the webservice and create a dynamic list.
Now I close the app and open it again. The old list should be visible until the webservice provides new data.
Yes, Titanium can do this. You should use a global variable like Ti.App.myList if it is just an array / a list / a variable. If you need to store more complex data like images or databases, you should use the built-in filesystem. There is really good documentation on the Appcelerator website.
The procedure for you would be as follows:
Load your data for the first time
Store your data in your preferred way (Global variable, file system)
During future app starts, read out your local list / data and display it until your sync is successful.
You should consider implementing some variable to check whether any update is needed, to minimize network use (it saves energy and provides a better user experience if the user's internet connection is slow).
if (response.state == "SUCCESS") {
Ti.API.info("Themes successfully checked");
Ti.API.info("RESPONSE TEST: " + response.value);
//Create a map of the layout names(as keys) and the corresponding url (as value).
var newImageMap = {};
for (var key in response.value) {
var url = response.value[key];
var filename = key + ".jpg"; //EDIT your type of the image
newImageMap[filename] = url;
}
if (Ti.App.imageMap && Object.keys(Ti.App.imageMap).length > 0) {
//Check for removed layouts
for (var image in Ti.App.imageMap) {
if (image in newImageMap) {
Ti.API.info("The image " + image + " is already in the local map");
//Do nothing
} else {
//Delete the removed layout
Ti.API.info("The image " + image + " is deleted from the local map");
delete Ti.App.imageMap[image];
}
}
//Check for new images
for (var image in newImageMap) {
if (image in Ti.App.imageMap) {
Ti.API.info("The image " + image + " is already in the local map");
//Do nothing
} else {
Ti.API.info("The image " + image + " is put into the local map");
//Put new image in local map
Ti.App.imageMap[image] = newImageMap[image];
}
}
} else {
Ti.App.imageMap = newImageMap;
}
//Check whether the file already exists
for (var key in response.value) {
var url = response.value[key];
var filename = key + ".png"; //EDIT YOUR FILE TYPE
Ti.API.info("URL: " + url);
Ti.API.info("FILENAME: " + filename);
imagesOrder[imagesOrder.length] = filename.match(/\d+/)[0]; //THIS SAVES THE FIRST NUMBER IN YOUR FILENAME AS ID
//Case1: download a new image
var file = Ti.Filesystem.getFile(Ti.Filesystem.resourcesDirectory, "/media/" + filename);
if (file.exists()) {
// Do nothing
Titanium.API.info("File " + filename + " exists");
} else {
// Create the HTTP client to download the asset.
var xhr = Ti.Network.createHTTPClient();
xhr.onload = function() {
if (xhr.status == 200) {
// On successful load, take that image file we tried to grab before and
// save the remote image data to it.
Titanium.API.info("Successfully loaded");
file.write(xhr.responseData);
Titanium.API.info(file);
Titanium.API.info(file.getName());
}
};
// Issuing a GET request to the remote URL
xhr.open('GET', url);
// Finally, sending the request out.
xhr.send();
}
}
}
In addition to this code, which should be placed in a success method of an API call, you need a global variable Ti.App.imageMap to store the map of keys and the corresponding URLs. I guess you will have to change the code a bit to fit your needs and your project, but it should give you a good starting point.
I have a need to create an action that will:
1. copy a selected part (selected by hand) of an image in an already opened file
2. paste selection into new file
3. save the new file as a jpg file, but not with the default file name of "untitled.jpg" - instead use a unique name or an auto-incrementing suffix
Because the action will be run multiple times on different selections from the same image, saving each selection with a unique name or auto-incremented suffix would save the step of manually supplying the filename each time a different selection is saved.
I can create an action that gets to the save-as step, but don't know if it is possible to modify the default save as name as described above. Is it possible?
No. Tried it before with no success. You have to save manually.
I don't think this is possible with an action, but you can write a script to do it.
I have created a script for similar work. It uses a technique to generate unique filenames and saves the file.
/************************************************************************
* Author: Nishant Kumar
* Description: This script iterates through a template to create
* jpg images with id card numbers.
* Date: 08-03-2015
***********************************************************************/
//current id count
var id_count = 0;
//total no of id cards to produce
var total_id_cards = 42;
//no. of cards per sheet
var card_per_sheet = 21;
//Save path related to current file
var save_path = app.activeDocument.path;
//do an iteration, number the cards and save file
do{
//iterate 24 nos in each document
for(var i = 0; i<card_per_sheet; i++){
id_count++;
app.activeDocument.layers[i].textItem.contents = id_count;
}
//Create a jpg document with standard options
jpgSaveOptions = new JPEGSaveOptions();
jpgSaveOptions.embedColorProfile = true;
jpgSaveOptions.formatOptions = FormatOptions.STANDARDBASELINE;
jpgSaveOptions.matte = MatteType.NONE;
jpgSaveOptions.quality = 12;
//Save jpg with incremental file names (1.jpg, 2.jpg), make sure the path exists
jpgFile = new File( save_path + "/output/" + id_count/card_per_sheet + ".jpeg" );
app.activeDocument.saveAs(jpgFile, jpgSaveOptions, true, Extension.LOWERCASE);
}while(id_count < total_id_cards);
I know this is old, but still. You can use the following script.
How to use a script:
Copy the following script into Notepad and save it in a directory similar to "C:\Program Files (x86)\Adobe\Adobe Photoshop CS2\Presets\Scripts" with the extension .jsx.
To run the script in Photoshop, go to File > Scripts > "Your Script".
#target photoshop
main();
function main(){
if(!documents.length) return;
var Name = app.activeDocument.name.replace(/.[^.]+$/, '');
Name = Name.replace(/\d+$/,'');
try{
var savePath = activeDocument.path;
}catch(e){
alert("You must save this document first!");
return;
}
var fileList= savePath.getFiles(Name +"*.jpg").sort().reverse();
var Suffix = 0;
if(fileList.length){
Suffix = Number(fileList[0].name.replace(/\.[^\.]+$/, '').match(/\d+$/));
}
Suffix= zeroPad(Suffix + 1, 4);
var saveFile = File(savePath + "/" + Name + "_" + Suffix + ".jpg");
SaveJPG(saveFile);
}
function SaveJPG(saveFile){
//Create a jpg document with standard options
jpgSaveOptions = new JPEGSaveOptions();
jpgSaveOptions.embedColorProfile = true;
jpgSaveOptions.formatOptions = FormatOptions.STANDARDBASELINE;
jpgSaveOptions.matte = MatteType.NONE;
jpgSaveOptions.quality = 12;
//Save jpg with incremental file names (1.jpg, 2.jpg), make sure the path exists
activeDocument.saveAs(saveFile, jpgSaveOptions, true, Extension.LOWERCASE);
};
function zeroPad(n, s) {
n = n.toString();
while (n.length < s) n = '0' + n;
return n;
};
I would like to make a GUI to host a Minecraft server on Windows. Minecraft servers use a .jar file plus a .bat file that runs the .jar file and reads output from / gives input to it.
How can I make a C++ program that will open the .jar file, read its output, and give input to it?
I tried execlp, but when I #include <unistd.h> I get an error that the "source file could not be read" (I think this is because it is made for POSIX, but I'm not sure).
Any help would be appreciated!
(Also, just so you know, I'm very new to programming and C++)
I managed to do this in C#. Here is the code (I think the C++ code would be quite similar):
var arguments = "-jar -Xms" + Settings.ServerStartInfo.InitialRam + "M -Xmx" +
Settings.ServerStartInfo.MaximumRam + "M \"" + Settings.ServerStartInfo.FileName +
"\" -nojline" + Settings.ServerStartInfo.Arguments;
var processStartInfo = new ProcessStartInfo
{
FileName = "javaw.exe",
Arguments = arguments,
CreateNoWindow = true,
ErrorDialog = false,
RedirectStandardOutput = true,
RedirectStandardError = true,
RedirectStandardInput = true,
StandardOutputEncoding = Encoding.UTF8,
StandardErrorEncoding = Encoding.UTF8,
UseShellExecute = false,
WorkingDirectory = Settings.ServerStartInfo.WorkingDirectory
};
Process = new Process { StartInfo = processStartInfo };
Process.OutputDataReceived += ServerOutputHandler.ServerOutputReceived;
Process.ErrorDataReceived += ServerOutputHandler.ServerOutputReceived;
Process.Start();
Process.BeginOutputReadLine();
Process.BeginErrorReadLine();
Process.WaitForExit();
Process.Start();
For more info, please take a look here: https://servercrafter.codeplex.com/SourceControl/latest#ServerCrafter/ServerCrafter.ClassLibrary/ClassLibrary/Server/Server.cs