Accessing uploaded files from Camunda task

The final task of a Camunda process must write the files uploaded by the user to a specific folder. So, I've created the following 'Service task' as the last one of the process:
Then, from the Java project, I've added the FinishArchiveDelegate class with the following code:
package com.ower.abpar.agreements;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
public class FinishArchiveDelegate implements JavaDelegate {
@Override
public void execute(DelegateExecution execution) throws Exception {
System.out.println("Process finished: "+execution.getVariables());
}
}
When I check the logs, I can see the document names, for example:
document_1 => FileValueImpl [mimeType=image/jpeg, filename=Test_agreement1.jpg, type=file, isTransient=false]
The problem is that this only shows the file name; I'd need to retrieve the file content from Camunda's database in order to copy it to another folder. Any suggestions or ideas?
Thanks!

After some tests, I realized that I can get not only the name but also the uploaded file's content using execution.getVariable(DOCUMENT_VARIABLE_NAME). So this is what I did:
// Get the uploaded file content
Object fileData = execution.getVariable("filename");
// The following returns a FileValueImpl object with metadata
// about the uploaded file, such as the name
FileValueImpl fileMetadata = (FileValueImpl) execution.getVariableLocalTyped("filename");
// Set the destination file name
String destinationFileName = DEST_FOLDER + fileMetadata.getFilename();
...
// Create an InputStream from the file's content
InputStream in = (ByteArrayInputStream)fileData;
// Create an OutputStream to copy the data to the destination folder
OutputStream out = new FileOutputStream(destinationFileName);
byte[] buf = new byte[1024];
int len;
while ((len = in.read(buf)) > 0) {
out.write(buf, 0, len);
}
in.close();
out.close();
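For reference, the same copy can be written a little more compactly with Camunda's typed file API and java.nio. This is only a sketch, assuming the variable is still called "filename" and that DEST_FOLDER is defined as above:
// Additional imports for this variant
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.camunda.bpm.engine.variable.value.FileValue;
...
// The typed value carries both the metadata and the content stream
FileValue typedFile = execution.getVariableTyped("filename");
try (InputStream in = typedFile.getValue()) {
    // Files.copy streams the uploaded content straight into the destination folder
    Files.copy(in, Paths.get(DEST_FOLDER, typedFile.getFilename()),
            StandardCopyOption.REPLACE_EXISTING);
}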
Hope this helps someone, cheers!

Related

Unable to read my config text file(Column Names) from GCS in dataflow

I have one source CSV file (without a header) as well as a header config CSV file (containing only the column names) in GCS. I also have a static table in BigQuery. I want to load the source file into the static table using the column header mapping from the config file.
I tried a different approach earlier: I kept the header and the data in the same source file and then tried to split the header from the source file and insert the data into BigQuery using the header column mapping. I realized this approach is not workable because Dataflow shuffles the data across multiple worker nodes, so I dropped it.
The code below uses hard-coded column names. I am looking for an approach that reads the column names from an external config file (I want to make my code dynamic).
package com.coe.cog;
import java.io.BufferedReader;
import java.util.*;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.PCollection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.api.services.bigquery.model.TableReference;
import com.google.api.services.bigquery.model.TableRow;
public class SampleTest {
private static final Logger LOG = LoggerFactory.getLogger(SampleTest.class);
public static TableReference getGCDSTableReference() {
TableReference ref = new TableReference();
ref.setProjectId("myownproject");
ref.setDatasetId("DS_Employee");
ref.setTableId("tLoad14");
return ref;
}
static class TransformToTable extends DoFn<String, TableRow> {
@ProcessElement
public void processElement(ProcessContext c) {
String csvSplitBy = ",";
String lineHeader = "ID,NAME,AGE,SEX"; // Hard-coded column names, but I want to read these headers from a GCS file.
String[] colmnsHeader = lineHeader.split(csvSplitBy); //Only Header array
String[] split = c.element().split(csvSplitBy); //Data section
TableRow row = new TableRow();
for (int i = 0; i < split.length; i++) {
row.set(colmnsHeader[i], split[i]);
}
c.output(row);
}
}
public interface MyOptions extends PipelineOptions {
/*
* Param
*
*/
}
public static void main(String[] args) {
MyOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);
options.setTempLocation("gs://demo-bucket-data/temp");
Pipeline p = Pipeline.create(options);
PCollection<String> lines = p.apply("Read From Storage", TextIO.read().from("gs://demo-bucket-data/Demo/Test/SourceFile_WithOutHeader.csv"));
PCollection<TableRow> rows = lines.apply("Transform To Table",ParDo.of(new TransformToTable()));
rows.apply("Write To Table",BigQueryIO.writeTableRows().to(getGCDSTableReference())
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER));
p.run();
}
}
Source File:
1,John,25,M
2,Smith,30,M
3,Josephine,20,F
Config File (Headers only):
ID,NAME,AGE,SEX
You have a couple of options:
Use a Dataflow/Beam side input to read the config/header file into some sort of collection, e.g. an ArrayList. It will be available to all workers in the cluster. You can then use the side input to dynamically assign the schema to the BigQuery table using DynamicDestinations (see the sketch after this list).
Before dropping into your Dataflow pipeline, call the GCS API directly to grab your config/header file, parse it, and then use the results to set up your pipeline.
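For the side-input option, here is a minimal sketch of how the one-line header file could be read and consumed inside the ParDo (applied to the column mapping rather than to DynamicDestinations). It reuses p and lines from the question's main(); the config-file path and the extra imports (org.apache.beam.sdk.transforms.View, org.apache.beam.sdk.values.PCollectionView) are assumptions, not part of the original code:
// Read the single-line header file and turn it into a singleton side input
PCollectionView<String> headerView = p
        .apply("Read Header", TextIO.read().from("gs://demo-bucket-data/Demo/Test/config.csv"))
        .apply("Header As Singleton", View.asSingleton());
// Use the side input inside the ParDo to map each value to its column name
PCollection<TableRow> rows = lines.apply("Transform To Table",
        ParDo.of(new DoFn<String, TableRow>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                String[] headers = c.sideInput(headerView).split(",");
                String[] values = c.element().split(",");
                TableRow row = new TableRow();
                for (int i = 0; i < values.length; i++) {
                    row.set(headers[i], values[i]);
                }
                c.output(row);
            }
        }).withSideInputs(headerView));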
Using Beam's FileSystems API to read config files from GCS is another approach.
Advantages:
No additional dependencies are needed; it's included in the Beam API.
Using GCP's client libraries directly, by contrast, can lead to dependency version issues.
Beam's FileSystems API can be used inside any transform.
Here is a snippet for reading files (imports added for completeness):
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.charset.StandardCharsets;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.MatchResult;
import com.google.common.io.CharStreams;
// filePath format: gs://bucket/file
public static String loadSchema(String filePath) {
MatchResult.Metadata metadata;
try {
metadata = FileSystems.matchSingleFileSpec(filePath); // searching
} catch (IOException e) {
throw new RuntimeException(e);
}
String schema;
try {
// reading file
schema = CharStreams.toString(
Channels.newReader(
FileSystems.open(metadata.resourceId()),
StandardCharsets.UTF_8.name()
)
);
} catch (IOException e) {
throw new RuntimeException(e);
}
// returning content as string. We can process it now.
return schema;
}
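To connect loadSchema back to the pipeline in the question, one possible wiring (just a sketch; the config path and the constructor-based hand-off are assumptions) is to load the header once at pipeline-construction time and pass the parsed column names into the DoFn:
// Load and parse the header before building the pipeline
String[] columnNames = loadSchema("gs://demo-bucket-data/Demo/Test/config.csv").trim().split(",");
// A DoFn that receives the column names through its constructor; the field is
// serialized along with the DoFn and shipped to the workers.
static class TransformToTable extends DoFn<String, TableRow> {
    private final String[] columnNames;
    TransformToTable(String[] columnNames) {
        this.columnNames = columnNames;
    }
    @ProcessElement
    public void processElement(ProcessContext c) {
        String[] values = c.element().split(",");
        TableRow row = new TableRow();
        for (int i = 0; i < values.length; i++) {
            row.set(columnNames[i], values[i]);
        }
        c.output(row);
    }
}
// In main():
PCollection<TableRow> rows = lines.apply("Transform To Table",
        ParDo.of(new TransformToTable(columnNames)));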
Disadvantages of the side-input approach:
The file's orientation changes (the content arrives as a collection of lines rather than as a file).
It's hard to parse multi-line formats such as JSON.
Side inputs work best for single-line, static values.

C++/CX - GetFileAsync throws breakpoint error

I am trying to open an XML file from my Assets folder, but unfortunately I am only able to open it by using a FileOpenPicker, which is not ideal when I have to fetch the file repeatedly without disturbing the user.
FileOpenPicker^ openPicker = ref new FileOpenPicker();
openPicker->ViewMode = PickerViewMode::List;
openPicker->SuggestedStartLocation = PickerLocationId::Desktop;
openPicker->FileTypeFilter->Append(".xml");
task<StorageFile^>(
openPicker->PickSingleFileAsync()).then([this](StorageFile^ file) {
if (nullptr != file) {
task<Streams::IRandomAccessStream^>(file->OpenAsync(FileAccessMode::Read)).then([this](Streams::IRandomAccessStream^ stream)
{
IInputStream^ deInputStream = stream->GetInputStreamAt(0);
DataReader^ reader = ref new DataReader(deInputStream);
reader->LoadAsync(stream->Size);
String^ strXml = reader->ReadString(stream->Size);
});
}
});
I am now trying to rework this code so that it loads my XML file without asking the user to choose it. I tried the following approach:
String^ xmlFile = "Assets\MyXmlFile.xml";
StorageFolder^ InstallationFolder = Windows::ApplicationModel::Package::Current->InstalledLocation;
task<StorageFile^>(
InstallationFolder->GetFileAsync(xmlFile)).then([this](StorageFile^ file) {
if (nullptr != file) {
task<Streams::IRandomAccessStream^>(file->OpenAsync(FileAccessMode::Read)).then([this](Streams::IRandomAccessStream^ stream)
{
IInputStream^ deInputStream = stream->GetInputStreamAt(0);
DataReader^ reader = ref new DataReader(deInputStream);
reader->LoadAsync(stream->Size);
String^ strXml = reader->ReadString(stream->Size);
stream->FlushAsync();
});
}
});
I think the error occurs at GetFileAsync, and I am not able to solve it, so I am asking you, the community, to help me.
Your code worked for me with one modification: the xmlFile string contains a backslash that needs to be escaped:
String^ xmlFile = "Assets\\MyXmlFile.xml";
Note also that if you just right-clicked "Assets" in your project and chose "Add new item", that item may have ended up in your root project folder (which is the default). If you want it to be deployed to the Assets subfolder it will need to physically live there on disk in the assets subdirectory, not just be in the Assets filter. (Unlike in C#, the C++ project "folders" are actually filters and do not reflect physical directory location.)

Multi-part form data upload with Akka HTTP

I'm trying to figure out how to create a multi-part form data request
with Akka HTTP (client API) but I can't find a way to express form data.
Does anyone know how to create form data that would take a file or input stream?
I guess it's a bit late now, but this example has both a client and a server.
I'll copy the relevant part:
def createEntity(file: File): Future[RequestEntity] = {
require(file.exists())
val formData =
Multipart.FormData(
Source.single(
Multipart.FormData.BodyPart(
"test",
HttpEntity(MediaTypes.`application/octet-stream`, file.length(), SynchronousFileSource(file, chunkSize = 100000)), // the chunk size here is currently critical for performance
Map("filename" -> file.getName))))
Marshal(formData).to[RequestEntity]
}
def createRequest(target: Uri, file: File): Future[HttpRequest] =
for {
e ← createEntity(file)
} yield HttpRequest(HttpMethods.POST, uri = target, entity = e)
Simplest way to achieve this would be:
val formData = Multipart.FormData.fromFile(
  "<FORM_DATA_KEY>",
  MediaTypes.`application/octet-stream`,
  file = file,
  chunkSize = 100000)
val httpRequest = HttpRequest(HttpMethods.POST, uri = target, entity = formData.toEntity)
On the first line you can also use Multipart.FormData.fromPath, which accepts a file path instead of the file object itself.

facebook c# api, video getting uploaded but its PUBLIC

I am able to upload a video with the code below, but it isn't PRIVATE. I need the video to be PRIVATE, or in other words its privacy should be SELF.
FacebookMediaObject mediaObject1 = new FacebookMediaObject
{
FileName = pbmFile.fullPath,
ContentType = Path.GetExtension(filePath)
};
byte[] fileBytes1 = System.IO.File.ReadAllBytes(filePath);
mediaObject1.SetValue(fileBytes1);
IDictionary<string, object> upload1 = new Dictionary<string, object>();
upload1.Add("image", mediaObject1);
staticGlobalConst.fbClient1.Post("/me/videos", upload1, staticGlobalConst.del1Video) as JsonObject
It throws an exception when I try adding the parameter below:
upload1.Add("privacy", "SELF");
How can I upload a private video?
Thanks
Sujit
I solved it by adding the parameter below:
upload1.Add("privacy", "{\"value\":\"SELF\"}");
That depends on what you chose for the "Visibility of app and posts" option the first time you authenticated with the app.
You can change it to Public, Friends, or Only Me from Account Settings > Apps: choose your application and change the setting.

Why does WebSharingAppDemo-CEProviderEndToEnd sample still need a client db connection after scope creation to perform sync

I'm researching a way to build an n-tiered sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasible; however, for some reason the app will only sync if the client has a live SQL db connection. Can someone explain what I'm missing and how to sync without exposing SQL to the internet?
The problem I'm experiencing is that when I provide a relational sync provider that has an open SQL connection from the client, it works fine, but when I provide a relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF service stating that the server did not receive the batch file. So what am I doing wrong?
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = hostName;
builder.IntegratedSecurity = true;
builder.InitialCatalog = "mydbname";
builder.ConnectTimeout = 1;
provider.Connection = new SqlConnection(builder.ToString());
// provider.Connection.Open(); // un-commenting this causes the code to work
//create anew scope description and add the appropriate tables to this scope
DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName);
//class to be used to provision the scope defined above
SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning();
....
The error I get occurs in this part of the WCF code:
public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData)
{
Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString);
DbSyncContext dataRetriever = changeData as DbSyncContext;
if (dataRetriever != null && dataRetriever.IsDataBatched)
{
string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString();
//Data is batched. The client should have uploaded this file to us prior to calling ApplyChanges.
//So look for it.
//The Id would be the DbSyncContext.BatchFileName which is just the batch file name without the complete path
string localBatchFileName = null;
if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName))
{
//Service has not received this file. Throw exception
throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null));
}
dataRetriever.BatchFileName = localBatchFileName;
}
Any ideas?
For the "batch file not available" issue, remove the IsOneWay=true setting from IRelationalSyncContract.UploadBatchFile. When the batch file is large, ApplyChanges can be called before the previous UploadBatchFile has fully completed.
// Replace
[OperationContract(IsOneWay = true)]
// with
[OperationContract]
void UploadBatchFile(string batchFileid, byte[] batchFile, string remotePeerId);
I suppose it's simply a bare-bones example. It demonstrates the technique but assumes you will arrange the pieces in the proper order yourself.
http://msdn.microsoft.com/en-us/library/cc807255.aspx