I am creating a web service in Java in Domino that, given a stream representing a file and its name, creates a Notes document, turns the stream into a file, and attaches it to the document. The problem is that file names with accented characters get their accented letters truncated. I have tried several alternatives to handle this, but none have had the desired effect. Has anyone ever had this problem? The code snippet where the attachment is named follows.
header.setHeaderVal("attachment; filename=\""+nomeArq+"\"");
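A general illustration, not Domino-specific code (the file name below is just a hypothetical example): the Content-Disposition value can carry a non-ASCII filename via the RFC 6266 / RFC 5987 filename* form, where the UTF-8 bytes are percent-encoded. This Python sketch only shows the shape of the header value; the same encoding would need to be applied to nomeArq in the Java service before building the header string.
# Sketch: percent-encode the UTF-8 bytes of the name and use the filename* form
from urllib.parse import quote
nome_arq = "relatório.pdf"  # hypothetical accented file name
header_val = "attachment; filename*=UTF-8''" + quote(nome_arq, safe="")
print(header_val)  # attachment; filename*=UTF-8''relat%C3%B3rio.pdf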
I'm having a Django stack process issue and am wondering if there's a way to request user input.
To start with, the user loads sample data (Oxygen, CHL, Nutrients, etc.), which typically comes from an Excel file. The user clicks a button on the webpage indicating what type of sample is being loaded and gets a dialog to choose the file to load. The webpage passes the file to Python via VueJS/Django, where Python passes the file down to the appropriate parser to read that specific sample type. The file is processed and the sample data is written to a database.
Issues (i.e. a sample ID is outside the expected range of IDs because it was keyed in wrong when the sample was taken) get passed back to the webpage, as well as written to the database, to tell the user when something happened:
(E.g. "No Bottle exists for sample with ID 495619, expected range is 495169 - 495176" or "356 is an unreasonable salinity, check data for sample 495169"). Maybe there are fifty samples without bottle IDs because the required bottle file wasn't loaded before the sample data. Generally you have one big 10L bottle with water in it; the ocean depth (pressure) and the bottle ID where the bottle was closed are in a bottle file, samples are placed into different vials with that bottle's unique ID, and the vials are run through different machines and tests to produce the sample files.
My issue occurs when the user picks a file that contains data that has already been loaded. If the filename matches the existing file the data was loaded from, I clear the data associated with that file and reload the database with the new data. Sometimes, though, data is spread over several files that were already loaded, and the uploader will merge all the files, including some that weren't uploaded, together.
A protocol for uploading data is for the uploader to append/prepend their initials onto a copy of a file if corrections were made and not to modify the original file; a chain of custody. Sometimes a file can be modified by multiple people and each person will create a copy and append/prepend their initials so people will know who all touched the data. (I don't make the rules I just work with what I have)
So we get all the way back to the parser and it's discovered that the data already exists (for a given sample ID), but the filename is different. At this point I want to ask the user: do you want to reload all the data loaded from the other file, update the existing data with the new file, or ignore the existing data and only append new data?
Is there a way for Django to make a request back to the webpage to ask the user how it should handle this data, without having to terminate the current request (the one the webpage is waiting on for a response saying the data was loaded and reporting whatever errors might have been found in the data)?
My current thoughts are to:
Ask the user before every file upload how a collision should be handled, if it happens
Or
Abort the data load and pass an error with a code back to the webpage; the error code indicates to the webpage that the user has to decide what to do. Once the user answers, the load process is restarted with the same file, but with a flag telling the parser what to do when the issue is eventually encountered again.
Nothing is written to the database until a whole file has been read, so there's no problem with aborting the process and restarting if the parser doesn't know what to do, but I feel like there might be a better way. (A rough sketch of the second option is below.)
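A minimal sketch of that abort-and-retry approach, with hypothetical names throughout (DuplicateFileData, parse_samples and collision_resolution are not from the real project): the parser raises when it hits a collision and no decision has been made yet, the view returns a distinct status the page can react to, and the page re-posts the same file with the user's choice.
from django.http import JsonResponse

class DuplicateFileData(Exception):
    # Raised when a sample ID already exists but was loaded from a different file.
    def __init__(self, sample_id, existing_file):
        self.sample_id = sample_id
        self.existing_file = existing_file

def parse_samples(uploaded_file, resolution=None):
    # Stand-in for the real parser; raises DuplicateFileData when it finds a
    # collision and no resolution ("reload", "update" or "append") was supplied.
    ...

def upload_samples(request):
    resolution = request.POST.get("collision_resolution")  # None on the first attempt
    try:
        errors = parse_samples(request.FILES["file"], resolution=resolution)
    except DuplicateFileData as exc:
        # Nothing has been written yet, so just tell the page to ask the user
        # and re-post the same file with collision_resolution filled in.
        return JsonResponse(
            {"status": "collision", "sample_id": exc.sample_id,
             "existing_file": exc.existing_file},
            status=409,
        )
    return JsonResponse({"status": "loaded", "errors": errors})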
I have two separate normalized text files that I want to train my BlazingText model on.
I am struggling to get this to work and the documentation is not helping.
Basically I need to figure out how to supply multiple files or S3 prefixes as "inputs" parameter to the sagemaker.estimator.Estimator.fit() method.
I first tried:
s3_train_data1 = 's3://{}/{}'.format(bucket, prefix1)
s3_train_data2 = 's3://{}/{}'.format(bucket, prefix2)
train_data1 = sagemaker.session.s3_input(s3_train_data1, distribution='FullyReplicated', content_type='text/plain', s3_data_type='S3Prefix')
train_data2 = sagemaker.session.s3_input(s3_train_data2, distribution='FullyReplicated', content_type='text/plain', s3_data_type='S3Prefix')
bt_model.fit(inputs={'train1': train_data1, 'train2': train_data2}, logs=True)
This doesn't work because SageMaker specifically expects the key in the inputs parameter to be "train".
So then I tried:
bt_model.fit(inputs={'train': train_data1, 'train': train_data2}, logs=True)
This trains the model only on the second dataset and ignores the first one completely (a Python dict can't have duplicate keys, so the second 'train' entry simply overwrites the first).
Now finally I tried using a Manifest file using the documentation here: https://docs.aws.amazon.com/sagemaker/latest/dg/API_S3DataSource.html
(see manifest file format under "S3Uri" section)
the documentation says the manifest file format is a JSON that looks like this example:
[
{"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2"
]
Well, I don't think this is valid JSON in the first place, but what do I know; I still gave it a try.
When I try this:
s3_train_data_manifest = 'https://s3.us-east-2.amazonaws.com/bucketpath/myfilename.manifest'
train_data_merged = sagemaker.session.s3_input(s3_train_data_manifest, distribution='FullyReplicated', content_type='text/plain', s3_data_type='ManifestFile')
data_channel_merged = {'train': train_data_merged}
bt_model.fit(inputs=data_channel_merged, logs=True)
I get an error saying:
ValueError: Error training blazingtext-2018-10-17-XX-XX-XX-XXX: Failed Reason: ClientError: Data download failed:Unable to parse manifest at s3://mybucketpath/myfilename.manifest - invalid format
I tried replacing the square brackets in my manifest file with curly braces... but I still feel the JSON file format is missing something that the documentation fails to describe correctly.
You can certainly match multiple files with the same prefix, so your first attempt could have worked as long as you organize the files in your S3 bucket to suit. For example, the prefix s3://mybucket/foo/ will match the files s3://mybucket/foo/bar/data1.txt and s3://mybucket/foo/baz/data2.txt.
However, if there is a third file in your bucket called s3://mybucket/foo/qux/data3.txt that you don't want matched (while still matching the first two), there is no way to achieve that with a single prefix. In these cases a manifest will work. So, in the above example, the manifest would simply be:
[
{"prefix": "s3://mybucket/foo/"},
"bar/data1.txt",
"baz/data2.txt"
]
(And yes, this is valid JSON: it is an array whose first element is an object with an attribute called prefix, and all subsequent elements are strings.)
Please double check your manifest (you didn't actually post it so I can't do that for you) and make sure it conforms to the above syntax.
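For reference, here is a sketch of how the manifest channel might be wired up, reusing the s3_input call from your question (the bucket and key names are placeholders, and bt_model is your existing estimator); note that the manifest is referenced with an s3:// URI.
# Sketch based on the code in the question (SageMaker Python SDK v1 style)
import sagemaker

manifest_uri = 's3://mybucket/manifests/train.manifest'  # placeholder s3:// URI
train_data_merged = sagemaker.session.s3_input(
    manifest_uri,
    distribution='FullyReplicated',
    content_type='text/plain',
    s3_data_type='ManifestFile',
)
bt_model.fit(inputs={'train': train_data_merged}, logs=True)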
If you're still stuck, please open a thread on the AWS SageMaker forums - https://forums.aws.amazon.com/forum.jspa?forumID=285 - and after you do that we can set up a PM to try and get to the bottom of this (never post your AWS account ID in a public forum like StackOverflow or even in the AWS forums).
I am looking for a tool to generate documentation from a WSDL file. I have found wsdl-viewer.xsl, which seems promising. See https://code.google.com/p/wsdl-viewer/
However, I am seeing a limitation where references to complex data types are not mapped out (or explained). For example, say we have a createSnapshot() operation that creates a snapshot and returns an object representing a snapshot. Running xsltproc(1) with wsdl-viewer.xsl, the rendered documentation has an output section that describes the output as
Output: createSnapshotOut
    parameter type createSnapshotResponse
    snapshots type Snapshot
I'd like to be able to click on "Snapshot" and see the schema definition of Snapshot.
Is this possible? Perhaps I am not using xsltproc(1) correctly. Perhaps it is not able to find the xsd files. Here are some of the relevant files I have:
SnapshotMgmntProvider.wsdl
SnapshotMgmntProviderDefinitions.xsd
SnapshotMgmntProviderTypes.xsd
Thanks
Medi
When an inbound-endpoint file is received, Mule lets us define a regex filter to determine whether it's the right file.
<file:inbound-endpoint responseTimeout="10000" doc:name="File" path="${Inbound_Path}" pollingFrequency="${PollingFrequency}" connector-ref="nameConnector">
<file:filename-regex-filter pattern="BOB\d+\.FILE" caseSensitive="true"/>
</file:inbound-endpoint>
Is there a similar function or method where we can do the same thing with the Mule Requester? We need to import a file in the middle of our flow process, but the file name is not specifically defined. See below for an example of what I am trying to do.
<mulerequester:request config-ref="muleRequesterConfig" resource="file://${Inbound_Path}/<want to add a regex to define the relative file name>?connector=nameConnector" doc:name="Mule Requester"/>
Basically, we do not know the exact file name, because the inbound file gets timestamped every time it is stored in the file system. This makes it hard to provide a specific file address to the Mule application.
Does anybody know if we can use a similar regex filter function that mirrors the capability of file:filename-regex-filter for the Mule Requester?
Update - Per Anton's response, I did the following.
Created the file:endpoint element outside of the flow.
<file:endpoint doc:name="File" name="File_Name" path="Inbound_Path" responseTimeout="10000" connector-ref="FileConnector">
<file:filename-regex-filter pattern="BOB\d+\.FILE" caseSensitive="true"/>
</file:endpoint>
Had the resource attribute reference the file:endpoint element
<mulerequester:request config-ref="muleRequesterConfig" resource="File_Name" doc:name="Mule Requester"/>
You can use a file:endpoint as the resource for mulerequester:request. Just resource="myFileEndpointName".
I'm trying out type providers in F#. I've had some success using the WsdlService provider in the following fashion:
type ec2 = WsdlService<"http://s3.amazonaws.com/ec2-downloads/ec2.wsdl">
but when I download that wsdl, rename it to .wsdlschema and supply it as a local schema according to the method specified in this example:
type ec2 = WsdlService< ServiceUri="N/A", ForceUpdate = false,
LocalSchemaFile = """C:\ec2.wsdlschema""">
Visual Studio emits an error message:
The type provider
'Microsoft.FSharp.Data.TypeProviders.DesignTime.DataProviders'
reported an error: Error: No valid input files specified. Specify
either metadata documents or assembly files
This message is wrong, since the file quite plainly is valid, as the previous example proves.
I've considered permissions issues, and I've repeated the same example from my user folder, making sure to grant full control to all users in both cases, as well as running VS as administrator.
Why does the F# compiler think the file isn't valid?
Edit #1: I have confirmed that doing the same thing doesn't work for http://gis1.usgs.gov/arcgis/services/gap/GAP_Land_Cover_NVC_Class_Landuse/MapServer?wsdl either (a USGS vegetation-related API), whereas referencing the WSDL online works fine.
Hmmm, it appears that the type provider is rather stubborn and inflexible in that it requires a true "wsdlschema" doc when using the LocalSchemaFile option. A wsdlschema document can contain multiple .wsdl and .xsd files within it, wrapped in some XML to keep them separate. I'm guessing this is some kind of standard thing in the Microsoft toolchain, but perhaps others (e.g. Amazon) don't expose stuff like this.
The first thing the TP attempts to do is unpack the wsdlschema file into its separate parts, and sadly it does the wrong thing if in fact there is no unpacking to be done. Then, when it tries to point svcutil.exe at the unpacked schema files to do the codegen, this dies with the error message you are seeing.
Workaround: Add the expected bits of XML into your file, and it will work.
<?xml version="1.0" encoding="utf-8"?>
<ServiceMetadataFiles>
<ServiceMetadataFile name="ec2.wsdl">
[body of your WSDL goes here]
</ServiceMetadataFile>
</ServiceMetadataFiles>