Different output with Mono on Linux than with Visual Studio on Win7 when calling a web service

I use Exchange Web Services to extract attachments from an Exchange mail server.
When I run the code on Linux with Mono, a certain text attachment contains some mixed-up strings,
like so:
"sam winglin vz" becomes "sainglin vz", so it is missing "m w".
I see this about 3 times in a 150 KB file: 3 bytes are missing in the Linux output compared to the Windows output.
When I extract it from Visual Studio, the text attachment is perfect.
It is like this example:
Save attachments from exchange inbox
Any idea in what direction I should look to fix this?
Code:
#r "Microsoft.Exchange.WebServices.dll"
open Microsoft
open Microsoft.Exchange.WebServices.Data
open System
open System.Net
type PgzExchangeService(url,user,password) =
let service = new ExchangeService(ExchangeVersion.Exchange2007_SP1,
TimeZoneInfo.CreateCustomTimeZone("Central Standard Time",new TimeSpan(-6, 0, 0),"(GMT-06:00) Central Time (US & Canada)","Central Standard Time"))
do
ServicePointManager.ServerCertificateValidationCallback <- ( fun _ _ _ _ -> true )
service.Url <- new Uri(url)
service.Credentials <- new WebCredentials(user, password, "domain")
member this.Service with get() = service
member this.InboxItems = this.Service.FindItems(WellKnownFolderName.Inbox, new ItemView(10))
member this.GetFileAttachments ( item : Item ) =
let emailMessage =
EmailMessage.Bind( this.Service,
item.Id,
new PropertySet(BasePropertySet.IdOnly, ItemSchema.Attachments))
item, emailMessage.Attachments |> Seq.choose (fun attachment -> match box attachment with
| :? FileAttachment as x -> Some(x) | _ -> None)
let mailAtdomain = new PgzExchangeService("https://xx.xx.XX.XX/EWS/Exchange.asmx", "user", "passw")
let printsave (item : Item ,att : seq<FileAttachment>) =
if (Seq.length att) > 0 then
printfn "%A - saving %i attachments" item.Subject (Seq.length att)
att |> Seq.iter ( fun attachment -> printfn "%A" attachment.Name
attachment.Load(#"/tmp/test/" + attachment.Name ) )
// filter so we only have items with attachements and ...
let itemsWithAttachments = mailAtdomain.InboxItems
|> Seq.map mailAtdomain.GetFileAttachments
|> Seq.iter printsave
The code doesn't run on Windows with Mono due to a bug in TimeZoneInfo.
This sample code runs on Linux but not on Windows, because of that TimeZoneInfo bug.
But this is the code that works on Linux to extract attachments.
Try CSV attachments and see if the result is the same: I lose data, about 3 bytes every so many lines.
Mail me if you need the sample CSV attachment that shows the problem.
Here is a C# version that I used for testing. Running from VS2010 it works perfectly, but on Linux with Mono the attachment has the wrong size; some bytes are missing.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Exchange.WebServices.Data;
using System.Net;

namespace Exchange_SDP_Attachment_Extracter
{
    public class PgzExchangeService
    {
        public void Extract()
        {
            ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1, TimeZoneInfo.Local);
            service.Credentials = new NetworkCredential("user", "pass", "domain");
            service.Url = new Uri("https://xx.xx.xx.xx/EWS/Exchange.asmx");
            ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
            FindItemsResults<Item> findResults = service.FindItems(WellKnownFolderName.Inbox, new ItemView(10));
            foreach (Item item in findResults.Items)
            {
                EmailMessage e = EmailMessage.Bind
                    (service,
                     item.Id,
                     new PropertySet(BasePropertySet.IdOnly, ItemSchema.Attachments));
                foreach (Attachment att in e.Attachments)
                {
                    if (att is FileAttachment)
                    {
                        FileAttachment fileAttachment = (FileAttachment)att;
                        fileAttachment.Load(@"/tmp/testsdp/" + fileAttachment.Name);
                    }
                }
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            PgzExchangeService pgz = new PgzExchangeService();
            pgz.Extract();
        }
    }
}

My suggestion would be to try examining the text attachment with a hex editor. There has to be something unusual about those three occurrences. You need to find out what those three lines have in common before any of us can recommend a course of action to you.
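To help with that, here is a minimal F# sketch (the file paths are placeholders) that compares the Windows and Mono copies of the same attachment byte by byte and reports the first offset where they diverge, so you know where to look in the hex editor:

let winBytes  = System.IO.File.ReadAllBytes(@"attachment_windows.csv")  // copy extracted on Windows
let monoBytes = System.IO.File.ReadAllBytes(@"attachment_mono.csv")     // copy extracted with Mono
printfn "windows: %d bytes, mono: %d bytes" winBytes.Length monoBytes.Length
// report the first index where the two files differ; after a dropped byte everything shifts, so start looking there
match Seq.zip winBytes monoBytes |> Seq.tryFindIndex (fun (w, m) -> w <> m) with
| Some i -> printfn "first difference at offset %d: windows 0x%02x, mono 0x%02x" i winBytes.[i] monoBytes.[i]
| None -> printfn "no difference up to the length of the shorter file"

The bytes just before that offset should show whether the dropped characters sit on a line or chunk boundary.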


How to preview mermaid graph in RStudio viewer?

Background:
According to the DiagrammeR docs, "use of an external text file with the .mmd file extension can provide the advantage of syntax coloring and previewing in the RStudio Viewer".
The result should be a rendered graph in the Viewer pane.
Problem:
In my minimal working example the graph is not rendered in the viewer panel; instead, the plain text from the mermaid.mmd file is printed (see below). How can I fix this behavior so that the chart is rendered?
mermaid.mmd:
graph LR
A-->B
Output in viewer panel:
The text inside the mermaid.mmd file is printed in the viewer panel instead of the rendered graph.
My Setup
RStudio 2022.07.2 (<- newest version)
R version 4.2.1 (2022-06-23 ucrt)
DiagrammeR version 1.0.9 (<- newest version)
knitr version 1.40 (<- newest version)
Technical Reason for the Problem
I found the problem. It's the way external .mmd files are handled in the DiagrammeR::mermaid() function.
Within the mermaid() function, htmlwidgets::createWidget(name = "DiagrammeR", x = x, width = NULL, height = NULL, package = "DiagrammeR") takes the processed input x and renders the graph. This function expects input in the format "\ngraph LR\nA-->B\n", i.e. the input starts and ends with "\n" and each line of mermaid code is also separated by "\n". But the input read from an external .mmd file (readLines("mermaid.mmd", encoding = "UTF-8", warn = FALSE)) looks like this:
"graph LR" "A-->B" (separate strings for each line of mermaid code)
Transforming the input into the required format can be done with mermaid.code <- paste0("\n", paste0(mermaid.code, collapse = "\n"), "\n")
Unfortunately this processing step is not implemented for external .mmd files in DiagrammeR::mermaid().
Solution
Build a new mermaid() function, including the required processing step.
Replace the mermaid() function within the DiagrammeR package with the new function.
# Build new mermaid()-function
mermaid.new <- function (diagram = "", ..., width = NULL, height = NULL) {
  is_connection_or_file <- inherits(diagram[1], "connection") ||
    file.exists(diagram[1])
  if (is_connection_or_file) {
    diagram <- readLines(diagram, encoding = "UTF-8", warn = FALSE)
    diagram <- paste0("\n", paste0(diagram, collapse = "\n"), "\n") # NEW LINE
  }
  else {
    if (length(diagram) > 1) {
      nosep <- grep("[;\n]", diagram)
      if (length(nosep) < length(diagram)) {
        diagram[-nosep] <- sapply(diagram[-nosep], function(c) {
          paste0(c, ";")
        })
      }
      diagram <- paste0(diagram, collapse = "")
    }
  }
  x <- list(diagram = diagram)
  htmlwidgets::createWidget(name = "DiagrammeR", x = x, width = width,
                            height = height, package = "DiagrammeR")
}
#Replace mermaid()-function in DiagrammeR-package
if(!require("R.utils")) install.packages("R.utils")
library(R.utils)
reassignInPackage(name="mermaid", pkgName="DiagrammeR", mermaid.new, keepOld=FALSE)
# Test new function
DiagrammeR::mermaid("mer.mmd")
You can preview your code simply by running it like this:
library(DiagrammeR)
DiagrammeR("
  graph LR
  A-->B
")
You should be able to see the rendered graph.

How to write a custom ppx decorator for ReScript?

I need to generate a value with a different type from the type I pass in. This is the first time I have written in an OCaml-like language; in Haskell, which I am familiar with, I would use Data.Generics for this.
As far as I understand, I need to use a decorator and a ppx. I wrote a simple example:
let recordHandler = (loc: Location.t, _recFlag: rec_flag, _t: type_declaration, fields: list(label_declaration)) => {
  let (module Builder) = Ast_builder.make(loc);
  let test = [%str
    let schema: Schema = { name: "", _type: String, properties: [] }
  ]
  let moduleExpr = Builder.pmod_structure(test);
  [%str
    module S = [%m moduleExpr]
  ]
}

let str_gen = (~loc, ~path as _, (_rec: rec_flag, t: list(type_declaration))) => {
  let t = List.hd(t)
  switch t.ptype_kind {
  | Ptype_record(fields) => recordHandler(loc, _rec, t, fields);
  | _ => Location.raise_errorf(~loc, "schema is used only for records.");
  };
};

let name = "my_schema";

let () = {
  let str_type_decl = Deriving.Generator.make_noarg(str_gen);
  Deriving.add(name, ~str_type_decl) |> Deriving.ignore;
};
And
open Ppxlib;
let _ = Driver.run_as_ppx_rewriter()
But when I use it in ReScript code:
module User = {
  @deriving(my_schema)
  type my_typ = {
    foo: int,
  };
};
I got:
schema is not supported
And I made sure I had wired it up correctly: when I changed @deriving(my_schema) to @deriving(abcd) and @deriving(sschema), I got a different error:
Ppxlib.Deriving: 'abcd' is not a supported type deriving generator.
My last experiment was to copy-paste an existing deriving library, ppx_accessor.
I copied it and renamed it to accessors_2, and I got the same kind of error as before:
accessors_2 is not supported
Also, I haven't found any "ppx rescript" examples. Can you please help me? What am I doing wrong (everything, I know)?
I have found the answer in an article:
Dropping support for custom PPXes such as ppx_deriving (the deriving
attribute is now exclusively interpreted as bs.deriving)

Testing Spring Cloud Stream with the Kafka Streams binder: using TopologyTestDriver I get the error "The class is not in the trusted packages"

I have this simple stream processor (not a consumer/producer) using the Kafka Streams binder.
@Bean
fun processFoo(): Function<KStream<FooName, FooAddress>, KStream<FooName, FooAddressPlus>> {
    return Function { input -> input.map { key, value ->
        println("\nPAYLOAD KEY: ${key.name}\n")
        println("\nPAYLOAD value: ${value.address}\n")
        val output = FooAddressPlus()
        output.address = value.address
        output.name = value.name
        output.plus = "${value.name}-${value.address}"
        KeyValue(key, output)
    }}
}
I'm trying to test it using the TopologyTestDriver:
@SpringBootTest(
    webEnvironment = SpringBootTest.WebEnvironment.NONE,
    classes = [Application::class, FooProcessor::class]
)
class FooProcessorTests {
    var testDriver: TopologyTestDriver? = null
    val INPUT_TOPIC = "input"
    val OUTPUT_TOPIC = "output"
    val inputKeySerde: Serde<FooName> = JsonSerde<FooName>()
    val inputValueSerde: Serde<FooAddress> = JsonSerde<FooAddress>()
    val outputKeySerde: Serde<FooName> = JsonSerde<FooName>()
    val outputValueSerde: Serde<FooAddressPlus> = JsonSerde<FooAddressPlus>()

    fun getStreamsConfiguration(): Properties? {
        val streamsConfiguration = Properties()
        streamsConfiguration[StreamsConfig.APPLICATION_ID_CONFIG] = "TopologyTestDriver"
        streamsConfiguration[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "dummy:1234"
        streamsConfiguration[JsonDeserializer.TRUSTED_PACKAGES] = "*"
        streamsConfiguration["spring.kafka.consumer.properties.spring.json.trusted.packages"] = "*"
        return streamsConfiguration
    }

    @Before
    fun setup() {
        val builder = StreamsBuilder()
        val input: KStream<FooName, FooAddress> = builder.stream(INPUT_TOPIC, Consumed.with(inputKeySerde, inputValueSerde))
        val processor = FooProcessor()
        val output: KStream<FooName, FooAddressPlus> = processor.processFoo().apply(input)
        output.to(OUTPUT_TOPIC, Produced.with(outputKeySerde, outputValueSerde))
        testDriver = TopologyTestDriver(builder.build(), getStreamsConfiguration())
    }

    @After
    fun tearDown() {
        try {
            testDriver!!.close()
        } catch (e: RuntimeException) {
            // https://issues.apache.org/jira/browse/KAFKA-6647 causes exception when executed in Windows, ignoring it
            // Logged stacktrace cannot be avoided
            println("Ignoring exception, test failing in Windows due this exception:" + e.localizedMessage)
        }
    }

    @org.junit.Test
    fun testOne() {
        val inputTopic: TestInputTopic<FooName, FooAddress> =
            testDriver!!.createInputTopic(INPUT_TOPIC, inputKeySerde.serializer(), inputValueSerde.serializer())
        val key = FooName()
        key.name = "sherlock"
        val value = FooAddress()
        value.name = "sherlock"
        value.address = "Baker street"
        inputTopic.pipeInput(key, value)
        val outputTopic: TestOutputTopic<FooName, FooAddressPlus> =
            testDriver!!.createOutputTopic(OUTPUT_TOPIC, outputKeySerde.deserializer(), outputValueSerde.deserializer())
        val message = outputTopic.readValue()
        assertThat(message.name).isEqualTo(key.name)
        assertThat(message.address).isEqualTo(value.address)
    }
}
When running it, I get this error on the line inputTopic.pipeInput(key, value):
The class 'package.FooAddress' is not in the trusted packages: [java.util, java.lang]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
Any ideas on how to solve this? Setting those properties in getStreamsConfiguration() is not helping. Please note that this is a stream processor, not a consumer/producer.
Thanks a lot!
When Kafka creates the Serde itself, it applies the properties by calling configure().
Since you are instantiating the Serde yourself, you need to call configure() on it passing in the map of properties.
That's how the trusted packages property gets propagated to the deserializer.
Or, you can call setTrustedPackages() on the deserializer.
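As a minimal sketch of that second option (assuming spring-kafka's JsonSerde/JsonDeserializer API, where the method for this is addTrustedPackages, and declaring the serde as JsonSerde rather than Serde so that the method is visible):

import org.springframework.kafka.support.serializer.JsonSerde

// Trust packages directly on the JSON deserializer instead of going through configure()
val inputValueSerde = JsonSerde(FooAddress::class.java)
inputValueSerde.deserializer().addTrustedPackages("*")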
So, for completeness, here's how the code looks when configuring the serde as @GaryRussell suggests:
private fun getStreamsConfiguration(): Properties? {
    // Don't set the trusted packages here since the topology test driver does not know about Spring
    val streamsConfiguration = Properties()
    streamsConfiguration[StreamsConfig.APPLICATION_ID_CONFIG] = "TopologyTestDriver"
    streamsConfiguration[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "dummy:1234"
    return streamsConfiguration
}
@Before
fun setup() {
    val builder = StreamsBuilder()
    // Set the trusted packages for all serdes
    val config = mapOf<String, String>(JsonDeserializer.TRUSTED_PACKAGES to "*")
    inputKeySerde.configure(config, true)
    inputValueSerde.configure(config, false)
    outputKeySerde.configure(config, true)
    outputValueSerde.configure(config, false)
}
And the rest of the code remains as described in the question. All credit to @GaryRussell.

Updating a nested record with new data in Elm

I have two pieces of JSON that I've successfully decoded sequentially. I would like to take the new html_fragment and update my existing html_fragment. Generally this would be simple but my data structure is giving me difficulties:
type PostDataContainer
    = PostDataContainer PostData

type alias PostData =
    { title : String
    , comments : List Comment
    }

type alias Comment =
    { comment_id : Int
    , html_fragment : String
    }

type alias CommentHtml =
    { id : Int
    , html_fragment : String
    }
I've just gotten CommentHtml and would like to update the existing html_fragment in Comment. This is what I have so far:
MergeCommentHtml commentHtmlData ->
    case commentHtmlData of
        Err err ->
            Debug.log ("Error decoding CommentHtmlData" ++ toString err)
                ( mdl, Cmd.none )

        Ok commentHtml ->
            case mdl.maybePostDataContainer of
                Just (PostDataContainer postData) ->
                    let
                        updatedCommentData =
                            -- I don't know how to calculate this?
                    in
                    ( { mdl | maybePostDataContainer = Just (PostDataContainer { postData | comments = updatedCommentData }) }, Cmd.none )
Note that commentHtml here is a List CommentHtml. Any thoughts on how to update my old comment.html_fragment with the new values in commentHtml?
Option 1:
just decode the data as it stands. When it's time to display it, arrange it appropriately via some function you write like rawJsonDataToNicerData.
Option 2:
Suppose you implement the following function:
-- given a new comment, and some PostData, return the new version of the PostData
updateData : CommentHtml -> PostData -> PostData

-- so now, assuming we can decode a CommentHtml with commentHtmlDecoder
-- we can do the following

dataUpdaterDecoder : Decoder (PostData -> PostData)
dataUpdaterDecoder =
    commentHtmlDecoder
        |> Decode.andThen (\commentHtml -> Decode.succeed (updateData commentHtml))

Now wherever we were going to decode a commentHtmlDecoder we can decode a dataUpdaterDecoder instead, and use a bunch of these to update our data.
Here is an example of a relational data decoder in action using the idea above:
https://ellie-app.com/3KWmyJmMrDsa1
Given that commentHtmlData is a List according to a comment, I think the easiest approach is to convert it to a Dict keyed by id, then map over the existing comments looking for the comment_id in the dict. If it exists, replace html_fragment, if not then return the original unmodified:
let
    commentHtmlDict =
        commentHtmlData
            |> List.map (\c -> ( c.id, c ))
            |> Dict.fromList

    updatedCommentData =
        postData.comments
            |> List.map
                (\comment ->
                    case Dict.get comment.comment_id commentHtmlDict of
                        Just commentHtml ->
                            { comment | html_fragment = commentHtml.html_fragment }

                        Nothing ->
                            comment
                )

List object properties from an instance in Jena

How can I list all the object properties associated with an instance in Jena?
For example:
A Person has an object property called "hasVehicle", which is associated with a class Vehicle.
The appropriate Jena method is OntClass.listDeclaredProperties. There are some nuances to be aware of; the Jena RDF frames how-to explains them in detail.
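As a rough sketch of that approach (the namespace, file name and individual URI below are placeholders, not taken from your ontology):

package test;

import com.hp.hpl.jena.ontology.*;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.FileManager;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class ListDeclaredPropertiesDemo
{
    // placeholder namespace and file name, for illustration only
    public static String NS = "http://example.com/ontology#";

    public static void main( String[] args ) {
        OntModel m = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM, null );
        FileManager.get().readModel( m, "ontology.owl" );

        // look up an individual, then list the properties declared for each of its classes
        Individual person = m.getIndividual( NS + "Person_01" );
        for (ExtendedIterator<OntClass> classes = person.listOntClasses( true ); classes.hasNext(); ) {
            OntClass cls = classes.next();
            for (ExtendedIterator<OntProperty> props = cls.listDeclaredProperties(); props.hasNext(); ) {
                System.out.println( "declared property: " + props.next() );
            }
        }
    }
}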
Update
OK, I've looked at your code sample, and read your description, and I'm afraid I don't understand what you want to do. What I've done is re-write your code sample so that it does something that I guess you might want, based on your description in the comment:
package test;

import com.hp.hpl.jena.ontology.*;
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.util.FileManager;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class LeandroTest
{
    public static String NS = "http://www.owl-ontologies.com/TestProject.owl#";

    public static void main( String[] args ) {
        OntModel m = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM, null );
        FileManager.get().readModel( m, "./src/main/resources/project-test.owl" );

        OntClass equipe = m.getOntClass( NS + "Equipe" );
        OntProperty nome = m.getOntProperty( NS + "nome" );

        for (ExtendedIterator<? extends OntResource> instances = equipe.listInstances(); instances.hasNext(); ) {
            OntResource equipeInstance = instances.next();
            System.out.println( "Equipe instance: " + equipeInstance.getProperty( nome ).getString() );

            // find out the resources that link to the instance
            for (StmtIterator stmts = m.listStatements( null, null, equipeInstance ); stmts.hasNext(); ) {
                Individual ind = stmts.next().getSubject().as( Individual.class );

                // show the properties of this individual
                System.out.println( "  " + ind.getURI() );
                for (StmtIterator j = ind.listProperties(); j.hasNext(); ) {
                    Statement s = j.next();
                    System.out.print( "    " + s.getPredicate().getLocalName() + " -> " );

                    if (s.getObject().isLiteral()) {
                        System.out.println( s.getLiteral().getLexicalForm() );
                    }
                    else {
                        System.out.println( s.getObject() );
                    }
                }
            }
        }
    }
}
This gives the following output, by first listing all resources of rdf:type #Equipe, then for each of those listing the resources in the model that link to that Equipe, then for each of those linked resources listing all of their RDF properties. I don't think that's a particularly useful thing to do, but hopefully it will show you some patterns for traversing RDF graphs in Jena.
Equipe instance: Erica
Equipe instance: Etiene
http://www.owl-ontologies.com/TestProject.owl#EtapaExecucao_01
EtapaExecucao_DataModificao -> 2010-03-29T10:54:05
caso_de_teste -> http://www.owl-ontologies.com/TestProject.owl#CasoDeTeste_01
EtapaExecucao_StatusTeste -> Passou
EtapaExecucao_Reprodutibilidade -> Sempre
type -> http://www.owl-ontologies.com/TestProject.owl#EtapaExecucao
EtapaExecucao_VersaoDefeitoSurgiu -> Release ICAMMH_01.00
EtapaExecucao_Severidade -> Minimo
EtapaExecucao_VersaoDefeitoCorrigiu -> Release ICAMMH_02.00
DataExecucao -> 2009-07-10T09:42:02
EtapaExecucao_StatusDoDefeito -> Nao sera corrigido
EtapaExecucao_DataSubmissao -> 2009-06-30T09:43:01
Tipos_Fases -> http://www.owl-ontologies.com/TestProject.owl#FaseTesteExecucao
EtapaExecucao_Resolucao -> Fechado
executor_do_teste -> http://www.owl-ontologies.com/TestProject.owl#Etiene
EtapaExecucao_PrioridadeCorrecao -> Normal
Equipe instance: Fabio
Equipe instance: Melis
Some general suggestions, particularly if you have any follow-up questions:
ask specific questions; it's very hard to answer a vague, unclear question;
provide runnable code if possible: you can take my code above, drop it into a development environment like Eclipse and try it out;
provide the code and data in the question, not linked off on pastebin;
take some time to reduce the code and data to the minimum necessary to show the problem: your Protégé file was over 600 lines long.