Transmit raw vertex information to XTK?

We're using XTK to display data processed and created on a server. In our particular case, it's a parallel isocontouring application. As it currently stands, we convert to the (textual) VTK format and pass the entire (imaginary) VTK file over the wire to the client, where XTK renders it. This introduces substantial overhead, as the text format outweighs the in-memory format by a considerable amount.
Is there a recommended mechanism available for transmitting binary data directly, either through an alternate format that is well-described or by constructing XTK primitives inside the JavaScript code itself?

Parsing an X.object from JSON should be supported. So you could generate the JSON on the server side and use the X.object(jsonobject) copy constructor to safely down-cast it. This should also give the advantage that the objects can be 'webgl-ready' and do not require any client-side parsing, which should result in instant loading.
I was planning to play with that myself soon but if you get anything to work, please let us know.
Just keep in mind that you need to match the X.object structure even in JSON. The best way to see what XTK expects is to JSON.stringify a webgl-ready X.object.
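A minimal sketch of that workflow, assuming a standard XTK setup (X.renderer3D, X.cube and the onShowtime callback are the usual XTK entry points, and serverJson is a hypothetical string you fetched from your server):
var r = new X.renderer3D();
r.init();
var cube = new X.cube(); // any object will do; we just need something webgl-ready
r.add(cube);
r.onShowtime = function() {
  // after the first render the object is webgl-ready;
  // dump it to see the exact structure your server-generated JSON has to match
  console.log(JSON.stringify(cube));
  // rebuild an X.object from JSON produced on the server side
  var restored = new X.object(JSON.parse(serverJson)); // serverJson: hypothetical
  r.add(restored);
};
r.render();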

XMLHttpRequest, in its second specification (the latest one), allows cross-domain HTTP requests (but you must control the response headers on the server side).
In addition, it can send and receive ArrayBuffers, Blobs, or Documents. On the client side you can then write your own parser for that Blob or (I think it fits your case better) that ArrayBuffer, using typed-array views over the binary buffer. XMLHttpRequest only goes from client to server, but have a look at HTML5 WebSockets: they can transfer binary arrays too.
In every case you will need a parser on the client side to transform the binary data into a string or an X.object.
I hope this helps.
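For the XMLHttpRequest Level 2 route, here is a minimal sketch (the /mesh.bin endpoint and the raw float32 layout are hypothetical) of requesting binary data and viewing it through a typed array:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/mesh.bin', true); // hypothetical endpoint serving raw little-endian float32 vertices
xhr.responseType = 'arraybuffer';   // ask for binary instead of text
xhr.onload = function() {
  var vertices = new Float32Array(xhr.response); // typed-array view over the raw bytes
  console.log('received ' + (vertices.length / 3) + ' vertices');
  // from here, map the values into whatever structure X.object expects
};
xhr.send();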

Windows Media Foundation: IMFSourceReader::SetCurrentMediaType execution time issue

I'm currently working on retrieving image data from a video capturing device.
It is important for me that I have raw output data in a rather specific format, and I need a continuous data stream. Therefore I figured I would use IMFSourceReader. I pretty much understand how it works. To get the whole pipeline to work, I checked the output formats of the camera and looked at the available Media Foundation Transforms (MFTs).
The critical function here is IMFSourceReader::SetCurrentMediaType. I'd like to describe one critical behavior I discovered. If I just call the function with the parameters of my desired output format, it changes some parameters like fps or resolution, but the call succeeds. If I first call the function with a native media type that has my desired parameters but the wrong subtype (such as MJPG), and then call it again with my desired parameters and the correct subtype, the call succeeds and I end up with my correct parameters. I suspect this only works if fitting MFTs (decoders) are available.
So far I've pretty much beaten WMF into giving me what I want. The problem now is that the second call to IMFSourceReader::SetCurrentMediaType takes a long time. The duration depends heavily on the camera used, varying from 0.5s to 10s. To be honest I don't really know why it takes so long, but I suspect the calculation of the correct transformation paths and/or the initialization of the transformations is the problem. I noticed an excessive amount of loading and unloading of the same DLLs (ntasn1.dll, ncrypt.dll, igd10iumd32.dll), but loading them once myself didn't change anything.
So does anybody know this issue and have a quick fix for it?
Or does anybody know a workaround to:
Get raw image data via Media Foundation without the use of IMFSourceReader?
Select and load the transformations myself, to support the source reader call?
You basically described the way the Source Reader is supposed to work in the first place. The underlying media source has its own media types, and the reader can supply a conversion whenever it needs to fit the requested media type to the closest original one.
Video capture devices tend to expose many [native] media types (I have a webcam which enumerates 475 of them!), so if format fitting does not go well, source reader might take some time to try one conversion or another.
Note that you can disable the source reader's conversions by applying certain attributes like MF_READWRITE_DISABLE_CONVERTERS, in which case an inability to set a video format directly on the source would result in a failure.
You can also read data in the device's original format and decode/convert/process it yourself by feeding the data into one MFT or a chain of them. Typically, when you set the respective format on the source reader, the source reader manages the MFTs for you. If you prefer, however, you can do it yourself too. Unfortunately you cannot build a chain of MFTs for the source reader to manage: either you leave it to the source reader completely, or you set a native media type, read the data in its original format from the reader, and then manage the MFTs on your side via IMFTransform::ProcessInput, IMFTransform::ProcessOutput and friends. This is not as easy as using the source reader alone, but it is doable.
Since VuVirt doesn't want to write an answer, I'd like to add one for him and for everybody who has the same issue.
Under some conditions the call to IMFSourceReader::SetCurrentMediaType takes a long time: when the target format is some flavor of RGB and is not natively available. To get rid of the delay, I adjusted my image pipeline so it can interpret YUV (YUY2). I still have no idea why this is the case, but it is a working workaround for me. I don't know of any alternative to speed the call up.
Additional hint: I've found that there are usually several IMFTransforms available to decode the natively available formats to YUY2. So, if you are able to use YUY2, you are safe. NV12 is another working alternative, and there are probably more.
Thanks for your answer anyway.

Are there tools for generating the client's model from OData metadata?

Back in the good old days of Flex (anyone?), Flash Builder provided a tool for generating the client's model based on the server model. Is there something similar for generating, say, an Ember app's model based on the OData metadata?
The datajs documentation does mention the subject. Although the reference for OData.read used in the sample doesn't explicitly say that it interprets the metadata somehow, it seems implied. You'll have to verify that.
It does take an optional metadata object, however, suggesting the library has a formal representation for metadata -- presumably obtainable via OData.read. Documentation seems non-existent, so I don't know what that looks like.
From there, you should be able to further transform the model to something suitable for Ember.
(datajs is a low-level javascript library that implements client-side OData operations.)
I also know that JayStack provides JaySvcUtil, a command-line .NET program that extracts the metadata. The output format is JavaScript code, though the model it uses is specific to JayData. Still, you may be able to work from there.
As mentioned by Maya, Microsoft provides the OData Client Code Generator, which generates .NET proxies. Might be more difficult to transform that.
If none of these work for you (which is actually likely), you can always parse the $metadata resource yourself -- I believe it always uses an XML representation in all current versions of OData.
If you need to do it dynamically in the browser, use DOMParser or XMLHttpRequest (whose responseXML gives you an already-parsed document).
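As a rough sketch (the service URL is hypothetical; the element names follow the CSDL schema), fetching $metadata and listing the entity types in the browser could look like this:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.org/odata/$metadata', true); // hypothetical service root
xhr.onload = function() {
  var doc = new DOMParser().parseFromString(xhr.responseText, 'application/xml');
  // CSDL elements live in a versioned EDM namespace, so match on local name
  var types = doc.getElementsByTagNameNS('*', 'EntityType');
  for (var i = 0; i < types.length; i++) {
    var props = types[i].getElementsByTagNameNS('*', 'Property');
    console.log(types[i].getAttribute('Name') + ': ' + props.length + ' properties');
  }
};
xhr.send();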
If you can do it statically, then by all means do so -- it's simply best for performance. In this case, you can use whatever language and runtime tools you want to fetch, parse, transform and re-serialize the model.
The format (CSDL) is specified as part of OData, separately for v4 and v3.
Finally, check out this list, something new may appear that better fits your needs.
There are two suggestions which may help you.
1. OData provides a client code generator that generates a client-side proxy class. You just need to pass the metadata URL, and .NET client code will be generated for you. You can follow this blog post:
http://blogs.msdn.com/b/odatateam/archive/2014/03/11/how-to-use-odata-client-code-generator-to-generate-client-side-proxy-class.aspx
2. If by "the model" you mean an EdmModel, you can simply deserialize $metadata. The OData reader can deserialize $metadata into an IEdmModel, which can be used on the client side. The following is sample code:
HttpWebRequestMessage message = new HttpWebRequestMessage(new Uri(ServiceBaseUri.AbsoluteUri + "$metadata", UriKind.Absolute));
message.SetHeader("Accept", MimeTypes.ApplicationXml);
using (var messageReader = new ODataMessageReader(message.GetResponse()))
{
    Model = messageReader.ReadMetadataDocument();
}

Difference between JSON and RSS?

What is the difference between RSS and JSON?
My understanding is that both are formats for supplying data (feed info).
I want to know the advantages and disadvantages of using each, and how the two compare in performance.
Which one is better for Android?
RSS (Rich Site Summary) and JSON (JavaScript Object Notation) are machine-readable data formats. Web publishers offer these feeds so their content is easily accessible for re-use.
The difference between RSS and JSON really lies in how they are parsed. Although they are both strings (RSS is essentially just plain-text XML), JSON is far lighter-weight than RSS. Even though RSS is plain text, it still has to be parsed and traversed as a DOM/ElementTree, similar to how one would read raw HTML data. As you can imagine, this can be a big pain. JSON is a string that can be easily evaluated into a JavaScript object and traversed natively.
Another big advantage of JSON over RSS is that you can read it from a remote domain using JSONP, whereas cross-domain requests for RSS are blocked by the browser's same-origin policy. This means you would have to use a server-side language (e.g. PHP/Ruby/Python) to download the feed as a proxy and then parse it.
Source.
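As a rough illustration of the parsing difference (jsonText and rssText stand in for feed bodies you have already fetched; the JSON field names are hypothetical):
// JSON: one call and you have a plain JavaScript object to traverse
var feed = JSON.parse(jsonText);
console.log(feed.items[0].title);
// RSS: parse the XML, then walk the resulting DOM tree
var doc = new DOMParser().parseFromString(rssText, 'application/xml');
var items = doc.getElementsByTagName('item');
console.log(items[0].getElementsByTagName('title')[0].textContent);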
In the browser, JavaScript can't read RSS feeds from remote sites (the same-origin policy blocks the request), so you're limited to your own domain. JSON, however, can be consumed cross-domain via JSONP. That's the biggest difference.
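A minimal JSONP sketch (the URL and callback parameter name are hypothetical, and the feed's server must support JSONP):
function handleFeed(feed) {
  // the remote server wraps its JSON in a call to this function
  console.log(feed.items.length + ' items received');
}
var s = document.createElement('script');
s.src = 'https://example.org/feed?callback=handleFeed'; // hypothetical JSONP endpoint
document.head.appendChild(s);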

How to send/receive XML data with sockets in Qt using string?

I have a Qt TCP Server and Client program which can interact with each other. The Server can send some function-generated data to the socket using QTextStream, and the Client reads the data from the socket using a simple readAll() and displays it in a QTextEdit.
Now my data on the Server side is huge (around 7000+ samples) and I need it to appear on the Client side instantaneously. I learned that using XML would help in my case, so I made a Qt XML Server that generates all the XML data into a .xml file. I read the .xml file on the Client side and can display its contents; I used the DOM method for parsing. But the data is displayed only after all 7000+ samples have been generated on the Server side.
I need clarifications on these questions:
How do I write each element of the XML on the Server side into a String and send it through the socket? I learned that tagName() can help me, but I have not been able to figure out how.
Is there any way other than the String method to get a single element generated on the Server side to appear on the Client side?
PS: I am a newbie, forgive my ignorance. Thank you.
Most DOM XML parsers require a complete, well-formed XML document before they'll do anything with it. That's precisely what you see: your data is processed only after all of the samples have been received.
You need to use an incremental parser that doesn't care that the XML document isn't complete yet (in Qt, QXmlStreamReader can parse incrementally).
On the other hand: if you're not requiring XML for interoperability with 3rd party systems, you're probably wasting a lot of resources by using it. I don't know where you've "learned" that XML will "help in your case". To me it's not learning, it's just following the crowd without understanding what's going on. Is your requirement to use XML or to move the data around? Moving data around has been a well understood problem for decades. Computers "speak" binary. No need to work around it, you know. If all you need is to move around some numbers, use QDataStream and be done with it. It'll be two orders of magnitude faster than the fastest XML parsers, you'll transmit an order of magnitude less data, and everyone will live happily ever after*.
*living happily ever after not guaranteed, individual results may vary.

Should output be encoded at the API or client level?

We are moving our Web app architecture to being microservice based. We have an internal debate as to whether a REST API that provides content (in JSON, let's say) should encode that content to make it safe, or whether the consumers that take the content and display it (in HTML, for example, or otherwise use it) should be responsible for the encoding. The use case is preventing XSS attacks and the like.
The provider stance is "Well, we can't know how to encode it for everyone, or how you're going to use the content, so of course the consumers should encode the content."
The consumer stance is "There is one provider and multiple consumers, so it's more secure to do it once in the providing API than to hope that every consumer does it."
Are there any generally accepted best practices on this and why?
As a rule, data passing through "internal" processes (whatever that might mean to you) should be stored or encoded in whatever "internal" format makes sense. The format chosen is typically designed to minimize encoding/decoding steps and to prevent data loss.
Then, upon output, data is encoded using whatever output format makes sense. Preventing data loss is important, but proper escaping and formatting are also key here.
So for example, with internal APIs, data in a binary format may be sufficient. But when you output JSON or HTML or XML or PDF, you have to encode and escape your data appropriately for that output format.
The important point here is that different output formats have different concepts of "safe". What's "safe" for HTML may not be safe for JSON, and what's safe for JSON may not be safe for SQL. Data is encoded upon output specifically so that you can use the proper encoding for the task. You cannot assume that this step is done for you ahead of time, nor should you put your output function in the position to determine whether or not encoding must be done. If you stick with the rule: "output function ALWAYS encodes for safety", then you will never have to worry about data injection attacks.
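To make the "output function always encodes, per format" rule concrete, here is a minimal JavaScript sketch (the escapeHtml helper is hypothetical, not from any particular library) that encodes the same internal value differently depending on the output format:
// raw value as stored internally -- no encoding applied yet
var comment = '<script>alert(1)</script> & "quotes"';
// hypothetical helper: escape for an HTML context
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#39;');
}
var htmlSafe = escapeHtml(comment);                   // safe to interpolate into HTML
var jsonSafe = JSON.stringify({ comment: comment });  // the JSON serializer handles its own escaping
// same internal data, two different "safe" forms -- one per output format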
I would say that the two important points are the following:
The encoding used by the provider MUST be specified with extreme clarity and precision in a reference document, so that all consumer implementors can know what to expect.
Whatever default encoding is used by the provider MUST keep all needed information, i.e. still be amenable to transcoding by any consumer who would wish to do it.
If you follow these two rules then you will have done 95% of the job for reliability and security.
As for your specific question, a good practice is a middle ground: the provider applies a "generic" encoding by default, but consumers can optionally ask for a specific encoding, which the provider may then apply. This allows the provider to support a number of dumb, lightweight clients of possibly different kinds, and it can be extended later with extra encodings without breaking the API.
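As a rough illustration of that middle ground (the endpoint and media types are hypothetical), a consumer could ask the provider for a specific representation while the provider falls back to its generic default:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/articles/42', true); // hypothetical endpoint
// without this header the provider would apply its generic default encoding;
// a consumer that needs a specific one asks for it explicitly
xhr.setRequestHeader('Accept', 'application/json');
xhr.onload = function() {
  console.log(JSON.parse(xhr.responseText));
};
xhr.send();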
I firmly believe it is both the consumer and the provider that need to do their part in being good citizens in the security space.
As the provider, I want to make sure I deliver a secure product. I don't need to know the context in which my client is going to use my product; all I need to know is how I am going to deliver it. If my delivery is in JSON, then I can use that context to escape my data before sending it off, and similarly for XML, plain text, etc. Furthermore, there are transport methods that already aid in security; JSONP is one such delivery method, helping to ensure the payload is consumed appropriately.
As the consumer (and by the way, in our environment no one is the final consumer; we are all providers to the final end client, mostly end users via a web browser), we also have to secure the data at this end. I would never trust a black-box API to do this job for me; I would always make a point of ensuring a secure payload. There are many tools out there (the ESAPI project from OWASP comes to mind) that will aid in sanitizing data by context. Remember that you are eventually sending this data on to the end user (browser), and if something is awry you won't be able to pass the buck: your service will be viewed as the vulnerable one regardless of where the flaw lies. Additionally, as the consumer, you may not always be able to rely on the black-box provider to fix their flaws in a timely fashion. What if their support is lacking, or they have higher priorities? Does that mean you continue to serve a known flaw to your end users?
Security is about layers, and having safeguards at the source and end-points is always preferable.