Does GraphDB's Google Charts integration send my data anywhere besides my own server or client? - google-visualization

After running a SPARQL query in the GraphDB workbench, one can make some nice visualizations with Google Charts (in addition to an unattributed "Pivot Table" tab).
I have hospital records that are de-identified, but I still need to be as conservative as possible.
If I use either the Google Charts or Pivot Table tab offered in GraphDB, will the results of my query be sent to any computer besides my server and my client, in order to create the visualization?
If you don't know the definitive answer but can suggest a method for doing my own research, I would certainly try it on my own. Maybe something with my browser's developer tools? I'm hoping to avoid using a packet sniffer application.

I doubt that it is sending your data to some other server to render the chart or the pivot table. Google Charts depends on a .js library being loaded (potentially from a remote server), but the rendering is done locally, on the client machine.
However, to make absolutely sure for yourself, you can do the following on your server:
1. Load some test data (not your patient data, e.g. the example data that ships with GraphDB) and draw something with Google Charts. This ensures all the .js libraries are loaded.
2. Disconnect from the network. Make sure both LAN and Wi-Fi are disabled.
3. Now draw a different chart (so that a cached chart is not simply re-displayed). If the chart can still be drawn, no external server is needed to render it.
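If you would rather check through the browser's developer tools instead of going offline, the Network tab will show every request the page makes while a chart is drawn. As a rough console sketch of the same idea (assumption: you paste it into the DevTools console of the workbench tab right after drawing a chart; resource-timing entries cover loaded scripts as well as XHR/fetch requests made by the page):

```typescript
// List every distinct origin this page has contacted since it was loaded.
const origins = new Set(
  performance
    .getEntriesByType("resource")
    .map((entry) => new URL(entry.name).origin)
);
// If the reasoning above holds, this should contain only your GraphDB server
// and, at most, the host the Google Charts loader .js was fetched from.
console.log([...origins]);
```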

Related

Where/how to store frequently updated data feeding shiny apps?

I built a Shiny app which works with data that needs frequent updates. The app doesn't change, just the data. At the beginning, I used to update my data locally and republish my app every time the data had been updated, which I quickly found quite annoying.
I then started to store my datasets online (on ArcGIS Online, for various reasons) so that I wouldn't need to republish my Shiny app anymore, and would just need to handle the process of data updates.
The problem is that my app is quite slow as the datasets are very big.
Now I would like to expose my datasets as APIs so that the requests coming from Shiny could be more targeted.
But I don't really know how to do that. Handling the update of the datasets on ArcGIS Online through an R script was fine, but when updating the same datasets as a hosted feature service, I can't make it work.
Would anyone have an idea?
Or, as a more general question: if I move away from ArcGIS Online storage, what would be the best way to store data that needs frequent updates and that feeds Shiny apps?
You can look into caching the data using the 'pins' package.

PowerBI Web Embed Has Mixed Refresh Data

My organization recently published a Power BI dashboard via Publish to Web and an embed code. We have configured a daily refresh via a gateway running on a virtual machine that's always on. The data refreshes automatically daily. This is all successful and works well.
The issue we are running into is that the data seems to update incrementally on the embedded version. For example, data in one tab will update to the most current data, while changing a slicer selection will continue to display the previous day's data.
This is incredibly confusing, especially as it's a public-facing dashboard.
Is there a way to resolve this?
Thanks!

How to configure Sitecore processing server?

I just installed Sitecore Experience Platform and configured it according to the Sitecore scaling recommendations for processing servers.
But I want to know the following things:
1. How can I use the Sitecore processing server?
2. How can I check whether the processing server is working fine?
3. How is the collection DB data processed and sent to the reporting server?
The processing server is a piece of the whole analytics (xDB) part of the Sitecore solution. More info can be found here.
Snippet:
"The processing and aggregation component extracts information from
captured, raw analytics data and transforms it into a form suitable
for use in reporting applications. It also performs specific tasks on
the collection database that involve mass updates.
You implement processing and aggregation on a Sitecore application
server connected to both the collection and reporting databases. A
processing server can run independently on a dedicated server, or on
the same server together with other Sitecore components. By
implementing multiple processing or aggregation servers, it is
possible to achieve higher performance on high-traffic solutions."
In short: the processing server aggregates the data in Mongo and processes it into the reporting database. This can be put on a separate server in order to spare resources on your other servers. I'm not quite sure what it all does behind the scenes or how to check exactly and only that part of the process, but you could check the reporting tools in the Sitecore backend, like Experience Analytics. If those are working, you probably are fine. Also, check the logs on the processing server - that will give you an indication of what it is doing and whether any errors occur.

Power BI Desktop vs Web Client

What is the difference between the Power BI Desktop client and the web client? Both seem to have the same features. What can the Desktop client do that the web client cannot?
I'm not going to be exhaustive, since there are a ton of features in both experiences. Power BI Desktop is intended as a tool for analysts to work with data. It includes data load, mashup, data modeling, and reporting capabilities. You can create models with relationships, calculated columns, and DAX measures, and you can create crazy transforms to manipulate the data into good shape or merge data from multiple sources into a single data model.
The web version of reports really focuses on the reporting piece. If someone else is doing all the data modeling for you, then the web reporting UI is pretty comprehensive. If you need to do the data modeling yourself, then Desktop is the way to go.
Desktop does have the added benefit of a file you can save or archive. It doesn't support DirectQuery sources or push datasets like the web reporting feature does, so there are at least some limitations. Which you use really depends on the types of problems you're trying to overcome.

How to load ~100 numpy arrays into a Django-powered webpage

Well, these arrays are actually pixel data from DICOM images. When I just send them as an array (size 100) of pixel arrays (each about 1 MB), it quite obviously overloads the browser. Being a newbie to programming, I would appreciate being pointed in the right direction to start working on economically loading large files, i.e. image stacks, into the browser window, preferably dynamically. Apologies if the query is not clear.
Hmmm... So your application is both at the same time? DICOM server and DICOM viewer?
The server is meant to respond to query-retrieve SOP classes and deliver images; the viewer will be the system sending those requests.
DICOM concepts would say your server should just be serving the images, the viewer is displaying them (including local file management, ...).
That would end up with a fat client (e.g. JavaScript if you want the viewer to be in a web browser), otherwise you won't be able to interact with images at the speed you would like (smooth scrolling, maybe 3D reconstruction, etc.).
If you just want to have some image pre-view capabilities on your server, I would suggest making a page with ~10-16 thumbnail images, one view for one or two full-size images, and loading them per request if a user wants to preview an image series or a single image. With that, you reduce load time significantly and your user is able to pre-select the images prior to sending a full request.
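To make that concrete, here is a rough client-side sketch. The endpoints (/thumbnails/, /thumbnail/<id>/, /image/<id>/) and the viewer element are made-up names for illustration; on the Django side they would map to views that return a small rendering per instance instead of the raw pixel arrays.

```typescript
// Hypothetical endpoints: /thumbnails/ returns a JSON list of instance ids,
// /thumbnail/<id>/ a small preview image, /image/<id>/ the full-size rendering.
async function showThumbnails(container: HTMLElement): Promise<void> {
  const resp = await fetch("/thumbnails/");
  const ids: string[] = await resp.json();

  // Render only a page of ~16 previews; no full-size data is downloaded yet.
  for (const id of ids.slice(0, 16)) {
    const img = document.createElement("img");
    img.src = `/thumbnail/${id}/`;
    img.onclick = () => showFullImage(id); // full image only on explicit request
    container.appendChild(img);
  }
}

function showFullImage(id: string): void {
  // A single <img id="viewer"> element is reused for the one full-size image on screen.
  const viewer = document.getElementById("viewer") as HTMLImageElement;
  viewer.src = `/image/${id}/`;
}
```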
For the full viewer, I would consider a local solution on the client. If it should be web-based or not (or maybe mixed) depends on your further applications.
In past projects, we used existing open source DICOM servers like dcm4che for handling the server part and just pulled our data from there. But for a teaching system, it might be interesting to build the server on your own as well, to show all parts of the concept.
Feel free to comment, I'll try to update the post to answer questions.
I had exactly this same problem trying to develop a multi-frame DICOM viewer using just plain HTML5.
In order to be able to load such a big amount of imaging data in the browser, I decided to try using some web storage technology on the client side. In particular, I was considering doing some initial tests with IndexedDB.
Sadly, I am currently involved in other tasks, so the web viewer development is now stopped (again!) and these tests are still undone.
Anyway, I would try to do a proof of concept with IndexedDB.
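For what it's worth, a minimal sketch of such a proof of concept might look like the following. The database name, store name, and the idea of keying frames by a series/frame id are assumptions, not something taken from an existing viewer.

```typescript
// Open (or create) a small IndexedDB database with one object store for pixel frames.
function openFrameDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("dicom-frames", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("frames");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Store one frame's pixel data under a key such as "series1/frame42".
async function putFrame(key: string, pixels: ArrayBuffer): Promise<void> {
  const db = await openFrameDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction("frames", "readwrite");
    tx.objectStore("frames").put(pixels, key);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// Read a frame back when the viewer needs it again.
async function getFrame(key: string): Promise<ArrayBuffer | undefined> {
  const db = await openFrameDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction("frames").objectStore("frames").get(key);
    req.onsuccess = () => resolve(req.result as ArrayBuffer | undefined);
    req.onerror = () => reject(req.error);
  });
}
```

Whether IndexedDB is worth it depends on how much pixel data you need to keep around between views; for a handful of frames, keeping them in memory is simpler.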