I have the following requirement:
1) A user browser fetches data points (essentially a time series) from the server and plots them on the web page.
2) The user should be able to alter (and view) the data points using the mouse, and doing so should update the plot dynamically. In this way, the user can keep altering the data points until satisfied.
3) Finally, user needs to send his "version of data points" back to the server.
I am new to web development and would like to know if there is anything out there already that I can refer to, or any set of standard tools that I can use?
Regards,
Vivek
You may have a look at the googleVis package for the R software. The package provides an interface between R and the Google Chart Tools. This can be a convenient way to generate the HTML displaying a graphic of the data.
In this document the authors of the package explain how to use googleVis along with the packages Rook and shiny. These packages allow the user to upload a data file and update the graphic. The user would have to edit the file outside the browser but you could add a form in order to edit the file through the browser and update the graphic when the data are submitted.
Please allow me to ask a rather newbie question. So far, I have been using local tools like ImageMagick or GOCR to perform the job, but that is rather old-fashioned, and I am urged to "move to Google Cloud AI".
The setup
I have a (training) data set of various documents (as JPG and PDF) of different kinds, and by certain features (like prevailing color, repetitive layout) I intend to classify them, e.g. as invoice type 1, invoice type 2, not an invoice. In a 2nd step, I would like to OCR certain predefined areas of each document and extract e.g. the address of the company sending the invoice and the date.
The architecture I am envisioning
In a modern platform as a service (PaaS), I have already set up a UI where I can upload new files. These are then stored locally in a directory under their filenames (or in a MongoDB). Meta info like the upload timestamp, user, and original file name is stored in a DB.
The newly uploaded file should then be submitted to Google Cloud, which should perform the classification step and deliver the label back to be saved in the database.
The document pages should be auto-cropped, i.e. black or white margins removed, most probably with Google Cloud as well. The parameters of the crop should be persisted in the DB.
In case it is e.g. an invoice, OCR should be performed (again by Google Cloud) on certain regions of the document, e.g. a bounding box spanning from the middle of the page to the right margin in the upper 10% of the cropped page. The results of the OCR should again be persisted locally.
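To make the OCR region concrete, here is a minimal sketch of the bounding-box arithmetic described above (the function name and pixel dimensions are made up for illustration):

```python
def ocr_region(page_width, page_height):
    """Compute the OCR bounding box described above: from the
    horizontal midpoint to the right margin, within the top 10%
    of the (already cropped) page. Returns (left, top, right,
    bottom) in pixels, matching the input units."""
    left = page_width // 2     # start at the middle of the page
    top = 0                    # from the top edge...
    right = page_width         # ...to the right margin
    bottom = page_height // 10 # upper 10% of the cropped page
    return (left, top, right, bottom)

# Example: a hypothetical 1000 x 1400 px cropped page
print(ocr_region(1000, 1400))
```

These coordinates could then be persisted alongside the crop parameters in the DB and passed to whichever OCR call you end up using.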
The problem
I seem to be missing the correct search term to figure out how to do this with Google Cloud. Is there a Google API (e.g. REST) I can use for the upload, and which gives me back the results of steps 2 to 4?
I think that your best option here is to use Document AI (REST API and Libraries).
Using Document AI, you can:
Convert images to text
Classify documents
Analyze and extract entities
Additionally, for your use case, we have a new Document AI feature that is still in preview and has limited access: the Invoice parser.
Invoice parser is similar to Form parser but for invoices instead of forms. Check out the Invoice parser page and you will see what I mean by preview and limited access.
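As a rough sketch of what calling Document AI over REST looks like, the v1 `process` method takes the file content base64-encoded in a `rawDocument` object. The project, location, and processor IDs below are placeholders you would fill in from your own GCP setup:

```python
import base64

def build_process_request(file_bytes, mime_type):
    """Build the JSON body for Document AI's v1 `process` method.
    The API expects the raw file content base64-encoded."""
    return {
        "rawDocument": {
            "content": base64.b64encode(file_bytes).decode("ascii"),
            "mimeType": mime_type,
        }
    }

# The processor endpoint has this shape (IDs are placeholders):
# POST https://{LOCATION}-documentai.googleapis.com/v1/projects/
#      {PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}:process

body = build_process_request(b"%PDF-1.4 ...", "application/pdf")
```

The response contains the extracted text and entities, which you could then persist in your DB as described in your steps.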
AFAIK, there isn't any GCP tool for image editing.
I am trying to set a default selected item in a semantic-ui-react dropdown. If I select an item from the dropdown, when I reopen the dropdown it opens on the correct item. However, this item is persisted, and when I refresh the page, the correct items are displayed on the dropdown, but it does not open on the correct item.
Please advise.
Matt, it sounds like you are only using the internal component state. Whatever your components initialize with, they will always start that same way. Your entire React application works this way. If you are expecting your data to be persistent, it needs to be stored somewhere. When you refresh you are starting over again. If the state of your application is not being put elsewhere, you lose that state every single time you refresh because the only copy of state is in your client browser.
Basically you currently only have a frontend application that is not storing data anywhere. Depending on your needs, you could do this in a lot of different ways. A REST API. A GraphQL API. One simple way to accomplish this if you are just creating a simple website would be to use a headless CMS. That will give you a database to store your application data. There are a lot of interesting ones out there that you can explore based on your needs.
I want to scrape the data from an ArcGIS map. The following map has a popup when we click the red features. How do I access that data programmatically?
Link : https://cslt.maps.arcgis.com/apps/MapSeries/index.html?appid=2c9f3e737cbf4f6faf2eb956fa26cdc5
Note: Please respect the access and use constraints of any ArcGIS Online item you access. When in doubt, don't save a copy of someone else's data.
The ArcGIS Online REST interface makes it relatively simple to get the data behind ArcGIS Online items. You need to use an environment that can make HTTP requests and parse JSON text. Most current programming languages either have these capabilities built in or have libraries available with these capabilities.
Here's a general workflow that your code could follow.
Use the app ID and the item data endpoint to see the app's JSON text:
https://www.arcgis.com/sharing/rest/content/items/2c9f3e737cbf4f6faf2eb956fa26cdc5/data
Search that text for webmap and see that the app uses the following web maps:
d2b4a98c39fd4587b99ac0878c420125
7b1af1752c3a430184fbf7a530b5ec65
c6e9d07e4c2749e4bfe23999778a3153
Look at the item data endpoint for any of those web maps:
https://www.arcgis.com/sharing/rest/content/items/d2b4a98c39fd4587b99ac0878c420125/data
The list of operationalLayers specifies the feature layer URLs from which you could harvest data. For example:
https://services2.arcgis.com/gWRYLIS16mKUskSO/arcgis/rest/services/VHR_Areas/FeatureServer/0
Then just run a query with a where of 0=0 (or whatever you want) and an outFields of *:
https://services2.arcgis.com/gWRYLIS16mKUskSO/arcgis/rest/services/VHR_Areas/FeatureServer/0/query?where=0%3D0&outFields=%2A&f=json
Use f=html instead if you want to see a human-readable request form and results.
Note that feature services have a limit of how many features you can get per request, so you will probably want to filter by geometry or attribute values. Read the documentation to learn everything you can do with feature service queries.
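The workflow above can be sketched in a few lines. This builds the same query URL shown earlier (the layer URL is the one from the example; fetching is left commented out since it requires network access):

```python
from urllib.parse import urlencode

# Layer URL taken from the web map's operationalLayers (see above).
LAYER_URL = ("https://services2.arcgis.com/gWRYLIS16mKUskSO"
             "/arcgis/rest/services/VHR_Areas/FeatureServer/0")

def build_query_url(layer_url, where="0=0", out_fields="*", fmt="json"):
    """Assemble a feature service query URL like the one shown above."""
    params = urlencode({"where": where, "outFields": out_fields, "f": fmt})
    return f"{layer_url}/query?{params}"

url = build_query_url(LAYER_URL)
# Fetching and parsing would then be, e.g.:
#   import json, urllib.request
#   features = json.load(urllib.request.urlopen(url))["features"]
```

To page past the per-request feature limit, you could add `resultOffset` and `resultRecordCount` parameters to the dict, or filter with a narrower `where` clause.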
I have a question that I can't seem to find a straightforward answer to. I am loading a roughly 20GB CSV file into Tableau Desktop to create some worksheets for future internal views. Where I am stumbling is whether to use an Extract or a Live data source. The data itself will not change, only the reports or worksheets generated based on the data. The computations within the worksheets on Tableau Desktop take forever to complete.
On to the publishing to Tableau Server. I am under the assumption that I must upload my locally stored file to the server for future use. Would it be better to find a network drive to have the Tableau Server data source point to?
In short, what is the best method to manipulate a large dataset in Tableau Desktop and present it on Tableau Server? Additionally, what regex flavor does Tableau follow? It doesn't seem to match the standards I use in Python.
Thanks!
When publishing a workbook with a file-based data source, choose the Include external files option in the publishing to server dialog box. This will eliminate the need to have the file in a networked location accessible by the server.
This approach only works if the data doesn't change. It remains static and embedded in the workbook. Under this option, if the data changes and you want your viz to reflect changes, you would need to update the data in Desktop and republish.
If I understand your requirement correctly, you have to connect/load your 20GB CSV file into Tableau Desktop for the visualizations.
Please find below the steps to do the same:
You have to manually update the data in that CSV file whenever you want to show it in Tableau (note: make sure the name of the CSV file and its columns remain the same).
After this, on opening Tableau, click "Refresh data source" to visualize the latest data present in the CSV file.
For this case, I would say the connection type (live vs. extract) won't make much difference, since the data itself doesn't change; however, I would use an extract, which, as I understand it, will update once you load the latest data.
I want a slash command to output data in a table format.
I know that I will have to set up a custom integration for this. I did that using the GET method.
I can set up my own web service on an EC2 machine, but how should I make sure that the data comes back in a table format?
Maybe something like this.
My problem is how I should present the available data in a tabular format.
It's unfortunately not possible to format Slack messages as a table in this way. You would need to resort to generating an image and referencing it in a message attachment. There is limited support for displaying simple fields and labels, but that may not quite meet your needs.
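A common workaround is to align the columns yourself and wrap the result in triple backticks, so Slack renders it in a fixed-width font. A minimal sketch (the helper name and sample rows are made up; `response_type`/`text` is the standard slash-command response shape):

```python
def monospace_table(rows):
    """Render rows as an aligned text table, wrapped in triple
    backticks so Slack displays it in a fixed-width font."""
    widths = [max(len(str(r[i])) for r in rows) for i in range(len(rows[0]))]
    lines = [
        "  ".join(str(c).ljust(w) for c, w in zip(r, widths)).rstrip()
        for r in rows
    ]
    return "```\n" + "\n".join(lines) + "\n```"

# Your web service on EC2 would return this JSON to the slash command.
payload = {
    "response_type": "in_channel",
    "text": monospace_table([
        ("Name", "Count"),
        ("foo", 12),
        ("bar", 3),
    ]),
}
```

This only lines up on desktop clients and other places where the code block is rendered in a monospace font; for anything fancier you are back to generating an image.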
We had the same problem, so we made a Slack app and made it free for the public. Please feel free to check it out: https://rendreit.digital
After installing the app to your Slack, you can do /tableit and paste in CSV data or anything you copied from a spreadsheet (Excel or Google Sheets).
It also lets you preview the rendered table before you send it to the chat.