Is there a computer vision API (not necessarily Google) where I can specify which label I want to query for on an image?
Everything I've looked at so far (AWS, Azure & GCP) only gives a method that returns a set of labels chosen by the service, but I want to send an image with e.g. the label "dog" and be sure that I will get a response that gives me a rating of that image for "dog".
The Vision API label detection feature supports the detection of broad sets of categories within an image; however, it is not currently possible to specify the particular label that you want to validate. Given this, a possible workaround is to perform the Vision API request and then go through the response content to determine whether the label you are querying for was detected in the image sent to the service.
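For example, a minimal sketch of that workaround with the Python client library (assuming google-cloud-vision 2.x; the file name and target label are placeholders):

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load a local image; a GCS URI could be used instead via an ImageSource.
with open("my_image.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

target = "dog"  # the label you want to validate
match = next(
    (label for label in response.label_annotations
     if label.description.lower() == target),
    None,
)
if match:
    print(f"'{target}' detected with score {match.score:.2f}")
else:
    print(f"'{target}' was not among the labels returned by the service")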
In case this feature doesn't cover your current needs, you can use the Send Feedback button, located at the lower left and upper right corners of the service's public documentation, as well as take a look at the Issue Tracker tool, to raise a Vision API feature request and notify Google about this desired functionality.
I am a new Cloud Data Fusion user and have run into a problem I can't find a solution for.
I have a table in BQ with ~150 rows of latitude and longitude points. For each row, I want to pass the lat and lng into an HTTP POST request to get a result from the TravelTime API. Ultimately I want a table with all of my original rows plus a column containing the response for each one.
Where I am stuck is that so far I have only been able to hard-code the body of the POST request into the HTTP source plugin and successfully write the response to a file in GCS. However, I expect the rows to change over time, so I would like to dynamically generate the POST request body from my BQ data and pass it in.
Is this possible with Data Fusion? Is this an advisable approach? Or is there a better way?
As @Albert Shau and @user3750486 agreed in the comments:
There is no out-of-the-box way to pass data from BQ rows dynamically in a POST HTTP request.
A possible workaround is to have an HTTP transform plugin that sits in the middle of the pipeline and can be configured to make calls based on the input data. Then you would have a BQ source followed by that plugin followed by the GCS sink. I think your best bet would be to write a custom transform.
This can be done by following the link that @Albert Shau provided, or by writing custom code using GCP's Cloud Functions, as the OP did.
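For reference, a rough sketch of the Cloud Function approach (Python runtime). The BigQuery table, the TravelTime endpoint, the request body and the auth headers below are placeholders, so check the TravelTime API docs for the exact request format:

import json

import requests
from google.cloud import bigquery, storage


def enrich_points(request):
    # HTTP-triggered Cloud Function: read lat/lng rows from BQ, call the
    # TravelTime API for each row, and write the responses to a GCS file.
    bq = bigquery.Client()
    rows = bq.query(
        "SELECT id, lat, lng FROM `my-project.my_dataset.points`"  # hypothetical table
    ).result()

    results = []
    for row in rows:
        resp = requests.post(
            "https://api.traveltimeapp.com/v4/time-filter",  # assumed endpoint
            json={"lat": row.lat, "lng": row.lng},            # placeholder body
            headers={
                "X-Application-Id": "APP_ID",  # assumed auth headers
                "X-Api-Key": "API_KEY",
            },
        )
        results.append({"id": row.id, "response": resp.json()})

    # Persist the responses as newline-delimited JSON in GCS.
    blob = storage.Client().bucket("my-bucket").blob("traveltime/responses.json")
    blob.upload_from_string("\n".join(json.dumps(r) for r in results))
    return f"processed {len(results)} rows"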
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.
I want to create a classifier, and I do not like Google's browser Labeling Service. Is there a tool similar to VoTT, or some code, that I can use to import my VoTT-labeled data into Google AutoML?
The Google Labeling Service looks something like this: it is very slow at loading images and inefficient; it literally has a white labeling cursor, and I have a light background in my images, as seen in the image here.
On the other hand, VoTT is much better in every way. So is there a way for me to use VoTT and import the labeled CSV into Google's Cloud AutoML?
I don't think that it is currently possible to import already-labeled data from other apps (like VoTT).
At the moment there are 3 ways to label images in AutoML Vision. They are described in Annotating imported training images:
Provide bounding boxes with labels for your training images via labeled bounding boxes in your .csv import file
In the CSV file you would need to provide the GCS URL and the label(s):
Labeled: gs://my-storage-bucket-vcm/flowers/images/img100.jpg,daisy
Multi-label: gs://my-storage-bucket-vcm/flowers/images/img384.jpg,dandelion,tulip,rose
Assigned to a set: TEST,gs://my-storage-bucket-vcm/flowers/images/img805.jpg,daisy
More details can be found here.
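As an illustration, importing such a CSV into an existing AutoML Vision dataset could look roughly like this with the Python client (assuming google-cloud-automl 2.x; the project, region, dataset ID and CSV path are placeholders):

from google.cloud import automl

client = automl.AutoMlClient()

# Placeholders: your project, region and dataset ID
dataset_full_id = client.dataset_path("my-project", "us-central1", "ICN1234567890123456789")

gcs_source = automl.GcsSource(
    input_uris=["gs://my-storage-bucket-vcm/flowers/all_data.csv"]
)
input_config = automl.InputConfig(gcs_source=gcs_source)

# Import is a long-running operation; result() blocks until it finishes.
operation = client.import_data(name=dataset_full_id, input_config=input_config)
print("Importing data, this can take a while...")
operation.result()
print("Data imported.")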
Provide unannotated images in your .csv import file and use the UI to provide image annotations
Not labeled: gs://my-storage-bucket-vcm/flowers/images/img403.jpg
However, you will later need to label them using the UI, otherwise they will be ignored:
AutoML Vision ignores items without a category label.
Request manual image annotation with Google's Human Labeling service
This option involves human labelers, and you would need to provide additional information such as the dataset, a label set and instructions for the labelers.
In the documentation you can also find a note that the API does not currently support any method for labeling:
The AutoML API does not currently include methods for labeling.
However, you can propose a feature request via the Issue Tracker to add additional import methods from other apps or to enable labeling via the API.
Please allow me to ask a rather newbie question. So far, I have been using local tools like ImageMagick or GOCR to perform the job, but that is rather old-fashioned, and I am urged to "move to google cloud AI".
The setup
I have a (training) data set of various documents (as JPG and PDF) of different kinds, and by certain features (like prevailing color, repetitive layout) I intend to classify them, e.g. as invoice type 1, invoice type 2, not an invoice. In a 2nd step, I would like to OCR certain predefined areas of each document and extract e.g. the address of the company sending the invoice and the date.
The architecture I am envisioning
In a modern platform as a service (PaaS), I have already set up a UI where I can upload new files. These are then stored locally in a directory with filenames (or in a MongoDB). Meta info like upload timestamp, user and original file name is stored in a DB.
The newly uploaded file should then be submitted to Google Cloud, which should do the classification step and deliver back the label to be saved in the database.
The document pages should be auto-cropped, i.e. black or white margins are removed, most probably with Google Cloud as well. The parameters of the crop should be persisted in the DB.
In case it is e.g. an invoice, OCR should be performed (again by Google Cloud) for certain regions of the document, e.g. a bounding box spanning from the middle of the page to the right margin in the upper 10% of the cropped page. The results of the OCR should again be persisted locally.
The problem
I seem to be missing the correct search term to figure out how to do this with Google Cloud. Is there a Google API (e.g. REST) that I can use for the upload and which gives me back the results of steps 2 to 4?
I think that your best option here is to use Document AI (REST API and Libraries).
Using Document AI, you can:
Convert images to text
Classify documents
Analyze and extract entities
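A minimal sketch of calling Document AI from Python (assuming the google-cloud-documentai client and that you have already created a processor; the project, location, processor ID and file name are placeholders):

from google.cloud import documentai_v1 as documentai

client = documentai.DocumentProcessorServiceClient()

# Placeholders: your project, processor location ("us" or "eu") and processor ID
name = client.processor_path("my-project", "us", "my-processor-id")

with open("invoice.pdf", "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

request = documentai.ProcessRequest(name=name, raw_document=raw_document)
result = client.process_document(request=request)
document = result.document

print(document.text[:500])  # OCR'd text of the whole document

# Specialized processors (e.g. the invoice parser) also return structured entities.
for entity in document.entities:
    print(entity.type_, entity.mention_text, entity.confidence)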
Additionally, for your use case, we have a new Document AI feature that is still in preview and has limited access: the Invoice parser.
Invoice parser is similar to Form parser but for invoices instead of forms. Check out the Invoice parser page and you will see what I mean by preview and limited access.
AFAIK, there isn't any GCP tool for image editing.
I want to scrape the data from an ArcGIS map. The following map has a popup when we click the red features. How do I access that data programmatically?
Link : https://cslt.maps.arcgis.com/apps/MapSeries/index.html?appid=2c9f3e737cbf4f6faf2eb956fa26cdc5
Note: Please respect the access and use constraints of any ArcGIS Online item you access. When in doubt, don't save a copy of someone else's data.
The ArcGIS Online REST interface makes it relatively simple to get the data behind ArcGIS Online items. You need to use an environment that can make HTTP requests and parse JSON text. Most current programming languages either have these capabilities built in or have libraries available with these capabilities.
Here's a general workflow that your code could follow.
Use the app ID and the item data endpoint to see the app's JSON text:
https://www.arcgis.com/sharing/rest/content/items/2c9f3e737cbf4f6faf2eb956fa26cdc5/data
Search that text for webmap and see that the app uses the following web maps:
d2b4a98c39fd4587b99ac0878c420125
7b1af1752c3a430184fbf7a530b5ec65
c6e9d07e4c2749e4bfe23999778a3153
Look at the item data endpoint for any of those web maps:
https://www.arcgis.com/sharing/rest/content/items/d2b4a98c39fd4587b99ac0878c420125/data
The list of operationalLayers specifies the feature layer URLs from which you could harvest data. For example:
https://services2.arcgis.com/gWRYLIS16mKUskSO/arcgis/rest/services/VHR_Areas/FeatureServer/0
Then just run a query with a where of 0=0 (or whatever you want) and an outFields of *:
https://services2.arcgis.com/gWRYLIS16mKUskSO/arcgis/rest/services/VHR_Areas/FeatureServer/0/query?where=0%3D0&outFields=%2A&f=json
Use f=html instead if you want to see a human-readable request form and results.
Note that feature services have a limit of how many features you can get per request, so you will probably want to filter by geometry or attribute values. Read the documentation to learn everything you can do with feature service queries.
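Here is a rough sketch of that workflow in Python with the requests library, using the item IDs and layer URL from above:

import requests

ARCGIS_ITEMS = "https://www.arcgis.com/sharing/rest/content/items"

# 1. Item data for the app shows which web maps it uses.
app_id = "2c9f3e737cbf4f6faf2eb956fa26cdc5"
app_data = requests.get(f"{ARCGIS_ITEMS}/{app_id}/data", params={"f": "json"}).json()
# Search app_data for "webmap" entries; the exact path depends on the app template.

# 2. Item data for a web map lists its operational layers.
webmap_id = "d2b4a98c39fd4587b99ac0878c420125"
webmap = requests.get(f"{ARCGIS_ITEMS}/{webmap_id}/data", params={"f": "json"}).json()
layer_urls = [lyr["url"] for lyr in webmap.get("operationalLayers", []) if "url" in lyr]

# 3. Query a feature layer for every feature and all attributes.
layer_url = ("https://services2.arcgis.com/gWRYLIS16mKUskSO/arcgis/rest/services/"
             "VHR_Areas/FeatureServer/0")
result = requests.get(
    f"{layer_url}/query",
    params={"where": "0=0", "outFields": "*", "f": "json"},
).json()

for feature in result.get("features", []):
    print(feature["attributes"])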
I'm getting started with Amazon MWS and I can't seem to find any real information on the correct flow for listing an item under an existing ASIN. Let's say for example I am selling a "Vulli Sophie the Giraffe Teether". I do an initial lookup using "ListMatchingProducts" and find that my item already exists with the ASIN "B000IDSLOG". What is the next stage in the process? All the documentation talks about the fact that the product feed is intended to match our SKU to the Amazon ASIN, but I've not seen any definitive information about how this actually works, especially in the scenario where you already know the ASIN you wish to use.
Ideally I'm interested in seeing the correct flow for each scenario (existing product found/not found in the search) in terms of what API calls should be made and in what order.
Thanks
The process of listing an item on Amazon is actually very similar for existing ASINs and new ones.
Listing items can consist of these steps:
Call SubmitFeed() to send a _POST_PRODUCT_DATA_ feed
is mandatory in all cases. You can omit product details if you're adding your listing to an existing item. If you list new products, this feed must be successfully processed before sending any other feed for those same item(s); I'm not sure if the same is true for existing products.
Call SubmitFeed() to send a _POST_PRODUCT_RELATIONSHIP_DATA_ feed
This step can be skipped for existing products or products without variants or other parent/child relations
Call SubmitFeed() to send a _POST_PRODUCT_IMAGE_DATA_ feed
This step can be skipped for existing products. Amazon is currently in the process of making images mandatory, so for new products or products currently not showing an image, you really should submit at least one image
Call SubmitFeed() to send a _POST_PRODUCT_PRICING_DATA_ feed
is mandatory in all cases
Call SubmitFeed() to send a _POST_INVENTORY_AVAILABILITY_DATA_ feed
is mandatory in all cases
Call SubmitFeed() to send a _POST_PRODUCT_OVERRIDES_DATA_ feed
is optional, and only used for items that have special shipping rates applied (e.g. expedited products)
More information on feeds is available on the Amazon Developer Documentation website and in Selling on Amazon: Guide to XML
It seems in the case of adding a product with an existing ASIN you can actually send a very basic XML request such as this, making sure to include the ASIN:
<AmazonEnvelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="amzn-envelope.xsd">
  <Header>
    <DocumentVersion>1.01</DocumentVersion>
    <MerchantIdentifier>MERCHANT_IDENTIFIER</MerchantIdentifier>
  </Header>
  <MessageType>Product</MessageType>
  <PurgeAndReplace>false</PurgeAndReplace>
  <Message>
    <MessageID>1</MessageID>
    <OperationType>Update</OperationType>
    <Product>
      <SKU>UNIQUE-TO-ME-1234</SKU>
      <StandardProductID>
        <Type>ASIN</Type>
        <Value>B000A0S46M</Value>
      </StandardProductID>
      <Condition>
        <ConditionType>New</ConditionType>
      </Condition>
    </Product>
  </Message>
</AmazonEnvelope>
Essentially though, from what I've read elsewhere, it seems that Amazon will attempt to match a product to an existing ASIN based on the data within the _POST_PRODUCT_DATA_ feed even if an ASIN isn't provided. It will use elements such as the title, manufacturer, brand and other product-specific information, compare them to its catalog, and determine whether it is an existing item or a new one to be added. If you do know it already has an ASIN, though, you can provide a very simple XML feed as shown above.
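For example, submitting that envelope could look something like the sketch below. This assumes the third-party python-amazon-mws package rather than an official SDK, and method/argument names may differ between versions, so treat it as an outline:

from mws import mws  # third-party "python-amazon-mws" package (assumed)

feeds_api = mws.Feeds(
    access_key="YOUR_AWS_ACCESS_KEY",
    secret_key="YOUR_AWS_SECRET_KEY",
    account_id="YOUR_SELLER_ID",
)

# The XML envelope shown above, saved to a file.
with open("product_feed.xml", "rb") as f:
    feed_content = f.read()

response = feeds_api.submit_feed(
    feed=feed_content,
    feed_type="_POST_PRODUCT_DATA_",
    content_type="text/xml",
)
print(response.parsed)  # contains a FeedSubmissionId

# Poll GetFeedSubmissionResult with that FeedSubmissionId to confirm the feed
# processed successfully before sending the pricing and inventory feeds.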
You can simply use a flat-file template from Amazon to load your feeds to the marketplace with your seller account credentials, using Marketplace Web Service.
Use the 'Inventory Loader' file-type template, which will override existing items or create new ones if they don't exist.
You can define the 'ASIN-Hint' field/column in the file for items that already exist on the marketplace, as in your case.
The idea is that Amazon matches the ASIN value provided in the feed against the existing product detail and syncs the information accordingly.
Try uploading your product without the ASIN-Hint and check the processing report; you will get a good idea then.
You may also refer to http://prashantpandeytech.blogspot.in/2015/03/mws-amazon-marketplace-web-service-api.html for a step-wise implementation.