I have used the Google Books API, gTTS, googletrans, and PyPDF4 in my Django project. I am trying to draw a Data Flow Diagram of my project, and I am confused about how I should draw it.
My problem is that I want to know whether it is valid to do it as I have done in the image. Can I use an API or a module as a data store in a Data Flow Diagram?
Please help me figure it out.
I have been collecting numerous text descriptions of articles, where each description is structured differently. Now I need to "create" an algorithm that extracts the title of each article for me, which is a hard task. I have come across Google ML Natural Language, and it seems to be able to create one for me.
Unfortunately, I have not been able to find out exactly how to use it,
so my question is: how exactly can I set it up? Additionally, it would be helpful to know whether Firebase offers such a service, since I am planning to build a Firebase project.
Thanks in advance for any help!
Unfortunately, models created using Google AutoML Natural Language are not exportable to TensorFlow Lite (mobile models). Based on your use case, you will need a model for text classification; the provided link has a sample of how this kind of model works. You can follow this tutorial to train a custom model on the data you have so it can identify the title of an article.
Once training is done, you can:
Deploy it in Firebase
Download the model to your device and perform testing.
You can find detailed instructions covering everything from training the model to testing it on your device, for either iOS or Android.
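For reference, here is a minimal sketch (my own example, not part of the linked tutorial) of querying a deployed AutoML Natural Language text-classification model from Python; the project ID, location, and model ID are placeholders you would substitute with your own:

```python
# A minimal sketch of calling a deployed AutoML Natural Language model for prediction.
# The project ID, location, and model ID below are placeholders; substitute your own.
from google.cloud import automl_v1

def classify_text(text: str) -> list:
    client = automl_v1.PredictionServiceClient()
    # Full resource name of the deployed model (placeholder values).
    model_name = "projects/YOUR_PROJECT_ID/locations/us-central1/models/YOUR_MODEL_ID"
    payload = {"text_snippet": {"content": text, "mime_type": "text/plain"}}
    response = client.predict(name=model_name, payload=payload)
    # Each result carries a predicted label and a confidence score.
    return [(r.display_name, r.classification.score) for r in response.payload]

if __name__ == "__main__":
    for label, score in classify_text("Some article description to classify"):
        print(f"{label}: {score:.3f}")
```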
I'm using AutoML Video Intelligence, and it's very tedious; I was wondering if there is an easier way to create datasets for object tracking. Is there an easy way to get the time and position of the box?
I'm pretty sure you can find answers to these questions in the GCP documentation, in particular the AutoML Video Intelligence product pages.
The object tracking process, at least, is nicely explained in terms of implementation, whether through the GCP Console UI or by constructing HTTP calls to the Cloud AutoML REST API.
Furthermore, you can find an example showing how to handle video segment positioning for the relevant prediction requests.
You can adjust your initial question, extending it with specific details about your use case, so the solution can be addressed more precisely.
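As a rough illustration of the tedious part (my own sketch, not from the documentation), per-frame pixel boxes can be converted into the time offsets and normalized coordinates that an object-tracking training CSV needs; the frame rate, resolution, bucket URI, and exact column order are assumptions to verify against the AutoML Video Intelligence docs:

```python
# A rough sketch of turning per-frame pixel boxes into time offsets and
# normalized coordinates for an AutoML Video object-tracking CSV.
# Check the exact column order against the AutoML Video Intelligence docs before using it.
import csv

FPS = 30.0                      # assumed frame rate of the source video
WIDTH, HEIGHT = 1920, 1080      # assumed video resolution in pixels

def row(video_uri, label, instance_id, frame, x_min, y_min, x_max, y_max):
    time_offset = frame / FPS   # seconds from the start of the video
    # Coordinates are expressed relative to the frame size, in [0, 1].
    return [video_uri, label, instance_id, f"{time_offset:.3f}",
            x_min / WIDTH, y_min / HEIGHT, x_max / WIDTH, y_max / HEIGHT]

with open("object_tracking.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # e.g. a box around frame 150 of a video in Cloud Storage (placeholder URI)
    writer.writerow(row("gs://my-bucket/video.mp4", "car", 0, 150, 640, 360, 960, 720))
```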
I want to use the Google Cloud Vision API in my Android app to detect whether an uploaded picture is mainly food or not. The problem is that the response JSON is rather big and confusing. It says a lot about the picture but doesn't say what the whole picture is of (food or something like that). I contacted the support team but didn't get an answer.
What you really want is custom classification, not the raw Cloud Vision annotations.
Either use https://cloud.google.com/automl/ or reinvent the wheel yourself like I did: https://stackoverflow.com/a/55880316/322020
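That said, if a coarse decision is enough, a minimal sketch along these lines (my own example, not part of the answer) could check whether Cloud Vision's label detection returns a food-related label above some confidence threshold; the label set and the 0.7 threshold are assumptions to tune:

```python
# A minimal sketch: decide "food or not" from Cloud Vision label detection.
# The food_labels set and the 0.7 threshold are assumptions to tune for your data.
from google.cloud import vision

def looks_like_food(image_path: str) -> bool:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    food_labels = {"food", "dish", "cuisine", "ingredient"}
    return any(label.description.lower() in food_labels and label.score > 0.7
               for label in response.label_annotations)

print(looks_like_food("photo.jpg"))
```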
I am new to Azure Cognitive Services. I want to detect multiple objects in a single image. Is it possible with the Custom Vision API?
Any help is appreciated. Thank you.
You should be able to with the Object Detection part of Custom Vision. Simply give it training images that contain multiple objects and it should start detecting each of them.
For example, I was playing with it a while ago to see if it could detect red and white wines. After sending a few training images containing both, I started getting results like the one below.
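As a rough illustration (my own sketch, not part of the answer), querying a published object-detection iteration from Python could look like this; the endpoint, prediction key, project ID, and iteration name are placeholders:

```python
# A rough sketch of calling a published Custom Vision object-detection model.
# Endpoint, prediction key, project ID, and published iteration name are placeholders.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<YOUR_PREDICTION_KEY>"})
predictor = CustomVisionPredictionClient("<YOUR_ENDPOINT>", credentials)

with open("wines.jpg", "rb") as image:
    results = predictor.detect_image("<PROJECT_ID>", "<PUBLISHED_ITERATION_NAME>", image)

# Each prediction carries a tag, a probability, and a normalized bounding box.
for p in results.predictions:
    if p.probability > 0.5:  # assumed confidence threshold
        print(p.tag_name, p.probability, p.bounding_box.left, p.bounding_box.top,
              p.bounding_box.width, p.bounding_box.height)
```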
I am working with my team to prep a project for a potential client. We've researched the Amazon MWS API, and we're trying to develop an algorithm using data pulled from it.
Just want to make sure we understand the research correctly:
Is it possible to scrape data from Amazon.com like the plugins RevSeller or HowMany do? Then can we add that data to a database for use in an algorithm to determine whether or not an Amazon reseller should invest in reselling a product?
Thanks!
I am doing a similar project. I don't know the specifics of RevSeller or HowMany, but another very popular plugin is Amzpecty. If you use a tool like Fiddler, you can see the HTTP traffic and figure out what it does. They basically scrape the ASINs and offer listing IDs on the current page you are looking at and, one by one, call the Amazon Product Advertising API, which is not the same thing as MWS. From the data returned, they produce a nice overlay that tells you all kinds of important stuff.
Instead of a browser plugin, I'm just writing an app that makes HTTP calls to the PA API based on a list of ASINs, and then I can run the results through my own algorithms. Hope that gives you a starting point.
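In case it helps, here is a structural sketch of that approach (my own illustration, not the app I'm writing); `fetch_item` is a hypothetical placeholder for an actual signed Product Advertising API call, and `score_offer` stands in for whatever reselling-decision logic you build:

```python
# A structural sketch only: loop a list of ASINs through a PA API lookup and store results.
# fetch_item() is a hypothetical placeholder for a real, signed Product Advertising API call;
# score_offer() is a placeholder for your own reselling-decision algorithm.
import sqlite3
import time

def fetch_item(asin: str) -> dict:
    """Placeholder for a signed PA API item lookup for one ASIN."""
    raise NotImplementedError("Replace with a real, signed PA API request")

def score_offer(item: dict) -> float:
    """Placeholder for your own 'should I resell this?' scoring logic."""
    return 0.0

asins = ["B000000001", "B000000002"]          # example ASINs (placeholders)
db = sqlite3.connect("offers.db")
db.execute("CREATE TABLE IF NOT EXISTS offers (asin TEXT, score REAL)")

for asin in asins:
    item = fetch_item(asin)                   # one lookup per ASIN, as the plugins do
    db.execute("INSERT INTO offers VALUES (?, ?)", (asin, score_offer(item)))
    time.sleep(1)                             # stay well under the PA API rate limits
db.commit()
```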