I've downloaded the samples for Ember.js and followed the directions to start the contacts example via Node on localhost:3000. But there's no contacts.json file that I can find, so nothing loads, and clicking New Contact does nothing. I'm not quite sure how to troubleshoot this, but I'm wondering if it has to do with the contacts.json file being missing when, according to the description, it should be present.
Unfortunately, the Ember examples appear to be a little out of date at the moment (the contacts demo still references the SC namespace last I checked) and some may be broken. I suspect that once Ember 1.0 is released this will get cleaned up. You can view the issues for emberjs/examples on GitHub.
I don't think your problem is caused by the missing contacts.json file, because the live contacts demo is broken in the same way; its console shows:
Failed to load resource: the server responded with a status of 404 (Not Found)
You can comment out the parts of the contacts example that load the JSON data to make that error go away, e.g.:
//App.contactsController.loadContacts();
or you could create a new JSON file, e.g.,
[
  {
    "firstName": "Alice",
    "lastName": "Anderson",
    "phoneNumbers": [
      "+1 123-456-7890"
    ]
  },
  {
    "firstName": "José",
    "lastName": "Zulu",
    "phoneNumbers": [
      "+55 (11) 5000-6000"
    ]
  }
]
However, it might be better not to base your learning on the examples right now. Check out the reference documentation and some of the good getting-started articles instead. I found Trek's article, Advice on & Instruction in the Use Of Ember.js, to be helpful.
I am training a model using Google's Document AI. The training fails with the following error (I have included only part of the JSON for simplicity, but the error is identical for all documents in my dataset):
"trainingDatasetValidation": {
  "documentErrors": [
    {
      "code": 3,
      "message": "Invalid document.",
      "details": [
        {
          "@type": "type.googleapis.com/google.rpc.ErrorInfo",
          "reason": "INVALID_DOCUMENT",
          "domain": "documentai.googleapis.com",
          "metadata": {
            "num_fields": "0",
            "num_fields_needed": "1",
            "document": "5e88c5e4cc05ddb8.json",
            "annotation_name": "INCOME_ADJUSTMENTS",
            "field_name": "entities.text_anchor.text_segments"
          }
        }
      ]
    }
  ]
}
What I understand from this error is that the model expects the field INCOME_ADJUSTMENTS to appear (at least) once in the document, but instead it finds zero instances of it.
That would have been understandable, except that I have already defined the field INCOME_ADJUSTMENTS in my schema as "Optional Once", i.e., this field can appear either zero times or one time.
Am I missing something? Why does this error persist despite the fact that it is addressed in the schema?
P.S. I have also tried "Optional multiple" (as well as "Required once" and "Required multiple") and the error persists.
EDIT: As requested, here's what one of the JSON files looks like. Note that there is no PII here as the details (name, SSN, etc.) are synthetic data.
I had the same issue as you in the past and am having it again right now.
What I managed to do was take the document name from the error message and then search for the corresponding image in the Storage bucket that holds the dataset.
Then I opened the image and located it in my 1000+ image dataset.
Then I deleted the bounding box for the label with the issue and relabeled it. This seemed to solve 90% of the issues I had.
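To avoid hunting through the dataset one error at a time, a small script can flag every document whose entities lack text segments. This is only a sketch: it assumes you have copied the labeled-document JSON files locally (e.g. with gsutil), and the key names "entities", "textAnchor" and "textSegments" are guesses based on the error's field path (entities.text_anchor.text_segments), not confirmed against the export format.

```python
import json
import pathlib


def missing_text_segments(doc: dict) -> list:
    """Return the entity types in a parsed document whose text anchor has no
    text segments -- the condition the training error appears to complain about.
    Key names are assumptions based on the error's field path."""
    bad = []
    for entity in doc.get("entities", []):
        if not entity.get("textAnchor", {}).get("textSegments"):
            bad.append(entity.get("type", "<unnamed>"))
    return bad


def scan_folder(folder: str) -> dict:
    """Map file name -> offending entity types for every *.json in `folder`."""
    report = {}
    for path in pathlib.Path(folder).glob("*.json"):
        with open(path, encoding="utf-8") as f:
            flagged = missing_text_segments(json.load(f))
        if flagged:
            report[path.name] = flagged
    return report
```

Running scan_folder over a local copy of the dataset should list every file (and label) to relabel, instead of fixing them one failed training run at a time.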
It's a ton of manual work, and I wish Google had put more thought into the web app for Document AI when they released it; the ML part is great, but the app is really lackluster.
I would also be very happy to hear about any other fixes.
EDIT: another, quicker workaround I have found is deleting the latest revision of the labeled documents from the dataset in Cloud Storage. That is, take the faulty document name from the operation JSON dump, search for it under documents/, and just delete the latest revision.
This will probably mess up your labeling and make you lose work, but it's a quick fix if you want to at least make some progress.
Removing a few empty boxes and a lot of intersecting boxes fixed it for me.
I had the same problem, so I deleted my whole dataset, imported it, and re-labeled it again.
Then the training worked fine.
I'm not a developer, so this is a little above my head.
My team has implemented two projects in Dialogflow, one for an old app and one for a new app. I have basic access to the old Dialogflow account, and I can see that it has an intent called glossaries, the same intent name as in the new one. In glossaries, there is a training phrase, "What is a red talk?". This phrase only works in one of my apps and I need to know why.
There is no default response or anything under context. If I copy that curl link into a terminal, the payload doesn't return any information.
I found the API for the new app, and red talks is definitely not in the payload when I do a GET for all records. There may be an old API somewhere, but no one knows where.
Where can I find this information? I'm very confused, and all the basic training for Dialogflow points to the default response, which we're not using. I have read through the docs. I have searched the three company GitHub repos that have the application name in them, but I have not found anything. I am looking for an app.intent phrase with glossaries in it, or just the word glossaries.
I have found only this JSON and a glossaryTest.php that doesn't seem helpful:
{
  "meta": {
    "total": 2,
    "page": 1,
    "limit": 10,
    "sort": "createdAt",
    "direction": "desc",
    "load-more": false
  },
  "results": [
    {
      "term": "This is a term",
      "definition": "This is a definition",
      "links": [
        {
          "id": "1",
          "url": "http:\/\/example.com\/1",
          "title": "KWU Course: Lead Generation 36:12:3",
          "ordering": "1"
        },
        {
          "id": "2",
          "url": "http:\/\/example.com\/2",
          "title": "",
          "ordering": "2"
        }
      ]
    }
  ]
}
There is also a JSON file with a lot of data for API calls, but no glossaries there either.
If we're using fulfillment to handle these intents, I don't see a fulfillment header like the Google docs say there should be. I may not have full access, so perhaps I would see more information on the screen if I did; I have no idea. The devs who created this are long gone, and so are the devs who created the new app.
Am I missing an API in my environment documentation? Is the intent hard-coded? I suspect it is. How do I prove that, or otherwise move forward?
Yes, your intents are somehow hard-coded [0], or defined through the UI.
Each intent has a setting to enable fulfillment. If an intent requires
some action by your system or a dynamic response, you should enable
fulfillment for the intent. If an intent without fulfillment enabled
is matched, Dialogflow uses the static response you defined for the
intent. [2]
Perhaps you are using a custom integration [1]. So, unless you are using static responses (those you see in the UI), the frontend code may be managed by your project's API (not the Dialogflow API), and the content may be modified before any further processing or before the response is eventually returned.
As I understand it, you should contact your colleagues to learn about the integration solution they created. Otherwise, if the intent was created through the API, look for the files related to it. They may have created the integration through the SDK while picking up training data from a source outside the codebase, so perhaps you cannot see it directly in the code. Nonetheless, you should be able to access it through the UI once it has been created.
In case my answer was not of help, please do not hesitate to clarify your needs further, perhaps by providing some more information.
[0] https://cloud.google.com/dialogflow/docs/manage-intents#create_intent
[1] https://cloud.google.com/dialogflow/docs/integrations
[2] https://cloud.google.com/dialogflow/docs/fulfillment-overview
I am trying to convert airport GeoCoordinate data i.e. [IATA Code, latitude, longitude] to Gremlin Vertex in an Azure Cosmos DB Graph API project.
Vertex conversion is mainly done through an ASP.NET Core 2.0 console application that uses CSVReader to stream and convert data from an airport.dat (CSV) file.
This process involves converting over 6,000 lines...
So, for example, in the original airport.dat source file, the Montreal Pierre Elliott Trudeau International Airport would be listed using a model similar to the one below:
1,"Montreal / Pierre Elliott Trudeau International Airport","Montreal","Canada","YUL","CYUL",45.4706001282,-73.7407989502,118,-5,"A","America/Toronto","airport","OurAirports"
Then I define a Gremlin Vertex creation query in my code as follows:
var gremlinQuery = $"g.addV('airport').property('id', \"{code}\").property('latitude', {lat}).property('longitude', {lng})";
When the console application is launched, the Vertex creation query is generated exactly as expected:
1 g.addV('airport').property('id', "YUL").property('latitude', 45.4706001282).property('longitude', -73.7407989502)
Note that in the case of Montreal Airport (which is located in North America, not in the Far East...), the longitude is properly formatted with a minus (-) prefix, though this seems to be lost along the way, as a query on the Azure Portal shows:
{
  "id": "YUL",
  "label": "airport",
  "type": "vertex",
  "properties": {
    "latitude": [
      {
        "id": "13a30a4f-42cc-4413-b201-11efe7fa4dbb",
        "value": 45.4706001282
      }
    ],
    "longitude": [
      {
        "id": "74554911-07e5-4766-935a-571eedc21ca3",
        "value": 73.7407989502 <---- // Should be displayed as -73.7407989502
      }
    ]
  }
}
This is a bit awkward. If anyone has encountered a similar issue and was able to fix it, I'm fully open to suggestions.
Thanks
Based on your description, I executed the same Gremlin query on my side and was able to retrieve the inserted Vertex with the negative longitude intact, both from my code and when querying on the Azure Portal.
Per my understanding, you need to check the execution of your code and verify the response of your query to narrow down this issue.
Thank you for your suggestion, though the problem has now been solved in my case.
What was previously suggested as a working scenario [and voted 1...] has long been settled in the case of .NET 4.5.2 [and .NET 4.6.1] used in combination with Microsoft.Azure.Graph 0.2.4-preview. The issue in my question didn't really concern that and may have been a bit more subtle... Perhaps I should have put more emphasis on the fact that the issue was mainly related to Microsoft.Azure.Graph 0.3.1-preview used in a Core 2.0 + dotnet CLI scenario.
According to the comments on the GitHub issue "Graph - Multiple issues with parsing of numeric constants in the graph gremlin query #438",
https://github.com/Azure/azure-documentdb-dotnet/issues/438
there are indeed fair reasons to believe that the issue was a bug in Microsoft.Azure.Graph 0.3.1-preview. I chose the Gremlin.Net approach instead and managed to get the proper result I expected.
I want to be able to build all branches that are not master. However, when I try ^((?!master).)*$, the UI correctly shows all non-master branches, but saving returns an HTTP 400 error:
{
  "error": {
    "code": 400,
    "message": "trigger_template branch_name is not a valid regular expression",
    "status": "INVALID_ARGUMENT"
  }
}
This is stupid... but it works, by accepting any branch name other than "master" itself:
^(?:[^m]|m[^a]|ma[^s]|mas[^t]|mast[^e]|maste[^r]|master.)
The regex used must be compatible with Go's regex library (RE2), which this one is not. (It is compatible with JavaScript, which is why the UI works with it.) https://regex101.com/ is useful for playing with different language parsers (a teammate just showed it to me). Go's regex documentation is on GitHub.
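To see why the workaround is portable: RE2 deliberately omits lookahead, so (?!master) is rejected, while the workaround uses only alternation and negated character classes, which RE2 supports. As a sketch, the pattern can be exercised in Python (whose syntax for this lookahead-free pattern is identical), assuming the trigger performs an unanchored search against the branch name:

```python
import re

# The lookahead-free workaround pattern from the answer above. It uses only
# alternation and negated character classes, so it is legal in RE2 (Go) as
# well as in Python; the rejected pattern relied on (?!...), which RE2 lacks.
pattern = re.compile(r"^(?:[^m]|m[^a]|ma[^s]|mas[^t]|mast[^e]|maste[^r]|master.)")

for branch in ["master", "maste", "develop", "feature/x", "masterful"]:
    print(branch, "->", "build" if pattern.search(branch) else "skip")
```

Note that under these (assumed) search semantics the pattern rejects "master" and its prefixes ("maste", "mast", ...) but still accepts names that merely start with "master", such as "masterful".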
I'm in the process of creating a REST API. Among others, there is a resource-type called company which has quite a lot of attributes/fields.
The two common use cases when dealing with company resources are:
Load a whole company and all its attributes with one single request
Update a (relatively small) set of attributes of a company, but never all attributes at the same time
I came up with two different approaches regarding the design of the API and need to pick one of them (maybe there are even better approaches, so feel free to comment):
1. Using subresources for fine-grained updates
Since the attributes of a company can be grouped into categories (e.g. street, city and state represent an address... phone, mail and fax represent contact information and so on...), one approach could be to use the following routes:
/company/id: can be used to fetch a whole company using GET
/company/id/address: can be used to update address information (street, city...) using PUT
/company/id/contact: can be used to update contact information (phone, mail...) using PUT
And so on.
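For illustration, a fine-grained update in this first approach could be a plain PUT of the address representation to its subresource URL (the URL and field names here are made up, not taken from the actual API):

```http
PUT /company/42/address HTTP/1.1
Content-Type: application/json

{
  "street": "Main Street 1",
  "city": "Big Town",
  "state": "Neverland"
}
```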
But: using GET on subresources like /company/id/address would never happen. Likewise, updating /company/id itself would also never happen (see the use cases above). I'm not sure this approach follows the idea of REST, since I'm loading and manipulating the same data through different URLs.
2. Using HTTP PATCH for fine-grained updates
In this approach, there are no extra routes for partial updates. Instead, there is only one endpoint:
/company/id: can be used to fetch a whole company using GET and, at the same time, to update a subset of the resource (address, contact info etc.) using PATCH.
From a technical point of view, I'm quite sure that both approaches would work fine. However, I don't want to use REST in a way that it isn't supposed to be used. Which approach do you prefer?
Do you really need each and every field contained in the GET response all the time? If not, then it's more than fine to create separate resources for addresses and contacts. Maybe you will later find a further use case where you can reuse these resources.
Moreover, you can embed other resources within resources. JSON HAL (hal+json), for example, explicitly provides an _embedded property where you can embed the current state of, e.g., sub-resources. A simplified HAL-like JSON representation of an imaginary company resource with embedded resources could look like this:
{
  "name": "Test Company",
  "businessType": "PLC",
  "foundingYear": 2010,
  "founders": [
    {
      "name": "Tim Test",
      "_links": {
        "self": {
          "href": "http://example.org/persons/1234"
        }
      }
    }
  ],
  ...
  "_embedded": {
    "address": {
      "street": "Main Street 1",
      "city": "Big Town",
      "zipCode": "12345",
      "country": "Neverland",
      "_links": {
        "self": {
          "href": "http://example.org/companies/someCompanyId/address/1"
        },
        "googleMaps": {
          "href": "http://maps.google.com/?ll=39.774769,-74.86084"
        }
      }
    },
    "contacts": {
      "CEO": {
        "name": "Maria Sample",
        ...
        "_links": {
          "self": {
            "href": "http://example.org/persons/1235"
          }
        }
      },
      ...
    }
  }
}
Updating embedded resources is therefore straightforward: send a PUT request to the enclosed URI of the particular resource. As GET requests may be cached, you might need to provide finer-grained caching settings (e.g. conditional GET requests via If-Modified-Since or ETag header fields) to retrieve the actual state after an update. These headers should consider the whole resource (including embedded ones) in order to return the updated state.
Concerning PUT vs. PATCH for "partial updates":
While the semantics of PUT are rather clear, PATCH is often confused with a partial update performed by just sending the new state of some properties to the service. This article, however, describes what PATCH really should do.
In short, for a PATCH request the client is responsible for comparing the current state of a resource and calculating the steps necessary to transform it into the desired state. The request then contains instructions that the server executes to produce the updated version. A PATCH request is furthermore atomic: either all instructions succeed or none do. This adds some transaction requirements to the request.
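One standardized instruction format for this is JSON Patch (RFC 6902, media type application/json-patch+json). A request body for the company example might look like this (the paths are illustrative, not taken from the actual API):

```json
[
  { "op": "replace", "path": "/address/city", "value": "New Town" },
  { "op": "remove", "path": "/contacts/fax" }
]
```

The operations are applied in order, and if any of them fails the whole patch fails, which matches the atomicity requirement of PATCH.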
In this particular case I'd use PATCH instead of the subresources approach. First of all, these aren't real subresources; they're just a fake abstraction introduced to eliminate the problem of updating the whole big entity (resource), whereas PATCH is a REST-compatible, well-established, and common approach.
And (IMO the ultima ratio): imagine that you need to extend company somehow (by adding a magazine, venue, CTO, whatever). Will you add a new endpoint so clients can update each newly-added part of the resource? Where does that end? With multiple endpoints that no one understands. With PATCH, your API is ready for new elements of a company.