Send a function along with the schema from the backend in AlpacaJS

I have a backend API that returns the full JSON (schema and options) for an AlpacaJS form. The content type of the response is application/json. The following is a sample response:
{
  "options": {
    "fields": {
      "students": {
        "items": {
          "type": "tablerow"
        },
        "type": "table"
      }
    },
    "form": {
      "buttons": {
        "submit": {
          "click": "function(){alert(\"sample\");}"
        }
      }
    }
  },
  "schema": {
    "properties": {
      "students": {
        "items": {
          "properties": {
            "name": {
              "required": true,
              "title": "Name",
              "type": "string"
            },
            "contact-number": {
              "title": "Age",
              "type": "string"
            }
          },
          "type": "object"
        },
        "type": "array"
      }
    },
    "type": "object"
  }
}
When I click the Submit button, I get the following error in the browser console:
Uncaught TypeError: t.call is not a function
I think the issue is that the function is being treated as a string in the following section of the response:
"form": {
"buttons": {
"submit": {
"click": "function(){alert(\"sample\");}"
}
}
}
Is there a way in AlpacaJS to send a JavaScript function from the backend, or is there a way to convert the function string into a JavaScript function on the frontend?

To get that working, you should transform the stringified function into a real function with new Function('return ' + val)(); (beware: this is a form of eval, and eval is evil).
Here's a working fiddle for that.
Tell me if it didn't work for you.
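For illustration, here is a minimal sketch of how that conversion could be wired in before rendering the form. The endpoint URL, element id and jQuery-based loading are assumptions, not part of the original question:

// Hypothetical endpoint and element id -- adjust to your setup.
$.getJSON("/api/form-definition", function (config) {
  var clickValue = config.options.form.buttons.submit.click;

  // The backend can only ship the handler as a string, so rebuild it here.
  // new Function is a restricted form of eval: only do this with a trusted backend.
  if (typeof clickValue === "string") {
    config.options.form.buttons.submit.click = new Function("return " + clickValue)();
  }

  $("#form").alpaca({
    schema: config.schema,
    options: config.options
  });
});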

Related

Postman response schema in API Documentation

I created a collection in Postman that should work as my API documentation. I know that for every endpoint I can save example responses, which will be included in the documentation.
Now I would like to include a response schema as well, so that people see a general definition of the data types and structure of the response. In OpenAPI this is possible within the "responses" block like this:
"responses": {
"200": {
"description": "200 response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
...
"components": {
"schemas": {
"User": {
"title": "User Schema",
"type": "object",
"properties": {
"name": {
"type": "string"
}
}
}
}
}
Is there a similar way to do this in Postman as well? I searched the documentation for quite some time but could not find anything useful except for one line here that sounds like it should be possible:
Each collection / request listing indicates the method, required authorization type, URL, description, headers, request and response structures, and examples.
The description field supports Markdown, so you can use content like the following:
# Schema:
```
"responses": {
"200": {
"description": "200 response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/User"
}
}
}
}
}
...
"components": {
"schemas": {
"User": {
"title": "User Schema",
"type": "object",
"properties": {
"name": {
"type": "string"
}
}
}
}
}
```
Postman will then render this Markdown, including the fenced schema block, in the generated documentation.

Elasticsearch reindexing with selected fields results in the addition of a non-selected empty field

Scenario:
We are using AWS Elasticsearch 6.8. We have an index (index-A) whose mapping consists of multiple nested objects and a JSON hierarchy. We need to create a new index (index-B) and move all documents from index-A to index-B.
We need to create index-B with only specific fields.
We need to rename fields while reindexing.
For example:
index-A mapping:
{
  "userdata": {
    "properties": {
      "payload": {
        "type": "object",
        "properties": {
          "Alldata": {
            "Username": {
              "type": "keyword"
            },
            "Designation": {
              "type": "keyword"
            },
            "Company": {
              "type": "keyword"
            },
            "Region": {
              "type": "keyword"
            }
          }
        }
      }
    }
  }
}
Expected structure of the index-B mapping after reindexing with renaming (Company → cnm, Region → rg):
{
  "userdata": {
    "properties": {
      "cnm": {
        "type": "keyword"
      },
      "rg": {
        "type": "keyword"
      }
    }
  }
}
Steps we are following:
First, we use the Create Index API to create index-B with the above mapping structure.
Once the index is created, we create an ingest pipeline.
PUT <Elasticsearch domain endpoint>/_ingest/pipeline/my_rename_pipeline
{
  "description": "rename field pipeline",
  "processors": [
    {
      "rename": {
        "field": "payload.Company",
        "target_field": "cnm",
        "ignore_missing": true
      }
    },
    {
      "rename": {
        "field": "payload.Region",
        "target_field": "rg",
        "ignore_missing": true
      }
    }
  ]
}
Then we perform the reindex operation; the payload for it is shown below:
let reindexParams = {
  wait_for_completion: false,
  slices: "auto",
  body: {
    "conflicts": "proceed",
    "source": {
      "size": 8000,
      "index": "index-A",
      "_source": ["payload.Company", "payload.Region"]
    },
    "dest": {
      "index": "index-B",
      "pipeline": "my_rename_pipeline",
      "version_type": "external"
    }
  }
};
Problem:
Once the reindexing is complete, all documents are transferred to the new index with the renamed fields as expected, but there is one additional field that was not selected. As you can see below, the "payload" object is also added to the new index after reindexing. This field is empty and contains no data.
index-B looks like below after reindexing:
{
  "userdata": {
    "properties": {
      "cnm": {
        "type": "keyword"
      },
      "rg": {
        "type": "keyword"
      },
      "payload": {
        "properties": {
          "Alldata": {
            "type": "object"
          }
        }
      }
    }
  }
}
We are unable to find a workaround and need help to stop this field from being created. Any help will be appreciated.
Great job!! You're almost there, you simply need to remove the payload field within your pipeline using the remove processor and you're good:
{
  "description": "rename field pipeline",
  "processors": [
    {
      "rename": {
        "field": "payload.Company",
        "target_field": "cnm",
        "ignore_missing": true
      }
    },
    {
      "rename": {
        "field": "payload.Region",
        "target_field": "rg",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "payload"
      }
    }
  ]
}
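If you want to verify the pipeline output before running the full reindex, the ingest simulate API is handy; here is a sketch with a made-up sample document:

POST _ingest/pipeline/my_rename_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "payload": {
          "Alldata": {},
          "Company": "Acme",
          "Region": "EU"
        }
      }
    }
  ]
}

The simulated result should contain only cnm and rg, with no leftover payload object.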

Logstash keeps creating fields despite dynamic mapping being deactivated

I have defined my own template to be used by Logstash, in which I have deactivated dynamic mapping:
{
  "my_index": {
    "order": 0,
    "template": "my_index",
    "settings": {
      "index": {
        "mapper": {
          "dynamic": "false"
        },
        "analysis": {
          "analyzer": {
            "nlp_analyzer": {
              "filter": [
                "lowercase"
              ],
              "type": "custom",
              "tokenizer": "nlp_tokenizer"
            }
          },
          "tokenizer": {
            "nlp_tokenizer": {
              "pattern": "(\w+)|(\s*[\s+])",
              "type": "pattern"
            }
          }
        },
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "author": {
        "properties": {
          "author_name": {
            "type": "keyword"
          },
          "author_pseudo": {
            "type": "keyword"
          },
          "author_location": {
            "type": "text",
            "fields": {
              "standard": {
                "analyzer": "standard",
                "term_vector": "yes",
                "type": "text"
              },
              "nlp": {
                "analyzer": "nlp_analyzer",
                "term_vector": "yes",
                "type": "text"
              }
            }
          }
        }
      }
    }
  }
}
To test that Elasticsearch won't generate new fields, I leave a field in my events that is not present in my mapping. Let's say I have this event:
{
  "type" => "author",
  "author_pseudo" => "chloemdelorenzo",
  "author_name" => "Chloe DeLorenzo",
  "author_location" => "US",
}
Elasticsearch will generate a new field in the mapping when indexing this event:
"type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
I know that Logstash is using my template, because my mapping uses a custom analyzer and I can find it in the generated mapping. But apparently it does not take into account that dynamic mapping is disabled.
I want Elasticsearch to ignore fields that are not present in my mapping but still index the fields that have a defined mapping. How can I prevent Logstash from creating new fields?
You should enforce the mapping at the document type level.
https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-mapping.html
Regardless of the value of this setting, types can still be added
explicitly when creating an index or with the PUT mapping API.
So your mapping will look like:
"mappings": {
"author": {
"dynamic": false,
"properties": {
"author_name": {
"type": "keyword"
},
"author_pseudo": {
"type": "keyword"
},
"author_location": {
"type": "text",
"fields": {
"standard": {
"analyzer": "standard",
"term_vector": "yes",
"type": "text"
},
"nlp": {
"analyzer": "nlp_analyzer",
"term_vector": "yes",
"type": "text"
}
}
}
}
}
}
This answer is not exactly what you are requesting, but you can manually remove fields with a logstash filter like this:
filter {
  mutate {
    remove_field => ["fieldname"]
  }
}
If your events have a defined list of fields, you could solve your problem this way.
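Another option, if you prefer whitelisting the fields you want to keep rather than listing the fields to remove, is the prune filter (a separate plugin, logstash-filter-prune). A sketch assuming the field names from the question:

filter {
  # Keep only fields whose names match one of these patterns; everything else is dropped.
  # @timestamp and @version are whitelisted so Logstash metadata survives; add any other
  # field your outputs rely on (for example "type") to the list as well.
  prune {
    whitelist_names => ["^author_name$", "^author_pseudo$", "^author_location$", "^@timestamp$", "^@version$"]
  }
}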

How to iterate and get the properties and values of a JSON:API response in Ember.js?

I have the following Ember request:
this.store.createRecord('food_list', requestObj).save().then((response) => {
  console.log(response);
  console.log(response.id); // This is working
  console.log(response.food_list_code); // this does NOT work !!!!!
});
It calls an API, saves a record to the database, and then returns the following response:
{
  "links": {
    "self": "/api/food_list"
  },
  "data": {
    "type": "",
    "id": "da6b8615-3f4334-550544442",
    "attributes": {
      "food_list_date": "2013-02-14 23:35:19",
      "food_list_id": "da6b8615-3f4334-550544442",
      "food_list_code": "GORMA"
    },
    "relationships": {
      "food_list_parameters": {
        "data": [
          {
            "type": "food_list_parameter",
            "id": "RERAFFASD9ASD09ASDFA0SDFASD"
          }
        ]
      },
      "food_new_Name": {
        "data": {
          "type": "food_new_Name",
          "id": "AKASDJFALSKDFKLSDF23W32KJ2L23"
        }
      }
    },
    "links": {
      "self": "/api/BLAH/BLAH/BLAH"
    }
  }
}
But since the above response is JSON:API wrapped in an Ember object, I don't know how to parse it.
If I try response.id, I get the string da6b8615-3f4334-550544442.
But how do I get the value of food_list_code from the response? Or how can I iterate over the response object to get "food_list_code" and "food_list_date"?
The output of console.log(response) is the following Ember class:
Class {__ember1500143184544: "ember1198", store: Class, _internalModel: InternalModel, currentState...
I appreciate your help.
M.
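For reference, here is a minimal sketch of how such attributes are usually read from a classic Ember Data record. It assumes a food-list model declaring these attributes (not shown in the question); depending on your serializer, the underscored keys may also need a keyForAttribute mapping:

// app/models/food-list.js (hypothetical model definition)
import DS from 'ember-data';

export default DS.Model.extend({
  food_list_date: DS.attr('string'),
  food_list_id: DS.attr('string'),
  food_list_code: DS.attr('string')
});

// Reading the saved record:
this.store.createRecord('food_list', requestObj).save().then((response) => {
  console.log(response.get('food_list_code')); // declared attributes are read with get()
  response.eachAttribute((name) => {
    console.log(name, response.get(name));     // iterate over all declared attributes
  });
});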

ElasticSearch (AWS): How to use another index as a query/match parameter?

Basically I am trying to implement this strategy.
Sample Data:
PUT /newsfeed_consumer_network/consumer_network/urn:viadeo:member:me
{
  "producerIds": [
    "urn:viadeo:member:ned",
    "urn:viadeo:member:john",
    "urn:viadeo:member:mary"
  ]
}
PUT /newsfeed/news/urn:viadeo:news:33
{
  "producerId": "urn:viadeo:member:john",
  "published": "2014-12-17T12:45:00.000Z",
  "actor": {
    "id": "urn:viadeo:member:john",
    "objectType": "member",
    "displayName": "John"
  },
  "verb": "add",
  "object": {
    "id": "urn:viadeo:position:10",
    "objectType": "position",
    "displayName": "Software Engineer # Viadeo"
  },
  "target": {
    "id": "urn:viadeo:profile:john",
    "objectType": "profile",
    "displayName": "John's profile"
  }
}
Sample Query:
POST /newsfeed/news/_search
{
  "query": {
    "bool": {
      "must": [{
        "match": {
          "actor.id": {
            "producerId": {
              "index": "newsfeed_consumer_network",
              "type": "consumer_network",
              "id": "urn:viadeo:network:me",
              "path": "producerIds"
            }
          }
        }
      }]
    }
  }
}
I am getting the following error:
"type": "query_parsing_exception",
"reason": "[match] query does not support [index]"
How can I use an index to support a matching query? Is there any way to implement this?
Basically I just want to use another document as the source of the matching parameters for my query. Is this even possible with Elasticsearch?
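For what it's worth, the error says that match does not accept an index parameter; the cross-index lookup described here is what the terms query's lookup form is for. A sketch against the sample data above (assuming the consumer-network document indexed as urn:viadeo:member:me, and a pre-7.x cluster where the type parameter is still used):

POST /newsfeed/news/_search
{
  "query": {
    "terms": {
      "producerId": {
        "index": "newsfeed_consumer_network",
        "type": "consumer_network",
        "id": "urn:viadeo:member:me",
        "path": "producerIds"
      }
    }
  }
}

This fetches the producerIds array from the consumer-network document and uses its values as the terms to match against producerId in the newsfeed index.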