How to create a multi root flatbuffer json file? - c++

I have the following schema:
table Login {
  name:string;
  password:string;
}

table Attack {
  damage:short;
}
I created the following JSON file:
{
  "Login": {
    "name": "a",
    "password": "a",
  }
}
but I get the error: no root type set to parse json with

Add root_type Login; to the bottom of your schema file. If you also want to parse JSON from the command line with Attack, put that into its own schema, or pass --root-type manually.
Also see the documentation, e.g. https://google.github.io/flatbuffers/flatbuffers_guide_using_schema_compiler.html
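For example, the schema from the question with a default root declared, plus the flatc invocations described above (a sketch; the file names login.fbs, login.json, and attack.json are assumptions):
// login.fbs -- same tables as above, with the default root declared
table Login {
  name:string;
  password:string;
}

table Attack {
  damage:short;
}

root_type Login;

# generate a binary buffer from the JSON using the default root (assumed file names)
flatc -b login.fbs login.json
# or override the root type on the command line to parse an Attack JSON instead
flatc -b --root-type Attack login.fbs attack.json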

Related

Removing link from JSON file while generating JSON from Informatica

I am creating a JSON file from Informatica. What I have done so far:
configure the basic entity in Informatica MDM Hub
create a Base object in the provisioning tool
Then I accessed the predefined URL exposed by Informatica, like
"server:port"/cmx/cs/"databaseid"/"baseobjectname"/id.json
By default, Informatica places a link attribute inside the JSON file for parent/child/self, if any.
Is there any way to remove the link attribute?
I am getting the output below:
{
  "link": [
    {
      "href": "serveraddress//1.json?depth=2",
      "rel": "children"
    },
    {
      "href": "serveraddress//1.json",
      "rel": "self"
    }
  ],
  "rowidObject": "2"
}
Expected:
{
  "rowidObject": "2"
}
Finally, I was able to solve this by adding suppressLinks=true to the URL call:
http://server:port/cmx/cs/databaseid/baseobjectname?q=''&depth=3&suppressLinks=true
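A rough sketch of such a call for a single record (server, port, databaseid, baseobjectname, the row id 2, and the credentials are all placeholders to replace with your own values):
# fetch one record without the link attributes (placeholder values, not real ones)
curl -u username:password "http://server:port/cmx/cs/databaseid/baseobjectname/2.json?suppressLinks=true"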

Can you execute a collection which reads a parameterized body from a file and also uses a data file?

I am trying to run the newman CLI and specifying a collection where the body is read from a file, as below:
"body": {
  "mode": "file",
  "file": {
    "src": "body.json"
  }
}
and passing a CSV data file but for some reason newman is not replacing the parameterised body with the data from the CSV.
I have a JSON file containing the body of my request:
{
  "name": "{{name}}"
}
and a CSV file containing the following:
name
Pete
Joe
So my parameterised body is stored in a file, and I want to use a data file to populate the requests in the run, which should execute two iterations with the values from my CSV file.
Can this be done somehow?
Assuming that the other parts of the collection file are correct, the body should look like the below:
"body": {
  "mode": "formdata",
  "formdata": [
    {
      "key": "file",
      "type": "file",
      "src": "body.json"
    }
  ]
}
The body.json file should live in the same directory as the collection; otherwise the full path is required.
This is basically the JSON representation of what you would do in the Postman UI. You can provide any key you'd like, but the type should be file.
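To actually run one iteration per CSV row, a minimal newman invocation might look like the following (collection.json and data.csv are assumed file names; -d is newman's iteration-data option):
# run the collection once per row of the CSV data file
newman run collection.json -d data.csv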

Google Cloud Vision API only returns "name"

I am trying to use the Google Cloud Vision API.
I am using the REST API at this link:
POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate
My request is:
{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://redaction-vision/pdf_page1_employment_request.pdf"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "outputConfig": {
        "gcsDestination": {
          "uri": "gs://redaction-vision"
        }
      }
    }
  ]
}
But the response always contains only "name", like below:
{
  "name": "operations/a7e4e40d1e1ac4c5"
}
My "gs" location is valid.
When I write the wrong path in "gcsSource", 404 not found error is coming.
Who knows why my response is weird?
This is expected; it will not send you the output as an HTTP response. To see what the API did, you need to go to your destination bucket and check for a file named "xxxxxxxxoutput-1-to-1.json". Also, you need to specify the name of the object in your gcsDestination section, for example: gs://redaction-vision/test.
Since asyncBatchAnnotate is an asynchronous operation, it won't return the result, it instead returns the name of the operation. You can use that unique name to call GetOperation to check the status of the operation.
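For example, polling the operation from the response above might look like this (a sketch assuming the standard v1 long-running operations endpoint):
GET https://vision.googleapis.com/v1/operations/a7e4e40d1e1ac4c5
Once the operation reports that it is done, the JSON output can be read from the gcsDestination bucket.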
Note that there could be more than one output file for your PDF if the PDF has more pages than batchSize, and the output JSON file names change depending on the number of pages, so it isn't safe to always append "output-1-to-1.json".
Make sure that the URI prefix you put in the output config is unique, because you have to do a wildcard search in GCS on the prefix you provide to get all of the JSON files that were created.
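A sketch of that wildcard search with the gsutil CLI, assuming the gs://redaction-vision/test prefix suggested above:
# list every output shard written under the chosen prefix (prefix is an assumption)
gsutil ls "gs://redaction-vision/test*"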

Mapping geo_point data when importing data to AWS Elasticsearch

I have a set of data inside DynamoDB that I am importing to AWS Elasticsearch using this tutorial: https://medium.com/@vladyslavhoncharenko/how-to-index-new-and-existing-amazon-dynamodb-content-with-amazon-elasticsearch-service-30c1bbc91365
I need to change the mapping of a part of that data to geo_point.
I have tried creating the mapping before importing the data with:
PUT user
{
  "mappings": {
    "_doc": {
      "properties": {
        "grower_location": {
          "type": "geo_point"
        }
      }
    }
  }
}
When I do this, the data doesn't import, although I don't receive an error.
If I import the data first, I am able to search it, but the grower_location: { lat: #, lon: # } object is mapped as an integer and I am unable to run geo_distance queries.
Please help.
I was able to fix this by importing the data once with the Python script in the tutorial.
Then running
GET user/_mappings
copying the auto-generated mappings to the clipboard, then
DELETE user/
then pasting the copied mappings into a new mapping and changing the type for the geo_point field:
PUT user/
{
  "mappings": {
    "user_type": {
      "properties": {
        ...
        "grower_location": {
          "type": "geo_point"
        }
        ...
      }
    }
  }
}
Then re-importing the data using the python script in the tutorial.
Everything is imported and ready to be searched using geo_point!
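For instance, a minimal geo_distance query against that mapping might look like this (the index and field names come from above; the distance and coordinates are made-up example values):
GET user/_search
{
  "query": {
    "bool": {
      "filter": {
        "geo_distance": {
          "distance": "50km",
          "grower_location": {
            "lat": 40.0,
            "lon": -105.0
          }
        }
      }
    }
  }
}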

How to upload large amounts of stopwords into AWS Elasticsearch

Is it possible to upload a stopwords.txt onto AWS Elasticsearch and specify it as a path by stop token filter?
If you're using AWS Elasticsearch, the only option to do this is using the Elasticsearch REST APIs.
To import large data sets, you can use the bulk API.
Edit: You can now upload "packages" to AWS Elasticsearch service, which lets you add custom lists of stopwords etc. See https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/custom-packages.html
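Once a package is associated with your domain, the AWS documentation linked above describes referencing it by its package ID inside an analyzer definition; a rough sketch under that assumption (my_index is an arbitrary index name and F111111111 is a made-up package ID):
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stop_filter": {
          "type": "stop",
          "stopwords_path": "analyzers/F111111111"
        }
      }
    }
  }
}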
No, it isn't possible to upload a stopwords.txt file to the hosted AWS Elasticsearch service.
What you will have to do is specify the stopwords in a custom analyzer. More details on how to do that can be found in the official documentation.
The official documentation then says to "close and reopen" the index, but again, AWS Elasticsearch doesn't allow that, so you will then have to reindex.
Example:
1. Create an index with your stopwords listed inline within a custom analyzer, e.g.
PUT /my_new_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "english_analyzer": {
          "type": "english",
          "stopwords": ["a", "the", "they", "and"]
        }
      }
    }
  }
}
2. Reindex
POST _reindex
{
  "source": {
    "index": "my_index"
  },
  "dest": {
    "index": "my_new_index"
  }
}
Yes, it is possible by setting stopwords_path when defining your stop token filter.
stopwords_path => A path (either relative to the config location, or absolute) to a stopwords file configuration. Each stop word should be in its own "line" (separated by a line break). The file must be UTF-8 encoded.
Here is how I did it.
1. Copied the stopwords.txt file into the config folder of my Elasticsearch home path.
2. Created a custom token filter with the path set in stopwords_path:
PUT /testindex
{
  "settings": {
    "analysis": {
      "filter": {
        "teststopper": {
          "type": "stop",
          "stopwords_path": "stopwords.txt"
        }
      }
    }
  }
}
3. Verified that the filter was working as expected with the _analyze API:
GET testindex/_analyze
{
  "tokenizer" : "standard",
  "token_filters" : ["teststopper"],
  "text" : "this is a text to test the stop filter",
  "explain" : true,
  "attributes" : ["keyword"]
}
The tokens 'a', 'an', 'the', 'to', 'is' were filtered out since I had added them in config/stopwords.txt file.
For more info:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-stop-tokenfilter.html
https://www.elastic.co/guide/en/elasticsearch/reference/2.2/_explain_analyze.html