Currently, babel-plugin-react-intl creates a separate JSON file for every component, each containing 'id', 'description' and 'defaultMessage' fields. What I need is a single JSON file containing one object, with each 'id' as the key and its 'defaultMessage' as the value.
Present situation:
ComponentA.json
[
  {
    "id": "addEmoticonA",
    "description": "Add emoticon",
    "defaultMessage": "Insert Emoticon"
  },
  {
    "id": "addPhotoA",
    "description": "Add photo",
    "defaultMessage": "Insert photo"
  }
]
ComponentB.json
[
  {
    "id": "addEmoticonB",
    "description": "Add emoji",
    "defaultMessage": "Insert Emoji"
  },
  {
    "id": "addPhotoB",
    "description": "Add picture",
    "defaultMessage": "Insert picture"
  }
]
What I need for translation:
final.json
{
  "addEmoticonA": "Insert Emoticon",
  "addPhotoA": "Insert photo",
  "addEmoticonB": "Insert Emoji",
  "addPhotoB": "Insert picture"
}
Is there any way to accomplish this, for example with a Python script, i.e. to build a single JSON file from the different JSON files? Or to have babel-plugin-react-intl emit a single JSON file directly?
There is a translations manager that will do this. Or, for a custom option, see below.
The script below, which is based on this script, goes through the translation messages created by babel-plugin-react-intl and creates JSON files that contain all messages from all components.
import fs from 'fs'
import { sync as globSync } from 'glob'
import { sync as mkdirpSync } from 'mkdirp'
import * as i18n from '../lib/i18n'

const MESSAGES_PATTERN = './_translations/**/*.json'
const LANG_DIR = './_translations/lang/'

// Ensure output folder exists
mkdirpSync(LANG_DIR)

// Aggregates the default messages that were extracted from the example app's
// React components via the React Intl Babel plugin. An error will be thrown if
// there are messages in different components that use the same `id`. The result
// is a flat collection of `id: message` pairs for the app's default locale.
let defaultMessages = globSync(MESSAGES_PATTERN)
  .map(filename => fs.readFileSync(filename, 'utf8'))
  .map(file => JSON.parse(file))
  .reduce((collection, descriptors) => {
    descriptors.forEach(({ id, defaultMessage, description }) => {
      if (collection.hasOwnProperty(id)) {
        throw new Error(`Duplicate message id: ${id}`)
      }
      collection[id] = { defaultMessage, description }
    })
    return collection
  }, {})

// Sort keys by name
const messageKeys = Object.keys(defaultMessages)
messageKeys.sort()
defaultMessages = messageKeys.reduce((acc, key) => {
  acc[key] = defaultMessages[key]
  return acc
}, {})

// Build the JSON document for the available languages
i18n.en = messageKeys.reduce((acc, key) => {
  acc[key] = defaultMessages[key].defaultMessage
  return acc
}, {})

Object.keys(i18n).forEach(lang => {
  const langDoc = i18n[lang]
  // Fall back to an empty string for ids that are not yet translated
  const units = messageKeys.reduce((collection, id) => {
    collection[id] = langDoc[id] || ''
    return collection
  }, {})
  fs.writeFileSync(`${LANG_DIR}${lang}.json`, JSON.stringify(units, null, 2))
})
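Alternatively, since the question mentions a Python script, here is a minimal standalone sketch that flattens the extracted per-component files straight into the final.json shape. The glob pattern reuses the ./_translations layout from the script above; adjust the paths to your project.

import glob
import json

# Merge every per-component JSON file emitted by babel-plugin-react-intl
# into a single {id: defaultMessage} object.
merged = {}
for path in glob.glob('./_translations/**/*.json', recursive=True):
    with open(path, encoding='utf-8') as f:
        for descriptor in json.load(f):
            if descriptor['id'] in merged:
                raise ValueError('Duplicate message id: ' + descriptor['id'])
            merged[descriptor['id']] = descriptor['defaultMessage']

# Write the aggregated messages as final.json
with open('final.json', 'w', encoding='utf-8') as f:
    json.dump(merged, f, ensure_ascii=False, indent=2)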
You can use babel-plugin-react-intl-extractor to aggregate your translations into a single file. It also recompiles the translation files automatically on each change to your messages.
Related
I'm working on a SageMaker labeling job with custom datatypes. For some reason, though, I'm not getting the correct label in the AWS web console. It should show the selected label, which is "Native", but instead I'm getting the <labelattributename>, which is "new-test-14".
After Ground Truth runs the post-annotation lambda, it seems to modify the metadata before returning a data object. The data object it returns doesn't contain a class-name key inside the metadata attribute, even when I hard-code the lambda to return an object that contains it.
My manifest file looks like this:
{"source-ref" : "s3://<file-name>", "text" : "Hello world"}
{"source-ref" : "s3://"<file-name>", "text" : "Hello world"}
And the worker response looks like this:
{"answers":[{"acceptanceTime":"2021-05-18T16:08:29.473Z","answerContent":{"new-test-14":{"label":"Native"}},"submissionTime":"2021-05-18T16:09:15.960Z","timeSpentInSeconds":46.487,"workerId":"private.us-east-1.ea05a03fcd679cbb","workerMetadata":{"identityData":{"identityProviderType":"Cognito","issuer":"https://cognito-idp.us-east-1.amazonaws.com/us-east-1_XPxQ9txEq","sub":"edc59ce1-e09d-4551-9e0d-a240465ea14a"}}}]}
That worker response gets processed by my post-annotation lambda, which is modeled after this aws sample ground truth recipe. Here's my code:
import json
import sys
import boto3
from datetime import datetime

def lambda_handler(event, context):
    # Event received
    print("Received event: " + json.dumps(event, indent=2))
    labeling_job_arn = event["labelingJobArn"]
    label_attribute_name = event["labelAttributeName"]
    label_categories = None
    if "label_categories" in event:
        label_categories = event["labelCategories"]
        print(" Label Categories are : " + str(label_categories))
    payload = event["payload"]
    role_arn = event["roleArn"]
    output_config = None  # Output s3 location. You can choose to write your annotation to this location
    if "outputConfig" in event:
        output_config = event["outputConfig"]
    # If you specified a KMS key in your labeling job, you can use the key to write
    # consolidated_output to s3 location specified in outputConfig.
    # kms_key_id = None
    # if "kmsKeyId" in event:
    #     kms_key_id = event["kmsKeyId"]
    # # Create s3 client object
    # s3_client = S3Client(role_arn, kms_key_id)
    s3_client = boto3.client('s3')
    # Perform consolidation
    return do_consolidation(labeling_job_arn, payload, label_attribute_name, s3_client)

def do_consolidation(labeling_job_arn, payload, label_attribute_name, s3_client):
    """
    Core Logic for consolidation
    :param labeling_job_arn: labeling job ARN
    :param payload: payload data for consolidation
    :param label_attribute_name: identifier for labels in output JSON
    :param s3_client: S3 helper class
    :return: output JSON string
    """
    # Extract payload data
    if "s3Uri" in payload:
        s3_ref = payload["s3Uri"]
        payload_bucket, payload_key = s3_ref.split('/', 2)[-1].split('/', 1)
        payload = json.loads(s3_client.get_object(Bucket=payload_bucket, Key=payload_key)['Body'].read())
        # print(payload)

    # Payload data contains a list of data objects.
    # Iterate over it to consolidate annotations for individual data object.
    consolidated_output = []
    success_count = 0  # Number of data objects that were successfully consolidated
    failure_count = 0  # Number of data objects that failed in consolidation
    for p in range(len(payload)):
        response = None
        try:
            dataset_object_id = payload[p]['datasetObjectId']
            log_prefix = "[{}] data object id [{}] :".format(labeling_job_arn, dataset_object_id)
            print("{} Consolidating annotations BEGIN ".format(log_prefix))
            annotations = payload[p]['annotations']
            # print("{} Received Annotations from all workers {}".format(log_prefix, annotations))

            # Iterate over annotations. Log all annotations to your CloudWatch logs
            annotationsFromAllWorkers = []
            for i in range(len(annotations)):
                worker_id = annotations[i]["workerId"]
                annotation_data = annotations[i]["annotationData"]
                annotation_content = annotation_data["content"]
                annotation_content_json = json.loads(annotation_content)
                annotation_job = annotation_content_json["new_test"]
                annotation_label = annotation_job["label"]
                consolidated_annotation = {
                    "workerId": worker_id,
                    "annotationData": {
                        "content": {
                            "annotatedResult": {
                                "instances": [{"label": annotation_label}]
                            }
                        }
                    }
                }
                annotationsFromAllWorkers.append(consolidated_annotation)

            consolidated_annotation = {"annotationsFromAllWorkers": annotationsFromAllWorkers}  # TODO : Add your consolidation logic

            # Build consolidation response object for an individual data object
            response = {
                "datasetObjectId": dataset_object_id,
                "consolidatedAnnotation": {
                    "content": {
                        label_attribute_name: consolidated_annotation,
                        label_attribute_name + "-metadata": {
                            "class-name": "Native",
                            "confidence": 0.00,
                            "human-annotated": "yes",
                            "creation-date": datetime.strftime(datetime.now(), "%Y-%m-%dT%H:%M:%S"),
                            "type": "groundtruth/custom"
                        }
                    }
                }
            }
            success_count += 1
            # print("{} Consolidating annotations END ".format(log_prefix))
        except Exception:
            failure_count += 1
            print(" Consolidation failed for dataobject {}".format(p))
            print(" Unexpected error: Consolidation failed." + str(sys.exc_info()[0]))

        # Append individual data object response to the list of responses.
        if response is not None:
            consolidated_output.append(response)

    print("Consolidation Complete. Success Count {} Failure Count {}".format(success_count, failure_count))
    print(" -- Consolidated Output -- ")
    print(consolidated_output)
    print(" ------------------------- ")
    return consolidated_output
As you can see above, the do_consolidation method returns an object hard-coded to include a class-name of "Native", and the lambda_handler method returns that same object. Here's the post-annotation function response:
[{
  "datasetObjectId": "4",
  "consolidatedAnnotation": {
    "content": {
      "new-test-14": {
        "annotationsFromAllWorkers": [{
          "workerId": "private.us-east-1.ea05a03fcd679cbb",
          "annotationData": {
            "content": {
              "annotatedResult": {
                "instances": [{
                  "label": "Native"
                }]
              }
            }
          }
        }]
      },
      "new-test-14-metadata": {
        "class-name": "Native",
        "confidence": 0,
        "human-annotated": "yes",
        "creation-date": "2021-05-19T07:06:06",
        "type": "groundtruth/custom"
      }
    }
  }
}]
As you can see, the post-annotation function's return value has the class-name of "Native" in the metadata, so I would expect the class-name to be present in the data object metadata, but it's not. And here's a screenshot of the data object summary:
It seems like Ground Truth overwrote the metadata, and now the object doesn't contain the correct label. I think that's why my label is coming through as the label attribute name "new-test-14" instead of as the correct label "Native". Here's a screenshot of the labeling job in the AWS web console:
The web console is supposed to show the label "Native" in the "Label" column, but instead I'm getting the <labelattributename> "new-test-14" in the label column.
Here is the output.manifest file generated by Ground Truth at the end:
{
  "source-ref": "s3://<file-name>",
  "text": "Hello world",
  "new-test-14": {
    "annotationsFromAllWorkers": [{
      "workerId": "private.us-east-1.ea05a03fcd679ert",
      "annotationData": {
        "content": {
          "annotatedResult": {
            "label": "Native"
          }
        }
      }
    }]
  },
  "new-test-14-metadata": {
    "type": "groundtruth/custom",
    "job-name": "new-test-14",
    "human-annotated": "yes",
    "creation-date": "2021-05-18T12:34:17.400000"
  }
}
What should I return from the Post-Annotation function? Am I missing something in my response? How do I get the proper label to appear in the AWS web console?
I am trying to tokenize a string value (passed in tabular format) with a custom regex infotype, but I'm having issues when I add more than one row to the table. If I pass a single row, it successfully tokenizes the string_value and returns the encoded string. I'm using the Python library for this.
The custom info type is currently set to match any value in a string for demo purposes, and the wrapped key is present in Cloud KMS (removed here for security reasons).
Following is the configuration that I am using:
# Construct FPE configuration dictionary
crypto_replace_ffx_fpe_config = {
    "crypto_key": {
        "kms_wrapped": {
            "wrapped_key": wrapped_key,
            "crypto_key_name": key_name,
        }
    }
}

# Add surrogate type
if surrogate_type:
    crypto_replace_ffx_fpe_config["surrogate_info_type"] = {
        "name": surrogate_type
    }

# Construct inspect configuration dictionary
inspect_config = {
    # "info_types": [{"name": info_type} for info_type in info_types],
    # "min_likelihood": "VERY_UNLIKELY",
    "custom_info_types": [
        {
            "info_type": {
                "name": "custom"
            },
            "exclusion_type": "EXCLUSION_TYPE_UNSPECIFIED",
            "likelihood": "POSSIBLE",
            "regex": {
                "pattern": "(?:.*)"
                # "pattern": ".*"
            }
        }
    ]
}

# Construct deidentify configuration dictionary
deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            {
                "primitive_transformation": {
                    "crypto_deterministic_config": crypto_replace_ffx_fpe_config
                }
            }
        ]
    }
}

item = {
    "table": {
        "headers": [
            {"name": header} for header in data_headers
        ],
        "rows": [
            {
                "values": [
                    {"string_value": "asa s.com"}
                ]
            },
            # Issue starts when the below row is added, with any value in string_value
            {
                "values": [
                    {"string_value": "14562#gmail.com"}
                ]
            }
        ]
    }
}

# Call the API
response = dlp.deidentify_content(
    parent,
    inspect_config=inspect_config,
    deidentify_config=deidentify_config,
    item=item,
)

# Print results
return response.item.table
If I send one row of data, I get a response like:
headers {
  name: "token"
}
rows {
  values {
    string_value: "EMAIL_ADDRESS(XX):XXXXXXXXXXXXXXXXXXX="
  }
}
And when I send an item with more than one row, I get back exactly what I originally sent to the API:
For example:
headers {
  name: "token"
}
rows {
  values {
    string_value: "asa s.com"
  }
}
rows {
  values {
    string_value: "14562#gmail.com"
  }
}
It seems like you are using InfoTypeTransformations in your DeidentifyConfig.
As per the documentation, you should use RecordTransformations instead, as this category of transformation "is applied to values within submitted tabular text data that are identified as a specific infoType, or on an entire column of tabular data" and treats the dataset as structured.
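For reference, a minimal sketch of that change, reusing the crypto_replace_ffx_fpe_config from the question. This is untested, and the "token" field name is a hypothetical column header used only for illustration:

# Wrap the same primitive transformation in record_transformations /
# field_transformations so the table is treated as structured data.
deidentify_config = {
    "record_transformations": {
        "field_transformations": [
            {
                # Columns to transform; "token" is a placeholder header name
                "fields": [{"name": "token"}],
                "info_type_transformations": {
                    "transformations": [
                        {
                            "primitive_transformation": {
                                "crypto_deterministic_config": crypto_replace_ffx_fpe_config
                            }
                        }
                    ]
                }
            }
        ]
    }
}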
I'm playing with the new Data API for Amazon Aurora Serverless.
Is it possible to get the table column names in the response?
If, for example, I run the following query against a user table with the columns id, first_name, last_name, email and phone:
const sqlStatement = `
  SELECT *
  FROM user
  WHERE id = :id
`;

const params = {
  secretArn: <mySecretArn>,
  resourceArn: <myResourceArn>,
  database: <myDatabase>,
  sql: sqlStatement,
  parameters: [
    {
      name: "id",
      value: {
        "stringValue": 1
      }
    }
  ]
};

let res = await this.RDS.executeStatement(params)
console.log(res);
I'm getting a response like this one, so I need to guess which column corresponds to each value:
{
  "numberOfRecordsUpdated": 0,
  "records": [
    [
      {
        "longValue": 1
      },
      {
        "stringValue": "Nicolas"
      },
      {
        "stringValue": "Perez"
      },
      {
        "stringValue": "example#example.com"
      },
      {
        "isNull": true
      }
    ]
  ]
}
I would like to have a response like this one:
{
  id: 1,
  first_name: "Nicolas",
  last_name: "Perez",
  email: "example#example.com",
  phone: null
}
Update 1:
I have found an npm module that wraps the Aurora Serverless Data API and simplifies development.
We decided to take the current approach because we were trying to cut down on the response size and including column information with each record was redundant.
You can explicitly choose to include column metadata in the result; see the "includeResultMetadata" parameter:
https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ExecuteStatement.html#API_ExecuteStatement_RequestSyntax
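As a sketch of how that parameter is passed (shown here with boto3; the ARNs and database name are the placeholders from the question):

import boto3

# RDS Data Service client; region and credentials come from the environment
client = boto3.client('rds-data')

response = client.execute_statement(
    resourceArn='<myResourceArn>',
    secretArn='<mySecretArn>',
    database='<myDatabase>',
    sql='SELECT * FROM user WHERE id = :id',
    parameters=[{'name': 'id', 'value': {'longValue': 1}}],
    includeResultMetadata=True,  # adds columnMetadata to the response
)

# With the flag set, response['columnMetadata'] holds one entry per column,
# so each record can be zipped with its column name.
columns = [col['name'] for col in response['columnMetadata']]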
I agree with the consensus here that there should be an out-of-the-box way to do this from the Data Service API. Because there is not, here's a JavaScript function that will parse the response.
const parseDataServiceResponse = res => {
  let columns = res.columnMetadata.map(c => c.name);
  let data = res.records.map(r => {
    let obj = {};
    r.map((v, i) => {
      obj[columns[i]] = Object.values(v)[0]
    });
    return obj
  })
  return data
}
I understand the pain, but it looks like this is reasonable given that a SELECT statement can join multiple tables, so duplicate column names may exist.
Similar to the answer above from @C. Slack, but I used a combination of map and reduce to parse the response from Aurora Postgres.
// declarative column names in array
const columns = ['a.id', 'u.id', 'u.username', 'g.id', 'g.name'];

// execute sql statement
const params = {
  database: AWS_PROVIDER_STAGE,
  resourceArn: AWS_DATABASE_CLUSTER,
  secretArn: AWS_SECRET_STORE_ARN,
  // includeResultMetadata: true,
  sql: `
    SELECT ${columns.join()} FROM accounts a
    FULL OUTER JOIN users u ON u.id = a.user_id
    FULL OUTER JOIN groups g ON g.id = a.group_id
    WHERE u.username=:username;
  `,
  parameters: [
    {
      name: 'username',
      value: {
        stringValue: 'rick.cha',
      },
    },
  ],
};

const rds = new AWS.RDSDataService();
const response = await rds.executeStatement(params).promise();

// parse response into json array
const data = response.records.map((record) => {
  return record.reduce((prev, val, index) => {
    return { ...prev, [columns[index]]: Object.values(val)[0] };
  }, {});
});
Hope this code snippet helps someone.
And here is the response:
[
  {
    'a.id': '8bfc547c-3c42-4203-aa2a-d0ee35996e60',
    'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
    'u.username': 'rick.cha',
    'g.id': 'ff6ebd78-a1cf-452c-91e0-ed5d0aaaa624',
    'g.name': 'valentree',
  },
  {
    'a.id': '983f2919-1b52-4544-9f58-c3de61925647',
    'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
    'u.username': 'rick.cha',
    'g.id': '2f1858b4-1468-447f-ba94-330de76de5d1',
    'g.name': 'ensightful',
  },
]
Similar to the other answers, but if you are using Python/Boto3:
def parse_data_service_response(res):
    columns = [column['name'] for column in res['columnMetadata']]
    parsed_records = []
    for record in res['records']:
        parsed_record = {}
        for i, cell in enumerate(record):
            key = columns[i]
            value = list(cell.values())[0]
            parsed_record[key] = value
        parsed_records.append(parsed_record)
    return parsed_records
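For example, assuming res came from an execute_statement call made with includeResultMetadata=True (the column metadata must be present for this to work):

# res is the raw Data API response dictionary
records = parse_data_service_response(res)
print(records)  # e.g. [{'id': 1, 'first_name': 'Nicolas', ...}]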
I've added to the great answer already provided by C. Slack to deal with AWS returning { "isNull": true } in the JSON for empty nullable character fields.
Here's my function, which handles this by returning an empty string value - this is what I would expect anyway.
const parseRDSdata = (input) => {
  let columns = input.columnMetadata.map(c => { return { name: c.name, typeName: c.typeName }; });

  let parsedData = input.records.map(row => {
    let response = {};
    row.map((v, i) => {
      // test the typeName in the column metadata, and also the keyName in the values -
      // we need to cater for a return value of { "isNull": true } - pflangan
      if ((columns[i].typeName == 'VARCHAR' || columns[i].typeName == 'CHAR') && Object.keys(v)[0] == 'isNull' && Object.values(v)[0] == true)
        response[columns[i].name] = '';
      else
        response[columns[i].name] = Object.values(v)[0];
    });
    return response;
  });

  return parsedData;
}
This is my JSON data, which is stored in Cosmos DB:
{
  "id": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
  "Name": "Name",
  "keyData": {
    "Keys": [
      "Government",
      "Training",
      "support"
    ]
  }
}
Now I want to write a query that eliminates the keyData level and gets only the Keys (like below):
{
  "userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
  "Name": "Name",
  "Keys": [
    "Government",
    "Training",
    "support"
  ]
}
So far I have tried a query like:
SELECT c.id, k.Keys FROM c
JOIN k IN c.keyPhraseBatchResult
which is not working.
Update 1:
After trying Sajeetharan's suggestion I can now get the result, but the issue is that it produces another JSON object inside the array, like:
{
  "id": "ee885fdc-9951-40e2-b1e7-8564003cd554",
  "keys": [
    {
      "serving": "Government"
    },
    {
      "serving": "Training"
    },
    {
      "serving": "support"
    }
  ]
}
Is there any way to extract only the array, without having the key-value pair again?
{
  "userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
  "Name": "Name",
  "Keys": [
    "Government",
    "Training",
    "support"
  ]
}
You could try this one:
SELECT C.id, ARRAY(SELECT VALUE serving FROM serving IN C.keyData.Keys) AS Keys FROM C
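With the sample document from the question, that query should return something like:

[
  {
    "id": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
    "Keys": [
      "Government",
      "Training",
      "support"
    ]
  }
]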
You can use a Cosmos DB stored procedure to produce your desired format, based on Sajeetharan's SQL.
function sample() {
  var collection = getContext().getCollection();
  var isAccepted = collection.queryDocuments(
    collection.getSelfLink(),
    'SELECT C.id, ARRAY(SELECT serving FROM serving IN C.keyData.Keys) AS keys FROM C',
    function (err, feed, options) {
      if (err) throw err;
      if (!feed || !feed.length) {
        var response = getContext().getResponse();
        response.setBody('no docs found');
      } else {
        var response = getContext().getResponse();
        // Unwrap each {serving: value} object into a plain array of values
        for (var i = 0; i < feed.length; i++) {
          var keyArray = feed[i].keys;
          var array = [];
          for (var j = 0; j < keyArray.length; j++) {
            array.push(keyArray[j].serving);
          }
          feed[i].keys = array;
        }
        response.setBody(feed);
      }
    });
  if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
Using the C# client library for Dialogflow, I am trying to set the output context in a webhook response. However, the output context field is read-only. This is my code:
WebhookResponse response = new WebhookResponse
{
    FulfillmentText = "This is a test",
    // Regardless of what I try and set OutputContexts to be, I get the error:
    // "property or indexer 'WebhookResponse.OutputContexts' cannot be assigned to -- it is read only"
    OutputContexts = ...
};
How do I set the output context?
I know this is an old question, but just in case someone has the same problem:
You cannot assign a new list to OutputContexts; you have to add contexts to the existing list.
For example:
response.OutputContexts.Add(new Context
{
    // Context names take the form
    // projects/<project>/agent/sessions/<session>/contexts/<context>
    Name = $"{request.Session}/contexts/your_context",
    LifespanCount = 1
});
I think the response JSON you are forming is wrong.
Below is the correct JSON response you need to send:
{
  "fulfillmentText": "This is a test",
  "outputContexts": [
    {
      "name": "projects/project_id/agent/sessions/session_id/contexts/your_context",
      "lifespanCount": 5,
      "parameters": {
        "foo": "bar",
        "foo1": "bar1"
      }
    }
  ],
  "followupEventInput": {
    "name": "event_name"
  }
}