Internationalization on serverless backend (AWS) - amazon-web-services

I'm building a serverless node.js web app on AWS (using Serverless Framework) and trying to implement internationalization on the backend (API Gateway/Lambda/DynamoDB).
On the front end (React), I use Redux to store the selected language and react-intl to switch between languages. What's the best way to implement internationalization on the backend?
Here are two ways I can think of, but there must be better ones.
A. Translate on the backend (Get language from path parameter)
path: {language}/validate
validate.js
export function main(event, context, callback) {
  const language = event.pathParameters.language;
  const data = JSON.parse(event.body);
  callback(null, validate(language, data));
}
This way, I need to pass the language as a function parameter everywhere, which is not desirable.
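If you'd rather not thread the language through every function by hand (option A's drawback), one pattern is a small wrapper that extracts it once per request. A sketch, where `withLanguage` and the fallback to the `Accept-Language` header are my own invention, not part of any framework:

```javascript
// Hypothetical helper: extract the language once from the event,
// instead of threading it through every function signature by hand.
function withLanguage(handler, defaultLanguage = "en") {
  return function (event, context, callback) {
    const language =
      (event.pathParameters && event.pathParameters.language) ||
      (event.headers && event.headers["Accept-Language"]) ||
      defaultLanguage;
    return handler(language, event, context, callback);
  };
}

// Usage: the wrapped handler receives the language as its first argument.
const main = withLanguage(function (language, event, context, callback) {
  const data = JSON.parse(event.body);
  callback(null, { language, data });
});
```

Each handler stays free of extraction logic, and the fallback order (path parameter, then header, then default) lives in one place.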
B. Translate on front-end (i18n, react-intl)
backend hello.js response
{
  id: "samplePage.message.hello",
  defaultMessage: `Hello, ${name}`,
  values: { name }
}
frontend hello.js
<FormattedMessage {...response} />
ja.json (translation file for i18n)
{
  "samplePage.message.hello": "こんにちは、{name}。"
}
This way, it looks like everything works fine without any trouble, but am I missing anything?

We do the same as you suggest in B): our backend runs on AWS Lambda and reads its data from DynamoDB, and all translation happens on the frontend.
The only difference is that we use i18next (specifically react-i18next; it makes little difference whether you use it or react-intl, i18next just offers a few more backends, caching, language detection, etc.: https://www.i18next.com/).
If you'd like to see this in action, check out https://locize.com (or try it directly at https://www.locize.io/, 14-day free trial). While the app is currently only available in English, all the texts are loaded via XHR and applied at runtime (i18n).
If you're interested in how we use serverless at locize.com, see the slides from a talk we gave last year: https://blog.locize.com/2017-06-22-how-locize-leverages-serverless/
Last but not least: if you want to get the most out of your ICU messages (validation, syntax highlighting, proper plural handling, and machine translation that doesn't destroy the ICU DSL during MT), give our service a try; it comes with a 14-day free trial.

Related

Can you start AI platform jobs from HTTP requests?

I have a web app (react + node.js) running on App Engine.
I would like to kick off (from this web app) a Machine Learning job that requires a GPU (running in a container on AI platform or running on GKE using a GPU node pool like in this tutorial, but we are open to other solutions).
I was thinking of trying what is described at the end of this answer, basically making an HTTP request to start the job using the projects.jobs.create API.
More details on the ML job in case this is useful: it generates an output every second that is stored on Cloud Storage and then read in the web app.
I am looking for examples of how to set this up. Where would the job configuration live, and how should I set up the API call to kick off that job? Are there other ways to achieve the same result?
Thank you in advance!
On Google Cloud, everything is an API, and you can interact with every product through HTTP requests, so you can definitely achieve what you want.
I don't have an example at hand, but essentially you have to build a JSON job description and POST it to the API.
Don't forget that when you interact with a Google Cloud API, you have to include an access token in the Authorization: Bearer header.
Where should be your job config description? It depends...
If it is strongly related to your App Engine app, you can keep it in the App Engine code itself, "hard coded". The downside is that any time you update the configuration, you have to deploy a new App Engine version; on the other hand, if the new version isn't correct, rolling back to a previous stable version is easy and consistent.
If you prefer to update your config and your App Engine code independently, you can store the config outside the App Engine code, on Cloud Storage for instance. The update is then simple: change the config file on Cloud Storage. However, there is no longer any relation between the App Engine version and the config version, and rolling back to a stable state can be more difficult.
You can also have a combination of both, where you keep a default job configuration in your App Engine code, and optionally set an environment variable pointing to a Cloud Storage file that contains a newer version of the configuration.
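That combination could be as small as a config-resolution helper. A sketch, where the `JOB_CONFIG_GCS_URI` variable name and the `fetchFromStorage` callback are made up for illustration (a real implementation would download and parse the file with the Cloud Storage client):

```javascript
// Default job config baked into the App Engine code.
const DEFAULT_JOB_CONFIG = { scaleTier: "BASIC", region: "us-central1" };

// Prefer a Cloud Storage override when the (hypothetical) env var is set,
// otherwise fall back to the hard-coded default.
function resolveJobConfig(env, fetchFromStorage) {
  if (env.JOB_CONFIG_GCS_URI) {
    // In a real app, fetchFromStorage would download and parse
    // the gs://... object via the Cloud Storage client.
    return fetchFromStorage(env.JOB_CONFIG_GCS_URI);
  }
  return DEFAULT_JOB_CONFIG;
}
```

This keeps the App Engine version and the config version loosely coupled while still guaranteeing a sane default.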
I don't know if it answers all your questions. Don't hesitate to comment if you want more details on some parts.
As mentioned, you can use the AI Platform API to create a job via a POST request.
Below is an example using JavaScript and the request library to trigger a job.
Some useful tips:
Use the Jobs console to create a job manually, then use the API to list that job: you'll get a perfect JSON example of how to trigger it.
You can use the Try this API tool to get the JSON output of the manually created job. Use this path to get the job: projects/<project name>/jobs/<job name>.
Get the authorization token from the OAuth 2.0 Playground for testing purposes (Step 2 -> Access token). Check the docs for the proper way to do this in production.
Not all parameters are required in the JSON; this is just one example of a job that I created, with the JSON obtained using the steps above.
JS Example:
var request = require('request');

request({
  url: 'https://content-ml.googleapis.com/v1/projects/<project-name>/jobs?alt=json',
  method: 'POST',
  headers: { "authorization": "Bearer ya29.A0AR9999999999999999999999999" },
  json: {
    "jobId": "<job name>",
    "trainingInput": {
      "scaleTier": "CUSTOM",
      "masterType": "standard",
      "workerType": "cloud_tpu",
      "workerCount": "1",
      "args": [
        "--training_data_path=gs://<bucket>/*.jpg",
        "--validation_data_path=gs://<bucket>/*.jpg",
        "--num_classes=2",
        "--max_steps=2",
        "--train_batch_size=64",
        "--num_eval_images=10",
        "--model_type=efficientnet-b0",
        "--label_smoothing=0.1",
        "--weight_decay=0.0001",
        "--warmup_learning_rate=0.0001",
        "--initial_learning_rate=0.0001",
        "--learning_rate_decay_type=cosine",
        "--optimizer_type=momentum",
        "--optimizer_arguments=momentum=0.9"
      ],
      "region": "us-central1",
      "jobDir": "gs://<bucket>",
      "masterConfig": {
        "imageUri": "gcr.io/cloud-ml-algos/image_classification:latest"
      }
    },
    "trainingOutput": {
      "consumedMLUnits": 1.59,
      "isBuiltInAlgorithmJob": true,
      "builtInAlgorithmOutput": {
        "framework": "TENSORFLOW",
        "runtimeVersion": "1.15",
        "pythonVersion": "3.7"
      }
    }
  }
}, function(error, response, body){
  console.log(body);
});
Result:
...
{
  createTime: '2022-02-09T17:36:42Z',
  state: 'QUEUED',
  trainingOutput: {
    isBuiltInAlgorithmJob: true,
    builtInAlgorithmOutput: {
      framework: 'TENSORFLOW',
      runtimeVersion: '1.15',
      pythonVersion: '3.7'
    }
  },
  etag: '999999aaaac='
}
Thank you everyone for the input. This was useful to help me resolve my issue, but I wanted to also share the approach I ended up taking:
I started by making sure I could kick off my job manually.
I used this tutorial with a config.yaml file that looked like this:
workerPoolSpecs:
  machineSpec:
    machineType: n1-standard-4
    acceleratorType: NVIDIA_TESLA_T4
    acceleratorCount: 1
  replicaCount: 1
  containerSpec:
    imageUri: <Replace this with your container image URI>
    args: ["--some=argument"]
When I had a job that could be kicked off manually, I switched to using
the Vertex AI Node.js API to start the job or cancel it. The API exists in other languages.
I know my original question was about HTTP requests, but having an API client in the language was a lot easier for me, in particular because I didn't have to worry about authentication.
I hope that is useful; happy to provide more details if needed.
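For reference, a sketch of what the custom-job payload looks like from Node.js with the Vertex AI client library (@google-cloud/aiplatform). The display name, image URI, and machine types below are placeholders mirroring the config.yaml above, and the actual submission call appears only in comments since it needs credentials:

```javascript
// Build a Vertex AI CustomJob payload equivalent to the config.yaml above.
// The real submission (commented out at the bottom) would use:
//   const { JobServiceClient } = require("@google-cloud/aiplatform");
function buildCustomJob(imageUri, args) {
  return {
    displayName: "my-training-job", // placeholder name
    jobSpec: {
      workerPoolSpecs: [
        {
          machineSpec: {
            machineType: "n1-standard-4",
            acceleratorType: "NVIDIA_TESLA_T4",
            acceleratorCount: 1,
          },
          replicaCount: 1,
          containerSpec: { imageUri: imageUri, args: args },
        },
      ],
    },
  };
}

// Submitting it would look roughly like:
// const client = new JobServiceClient({ apiEndpoint: "us-central1-aiplatform.googleapis.com" });
// await client.createCustomJob({
//   parent: "projects/<project>/locations/us-central1",
//   customJob: buildCustomJob("gcr.io/<project>/trainer:latest", ["--some=argument"]),
// });
```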

Consuming API's using AWS Lambda

I am a newcomer to AWS with very little cloud experience. The project I have is to call an API from NOAA, parse the returned XML document, and save the results to a database. I have an ASP.NET console app that does this pretty easily and successfully. However, I need to do the same thing in the cloud on a serverless architecture. Here are the steps I want it to take:
Lambda calls the API at NOAA every day at midnight
the API returns an XML doc with results
Parse the data and save the data to a cloud PostgreSQL database
It sounds simple, but I am having one heck of a time figuring out how to do this. I have a DB provisioned from AWS, as that is where data currently goes through my console app. Does anyone have advice or a resource I could look at? Also, I would prefer to keep this in .NET, but I realize I may need to move it to Python.
Thanks in advance everyone!
It's pretty simple, and you can test your code with the simple Python (boto3) Lambda code below.
Create a new Lambda function with admin access (temporarily set an admin role; you can then narrow it to the required role).
Add the following code:
https://github.com/mmakadiya/public_files/blob/main/lambda_call_get_rest_api.py
import json
import urllib3

def lambda_handler(event, context):
    http = urllib3.PoolManager()
    r = http.request('GET', 'http://api.open-notify.org/astros.json')
    print(r.data)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The above code calls a REST API to fetch data. This is just a sample program to help you get started.
MAKE SURE you account for the Lambda function's maximum run time of 15 minutes; it cannot run longer than that, so plan accordingly.
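To cover the "every day at midnight" requirement, the schedule can be attached to the function as an EventBridge (CloudWatch Events) trigger. A sketch in Serverless Framework syntax, where the function and handler names are placeholders:

```yaml
functions:
  fetchNoaaData:              # placeholder function name
    handler: handler.main     # placeholder handler
    timeout: 900              # Lambda's hard cap: 15 minutes
    events:
      - schedule: cron(0 0 * * ? *)   # every day at 00:00 UTC
```

The same cron expression can be configured directly on an EventBridge rule in the console if you aren't using a deployment framework.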

DynamoDB to 'vanilla' JSON

I'm starting out using some of the managed services in AWS. One thing that seems like it should be easy, is to use the API gateway to secure and expose calls to DynamoDB.
I've got this working. However, it seems a little clunky. DynamoDB returns something like this:
{
  "id": { "N": "3" }
  // Lots of other fields
}
When really I (and most other consumers out there) would like something like this:
{
  "id": "3"
  // Lots of other fields
}
The way I see it, I've got two options.
1) Add a response mapping field by field in the AWS API UI. This seems laborious and error prone:
#set($inputRoot = $input.path('$'))
{
  "Id": "$elem.Id.N"
  // Lots of other fields
}
2) Write a specific lambda between API Gateway and Dynamo that does this mapping. Like https://stackoverflow.com/a/42231827/2012130 This adds another thing in the mix to maintain.
Is there a better way? Am I missing something? Seems to be so close to awesome.
One option is the AWS SDK's DocumentClient, which marshals DynamoDB's typed attribute values to and from plain JSON for you:

const AWS = require('aws-sdk');

var db = new AWS.DynamoDB.DocumentClient({
  region: 'us-east-1',
  apiVersion: '2012-08-10'
});
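To illustrate what this DynamoDB-JSON-to-plain-JSON mapping involves, here is a minimal hand-rolled converter. It handles only the N (number) and S (string) attribute types; the real SDK's AWS.DynamoDB.Converter.unmarshall covers all of them:

```javascript
// Minimal sketch of DynamoDB-JSON -> plain-JSON unmarshalling,
// handling only the N (number) and S (string) attribute types.
function unmarshall(item) {
  const out = {};
  for (const key of Object.keys(item)) {
    const attr = item[key];
    if ("N" in attr) out[key] = Number(attr.N);
    else if ("S" in attr) out[key] = attr.S;
    else out[key] = attr; // other types left as-is in this sketch
  }
  return out;
}

// unmarshall({ id: { N: "3" }, name: { S: "foo" } })
//   -> { id: 3, name: "foo" }
```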
You can use an ODM like dynogels:
https://github.com/clarkie/dynogels
We use it heavily without having to deal with DynamoDB's syntax.
This brings Lambda and a language runtime into the mix, but it is much easier to handle the mapping when an object grows larger.
Hope this helps.
There's another option added today. It'll still involve a lambda step, but..
"The Amazon DynamoDB DataMapper for JavaScript is a high-level client for writing and reading structured data to and from DynamoDB, built on top of the AWS SDK for JavaScript."
https://aws.amazon.com/blogs/developer/introducing-the-amazon-dynamodb-datamapper-for-javascript-developer-preview/

Web service differences between REST and RPC

I have a web service that accepts JSON parameters and have specific URLs for methods, e.g.:
http://IP:PORT/API/getAllData?p={JSON}
This is definitely not REST, as it is not stateless: it takes cookies into account and has its own session.
Is it RPC? What is the difference between RPC and REST?
Consider the following example of HTTP APIs that model orders being placed in a restaurant.
The RPC API thinks in terms of "verbs", exposing the restaurant functionality as function calls that accept parameters, and invokes these functions via the HTTP verb that seems most appropriate - a 'get' for a query, and so on, but the name of the verb is purely incidental and has no real bearing on the actual functionality, since you're calling a different URL each time. Return codes are hand-coded, and part of the service contract.
The REST API, in contrast, models the various entities within the problem domain as resources, and uses HTTP verbs to represent transactions against these resources - POST to create, PUT to update, and GET to read. All of these verbs, invoked on the same URL, provide different functionality. Common HTTP return codes are used to convey status of the requests.
Placing an Order:
RPC: http://MyRestaurant:8080/Orders/PlaceOrder (POST: {Tacos object})
REST: http://MyRestaurant:8080/Orders/Order?OrderNumber=asdf (POST: {Tacos object})
Retrieving an Order:
RPC: http://MyRestaurant:8080/Orders/GetOrder?OrderNumber=asdf (GET)
REST: http://MyRestaurant:8080/Orders/Order?OrderNumber=asdf (GET)
Updating an Order:
RPC: http://MyRestaurant:8080/Orders/UpdateOrder (PUT: {Pineapple Tacos object})
REST: http://MyRestaurant:8080/Orders/Order?OrderNumber=asdf (PUT: {Pineapple Tacos object})
Example taken from sites.google.com/site/wagingguerillasoftware/rest-series/what-is-restful-rest-vs-rpc
You can't make a clear separation between REST or RPC just by looking at what you posted.
One constraint of REST is that it has to be stateless. If you have a session then you have state so you can't call your service RESTful.
The fact that you have an action in your URL (i.e. getAllData) is an indication of RPC. In REST you exchange representations, and the operation you perform is dictated by the HTTP verbs. Also, in REST, content negotiation isn't performed with a ?p={JSON} parameter.
I don't know if your service is RPC, but it is not RESTful. You can learn about the difference online; here's an article to get you started: Debunking the Myths of RPC & REST. You know best what's inside your service, so compare its functions to what RPC is and draw your own conclusions.
As others have said, a key difference is that REST URLs are noun-centric and RPC URLs are verb-centric. I just wanted to include this clear table of examples demonstrating that:
---------------------------+-------------------------------------+--------------------------
Operation | RPC (operation) | REST (resource)
---------------------------+-------------------------------------+--------------------------
Signup | POST /signup | POST /persons
---------------------------+-------------------------------------+--------------------------
Resign | POST /resign | DELETE /persons/1234
---------------------------+-------------------------------------+--------------------------
Read person | GET /readPerson?personid=1234 | GET /persons/1234
---------------------------+-------------------------------------+--------------------------
Read person's items list | GET /readUsersItemsList?userid=1234 | GET /persons/1234/items
---------------------------+-------------------------------------+--------------------------
Add item to person's list | POST /addItemToUsersItemsList | POST /persons/1234/items
---------------------------+-------------------------------------+--------------------------
Update item | POST /modifyItem | PUT /items/456
---------------------------+-------------------------------------+--------------------------
Delete item | POST /removeItem?itemId=456 | DELETE /items/456
---------------------------+-------------------------------------+--------------------------
Notes
As the table shows, REST tends to use URL path parameters to identify specific resources
(e.g. GET /persons/1234), whereas RPC tends to use query parameters for function inputs
(e.g. GET /readPerson?personid=1234).
Not shown in the table is how a REST API would handle filtering, which would typically involve query parameters (e.g. GET /persons?height=tall).
Also not shown is how with either system, when you do create/update operations, additional data is probably passed in via the message body (e.g. when you do POST /signup or POST /persons, you include data describing the new person).
Of course, none of this is set in stone, but it gives you an idea of what you are likely to encounter and how you might want to organize your own API for consistency. For further discussion of REST URL design, see this question.
It is RPC over HTTP. A correct implementation of REST should be different from RPC. Having logic that processes data, like a method/function, is RPC: getAllData() is an intelligent method. REST cannot have intelligence; it should be dumb data that can be queried by an external intelligence.
Most implementations I have seen these days are RPC, but many mistakenly call them REST. REST with HTTP is the saviour and SOAP with XML the villain. So your confusion is justified and you are right: it is not REST. But keep in mind that REST is not new (2000), even though SOAP/XML is older; JSON-RPC came later (2005).
The HTTP protocol alone does not make an implementation RESTful. Both REST (GET, POST, PUT, PATCH, DELETE) and RPC (GET + POST) can be built over HTTP (e.g. through a Web API project in Visual Studio).
Fine, but what is REST then?
The Richardson maturity model is summarized below; only level 3 is RESTful.
Level 0: HTTP POST
Level 1: each resource/entity has a URI (but still only POST)
Level 2: both POST and GET can be used
Level 3 (RESTful): uses HATEOAS (hypermedia links), in other words self-exploratory links
eg: level 3(HATEOAS):
Link states this object can be updated this way, and added this way.
Link states this object can only be read and this is how we do it.
Clearly, sending data is not enough to be REST; how to query the data should be specified too. Then again, why the four steps? Why can't it be just step 4, and call that REST? Richardson simply gave us a step-by-step approach to get there, that is all.
You've built web sites that can be used by humans. But can you also
build web sites that are usable by machines? That's where the future
lies, and RESTful Web Services shows you how to do it.
This book RESTful Web Services helps
A very interesting read RPC vs REST
REST is best described as working with resources, whereas RPC is more about actions.
REST
stands for Representational State Transfer. It is a simple way to organize interactions between independent systems.
RESTful applications commonly use HTTP requests to post data (create and/or update), read data (e.g., make queries), and delete data. Thus, REST can use HTTP for all four CRUD (Create/Read/Update/Delete) operations.
RPC
is basically used to communicate across different modules to serve user requests, e.g. in OpenStack, how nova, glance, and neutron work together when booting a virtual machine.
The URL you shared looks like an RPC endpoint.
Below are examples of both RPC and REST. Hopefully this helps in understanding when each can be used.
Let's consider an endpoint that sends app maintenance outage emails to customers.
This endpoint performs one specific action.
RPC
POST https://localhost:8080/sendOutageEmails
BODY: {"message": "we have a scheduled system downtime today at 1 AM"}
REST
POST https://localhost:8080/emails/outage
BODY: {"message": "we have a scheduled system downtime today at 1 AM"}
The RPC endpoint is more suitable in this case. RPC endpoints are usually used when the API call performs a single task or action. We could obviously use REST as shown, but the endpoint is not very RESTful, since we are not performing operations on resources.
Now let's look at an endpoint that stores some data in the database (a typical CRUD operation).
RPC
POST https://localhost:8080/saveBookDetails
BODY: {"id": "123", "name": "book1", "year": "2020"}
REST
POST https://localhost:8080/books
BODY: {"id": "123", "name": "book1", "year": "2020"}
REST is much better for cases like this (CRUD). Here, read (GET), delete (DELETE), or update (PUT) can be done using the appropriate HTTP methods; the method determines the operation on the resource (in this case, 'books').
Using RPC here is not suitable, as we would need a different path for each CRUD operation (/getBookDetails, /deleteBookDetails, /updateBookDetails), and this would have to be done for every resource in the application.
To summarize,
RPC can be used for endpoints that perform single specific action.
REST for endpoints where the resources need CRUD operations.
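A rough sketch of the two styles side by side. The routes and the tiny dispatcher here are made up for illustration; a real router would also match path parameters like /books/123:

```javascript
// Toy dispatcher: look up a handler by "METHOD path" and invoke it.
function dispatch(routes, method, path, payload) {
  const handler = routes[method + " " + path];
  return handler ? handler(payload) : { error: 404 };
}

// RPC style: one verb-named path per operation (the verb lives in the URL).
const rpcRoutes = {
  "POST /saveBookDetails": (body) => ({ saved: body.id }),
  "GET /getBookDetails": (query) => ({ id: query.id }),
  "POST /deleteBookDetails": (body) => ({ deleted: body.id }),
};

// REST style: one noun resource; the HTTP method selects the operation.
const restRoutes = {
  "POST /books": (body) => ({ saved: body.id }),
  "GET /books": (query) => ({ id: query.id }),       // real routers match /books/123
  "DELETE /books": (body) => ({ deleted: body.id }),
};
```

Note how adding a new operation in the RPC table means inventing a new path, while the REST table reuses the same path with a different method.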
Slack uses this style of HTTP RPC Web API's - https://api.slack.com/web
There are a bunch of good answers here. I would still refer you to this Google blog, as it does a really good job of discussing the differences between RPC and REST and captures something I didn't read in any of the answers here.
I would quote a paragraph from the same link that stood out to me:
REST itself is a description of the design principles that underpin HTTP and the world-wide web. But because HTTP is the only commercially important REST API, we can mostly avoid discussing REST and just focus on HTTP. This substitution is useful because there is a lot of confusion and variability in what people think REST means in the context of APIs, but there is much greater clarity and agreement on what HTTP itself is. The HTTP model is the perfect inverse of the RPC model—in the RPC model, the addressable units are procedures, and the entities of the problem domain are hidden behind the procedures. In the HTTP model, the addressable units are the entities themselves and the behaviors of the system are hidden behind the entities as side-effects of creating, updating, or deleting them.
I would argue thusly:
Does my entity hold/own the data? Then RPC: here is a copy of some of my data, manipulate the data copy I send to you and return to me a copy of your result.
Does the called entity hold/own the data? Then REST: either (1) show me a copy of some of your data or (2) manipulate some of your data.
Ultimately it is about which "side" of the action owns/holds the data. And yes, you can use REST verbiage to talk to an RPC-based system, but you will still be doing RPC activity when doing so.
Example 1: I have an object that is communicating to a relational database store (or any other type of data store) via a DAO. Makes sense to use REST style for that interaction between my object and the data access object which can exist as an API. My entity does not own/hold the data, the relational database (or non-relational data store) does.
Example 2: I need to do a lot of complex math. I don't want to load a bunch of math methods into my object, I just want to pass some values to something else that can do all kinds of math, and get a result. Then RPC style makes sense, because the math object/entity will expose to my object a whole bunch of operations. Note that these methods might all be exposed as individual APIs and I might call any of them with GET. I can even claim this is RESTful because I am calling via HTTP GET but really under the covers it is RPC. My entity owns/holds the data, the remote entity is just performing manipulations on the copies of the data that I sent to it.
This is how I understand and use them in different use cases:
Example: Restaurant Management
use-case for REST: order management
- create order (POST), update order (PATCH), cancel order (DELETE), retrieve order (GET)
- endpoint: /order?orderId=123
For resource management, REST is clean: one endpoint with pre-defined actions. It can be seen as a way to expose a DB (SQL or NoSQL) or class instances to the world.
Implementation Example:
class Order:
    def on_get(self, req, resp): doThis()
    def on_patch(self, req, resp): doThat()
Framework Example: Falcon for python.
use-case for RPC: operation management
- prepare ingredients: /operation/clean/kitchen
- cook the order: /operation/cook/123
- serve the order /operation/serve/123
For analytical, operational, non-responsive, non-representative, action-based jobs, RPC works better, and it is very natural to think functionally.
Implementation Example:
@route('/operation/cook/<orderId>')
def cook(orderId): doThis()

@route('/operation/serve/<orderId>')
def serve(orderId): doThat()
Framework Example: Flask for python
Over HTTP they both end up being just HttpRequest objects, and they both expect an HttpResponse object back. I think one can continue coding with that description and worry about something else.

Connect Adobe Flex To Arcgis Service?

I'm trying to connect my Flex app to my ArcGIS web services. I tried using the connect-to-web-service interface, but I keep getting this error:
There was an error during service introspection.
WSDLException: faultCode=PARSER_ERROR: Problem parsing 'http://localhost/ArcGIS/rest/services/geodata/MapServer'.:
org.xml.sax.SAXParseException: The element type "link" must be terminated by the matching end-tag "</link>".
My web service looks like this
ArcGIS Services Directory > geodata (MapServer)
Service Description:
Map Name: Layers
Legend
All Layers and Tables
Layers:
Geocoding_Result layer (0)
Tables:
Description:
Copyright Text:
Spatial Reference: 4326
Single Fused Map Cache: false
Initial Extent:
XMin: -95.901360470612
YMin: 29.4513469530748
XMax: -95.1472749640384
YMax: 30.045474927951
Spatial Reference: 4326
Full Extent:
XMin: -100.3273442
YMin: 29.451583
XMax: -94.8230278
YMax: 31.250677
Spatial Reference: 4326
Units: esriDecimalDegrees
Supported Image Format Types:
PNG24,PNG,JPG,DIB,TIFF,EMF,PS,PDF,GIF,SVG,SVGZ,AI,BMP
Document Info:
Title:
Author:
Comments:
Subject:
Category:
Keywords:
Credits:
Supported Interfaces: REST SOAP
Supported Operations: Export Map
Identify Find
Maybe you already know... but if you are trying to connect to an ArcGIS Server using Flex, you might be interested in the ArcGIS API for Flex (http://links.esri.com/flex); it will take care of most of that for you.
Antarr,
It's a little hard to tell from your question what exactly you are trying to do. But here are a couple possibilities:
1) It looks like you might be trying to add a reference to this service via Flash Builder's "Connect to Web Service" dialog, which you would use for a SOAP web service, but not for the REST endpoint you note above (http://localhost/ArcGIS/rest/services/geodata/MapServer). If you are intending to use the REST endpoints, then you need to use the appropriate ESRI ArcGIS API for Flex class (for example DynamicMapServiceLayer or Locator) for whatever you are trying to do (generate a map image, geocode addresses, etc). Look at the ESRI help on the Flex API for more information:
http://help.arcgis.com/en/webapi/flex/apiref/index.html
2) If you are intending to use the ESRI SOAP API then you do want to use Flash Builder's "Connect to Web Service" dialog, but then you must use the SOAP service endpoint, which would be something like: http://localhost/ArcGIS/services/geodata//MapServer?wsdl (though I don't know why you'd want to do this since the Flex API is really designed to be used with ESRI's REST service endpoints).
3) The only layer in your service is called "Geocoding_Result" - is that an actual feature layer or just a temporary output from a geocoding operation done with ArcMap? I am not sure whether a temporary result would work when published as a service.
See if any of these suggestions help. If not, then clarify what you are trying to do and perhaps I can give you more specific assistance.