Best way to translate JSON using AWS Translate - amazon-web-services

Is there a suggested way to translate a JSON object using the AWS Translate API, while leaving the keys untranslated?
I've tried a recursive loop that translates each string individually, but that has produced inconsistent results. Is there a better way?
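One workable pattern is to walk the parsed JSON recursively and send only string *values* to the translator, so keys (and non-string values) are never touched. Here is a minimal sketch; the boto3 `translate_text` wiring shown in the comment is an assumption about how you would connect it to AWS, while the walker itself is plain Python:

```python
def translate_json(node, translate):
    """Recursively translate string values only; keys are left untouched.

    `translate` is any callable str -> str. With AWS it might wrap
    boto3's translate_text, as sketched in the comment below.
    """
    if isinstance(node, dict):
        return {key: translate_json(value, translate) for key, value in node.items()}
    if isinstance(node, list):
        return [translate_json(item, translate) for item in node]
    if isinstance(node, str):
        return translate(node)
    return node  # numbers, booleans, None pass through unchanged

# Hypothetical AWS wiring (assumes boto3 and valid credentials):
# client = boto3.client("translate")
# translate = lambda text: client.translate_text(
#     Text=text, SourceLanguageCode="en", TargetLanguageCode="de"
# )["TranslatedText"]

doc = {"title": "Hello", "tags": ["world", 42]}
print(translate_json(doc, str.upper))  # → {'title': 'HELLO', 'tags': ['WORLD', 42]}
```

Inconsistency often comes from translating short strings with no surrounding context; where your schema allows it, grouping related strings into a single request can help.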

Related

Write C++ data to Apache Parquet: ParquetFileWriter or Write Arrow Table?

I'm looking for the proper way to write data to a Parquet file in C++. It seems there are two choices: write directly to Parquet, or write to an Arrow Table and then convert it to Parquet.
Is writing to Arrow then converting to Parquet with WriteTable preferred? Would either performance considerations or ease of use drive one to write directly to Parquet with the ParquetFileWriter or some other tool?
Looking at the code first, ParquetFileWriter seemed like the right bet, but its usage in the unit tests looked clunky.
Then I found the docs, which say to use the WriteTable free function. WriteTable takes an Apache Arrow Table, so it seems I must build one of those first. That gave me pause, because it means opening the lid on Arrow as well.

Preprocessing data in EMR

I want to crunch 10 PB of data. The input is in a proprietary format (stored in S3), and the first preprocessing step is to convert it to CSV and move it back to S3. Due to some constraints, I can't couple the preprocessing step with the map task. What would be the correct way to do that?
I'm planning to use AWS EMR for this. One option would be to run a separate EMR job with no reduce task and upload the data to S3 in the map phase. Is there a better approach? Running a map-reduce job without a reduce task just for preprocessing feels like a hacky solution.
It would seem you have at least two options:
Convert the data into a format you find easier to work with. You might want to look at formats such as Parquet or Avro. A map-only task is an appropriate method here; you would only use a reducer if you wanted to control the number of files produced, i.e. to combine lots of small files into a larger one.
Create a custom InputFormat and just read the data directly. There are lots of resources on the net about how to do this. Depending on what this proprietary format looks like, you might need to do this anyway to achieve #1.
A few things for you to think about are:
Is the proprietary format space efficient compared with other formats?
How easy is the format to work with, would making it into a CSV make your processing jobs simpler?
Is the original data ever updated or added to? Would you continually need to convert it to another format, or update already-converted data?
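For option 1, a map-only job is often easiest to express with Hadoop Streaming, where the mapper is just a script that reads records on stdin and writes CSV lines on stdout. A minimal sketch, assuming a *hypothetical* pipe-delimited record layout (substitute the parsing logic for your actual proprietary format):

```python
import csv
import io
import sys

def convert_record(line):
    """Convert one record of a hypothetical 'id|timestamp|payload'
    pipe-delimited format into a properly quoted CSV line.
    Replace this parsing with your real proprietary-format logic."""
    record_id, timestamp, payload = line.rstrip("\n").split("|", 2)
    out = io.StringIO()
    csv.writer(out).writerow([record_id, timestamp, payload])
    return out.getvalue().rstrip("\r\n")

def run_mapper(stdin=sys.stdin, stdout=sys.stdout):
    # Hadoop Streaming mapper loop: records come in on stdin,
    # CSV goes out on stdout; with zero reducers the job is map-only.
    for line in stdin:
        print(convert_record(line), file=stdout)

# Call run_mapper() when this script is used as the job's -mapper.
```

Configure the streaming job with zero reducers (e.g. `-D mapreduce.job.reduces=0`) and point its output at S3, and the map phase alone does the conversion.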

Retrieve data from Account Server

I'm trying to make a game launcher in C++ and I was wondering if I could get some guidance on how to carry out this task. Basically I've created an API which outputs the account data in JSON format.
e.g. {"success":true,"errorCode":0,"reason":"You're now logged in!"}
http_fetch("http://www.example.com/api/login.php?username="+username+"&password="+password+"");
How am I able to retrieve the data?
Sorry if you don't understand. English isn't my first language :)
-brownzilla
Look for a library that allows you to parse JSON. Some examples:
picojson
RapidJSON
Both are quite easy to use and let you turn JSON into a model that you can then map onto your own types. Alternatively, you could write your own JSON parser, though that would be a fair bit of work (reinventing the wheel, perhaps).

Selecting content based on locale

Given that I have descriptions of, say, product X in multiple languages, along with price and availability dictated by region/locale, how do I go about telling Django to render the most appropriate variant of the content based on the region of the request's origin? Amazon would be a good example of what I am trying to achieve.
Is it best to store each variant in the database and then look at the request headers to serve the most appropriate content, or is there a best-practice way to achieve this?
I was struggling with the same problem. The localeurl library seems to handle these cases, so you don't have to write the logic yourself. I still haven't tested the library, but at first glance it seems to be exactly what we need. You can read more about it here
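If you do store each variant in the database, the request-header half of the problem can be as simple as parsing Accept-Language and matching it against the locales you have. A minimal sketch below; note that Django itself ships `django.utils.translation.get_language_from_request`, which is more robust, and the `catalog` dict here is just a stand-in for your per-locale database lookup:

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (language, quality) pairs,
    best first. Deliberately minimal; real-world parsing has more cases."""
    langs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        lang, _, q = piece.partition(";q=")
        try:
            quality = float(q) if q else 1.0
        except ValueError:
            quality = 0.0
        langs.append((lang.lower(), quality))
    return sorted(langs, key=lambda pair: -pair[1])

def pick_variant(header, variants):
    """Pick the entry in `variants` (locale -> content) that best matches
    the request's Accept-Language header, falling back to 'en'."""
    for lang, _ in parse_accept_language(header):
        base = lang.split("-")[0]  # 'de-DE' also matches a plain 'de' variant
        if lang in variants:
            return variants[lang]
        if base in variants:
            return variants[base]
    return variants.get("en")

catalog = {"en": "Widget, $9.99", "de": "Widget, 9,49 €"}
print(pick_variant("de-DE,de;q=0.9,en;q=0.8", catalog))  # → Widget, 9,49 €
```

localeurl solves the related piece of putting the chosen locale into the URL, so the two approaches can complement each other.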

json to std::string in c++ from an URL

I want to implement something where I will get a URL that links to a JSON document; I need to fetch the JSON from this URL and convert it to a std::string in C++. Is there an easy way to do this?
Use a library such as cURL (or one of many others available) to fetch the URL, and then store the result in an std::string.
If you are on Windows then there is no need for the cURL library: try the API call URLOpenBlockingStream. It is not hard to use and will download the JSON response from the server. JSON is structurally similar to XML, although there are noticeable syntactic differences. That said, it shouldn't be too hard to write a simple parser.