Selecting content based on locale - Django

Given that I have descriptions of, say, product X in multiple languages, along with prices and availability dictated by region/locale, how do I tell Django to render the most appropriate variant of the content based on the region the request originates from? Amazon would be a good example of what I am trying to achieve.
Is it best to store each variant in the database and then look at the request headers to serve the most appropriate content, or is there a best-practice way to achieve this?

I was struggling with the same problem. The localeurl library seems to handle these cases, so you don't have to write the logic yourself. I still haven't tested the library, but at first glance it seems to be exactly what we need. You can read more about it here.
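In case it is useful, here is a minimal sketch of the database-variant approach from the question, independent of localeurl. It assumes Django's LocaleMiddleware is enabled (so request.LANGUAGE_CODE reflects the Accept-Language header or URL prefix); the model, field, and fallback names are only illustrative.

    # models.py -- one row per (product, locale) variant; names are illustrative
    from django.db import models

    class Product(models.Model):
        sku = models.CharField(max_length=32, unique=True)

    class ProductVariant(models.Model):
        product = models.ForeignKey(Product, related_name="variants",
                                    on_delete=models.CASCADE)
        locale = models.CharField(max_length=10)   # e.g. "en-us", "de-de"
        description = models.TextField()
        price = models.DecimalField(max_digits=10, decimal_places=2)
        available = models.BooleanField(default=True)

        class Meta:
            unique_together = ("product", "locale")

    # views.py -- pick the variant for the request's locale, with a fallback
    from django.shortcuts import get_object_or_404, render

    FALLBACK_LOCALE = "en-us"

    def product_detail(request, sku):
        product = get_object_or_404(Product, sku=sku)
        locale = getattr(request, "LANGUAGE_CODE", FALLBACK_LOCALE).lower()
        variant = (product.variants.filter(locale=locale).first()
                   or product.variants.filter(locale=FALLBACK_LOCALE).first())
        return render(request, "product_detail.html",
                      {"product": product, "variant": variant})

As far as I can tell, localeurl's main job is making sure the active language is set from the URL, so the content selection itself stays a plain database query like the one above.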

Related

What is a design pattern to store JSON objects in C++?

A co-worker and I have been discussing the best way to store data in memory within our C++ server. Basically, we need to store all requisitions made by clients. Those requisitions come as JSON objects, so each requisition may have a different number of parameters. Later, clients can ask the server for a list of those requisitions.
The total number of requisitions is small (on the order of 10^3). Clients ask for the list of requisitions using pagination.
So my question is: what is the standard way of doing that?
1) Create a class that stores every JSON and then, when requested, sends the list of those JSONs.
2) Deserialize the JSON, store it in a class, then serialize the data again when requested.
If 2, what is the best way of doing that in modern C++?
3) Another option?
Thank you.
If the client asks you to support JSON, there are only two steps you need to take:
Add a JSON library (e.g. this one) with a suitable license to the project.
Use it.
If the implementation of JSON is not the main goal of the project, this should work.
Note: you can also get a lot of design hints by inspecting the aforementioned repo.

What are the best practices for user uploads with S3?

I was wondering what you recommend for running a user upload system with S3. I plan on using MongoDB for storing metadata such as the uploader, size, etc. How should I go about storing the actual files in S3?
Here are some of my ideas, what do you think is the best? All of these examples would involve saving the metadata to MongoDB.
1. Should I just store all the files in a bucket?
2. Maybe organize them into dates (e.g. 6/8/2014/mypicture.png)?
3. Should I save them all in one bucket, but with an added string (such as d1JdaZ9-mypicture.png) to avoid duplicates?
4. Or should I generate a long string for a folder and store the file in that folder (to retain the original file name)? e.g. sh8sb36zkj391k4dhqk4n5e4ndsqule6/mypicture.png
This depends primarily on how you intend to use the pictures and which objects/classes/modules/etc. in your code will actually deal with retrieving them.
If you find yourself wanting to do things like "all user uploads on a particular day", a simple naming convention with folders for the year, month, and day, along with a top-level folder for the user's unique ID, will solve the problem.
If you want to ensure uniqueness and avoid collisions in your bucket, you could generate a unique string too.
However, since you've got MongoDB, which (I'm assuming) will actually handle these queries for user uploads by date, etc., the choice of your bucket layout is more aesthetic than functional.
If all you're storing in MongoDB is the key/URL, it doesn't really matter what the actual structure of your bucket is. Nevertheless, it makes sense to still split this up in some coherent way: maybe group all of a user's uploads together and give each one a unique name (either generate a unique name or prepend a unique prefix to the original file name).
That being said, might there come a point when you change how your images are stored? You might move to a CDN, or a third party might come up with an even cheaper/better product you want to try. In a case like that, storing full keys/URLs in MongoDB is not a good idea, since you would have to update every entry.
To make this relatively future-proof, I suggest you give your uploads a definite structure. I usually opt for:
bucket_name/user_id/yyyy/mm/dd/unique_name.jpg
Your database then only needs to store the file name and the upload time stamp.
You can introduce a middle layer in your logic (a new class perhaps or just a helper function/method) which then generates the URL for a file based on this info. That way, if you change your storage method later, you only need to make a small change in this middle layer (after migrating your files of course) and not worry about MongoDB.
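To make that middle layer concrete, here is a minimal sketch in Python (the bucket name and helper names are made up, and it assumes the bucket_name/user_id/yyyy/mm/dd/unique_name.ext layout above):

    import uuid
    from datetime import datetime, timezone
    from typing import Optional
    from urllib.parse import quote

    BUCKET = "my-uploads-bucket"  # illustrative

    def build_key(user_id: str, original_name: str,
                  uploaded_at: Optional[datetime] = None) -> str:
        """Build the object key: user_id/yyyy/mm/dd/unique_name.ext"""
        uploaded_at = uploaded_at or datetime.now(timezone.utc)
        ext = original_name.rsplit(".", 1)[-1].lower() if "." in original_name else "bin"
        return f"{user_id}/{uploaded_at:%Y/%m/%d}/{uuid.uuid4().hex}.{ext}"

    def file_url(key: str) -> str:
        """The only place that knows where files physically live;
        swap this out if you later move to a CDN or another provider."""
        return f"https://{BUCKET}.s3.amazonaws.com/{quote(key)}"

MongoDB then only stores the key (plus the original file name and the timestamp), and every consumer goes through file_url() instead of persisting full URLs.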

Best practice for creating an unalterable report file in C++

I am currently developing a Windows application that tests railroad equipment to find any defects.
Utility A => OK
Utility B => NOK
...
This application will check the given equipment and generate a report.
This report needs to be written once, and no further modifications are allowed, since the file can be used as proof that the equipment works.
My first idea was to use PDF files (the Haru library looks great), but PDFs can also be modified.
I told myself that I could obfuscate the report and implement a homemade reader inside my application, but whichever way I store it, the file could always be accessed and modified, right?
So I'm running out of ideas.
Sorry if my approach and my problem appear naive, but this is an internship.
Thanks for any help.
Edit: I could also compute checksums for the files after I generate them, keep a "checksum record file", and implement a checksum comparison tool for verification? I just thought about this.
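A minimal sketch of that checksum-record idea (shown in Python for brevity; the record file name and the choice of SHA-256 are just assumptions):

    import hashlib
    from pathlib import Path

    RECORD_FILE = Path("checksums.txt")  # illustrative name

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_report(report: Path) -> None:
        """Append '<digest>  <file name>' right after the report is generated."""
        with RECORD_FILE.open("a", encoding="utf-8") as rec:
            rec.write(f"{sha256_of(report)}  {report.name}\n")

    def verify_report(report: Path) -> bool:
        """True if the file's current digest matches the recorded one."""
        current = sha256_of(report)
        for line in RECORD_FILE.read_text(encoding="utf-8").splitlines():
            if not line.strip():
                continue
            digest, name = line.split(None, 1)
            if name == report.name:
                return digest == current
        return False

(Of course, anyone who can modify a report could also modify the record file, so this only shifts the problem rather than solving it.)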
I believe the answer to your question is to use any format whatsoever and add a digital signature that anybody can verify: create a GnuPG key, get that key signed by the people who need to check your documents, upload it to one of the key servers, and use it to sign the documents. You can publish the documents and have a link to your public key available for verification; for critical cases, anyone verifying must trust your signature (i.e., trust somebody who signed your key).
People's lives depend on the state of train inspections. Therefore, I find it hard to believe that someone expects you to solve this problem only using free-as-in-beer components.
Adobe supports a strong digital signature model. If you buy into their technology base, you can create PDFs that are digitally signed and therefore tamper-evident, as the consumer can check for the signature.
You can, as someone else pointed out, use GNUpg, or for that matter OpenSSL, to implement your own signature scheme, but railroad regulators are somewhat less likely to figure out how to work with it.
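For the roll-your-own route, here is a minimal detached-signature sketch using Python's cryptography package (an OpenSSL front end); key management is omitted and the names are illustrative, and a C++ application would do the equivalent through OpenSSL's EVP API:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # One-time setup: generate a key pair and distribute the public key
    # to whoever needs to verify the reports.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    public_key = private_key.public_key()

    PSS_PADDING = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                              salt_length=padding.PSS.MAX_LENGTH)

    def sign_report(report_bytes: bytes) -> bytes:
        """Detached signature, stored next to the report (e.g. report.pdf.sig)."""
        return private_key.sign(report_bytes, PSS_PADDING, hashes.SHA256())

    def verify_report(report_bytes: bytes, signature: bytes) -> bool:
        """Anyone holding the public key can check the report was not altered."""
        try:
            public_key.verify(signature, report_bytes, PSS_PADDING, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

The real work is key management: the private key has to stay out of reach of anyone who might want to alter a report.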
I would store reports in an encrypted/protected datastore.
When a user accesses a report (requests a copy; the original is of course always in the database and cannot be modified), it includes the text "Report #XXXXX". If you want to validate the report, retrieve a new copy from the system using the Report ID.

Yahoo! PlaceFinder API

Is there a way to get all of the POIs (Points of Interest) and AOIs (Areas of Interest), let's say for a specific state?
I would like to be able to auto-complete while someone is typing if the input matches any of that data, but I don't know how to get a list of all of it.
I would even be fine with storing it in my database if need be, because I can't see it changing very often.
No, PlaceFinder itself does not provide a way to download its places database. It's only intended for individual lookups.
You might look at the GeoPlanet data downloads, which make the entire WOEID (Where On Earth ID) geo database available under a CC license.

Web Service to return complex object with optional parts

I'm trying to think of the correct design for a web service. Essentially, this service is going to perform a client search in a number of disparate systems, and return the results.
Now, a client can have various pieces of information attached, e.g. contact information, their address(es), and personal information. Some of this information may be complex to retrieve from some systems, so if the consumer isn't going to use it, I'd like them to have some way of indicating that to the web service.
One obvious approach would be to have different methods for different combinations of wanted detail, but as the combinations grow, so too does the number of methods. Another approach I've looked at is to add two string-array parameters to the method call, where one array is a list of required items (e.g. I require contact information), and the other is optional items (e.g. if you're going to pull in their names anyway, you might as well return that to me).
A third approach would be to add additional methods to retrieve the detail. But that's going to explode the number of round trips if I need all the details for potentially hundreds of clients who make up the result.
To be honest, I'm not sure I like any of the above approaches. So how would you design such a generic client search service?
(Considered CW since there might not be a single "right" answer, but I'll wait and see what sort of answers arrive)
Create a "criteria" object and use that as a parameter. Such an object should have a bunch of properties to indicate the information you want. For example "IncludeAddresses" or "IncludeFullContactInformation".
The consumer is then responsible for setting the right properties to true, and all combinations are possible. This will also make the code in the service easier to write. You can simply write if(criteria.IncludeAddresses){response.Addresses = GetAddresses();}
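A rough sketch of that shape (in Python for illustration; every property, helper, and type name here is made up):

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ClientSearchCriteria:
        name_query: str
        include_addresses: bool = False
        include_contact_information: bool = False
        include_personal_information: bool = False

    @dataclass
    class ClientResult:
        client_id: str
        name: str
        addresses: Optional[List[str]] = None
        contact_information: Optional[dict] = None
        personal_information: Optional[dict] = None

    def search_clients(criteria: ClientSearchCriteria) -> List[ClientResult]:
        # find_matching_clients / get_* are hypothetical back-end lookups,
        # some of which are expensive and therefore only run when asked for.
        results = [ClientResult(client_id=c.id, name=c.name)
                   for c in find_matching_clients(criteria.name_query)]
        for result in results:
            if criteria.include_addresses:
                result.addresses = get_addresses(result.client_id)
            if criteria.include_contact_information:
                result.contact_information = get_contact_info(result.client_id)
            if criteria.include_personal_information:
                result.personal_information = get_personal_info(result.client_id)
        return results

New optional pieces then become new boolean flags with safe defaults rather than new methods, which keeps the contract backward compatible.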
Any non-structured or semi-structured data is best handled as XML. You might pass XML data via a string, or wrap it up in a class that adds some functionality to it. Use XPathNavigator to go through the XML. You can also use the XMLDocument class, although it is not too friendly to use. Either way, you will need some kind of class to handle the XML content.
That's why XML was invented: to handle data whose structure is not clearly defined.
Regards,
Maciej