I'm curious if Docmosis can support pulling information from a database to generate a table. For example, I want to generate a document that shows, in a table, topics about various states, like the state capital, state flower, state population, etc.
I have a form that collects information from the user: which states do you want to include (from a multi-select pick list) and which topics about each state do you want to include (again, from a multi-select pick list). But the topics about each state are stored in a separate "database". It could be a Google Sheet, SharePoint list, etc. That information is NOT included in the Docmosis template.
When Docmosis generates the document, it would iterate through the information provided (every state and topic), pull that information from the database, and then insert it into the generated document.
If so, how is this done?
Docmosis expects the data to be provided at the time the document generation request is made. This means your code typically sources and organizes the data, then provides that to Docmosis to use.
Doing it this way also has a significant advantage for diagnosing problems: when something is not working as expected, reviewing the data alongside the template can often reveal the problem.
Specifically, the Docmosis Java product can pull data out of a database using the DataProviderBuilder.addSQL() methods, and you can add multiple SQL sources. However, as mentioned above, it is better to extract the data as a separate stage so it can be debugged and diagnosed independently.
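For illustration only, here is a rough sketch of that separate data-gathering stage (written in C++ with nlohmann/json purely as an example; the same idea applies in any language). The fetchStateRow lookup, the field names, and the sample values are hypothetical stand-ins for your Google Sheet / SharePoint / SQL source, and the actual render call is left as a comment because it depends on which Docmosis product and API you use.

```cpp
// Rough sketch only: builds the render data as a separate, inspectable stage.
// Uses nlohmann/json; fetchStateRow and all field names/values are hypothetical
// stand-ins for your real Google Sheet / SharePoint / SQL lookup.
#include <nlohmann/json.hpp>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using json = nlohmann::json;

// Stand-in for the external data source: returns all known topics for one state.
std::map<std::string, std::string> fetchStateRow(const std::string& state) {
    static const std::map<std::string, std::map<std::string, std::string>> db = {
        {"Ohio",  {{"capital", "Columbus"}, {"flower", "Carnation"}, {"population", "11.8M"}}},
        {"Texas", {{"capital", "Austin"},   {"flower", "Bluebonnet"}, {"population", "30.5M"}}}};
    auto it = db.find(state);
    return it != db.end() ? it->second : std::map<std::string, std::string>{};
}

// Keep only the states and topics the user selected on the form.
json buildRenderData(const std::vector<std::string>& states,
                     const std::vector<std::string>& topics) {
    json rows = json::array();
    for (const auto& name : states) {
        auto row = fetchStateRow(name);
        json entry = {{"name", name}};
        for (const auto& topic : topics)
            entry[topic] = row.count(topic) ? row[topic] : std::string{};
        rows.push_back(entry);
    }
    return json{{"states", rows}};
}

int main() {
    json data = buildRenderData({"Ohio", "Texas"}, {"capital", "flower"});
    std::cout << data.dump(2) << "\n";  // review the data alongside the template
    // ...then pass `data` to your Docmosis render request (Java API, cloud, etc.)
}
```

Because the data is assembled and printed before rendering, you can check it against the template whenever the output looks wrong.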
A co-worker and I have been discussing the best way to store data in memory within our C++ server. Basically, we need to store all requisitions made by clients. Those requisitions come as JSON objects, so each requisition may have a different number of parameters. Later, clients can ask the server for a list of those requisitions.
The total number of requisitions is small (order of 10^3). Clients ask for the list of requisitions using pagination.
So my question is what is the standard way of doing that?
1) Create a class that stores every JSON and then, when requested, send the list of those JSONs.
2) Deserialize the JSON, store it in a class, then serialize the data again when requested.
If option 2, what is the best way of doing that in modern C++?
3) Another option?
Thank you.
If the client asks you to support JSON, there are only two steps you need to take:
Add a JSON library (e.g. this one) with a suitable license to the project.
Use it.
If the implementation of JSON is not the main goal of the project, this should work.
Note: you can also get a lot of design hints by inspecting the aforementioned repo.
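As a concrete example, here is a minimal sketch of option 1 using nlohmann/json as the library (the class and member names are just illustrative):

```cpp
// Minimal sketch of option 1: keep each requisition as a parsed json value
// (schema-free, so a varying number of parameters is fine) and serialize a
// page of them on demand.
#include <nlohmann/json.hpp>
#include <iostream>
#include <string>
#include <vector>

using json = nlohmann::json;

class RequisitionStore {
public:
    void add(const std::string& body) {
        store_.push_back(json::parse(body));      // throws on malformed input
    }

    // Returns one page of requisitions as a JSON array string.
    std::string page(std::size_t pageIndex, std::size_t pageSize) const {
        json out = json::array();
        const std::size_t begin = pageIndex * pageSize;
        for (std::size_t i = begin; i < store_.size() && i < begin + pageSize; ++i)
            out.push_back(store_[i]);
        return out.dump();
    }

private:
    std::vector<json> store_;                     // ~10^3 entries: fine in memory
};

int main() {
    RequisitionStore store;
    store.add(R"({"id": 1, "item": "bolts", "qty": 500})");
    store.add(R"({"id": 2, "item": "paint", "color": "red"})");  // different fields
    std::cout << store.page(0, 10) << "\n";
}
```

Option 2 (deserializing into your own structs) mainly pays off if the server itself needs to validate or query individual fields; for a few thousand schema-free objects, keeping the parsed json values in a vector is usually enough.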
In the hybris wiki trails, there is mention of core data vs. essential data vs. sample data. What is the difference between these three types of data?
Ordinarily, I would assume that sample data is illustrative gobbledygook data created to populate the example apparel and electronics storefronts. However, the wiki trails suggest that core data is for non-store-specific data and the sample data is for store-specific data.
On the same page, the wiki states that core data contains cockpit and catalog definitions, email templates, CMS layout, and site definitions (countries and user groups impex are included in this as well). This seems rather store-specific to me. Does anyone have an explanation for this?
Yes, I have an explanation. Actually, a lot of this is down to arbitrary decisions I made on separating data between the acceleratorcore and acceleratorsampledata extensions as part of the Accelerator in 4.5 (these later had a y- prefix added).
Essential and Project Data are two sets of data that are used within hybris' init/update process. These steps are controlled for each extension via particular Annotations on classes and methods.
Core vs Sample data is more about whether I thought the impex file, or lines, were specific to the sample store or were more general. You will notice your CoreSystemSetup has both essentialdata and projectdata steps.
Lots of work has happened in various continents since then, so, like much of hybris now, it's a bit of a mess.
There are a few fun bugs related to hybris making certain things part of essentialdata, but these are in the platform, not something I can fix without complaining to various people, etc.
To confuse matters further, there is the yacceleratorinitialdata extension. This extension was a way I hoped to make projects easier, by giving some impex skeletons for new sites and stores, generated for you during modulegen. It has rotted heavily since release, though, and is now very out of date.
For a better explanation, take a look at this answer from answers.sap.com.
Hybris imports two types of data during the initialization and update processes; the first is essentialdata and the other is projectdata.
Essentialdata is the core data setup, which is mandatory and will be imported when you run an initialization or update.
Sampledata is your projectdata; it is not mandatory and will only be imported when you select the project while updating the system.
I'm new to ExactTarget, and I'm having a hard time doing something that should be simple.
We direct our e-mails using a list of All Subscribers and filter them using data extensions. When I go to the tracking page of a particular e-mail, it is there in the Summary, all the info and the Data Extensions, in the format:
name (number) sent (Using All Subscribers) (number sent)
The thing is, when I do a tracking extract I don't get this information. I tried to extract everything that was possible and couldn't get it. All I get is, for each SendID (for each subscriber, in fact), the ListID, which will always be the same (the one for All Subscribers). I want to know which segment we used.
I tried a deep dive into the SOAP API, with no luck; again, all I can retrieve is the List, never the data extension.
This must be retrievable, since it is on the Tracking Summary. So my question is: how do I retrieve it?
I considered programming a simple robot to scrape for this info, but there must be a better way.
Thanks
Guess nobody cares much, but in case someone faces the same problem, I've got a final answer from support: the information is not retrievable through the API or UI (except through the tracking summary, which must be accessed one by one).
But a custom report can be requested. Still waiting to see if it's free of charge (it should be, right? As it's something that should be available in the first place).
I was wondering what you recommend for running a user upload system with S3. I plan on using MongoDB for storing metadata such as the uploader, size, etc. How should I go about storing the actual file in S3?
Here are some of my ideas, what do you think is the best? All of these examples would involve saving the metadata to MongoDB.
1. Should I just store all the files in a bucket?
2. Maybe organize them into dates (e.g. 6/8/2014/mypicture.png)?
3. Should I save them all in one bucket, but with an added string (such as d1JdaZ9-mypicture.png) to avoid duplicates?
4. Or should I generate a long string for a folder and store the file in that folder (to retain the original file name)? e.g. sh8sb36zkj391k4dhqk4n5e4ndsqule6/mypicture.png
This depends primarily on how you intend to use the pictures and which objects/classes/modules/etc. in your code will actually deal with retrieving them.
If you find yourself wanting to do things like "all user uploads on a particular day", a simple naming convention with folders for the year, month, and day, along with a folder at the top level for the user's unique ID, will solve the problem.
If you want to ensure uniqueness and avoid collisions in your bucket, you could generate a unique string too.
However, since you've got MongoDB, which (I'm assuming) will actually handle these queries for user uploads by date, etc., it makes the choice of your bucket layout more aesthetic than functional.
If all you're storing in MongoDB is the key/URL, it doesn't really matter what the actual structure of your bucket is. Nevertheless, it makes sense to still split this up in some coherent way - maybe group all of a user's uploads together and give each a unique name (either generate a unique name or add a unique prefix to the file name).
That being said, do you think there might be a point when you might look at changing how your images are stored? You might move to a CDN. A third party might come up with an even cheaper/better product which you might want to try. In a case like that, simply storing the keys/URLs in your MongoDB is not a good idea since you'll have to update every entry.
To make this relatively future-proof, I suggest you give your uploads a definite structure. I usually opt for:
bucket_name/user_id/yyyy/mm/dd/unique_name.jpg
Your database then only needs to store the file name and the upload time stamp.
You can introduce a middle layer in your logic (a new class perhaps or just a helper function/method) which then generates the URL for a file based on this info. That way, if you change your storage method later, you only need to make a small change in this middle layer (after migrating your files of course) and not worry about MongoDB.
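As a rough sketch of that middle layer (shown here in C++, but the idea is the same in any language), a single helper owns the key/URL layout, so only this code changes if you later move buckets or switch to a CDN. The bucket name and base URL are placeholders.

```cpp
// Sketch of a "middle layer" that owns the object key and URL layout.
// MongoDB then only stores userId, fileName, and the upload timestamp.
#include <chrono>
#include <cstdio>
#include <ctime>
#include <iostream>
#include <string>

std::string objectKey(const std::string& userId,
                      std::time_t uploadedAt,
                      const std::string& fileName) {
    std::tm tm{};
#if defined(_WIN32)
    gmtime_s(&tm, &uploadedAt);
#else
    gmtime_r(&uploadedAt, &tm);
#endif
    char date[16];
    std::snprintf(date, sizeof(date), "%04d/%02d/%02d",
                  tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday);
    return userId + "/" + date + "/" + fileName;   // user_id/yyyy/mm/dd/name.png
}

std::string objectUrl(const std::string& key) {
    // Placeholder bucket/host; swap this out if the storage backend changes.
    return "https://my-upload-bucket.s3.amazonaws.com/" + key;
}

int main() {
    const auto key = objectKey("user42", std::time(nullptr), "mypicture.png");
    std::cout << objectUrl(key) << "\n";
}
```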
I am currently developing a Windows application that tests railroad equipment to find any defects.
Utility A => OK
Utility B => NOK
...
This application will check the given equipment and generate a report.
This report needs to be written once, and no further modifications are allowed since this file can be used as working proof for the equipment.
My first idea was to use PDF files (the haru lib looks great), but a PDF can also be modified.
I told myself that I could obfuscate the report and implement a homemade reader inside my application, but whichever way I store it, the file could still be accessed and modified, right?
So I'm running out of ideas.
Sorry if my approach and my problem appear naive, but it's an internship.
Thanks for any help.
Edit: I could also add checksums for the files after I generate them, keep a "checksums record file", and implement a checksum comparison tool for verification? I just thought about this.
I believe the answer to your question is to use any format whatsoever and add a digital signature anybody can verify, e.g., create a GnuPG key, get that key signed by the people who need to check your documents, upload it to one of the key servers, and use it to sign the documents. You can publish the documents and have a link to your public key available for verification; for critical cases, whoever verifies must trust your signature (i.e., trust somebody who signed your key).
People's lives depend on the state of train inspections. Therefore, I find it hard to believe that someone expects you to solve this problem only using free-as-in-beer components.
Adobe supports a strong digital signature model. If you buy into their technology base, you can create PDFs that are digitally signed and are therefore tamper-evident, as the consumer can check the signature.
You can, as someone else pointed out, use GnuPG, or for that matter OpenSSL, to implement your own signature scheme, but railroad regulators are somewhat less likely to figure out how to work with it.
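For what it's worth, here is a rough sketch of the OpenSSL route in C++ (since the application is already C++): it produces a detached signature for a finished report using a private key in PEM format, so anyone holding the matching public key can detect any later modification. Key management, error handling, and the verification tooling are omitted, and the file names are placeholders.

```cpp
// Rough sketch: detached SHA-256 signature of a finished report via OpenSSL.
// Build with: g++ sign_report.cpp -lcrypto
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

std::vector<unsigned char> signFile(const std::string& reportPath,
                                    const std::string& keyPath) {
    std::ifstream in(reportPath, std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

    FILE* kf = std::fopen(keyPath.c_str(), "r");
    EVP_PKEY* key = PEM_read_PrivateKey(kf, nullptr, nullptr, nullptr);
    std::fclose(kf);

    EVP_MD_CTX* ctx = EVP_MD_CTX_new();
    EVP_DigestSignInit(ctx, nullptr, EVP_sha256(), nullptr, key);
    EVP_DigestSignUpdate(ctx, data.data(), data.size());

    size_t sigLen = 0;
    EVP_DigestSignFinal(ctx, nullptr, &sigLen);          // query signature size
    std::vector<unsigned char> sig(sigLen);
    EVP_DigestSignFinal(ctx, sig.data(), &sigLen);
    sig.resize(sigLen);

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(key);
    return sig;
}

int main() {
    auto sig = signFile("report_1234.pdf", "inspector_private.pem");
    std::ofstream("report_1234.pdf.sig", std::ios::binary)
        .write(reinterpret_cast<const char*>(sig.data()), sig.size());
}
```

Verification can then be done with the stock OpenSSL command line, e.g. openssl dgst -sha256 -verify public.pem -signature report_1234.pdf.sig report_1234.pdf, so an auditor does not need your application at all.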
I would store reports in an encrypted/protected datastore.
When a user accesses a report (requests a copy; the original is of course always in the database and cannot be modified), it includes the text "Report #XXXXX". If you want to validate the report, retrieve a new copy from the system using the Report ID.