What is the best way to export/import transactions from one GnuCash file to another GnuCash file?
The most obvious way (and actually the only one I can find) is to Export Transactions to CSV... and then to import the file from your other account.
But the CSV import doesn't let you import all of GnuCash's information categories, and at best can only recreate basic transactions.
For instance, the exported CSV includes all the splits, and in fact all the details of each and every transaction.
But when you try to import such a file, you get errors for all the lines that don't include dates (all the splits), to start with, and so on.
If you close the account, it gives you the option to transfer the transactions to a different account.
Here's one way:
Change the view from "Basic Ledger" to "Transaction Journal" (from "View" drop-down menu).
Click on the account box, open the drop-down menu, and select the new account.
The only way I found to do this is to edit the GnuCash XML file itself and add the transactions from the other book manually.
You have to change the GUIDs of all the accounts to match the ones in the destination book.
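For anyone attempting that manual route, below is a rough Python sketch of the GUID remapping step. It is only a sketch under assumptions: the file names and the guid_map entries are hypothetical, and it simply swaps GUID strings in the XML text, which you would then merge into the destination book by hand and verify in GnuCash. Always work on copies of your books.

```python
import gzip

# Hypothetical mapping: source-book account GUID -> destination-book account GUID.
guid_map = {
    "0123456789abcdef0123456789abcdef": "fedcba9876543210fedcba9876543210",
}

path = "transactions-to-merge.gnucash"

# GnuCash data files are gzip-compressed XML by default; fall back to plain
# text if the file was saved uncompressed.
try:
    with gzip.open(path, "rt", encoding="utf-8") as f:
        xml_text = f.read()
except OSError:
    with open(path, "r", encoding="utf-8") as f:
        xml_text = f.read()

# Account GUIDs are 32-character hex strings that appear verbatim in
# <split:account type="guid"> elements, so a plain text substitution is
# enough for a one-off migration.
for old_guid, new_guid in guid_map.items():
    xml_text = xml_text.replace(old_guid, new_guid)

with open("transactions-remapped.xml", "w", encoding="utf-8") as f:
    f.write(xml_text)
```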
You can just type the (new) account number into the 'account' field - and you are done. Yes it is that easy - provided you know the account number ...
I have been using a worked-out SKR04 (in Germany) for over ten years that shows the account numbers prepended to each account's name, which makes working with GnuCash a pleasure. The SKR04 that ships with GnuCash is practically unusable, since it does not provide account numbers in the account names - which makes the whole thing a blind flight.
See here for worked-out SKR04 (German)...
https://www.facebook.com/GnuCash-DE-400197317114290/
What is the best practice to get a company-wide setup (one or more organisations, each with multiple folders and projects) into one central data catalog that contains all the metadata? (If "multiple orgs" is too complex, then let's start with one.)
I've put together a sample showing how to work with one organization.
The main idea is to use a Tag Central Project, where you store the common resources that can be reused, like Tag Templates, Policy Tags and Custom Entries.
So you have:
Tag Central Project
List of Analytics Projects (Where you have the data assets)
Then the next thing is the user personas you would use; I'd suggest starting with:
Data Governors
Data Curators
Data Analysts
This google-datacatalog-governance-best-pratices GitHub repo contains the code, which uses Terraform to automatically set up the governance best practices I mentioned.
You can adapt those samples to work at the folder or organization level by changing the Terraform resources.
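If it helps to see the central-project idea outside of Terraform, here is a rough sketch using the google-cloud-datacatalog Python client. The project IDs, location, template ID, field names and the BigQuery table are all made-up examples, and the exact client attribute names may vary between library releases, so treat it as an illustration rather than a drop-in script.

```python
from google.cloud import datacatalog_v1

# Hypothetical project IDs: templates live in the Tag Central Project,
# data assets live in the analytics projects.
TAG_CENTRAL_PROJECT = "tag-central-project"
LOCATION = "us-central1"

client = datacatalog_v1.DataCatalogClient()

# 1. Define a reusable Tag Template in the Tag Central Project.
template = datacatalog_v1.TagTemplate()
template.display_name = "Data Governance"

owner_field = datacatalog_v1.TagTemplateField()
owner_field.display_name = "Data owner"
owner_field.type_.primitive_type = datacatalog_v1.FieldType.PrimitiveType.STRING
template.fields["data_owner"] = owner_field

created_template = client.create_tag_template(
    parent=f"projects/{TAG_CENTRAL_PROJECT}/locations/{LOCATION}",
    tag_template_id="data_governance",
    tag_template=template,
)

# 2. In an analytics project, attach a tag that references the central
#    template, e.g. to a BigQuery table entry looked up by resource name.
entry = client.lookup_entry(
    request={
        "linked_resource": (
            "//bigquery.googleapis.com/projects/analytics-project-1"
            "/datasets/sales/tables/orders"
        )
    }
)

tag = datacatalog_v1.Tag()
tag.template = created_template.name
tag.fields["data_owner"] = datacatalog_v1.TagField(string_value="data-governors@example.com")
client.create_tag(parent=entry.name, tag=tag)
```

The same central template can then be reused from every analytics project, which is what keeps the governance metadata consistent.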
It seems that there are two different Web UIs for the AWS Tag Editor (you need an AWS account to try them):
https://resources.console.aws.amazon.com/r/tags
I got this link from the AWS documentation.
https://eu-west-1.console.aws.amazon.com/resource-groups/tag-editor/find-resources?region=eu-west-1
In the Management Console, if you select Resource Groups > Tag Editor at the top of the console page, it will take you to this page.
The two Web UIs behave differently:
The former is global, but the latter is region-specific (it will put you into a region even if you don't include a region parameter in the URL)
The former allows you to search for "Not tagged" in the filter, but the latter does not
The UIs are slightly different
Is one of the UIs a newer version?
Update (2019-05-14)
(The answers below also explain that the two links were the new and old UIs that AWS offered at one point in time.) By now the first link is gone; if you visit it, you will get a 404 Not Found error from AWS.
I am part of the team building the new Tag Editor. Yes, you are correct: the Classic Tag Editor is deprecated and will soon be shut down entirely. We are working on full feature parity between the two editors, so you will very soon find everything you can do in the old one in the new one as well.
To add some more context on your different items below:
1) Both the old and the new Tag Editor use the same underlying tagging infrastructure, so this should never happen. Maybe there is some browser issue involved here? Feel free to open a support issue so we can look deeper into it, if this continues to be the case.
2) Yes, the new one also includes Lambda, and will very soon add more resource types. The same goes for regions, by the way: the old Tag Editor does not support all regions, for example eu-north-1 or eu-west-3.
3) No, Route53 Hosted Zones are supported in both editors. Route53 resources only exist in the us-east-1 region, so maybe you used the Tag Editor in another region?
4) Both editors show the same data. The old editor merged what you used as the Name tag and the ID into the same field; in the new one, you see only the ID in the ID column, and the Name tag is displayed in the Tag: Name column.
Searching across regions is something the new editor will soon support, too, and the same applies to the filter you mention. For showing resources without a specific tag, there is a workaround you can already use: click on the settings icon at the top right of the table and enable the tag you are interested in as a column. You can then sort that column so that all untagged resources show up on top.
If you have any other ideas or requests for the Tag Editor, please let us know. The fastest and most reliable way is to use the 'Feedback' button at the bottom left of the console.
Cheers,
Florian
Hi, I am providing my own answer here (thanks to my colleague Kannan for the insight).
#1 above is what AWS calls the Classic Tag Editor. If you click on the question mark in the Web UI (upper right corner), you will be taken to a page that says:
This documentation is for classic Tag Editor, which has been deprecated
So #2 is the version that AWS wants us to use.
Below I will call #1 "Old" and #2 "New".
I compared the example outputs from our environment (about 50 resources). The two outputs differ in these respects:
New seems to retain past resources for a longer time. For example, if an EC2 instance has been terminated, it may take a longer time to be removed from the listing of New.
New seems to include resources for DynamoDB but Old does not
Old seems to include resources for Route 53 Hosted Zones but New does not.
Both New and Old show Security Groups, but the ID strings are rendered slightly differently.
New renders an ID as sg-xxxxxxxxxxxxxxxxxxxxxx
Old renders an ID as someName (sg-xxxxxxxxxxxxxxxxx)
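If you want to cross-check what either UI shows, here is a minimal boto3 sketch against the underlying Resource Groups Tagging API (which, per the answer above, both editors use). The region and the choice of the Name tag are just examples.

```python
import boto3

# Hypothetical region; the Tagging API is regional, so run this per region
# you care about, just like the new Tag Editor.
client = boto3.client("resourcegroupstaggingapi", region_name="eu-west-1")

untagged = []
paginator = client.get_paginator("get_resources")

for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"]: t["Value"] for t in resource.get("Tags", [])}
        # Collect resources that are missing a Name tag, which is roughly
        # what the "Not tagged" filter in the old UI surfaced.
        if "Name" not in tags:
            untagged.append(resource["ResourceARN"])

for arn in untagged:
    print(arn)
```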
I am trying to create a template in NiFi just like the data ingest template provided by Kylo.
Basically, I want to allow the user to select the input data source; it can be a database or a file. If the user selects a file, then the database processor should automatically get disabled.
I have created a template in NiFi and imported it into Kylo, but while creating a feed it does not show the feed input option.
How can I do this?
While registering the template, in the "Input Properties" section, you have to select which properties should be shown in the feed creation UI for user input, i.e. enable "Allow user input?".
Screenshot attached for reference:
I think the best approach here would be to use RouteOnAttribute as the step after the data source / data type is chosen.
This way you don't have to overthink it.
I have been working on Kylo for around 3 months now and surely know a thing or two about it.
When starting a feed, Kylo asks you which source you want to start the feed from, no matter whether you have a single processor or multiple processors that can act as data producers or fetchers. Once you select one and start the feed, the rest of the source processors get disabled automatically by Kylo in the resulting deployment of the feed.
I'm trying to create a web service that is able to store user-uploaded files in S3. The problem is that we want the files stored in "dated directories".
For example, if a user uploads a.txt on 12/1/2017 at 9:15am, the file should look like this in S3:
https://s3-eu-west-1.amazonaws.com/test-bucket/uploaded/2017/12/1/9/a.txt
Does S3 have any API to help us achieve this, or do we need to hand-craft this solution?
There is no such API in S3. Think of Amazon S3 as a storage service, not an application or database.
It is the responsibility of your application to store the data in the desired naming format -- just like storing data on a disk.
By the way, your naming format could do with some improvement:
Always expand fields to the correct number of digits (use 01 for January rather than 1) so that they sort correctly.
Think about your use-case -- if you will be scanning documents by year, then the /2017/12/01/09/a.txt naming format makes sense since you can look in the 2017 directory (not that directories really exist in S3). If not, then simply store it as /2017-12-01-09-a.txt.
Make it very clear which one is month vs day -- the USA is the only country in the world that treats "12/1/2017" as December 1st. The rest of the world reads it as "12 January". Using the format of 2017-12-01 makes it clear that it is 1-December-2017.
What about naming conflicts? Can only one person upload a file with a given name on a given day? How are you going to differentiate between different users uploading a file with the same name?
The reality is, the filename is totally irrelevant -- your application should use a database to keep track of objects that users upload and assign each of them a unique name. When a file is later requested, look up the filename in the database and then provide that file. Do not use S3 filenames as a pseudo-database where the name conveys particular meaning, otherwise you'll often have to rename files to add more meaning!
Directories don't actually exist in S3 -- they are just part of the filename. So, you can create a file in a given directory just by storing it -- there is no need to pre-create directories.
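As a rough sketch of that hand-crafted approach (the bucket name and key layout are examples, and the UUID suffix is just one possible answer to the naming-conflict question above):

```python
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "test-bucket"  # example bucket from the question


def upload_user_file(filename: str, body: bytes) -> str:
    """Store an uploaded file under a zero-padded, dated key and return the key."""
    now = datetime.now(timezone.utc)
    # Produces e.g. uploaded/2017/12/01/09/...; the UUID keeps same-named
    # uploads from colliding.
    key = now.strftime("uploaded/%Y/%m/%d/%H/") + f"{uuid.uuid4()}-{filename}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    # Record the key in your own database so the object can be found later.
    return key
```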
AWS S3 does not provide you with such logic, but it should be fairly easy to use your application's time information to create such an S3 object key ("path").
Good luck!
In Drive.Files.List I can, using the 'q' parameter, get all files a user can read/write or own. I would like to be able to use a regular expression in the query value, for example setting q to "not '.+@my-org.com' in writers".
Is such a query already supported?
Do I have another way (other than invoking Drive.Permissions.List for each and every file in my Drive) to get this information?
It seems the only account-level Drive API is part of the Reports API - the activities list. This API (and the admin console - audit - drive section) is only supported with the unlimited license. I still haven't found a proper API to get the Drive state (list all file metadata in the account, permissions, etc.); it seems the state can only be inferred by analyzing the relevant activity events, assuming the activity has not been evicted after a predefined period of time.
My conclusion, at the moment, is that there is no "root" directory at the account level. "root" exists only with respect to the logged-in user.
I would be more than happy to be proved wrong.
Uri
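For completeness, here is a rough sketch of the per-user workaround hinted at in the question: since regular expressions are not supported in q, list the files together with their permissions in one call (via the fields parameter) and apply the regex client-side. The credentials setup and the domain pattern are assumptions, and the permissions field is only populated for files the user has sufficient access to.

```python
import re

from googleapiclient.discovery import build


def files_with_external_writers(creds, domain_pattern=r".+@my-org\.com$"):
    """List files owned by the user that have writers outside the given domain."""
    service = build("drive", "v3", credentials=creds)
    own_domain = re.compile(domain_pattern)
    flagged = []
    page_token = None
    while True:
        response = service.files().list(
            q="'me' in owners",
            fields="nextPageToken, files(id, name, permissions(emailAddress, role))",
            pageToken=page_token,
        ).execute()
        for f in response.get("files", []):
            for perm in f.get("permissions", []):
                email = perm.get("emailAddress", "")
                if perm.get("role") in ("writer", "owner") and email and not own_domain.match(email):
                    flagged.append((f["id"], f["name"], email))
        page_token = response.get("nextPageToken")
        if not page_token:
            break
    return flagged
```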