I have an OVA file and I want to replace the disk.vmdk inside it with disk2.vmdk.
How can I achieve this? What are the steps and procedure, e.g. changing values in the .mf file and the .ovf file, etc.?
Thanks in advance
The order should be [.ovf, .mf, disk1.vmdk, disk2.vmdk]. Also, as mentioned above by @windstorm, you have to update the .ovf file with the new disk size and the .mf file with the new sha256sum. It worked for me.
I want to change the data of a sample .pbix, but when updating the data I can't find the path of the Excel file since I don't have it. Is there a way to do this without having it?
You can check the existing source from Transform data --> Data source settings and find the path of the existing Excel file. If you can't find it, you can add another data source to replace it.
I have a Talend job which is as simple as:
tS3Connection -> tS3Get -> tFileInputDelimited -> tMap -> tAmazonMysqlOutput
Now the scenario here is that sometimes I get the file in .txt format and sometimes I get it in a zip file.
So I want to use tFileUnarchive to unzip the file if it's a zip, or bypass the tFileUnarchive component if the file is already unzipped, i.e. in .txt format.
Any help on this is greatly appreciated.
The trick here is to break the file retrieval and potential unzipping into one sub job and then the processing of the files into another sub job afterwards.
Here's a simple example job:
As normal, you connect to S3 and then you might list all the relevant objects in the bucket using the tS3List and then pass this to tS3Get. Alternatively you might have another way of passing the relevant object key that you want to download to tS3Get.
In the above job I set tS3Get up to fetch every object that is iterated on by the tS3List component by setting the key as:
((String)globalMap.get("tS3List_1_CURRENT_KEY"))
and then downloading it to:
"C:/Talend/5.6.1/studio/workspace/S3_downloads/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY"))
The extra bit I've added starts with a Run If conditional link from tS3Get to the tFileUnarchive with the condition:
((String)globalMap.get("tS3List_1_CURRENT_KEY")).endsWith(".zip")
which checks whether the file being downloaded from S3 is a .zip file.
The tFileUnarchive component then just needs to be told what to unzip, which will be the file we've just downloaded:
"C:/Talend/5.6.1/studio/workspace/S3_downloads/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY"))
and where to extract it to:
"C:/Talend/5.6.1/studio/workspace/S3_downloads"
This then puts any extracted files in the same place as the ones that didn't need extracting.
From here we can now iterate through the downloads folder looking for the file types we want by setting the directory to "C:/Talend/5.6.1/studio/workspace/S3_downloads" and the global expression to "*.csv" in my case as I wanted to read in only the CSV files (including the zipped ones) I had in S3.
Finally, we then read the delimited files by setting the file to be read by the tFileInputDelimited component as:
((String)globalMap.get("tFileList_1_CURRENT_FILEPATH"))
And in my case I simply then printed this to the console but obviously you would then want to perform some transformation before uploading to your AWS RDS instance.
I am currently trying to create PDF files from different documents, always with the same header. I tried using a template in unoconv but it messes up my whole document.
I was wondering if anyone knows how to do this?
Thanks for the help!
I solved it.
You need to create an .ott file with the header you would like to use and save it as template.ott.
Once you have done this you need to change the command for creating PDFs to
unoconv -t template.ott -f pdf name_of_the_file.extension
But be careful: this .ott template only works for doc, docx, and odt files.
Presentation files like PPT or PPTX will not work with it.
You need to generate a separate template for the presentation files.
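If you drive unoconv from a script, that format check can be automated. A small Python sketch (`build_cmd` and `to_pdf` are hypothetical helper names) that only passes `-t` for the formats the template works with:

```python
import os
import subprocess

# Extensions the .ott header template applies to, per the note above.
TEMPLATE_OK = {".doc", ".docx", ".odt"}

def build_cmd(path, template="template.ott"):
    """Build the unoconv command line, adding the header template
    only for the document formats it works with."""
    cmd = ["unoconv"]
    if os.path.splitext(path)[1].lower() in TEMPLATE_OK:
        cmd += ["-t", template]
    cmd += ["-f", "pdf", path]
    return cmd

def to_pdf(path, template="template.ott"):
    """Run the conversion; requires unoconv on the PATH."""
    subprocess.run(build_cmd(path, template), check=True)
```

So `build_cmd("report.docx")` includes the template, while `build_cmd("slides.pptx")` falls back to a plain `unoconv -f pdf` call.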
Hope this helps someone.....
In Django, when uploading a file with spaces and brackets, it's stored in the file system with a different filename.
For example, when uploading the file 'lo go (1).jpg' via the admin interface, it's stored on the filesystem as 'lo__go_1.jpg'.
How can I know what the file will be called at upload time? I can't seem to find the source code that replaces the characters.
I found out the answer to my question.
https://github.com/django/django/blob/master/django/db/models/fields/files.py#L310
https://github.com/django/django/blob/master/django/core/files/storage.py#L58
https://github.com/django/django/blob/master/django/utils/text.py#L234
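The replacement those links point at is Django's `get_valid_filename` in `django/utils/text.py`. A standalone re-implementation, so you can predict the stored name without importing Django, looks roughly like this (note that the storage backend may still append a suffix if the name already exists, and that a name with two spaces would produce the double underscore shown in the question):

```python
import re

def get_valid_filename(name):
    """Mirror of Django's django.utils.text.get_valid_filename:
    strip surrounding whitespace, turn spaces into underscores, then
    drop any character that isn't alphanumeric, dash, underscore, or dot."""
    name = str(name).strip().replace(" ", "_")
    return re.sub(r"(?u)[^-\w.]", "", name)
```

For example, `get_valid_filename("lo go (1).jpg")` returns `"lo_go_1.jpg"`.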
I need to download an XML file from AWS S3.
I tried using get_contents_to_filename(fname) and it worked.
But I need to download the file without specifying fname, because if I specify fname my downloaded file gets saved as fname.
I want to save the file as it is, with its original name.
This is my current code:
k = Key(bucket)
k.get_contents_to_filename(fname)
Can someone please help me download the file without hard-coding the filename?
Thanks in advance!
I'm not sure which library you're using, but if k is the AWS key you want to download, then k.name is probably the key name, so k.get_contents_to_filename(k.name) would probably do more or less what you want.
The one problem is that the key name might not be a legal file name, or it may have file path separators. So if the key name were something like '../../../../somepath/somename' the file would be saved somewhere you don't expect. So copy k.name to a string and either sanitize it by changing all dangerous characters to safe ones, or just extract the part of the key name you want to use for the file name.
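A minimal sketch of that sanitizing step, assuming boto-style keys (`safe_local_name` is a hypothetical helper name):

```python
import re

def safe_local_name(key_name):
    """Keep only the last path component of an S3 key, then replace
    any character outside [A-Za-z0-9._-] so the result is a filename
    that can't escape the current directory."""
    base = key_name.replace("\\", "/").rsplit("/", 1)[-1]
    return re.sub(r"[^A-Za-z0-9._-]", "_", base) or "download"
```

You could then call k.get_contents_to_filename(safe_local_name(k.name)) so even a key like '../../../../somepath/somename' is saved in the current directory as 'somename'.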