Does Chilkat support temporary file name during FTP, FTPS and SFTP uploads? - chilkat

Our software uses the Chilkat library to upload files using FTP, FTPS and SFTP. Some of these files are large and take time to upload. Does Chilkat support using a temporary file name (e.g. file.filepart) while uploading large files, to prevent the receiving system from trying to pick up the file before it has been fully uploaded and renamed to its final name (e.g. file.xml)?
If it does, how do we enable this functionality? Also, does this functionality require a disconnect/connect before the renaming is done?

There's no need for Chilkat to bake a feature for this into the API.
The upload functions take the local filename and the remote filename as separate arguments. You can set the remote (destination/target) filename to a temporary name, and after the upload completes call the appropriate function to rename/move the file to its final filename.
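For example, a minimal sketch with the SFTP class via Chilkat's Python binding (hostname, credentials and paths are placeholders; the FTP2 class for FTP/FTPS follows the same upload-then-rename pattern, and the exact method names are worth confirming against the Chilkat reference):

import sys
import chilkat

sftp = chilkat.CkSFtp()

# Connect, authenticate, and initialize the SFTP subsystem.
if not sftp.Connect("sftp.example.com", 22):
    print(sftp.lastErrorText()); sys.exit(1)
if not sftp.AuthenticatePw("myLogin", "myPassword"):
    print(sftp.lastErrorText()); sys.exit(1)
if not sftp.InitializeSftp():
    print(sftp.lastErrorText()); sys.exit(1)

# Upload to a temporary remote filename...
if not sftp.UploadFileByName("in/file.xml.filepart", "c:/local/file.xml"):
    print(sftp.lastErrorText()); sys.exit(1)

# ...then rename it to its final name on the same session
# (no disconnect/reconnect is needed before the rename).
if not sftp.RenameFileOrDir("in/file.xml.filepart", "in/file.xml"):
    print(sftp.lastErrorText()); sys.exit(1)

sftp.Disconnect()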

Related

How to download file in Amazon S3 storage using only a unique id without subfolder name?

I want to download a file from Amazon S3 using only a unique id I can get from its API, without using a folder or subfolder name. I created a folder/subfolder structure with hierarchy levels to organize the files.
This is the same as what I did with the Google Drive API v3: regardless of which folder or subfolder (or hierarchy level) the file was saved in, I can download the file using only the fileId.
I haven't read the file versioning docs yet, since there is a lot to read.
Any help would be greatly appreciated. Thank you.
You can't do this with S3. You need to know the bucket name (--bucket) and full key (--key) of the file you want to download. Since a given file can have multiple versions, you can also provide a version id (--version-id).
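As a rough sketch of the same point in Python with boto3 (the bucket, key and version id below are placeholders; the parameters correspond to the CLI flags above):

import boto3

s3 = boto3.client("s3")

# The full key is required; S3 has no lookup by a bare file id.
resp = s3.get_object(
    Bucket="my-bucket",
    Key="folder/subfolder/report.pdf",
    VersionId="EXAMPLE-VERSION-ID",  # optional; omit to get the latest version
)

with open("report.pdf", "wb") as f:
    f.write(resp["Body"].read())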

Google Cloud - Download large file from web

I'm trying to download the GhTorrent dump from http://ghtorrent-downloads.ewi.tudelft.nl/mysql/mysql-2020-07-17.tar.gz, which is about 127 GB.
I tried it in the cloud, but it stops after about 6 GB; I believe there is a size limit on piping it through curl:
curl http://ghtorrent... | gsutil cp - gs://MY_BUCKET_NAME/mysql-2020-07-17.tar.gz
I cannot use Data Transfer because I need to specify the URL, the size in bytes (which I have) and the MD5 hash, which I don't have and can only generate by having the file on my disk. I think(?)
Is there any other option to download and upload the file directly to the cloud?
My total disk size is 117 GB, sad beep.
Worked for me with Storage Transfer Service: https://console.cloud.google.com/transfer/
Have a look at the pricing before moving TBs, especially if your target is nearline/coldline: https://cloud.google.com/storage-transfer/pricing
Simple example that copies a file from a public url, to my bucket using a Transfer Job:
Create a file theTsv.tsv and specify the complete list of files that must be copied. This example contains just one file:
TsvHttpData-1.0
http://public-url-pointing-to-the-file
Upload theTsv.tsv to your bucket or any publicly accessible URL. In this example I am storing my .tsv file in my bucket: https://storage.googleapis.com/<my-bucket-name>/theTsv.tsv
Create a transfer job - List of object URLs
Add the URL that points to theTsv.tsv in the "URL of TSV file" field
Select the target bucket
Run immediately
My file, named MD5SUB, was copied from the source URL into my bucket under an identical directory structure.
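For what it's worth, my reading of the URL list docs is that each row may also carry optional tab-separated size-in-bytes and base64-encoded MD5 columns, so not having the MD5 up front should not block you; treat the exact layout below as an assumption to verify against the current docs:

TsvHttpData-1.0
http://public-url-pointing-to-the-file	<size-in-bytes>	<base64-md5>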

EMR creates 0 byte files while using HDFS's moveFromLocalFile API

I'm using EMR to move a folder from the local file system to S3 in Spark using the fs.moveFromLocalFile API. Everything works fine, except that EMRFS creates a 0-byte file named <folder>_$folder$ for EVERY folder that is uploaded.
Is there any way to move folders without this dummy file being created for every folder (other than manually deleting it)? Also, why is this dummy file created? I'm currently using the s3:// protocol recommended by the EMR team.
In my experience, the mkdir() call that is normal for local file systems or HDFS results in an empty S3 object whose key is the folder name with _$folder$ appended. In S3 there is no concept of an "empty folder", because you cannot have a key (pathname) with a null value (no file behind it).
In a perfect world, mkdir(s3://bucket/path) would be a no-op.
I don't know about the EMR FS, but this sounds like the same convention used by the S3n client; those marker files are stripped out by the client when listing/stat-ing paths.
ASF's S3A creates its markers with a "/" suffix instead.
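If you just want to sweep the markers away after the job, something like this works; a rough boto3 sketch where the bucket name and prefix are placeholders:

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Delete every zero-byte "<name>_$folder$" marker object under the prefix.
for page in paginator.paginate(Bucket="my-bucket", Prefix="output/"):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("_$folder$"):
            s3.delete_object(Bucket="my-bucket", Key=obj["Key"])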

Replacing a filefield in a form_clean function in django

On my website, users upload audio files. I want to convert all supported audio files to MP3.
I preferred to do the conversion in the file field's clean function, because at that point I can also confirm that the uploaded file format is supported. In addition, I am using S3, so this happens just before the file is uploaded to my S3 storage.
My problem is: how can I replace the physical file backing the InMemoryUploadedFile? I need to point it at my newly converted file.
Is it possible to accomplish this in a clean operation? If not, what is the best approach?
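One way the replacement described above is commonly sketched (not from the original thread): do the conversion inside the field's clean_<name> method and return a new file object built from the converted bytes. The field name, and the use of pydub (which needs ffmpeg) for the conversion, are assumptions for illustration:

import io

from django import forms
from django.core.files.base import ContentFile
from pydub import AudioSegment  # assumed conversion library; requires ffmpeg


class TrackForm(forms.Form):
    audio = forms.FileField()

    def clean_audio(self):
        uploaded = self.cleaned_data["audio"]  # typically an InMemoryUploadedFile

        try:
            segment = AudioSegment.from_file(uploaded)
        except Exception:
            raise forms.ValidationError("Unsupported audio format.")

        # Re-encode to MP3 in memory and return a replacement file object,
        # so whatever saves cleaned_data uploads the converted file to S3.
        buffer = io.BytesIO()
        segment.export(buffer, format="mp3")
        return ContentFile(buffer.getvalue(), name="converted.mp3")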

how to set permissions for cfftp getfile from coldfusion code?

I am using <cfftp> and getting a file from an FTP server. But when I try to read the downloaded file, it won't let me read it. It's because of the permissions on the file.
How can I set permissions to 777 (full access) for that file from code? I don't want to do that manually. I am using Mac OS.
Thanks..
To set permissions on a file in ColdFusion, use the optional mode attribute of cffile with the octal values of the UNIX chmod command, e.g.:
<cffile action="write" file="#fileToWrite#" output="#fileContent#" mode="777">
(For action="write", the file and output attributes are required; #fileContent# here is whatever content you are writing.) This applies to Unix/Linux only.
If this is about files uploaded to your server and you have access to your FTP admin/config files, then you probably want to modify the upload mask to adjust the permissions of uploaded files.
If you download the files yourself manually, then you would have to put them in a folder where ColdFusion at least has read access, or tell your FTP client to store the file so that it is accessible to CF.
You can set permissions using cffile. I don't think there is a way to do only that, but you can do it as part of a rename or move operation. Check the docs for more specifics.