How to auto-refresh the remote list after uploading a local folder to a Storj bucket through FileZilla?

In the FileZilla client, when a local folder is dragged and dropped into a remote directory, which part of the FileZilla code recursively sends the commands to transfer (upload) all local files and sub-folders (within the selected local folder) to the remote end?
My main goal is to insert a command that lists or refreshes the remote directory once the upload is complete. Although this already happens for the FTP and SFTP protocols, I am not able to achieve it for the Storj feature.
I have tried including the "list" or refresh commands at the following points in different parts of the code:
at the end of the "put" command handling in the /src/storj/fzstorj.cpp file
after the "Transfers finished" notification in the void CQueueView::ActionAfter(bool warned) function in the /src/interface/QueueView.cpp file
Reason: this notification is displayed when all files and subfolders of a selected local folder have been uploaded to a Storj bucket.
I also tried tracking the files that take part in the process, mainly those within the /src/engine/storj folder, e.g. file_transfer.cpp, which sends the "put" command through the int CStorjFileTransferOpData::Send() function.
This did not help much.
While checking which component issues commands to the Storj engine, I observed that it is done by calling void CCommandQueue::ProcessCommand(CCommand *pCommand, CCommandQueue::command_origin origin) in /src/interface/commandqueue.cpp.
The expected outcome is an automatic refresh of the Storj bucket (the upload path) once all desired files and sub-folders have been uploaded from the local end through the FileZilla client.
Any hint towards the solution would be of great help to me.
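To show the shape of one possible approach: rather than patching the engine's put handling, you could trigger a forced directory listing from the interface side once the queue reports it is done, via the same command queue you already located. In C++-style pseudocode (apart from CCommandQueue::ProcessCommand, which appears in your own investigation, every name below is an assumption and may not match the real FileZilla sources for your version):

```
// Inside CQueueView::ActionAfter, after the "Transfers finished"
// notification has been issued for the server in question:
CState* state = /* the CState owning the finished queue's server */;

// Ask the command queue that drives the engine to re-list the
// current remote directory, forcing a refresh instead of serving
// the cached listing. FTP/SFTP get their auto-refresh through a
// listing like this, so routing Storj through the same path keeps
// the engine code untouched.
state->m_pCommandQueue->ProcessCommand(
    new CListCommand(LIST_FLAG_REFRESH),
    CCommandQueue::normal);
```

Whether the Storj backend in /src/engine/storj honors a refresh-flagged list command the way the FTP/SFTP backends do is something you would need to verify; if it does, hooking the UI layer here avoids touching fzstorj.cpp at all.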
Thank You!

Related

AWS CloudFront still caching when set to "CachingDisabled"

We are using an S3 bucket to hold customer zip files they created and made ready for them to download. We are using CloudFront only to handle the SSL. We have caching disabled.
The customer receives an email to download their zip file, and that works great. The S3 lifecycle removes the file after 2 weeks. Now, if they add more photos to their account and re-request their zip file, it overwrites the current zip file with the new version. So the link is exactly the same. But when they download, it's the previous zip file, not the new one.
Additionally, after the two weeks the file is removed; if they then try to download, they get an error that basically says they need to log in and re-request their photos. But even after they generate a new zip file, their link still gives them the error message.
I could have the Lambda that creates the zip file invalidate the CloudFront cache for it on creation, but I didn't think I needed to invalidate since we aren't caching?
Below is the screenshot of the caching policy I have selected in CloudFront
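Independent of where the stale copy lives (CloudFront, an intermediate proxy, or the customer's browser cache), one common workaround is to make each regenerated zip use a distinct URL by appending a version token as a query parameter, so no cache layer can serve the old bytes. A minimal sketch in Python (the domain, path, and token below are made up; in practice the Lambda that rebuilds the zip could use the new S3 object's ETag or a timestamp as the token):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def versioned_url(base_url: str, version_token: str) -> str:
    """Append a cache-busting version token as a ?v= query parameter,
    so every regenerated zip gets a distinct URL."""
    parts = urlparse(base_url)
    query = dict(parse_qsl(parts.query))
    query["v"] = version_token
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical link emailed to the customer; the token would come from
# the Lambda that rebuilds the zip (e.g. the new object's ETag).
print(versioned_url("https://d111abcd.cloudfront.net/customer42/photos.zip",
                    "etag-8f3a"))
# https://d111abcd.cloudfront.net/customer42/photos.zip?v=etag-8f3a
```

This sidesteps the invalidation question entirely: the old URL can go stale anywhere it likes, because the new email always carries a new URL.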

Springboot server in Elastic Beanstalk creates files that I can't see

I have a Springboot server that is deployed to an Elastic Beanstalk environment in AWS. The basic functionality is this:
1. Upload a file to the server
2. The server processes file by doing some data manipulation.
3. Then the file that is created is sent to a user via email.
The strange thing is that the functionality mentioned above works: the output file arrives in my email inbox successfully. However, the file cannot be seen when I SSH into the instance. The entire directory that gets created for the data manipulation is simply not there. I have looked everywhere.
To test this, I even created a simple endpoint in my Spring Boot controller like this:
@GetMapping("/")
public ResponseEntity<String> dummyMethod() {
    // TODO: remove line below after testing
    new File(directoryToCreate).mkdirs();
    return new ResponseEntity<>("Successful health check. Status: 200 - OK", HttpStatus.OK);
}
If I use Postman to hit this endpoint, the directory CANNOT be seen via the terminal I am SSHed into. The program works, so I know the code is correct in that sense, but it's as if the files and directories are invisible to me.
Furthermore, if I were to run the server locally (using Windows OR Linux) and hit this endpoint, the directory is successfully created.
Update:
I found where the app lives in the environment, at /var/app. But my folders and files are still not there; only the source code files, etc. are there. The files that my server is supposed to be creating are still missing. I can even print out the absolute path to the file after creating it, but that file still doesn't exist. Here is an example:
Files.copy(source, dest);
logger.info("Successfully copied file to: {}", dest.getAbsolutePath());
will print...
Successfully copied file to: /tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58/results_map_GVA.csv
That path DOES NOT exist on my server, yet the server code CAN email me the file after processing. But if I SSH into the instance and go to that path, nothing is there.
If I use the command find . -name "GVA*" (to search for the file I am looking for), it prints this:
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
But this looks like it is tracking differences between versions of files, since I see diff and merged in the paths. I just want to find where that file actually resides.
If you need to store an uploaded file somewhere from a Spring Boot app, consider using an Amazon S3 bucket instead of writing the file to a folder on the server. For example, assume you are working with a photo app where photos can be uploaded via the Spring Boot app. Instead of placing them in a directory on the server, use the Amazon S3 Java API to store the file in an S3 bucket.
Here is an example of a Spring Boot app that handles uploaded files by placing them in a bucket:
Creating a dynamic web application that analyzes photos using the AWS SDK for Java
This example app also shows you how to use the SES API to send data (a report in this example) to a user via email.
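The linked example is Java; as a language-agnostic sketch of the same idea in Python with boto3 (the bucket name and key layout here are hypothetical, not taken from the example app): give every stored file a unique S3 key so nothing depends on the instance's ephemeral, containerized filesystem.

```python
from datetime import datetime, timezone
from pathlib import Path

def object_key(user_id: str, filename: str, when: datetime) -> str:
    """Build a unique S3 key so re-uploads never collide.
    The uploads/<user>/<timestamp>-<name> layout is an assumption."""
    stamp = when.strftime("%Y%m%d-%H%M%S")
    return f"uploads/{user_id}/{stamp}-{Path(filename).name}"

def store_upload(bucket: str, user_id: str, local_path: str) -> str:
    """Upload the processed file to S3 instead of the instance's disk.
    Requires boto3 and configured AWS credentials to actually run."""
    import boto3  # AWS SDK for Python
    key = object_key(user_id, Path(local_path).name,
                     datetime.now(timezone.utc))
    boto3.client("s3").upload_file(local_path, bucket, key)
    return key

print(object_key("user42", "results_map_GVA.csv",
                 datetime(2021, 12, 13, 12, 32, 58)))
# uploads/user42/20211213-123258-results_map_GVA.csv
```

The Java SDK call (`S3Client.putObject`) is analogous; the point is that the object lives in S3, visible from the console or the CLI, rather than inside a Docker layer on the instance.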

Access denied when trying to upload audio files to aws s3 bucket

I'm trying to upload audio files to a folder in my S3 bucket by dragging and dropping them from my laptop and hitting the upload button once I have dropped the last file. Some of the files failed to upload and instead gave me an error message saying:
Access Denied. You don't have permissions to upload files and folders.
How do I fix that?
Adding to Frank Din's answer, I was able to upload a folder's 80+ images at once by selecting "Add Folder" instead of drag-and-dropping all the images at once.
I was eventually able to upload all the audio files.
I think the problem in my case was that I was trying to upload all the files at practically the same time by dragging and dropping them all in one go.
I fixed that by uploading the files one at a time, starting each only after the previous one had finished.
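The one-file-at-a-time fix can also be scripted instead of done by hand in the console. A sketch with a generic client object (anything exposing upload_file(path, bucket, key), e.g. boto3.client("s3"); the retry count and backoff are arbitrary choices, not AWS recommendations):

```python
import time

def upload_sequentially(client, bucket, paths, retries=3):
    """Upload files strictly one at a time, retrying each before moving
    on -- mirroring the 'one after another' fix described above.
    `client` is any object with upload_file(path, bucket, key)."""
    uploaded = []
    for path in paths:
        key = path.rsplit("/", 1)[-1]  # flat keys; adjust as needed
        for attempt in range(1, retries + 1):
            try:
                client.upload_file(path, bucket, key)
                uploaded.append(path)
                break
            except Exception:
                if attempt == retries:
                    raise
                time.sleep(2 ** attempt)  # simple backoff before retry
    return uploaded
```

With a real boto3 client this serializes the uploads exactly as the manual fix did, and the retry loop absorbs transient failures instead of surfacing them as console errors.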

Issue with uploading GeoLite2-City.mmdb.missing file in mautic

I have a Mautic marketing automation instance installed on my server (I am a beginner).
I ran into this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory
and uploaded the file (the original GeoLite2-City.mmdb is 0 bytes, while the newly added file is about 6000 KB).
However, once I go back into Mautic to run the lookup, the newly added file reverts to 0 bytes and I still can't get the IP lookup configured.
I have also changed the file permissions to 0744, but the issue persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue, please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and where possible, run the update at command line to avoid browser timeouts :)
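The manual step the error message asks for (download, extract if necessary, upload) can also be scripted so that a zero-byte result is caught immediately instead of being discovered later in the dashboard. A small sketch (the paths are placeholders for your own download and Mautic ip_data locations):

```python
import gzip
import shutil
from pathlib import Path

def extract_mmdb(gz_path: str, dest_path: str) -> int:
    """Gunzip a downloaded GeoLite2-City.mmdb.gz into dest_path and
    return the extracted size, refusing to leave a 0-byte database."""
    with gzip.open(gz_path, "rb") as src, open(dest_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    size = Path(dest_path).stat().st_size
    if size == 0:
        raise RuntimeError(f"{dest_path} extracted to 0 bytes")
    return size
```

Run this before uploading the .mmdb, and remember (per the answer above) to disable whatever cron job or dashboard button would otherwise overwrite the file you placed.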

AWS CLI sync from S3 to local fails with Errno 20

I'm using the following command
aws s3 sync s3://mys3bucket/ .
to download all the files AND directories from my S3 bucket "mys3bucket" into an empty folder. In this bucket is a directory called "albums". However, instead of copying the files into an "albums" directory, I am receiving the following error message (an example):
download failed: s3://mys3bucket//albums/albums/5384 to albums/albums/5384 [Errno 20] Not a directory: u'/storage/mys3bucket//albums/albums/5384'
When I look in the folder to see what files, if any, did get copied into the albums folder, there is only one file there, called "albums", which when I open it contains the text "{E40327AD-517B-46e8-A6D2-AF51BC263F50}".
This behavior is similar for all the other directories in this bucket. I see far more Errno 20 errors than successful downloads. There is over 100 GB of image files in the albums folder, but not a single one can be downloaded.
Any suggestions?
I suspect the problem here is that you have both a 'directory' and a 'file' on S3 which have the same name. If you delete the 'file' from S3 then you should find that the directory will sync again.
I have found that this situation can occur when using desktop clients to view an S3 bucket, or something like s3sync.
http://www.witti.ws/blog/2013/12/03/transitioning-s3sync-aws-cli/
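If you want to locate the offending keys before deleting anything, the file-vs-directory clash is easy to detect from a flat key listing (for example, the keys returned by `aws s3api list-objects-v2 --bucket mys3bucket --query "Contents[].Key"`). A sketch:

```python
def conflicting_keys(keys):
    """Return object keys that also exist as a 'directory' prefix of
    other keys -- the same-name clash that makes `aws s3 sync` fail
    with [Errno 20] Not a directory. `keys` is a list of key strings."""
    key_set = set(keys)
    return sorted(k for k in key_set
                  if any(other.startswith(k + "/") for other in key_set))

print(conflicting_keys([
    "albums/albums",        # plain object...
    "albums/albums/5384",   # ...that is also a prefix: conflict
    "albums/cover.jpg",
]))
# ['albums/albums']
```

Each key this reports is a stray "file" object (often left behind by a desktop client) that you can delete from S3, after which the sync should proceed.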