I have a Spring Boot server that is deployed to an Elastic Beanstalk environment in AWS. The basic functionality is this:
1. Upload a file to the server.
2. The server processes the file by doing some data manipulation.
3. The file that is created is then sent to a user via email.
The strange thing is that the functionality mentioned above is working. The output file is sent to my email inbox successfully. However, the file cannot be seen when I SSH into the instance. The entire directory that gets created for the data manipulation is just not there. I have looked everywhere.
To test this, I even created a simple function in my Spring Boot controller like this:
@GetMapping("/")
public ResponseEntity<String> dummyMethod() {
    // TODO: remove the directory-creation line below after testing
    new File(directoryToCreate).mkdirs();
    return new ResponseEntity<>("Successful health check. Status: 200 - OK", HttpStatus.OK);
}
If I use Postman to hit this endpoint, the directory CANNOT be seen from the terminal I am SSHed into. The program works, so I know the code is correct in that sense, but it's as if the files and directories are invisible to me.
Furthermore, if I run the server locally (on Windows OR Linux) and hit this endpoint, the directory is created successfully.
Update:
I found where the app lives in the environment, at /var/app. But my folders and files are still not there; only the source code files, etc. are there. The files that my server is supposed to be creating are still missing. I can even print out the absolute path to a file after creating it, but that file still doesn't exist. Here is an example:
Files.copy(source, dest);
logger.info("Successfully copied file to: {}", dest.getAbsolutePath());
will print...
Successfully copied file to: /tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58/results_map_GVA.csv
That path DOES NOT exist on my server, yet the server code CAN email the file to me after it is processed. But if I SSH into the instance and go to that path, nothing is there.
If I use the command: find . -name "GVA*" (to search for the file I am looking for) then it prints this:
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
This looks like it is tracking differences between versions of files, since I see diff and merged in the paths. I just want to find where the file is actually residing.
If you need to store an uploaded file somewhere from a Spring Boot app, look at using an Amazon S3 bucket as opposed to writing the file to a folder on the server. For example, assume you are working with a photo app and the photos can be uploaded via the Spring Boot app. Instead of placing them in a directory on the server, use the Amazon S3 Java API to store the file in an Amazon S3 bucket.
Here is an example of using a Spring Boot app and handling uploaded files by placing them in a bucket:
Creating a dynamic web application that analyzes photos using the AWS SDK for Java
This example app also shows you how to use the SES API to send data (a report in this example) to a user via email.
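The example linked above uses the AWS SDK for Java; purely to illustrate the idea of "upload to a bucket instead of the local disk", here is a minimal sketch in Python with boto3, where the bucket name and object key are placeholders and credentials are assumed to come from the environment or an instance role:

import boto3

# Placeholder bucket/key names -- adjust for your environment.
s3 = boto3.client("s3", region_name="us-east-1")

with open("results_map_GVA.csv", "rb") as f:
    s3.upload_fileobj(f, "my-upload-bucket", "Test-Results/results_map_GVA.csv")

# The object can later be downloaded (or emailed) without ever relying on the app server's local disk.

The same pattern applies with the Java SDK; the point is that the processed output never needs to live on the instance's filesystem.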
I have Mautic marketing automation installed on my server (I am a beginner).
However, I hit this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory.
I uploaded the file (the original GeoLite2-City.mmdb is '0 bytes', while the newly added file is about '6000 KB').
However, once I go back into Mautic to run the lookup, the newly added file reverts back to '0 bytes' and I still can't get the IP lookup configured.
I have also changed the file permissions to 0744, but the issue persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue; please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and, where possible, run the update at the command line to avoid browser timeouts :)
In the FileZilla client, when a local folder is dragged and dropped into a remote directory, which part of the FileZilla code recursively sends the commands to transfer (upload) all local files and sub-folders (within the selected local folder) to the remote end?
My main purpose is to insert a command to either list or refresh the remote directory once the upload is complete. Although this already happens for the FTP and SFTP protocols, I am not able to do so for the Storj feature.
I have tried including the "list" or refresh commands at the following points in the code:
at the end of the "put" command within the /src/storj/fzstorj.cpp file
after the "Transfers finished" notification in the void CQueueView::ActionAfter(bool warned) function in the /src/interface/QueueView.cpp file
Reason: this notification is displayed when all files and sub-folders of a selected local folder have been uploaded to a Storj bucket.
I also tried tracking the files that take part in the process, mainly those within the /src/engine/storj folder, such as file_transfer.cpp, which sends the "put" command through the int CStorjFileTransferOpData::Send() function.
This did not help much.
While checking what issues commands to the Storj engine, I observed it is done by calling void CCommandQueue::ProcessCommand(CCommand *pCommand, CCommandQueue::command_origin origin) in /src/interface/commandqueue.cpp.
The expected outcome is auto-refreshing of the Storj bucket or upload path once all desired files and sub-folders have been uploaded from the local end through the FileZilla client.
Any hint towards the solution would be of great help to me.
Thank You!
I am using the Google Speech API in my Django web app. I have set up a service account for it and am able to make API calls locally. I have pointed the local GOOGLE_APPLICATION_CREDENTIALS environment variable to the service account's JSON file, which contains all the credentials.
I have tried setting Heroku's GOOGLE_APPLICATION_CREDENTIALS environment variable by running:
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(< myProjCreds.json)"
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: {
^^ It gets truncated at the first occurrence of " in the JSON file, which is immediately after {
and
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS='$(< myProjCreds.json)'
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: $(< myProjCreds.json)
^^ The command gets saved into the environment variable
I tried setting Heroku's GOOGLE_APPLICATION_CREDENTIALS env variable to the content of the service account's JSON file, but it didn't work (because apparently this variable's value needs to be an absolute path to the JSON file). I found a method which authorizes a developer account without loading a JSON file, instead using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY. Here is the GitHub discussion page for it.
I want something similar (or something different) for my Django web app, and I want to avoid uploading the JSON file to my Django web app's directory (if possible) for security reasons.
Depending on which library you are using to communicate with the Speech API, you may use several approaches:
You may serialize your JSON data using base64 or something similar and set the resulting string as one environment variable. Then, during your app's boot, you can decode this data and configure your client library appropriately (see the sketch after this list).
You may set each pair from the credentials file as a separate env variable and use them accordingly. Maybe the library that you're using supports authentication using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY, similar to the Ruby client that you're linking to.
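A minimal sketch of the first approach, assuming the google-auth and google-cloud-speech libraries and an env variable named GOOGLE_CREDENTIALS_B64 (the variable name is just an example):

import base64
import json
import os

from google.cloud import speech
from google.oauth2 import service_account

# Assumed to have been set beforehand with something like:
#   heroku config:set GOOGLE_CREDENTIALS_B64="$(base64 myProjCreds.json)"
encoded = os.environ["GOOGLE_CREDENTIALS_B64"]
info = json.loads(base64.b64decode(encoded))

# Build credentials from the in-memory dict instead of a file on disk.
credentials = service_account.Credentials.from_service_account_info(info)
client = speech.SpeechClient(credentials=credentials)

from_service_account_info avoids ever writing the key to the filesystem, which fits the "no JSON file in the repo" requirement.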
EDIT:
Assuming that you are using Google's official client library, you have several options for authenticating your requests, including the one that you are using (service account): https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html. You may save your credentials to a temp file and pass its path to the Client object (https://google-auth.readthedocs.io/en/latest/user-guide.html#service-account-private-key-files), but it seems to me that this is a very hacky workaround. There are a couple of other auth options that you may use.
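If you do go the temp-file route, a rough sketch could look like this (the env variable name GOOGLE_CREDENTIALS_JSON is just an example):

import os
import tempfile

# Assume the raw JSON key was stored in an env variable, e.g. GOOGLE_CREDENTIALS_JSON.
key_json = os.environ["GOOGLE_CREDENTIALS_JSON"]

# Write it to a temp file and point GOOGLE_APPLICATION_CREDENTIALS at it,
# so any Google client library picks it up automatically.
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
    f.write(key_json)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = f.name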
EDIT2:
I've found one more link with a more robust approach: http://codrspace.com/gargath/using-google-auth-apis-on-heroku/. It is Ruby code, but you can do something similar in Python for sure.
Let's say the filename is key.json
First, copy the content of the key.json file and add it to an environment variable, let's say KEY_DATA.
Solution 1:
If my command to start the server is node app.js, I'll use echo "$KEY_DATA" > key.json && node app.js
This will create a key.json file with the data from KEY_DATA and then start the server.
Solution 2:
Read the data from the KEY_DATA env variable into a variable and then parse it as JSON, so you have an object which you can pass for authentication purposes.
Example in Node.js:
const data = process.env.KEY_DATA;
const dataObj = JSON.parse(data);
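// The parsed object can then be handed to the client library's credentials option,
// for example (assuming @google-cloud/speech): new SpeechClient({ credentials: dataObj });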
I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this; more specifically, the S3 upload applies only to the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an 'updated' message but nothing actually happens: there is no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project. I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where the X's stand in for my actual keys, obviously).
This method of running the project appears to be slightly different from the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant the API user permission to upload objects.
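For example, a bucket policy along these lines grants an IAM user permission to put objects (the account ID, user name, and bucket name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/grails-upload-user" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}

Alternatively, attach an equivalent IAM policy to the user whose access key and secret key you pass with -Daws.accessKeyId / -Daws.secretKey.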
I'm working on a Laravel 5.2 application where users can send a file by POST; the application stores that file in a certain location and retrieves it on demand later. I'm using Amazon Elastic Beanstalk. For local development on my machine, I would like the files to be stored in a specified local folder on my machine. And when I deploy to AWS-EB, I would like it to automatically switch over and store the files in S3 instead. So I don't want to hard-code something like \Storage::disk('s3')->put(...) because that won't work locally.
What I'm trying to do here is similar to what I was able to do for environment variables for database connectivity... I was able to find some great tutorials where you create an .env.elasticbeanstalk file, create a config file at ~/.ebextensions/01envconfig.config to automatically replace the standard .env file on deployment, and modify a few lines of your database.php to automatically pull the appropriate variable.
How do I do something similar with file storage and retrieval?
Ok. Got it working. In /config/filesystems.php, I changed:
'default' => 'local',
to:
'default' => env('DEFAULT_STORAGE') ?: 'local',
In my .env.elasticbeanstalk file (see the original question for an explanation of what this is), I added the following (I'm leaving out my actual key and secret values):
DEFAULT_STORAGE=s3
S3_KEY=[insert your key here]
S3_SECRET=[insert your secret here]
S3_REGION=us-west-2
S3_BUCKET=cameraflock-clips-dev
Note that I had to specify my region as us-west-2 (the region code), even though the S3 console displays my environment's region as Oregon.
In my upload controller, I don't specify a disk. Instead, I use:
\Storage::put($filePath, $filePointer, 'public');
This way, it always uses my "default" disk for the \Storage operation. If I'm in my local environment, that's my public folder. If I'm in AWS-EB, then my Elastic Beanstalk .env file goes into effect and \Storage defaults to S3 with appropriate credentials.