Unable to upload to S3 with Grails S3 Demo Application - amazon-web-services

I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this; more specifically, the S3 upload applies only to the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an 'updated' message but nothing actually happens: no URL is inserted into the table in dbconsole.
I think the issue lies with how I am running the project. I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am substituting my keys for the X's, obviously).
This method of running the project appears to be slightly different from the method shown in the example. I run my project from the command line; I do not use GGTS, just Sublime.
I have also tried putting my AWS keys in application.yml, but then I get an internal server error.
Can anyone help me out here?

Check your bucket policy in S3. You need to grant the API user permission to upload objects to the bucket.
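For reference, a minimal bucket policy of that shape might look like the sketch below; the account ID, user name, and bucket name are placeholders, not values from the question, and an equivalent identity policy attached to the IAM user would work just as well.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDemoAppUploads",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/grails-s3-demo" },
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}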

Related

aws s3 sync --delete does not remove directories manually created from the console

aws s3 sync <> <> --delete works fine, but I have a scenario wherein someone created directories using the AWS console and manually uploaded some files inside those directories. Now when I run the sync command, those files get removed, but the manually created directories still persist.
Is this expected behavior of the command?
This issue is tracked here - https://github.com/aws/aws-cli/issues/2685 . It is a known bug and no direct solution is available yet, so we need to come up with workarounds that suit our situation.
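One possible workaround, depending on your situation, is to delete the zero-byte "folder" placeholder objects that the console creates, since sync only reconciles real object keys. A sketch (the bucket and prefix here are placeholders, not from the original question):

aws s3api delete-object --bucket my-bucket --key "manually-created-folder/"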

Spring Boot server in Elastic Beanstalk creates files that I can't see

I have a Spring Boot server that is deployed to an Elastic Beanstalk environment in AWS. The basic functionality is this:
1. Upload a file to the server.
2. The server processes the file by doing some data manipulation.
3. The resulting file is sent to a user via email.
The strange thing is that the functionality mentioned above is working. The output file is sent to my email inbox successfully. However, the file cannot be seen when SSHed into the instance. The entire directory that gets created for the data manipulation is just not there. I have looked everywhere.
To test this, I even created a simple method in my Spring Boot controller like this:
@GetMapping("/")
public ResponseEntity<String> dummyMethod() {
    // TODO: remove the line below after testing
    new File(directoryToCreate).mkdirs();
    return new ResponseEntity<>("Successful health check. Status: 200 - OK", HttpStatus.OK);
}
If I use Postman to hit this endpoint, the directory CANNOT be seen via the terminal I am SSHed into. The program is working, so I know the code is correct in that sense, but it's like the files and directories are invisible to me.
Furthermore, if I run the server locally (on Windows OR Linux) and hit this endpoint, the directory is created successfully.
Update:
I found where the app lives in the environment, at /var/app. But my folders and files are still not there; only the source code files etc. are there. The files that my server is supposed to be creating are still missing. I can even print out the absolute path to the file after creating it, but the file still doesn't exist. Here is an example:
Files.copy(source, dest);
logger.info("Successfully copied file to: {}", dest.getAbsolutePath());
will print...
Successfully copied file to: /tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58/results_map_GVA.csv
That path DOES NOT exist on my server, but I CAN have the server code email the file to me after it is processed. Yet if I SSH into the instance and go to that path, nothing is there.
If I use the command: find . -name "GVA*" (to search for the file I am looking for) then it prints this:
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
But this looks like it is keeping track of differences between versions of files, since I see diff and merged in the paths. I just want to find where that file actually resides.
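(For what it's worth, diff and merged under /var/lib/docker/overlay2 are how Docker's overlay2 storage driver lays out a container's filesystem, which suggests the app is running inside a Docker container on the instance rather than directly on the host. If that's the case, a quick way to look at the files from the host would be something like the sketch below, with the container ID taken from docker ps; the path is the one from the log line above.)

sudo docker ps
sudo docker exec -it <container-id> ls -l "/tmp/TESTING/Test-Results"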
If you need to store an uploaded file somewhere from a Spring Boot app, look at using an Amazon S3 bucket as opposed to writing the file to a folder on the server. For example, assume you are working with a photo app and the photos can be uploaded via the Spring Boot app. Instead of placing them in a directory on the server, use the Amazon S3 Java API to store the files in an Amazon S3 bucket.
Here is an example of using a Spring Boot app and handling uploaded files by placing them in a bucket:
Creating a dynamic web application that analyzes photos using the AWS SDK for Java
This example app also shows you how to use the SES API to send data (a report in this example) to a user via email.
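A minimal sketch of that idea using the AWS SDK for Java v2 is below; the bucket name, key, and region are placeholders (the file path just reuses the one from the question), and the linked example has its own, more complete structure.

import java.nio.file.Path;
import java.nio.file.Paths;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class UploadToS3 {
    public static void main(String[] args) {
        // Placeholders: replace with your own bucket, key, and region.
        String bucket = "my-upload-bucket";
        String key = "uploads/results_map_GVA.csv";
        Path file = Paths.get("/tmp/TESTING/Test-Results/results_map_GVA.csv");

        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            // Stream the local file straight into the bucket instead of
            // leaving it on the instance's (ephemeral) filesystem.
            s3.putObject(PutObjectRequest.builder()
                            .bucket(bucket)
                            .key(key)
                            .build(),
                    file);
        }
    }
}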

AWS .Net API - The provided token has expired

I am facing this weird scenario. I generate my AWS AccessKeyId, SecretAccessKey and SessionToken by running the assume-role-with-saml command. After copying these values to the .aws\credentials file, I can run "aws s3 ls" and see all the S3 buckets. Similarly, I can run any AWS command to view objects and it works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on other colleagues' computers. We all have access to AWS through the same role. There are no users in the IAM console.
Here is the sample code, though I doubt the code itself is the problem, because it works fine on other users' computers.
var _ssmClient = new AmazonSimpleSystemsManagementClient();
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
    Name = "/KEY1/KEY2",
    WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();
Any idea why running commands through the CLI works but the API calls don't? Don't they both look at the same %USERPROFILE%\.aws\credentials file?
I found it. Posting here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit
Back up all the files and folders there, then delete them from that location.
This solution applies only if you can run commands like "aws s3 ls" and get results successfully, but you get the error "The provided token has expired" when doing the same through the .NET API libraries.
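If deleting the Toolkit cache feels too drastic, another option is to load the profile explicitly from the shared credentials file so that whatever the SDK or Toolkit has cached elsewhere cannot shadow it. A sketch, assuming the profile is named "default" and the region is us-east-1 (both are placeholders):

using Amazon;
using Amazon.Runtime.CredentialManagement;
using Amazon.SimpleSystemsManagement;

// Read the profile straight from %USERPROFILE%\.aws\credentials, the same
// file the CLI uses, and hand its credentials to the client explicitly.
var sharedFile = new SharedCredentialsFile();
if (sharedFile.TryGetProfile("default", out var profile) &&
    AWSCredentialsFactory.TryGetAWSCredentials(profile, sharedFile, out var credentials))
{
    var _ssmClient = new AmazonSimpleSystemsManagementClient(credentials, RegionEndpoint.USEast1);
    // ... use _ssmClient as in the snippet above ...
}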

AWS Pipeline: Staging local files to S3 failed. The request signature we calculated does not match the signature you provided

Here's my setup:
I am trying to copy files from an external Webserver to a S3 Bucket using the DataPipeline.
To do this I'm using the ShellCommandActivity, which uses a script to download the files to the output bucket specified in the pipeline. In the script I use the environment variable ${OUTPUT1_STAGING_DIR} to address the bucket. Of course I set 'staging' to true in my pipeline.
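(For context, a script of that shape might look roughly like the sketch below; the URL and file name are made-up placeholders, not from the original post. With staging enabled, Data Pipeline copies whatever the script leaves in ${OUTPUT1_STAGING_DIR} to the S3 DataNode once the activity finishes.)

#!/bin/bash
# Hypothetical download step; OUTPUT1_STAGING_DIR is injected by Data Pipeline
# when staging is enabled on the ShellCommandActivity.
curl -sSf -o "${OUTPUT1_STAGING_DIR}/export.csv" "https://example.com/exports/export.csv"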
When the script finishes, the state of the Activity becomes "FAILED" with following Error:
Staging local files to S3 failed. The request signature we calculated does not match the signature you provided. Check your key and signing method
When I look in the stdout file, I can see that my script finished successfully; only the staging to the bucket did not work.
I reckon this could be a permission problem with the bucket, but I have no idea what I would have to change!
I came across some discussions where people got this error because the path to the bucket was configured incorrectly, so this is what I put in the pipeline DataNode's Directory Path:
s3://testBucket
Is this correct?
I would appreciate any help here!
The problem was the DataNode Directory Path: it cannot be just a bucket, it HAS to be a directory inside a bucket.
Like this:
s3://testBucket/test
Great work with the error messages, Amazon!

Cannot run 'rake paperclip:refresh:thumbnails CLASS=Spree::Image' in a Rails Spree app console, getting No Such Key

I am trying to run RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
on my remote server in my current Rails app directory, so I can refresh the Spree images that I uploaded in the past.
I am using S3, and my bucket is set up correctly, as I can see each of my products' images in individual ID folders in my AWS S3 bucket.
But each time I run the above command I get a 'No Such Key' error and the rake task is aborted.
This command works fine when run locally (obviously without the RAILS_ENV=production part).
OK, so I wrote this question to answer it myself. I hope the question makes sense.
For clarity, I had this issue because of old images (with old, non-existent paths associated with an old S3 key) that I had uploaded with a different S3 key during earlier testing on the same Rails app, while trying to get S3 to work with my Rails Spree application.
What I did to solve this was go into my Rails console on my remote server with this command:
$RAILS_ENV=production rails c
I then listed all Spree::Images, ordered by when their attachments were last updated, with this:
$y Spree::Image.all(:order => 'attachment_updated_at')
The 'y' is a nice little YAML way of displaying the Spree::Image's information in a form that's a little more human-readable.
Next I looked at the ID of each image and noticed that a good number of them had IDs that did not match folders in my AWS S3 bucket.
In my case, the lowest ID that actually had a matching folder in my S3 bucket was 1078, so I ran this:
$Spree::Image.where('id < ?', 1078).destroy_all
This deleted any Spree::Image that had an ID of 1077 or less.
Finally, I closed the Rails console and ran this on my remote server inside my current Rails app directory (in my case /home/deployer/apps/potentialapp/current/):
$RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
This reformatted my uploaded images on Spree and everything is now working great.
Hope this saves someone a great big headache. (Oh, and empty your cache when you go to test whether the images have in fact reloaded; I almost cried at 4 am last night.)
I solved the same problem using the console and skipping errors (old/broken S3 assets):
Spree::Image.all.each { |i| i.attachment.reprocess! rescue nil }