Cannot run 'rake paperclip:refresh:thumbnails CLASS=Spree::Image' in Rails Spree app console, getting No Such Key

I am trying to run RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
on my remote server, in my current Rails app directory, so I can refresh the Spree images I have uploaded in the past.
I am using S3, my bucket is setup correctly as I can see each of my product's images in individual ID folders in my AWS S3 bucket.
But each time I run the above command I get a 'No Such Key' error and the rake task is aborted.
The command runs fine locally (without RAILS_ENV=production, obviously).

OK, so I wrote this question to answer it myself. I hope the question makes sense.
For clarity, I had this issue because of old images (old, non-existent paths associated with an old S3 key) that I had uploaded under a different S3 key during earlier testing on the same Rails app, back when I was first getting S3 to work with my Rails Spree application.
What I did to solve this was go into my Rails console on my remote server with this command:
$ RAILS_ENV=production rails c
I then listed all Spree::Images, ordered by update time, with this:
$ y Spree::Image.all(:order => 'attachment_updated_at')
The 'y' is a nice little YAML way of displaying each Spree::Image's information that's a little more human-readable.
Next I looked at the ID of each image and noticed that a good number of them had IDs that did not match any folder in my AWS S3 bucket.
In my case the lowest ID that did in fact have a folder in my S3 bucket was 1078, so I ran this:
$ Spree::Image.where('id < ?', 1078).destroy_all
This deleted any Spree::Image that had an ID of 1077 or less.
Finally, I closed the Rails console and ran this on my remote server inside my current Rails app directory (in my case /home/deployer/apps/potentialapp/current/):
$ RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
This regenerated the thumbnails for my uploaded images on Spree, and everything is now working great.
Hope this saves someone a great big headache. (Oh, and empty your cache when you go to test whether the images have in fact reloaded; I almost cried at 4 am last night.)

I solved the same problem using the console and skipping errors (old/broken S3 assets):
# Reprocess every attachment; the inline rescue skips records whose S3 object is missing
Spree::Image.all.each { |i| i.attachment.reprocess! rescue nil }

Related

Spring Boot server in Elastic Beanstalk creates files that I can't see

I have a Spring Boot server deployed to an Elastic Beanstalk environment in AWS. The basic functionality is this:
1. Upload a file to the server
2. The server processes file by doing some data manipulation.
3. Then the file that is created is sent to a user via email.
The strange thing is that the functionality mentioned above is working. The output file is sent to my email inbox successfully. However, the file cannot be seen when I SSH into the instance. The entire directory that gets created for the data manipulation is just not there. I have looked everywhere.
To test this, I even created a simple function in my Springboot Controller like this:
#GetMapping("/")
public ResponseEntity<String> dummyMethod() {
// TODO : remove line below after testing
new File(directoryToCreate).mkdirs();
return new ResponseEntity<>("Successful health check. Status: 200 - OK", HttpStatus.OK);
}
If I use Postman to hit this endpoint, the directory CANNOT be seen via the terminal I am SSHed into. The program works, so I know the code is correct in that sense, but it's like the files and directories are invisible to me.
Furthermore, if I were to run the server locally (using Windows OR Linux) and hit this endpoint, the directory is successfully created.
Update:
I found where the app lives in the environment, at /var/app. But my folders and files are still not there; only the source code files etc. are there. The files that my server is supposed to be creating are still missing. I can even print out the absolute path to a file after creating it, but that file still doesn't exist. Here is an example:
Files.copy(source, dest);
logger.info("Successfully copied file to: {}", dest.getAbsolutePath());
will print...
Successfully copied file to: /tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58/results_map_GVA.csv
That path DOES NOT exist on my server, yet the server code CAN email me the file after it is processed. But if I SSH into the instance and go to that path, nothing is there.
If I use the command: find . -name "GVA*" (to search for the file I am looking for) then it prints this:
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/diff/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-09 18.15.59
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.26.34
./var/lib/docker/overlay2/fbf04e23e39d61896a1c935748a63f2d3836487d9b166bae490764c30b8870ae/merged/tmp/TESTING/Test-Results/GVA_output_2021-12-13 12.32.58
But this looks like it is keeping track of differences between versions of files, since I see diff and merged in the file paths. I just want to find where that file is actually residing.
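Those overlay2 paths are actually the giveaway: /var/lib/docker/... means this Beanstalk platform runs the app inside a Docker container, and diff and merged are layers of the container's filesystem, not version history. The files exist inside the container, which is why they are invisible from the host. Assuming the single-container Docker platform (the container ID below is a placeholder), opening a shell inside the running container should reveal them:
$ sudo docker ps
$ sudo docker exec -it <container-id> sh
# ls /tmp/TESTING/Test-Results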
If you need to store an uploaded file somewhere from a Spring Boot app, look at using an Amazon S3 bucket rather than writing the file to a folder on the server. For example, assume you are working with a photo app and the photos can be uploaded via the Spring Boot app. Instead of placing them in a directory on the server, use the Amazon S3 Java API to store the file in an Amazon S3 bucket.
Here is an example of a Spring Boot app that handles uploaded files by placing them in a bucket:
Creating a dynamic web application that analyzes photos using the AWS SDK for Java
This example app also shows you how to use the SES API to send data (a report in this example) to a user via email.
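As a rough sketch of that approach (this is not code from the linked example; the region, bucket name, and endpoint path are placeholders), a controller can hand an uploaded MultipartFile straight to S3 with the AWS SDK for Java v2:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import java.io.IOException;

@RestController
public class UploadController {

    // Built once and reused; credentials come from the default provider
    // chain (on Beanstalk, the instance profile role)
    private final S3Client s3 = S3Client.builder()
            .region(Region.US_EAST_1) // placeholder region
            .build();

    @PostMapping("/upload")
    public ResponseEntity<String> upload(@RequestParam("file") MultipartFile file)
            throws IOException {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("example-upload-bucket") // placeholder bucket name
                .key(file.getOriginalFilename())
                .build();
        // Send the bytes straight to S3 instead of writing them to local disk
        s3.putObject(request, RequestBody.fromBytes(file.getBytes()));
        return ResponseEntity.ok("Stored " + file.getOriginalFilename() + " in S3");
    }
}

Because nothing is written to the container's filesystem, the "invisible file" problem above goes away, and the uploads survive instance replacement.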

Expo - changing the project name without deleting the entire project in TestFlight

Is it possible to change the name of an Expo project without having to go through the entire process of building it again and submitting to the app store?
I accidentally didn't update the project name in the app.json file and now am stuck with an app called exmilti in TestFlight.
It took a few days to build, submit, and get the approval for TestFlight so I would love to avoid that process if there is a simple fix.
When I attempt to rebuild it in the CLI with the new name I get an error:
Reason: Unexpected response, raw:
{"responseId":"ed00c05f-82a0-41d6-9a7c-b48d04e68a1a","resultCode":35,"resultString":"There were errors in the data supplied. Please correct and re-submit.","userString":"Multiple profiles found with the name 'com.myComapanyName.AppName AppStore'. Please remove the duplicate profiles and try again."}
Which suggests to me that I am going to have to fully remove the app from TestFlight (yikes), then re-upload the newly named app and wait for it to be approved again.
Any advice?
Update: I did not find an easier way, so I just compiled the project and pushed it via Expo again with the correct name.
It didn't take as long as I had expected.

Apache Superset permissions issue upload csv

Tried to upload a CSV on Superset installed on CentOS 7. It gives the error message "errno 13 permission denied on /app/superset/app/".
Running chmod -R with an appropriate mode (for example 755) on the incubator-superset directory gave the necessary permissions recursively to the folder, subfolders, and files.
There was no need to restart the app either.
Which database are you trying to upload the CSV to? You cannot upload CSVs to the "main" or "examples" databases, to the best of my recollection; you'd have to connect another database of your own.
Once you have a viable database selected, you have to edit the database and check the Allow Csv Upload box, as well as make some changes in the Extra section, per the instructions shown right below that input.
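For reference, on Superset versions from around that time the Extra field is a JSON blob along these lines; the exact keys vary by version, so treat this as an assumption and follow the instructions shown below the input (schemas_allowed_for_csv_upload lists the schemas uploads may target):
{
    "metadata_params": {},
    "engine_params": {},
    "schemas_allowed_for_csv_upload": ["public"]
}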

Executing Alexa Tutorial Code ALWAYS Fails - Beginner

I'm new to Alexa Skill development and I'm sure this issue is process/environmental due to lack of experience.
Whenever I try to use a sample from an official Alexa tutorial, I can never get the skill to pass the first TEST - always getting an error :(
In this case I am trying to run and fiddle with this tutorial:
https://developer.amazon.com/blogs/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill
What is happening / What I've done:
I download the Node SDK from the Git link, I also download the sample from the Git link. I then create a new ZIP that contains the sample code with the Node SDK included in the path /src/alexa-sdk/
I go to AWS and create a new function, not using a blueprint. I 'author from scratch' and create a function with the Skills Kit as a trigger. I name the function and use Node 6.10 runtime.
I upload my ZIP file and leave all boxes default, for Role I choose Custom Role then pick Basic Execution from the Role screen.
I leave the rest blank, go to NEXT and CREATE.
The function is created okay, but I do see this error 'This function contains external libraries. Uploading a new file will override these libraries.'
Here's the problem - this is the point of failure on all tutorials I've tried so far. I go to Configure Test Event, I choose ALEXA START SESSION as the template and click Save And Test...
EXECUTION RESULT FAILED:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "require (internal/module.js:20:19)"
  ]
}
Here's something from associated error logs, unsure if it's useful:
Unable to import module 'index': Error
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
I have noticed two things that I suspect may be an issue:
1) When I go to the CODE tab for this function, I see this message:
Your Lambda function "testprojectx" cannot be edited inline since the file name specified in the handler does not match a file name in your deployment package.
2) When I look at the code that's inserted into the test when I choose ALEXA SESSION START, I see many instances of 'unique value here':
amzn1.echo-api.session.[unique-value-here]
Although, there is no mention of this in the tutorial link I am referencing.
I'm really downhearted about it now, as this is the 3rd tutorial I've tried to configure. Can anybody with experience follow the steps I've taken and point me in the right direction?
Thank you SO MUCH in advance if so.
EDIT: Absolute Clarification on how I am creating the ZIP file
I'm using Windows 10 and Chrome to download the files from GitHub.
I download the skill-sample-nodejs-decision-tree-master ZIP file from GitHub.
I do not know how to use NPM, so I do this simply by downloading to the desktop.
I then download the alexa-skills-kit-sdk-for-nodejs-master.ZIP file to desktop.
I unzip the contents of decision-tree-master into a folder on the desktop also called alexa-skills-kit-sdk-for-nodejs-master.
Within this folder, I navigate to /src/ and create a new folder called 'node_modules' within /src/.
Within /src/node_modules/ I now create another new folder called 'alexa-sdk'.
I unzip the contents of alexa-skills-kit-sdk-for-nodejs-master.zip into /src/node_modules/alexa-sdk/.
I have tried two approaches from here - both fail:
1) I ZIP only the contents of /src/ (not including the /src/ folder itself) and upload to Amazon.
2) I ZIP the entire 'decision-tree-master' folder and upload to Amazon.
I must be missing something, as I said this is just one of many Alexa tutorials I've tried to get working and this always happens :( So disheartened now.
This is a common issue I have seen in many posts. In most cases the problem is how the files were zipped. Instead of zipping the folder itself, select all the files inside it and zip those directly, so that index.js sits at the root of the archive rather than inside a wrapping folder.
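Assuming the function's Handler field is left at its default of index.handler (worth double-checking), the uploaded ZIP should unpack with no wrapping folder, roughly like this:
index.js
node_modules/
    alexa-sdk/
        ...
If index.js sits one level down inside a folder, Lambda looks for /var/task/index, cannot find it, and raises exactly the "Cannot find module '/var/task/index'" error shown above.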

Unable to upload to S3 with Grails S3 Demo Application

I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this; more specifically, the S3 upload is only for the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an 'updated' message but nothing actually happens; there's no inserted URL in the dbconsole table.
I think the issue lies in how I am running the project. I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am substituting the X's for my actual keys, obviously).
This method of running the project appears to be slightly different to the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant the API user permission to upload objects.
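As an illustrative sketch (the account ID, user name, and bucket name below are placeholders), a minimal bucket policy that lets a specific IAM user upload objects looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppUploads",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/grails-demo-user" },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-upload-bucket/*"
        }
    ]
}
Alternatively, skip the bucket policy and attach an IAM policy granting the same s3:PutObject permission directly to the user whose access keys the app runs with.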