Past Pushes on AWS parse-server dashboard not showing/connecting

AWS Elastic Beanstalk Parse Server 2.2.11 with dashboard 1.0.13. Everything works fine, including the test.html page: apps registering, posting, and pulling data from the DB, and the dashboard sending push notifications, all work.
The only issue is Past Pushes not showing/connecting. When I click the link I only get the activity indicator GIF.
How can I debug this? Where or how do I get to the log files?
(Could there be a conflict with all the old pushes sent prior to migration?)
(I have pulled the last 100 lines of the log off the AWS EB instance, to no avail.)
Thanks for the help.

Parse is aware of the issue and plans on fixing the problem in the next release of the dashboard.
https://recordnotfound.com/parse-dashboard-ParsePlatform-77430/issues
"Past Pushes infinite loop"

Related

AWS Glue Crawler and JDBC Connection: "Expected string length >= 1, but found 0 for params.Targets.JdbcTargets[0].customJdbcDriverClassName"

I am trying to set up an AWS Glue Crawler using a JDBC connection in order to populate my AWS Glue Data Catalog databases.
I already have a connection that passes the test, but when I submit my crawler creation I get this error: "Expected string length >= 1, but found 0 for params.Targets.JdbcTargets[0].customJdbcDriverClassName".
The only clue I have for now is that there is no class name attached to my connection; however, I cannot edit it while editing the connection.
Does this ring a bell for anyone?
Thanks a lot.
I've also had this issue, and I even tried using the aws-cli to create/update my connection in order to manually input the required parameter.
It turns out this is an AWS UI issue caused by a recent update. According to this post you can create the connection using the Legacy console for now (in the sidebar there is a Legacy section where you can find the Legacy pages). I just tried it on my end and it worked =)
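For reference, a rough sketch of what that aws-cli attempt can look like (the connection name, JDBC URL, credentials, and driver class below are all placeholders):
aws glue update-connection --name my-jdbc-connection --connection-input '{
  "Name": "my-jdbc-connection",
  "ConnectionType": "JDBC",
  "ConnectionProperties": {
    "JDBC_CONNECTION_URL": "jdbc:postgresql://my-host:5432/my_db",
    "USERNAME": "my_user",
    "PASSWORD": "my_password",
    "JDBC_DRIVER_CLASS_NAME": "org.postgresql.Driver"
  }
}'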

How to edit an already deployed pipeline in Data Fusion?

I am trying to edit a pipeline that is already deployed. I understand that we can duplicate the pipeline and rename it, but how can I make a change to the existing pipeline? Renaming it would require a change in the production scheduling jobs as well.
There is one way, through the HTTP calls executor.
Open https://<cdf instance url ..datafusion.googleusercontent.com>/cdap/httpexecutor
Select PUT (to change the pipeline code) from the drop-down and give:
namespaces/<namespaces_name>/apps/<pipeline_name>
Go to the body section and paste the new pipeline code (i.e. export the updated pipeline's JSON-formatted code).
Click SEND and the response should come back as "Deploy Complete" with status code 200.
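The same update can also be done from a terminal instead of the HTTP executor page; a rough sketch, assuming the instance's API endpoint (as shown by gcloud data-fusion instances describe) and an exported pipeline saved as updated-pipeline.json, both placeholders:
curl -X PUT \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @updated-pipeline.json \
  "https://<cdf instance api endpoint>/v3/namespaces/<namespaces_name>/apps/<pipeline_name>"
A 200 response with "Deploy Complete" means the deployed pipeline now carries the new code, the same as with the UI executor.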

AWS CloudWatch logs open from middle

All of a sudden the AWS CloudWatch logs started to open from the middle, or from the beginning of the log stream. They used to open from the end of the log stream, showing the latest lines. I wonder if this is something that I can configure, or has AWS just changed something?
It is really frustrating when you want to follow the progress of your Lambda app but cannot, because when you open the log in AWS it shows the first lines of that log stream, and in order to see the latest lines you need to set a custom time frame. It also doesn't allow you to set a future timestamp as the end time, which forces you to keep updating the end time to see the new lines. I hope there is a solution for getting it to open at the tail of the log stream.
Try clicking ALL in the timeframe option. For me, they recently started setting a start time, and logs are visible from that time onwards, like you described, but when I click ALL it shows the logs as it used to.
The second thing you can do is have a rolling start for the logs (e.g., the last 15 minutes or the last hour).
To do that, add:
;start=PT1H at the end of your URL if you want the last hour
;start=PT15M at the end of your URL if you want the last 15 minutes
You can change the numbers depending on the timeframe you want.
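For example, with a made-up Lambda log group, the modified console URL would look something like:
https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/aws/lambda/my-function;start=PT1H
Alternatively, if you mainly want to follow new lines as they arrive, the AWS CLI v2 can tail a log group directly: aws logs tail /aws/lambda/my-function --follow (the log group name is a placeholder).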

Unable to upload to S3 with Grails S3 Demo Application

I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this; more specifically, the S3 upload is only for the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an 'updated' message but nothing actually happens: there's no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project. I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am substituting my keys for the X's, obviously).
This method of running the project appears to be slightly different from the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant permissions to the API user to allow uploads.
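A minimal sketch of such a policy, assuming the bucket name, account ID, and IAM user below (all placeholders); it can be attached with aws s3api put-bucket-policy --bucket my-grails-demo-bucket --policy file://policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAppUploads",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/grails-demo-user" },
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-grails-demo-bucket/*"
    }
  ]
}
Equivalently, you can attach an IAM policy with the same actions to the user whose access key the app is using.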

Cannot run 'rake paperclip:refresh:thumbnails CLASS=Spree::Image' in Rails Spree app console, getting No Such Key

I am trying to run RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
on my remote server, in my current Rails app directory, so I can refresh the Spree images that I have uploaded in the past.
I am using S3, and my bucket is set up correctly, as I can see each of my products' images in individual ID folders in my AWS S3 bucket.
But each time I run the above command I get a 'No Such Key' error and the rake task is aborted.
This command runs locally and works fine (obviously without the RAILS_ENV=production locally).
OK, so I wrote this question in order to answer it myself. I hope the question makes sense.
For clarity, I had this issue because of old images (old, non-existent paths associated with an old S3 key) that I had uploaded with another S3 key during earlier testing on the same Rails app, while trying to get S3 working with my Rails Spree application.
What I did to solve this was go into my Rails console on my remote server with this command:
$RAILS_ENV=production rails c
I then listed all Spree::Images, ordered by attachment_updated_at, with this:
$y Spree::Image.all(:order => 'attachment_updated_at')
The 'y' gives a nice little YAML display of each Spree::Image's information that is a bit more human-readable.
Next, I looked at the ID of each image and noticed that a good number of them had IDs that did not match folders in my AWS S3 bucket.
In my case the lowest ID that was in fact a folder in my S3 bucket was 1078, so I ran this:
$Spree::Image.where('id < ?', 1078).destroy_all
This deleted any Spree::Image that had an ID of 1077 or less.
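(If you are unsure where your own cutoff is, it is worth previewing how many records the range covers before destroying anything, for example:
$Spree::Image.where('id < ?', 1078).count
in the same console session.)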
Finally, I closed the Rails console and ran this on my remote server inside my current Rails app directory (in my case it was /home/deployer/apps/potentialapp/current/):
$RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
This reformatted my uploaded images on Spree and everything is now working great.
Hope this saves someone a great big headache. (Oh, and empty your cache when you go to test whether the images have in fact reloaded; I almost cried at 4 am last night.)
I solved the same problem using the console and skipping errors (old/broken S3 assets):
Spree::Image.all.each { |i| i.attachment.reprocess! rescue nil }