We have updated the EMR version to emr-5.30.0. Since then we are getting a bootstrap error:
"Terminated with bootstrap error"
If I change the version back to emr-5.29.0, it works fine. I am not able to find the reason for the bootstrap error.
We are creating the EMR cluster from a Step Function.
We changed the version from emr-5.29.0 to emr-5.30.0 because we are adding managed autoscaling, and that is only supported after 5.29.0.
I checked the logs but could not find any proper error message. Please suggest some pointers to troubleshoot this.
An EMR version bump changes many things, including the versions of the applications you select to include, as #Snighdhajyoti mentioned; for example, emr-5.29.0 ships Spark 2.4.4 while emr-5.30.0 ships Spark 2.4.5. You can see the basic list of application changes here.
But the point is, there can be some application or package that you install or configure manually in your bootstrap script which conflicts with one of the updated packages.
As for logs, bootstrap logs don't appear in the cluster logs; they are in the stderr log for your bootstrap action, at a location like the one below:
s3://doc-example-bucket/cluster-id/node/instance-id/bootstrap-actions/
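If it helps, those stderr files can be pulled down and searched with the AWS CLI; the bucket, cluster ID and instance ID below are just the placeholders from the path above:
aws s3 cp --recursive \
    s3://doc-example-bucket/cluster-id/node/instance-id/bootstrap-actions/ ./bootstrap-logs/
# EMR compresses these logs, so zgrep searches them without unpacking first
zgrep -i "error" ./bootstrap-logs/*/stderr*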
This link provides some more guidance on how you can dig into the error, for example:
If you can't determine why the script failed after reviewing the stderr logs, modify your script to provide additional debug information. For example, set the -ex parameters in the bash script. This allows you to view the bash script flow in the bootstrap action log files.
Note: If the failed bootstrap action isn't a bootstrap action that you created (for example, if you created six bootstrap actions and the error message is "bootstrap action 7 failed with non-zero exit code"), it indicates that Amazon EMR couldn't install applications or start services. This problem is rare. To resolve this issue, try launching the cluster again.
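As a minimal sketch of that advice, the top of a bootstrap script could look like this (the yum package and S3 path are placeholders, not something from your setup):
#!/bin/bash
# -e aborts on the first failing command, -x echoes every command,
# so the failing step shows up in the bootstrap-action stderr log
set -ex
sudo yum install -y htop                                   # placeholder install step
aws s3 cp s3://your-bucket/app-config.conf /home/hadoop/   # placeholder config step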
So I have run the following commands to initialize my Amplify project:
amplify configure
amplify init
Then I run amplify add api.
When I select the "REST" API type, it says "There was an error adding the API resource". Running amplify status confirms that nothing was added. However, when I try to add the "GraphQL" API, it gives me the same error message, but running amplify status indicates that it was actually added successfully.
I want to add a REST API to my Amplify app. I'm not sure what the issue is. I have tried updating the Amplify CLI and reinitializing my project multiple times. Thanks in advance.
I am seeing a similar error at the step:
Try opening with system-default editor instead?
There are 2 options that helped me:
1. On this step, select No.
2. Install xdg-utils: sudo yum -y install xdg-utils (I am using an Amazon Linux 2 EC2 instance).
Try these options and let's see if it helps you.
I thought this would be an easy topic to find on the web, but I couldn't find a solution.
I deployed the parse-server-example on AWS Elastic Beanstalk according to the original documentation and it works perfectly. Can anyone give me a hint on how to update this server to the newest version? When I try to use parse-dashboard I get the error "server version too low".
I have already cloned the parse server with the EB CLI, but I do not know how, or which files, to update.
Thanks for any hint!
In package.json, you update the version next to 'parse-server'. I think by default this is '~2.0'?
Parse Dashboard requires Parse-Server to be '>=2.1.4'. However, I'm currently having issues when changing the parse-server version; it breaks my AWS server instance. I currently have an issue open on GitHub (https://github.com/ParsePlatform/parse-server-example/issues/109#issuecomment-198001722), so keep an eye on that.
But yeah, that's where you update your Parse-Server version, I believe!
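If you'd rather not edit the file by hand, the same bump can be done with npm, which rewrites the entry in package.json for you (the version number here is only an example; pick whichever release you actually need):
npm install --save parse-server@2.1.4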
Once you've done this locally on your machine, you obviously need to deploy the updates to AWS via the Beanstalk Dashboard, as this will install/update any node modules from package.json.
I am trying to deploy an application version, but the eb deploy command fails with:
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
I checked the logs, made some changes to the code, committed, and deployed again, and guess what, it failed again. The logs indicate the same error, disregarding my changes. The error occurs in a file in the directory /var/app/ondeck/app/; when I go and check, I can see that the previous version is still there.
I tried deploying using the Elastic Beanstalk dashboard, but somehow the instance is not receiving the new version. Can someone help me with this? Thanks.
I just had the same problem and noticed this in the documentation:
"Note
If you have initialized a git repository in your project folder, the EB CLI will always deploy the latest commit, even if you have pending changes. Commit your changes prior to running eb deploy to deploy them to your environment."
I made the commits and it worked fine.
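In other words, something like this before deploying (the commit message is just a placeholder):
git add -A
git commit -m "changes I actually want deployed"
eb deploy
# newer EB CLI versions also accept 'eb deploy --staged' to deploy staged, uncommitted files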
I am trying to view the logs of my running application on Bluemix using the "cf logs my-cool-app" command (CF CLI version 6.11).
It fails with:
FAILED
Loggregator endpoint missing from config file
Anyone seen this issue?
The problem appears to stem from the use of the 6.11 codebase for CF CLI and the current version of CloudFoundry that Bluemix is running. Good news is that an upcoming upgrade will alleviate the problem. We're investigating potential workarounds.
This is just an issue with the CF CLI version 6.11.
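Not part of the official fix, but while waiting for that upgrade two things may be worth trying: re-targeting the API endpoint so the CLI rewrites its config file, and moving to a different CLI release. The endpoint below is the Bluemix US-South API and is only an example:
cf api https://api.ng.bluemix.net
cf login
cf logs my-cool-app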
Is it possible or advisable to run WebHCat on an Amazon Elastic MapReduce cluster?
I'm new to this technology and I was wondering if it is possible to use WebHCat as a REST interface to run Hive queries. The cluster in question is running Hive.
I wasn't able to get it fully working, but WebHCat is actually installed by default on Amazon's EMR instances.
To get it running you have to do the following,
chmod u+x /home/hadoop/hive/hcatalog/bin/hcat
chmod u+x /home/hadoop/hive/hcatalog/sbin/webhcat_server.sh
export TEMPLETON_HOME=/home/hadoop/.versions/hive-0.11.0/hcatalog/
export HCAT_PREFIX=/home/hadoop/.versions/hive-0.11.0/hcatalog/
/home/hadoop/hive/hcatalog/sbin/webhcat_server.sh start
You can then confirm that it's running on port 50111 using curl,
curl -i http://localhost:50111/templeton/v1/status
To hit port 50111 from other machines, you have to open the port up in the EMR cluster's EC2 security group.
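If you prefer to script that rather than use the console, something along these lines works with the AWS CLI (the security group ID and CIDR range are placeholders; restrict the range to the machines that actually need access):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 50111 \
    --cidr 203.0.113.0/24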
You then have to configure the users you are going to "proxy" when you run queries in HCatalog. I didn't actually save this configuration, but it is outlined in the WebHCat documentation. I wish they had some concrete examples there, but basically I ended up configuring the local 'hadoop' user as the one that runs the queries. Not the most secure thing to do, I'm sure, but I was just trying to get it up and running.
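For reference, once the proxy user is sorted out, submitting a query looks roughly like this (the user name, query and status directory are illustrative values); the call returns a job id and the results land in the HDFS status directory:
curl -s -d user.name=hadoop \
     -d execute="select+count(*)+from+mytable;" \
     -d statusdir="/tmp/webhcat.output" \
     'http://localhost:50111/templeton/v1/hive'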
Attempting a query then gave me this error,
{"error":"Server IPC version 9 cannot communicate with client version
4"}
The workaround was to switch off the latest EMR image (3.0.4 with Hadoop 2.2.0) and onto a Hadoop 1.0 image (2.4.2 with Hadoop 1.0.3).
I then hit another issue where it couldn't find the Hive jar properly. After struggling with the configuration some more, I decided I had dumped enough time into trying to get this to work and chose to communicate with Hive directly instead (using RBHive for Ruby and JDBC for the JVM).
To answer my own question: it is possible to run WebHCat on EMR, but it's not documented at all (Googling led me nowhere, which is why I created this question in the first place; it's currently the first hit when you search "WebHCat EMR"), and the WebHCat documentation leaves a lot to be desired. Getting it to work seems like a pain, though my hope is that by writing up the initial steps someone will come along, take it the rest of the way, and post a complete answer.
I did not test it, but it should be doable.
EMR allows you to customise bootstrap actions, i.e. the scripts that run when the nodes are started. You can use bootstrap actions to install additional software and to change the configuration of applications on the cluster.
See more details at http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan-bootstrap.html.
I would create a shell script to install WebHCat and test it on a regular EC2 instance first (outside the context of EMR, just as a check to ensure the script is OK).
You can use EC2's user-data property to test your script, typically:
#!/bin/bash
curl http://path_to_your_install_script.sh | sh
Then, once you know the script is working, make it available to the cluster in an S3 bucket and follow these instructions to include your script as a custom bootstrap action of your cluster.
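With a current AWS CLI that last step looks roughly like this; the cluster name, release label, instance settings and script path are all placeholder values:
aws emr create-cluster \
    --name "webhcat-test" \
    --release-label emr-5.30.0 \
    --applications Name=Hive \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles \
    --bootstrap-actions Path=s3://your-bucket/install-webhcat.sh,Name=InstallWebHCat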
--Seb