Error while launching the EC2 instance - amazon-web-services

I created a VPS for RStudio and launched an EC2 instance with the following configuration.
I first chose the AMI (the default AMI for free-tier users).
Then I added the setup script for installing RStudio Server with credentials.
I also defined the security group rule opening port 8787 for accessing my server.
I launched the EC2 instance and the status checks passed.
I was now able to access RStudio Server with my credentials.
I then tried to read the data in my S3 bucket.
For this, I tried installing the RCurl package in R.
I got the following error:
Warning in install.packages :
installation of package ‘RCurl’ had non-zero exit status
Could someone help me resolve this issue?
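A non-zero exit status from install.packages usually means a system dependency failed during compilation; for RCurl that is almost always the missing libcurl development headers. A minimal sketch of the fix, assuming an Ubuntu-based AMI:
# Install the system libcurl headers RCurl compiles against
# (on Amazon Linux, use: sudo yum install -y libcurl-devel instead)
sudo apt-get update
sudo apt-get install -y libcurl4-openssl-dev
# Retry the install; the same install.packages() call also works from the R console
sudo Rscript -e 'install.packages("RCurl", repos = "https://cloud.r-project.org")'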

Related

AWS Replication Agent installation failed

I'm trying to install the AWS Replication Agent on an Ubuntu 20.04 server. As per the documentation, I have created an IAM user with the following AWS managed policies:
AWSElasticDisasterRecoveryAgentInstallationPolicy
AWSElasticDisasterRecoveryAgentPolicy
When I tried to install the agent on the Ubuntu 20.04 server, I received "Unexpected error" and "Installation failed", even after attaching the AdministratorAccess policy.
Unexpected Error
Installation failed.
Learn more about installation issues in our documentation at
https://docs.aws.amazon.com/drs/latest/userguide/Troubleshooting-Agent-Issues.html
Can anyone please let me know why I'm getting this error?
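For context, a sketch of fetching and re-running the installer once the policies are attached; the installer filename and URL pattern follow the DRS user guide, and the region and key values are placeholders for your setup:
# Download the agent installer for your replication region (placeholder region)
wget -O ./aws-replication-installer-init.py \
    https://aws-elastic-disaster-recovery-us-east-1.s3.us-east-1.amazonaws.com/latest/linux/aws-replication-installer-init.py
# Run it with the IAM user's keys (placeholders)
sudo python3 aws-replication-installer-init.py --region us-east-1 \
    --aws-access-key-id YOUR_ACCESS_KEY_ID \
    --aws-secret-access-key YOUR_SECRET_ACCESS_KEY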

Unable to deploy code on ec2 instance using codedeploy

I have a single EC2 instance running Ubuntu Server, and I am trying to implement a CI/CD flow using CodeDeploy with Bitbucket as the source. I have also installed the codedeploy-agent on the EC2 instance, and it is installed and running successfully, but whenever I deploy code to the instance, the deployment fails with the error shown below:
The overall deployment failed because too many individual instances failed deployment, too few
healthy instances are available for deployment, or some instances in your deployment group are
experiencing problems.
The CodeDeploy agent log file, which I am accessing using less /var/log/aws/codedeploy-agent/codedeploy-agent.log, shows the following error:
ERROR [codedeploy-agent(31598)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Missing credentials - please check if this instance was started with an IAM instance profile
I am unable to understand how I can overcome this error; could someone let me know?
The CodeDeploy agent requires IAM permissions provided by the IAM role/instance profile attached to your instance. The exact permissions needed are given in the AWS docs:
Step 4: Create an IAM instance profile for your Amazon EC2 instances
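A minimal sketch of attaching such a profile with the AWS CLI, assuming a role named CodeDeployEC2Role already exists with the permissions from that step (the profile name and instance ID are placeholders):
aws iam create-instance-profile --instance-profile-name CodeDeployEC2Profile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeployEC2Profile \
    --role-name CodeDeployEC2Role
# Attach the profile to the running instance (placeholder ID)
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=CodeDeployEC2Profile
# Restart the agent so it picks up the instance profile credentials
sudo service codedeploy-agent restart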

How to launch workers on AWS EC2 instance when running locally?

I have put together a distributed setup at my university using the Distributed package that comes with Julia for running some intensive simulations. I usually launch workers on local machines through ssh using addprocs.
I have launched a c5.24xlarge EC2 instance. The aws_key.pem file exists and I have run
chmod 400 aws_key.pem
I am able to ssh into the instance just fine.
I am trying to add workers with the following code:
workervec2 = [("ubuntu@ec2-xxxx:22", 24)]  # (user@host:port, worker count)
addprocs(workervec2; sshflags="-i aws_key.pem",
         tunnel=true, exename="/home/ubuntu/julia-1.0.4/bin/julia",
         dir="/home/ubuntu/simulator")
However, adding workers on my Amazon EC2 instance fails with the following error:
Warning: Identity file aws_key.pem not accessible: No such file or directory.
ubuntu@ec2-xxxx: Permission denied (publickey).
ERROR: LoadError: Unable to read host:port string from worker. Launch command exited with error?
The warning appears even when launching workers on the local machines, but there the launch goes through. Launching on my EC2 instance, however, fails with the error above, even though I am able to ssh in from the terminal. What is going wrong?
Adding the SSH key from my local machine to the EC2 instance's authorized_keys did the trick.
Then, workers can be added as usual:
workervec2 = [("ubuntu@ec2-xxxx:22", 24)]
addprocs(workervec2; sshflags="-i ~/.ssh/id_rsa",  # pass the private key, not the .pub
         tunnel=true, exename="/home/ubuntu/julia-1.0.4/bin/julia",
         dir="/home/ubuntu/simulator")

Chef on AWS - ERROR: Fog::Compute::AWS::Error: AuthFailure

Chef Workstation and Server are set up on AWS as follows:
Chef Development Kit Version: 0.10.0
Chef Server: 12.2
chef-client: 12.5
This setup has been working for around a year.
Today, I got the following error when creating an EC2 instance by executing the 'knife ec2 server create' command on the Chef workstation:
ERROR: Fog::Compute::AWS::Error: AuthFailure => AWS was not able to validate the provided access credentials
There has been no change in the AWS auth keys or file permissions. I'm not able to understand why this error appears all of a sudden.
Thanks for any pointers.
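One way to narrow this down is to check whether the same keys still validate outside of Chef/Fog; a diagnostic sketch, assuming the AWS CLI is configured with the keys knife uses (note that AWS request signatures are timestamped, so a drifted system clock can also produce AuthFailure even when the keys are unchanged):
# Confirm the access keys still validate with AWS
aws sts get-caller-identity
# Check for clock drift, since signed requests embed a timestamp
date -u
ntpdate -q pool.ntp.org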

SSH connection error - Permission denied (publickey)

I'm trying to run a Spark cluster on AWS using https://github.com/amplab/spark-ec2.
I've generated a key and login credentials, and I'm using this command:
./spark-ec2 --key-pair=octavianKey4 --identity-file=credentials3.csv --region=eu-west-1 --zone=eu-west-1c launch my-instance-name
However, I keep getting this:
Warning: SSH connection error. (This could be temporary.)
Host: ec2-myHostNumber.eu-west-1.compute.amazonaws.com
SSH return code: 255
SSH output: Warning: Permanently added 'ec2-myHostNumber.eu-west-1.compute.amazonaws.com,myHostNumber' (ECDSA) to the list of known hosts.
Permission denied (publickey).
If I quit the console and then try to start the cluster again, I get this:
Setting up security groups...
Searching for existing cluster my-instance-name in region eu-west-1...
Found 1 master, 1 slave.
ERROR: There are already instances running in group my-instance-name-master or my-instance-name-slaves
The command is incorrect. The key pair name should be the one you specified in AWS, and the identity file is the associated .pem file. You can't SSH into a machine with AWS credentials (your CSV file contains account credentials, not an SSH key).
./spark-ec2 --key-pair=octavianKey4 --identity-file=octavianKey4.pem --region=eu-west-1 --zone=eu-west-1c launch my-instance-name
Can you add --resume to your spark-ec2 command and try? Your slave may not have the key; --resume will make sure it is transferred to the slave (see the sketch after the quoted docs below).
Running Spark on EC2
If one of your launches fails due to e.g. not having the right permissions on your private key file, you can run launch with the --resume option to restart the setup process on an existing cluster.
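Putting the two answers together, a sketch of re-running launch with the corrected identity file and --resume (key pair, region, and cluster name taken from the question):
# Re-run launch against the already-created cluster so setup resumes
./spark-ec2 --key-pair=octavianKey4 --identity-file=octavianKey4.pem \
    --region=eu-west-1 --zone=eu-west-1c --resume launch my-instance-name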