Rails Capistrano 3: Permission denied (publickey) on AWS EC2

I am trying to deploy my Rails (4.2) application to Amazon EC2, pulling the code from Bitbucket. I have added my id_new_rsa.pub key to authorized_keys on the server, and added the same SSH key to my Bitbucket account. I also started an agent and added the key with eval "$(ssh-agent -s)" and ssh-add ~/.ssh/id_new_rsa.
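As a quick sanity check (not in the original question), ssh -T can confirm that Bitbucket accepts the key:
ssh -T git@bitbucket.org    # should report successful authentication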
Using this key, I can SSH into the server as both the root and deploy users, but when I try to deploy my application I get the errors below.
I have spent three days trying to figure out a solution, with no luck so far. Please help.
deploy.rb
production.rb

Almost there, but one thing is missing: you are using the publickey auth method slightly incorrectly. Just add the following to your :ssh_options.
auth_methods: ["publickey"]
This is a working example with EC2 and Capistrano:
set :ssh_options, {
  forward_agent: true,            # forward the local ssh-agent so the server can pull from Bitbucket
  user: fetch(:user),
  auth_methods: ["publickey"],    # authenticate with the key below instead of a password
  keys: ["/path/to/key.pem"]      # the EC2 key pair file on your local machine
}
Make sure to give 0600 permissions to your key file, for example:
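chmod 0600 /path/to/key.pem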

Related

AWS EC2 git clone permission denied

I have an Ubuntu instance in AWS EC2.
In the .ssh folder, in the authorized_keys file, I see the key that was generated in AWS.
I took this public key and added it to my GitLab and GitHub accounts under SSH preferences.
When I try to clone my repo over SSH, I still get permission denied:
git clone git@gitlab.com:[username]/[project].git
What else am I missing?
GitHub has changed how account authentication works: when cloning private projects, you now need to authenticate with the SSH key registered in your account, instead of your account password.
The link below explains this better and also shows how to register your SSH key for authentication:
https://docs.github.com/en/enterprise-server@3.0/github/authenticating-to-github/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account
I will try to explain better what my problem was and how I fixed it.
I created a Linux instance in AWS EC2 and generated the private key with which I could SSH into the instance.
Inside the instance, in the .ssh folder, there is a file named authorized_keys which holds the corresponding public key.
I thought I could take this key and add it to my GitLab/GitHub accounts, but this didn't work. (Perhaps I still lack some basic understanding of SSH...)
What worked was generating a new key pair inside the EC2 Linux instance and placing that public key in GitLab/GitHub.
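For example (the key type and file names below are illustrative defaults, not from the original answer):
# on the EC2 instance: generate a fresh key pair
ssh-keygen -t rsa -b 4096
# copy this public key into the GitLab/GitHub SSH settings
cat ~/.ssh/id_rsa.pub
# verify the connection
ssh -T git@gitlab.com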
Thanks.

Got permission denied for ssh in aws instance

I have installed git on my EC2 instance.
git version 2.14.5
I have created a new IAM user and given it CodeCommit permissions.
Next, I followed all the steps one by one from this link, which worked fine.
At step 8, I added this to my config file:
Host git-codecommit.*.amazonaws.com
  User {{SSH KEY ID}}
  IdentityFile ~/.ssh/id_rsa
Then I assigned 600 permissions to the config file.
Then I ran this command to test my SSH connection:
ssh git-codecommit.us-east-2.amazonaws.com
Error
Permission denied (publickey).
Can anyone help me fix this issue?
Can you retry the process with 400 permissions on the SSH key?
chmod 400 <key>.pem
If your goal is to access a repository from CodeCommit, you can do it with:
git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/repo-name
If you try to SSH directly to CodeCommit, the connection is closed with the message:
You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. Interactive shells are not supported.

Jenkins on AWS EC2 instance unable to use instance profile after upgrade

I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. use Terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command in the instance's shell as the jenkins user still works fine (e.g. sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists the bucket's files; running the same command in Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum, the aws cli was installed using the bundled installer provided by AWS.
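One way to narrow this down (a diagnostic sketch, not from the original post; the metadata path is the standard EC2 endpoint):
# confirm the instance profile is attached and its role is resolvable
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# confirm which identity the CLI resolves to when running as the jenkins user
sudo -u jenkins /usr/bin/aws sts get-caller-identity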

aws cli: invalid security token

I'm trying to create a reusable delegation set to use as whitelisted nameservers for my domains, using the AWS CLI on Mac OS X. My AWS credentials (those of an IAM profile I created for that purpose, with full administrator privileges and the location set to us-east-1) were entered correctly during setup and accepted by the system.
When entering the command
$ aws route53 create-reusable-delegation-set --caller-reference [CALLER-REFERENCE] --hosted-zone-id [HOSTED_ZONE] --generate-cli-skeleton
the request is successful and I get the response:
{
    "CallerReference": "",
    "HostedZoneId": ""
}
But when I remove --generate-cli-skeleton and enter
aws route53 create-reusable-delegation-set --caller-reference [CALLER-REFERENCE] --hosted-zone-id [HOSTED_ZONE]
I get this:
An error occurred (InvalidClientTokenId) when calling the CreateReusableDelegationSet operation: The security token included in the request is invalid.
In reality, my IAM credentials, despite being valid, and despite the profile I am using (donaldjenkins) having full administrator privileges, are systematically refused by all AWS services and for all commands, not just Route53.
I've been unable to pinpoint the cause despite extensive research. Any suggestions gratefully received.
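Two quick checks that often help here (not from the original post):
aws configure list             # shows which credentials and profile the CLI is actually using
aws sts get-caller-identity    # validates those credentials against AWS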
Deleting my credentials file (Linux, macOS, or Unix: ~/.aws; Windows: %UserProfile%\.aws) and then running aws configure again worked for me.
The solution is to delete the existing credentials for the IAM user and issue new ones. For some reason the credentials recorded during the initial setup of the AWS CLI never worked properly, but overwriting them with new ones fixed the issue instantly.
I had the same exact issue.
I'm running NodeJS in my local environment, and trying to deploy to Amazon using CodeDeploy and some other AWS tools.
What worked for me was to delete the current config and credentials files and regenerate a new key. This was after I had originally installed the AWS CLI and added the keys; I had to add the keys again.
Depending on your folder structure, navigate to your home directory.
On a Mac, if you open a new terminal, it should start in your home directory: /Users/YOURNAME
cd .aws
rm -rf config
rm -rf credentials
After you do this, go back to your home directory, then run:
aws configure
Enter your access key ID and secret access key.
You can find more details here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration under Quickly Configuring the AWS CLI
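For reference, a typical aws configure session (the key values below are AWS's documentation placeholders):
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json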

Can't log in to docker with aws

This is an extension of my last question: I've decided to deploy a Docker container onto a number of EC2 instances. I've set up a repository and a user with full rights, and I added the correct keys to my AWS CLI configuration. When I try to run the docker login command that comes up after running aws ecr get-login, it fails with a status: 403 forbidden error. I have absolutely no clue what's going on, and I've spent the past two days trying to fix this. Any ideas?
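For reference, the flow in question looks like this (the region is a placeholder; on AWS CLI v2 get-login was replaced by get-login-password):
# print the docker login command for your registry, then execute it
$(aws ecr get-login --no-include-email --region us-east-1)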
I would suggest checking the security group of the EC2 instance.
To allow access via SSH, apply the following settings to the instance's security group:
(screenshot: Security Groups settings)
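Doing the same from the CLI would look roughly like this (the group ID is a placeholder; restrict the CIDR in practice):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0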