I am trying to set up a Hyperledger Fabric blockchain network using Amazon Managed Blockchain, following this guide. To enroll, I used the following command:
fabric-ca-client enroll -u 'https://admin:#D7a22hjjh*9b9@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' --tls.certfiles /home/ec2-user/managedblockchain-tls-chain.pem -M /home/ec2-user/admin-msp
I got the following error,
Error: The URL of the fabric CA server is missing the enrollment ID and secret; found 'https://admin:#D7a22613ac75c9b9@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' but expecting 'https://<enrollmentID>:<secret>@admin:'
I thought this was due to the # symbol in the password. For testing purposes I removed the # symbol and tried again. I got the following error:
Error: Failed to create keystore directory: mkdir /home/ec2-user/admin-msp: permission denied
When I use sudo, I get the following error:
sudo: fabric-ca-client: command not found
Help me to fix this issue.
What user are you logged in as? Check with whoami.
You should be using ec2-user, which has access to the /home/ec2-user/ directory.
You can try manually creating the admin-msp directory before enrolling the admin:
cd ~ && mkdir admin-msp
Then try running your command.
If that doesn't work, use sudo to create the directory and then chown it to be owned by ec2-user:
cd ~
sudo mkdir admin-msp
sudo chown ec2-user ~/admin-msp
Then try your command.
Note that you can also wrap the username/password in quotes:
fabric-ca-client enroll -u 'https://"admin":"#D7a22hjjh*9b9"@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' --tls.certfiles /home/ec2-user/managedblockchain-tls-chain.pem -M /home/ec2-user/admin-msp
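If quoting does not help (the URL may still be split at the first unescaped #), percent-encoding the reserved characters in the secret is the standard URL fix. A sketch, assuming the secret really is #D7a22hjjh*9b9, so the leading # becomes %23:
fabric-ca-client enroll -u 'https://admin:%23D7a22hjjh*9b9@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' --tls.certfiles /home/ec2-user/managedblockchain-tls-chain.pem -M /home/ec2-user/admin-msp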
Explaining all that has been tried and double-checked:
Set up on local windows machine:
Xming installed and running.
In ssh_config, ForwardX11 is set to yes.
In the VS Code remote connection config, ForwardX11 is set to yes.
Set up on GCP Compute Engine with Debian Linux 9 and 1 GPU [free tier]:
xauth is installed.
In the sshd_config file below is set:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
The SSH server has been restarted to ensure the settings above are read.
From the local workstation I run gcloud compute ssh --ssh-flag="-X" tensorflow-2-vm (the instance name), and the response is:
/usr/bin/xauth: file /home/user/.Xauthority does not exist,
So, I attempted to perform the below on the remote compute engine with instance name - tensorflow-2-vm and user trapti_kalra:
trapti_kalra@tensorflow-2-vm:~$ xauth list
xauth: file /home/trapti_kalra/.Xauthority does not exist
trapti_kalra@tensorflow-2-vm:~$ mv .Xauthority old.Xauthority
mv: cannot stat '.Xauthority': No such file or directory
trapti_kalra@tensorflow-2-vm:~$ touch ~/.Xauthority
trapti_kalra@tensorflow-2-vm:~$ xauth generate :0 . trusted
xauth: (argv):1: unable to open display ":0".
trapti_kalra@tensorflow-2-vm:~$ sudo xauth generate :0 . trusted
xauth: file /root/.Xauthority does not exist
xauth: (argv):1: unable to open display ":0".
So it looks like something is missing; any help will be appreciated. This was working with an EC2 server before I moved to GCP.
Create a new file: touch ~/.Xauthority
Log out and back in again with your ssh session. (I'm using MobaXterm)
Then the needed authentication cookie is written to it when you log back in.
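To confirm the relogin actually wrote a cookie, a quick check on the remote machine (assuming X11 forwarding is now working) is:
echo $DISPLAY    # should print something like localhost:10.0
xauth list       # should now show a MIT-MAGIC-COOKIE-1 entry for that display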
You logged into your Linux server over ssh and got the following error;
.Xauthority does not exist
Solution:
Let's go into the /etc/ssh/sshd_config file and remove the # sign at the beginning of the 3 lines below
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Then systemctl restart sshd
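To verify the running sshd actually picked up the new settings, you can dump its effective configuration (sshd -T needs root):
sudo sshd -T | grep -i x11
# expected output includes: x11forwarding yes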
Log in again and you will not get the error.
There are many solutions to this problem; it can also depend on what machine you originate from. If you come from a Linux box, enabling sshd config options like:
X11Forwarding yes
could be enough.
When you use a MacBook, however, the scenario is different. In that case, you need to install xQuartz with brew:
brew install xquartz
And after this start it:
xQuartz &
After this is done the xQuartz logo appears in your bar and you can right-click the icon and start the terminal from the Applications menu. After you perform this you can run the following:
Run echo $DISPLAY from this terminal. This should give you the output:
:0
When you have another terminal such as iTerm, you can export this value in that terminal with export DISPLAY=:0. As long as xQuartz is still running, the other terminal will be able to continue to use xQuartz.
After this you can SSH into the remote machine and check if the display variable is set:
$: ssh -Y anldisr@my-remote-machine
$: echo $DISPLAY
localhost:11.0
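As a quick end-to-end test you can launch a simple X client over the forwarded connection; a sketch, assuming xeyes is installed on the remote machine (package x11-apps on Debian/Ubuntu) and reusing the example host above:
ssh -Y anldisr@my-remote-machine
xeyes    # a small window should appear on your local display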
It took me an hour to figure this out, hope it helps someone. :)
This also happened when I added a new user to the remote machine without giving the user sudo privileges during creation.
To resolve, I used the root user or a sudo privileged user to assign a sudo privilege to the new user. Exit the new user and ssh again into your server.
sudo usermod -aG sudo [newUser]
I'm running Redis server 2.8.17 on a Debian server 8.5. I'm using Redis as a session store for a Django 1.8.4 application.
I haven't changed the software configuration on my server for a couple of months and everything was working just fine until a week ago when Django began raising the following error:
MISCONF Redis is configured to save RDB snapshots but is currently not able to persist to disk. Commands that may modify the data set are disabled. Please check Redis logs for details...
I checked the redis log and saw this happening about once a second:
1 changes in 900 seconds. Saving...
Background saving started by pid 22213
Failed opening .rdb for saving: Permission denied
Background saving error
I've read these two SO questions 1, 2 but they haven't helped me find the problem.
ps shows that user "redis" is running the server:
redis 26769 ... /usr/bin/redis-server *:6379
I checked my config file for the redis file name and path:
grep ^dir /etc/redis/redis.conf =>
dir /var/lib/redis
grep ^dbfilename /etc/redis/redis.conf =>
dbfilename dump.rdb
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis too.
I also ran strace on the server process:
ps -C redis-server # pid = 26769
sudo strace -p 26769 -o /tmp/strace.out
But when I examine the output, I don't see any errors. In particular I don't see a "Permission denied" error as I would expect.
Also, /var/lib/redis is not an NFS directory.
Does anyone know what else could be causing this? I'd hate to have to stop using Redis. I know I can run the command "config set stop-writes-on-bgsave-error no" but that doesn't solve the problem.
This is now happening on a daily basis and the only way I can stop the error is to restart the Redis server.
Thanks.
I just had a similar issue. Despite my config file being correct, when I checked the actual dbfilename and dir in redis-cli, they were incorrect.
Run redis-cli and then
CONFIG GET dbfilename, which should return something like
1) "dbfilename"
2) "dump.rdb"
1) is just the key and 2) the value. Similarly, running CONFIG GET dir should return something like
1) "dir"
2) "/var/lib/redis"
Confirm that these are correct and if not, set them with CONFIG SET dir /correct/path
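Since redis-cli also accepts a command as arguments, you can run the same checks non-interactively from the shell:
redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename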
Hope this helps!
If you have moved Redis to a new mounted volume, e.g. /mnt/data-01:
sudo vim /etc/systemd/system/redis.service
Set ReadWriteDirectories=-/mnt/data-01
sudo mkdir /mnt/data-01/redis
Set chown and chmod on the new Redis data dir and rdb file, matching the defaults:
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis.
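A sketch of those ownership and permission commands, assuming the new data directory above; the daemon-reload is needed because the systemd unit file was edited:
sudo chown -R redis:redis /mnt/data-01/redis
sudo chmod 755 /mnt/data-01/redis
sudo systemctl daemon-reload
sudo systemctl restart redis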
Switch configurations while redis is running
$ redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
127.0.0.1:6379> BGSAVE
tail /var/log/redis/redis.log (verify the save succeeded)
Start Redis Server in a directory where Redis has write permissions
The answers above will definitely solve your problem, but here's what's actually going on:
The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from where you start the redis server is where a dump.rdb file will be created and updated.
Since you say your redis server has been working fine for a while and this just started happening, it seems you have started running the redis server in a directory where redis does not have the correct permissions to create the dump.rdb file.
To make matters worse, redis will probably not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.
To solve this problem, you must go into the active redis client environment using redis-cli and update the dir key and set its value to your project folder or any folder where non-root has permissions to save. Then run BGSAVE to invoke the creation of the dump.rdb file.
CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE
(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change permissions for the directory so that redis can write to it. You can search stackoverflow for how to do that).
You should now be able to shut down the redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice and I highly recommend starting the redis server from your project directory and changing the dir key back to "./".
CONFIG SET dir "./"
BGSAVE
That way when you need redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.
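To see where your running server is actually trying to write, and whether the redis user can write there, something like the following works (tail -n 1 keeps only the value line of the CONFIG GET reply; the redis user name is assumed from the question):
ls -ld "$(redis-cli CONFIG GET dir | tail -n 1)"
sudo -u redis test -w "$(redis-cli CONFIG GET dir | tail -n 1)" && echo writable || echo not-writable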
You can resolve this problem by going into the redis-cli
Type redis-cli in the terminal
Then run config set stop-writes-on-bgsave-error no; that resolved my problem.
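The same setting can be applied non-interactively. Note that this only silences the error rather than fixing the underlying save failure, and it is lost on restart unless you also change redis.conf:
redis-cli config set stop-writes-on-bgsave-error no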
Hope it resolved your problem
Up to redis 3.2 it shipped with pretty insane defaults which opened the port to the public. In combination with the CONFIG SET instruction everybody can change your redis config from outside easily. If the error starts after some time, someone probably changed your config.
On your local machine check that
telnet SERVER_IP REDIS_PORT
is denied. Otherwise check your config; you should have the setting
bind 127.0.0.1
enabled.
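To check the effective setting and test exposure, something like this (SERVER_IP stands for your host's public address):
grep '^bind' /etc/redis/redis.conf
redis-cli -h SERVER_IP ping    # run from another machine; this should fail or time out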
Depending on the user that runs redis, you should also check for damage that the intruder has done.
I've created a new linux instance on Amazon EC2, and as part of that downloaded the .pem file to allow me to SSH in.
When I tried to ssh with:
ssh -i myfile.pem <public dns>
I got:
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for 'amazonec2.pem' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: amazonec2.pem
Permission denied (publickey).
Following this post I tried to chmod +600 the .pem file, but now when I ssh I just get
Permission denied (publickey).
What school-boy error am I making here?
The .pem file is in my home folder (in macOS). Its permissions look like this:
-rw-------@ 1 mattroberts staff 1696 19 Nov 11:20 amazonec2.pem
The problem is a wrong set of permissions on the file.
It is easily solved by executing: chmod 400 mykey.pem
This solution is taken from AWS instructions:
Your key file must not be publicly viewable for SSH to work. Use this command if needed: chmod 400 mykey.pem
400 protects it by making it read only and only for the owner.
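To confirm the change took effect:
chmod 400 mykey.pem
ls -l mykey.pem    # should now show -r-------- (read-only, owner only)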
You are likely using the wrong username to log in, because:
Most Ubuntu images have a user ubuntu
Amazon's AMI is ec2-user
Most Debian images have either root or admin
To log in, you need to adjust your ssh command:
ssh -l USERNAME_HERE -i .ssh/yourkey.pem public-ec2-host
I know this is very late to the game ... but this always works for me:
##step 1
ssh-add ~/.ssh/KEY_PAIR_NAME.pem
##step 2, simply ssh in :)
ssh user_name@<instance public dns/ip>
e.g.
ssh ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
Ok man, the only thing that worked for me was:
Change permissions of the key
chmod 400 mykey.pem
Make sure to log in using ec2-user, and the correct ec2-99... address. The ec2-99 address is at the bottom of the AWS console when you're logged in and can see your instance listed.
ssh -i mykey.pem ec2-user@ec2-99-99-99-99.compute-1.amazonaws.com
Take a look at this article. You do not use the public DNS but rather the form
ssh -i your.pem root@ec2-XXX-XXX-XXX-XXX.z-2.compute-1.amazonaws.com
where the name is visible on your AMI panel
In Windows you can go to the properties of the .pem file, go to the Security tab, and then the Advanced button.
Remove inheritance and all the permissions, then grant yourself full control. After that, SSH will not give you the same error again.
Change permission for the key file with :
chmod 400 key-file-name.pem
See AWS documentation for connecting to the instance: Tutorial: Get started with Amazon EC2 Linux instances
I know this question has been answered already, but for those that have tried them all and are still getting the annoying "Permission denied (publickey)": try running your command with sudo. Of course this is a temporary solution and you should set permissions correctly, but at least it will let you identify that your current user is not running with the privileges you need (as you assumed).
sudo ssh -i amazonec2.pem ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com
Once you do this you'll get a message like this:
Please login as the user "ec2-user" rather than the user "root"
Which is also sparsely documented. In that case just do this:
sudo ssh -i amazonec2.pem ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com -l ec2-user
And you'll get the glorious:
__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|
Feb 2022 update:
The AWS console's instructions for connecting to an EC2 instance over SSH include a step (No. 3) telling you to secure your key file. So, run the command that step gives:
chmod 400 myKey.pem
In the Mac terminal, doing "chmod 400 xyz.pem" did not help me; it kept saying permission denied. For Ubuntu users I would suggest
ssh-add xyz.pem
ssh -i xyz.pem ubuntu@ec2-54-69-172-118.us-west-2.compute.amazonaws.com (notice the user is ubuntu)
ssh -i /.pem user@host-machine-IP
I think it's because either you have entered the wrong credentials,
or you are using a public key rather than the private key,
or your port permissions are open for ALL to ssh. This is bad for Amazon.
There can be three reasons behind this error.
You are using the wrong key.
Your key doesn't have the correct permissions. You need to chmod it to 400.
You are using the wrong user. Ubuntu images have a user ubuntu, Amazon's AMI is ec2-user, and Debian images have either root or admin.
In addition to the other answers, here is what I did in order for this to work:
Copy the key to the .ssh folder if you haven't already:
cp key.pem ~/.ssh/key.pem
Give the proper permissions to the key
chmod 400 ~/.ssh/key.pem
Start ssh-agent (Thanks to https://stackoverflow.com/a/17848593 )
eval `ssh-agent -s`
ssh-add
Then, add the key
ssh-add ~/.ssh/key.pem
Now you should be able to ssh EC2 (:
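Once the key is in the agent you can even connect without the -i flag, since the agent supplies the key (the host below is a placeholder):
ssh ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com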
SSH keys and file permission best practices:
.ssh directory - 0700 (only by owner)
private key/.pem file - 0400 (read only by owner)
public key/.pub file - 0600 (read & write only by owner)
chmod XXXX file/directory
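As a concrete example of those modes (file names are placeholders):
chmod 700 ~/.ssh
chmod 400 ~/.ssh/key.pem
chmod 600 ~/.ssh/key.pub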
Alternative log-in using PuTTY. It's good but needs a few steps.
Get your .pem that was generated when you first made the EC2 instance.
Convert the .pem file to .ppk using PuTTYgen, since PuTTY does not read .pem.
Open PuTTY and enter your Host Name, which is your instance username + Public DNS (e.g. ubuntu@ec2-xxx-xxx-xxx-xxx.region.compute.amazonaws.com). Not your AWS account username.
Then navigate to Connection > SSH > Auth. Then add your .ppk file. Click on Browse where it says "Private key file for authentication".
Click Open and you should be able to immediately establish connection.
I'm using PuTTY 0.66 on Windows.
By default, whenever you download the keyfile it comes with 644 permissions.
So you need to change the permission each time you download new keys.
chmod 400 my_file.pem
In Windows go to the .pem file, right click and select Properties.
Go to Advanced in Security tab
Disable and remove inheritance.
Then press Add and select a principal.
Add your account username as the object name and press OK.
Give all permissions.
Apply and save changes.
Now try the command above again.
You can find the answer in the AWS guide.
400 protects it by making it read only and only for the owner.
chmod 400 mykey.pem
In Windows,
Right click on the pem file. Then select properties.
Select security tab --> Click on the Advanced button --> Disable inheritance --> Remove all inherited permissions from this object
Click on the Add button --> Select a principal --> Enter your username on the inputbox --> Click on the Check Names button --> Click on Ok --> Click on Ok --> Click on Ok --> Click on Ok
Do a chmod 400 yourkeyfile.pem
If your instance is Amazon Linux then use ssh -i yourkeyfile.pem ec2-user@ip
for Ubuntu
ssh -i yourkeyfile.pem ubuntu@ip
for CentOS
ssh -i yourkeyfile.pem centos@ip
Just change the permission of the .pem file to 0600, allowing access only for the owner, and it will work like a charm.
sudo chmod 0600 myfile.pem
And then try to ssh it will work perfectly.
ssh -i myfile.pem <<ssh_user>>@<<server>>
By default, the permissions do not allow the .pem key to be used.
You just have to change the permission:
chmod 400 xyz.pem
and if it's an Ubuntu instance then connect using:
ssh -i xyz.pem ubuntu@ec2-youraws.amazonaws.com
The issue for me was that my .pem file was on one of my NTFS partitions. I moved it to my Linux partition (ext4).
Gave required permissions by running:
chmod 400 my_file.pem
And it worked.
I have seen two reasons behind this issue:
1) The access key does not have the right permissions. .pem keys with the default permissions are not allowed to make a secure connection. You just have to change the permission:
chmod 400 xyz.pem
2) Also check whether you have logged in with the proper user credentials. Otherwise, use sudo while connecting:
sudo ssh -i {keyfile} ec2-user@{ip address of remote host}
Well, looking at your post description, I think you made two mistakes:
Set correct permissions for the private key.
The command below should help you set the correct file permission.
chmod 0600 mykey.pem
You are trying to log in with the wrong EC2 user.
Looking at your debug log, I think you have spawned an Amazon Linux instance. The default user for that instance type is ec2-user. If the instance had been Ubuntu, your default user would have been ubuntu.
ssh -i privatekey.pem default_ssh_user@server_ip
Note:
For an Amazon Linux AMI, the default user name is ec2-user.
For a Centos AMI, the default user name is centos.
For a Debian AMI, the default user name is admin or root.
For a Fedora AMI, the default user name is ec2-user or fedora.
For a RHEL AMI, the default user name is ec2-user or root.
For a SUSE AMI, the default user name is ec2-user or root.
For an Ubuntu AMI, the default user name is ubuntu.
Otherwise, if ec2-user and root don't work, check with the AMI provider.
source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
Key file should not be publicly viewable so use permission 400
chmod 400 keyfile.pem
If the above command shows a permission error, use
sudo chmod 400 keyfile.pem
Now SSH into the EC2 machine; if you still face the issue, use ec2-user:
ssh -i keyfile.pem ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com
If you're not root, run this command:
sudo chmod 400 myfile.pem
If you are root, run this command:
chmod 400 myfile.pem
If you are connecting from Windows, perform the following steps on your local computer.
Navigate to your .pem file.
Right-click on the .pem file and select Properties.
Choose the Security tab.
Select Advanced.
Verify that you are the owner of the file. If not, change the owner to your username.
Select Disable inheritance and Remove all inherited permissions from this object.
Select Add, Select a principal, enter your username, and select OK.
From the Permission Entry window, grant Read permissions and select OK.
Click Apply to ensure all settings are saved.
Select OK to close the Advanced Security Settings window.
Select OK to close the Properties window.
You should be able to connect to your Linux instance from Windows via SSH.
From a Windows PowerShell prompt (the commands below use PowerShell syntax; $path is the path to your .pem file), run the following commands.
Run the following command to reset and remove explicit permissions:
icacls.exe $path /reset
Run the following command to grant Read permissions to the current user:
icacls.exe $path /GRANT:R "$($env:USERNAME):(R)"
Run the following command to disable inheritance and remove inherited permissions:
icacls.exe $path /inheritance:r
You should be able to connect to your Linux instance from Windows via SSH.
It is just a permission issue with your AWS .pem key.
Just change the permission of the .pem key to 400 using the command below.
chmod 400 pemkeyname.pem
If you don't have permission to change the file's permissions, use sudo as in the command below.
sudo chmod 400 pemkeyname.pem
Otherwise, if nothing works for you, follow this video to change the keys on your EC2 instance. You can then install a new public/private key pair on your instance.
https://youtu.be/LvLlRCrS8B4
Checklist:
Are you using the right private key .pem file?
Are its permissions set correctly? (My Amazon-brand AMIs work with 644, but Red Hat must be at least 600 or 400. Don't know about Ubuntu.)
Are you using the right username in your ssh line? Amazon-branded = "ec2-user", Red Hat = "root", Ubuntu = "ubuntu". The user can be specified as "ssh -i pem username@hostname" OR "ssh -l username -i pem hostname".
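If none of the checklist items jump out, ssh's verbose mode shows which keys are offered and why the server rejects them, which usually distinguishes the wrong-user case from the wrong-key case (key and host below are placeholders):
ssh -v -i mykey.pem ec2-user@hostname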