VSCode Open-SSH fails: AWS (SessionManagerPlugin is not found)

Thank you for reading.
I successfully set up the SSH config file to log in to AWS.
When I try to log in over SSH from my local terminal, it works well, but when I try using my VSCode Open-SSH extension, it always fails except on the first try.
The output is like this:
[18:38:25.400] Running script with connection command: ssh -T -D 53736 -o ConnectTimeout=15 -F <config> awsserver bash
[18:38:26.521] >
> SessionManagerPlugin is not found. Please refer to SessionManager Documentation here: http://docs.aws.amazon.com/console/systems-manager/session-manager-plugin-not-found
All aws commands work fine from my terminal environment.
Thank you in advance.

I'm not familiar with the VSCode Open-SSH extension, but it appears you are getting a message from Amazon's AWS CLI as if this command were being run:
aws ssm start-session --target i-0d2a6aaaaaaaa61c5
Rather than using ssh, is your extension perhaps configured to use Amazon SSM?
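For context, an SSH config entry that routes through Session Manager typically looks roughly like the sketch below (the host alias matches your awsserver; the instance ID and user are placeholders). The ProxyCommand is what invokes aws ssm start-session, and that in turn needs the session-manager-plugin binary installed and on the PATH of whichever environment spawns the ssh process:
Host awsserver
    HostName i-0123456789abcdef0
    User ec2-user
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"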

Related

Jenkins - bash: aws: command not found but runs fine from terminal

In Build Step, I've added Send files or execute command over SSH -> SSH Publishers -> Exec command. I'm trying to run an aws command to copy a file from EC2 to S3. The same command runs fine when I execute it in the terminal, but via Jenkins it simply returns:
bash: aws: command not found
The command is
cd ~/.local/bin/ && aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
Based on the comments.
The solution was to use the following command:
cd ~/.local/bin/ && ./aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
since aws is not available in the PATH environment variable.
command not found indicates that the aws utility is not on $PATH for the jenkins user.
To confirm, sudo su -l jenkins and then issue the command which aws - this will most likely return no results.
You have two options:
use the full path (likely /usr/local/bin/aws)
add /usr/local/bin to the jenkins user's $PATH
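A minimal sketch of the second option, assuming aws really does live in /usr/local/bin and the Exec command runs under bash:
export PATH="$PATH:/usr/local/bin"
aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip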
I need my Makefile to work in both Linux and Windows, so the accepted answer is not an option for me.
I diagnosed the problem by adding the following to the top of my build script:
whoami
which aws
env|grep PATH
This returned:
root
which: no aws in (/sbin:/bin:/usr/sbin:/usr/bin)
PATH=/sbin:/bin:/usr/sbin:/usr/bin
Bizarrely, the path does not include /usr/local/bin, even though the interactive shell on the Jenkins host includes it. The fix is simple enough: create a symlink on the Jenkins host:
ln -s /usr/local/bin/aws /bin/aws
Now the aws command can be found by scripts running in Jenkins (in /bin).

aws command not found in Airflow, but works in console

I have a BashOperator in Airflow that executes an aws s3 ls s3:/path command. It works in the console but it doesn't work in Airflow; the error message is: command not found. aws is correctly installed. The following URL explains how the PATH was set:
How do I resolve the "-bash: aws: command not found" awscli error? and how it was installed: Is it possible to install aws-cli package without root permission?
aws --version
aws-cli/1.18.209 Python/3.8.5 Linux/5.4.0-58-generic botocore/1.19.49
I don't know what I am doing wrong. Please help. How can I make this command (and any other aws command) work in Airflow?
Thanks in advance.

Problem running Logstash in AWS EC2 Linux AMI

I am setting up Elasticsearch in AWS, and I am trying to use the AWS Linux AMI. When I run
bin/logstash -f "/path to config file"
I get an error saying:
"logstash.yml" not found try using "--path.settings"
then when I use
"--path.settings="/etc/logstash"
I again get another error.
I have been following this AWS document:
https://aws.amazon.com/elasticsearch-service/resources/articles/logstash-tutorial/
The error I get after specifying --path.settings="/etc/logstash" is:
"Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}"
I have configured the file logstash_simple.conf, specifying input and output.
This is the command-line input on the Linux EC2 instance:
/usr/share/logstash/bin/logstash -f /usr/share/logstash/logstash_simple.conf
--path.settings="/etc/logstash"
Okay, I had made a mistake in the config file: I missed providing the AWS access key and secret key, dumb me!
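For anyone hitting the same thing: if the config uses the amazon_es output plugin (as the linked AWS tutorial does), the credentials go in the output block, roughly like this sketch; the host, region, and key values are placeholders:
output {
  amazon_es {
    hosts => ["search-mydomain-xxxxx.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    aws_access_key_id => "<your-access-key-id>"
    aws_secret_access_key => "<your-secret-access-key>"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}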

gcloud compute ssh stops

I am using gcloud ssh to connect to GCE.
> gcloud compute --project "first-medium-2****8" ssh --zone "us-east1-b" "instance-2"
I entered the above command in PowerShell, but it replies
>Using username "hogehoge".
>Authenticating with public key "DESKTOP-****hogehoge"
and then stops. Nothing happens after that.
Yesterday I did the same thing and there was no problem.
But today, I can't. I tried gcloud init and reinstalled gcloud, but nothing changed. What should I do to solve this problem?
Additional information:
OS Windows10
Google Cloud SDK 237.0.0
PowerShell 5.1.17134.590
Putty 0.70 (only one installation)
Note 1: I found I could use Cloud Shell without a problem. But Cloud Shell has a timeout, so I prefer gcloud to Cloud Shell.
Note 2: When I use Cloud Shell, it connects as "tomotomo", not "hogehoge", which is the username used when I connect with gcloud.
When I run "gcloud compute ssh VM_NAME --verbosity=debug --log-http"
it replies
>DEBUG: SSH Known Hosts File [C:\Users\hogehoge\.ssh\google_compute_known_hosts] could not be opened: Unable to read file [C:\Users\hogehoge\.ssh\google_compute_known_hosts]: [Errno 2] No such file or directory: u'C:\\Users\\hogehoge\\.ssh\\google_compute_known_hosts'
DEBUG: Current SSH keys in project: [u'tomotomo:ssh-rsa AAAAB***
DEBUG: Running command [C:\Users\hogehoge\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\sdk\putty.exe -t -i C:\Users\hogehoge\.ssh\google_compute_engine.ppk hogehoge#3*****].
DEBUG: Executing command: [u'C:\\Users\\hogehoge\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\sdk\\putty.exe', u'-t', u'-i', u'C:\\Users\\hogehoge\\.ssh\\google_compute_engine.ppk', u'hogehoge#3*****']
It was very long, so I only extracted the parts I think are important.
Running
putty -cleanup
solves this problem.
PuTTY saves some information in the registry (IP address, public key and so on). This command removes those registry entries and the random seed file.
Running "putty -cleanup" as per #redpawn fixed the issue.

AWS Elastic Beanstalk commands return no output

I am very new to Amazon Web Services and have been trying a learn-by-doing approach with them.
In summary, I was trying to set up Git with the Elastic Beanstalk command line interface for my web-app. However, I wanted to use my SSH key-pair to authenticate (aws-access-id, secret), and in my naivety and ignorance I just supplied this information (the SSH key files), and now I can't get it to work. More details below.
I have my project directory with Git set up so that it works. I then open the Git Bash window (MINGW64; I am on Windows 10) and attempt to set up eb.
$ eb init
It then tells me that my credentials are not set up and asks me for the aws-access-id and the secret. I had just set up the SSH key-pair and tried to enter these files; what's the harm in trying? EB failure, it turns out. Now, the instances still seem to run fine, looking at their status on the AWS console website. However, whatever I type into bash:
$ eb init
$ eb status
$ eb deploy
$
There is no output. Not even an error. It just silently returns and awaits a new command from me.
When using the --debug option with these commands, a long list of operations is returned, ending with:
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
I thought I would be able to log out or something of the like, so that I could enter the proper credentials that I messed up from the beginning. I restarted the web-app from the AWS web interface and restarted my PC. No success.
Thanks in advance.
EDIT:
I also tried reinstalling awscli and awsebcli:
pip uninstall awsebcli
pip uninstall awscli
pip install awscli
pip install awsebcli --upgrade --user
The problem persists, but now there is one output (previously seen only with the --debug option):
$ eb init
ERROR: ResponseParserError - Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
$
It sounds like you have replaced your AWS credentials in ~/.aws/credentials and/or ~/.aws/config file(s) with your SSH key. You could manually replace these or execute aws configure if you have the AWS CLI installed.
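For reference, after running aws configure (or editing the file by hand), ~/.aws/credentials should look roughly like this, with real key values in place of the placeholders:
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>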