Couchbase console - aws - amazon-web-services

I used this tutorial:
https://docs.couchbase.com/server/6.0/install/rhel-suse-install-intro.html#installing-using-code-class-cmd-yum-code
to install Couchbase Community on an AWS EC2 instance. After running:
sudo service couchbase-server start
I can see [OK], but when I try to access my_ip:8091 I get:
ERR_CONNECTION_REFUSED
If I try to run: couchbase soft nproc 4096
I get:
couchbase: command not found
In my iptables I allow everything, and in the AWS networking settings I also allowed all connections. What am I doing wrong?
Many thanks,
Francesco
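As an aside: couchbase soft nproc 4096 is not a shell command, which is why the shell answers "command not found"; it has the format of a ulimit entry meant for /etc/security/limits.conf. A minimal sketch of that entry, plus a check that the console port is actually listening (assuming a default install, run on the instance itself):
# /etc/security/limits.conf -- process limit for the couchbase user (line taken from the question)
couchbase soft nproc 4096
# on the instance, confirm something is listening on the console port
sudo ss -tlnp | grep 8091
If nothing is listening on 8091, the service isn't actually up, and the security group and iptables settings are not the problem.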

Related

AWS EC2 User Data not working (Tried Installing and starting httpd via User Data)

The following is my EC2 User Data:
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
In the Security Group, SSH port 22 and HTTP port 80 are open.
Yet when I try accessing http://public_ip_of_instance, the Apache page doesn't load.
Also, Apache is not installed on the instance when I check with sudo systemctl status httpd.
I then installed it manually on the EC2 server and it worked. Then I removed it through yum remove because I wanted to see whether the User Data works.
I stopped the instance and started it again, but the User Data script doesn't seem to run: I can't reach the HTTP page through the browser, and httpd is not installed on the instance.
Where is the actual issue? I remember this same thing working on another instance some months back.
Your user data is correct. Whatever is happening with your website is not due to the user data code that you provided.
There could be many reasons it does not work: the public IP of the instance has changed, as always happens when you stop/start an instance, or the instance may have pre-existing software that clashes with httpd.
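If you suspect the address changed, one way to look up the instance's current public IP from the AWS CLI (the instance ID below is a placeholder) is:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].PublicIpAddress" --output text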
Here's some general advice on running User Data once or on each startup.
Short answer: as John mentioned in the comments, EC2 instances only run the User Data (aka bootstrap) script once, on initialization.
The user data Bash/PowerShell script is infrastructure-as-code: you deploy the script and it installs and configures the machine.
This causes confusion for everyone starting out with AWS, but when you think about it, it doesn't make sense to run the User Data script on every boot once the machine has already been configured.
What people often do instead is make "golden images" (aka Amazon Machine Images, AMIs) of pre-configured EC2 instances, typically for machines that take a long time to install and configure. The beauty of this is that you can set up Auto Scaling groups to use the images, which saves any long installation during a scale-up event.
Pro tip: when developing a User Data script, run through and test it manually on the EC2 instance. Trust me, it's far quicker than troubleshooting unattended EC2 User Data errors.
Long answer: you can run the User Data on each boot of the machine using a MIME multi-part file. A MIME multi-part file allows your script to override how frequently user data is run by the cloud-init package.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
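For reference, the payload that article describes looks roughly like this; the echo line is just a stand-in for your own script:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "this part now runs on every boot" >> /tmp/userdata-test.txt
--//--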
For all those who run into this problem, first check the log with this command:
sudo cat /var/log/cloud-init-output.log
Then, if you notice connection errors to the various repositories, the reason is that you don't have an internet connection yet. If, once inside your EC2 instance, you can run the update and install commands manually, then the reason they fail in the User Data is that the instance takes a few seconds to get its internet connection and runs the commands before it has one. To solve this, just add this command after #!/bin/bash:
#!/bin/bash
until ping -c1 8.8.8.8 &>/dev/null; do :; done
sudo yum update -y
...
This will prevent your EC2 instance from executing commands before an internet connection is established.

VSCode Open-SSH fails: AWS (SessionManagerPlugin is not found)

Thank you for reading.
I successfully set up the SSH config file to log in to AWS.
When I try to SSH in from my local terminal it works well, but when I try using my VSCode Open-SSH extension it always fails, except on the first try.
The output is like this:
[18:38:25.400] Running script with connection command: ssh -T -D 53736 -o ConnectTimeout=15 -F <config> awsserver bash
[18:38:26.521] >
> SessionManagerPlugin is not found. Please refer to SessionManager Documentation here: http://docs.aws.amazon.com/console/systems-manager/session-manager-plugin-not-found
All aws commands work fine from my terminal environment.
Thank you in advance.
I'm not familiar with the VSCode Open-SSH extension, but it appears you are getting a message from Amazon's AWS CLI, as if this command were being run:
aws ssm start-session --target i-0d2a6aaaaaaaa61c5
Rather than using ssh, is your extension perhaps configured to use Amazon SSM?
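For reference, an SSH config entry that tunnels through Session Manager usually looks something like this (the host alias and instance ID are placeholders); if your config has a ProxyCommand like this, the Session Manager plugin has to be installed on the machine where VSCode runs:
Host awsserver
    HostName i-0123456789abcdef0
    User ec2-user
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Installing the session-manager-plugin locally (or pointing the extension at a plain SSH host) should make the VSCode connection behave like your terminal.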

GCP: kubectl exec/logs to a container fails when using Ubuntu as the node OS

I created a 2-node cluster with Ubuntu as the node OS.
After deploying a container, kubectl exec or kubectl logs fails with the following error:
Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user <username>
Please tell me how to make it work.
The nodes are part of the default pool only.
Steps to reproduce:
gcloud container clusters create "gke-test-cluster" --image-type=UBUNTU --machine-type=n1-standard-2 --zone us-east1-c --num-nodes 2 --cluster-version=1.8
kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
kubectl get pod shell-demo
kubectl exec -it shell-demo -- /bin/bash
Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-0c"?
kubectl logs shell-demo
Error from server: Get https://10.142.0.5:10250/containerLogs/default/shell-demo/nginx: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-0c"?
I am using my laptop for all CLI commands.
This issue has already been raised at:
https://issuetracker.google.com/issues/77986235
https://serverfault.com/questions/907468/gcp-kubectl-exec-logs-fails-to-container-on-using-ubuntu-as-os/907882?noredirect=1#comment1177112_907882
I reproduced your issue with your exact commands and it worked just fine. This has to be an issue with something else (like the firewall, as suggested in the issue tracker).
Actually, check to confirm you have these three firewall rules:
gke-gke-test-cluster-07424324-all ...
gke-gke-test-cluster-07424324-ssh ...
gke-gke-test-cluster-07424324-vms ...
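Assuming the cluster name from the reproduction steps, you can list them with:
gcloud compute firewall-rules list --filter="name~gke-test-cluster"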
About Cloud Shell versus your laptop: there is not much difference if you are correctly authenticated with the Cloud SDK, so saying "this issue is also reproducible from GCP Cloud Shell" doesn't really narrow anything down.
If you do have the firewall rules and don't have much else in the project, I would recommend creating a new project and starting over there.
It turned out to be an issue with the size of the project metadata. We cleaned it up and it worked.

AWS Elastic Beanstalk commands return no output

I am very new to Amazon Web Services and have been trying a learn-by-doing approach with them.
In summary, I was trying to set up Git with the Elastic Beanstalk command line interface for my web app. However, I wanted to use my SSH key pair to authenticate (instead of the aws-access-id and secret), and in my naivety and ignorance I just supplied this information (the SSH key files) and now I can't get it to work. More details below.
I have my project directory with Git set up so that it works. I then open the Git Bash window (MINGW64; I am on Windows 10) and attempt to set up eb.
$ eb init
It then tells me that my credentials are not set up and asks me for the aws-access-id and the secret. I had just set up the SSH key pair, so I tried to enter those files; what's the harm in trying? EB failure, it turns out. Now the instances still seem to run fine, judging by their status on the AWS console website. However, whatever I type into the bash:
$ eb init
$ eb status
$ eb deploy
$
There is no output. Not even an error. It just silently returns and awaits a new command from me.
When using the --debug option with these commands, a long list of operations is returned, ending with:
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
I thought I would be able to log out or something like that, so that I could enter the proper credentials which I messed up from the beginning. I restarted the web app from the AWS web interface and restarted my PC. No success.
Thanks in advance.
EDIT:
I also tried reinstalling awscli and awsebcli:
pip uninstall awsebcli
pip uninstall awscli
pip install awscli
pip install awsebcli --upgrade --user
The problem persists, but now there is one line of output (previously seen only with the --debug option):
$ eb init
ERROR: ResponseParserError - Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
$
It sounds like you have replaced your AWS credentials in the ~/.aws/credentials and/or ~/.aws/config file(s) with your SSH key. You could fix these manually, or run aws configure if you have the AWS CLI installed.
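For reference, ~/.aws/credentials normally looks something like this (the values below are the placeholder keys from the AWS documentation, not real credentials):
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
If either value is actually an SSH key (or a path to one), the CLI will be sending invalid credentials, which can surface as odd failures like the empty XML response above.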

Cassandra stops working on AWS (Ubuntu Server)

I have configured a Cassandra cluster locally and it works fine. Following the same steps, I configured a Cassandra cluster on AWS on an Ubuntu Server instance.
It works fine, but if I stop the Cassandra service on one node:
sudo service cassandra stop
and then start it again, this node never connects to the cluster again.
It fails, throwing the following error:
* could not access pidfile for Cassandra
My Cassandra version is 3.7, so if I look in /etc/init.d/cassandra, the CMD_PATT is:
CMD_PATT="Dcassandra-pidfile=.*cassandra.pid"
Cassandra version: 3.7
Host: Ubuntu Server 14.04 (AWS).
You have to remove the /var/run/cassandra folder because it has the wrong permissions:
sudo rm -rf /var/run/cassandra
Or you can fix permissions manually:
sudo chmod 750 /var/run/cassandra
Then start Cassandra as service:
sudo service cassandra start
Some explanations:
You can find instructions on file permissions here.
It is safe to delete that folder because it is recreated with the right permissions and content. But do not delete it once it is working correctly; that may result in loss of data or incorrect behavior.
chmod 750 translates to rwxr-x--- permissions: read-write-execute for the user, read-execute for the group, and nothing for others. For Cassandra, that is enough.
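Before restarting, you can sanity-check the directory; on a typical package install it should end up owned by the cassandra user (adjust if your setup differs):
ls -ld /var/run/cassandra
# expect something like: drwxr-x--- 2 cassandra cassandra ... /var/run/cassandra
sudo chown cassandra:cassandra /var/run/cassandra   # only needed if the owner is wrong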
Stop the Cassandra service:
sudo service cassandra stop
Remove the default dataset:
sudo rm -rf /var/lib/cassandra/data/system/*
Start the Cassandra service:
sudo service cassandra start
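Whichever fix you use, once the service is back up you can confirm the node has rejoined the ring (assuming nodetool is on the PATH) with:
nodetool status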