I have configured a Cassandra cluster locally and it works fine. Following the same steps, I configured a Cassandra cluster on AWS on an Ubuntu Server instance.
It works fine too, but if I stop the Cassandra service on one node:
sudo service cassandra stop
When I start it again, the node never connects to the cluster.
It fails with the following error:
* could not access pidfile for Cassandra
My Cassandra version is 3.7; if I look in /etc/init.d/cassandra, the CMD_PATT is the following:
CMD_PATT="Dcassandra-pidfile=.*cassandra.pid"
Cassandra version: 3.7
Host: Ubuntu Server 14.04 (AWS).
You have to remove the /var/run/cassandra folder because it has wrong permissions:
sudo rm -rf /var/run/cassandra
Or you can fix permissions manually:
sudo chmod 750 /var/run/cassandra
Then start Cassandra as service:
sudo service cassandra start
Some explanations
You can find instructions on file permissions here.
It is safe to delete that folder because it is recreated with the right permissions and content. But do not delete it once Cassandra is working correctly; that may result in data loss or incorrect behavior.
chmod 750 translates to rwxr-x--- permissions: read-write-execute for the user, read-execute for the group, and nothing for others. For Cassandra, that is enough.
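Putting it together, a minimal recovery sketch (assuming Cassandra runs as the package-default cassandra user; adjust if yours differs):
sudo mkdir -p /var/run/cassandra
sudo chown cassandra:cassandra /var/run/cassandra  # assumed service user
sudo chmod 750 /var/run/cassandra
sudo service cassandra start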
Stop the Cassandra service:
sudo service cassandra stop
Remove the default dataset:
sudo rm -rf /var/lib/cassandra/data/system/*
Start the Cassandra service:
sudo service cassandra start
Related
I am new to AWS and recently I was trying to access a webpage using an EC2 instance. I uploaded the webpage using the following bash commands in the User Data field while creating the instance:
#!/bin/bash
yum update -y
yum -y install httpd
systemctl enable httpd
systemctl start httpd
echo '<html><h1>Sample Webpage</h1></html>' > /var/www/html/index.html
I noticed that the public IP address of the instance directed me to the Apache Web Server's test page when the names of the security group and the instance were different, but to the desired webpage when the names were the same.
Could anyone please explain why this is so?
There is nothing wrong with your user_data. It works exactly as expected. Whatever you are checking does not involve this code, so please double-check your instances and their user data.
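One way to double-check from inside the instance (a sketch; it uses IMDSv1, and IMDSv2 would additionally require a session token):
# Show the user data this instance actually received
curl -s http://169.254.169.254/latest/user-data
# See whether and how cloud-init ran it
sudo cat /var/log/cloud-init-output.log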
The following is my EC2 User Data:
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
In the security group, SSH port 22 and HTTP port 80 are open.
Yet when I try accessing http://public_ip_of_instance, the Apache HTTP page doesn't load.
Also, Apache is not installed on the instance when I check with sudo systemctl status httpd.
I then manually tried it on the EC2 server and it worked. Then I removed it through yum remove because I wanted to see whether User Data works.
I stopped the instance and started it again, but the User Data script doesn't seem to run: I am unable to access the page through the browser, and httpd is not installed on the instance.
Where is the actual issue? I remember this same thing worked on another instance some months back.
Your user data is correct. Whatever is happening with your website is not due to the user data code that you provided.
There could be many reasons it does not work. The public IP of the instance has changed, as always happens when you stop/start an instance. The instance may have pre-existing software that clashes with httpd.
Here's some general advice on running UserData once versus on each startup.
Short answer: as John mentioned in the comments, EC2 instances only run the UserData (aka bootstrap) script once, on initialization.
The user data Bash/Powershell is Infrastructure-As-Code. You deploy the script and it installs and configures the machine.
This causes confusion for everyone starting with AWS. When you think about it, though, it doesn't make sense to run the UserData script each time when the machine has already been configured.
What people often do instead is make "Golden Images" (aka Amazon Machine Images, AMIs) of pre-set-up EC2s, typically for machines that take a long time to install/configure. The beauty of this is you can set up Auto Scaling groups to use the images, which avoids any long installation during a scale-up event.
Pro tip: when developing a UserData script, run through and test it manually on the EC2 instance. Trust me, it's far quicker than troubleshooting unattended EC2 UserData errors.
Long answer: you can run the UserData on each boot of the machine using a MIME multi-part file. A MIME multi-part file allows your script to override how frequently user data is run by the cloud-init package.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
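Following that article, a sketch of such a multi-part user data (the shell payload here is just a placeholder):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
echo "ran at $(date)" >> /tmp/userdata-run.log
--//--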
For all those who run into this problem, first check the log with the command:
sudo cat /var/log/cloud-init-output.log
Then, if you notice connection errors to the various repositories, the reason is that you don't have an internet connection yet. If, once inside your EC2 instance, you manage to run the update and install commands, then the reason they fail in the UserData is that your EC2 instance takes a few seconds to get an internet connection and executes the commands before having it. To solve this problem, just add this command after #!/bin/bash:
#!/bin/bash
until ping -c1 8.8.8.8 &>/dev/null; do :; done
sudo yum update -y
...
This will prevent your EC2 instance from executing commands before an internet connection is established.
TL;DR: The command npm run build takes forever to run on the Amazon EC2 [Ubuntu] instance when I run it explicitly over SSH. Meanwhile, when I create a deployment using CodeDeploy, the deployment takes a good 1 hour and succeeds, but the build folder doesn't get populated, so I am unable to view my website on the public URL of the EC2 instance. Also, the instance reachability check fails every time after I try to run the command explicitly, and then I have to stop and start the EC2 instance again! Woof!
Hello everyone, I am trying to deploy my MERN Stack application to AWS but I am stuck now!
Current Progress:
Added both Nginx configs. [Attached images below.]
Nginx is running and there is no problem there!
Added build-app.sh, referenced from appspec.yml, in the root directory. [View code below]
#!/bin/bash
#clear build directory
cd /home/ubuntu/badlav-app/badlav-client
sudo rm -rf build
sudo mkdir build
#client (Generates a new `build` directory)
cd /home/ubuntu/badlav-app/badlav-client
sudo sh set-prod-env-aws.sh
sudo rm -rf node_modules
sudo npm i
sudo npm run build
#server
cd /home/ubuntu/badlav-app/badlav-server
sudo sh set-prod-env.sh
#back to root
cd /home/ubuntu
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/badlav-app
hooks:
  BeforeInstall:
    - location: scripts/build-app.sh
      runas: root
Using the above appspec.yml, the deployment via CodeDeploy succeeds but doesn't populate the build folder within /home/ubuntu/badlav-app/badlav-client/build.
So I tried to debug on my own and started running the commands one by one myself after SSH(ing :P) into the EC2 instance. But when I reach npm run build, the instance just hangs forever. Exhausted, with no option left, I terminated the task. Now, when I view my instance on the AWS Console, it has gone berserk: the instance reachability check fails! The only way I get my instance back is by stopping it and starting it again.
Since I am new to CI/CD, please don't judge my appspec.yml. It'd be great if anyone of you could suggest a better way, thanks for that! :)
To sum up, I want to be able to create a deployment using AWS CodeDeploy, but due to npm run build taking so much time and hanging my server (the instance reachability check fails!), I am unable to do so. Moreover, I am not even sure whether npm run build is the problem at all!
I would be more than happy to share any further details/screenshots in order to support my question. Please ask over.
Thanks in advance!
[Images: /etc/nginx/nginx.conf and /etc/nginx/conf.d/default.conf]
If you're using the EC2 free tier, chances are the instance has a low spec and little memory (a t2.nano has 0.5 GB and a t2.micro has 1 GB of memory).
Maybe npm run build consumes all of the memory space.
I often face the same problem with my vue project.
Solution: do NOT use the free tier for medium and large projects. Upgrade your plan and use better instances, e.g. t2.medium.
Whenever the AWS Auto Scaling group launches a new Ubuntu instance and I try to install any package on it, I get the following error:
[stderr]E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
[stderr]E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend),
Is there another process using it?
I tried to find a solution and fixed it manually, but I don't know why the autoscaling group gives the above error whenever it launches a new Ubuntu instance.
When any command updates Ubuntu or installs a new application, it locks dpkg (the Debian package manager).
To identify the problem, please look at the logs:
If your system is installing some updates, you may find journalctl logs with journalctl -u apt-daily.service. This usually happens when the system is set to update itself; you will notice such activity with ps -ef | grep apt.systemd.daily, and you can check these settings in the file /etc/apt/apt.conf.d/20auto-upgrades.
Check /var/log/dpkg.log* (it may get rotated) to find which packages were being installed.
Once you have identified the problem, you can solve with these methods:
If the system is updating, try to wait by executing a sleep command in the --user-data of your bootstrapping script.
If your first installation of a service/application is blocking another one, add a condition to wait/sleep until the first service is up, and so on with the rest of the services you are installing (see the snippet below).
This was a common problem in Ubuntu 16.04 LTS, and you can find the same issue with solution code at https://forums.aws.amazon.com/thread.jspa?threadID=251663
A snippet of code from the referenced link:
until service codedeploy-agent status >/dev/null 2>&1; do
  sleep 60
  rm -f install
  wget https://aws-codedeploy-us-west-2.s3.amazonaws.com/latest/install
  chmod +x ./install
  sudo ./install auto
  service codedeploy-agent restart
done
SSH into the instance before/while the UserData is running and check which process has acquired the lock:
$ lsof /var/lib/dpkg/lock-frontend
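If your boot script just needs to wait until the lock is free, a minimal sketch (using the standard fuser utility; the package name below is a placeholder):
# Block until no process holds the dpkg frontend lock
while sudo fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1; do
  sleep 5
done
sudo apt-get install -y <your-package>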
Also, try to enable CodeDeploy agent at the last step after performing all other steps in UserData, like:
https://gist.github.com/say8425/8344d19911dba20fab5538b85006bd31
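A sketch of that ordering (the us-west-2 bucket is taken from the snippet above; adjust for your region):
#!/bin/bash
# ...all other setup steps first...
apt-get update -y
apt-get install -y ruby wget
# Install and start the CodeDeploy agent as the very last step
cd /tmp
wget https://aws-codedeploy-us-west-2.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
service codedeploy-agent start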
I'm using an Ubuntu 12.04 AMI in EC2 to create a Puppet cluster and I'm facing problems while configuring it.
The problem is that the master is not able to recognize the slaves.
Do I need more packages other than MySQL?
/etc/mysql/my.cnf
What changes do I need in the above file?
Puppet is a configuration management tool that automates the process of defining and maintaining a consistent state across several developer workstations. It is a descriptive, centralized, client-server based system: the central server is configured, and the clients synchronize themselves to it so that all systems end up in the described state. For instance, the task of ensuring the same development environment on all developer systems in a project can easily be accomplished using Puppet.
Here is a quick procedure to set up a Puppet server and one Puppet client on Amazon EC2 instances running Ubuntu, and to install Puppet Dashboard on the server to view the status of the clients.
Prerequisites
Two EC2 instances set up with an Ubuntu AMI.
One instance named puppetserver and the other puppetclient.
Procedure
Puppet server and client set up
Configuring hosts files
View the /etc/hostname file on puppetserver and puppetclient. These are the Puppet server and client hostnames, respectively.
Edit the /etc/hosts file on both systems. Add the server and client IPs and their corresponding hostnames.
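For example (the private IPs below are placeholders):
10.0.0.10 puppetserver
10.0.0.11 puppetclient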
Setting up the Puppet Server
Enabling the Puppet Labs Package repository
Download the "puppetlabs-release" package for your OS (here, Ubuntu 12.04) on the Puppet server.
Install the package by running
dpkg -i
Run apt-get update to get the new list of available packages.
For example, to enable the repository for Ubuntu 12.04, Precise Pangolin:
wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
Install Puppet
Install puppetmaster
sudo apt-get update
sudo apt-get install puppetmaster
Setting up the Puppet Client
Install Puppet on the puppet client(s)
sudo apt-get update
sudo apt-get install puppet
Specify the Puppet server domain name on the client. To do this, modify the
/etc/puppet/puppet.conf
file and add the line
server=<puppet server hostname>
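A sketch of the resulting client puppet.conf, using the puppetserver hostname from the prerequisites (placing the setting in the [main] section is an assumption; [agent] also works):
[main]
server = puppetserver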
The client can now connect to the Puppet master.
Start the Puppet agent service for establishing first communication between server and client.
sudo puppet agent --verbose --no-daemonize --onetime
This starts a connection to the Puppet master process listening on port 8140 on the Puppet server. The output will be verbose, and the agent will not continue running in the background as a daemon. It will also run only once; after the connection is closed, the agent process will exit.
The client has made itself known to the server by sending an SSL certificate request. The server needs to certify the client.
To view the list of yet-to-be-signed certificates on the server:
sudo puppet cert --list
This lists the pending certificate requests, including the one from the client.
Sign the client node's SSL certificate
sudo puppet cert --sign <puppet client name>
The client can now establish a full connection to the server and poll the Puppet master for any configuration updates.
Defining Configurations
We have set up Puppet on both the server and the client and established communication between the two machines. The next step is to define the configuration for the target systems using Puppet manifests. These manifests are specified in the site.pp file.
As an example, we define a manifest that will create a helloworld.txt file on the client.
Defining manifest
Put the following manifest definition in the /etc/puppet/manifests/site.pp file:
node "<puppet client hostname>" {
  file { "/home/ubuntu/helloworld.txt":
    content => "This is test content",
    ensure  => file,
    owner   => "ubuntu",
    group   => "ubuntu",
    mode    => 0644,
  }
}
This manifest declares that the Puppet client must have a helloworld.txt file in the /home/ubuntu/ folder with the content "This is test content".
Getting changes on client
On puppet client, run the following command.
sudo puppet agent -t
The Puppet client pulls the manifests defined in the site.pp file on the Puppet server. It learns that a file named helloworld.txt with the defined specifications is expected to exist at /home/ubuntu. Since no such file exists on the client, the agent takes action and creates the file.
View the 'helloworld.txt' file
To verify that the client is in the state defined by the Puppet server, run the following command
sudo vi /home/ubuntu/helloworld.txt
The file contents are the same as defined in the manifest on the server.
Installing Puppet Dashboard
Overview
Puppet Dashboard is a GUI that interfaces with Puppet. It can be used to view and report the status of all client nodes. Puppet Dashboard runs on port 3000 on the Puppet server.
Following are the steps for setting it up.
Installing external dependencies
Dashboard is a Ruby on Rails web app and thus requires certain software to be installed:
RubyGems
Rake version 0.8.3 or newer
MySQL database server version 5.x
Ruby-MySQL bindings version 2.7.x or 2.8.x
Install the packages
sudo apt-get install -y build-essential irb libmysql-ruby libmysqlclient-dev libopenssl-ruby libreadline-ruby mysql-server rake rdoc ri ruby ruby-dev
Install RubyGems package system
(
  URL="http://production.cf.rubygems.org/rubygems/rubygems-1.3.7.tgz"
  PACKAGE=$(echo $URL | sed "s/\.[^\.]*$//; s/^.*\///")
  cd $(mktemp -d /tmp/install_rubygems.XXXXXXXXXX) && \
  wget -c -t10 -T20 -q $URL && \
  tar xfz $PACKAGE.tgz && \
  cd $PACKAGE && \
  sudo ruby setup.rb
)
Create gem as an alternative name for gem1.8
sudo update-alternatives --install /usr/bin/gem gem /usr/bin/gem1.8 1
Installing Puppet Dashboard
Install puppet-dashboard from puppetlabs package repository
sudo apt-get update
sudo apt-get install puppet-dashboard
Configuring Dashboard
Modify the database.yml file. It can be found at /usr/share/puppet-dashboard/config/database.yml.
Under the key-value pairs for the production environment, the database value dashboard_production specifies the Dashboard database name, and the username value dashboard specifies the user for this database. In the next step, we will create both the database and the user. The password value is the MySQL password.
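A sketch of the relevant production section (the adapter and encoding values are assumptions based on a standard Rails/MySQL setup of that era):
production:
  database: dashboard_production
  username: dashboard
  password: my_password
  encoding: utf8
  adapter: mysql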
Creating and Configuring MySQL database
Create the user and database for puppet-dashboard. Open the MySQL command line:
CREATE DATABASE dashboard_production CHARACTER SET utf8;
CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON dashboard_production.* TO 'dashboard'@'localhost';
Configure MySQL's maximum packet size to permit larger rows in the database:
set global max_allowed_packet = 33554432;
Also modify the mysql configuration file /etc/mysql/my.cnf
Allowing 32 MB permits an occasional 17 MB row with plenty of spare room:
max_allowed_packet = 32M
To create dashboard tables, run the following command in the puppet-dashboard folder
cd /usr/share/puppet-dashboard
rake RAILS_ENV=production db:migrate
Testing that Dashboard is working
Start the dashboard using Ruby’s built-in WEBrick server
cd /usr/share/puppet-dashboard
sudo ./script/server -e production
The Dashboard instance starts on port 3000 using the "production" environment. The Dashboard UI can be viewed at http://<server hostname>:3000.
Configure puppet
Both the puppet server and client need to be configured for the dashboard to receive reports.
Configure agent nodes to submit reports to the master by turning their reporting on.
puppet.conf (on each agent)
[agent]
report = true
Configure the server. Add the http report handler to the Puppet server's reports setting, and set reporturl to the Dashboard instance's reports/upload URL.
puppet.conf (on puppet master)
[master]
reports = store, http
reporturl = http://<server hostname>:3000/reports/upload
To enable the Dashboard's external node classifier (ENC), add:
puppet.conf (on puppet master)
[master]
node_terminus = exec
external_nodes = /usr/bin/env PUPPET_DASHBOARD_URL=http://<server hostname>:3000 /usr/share/puppet-dashboard/bin/external_node
Testing Puppet's connection to Dashboard
Restart the puppet master
Run one of the Puppet agents to test the configuration:
sudo puppet agent -t
The agent output shows that the report has arrived. To process it, we will activate the delayed_job workers.
Starting delayed_job workers
Run the following command
cd /usr/share/puppet-dashboard
sudo env RAILS_ENV=production script/delayed_job -p dashboard -n 1 -m start
This starts the delayed_job workers and completes the pending tasks.
Thus, Puppet is now installed on two EC2 instances, one as the server and the other as the client. Puppet Dashboard is also installed to view the status of the client nodes.