How do you restart logstash on an AWS server?

This is a pretty basic question, but I could not for the life of me find a straight answer about it on Google.
I have Logstash/Kibana/Elasticsearch installed and working on an AWS server. Due to some complications, Logstash stopped sending data to Kibana, and I wanted to restart it to see if that would fix the issue. (This is on Ubuntu 14.04.2.)
None of the commands I looked up on Google would restart the service properly. If I check the running services, logstash is on the list but is marked with a -.
restart logstash gives the error "Unknown job: logstash", and some of the other commands I found gave similar errors.
What is the proper command to run in order to restart logstash?

If you installed the .deb or .rpm package, you can restart Logstash with:
sudo service logstash restart
or
sudo /etc/init.d/logstash restart
or
sudo service logstash stop
sudo service logstash start
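To see what state the service is in before and after, you can also check its status (a sketch; Ubuntu 14.04 uses the init/Upstart script installed by the package, while the systemctl line applies only to newer systemd-based distributions):
sudo service logstash status
sudo systemctl status logstash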
If Logstash has trouble starting or stopping through the service scripts, you can also start it manually:
go to /opt/logstash/bin and run the logstash script (logstash.bat on Windows)
cd /opt/logstash/bin
./logstash -f logstash-simple.conf
logstash-simple.conf is your Logstash config file; you can give it any name (a minimal example is sketched at the end of this answer).
or
If you downloaded a .zip or any other compressed file, go to that path and run it the same way.
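For reference, a minimal logstash-simple.conf can be as small as the classic stdin-to-stdout example from the Logstash docs (a sketch; a real config would name your actual inputs and outputs):
input { stdin { } }
output { stdout { codec => rubydebug } }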

Related

Neo4J online backup error on AWS - Failed to run a backup using the available strategies

I'm testing Neo4j Enterprise 3.3.3 on AWS and trying to run an online backup of a database located on a different server.
I run this on my AWS instance:
neo4j-admin backup --backup-dir=~/backup --name=graph.db-backup --from=0.0.0.0:4444
where I replace 0.0.0.0 with the public IP of the external Neo4j DB and 4444 with my port.
But then I get this error:
Failed to load private key: /var/lib/neo4j/certificates/neo4j.key
UPDATE
I fixed that by running the command with sudo (on Amazon AWS).
However, now I'm getting another error:
Failed to run a backup using the available strategies.
The documentation on backups says that you only need to uncomment some settings in neo4j.conf, which is what I've done, both on the server being backed up and on the one actually running the backup.
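For reference, the settings in question are the online-backup lines; this is a sketch based on Neo4j 3.x defaults, so verify the exact names against your own neo4j.conf:
dbms.backup.enabled=true
dbms.backup.address=0.0.0.0:6362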
Could it be that the issue is because on AWS you have to run commands with
systemctl
And if so, how do I run neo4j-admin with it?
It doesn't work if I use
systemctl neo4j-admin ...
Can somebody from Neo4j please help? Backup is one of the main reasons to get the Enterprise version, but there is not enough documentation on how to use it.

Updating files when deployed (Django/Python)

I am trying to update files on a project that has already been deployed. The changes do not show up on the deployed site, even though when I open the files with sudo vim via Git Bash, they contain the changes. Here's what I did while logged into the Ubuntu server on AWS:
cd into the project
git add .
git commit -a -m "message"
git pull origin master
(a Nano screen comes up, so I enter a message, press Ctrl+X, and answer "no"), and the changes show up in vim.
There are no changes when I refresh the deployed project, not even after rebooting the instance via AWS. Can someone please share the steps to make changes appear on a deployed project? Thank you so much, I appreciate your feedback!
You need to restart the service that runs your app for the updates to take effect:
sudo systemctl restart service_name
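For example, a common Django deployment serves the app with Gunicorn behind Nginx; in that case the restart might look like this (the service names here are assumptions, so substitute whatever units your server actually runs):
sudo systemctl restart gunicorn
sudo systemctl reload nginx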

Send file to Jenkins from web server

I have a web server running on Ubuntu (AWS EC2) and I would like to send a file to it. To do that I would like to use Jenkins, but I didn't find a plugin or a good configuration to do it.
The problem is that when I configure a plugin in Jenkins, it asks for a password, but my access to the server uses an SSH key, so there is no plain password I can give it.
I tried with:
FTP repository hosts
Publish over FTP
Publish over SSH
Can someone help me, please?
Thank you in advance.
I found the solution. In fact it was an access problem. I ran this command: sudo chown -R ubuntu:ubuntu [Directory] on the directory where I have my files. Then when I launched the build, it succeeded.
Hope this helps.
Thank you
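A minimal sketch of that fix with a hypothetical path, since the actual directory was left out of the answer:
sudo chown -R ubuntu:ubuntu /var/www/uploads
ls -ld /var/www/uploads
The ls -ld line just confirms that ubuntu now owns the target; /var/www/uploads is a placeholder, so substitute the real directory.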

Running Shoutcast from Openshift Permission Denied Error

I've been following along with this blog to set up a Shoutcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting when I visit the application's URL. Instead I'm getting:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I check the log file used in the action hook, I find:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows Shoutcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux Shoutcast distribution)
I've read on several forums that OpenShift often resets the chmod file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (after using FileZilla to inspect the file permissions). Since sc_serv or sc_serv.exe is the main application (a command-line application) that keeps the server going, I'm wondering how I can get around this odd permissions error.
Start action hook (when I used the Windows Shoutcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
Start action hook (when I used the Linux Shoutcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux and I'm using Windows to edit the OpenShift repository, and I assume the files extracted from the Linux distribution of Shoutcast are the same whether extracted on Windows or Linux, but I can't easily verify that. All I can tell so far is that OpenShift is blocking the main executable (whether Linux or Windows) which essentially runs the whole service. I've tested the server on my own localhost and found it working perfectly, so I have no doubt that it would run with the right settings listed in this blog.
Edit: Solved
In order to have the permissions changed and kept that way, they need to be set through git using:
git update-index --chmod=+x filename
git commit -m 'update file permissions etc.'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets everything permission-wise on every git push (to preserve the safety of the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
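To double-check that the executable bit is now recorded in the repository, you can inspect the staged file mode (diy/sc_serv is the path used in this post):
git ls-files --stage diy/sc_serv
A mode of 100755 means the executable bit is set; 100644 means it is not.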
In the end, what I have in OpenShift's diy folder is the Linux distribution of Shoutcast (which can be extracted with 7-Zip), modified so that it can be reached through port forwarding as in this blog. To reach the server (having set up OpenShift's tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into it as well using that same command. Port forwarding means that instead of finding the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports are indicated by rhc port-forward. You could also use your other favorite broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, like a Barix box, there will probably be some other, trickier way of doing this.

Jenkins can't copy files to windows remote host

I have a Jenkins server on OS X 10.7, which polls a Subversion server, builds the code, and packages the app. The last step I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell-script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However, I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user Jenkins runs as when it starts up? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help on how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter the password from the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
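Two sketches of how the copy could work without an interactive sudo prompt (the paths are the ones from the question; the sudoers rule is an assumption about your setup, not something Jenkins provides):
cp "path/to/app" "/Volumes/path/to/target"
This works as-is if the mounted share is writable by the user running Jenkins. Alternatively, if root really is required, grant that user a passwordless rule for this one command by adding a line via visudo:
jenkins ALL=(ALL) NOPASSWD: /bin/cp
After that, the sudo cp build step no longer prompts for a password (at the cost of letting the jenkins user run any cp as root, so use it with care).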