chef client run in aws bootup script not getting registered in chef server - amazon-web-services

We used the command below in the boot script of an AWS machine. It ran chef-client and the software was installed, but I am not able to find the node in the Chef console when I search with the VM's IP address.
chef-client -r 'role[test]'
Can someone explain how the chef-client -r option works?

The -r flag runs chef-client as normal but replaces the run-list coming from the Chef Server with the one specified on the command line. The two likely explanations here are that chef-client is hitting an error which prevents the node data from being saved back to the server, or that the way you are searching isn't correct. If you are searching for ipaddress:1.2.3.4, make sure you are using the private IP address (I think).
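As a quick sanity check (a sketch assuming you have a workstation with knife configured; the node name and IP below are placeholders), you can confirm whether the node object ever made it to the server and what IP it registered with:
knife node list
knife node show ip-10-0-1-23 -a ipaddress
knife search node 'ipaddress:10.0.1.23'
If the node shows up in knife node list but not in the IP search, the search query is the problem; if it doesn't show up at all, check the chef-client output captured by your boot script for the error that prevented the node data from being saved.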

Related

How to run bash script on Google Cloud VM?

I found this auto shutdown script for VM instances on GCP and tried to add that into the VM's metadata.
Here's a link to that shutdown script.
The config is supposed to shut the idle VM down after 20 minutes, but it's been a few hours and it never shut down. Are there any more steps I have to take after adding the script to the VM metadata?
The metadata script I added:
Startup scripts are executed while the VM boots. If you execute your "shutdown script" at boot, there will be nothing for it to do. Additionally, in order for this to work a proper service has to be created, which will use this script to detect and shut down the VM when it's idling.
So, even if the main ashutdown script was executed at boot and there was no idling at that moment, it did nothing. And since the service wasn't there to run it again, your instance will run indefinitely.
For this to work you need to install everything on the VM in question:
Download all three files to some directory on your VM, for example with curl:
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/ashutdown
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/ashutdown.service
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/install.sh
Make install.sh executable: sudo chmod +x install.sh
Run it: sudo ./install.sh.
This should install & run the ashutdown service in your system.
You can check if it's running with service ashutdown status.
These instructions are for Debian-based systems, so if you're running CentOS or another flavour of Linux they may differ.
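If you would rather have this installation happen automatically at boot via the instance metadata, a minimal sketch of a startup-script doing the same steps (assuming the same raw.githubusercontent.com URLs and a Debian-based image) could look like this:
#!/bin/bash
# Hypothetical startup-script: fetch the auto-shutdown files and install the service.
set -e
cd /tmp
for f in ashutdown ashutdown.service install.sh; do
  curl -LJO "https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/$f"
done
chmod +x install.sh
./install.sh
Startup scripts run as root, so sudo is not needed inside them.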

How to keep a server process running on a Google Cloud VM?

This question seems very basic, but I wasn't able to quickly find an answer at https://cloud.google.com/compute/docs/instances/create-start-instance. I'm running a MicroMDM server on a Google Cloud VM by connecting to it using SSH (from the VM instances page in the Google Cloud Console) and then running the command
> sudo micromdm serve
However, I notice that when I shut down my laptop, the server also stops, which is actually why I wanted to run the server in a VM in the first place.
What would be the recommended way to keep the server running? Should I use systemd or perhaps run the process as a Docker container?
When you run the service from the command line, you "attach" it to your shell process; when you terminate your SSH session, your job gets terminated as well.
To make a process run in the background, simply append & to the end of the command, in your case:
sudo micromdm serve &
This way the server keeps running after the command returns. To make sure it also survives the end of your SSH session, you can combine it with nohup (nohup sudo micromdm serve &) or run it inside screen/tmux.
I also suggest adding that line to the instance startup script if you want the server to always be up, so that you don't have to run the command by hand each time :)
More on Compute Engine startup scripts here.
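For instance (a sketch with a placeholder instance name), you could put that line into a small script and attach it as the VM's startup script with gcloud:
printf '#!/bin/bash\nmicromdm serve &\n' > startup.sh
gcloud compute instances add-metadata my-instance --metadata-from-file startup-script=startup.sh
Keep in mind that startup scripts run as root on every boot, so sudo is not needed there.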
As described in the "Using MicroMDM with systemd" documentation, the suggested way to run the MicroMDM service on Linux is with systemd. First, on the Linux host, we create the micromdm.service file and move it to /etc/systemd/system/micromdm.service, then we can start the service. This way systemd keeps the service running, and restarts it if the service fails or the server restarts.
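A minimal sketch of such a unit and the commands to enable it, assuming micromdm is installed at /usr/local/bin/micromdm (adjust paths and flags to your setup):
sudo tee /etc/systemd/system/micromdm.service >/dev/null <<'EOF'
[Unit]
Description=MicroMDM server
After=network.target

[Service]
ExecStart=/usr/local/bin/micromdm serve
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now micromdm
With Restart=always, systemd restarts the process if it crashes, and enabling the unit makes it start again after a reboot.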

Running Shoutcast from Openshift Permission Denied Error

I've been following along with this blog to set up a SHOUTcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting when I visit the application's URL; instead I'm getting:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I checked the log file used in the action hook I'm finding:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows SHOUTcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux SHOUTcast distribution)
I've read on several forums that OpenShift often resets file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (after using FileZilla to edit the file permissions). Since sc_serv or sc_serv.exe is the main application (a command-line application) that keeps the server going, I'm wondering how I could get around this odd permissions error.
start action hook (when I used the Windows SHOUTcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
start action hook (when I used the Linux SHOUTcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux and I'm using Windows to edit the OpenShift repository; I assume that the files extracted from the Linux distribution of SHOUTcast are the same whether extracted on Windows or Linux, but I can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether the Linux or Windows one), which essentially runs the whole service. I've tested the server myself on my own localhost and found it working perfectly, so I have no doubt that it would work with the right settings (listed in this blog).
Edit: Solved
In order to have the permissions changed and kept that way, they need to be set through git using
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used) I read that OpenShift will reset everything permission-wise on every git push (to protect the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
In the end, what I have in OpenShift's diy folder is the Linux distribution of SHOUTcast (which can be extracted with 7-Zip), modified so that it can be reached through port forwarding as in this blog. To reach the server (having set up OpenShift's tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into it as well using that same command. Port forwarding means that instead of using the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports are indicated by rhc port-forward. You could also be using your other favorite broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, such as a Barix box, there will probably be some other, trickier way of setting this up.

Jenkins can't copy files to windows remote host

I have a Jenkins server on OS X 10.7 which polls a Subversion server, builds the code, and packages the app. The last step I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter the password when the command runs - in the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
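For example, if the share is already mounted and the user Jenkins runs as has write access to it, a plain cp in the shell build step (a sketch with placeholder paths) avoids sudo entirely:
cp -R "path/to/app" "/Volumes/path/to/target/"
If that still fails with a permissions error, fix the mount or share permissions for the Jenkins user (or mount the share as that user) rather than escalating with sudo inside the build.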

Understanding Fabric

I've just stumbled upon Fabric and the documentation doesn't really make it obvious how it works.
My educated guess is that you need to install it on both the client side and the server side. The Python code is stored on the client side and transferred through Fabric's wire protocol when the command is run. The server accepts connections using the OpenSSH SSH daemon, through the ~/.ssh/authorized_keys file for the current user (or a special user, or one specified in the host name given to the fab command).
Is any of this correct? If not, how does it work?
From the docs:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.
So it's just like ssh'ing into a box and running the commands you've put into run()/sudo().
There is no transfer of code, so you only need to have ssh running on the remote machine and have some sort of shell (bash is assumed by default).
If you want remote access to a python interpreter you're more looking at something like execnet.
If you want more information on how execution on the remote machine(s) work look to this section of the docs.
Most of what you are saying is correct, except that the "fabfile.py" file only has to be stored on your client. An SSH server like OpenSSH needs to be installed on your server, and an SSH client needs to be installed on your client.
Fabric then logs into one or more servers in turn and executes the shell commands defined in "fabfile.py". If you are in the same directory as "fabfile.py", you can run "fab --list" to see a list of available commands and then "fab [COMMAND_NAME]" to execute one.
Your public key does not need to be added to "~/.ssh/authorized_keys" on the server, but if it is, you don't have to type the password every time you want to execute a command.
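For example (a sketch with placeholder user and host names, and a hypothetical task called deploy defined in your fabfile.py), you can push your public key to the server once and then run tasks without password prompts:
ssh-copy-id deploy@server.example.com
fab --list
fab deploy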