I've been following along with this blog to set up a SHOUTcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting; when I visit the application's URL I instead get:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I check the log file used in the action hook, I find:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows SHOUTcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux SHOUTcast distribution)
I've read on several forums that OpenShift often resets file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (even after using FileZilla to edit the file permissions). Since sc_serv or sc_serv.exe is the main command-line application that keeps the server going, I'm wondering how I can get around this odd permissions error.
Start action hook (when I used the Windows SHOUTcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
Start action hook (when I used the Linux SHOUTcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux while I'm using Windows to edit the OpenShift repository, and I assume that the files extracted from the Linux distribution of SHOUTcast are the same whether they are extracted on Windows or Linux, but I can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether the Linux or the Windows one), which essentially runs the whole service. I've tested the server on my own localhost and found it working perfectly, so I have no doubt that if it were allowed to run (with the right settings listed in this blog) it would work.
Edit: Solved
In order to change the permissions and keep them that way, they need to be set through git using:
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets everything permission-wise on every git push (to keep the code safe, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
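If you want to double-check that the executable bit is actually recorded in the repository, you can inspect the file mode in git's index (diy/sc_serv is just where the file sits in my repo layout):
# 100755 means executable, 100644 means not
git ls-files --stage diy/sc_serv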
In the end, what I have in OpenShift's diy folder is the Linux distribution of SHOUTcast (which can be extracted with 7-Zip), modified so that it can be reached through port forwarding as in this blog. To reach the server (having set up OpenShift's client tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using the SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into it as well using that same command. Port forwarding means that instead of finding the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports rhc port-forward indicates. You could also be using your other favorite broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, such as a Barix box, there is probably some other, trickier way of accomplishing the same thing.
Related
I'm running Docker 19 on Windows 10 for development. A container volume binds directly to a Git repo folder and serves the Django app in that folder. I use Mingw-w64 to run Git (aka Git Bash).
Occasionally, I'll do the following (or something similar to the following):
Request a page served by the Docker container. (To replicate an error, for example.)
Switch to a different branch.
Request a page served by the Docker container from the new branch.
Switch to a different branch.
On the last branch switch, Git will freeze for a bit and then report permission denied on a particular file. The file differs between the two branches, so Git is trying to change it.
Process Explorer tells me the file is in use by the System process, so the only way to get it released is to restart.
My gut is telling me the Django web process (manage.py runserver) is likely locking the file until the request connection is fully closed, and that the connection is probably lingering in an established state.
Is my gut right? If it is... Why is the lock held by the system process and not Docker? Is there anything to do to check before I do a branch change? Is there any way to prevent it from happening at all?
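One workaround I've been considering (assuming a docker-compose setup, with web as a hypothetical service name for the Django container) is to stop the container before switching branches so that runserver releases its handles, though I'd prefer not to have to do this every time:
# run from Git Bash before the checkout; "web" is a placeholder service name
docker-compose stop web
git checkout other-branch
docker-compose start web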
I'm trying to make a release from VSTS to a VM (running on AWS) that is running IIS. For that I use three tasks:
Windows Machine File Copy
Manage IIS App
Deploy IIS App
Before the release, I run a build pipeline that gives me an artifact containing the web app (webapp.zip).
When I manually put it on the server, I can run steps 2 and 3 of my release and the application works. The problem is that I can't get the Windows Machine File Copy task to work; it always throws an exception: 'System Error 53: The network path was not found'. Of course the machines are not domain-joined, because I'm running my release on VSTS and need the files on an AWS VM. I have opened port 445 (for file sharing) and made sure the user has rights to the destination path on the target machine.
So my question is: how can I actually move the files from VSTS to the AWS VM if the two machines are not domain-joined?
Use the FTP Upload or cURL Upload step/task instead.
Regarding how to create an FTP site, you can refer to this article: Creating a New FTP Site in IIS 7.
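For instance, the cURL route boils down to an upload along these lines (host, path and credentials are placeholders; adjust them to your FTP site):
curl -T webapp.zip ftp://your-vm.example.com/site/ --user deployuser:password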
Disclaimer: this answer merely explains how to fulfill the requirements for the Windows Machine File Copy and Manage/Deploy IIS App tasks.
Please always be mindful of the security of your target hosts; hardening and a security assessment are absolutely necessary.
As noted in the comments, you need to protect the deployment channel from the outside world.
Answer:
In order to use the Windows Machine File Copy task you need to:
on the target machine (the one running IIS), enable File and Printer Sharing by running the following command from an administrative command prompt:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
ensure that PowerShell 4 or more recent is installed on the target machine; the following, executed from a PowerShell prompt, prints the version installed on the local machine:
PS> $PSVersionTable.PSVersion
To get PowerShell 5 you could, for example, install WMF 5;
the target machine must have .NET Framework 4.5 or more recent installed (a quick way to check is sketched just below this list).
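One way to check the installed .NET Framework 4.x version is to read the Release value the installer writes to the registry; 378389 or higher means 4.5 or later. This is just a convenience check, not something the tasks require you to run:
PS> (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release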
The other two tasks (Manage IIS App and Deploy IIS App) both require you to enable a WinRM HTTPS listener on the target machine. For a development deployment scenario you could follow these steps:
download the ConfigureWinRM.ps1 PowerShell script from the official VSTS Tasks GitHub repository;
from an administrative PowerShell prompt, enable the RemoteSigned PowerShell execution policy:
PS> Set-ExecutionPolicy RemoteSigned
run the script with the following arguments:
PS> ConfigureWinRM.ps1 FQDN https
Note that FQDN is the fully qualified domain name of your machine as it is reached by the VSTS task, e.g. myhostname.domain.example .
Note also that this script downloads two executables (makecert.exe and winrmconf.cmd) from the Internet, so the machine must have an Internet connection. Otherwise, download those two files yourself, place them next to the script, and comment out the Download-Files invocation in the script.
Now you have enabled a WinRM HTTPS listener with a self-signed certificate. Remember to use the "Test Certificate" option (which, ironically, means not to test the certificate; a better name would have been "Skip CA Check") for those two tasks.
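To double-check that the HTTPS listener is actually up after running the script, you can list the configured WinRM listeners from an administrative prompt (standard WinRM tooling); you should see a listener with Transport = HTTPS on port 5986:
PS> winrm enumerate winrm/config/listener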
In a production deployment scenario you may want to use a properly signed certificate instead.
Windows Machine File Copy is designed to work within the same network; exposing it to the internet would open your server up to attack. It's meant for internal networks. FTP would also pose a significant security risk unless managed properly.
The easiest way to move forward would be to run an Agent on the VM in AWS that you want to release to. The agent will then download the artifacts to the AWS VM and run whatever tasks you need to install.
This allows you to run tasks on the local machine without opening it up to security risks.
If you have multiple machines to manage in AWS, you can easily create a local network that allows a single agent to use Windows Machine File Copy to push files to multiple VMs without risk.
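As a rough sketch, registering the agent on the AWS VM looks something like the following, run from an administrative prompt in the unpacked agent folder (the account URL, PAT, pool and agent name are placeholders, and the exact flags can vary by agent version):
PS> .\config.cmd --unattended --url https://youraccount.visualstudio.com --auth PAT --token <personal-access-token> --pool Default --agent aws-iis-01 --runAsService
The agent then pulls work from VSTS over an outbound HTTPS connection, so no inbound ports need to be opened on the VM.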
I'm new to the Linux world and recently started messing around with Ubuntu Server. Within a week or so I was able to understand the basics, and now I'm shooting for more difficult tasks like setting up a web server, SSH and SVN.
The question is: how do I set up the URL so the repository is reachable as svn://......../....?
I found this and the book http://svnbook.red-bean.com/en/1.7/svn.serverconfig.svnserve.html, but perhaps I'm not doing something right:
svnserve as daemon
The easiest option is to run svnserve as a standalone “daemon” process. Use the -d option for this:
$ svnserve -d
$
# svnserve is now running, listening on port 3690
When I type this line and restart the apache2 service, nothing changes. And I'm not sure this is the way to do it.
When I type this line and restart the apache2 service, nothing changes
Apache is in no way related to handling the svn:// protocol.
A quote from the SVN Book page you referenced:
Once we successfully start svnserve as explained previously, it makes
every repository on your system available to the network. A client
needs to specify an absolute path in the repository URL. For example,
if a repository is located at /var/svn/project1, a client would reach
it via svn://host.example.com/var/svn/project1.
(The bolded part is important.)
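If you don't want the full filesystem path in the URL, svnserve's -r (--root) option restricts it to a given directory, and that directory becomes the root of the svn:// URL space. A minimal sketch, assuming your repositories live under /var/svn:
$ svnserve -d -r /var/svn
# /var/svn/project1 is now reachable as:
#   svn://host.example.com/project1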
I have a Jenkins server on OS X 10.7 which polls a Subversion server, builds the code and packages the app. The last step I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However, I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter the password when the command runs, in the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
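For example, if the SMB volume is mounted so that the user running Jenkins can write to it, the build step can drop sudo entirely (the paths below are the placeholders from your question):
# assumes the share is already mounted and writable by the Jenkins user
cp -R "path/to/app" "/Volumes/path/to/target"
If that still fails, it is a mount/permission problem rather than a Jenkins one.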
I've just stumbled upon Fabric and the documentation doesn't really make it obvious how it works.
My educated guess is that you need to install it on both the client side and the server side. The Python code is stored on the client side and transferred through Fabric's wire protocol when the command is run. The server accepts connections using the OpenSSH SSH daemon, through the ~/.ssh/authorized_keys file for the current user (or a special user, or one specified in the host name passed to the fab command).
Is any of this correct? If not, how does it work?
From the docs:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.
So it's just like ssh'ing into a box and running the commands you've put into run()/sudo().
There is no transfer of code, so you only need to have ssh running on the remote machine and have some sort of shell (bash is assumed by default).
If you want remote access to a Python interpreter, you're looking more at something like execnet.
If you want more information on how execution on the remote machine(s) works, look at this section of the docs.
Most of what you are saying is correct, except that the fabfile.py file only has to be stored on your client. An SSH server like OpenSSH needs to be installed on your server, and an SSH client needs to be installed on your client.
Fabric then logs into one or more servers in turn and executes the shell commands defined in fabfile.py. If you are in the same directory as fabfile.py, you can run fab --list to see the available commands and then fab [COMMAND_NAME] to execute one.
Your key does not need to be added to ~/.ssh/authorized_keys on the server, but if it is, you don't have to type the password every time you want to execute a command.
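For a concrete picture, a minimal fabfile.py might look like this (Fabric 1.x API; the host, user and commands are just placeholders):
from fabric.api import env, run, sudo

# machines Fabric will SSH into when a task runs
env.hosts = ['deploy@host.example.com']

def uptime():
    # executed on the remote host over SSH
    run('uptime')

def install_nginx():
    # executed on the remote host via sudo
    sudo('apt-get install -y nginx')

Running fab uptime from the directory containing this file SSHes into host.example.com and runs uptime there; no Python is required on the server side.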