VSTS Task: Windows Machine File Copy: system error 53

I'm trying to make a release from VSTS to a VM (running on AWS) that is running IIS. For that I use three tasks:
Windows Machine File Copy
Manage IIS App
Deploy IIS App
Before the release I'm running a build pipeline that gives me an artifact containing the web app (webapp.zip).
When I manually put it on the server I can run steps 2 and 3 of my release and the application works. The problem I have is that I can't get the Windows Machine File Copy to work. It always throws an exception giving a 'System Error 53: The network path was not found'. Of course the machines are not domain joined, because I'm running my release on VSTS and need the files on an AWS VM. I tried to open port 445 (for file sharing) and made sure the user has rights to the destination path on the target machine.
So my question is: how can I actually move the files from VSTS to the AWS VM if the two machines are not domain joined?

Use the FTP Upload or cURL Upload step/task instead.
Regarding how to create an FTP site, you can refer to this article: Creating a New FTP Site in IIS 7.
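If you go the cURL route, the upload boils down to something like the following command run from the build/release agent (host, credentials and remote path are placeholders for illustration; don't keep credentials inline in a real pipeline):
curl -T webapp.zip ftp://deployuser:s3cret@myhostname.domain.example/site/ --ftp-create-dirs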

Disclaimer: this answer merely explains how to fulfill the requirements of the Windows Machine File Copy and Manage/Deploy IIS tasks.
Please always be concerned about the security of your target hosts; hardening and a security assessment are absolutely necessary.
As noted in the comments, you need to protect the deployment channel from the outside world; what follows is only a high-level example.
Answer:
In order to use the Windows Machine File Copy task you need to:
on the target machine (the one running IIS), enable File and Printer Sharing by running the following command from an administrative command prompt:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
ensure that PowerShell 4 or more recent is installed on the target machine; the following, executed from a PowerShell prompt, prints the version installed on the local machine:
PS> $PSVersionTable.PSVersion
To get PowerShell 5 you could, for example, install WMF 5;
the target machine must also have .NET Framework 4.5 or more recent installed.
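One way to verify the .NET Framework requirement (assuming a 4.x framework is present; the Release registry value is only written by 4.5 and later, and 378389 corresponds to 4.5) is:
PS> (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release -ge 378389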
The other two tasks (Manage IIS App and Deploy IIS App) both require you to enable a WinRM HTTPS listener on the target machine. For a development deployment scenario you could follow these steps:
download the ConfigureWinRM.ps1 PowerShell script from the official VSTS Tasks GitHub repository;
from an administrative PowerShell prompt, enable the RemoteSigned PowerShell execution policy:
PS> Set-ExecutionPolicy RemoteSigned
run the script with the following arguments:
PS> .\ConfigureWinRM.ps1 FQDN https
Note that FQDN is the complete domain name of your machine as it is reached by the VSTS task, e.g. myhostname.domain.example.
Note also that this script downloads two executables (makecert.exe and winrmconf.cmd) from the Internet, so the machine must have an Internet connection. Otherwise, download those two files yourself, place them next to the script, and comment out the Download-Files invocation in the script.
Now you have enabled a WinRM HTTPS listener with a self-signed certificate. Remember to use the "Test Certificate" option for those two tasks (which ironically means not testing the certificate; a better name would have been "Skip CA Check").
In a production deployment scenario you will want to use a properly signed certificate instead.
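Either way, a quick sanity check on the target machine is to list the WinRM listeners; with the default configuration (an assumption here) you should see an HTTPS listener on port 5986:
PS> winrm enumerate winrm/config/Listener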

Windows Machine File Copy is designed for internal networks; enabling file sharing over the internet would open your server up to attack. FTP would also pose a significant security risk unless managed properly.
The easiest way to move forward would be to run an agent on the AWS VM that you want to release to. The agent will then download the artifacts to the AWS VM and run whatever tasks you need to install the application.
This allows you to run tasks on the local machine without opening it up to security risks.
If you have multiple machines to manage in AWS, you can easily create a local network that allows a single agent to use Windows Machine File Copy to push files to multiple VMs without risk.
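As a rough sketch: download the agent package from the Agent Pools page of your VSTS account onto the AWS VM, extract it, and then configure and start it from an administrative prompt (config.cmd asks interactively for your VSTS URL and a personal access token; C:\agent below is just an example location):
cd C:\agent
.\config.cmd
.\run.cmd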

Related

Symbolic link on Amazon File Share System FSx - Windows EC2

I am currently hosting my ASP.NET web application on AWS. I have searched for the best AWS storage options for a Windows environment and found that the AWS file share service, FSx, is suitable for our needs.
One of the required features in my app is the ability to create symbolic links on the network shared folder. In my local environment I have Active Directory and a network shared folder. I applied the following steps to enable symbolic links on my Windows 10 PC, and it works:
1- Enable remote-to-remote symbolic links using this cmd command:
fsutil behavior set SymlinkEvaluation R2R:1
2- Check if the feature is enabled:
fsutil behavior query SymlinkEvaluation
the result is:
Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to remote symbolic links are enabled.
3- Apply this command to gain access to the target directory:
net use y: "\\share\Public\" * /user:UserName /persistent:yes
4- Create the symbolic link using this command:
mklink /D \\share\Public\Husam\symtest \\share\Public
It works fine on my local network with active directory.
On AWS I have an EC2 Windows VM joined to the AWS managed domain, the same domain I created the FSx file system with. I logged in to the machine as the domain administrator and gave this user share and security permissions on the shared folder using the Windows file shares GUI tool.
When I try to create the symbolic link I get: Access Denied
mklink /d \\fs-432432fr34234a.myad.com\share\Husam\slink \\fs-432432fr34234a.myad.com\share
Access Denied
Any suggestions? Is there a way to add this permission in Active Directory?
It looks to me like mklink is not supported by Amazon FSx. I can mklink to my heart's content on my EBS volume but not on FSx. Also, when I mount the share in Linux, ln -s test1 test2 gives:
ln: failed to create symbolic link 'test2': Operation not supported
I found a comment that said: "in the GPO you can change it in Computer Configuration > Administrative Templates > System > Filesystem and configure 'Selectively allow the evaluation of a symbolic link'" (deru, May 11 '17). I don't think it will help, though, because I can mklink on EBS.
This is a problem for me as my ASP.NET web app also uses mklink during its setup. My solution is to use a Windows container for my web app and then use docker-compose to put the links into the FSx file system. I originally thought I wanted to do the docker-compose build on the FSx volume, but that was a terrible idea because the EBS volume is way faster.
I was getting the same error messages reported above. I consulted with the AWS contacts available to the company I work for, and they confirmed that as of right now, FSx for Windows File Server does not support symbolic links.

How to develop remotely in PyCharm?

I have a lab system (with a hardware piece attached to it) which has some Python test scripts. The test scripts send commands to the attached hardware and receive responses.
I don't want to work on the lab computer all the time. Currently, I'm using SSH from my local machine to the lab computer and using the shell to modify the scripts, run the commands, etc. Using nano is cumbersome, especially while debugging. I want to use an IDE (PyCharm) on my local machine in order to edit and run the scripts on the remote server. PyCharm has remote interpreters, which use the remote Python, but I want to be able to access and modify the scripts too, just like over SSH from the terminal.
How can I do that?
PyCharm (Professional Edition only) is also capable of deployments. You can upload/download files via SFTP directly within PyCharm and run your scripts remotely.
You can visit the following pages for further instructions on how to set everything up:
Setting up a deployment
Configuring a remote interpreter
Yes, PyCharm Professional Edition can do this. Since PyCharm 2018.1, setting up a remote interpreter also automatically sets up deployment. If you have automatic deployment configured (Tools | Deployment | Automatic Deployment), all changes will automatically be uploaded to your SSH box.
See here for a tutorial on configuring an SSH box in PyCharm Professional Edition: https://blog.jetbrains.com/pycharm/2018/04/running-flask-with-an-ssh-remote-python-interpreter/

Running Shoutcast from OpenShift: Permission Denied error

I've been following this blog to set up a Shoutcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting; when I visit the application's URL I get:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I checked the log file used in the action hook I found:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows Shoutcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux Shoutcast distribution)
I've read on several forums that OpenShift often resets file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (noticed after using FileZilla to edit the file permissions). Since sc_serv or sc_serv.exe is the main application (a command-line application) that keeps the server going, I'm wondering how I can get around this odd permissions error.
Start action hook (when I used the Windows Shoutcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
Start action hook (when I used the Linux Shoutcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux and I'm using Windows to edit the OpenShift repository. I assume that the files extracted from the Linux distribution of Shoutcast are the same whether extracted on Windows or Linux, but I can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether the Linux or the Windows one), which essentially runs the whole service. I've tested the server on my own localhost and found it working perfectly, so I have no doubt that if it were to run (with the right settings listed in the blog) it would work.
Edit: Solved
In order to have the permissions changed and kept that way, they need to be set through git using:
git update-index --chmod=+x filename
git commit -m 'update file permissions etc.'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets everything (permission-wise) on every git push (to retain the safety of the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
In the end, what I have in OpenShift's diy folder is the Linux distribution of Shoutcast (which can be extracted with 7-Zip), modified so that it can be reached through port-forwarding as in the blog. To reach the server (having set up OpenShift's client tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into that as well using the same command. Port-forwarding means that instead of finding the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports are indicated by rhc port-forward. You could also be using your other favorite broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, such as a Barix box, there is probably some other, trickier way of doing this.

Jenkins can't copy files to Windows remote host

I have a Jenkins server on OS X 10.7, which polls a Subversion server, builds the code and packages the app. The last step I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter a password when the command runs - from the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
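For instance, if the mounted volume is writable by the user the Jenkins daemon runs as (jenkins in your case), a plain copy without sudo should work as the build step (the paths below are the placeholders from your question; -R is added in case the app is a bundle/directory):
cp -R "path/to/app" "/Volumes/path/to/target"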

Understanding Fabric

I've just stumbled upon Fabric and the documentation doesn't really make it obvious how it works.
My educated guess is that you need to install it on both the client side and the server side. The Python code is stored on the client side and transferred through Fabric's wire protocol when the command is run. The server accepts connections using the OpenSSH daemon, via the ~/.ssh/authorized_keys file for the current user (or a special user, or one specified in the host name given to the fab command).
Is any of this correct? If not, how does it work?
From the docs:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.
So it's just like ssh'ing into a box and running the commands you've put into run()/sudo().
There is no transfer of code, so you only need to have ssh running on the remote machine and have some sort of shell (bash is assumed by default).
If you want remote access to a Python interpreter, you're looking more at something like execnet.
If you want more information on how execution on the remote machine(s) works, look at this section of the docs.
Most of what you are saying is correct, except that the "fabfile.py" file only has to be stored on your client. An SSH server like OpenSSH needs to be installed on your server, and an SSH client needs to be installed on your client.
Fabric then logs into one or more servers in turn and executes the shell commands defined in "fabfile.py". If you are in the same directory as "fabfile.py" you can run "fab --list" to see a list of available commands and then "fab [COMMAND_NAME]" to execute a command.
Your public key does not need to be added to "~/.ssh/authorized_keys" on the server, but if it is, you don't have to type the password every time you want to execute a command.
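For example, to avoid the password prompt you can copy your public key to the server once and then run a task against it (host, user and task name below are placeholders; this assumes you already have an SSH key pair on the client):
ssh-copy-id deployuser@yourserver.example.com
fab -H deployuser@yourserver.example.com your_task_name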