Is it possible to change the youtube-dl download location to a remote FTP server? If so, how can it be done? I want to make a remote FTP server the default download location for youtube-dl.
youtube-dl does not include FTP upload functionality itself, but you can mount the FTP server at a local directory. That way, every application can use the FTP server without any application-specific configuration.
How to do that depends on your operating system. Here are instructions for Debian/Ubuntu, which should work on most Linux distributions as well, and this question on our sister site Server Fault lists options for doing it on Windows.
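For example, on Debian/Ubuntu one common way to do this is curlftpfs (a minimal sketch; the package name is real, but the server address, credentials, and mountpoint below are placeholders for your own values):
sudo apt-get install curlftpfs                    # FUSE-based FTP filesystem
mkdir -p ~/ftp-mount                              # any local directory you own can serve as the mountpoint
curlftpfs ftp://user:password@ftp.example.com ~/ftp-mount
Once mounted, point youtube-dl's output template at a path under that mountpoint, as shown below.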
To make this the default behavior, create a youtube-dl configuration file (~/.config/youtube-dl/config on BSD/Linux/OSX, refer to the link for other locations) with the content:
-o '/my/ftp/mountpoint/some/subdirectory/%(title)s-%(id)s.%(ext)s'
where /my/ftp/mountpoint is the mountpoint you chose for the remote server, and some/subdirectory is the path on that server. See the youtube-dl documentation for more information on output templates.
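For a one-off download you can pass the same template directly on the command line instead of using the configuration file (the video URL here is just a placeholder):
youtube-dl -o '/my/ftp/mountpoint/some/subdirectory/%(title)s-%(id)s.%(ext)s' 'https://www.youtube.com/watch?v=EXAMPLE'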
I am currently hosting my ASP.NET web application on AWS. I have searched for the best AWS storage options for a Windows environment and found that the Amazon FSx file share service is suitable for our needs.
One of the required features in my app is the ability to create symbolic links on the network shared folder. In my local environment I have Active Directory and a network shared folder. I applied these steps to enable symbolic links on my Windows 10 PC, and it works:
1- Enable remote-to-remote symbolic links using this cmd command:
fsutil behavior set SymlinkEvaluation R2R:1
2- Check if the feature is enabled:
fsutil behavior query SymlinkEvaluation
The result is:
Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to remote symbolic links are enabled.
3- Apply this command to gain access to the target directory:
net use y: "\\share\Public\" * /user:UserName /persistent:yes
4- Create the symbolic link using this command:
mklink /D \\share\Public\Husam\symtest \\share\Public
It works fine on my local network with Active Directory.
On AWS I have an EC2 Windows VM joined to the AWS managed domain, the same domain I created the FSx file system with. I logged in to the machine as the domain administrator and gave this user share and security permissions on the shared folder using the Windows File Shares GUI tool.
When I try to create the symbolic link I get Access Denied:
mklink /d \\fs-432432fr34234a.myad.com\share\Husam\slink \\fs-432432fr34234a.myad.com\share
Access Denied
Any suggestions? Is there a way to add this permission in Active Directory?
It looks to me like mklink is not supported by Amazon FSx. I can mklink to my heart's content on my EBS volume, but not on the FSx share. Also, when I mount the share on Linux and run ln -s test1 test2, I get:
ln: failed to create symbolic link 'test2': Operation not supported
I found a comment that said "in the GPO you can change it in "Computer Configuration > Administrative Templates > System > Filesystem" and configure "Selectively allow the evaluation of a symbolic link" – deru May 11 '17 at 6:45." I don't think it will help, because I can mklink on EBS.
This is a problem for me, as my ASP.NET web app also uses mklink during its setup. My solution is to use a Windows container for my web app and then use docker-compose to put the links into the FSx file system. I thought I wanted to do the docker-compose build on the FSx volume, but this was a terrible idea because the EBS volume is way faster.
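One possible shape of that workaround (a rough sketch only, not my actual setup; the paths, the share name, and the ServiceMonitor entrypoint are assumptions) is an entrypoint batch file that creates the link on the container's local filesystem, pointing at the FSx share, before starting the app:
REM entrypoint.cmd - hypothetical sketch; all paths and host names are placeholders
REM Create a local directory link that points at the FSx share, then start IIS
if not exist C:\app\shared mklink /D C:\app\shared \\fs-432432fr34234a.myad.com\share
C:\ServiceMonitor.exe w3svc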
I was getting the same error messages reported above. I consulted with the AWS contacts available to the company I work for, and they confirmed that as of right now, FSx for Windows File Server does not support symbolic links.
I made a fairly standard deployment of the Single-Node File Server on Google Cloud. It works fine: I can mount the file server's disk from other instances.
However, now I want to add another disk to the same file server. The documentation says I should use the following command to add another file system:
zfs create storagepool_name/file_system_name
I tried to run this command on the VM that is acting as the file server, but I get the error that the command zfs is not found.
Now I can probably install zfs myself, but I wonder whether that will somehow collide with whatever the deployment has already set up on the machine.
Is installing and setting up zfs myself a problem? If so, how do I add another disk to the file server?
I figured out what went wrong with my setup of the Single-Node File Server.
First, the default deployment settings seem to choose xfs as the file system instead of zfs. The file server I had was using xfs, as can be seen in the metadata of the instance itself.
Second, as user John Hanley commented on my question, even with zfs selected as the file system, only the root user has its PATH variable set up properly to use the zfs command directly.
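So, assuming zfs really was selected at deployment time, the command works when run as root; a minimal sketch (the pool and file system names are placeholders):
sudo -i                                          # switch to root, whose PATH includes the zfs tools
zfs list                                         # confirm the pool is there
zfs create storagepool_name/file_system_name     # create the additional file system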
I've been following along with this blog to set up a SHOUTcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting when I visit the application's URL; instead I'm getting:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I check the log file used in the action hook, I find:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows SHOUTcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux SHOUTcast distribution)
I've read on several forums that OpenShift often resets file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (after using FileZilla to edit the file permissions). Since sc_serv or sc_serv.exe is the main command-line application that keeps the server going, I'm wondering how I can get around this odd permissions error.
Start action hook (when I used the Windows SHOUTcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
Start action hook (when I used the Linux SHOUTcast distribution):
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux and I'm using Windows to edit the OpenShift repository, and I assume that the files extracted from the Linux distribution of SHOUTcast are the same whether extracted on Windows or Linux, but I can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether it's the Linux or Windows one), which essentially runs the whole service. I've tested the server on my own localhost and found it working perfectly, so I have no doubt that, with the right settings listed in this blog, it would run.
Edit: Solved
In order to have the permissions changed and kept that way, they need to be changed through git using:
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
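To double-check that the executable bit was actually recorded before pushing, you can inspect the staged mode (the path below is a placeholder for your executable):
git ls-files -s diy/sc_serv    # a mode of 100755 means the executable bit is set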
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets everything permission-wise on every git push (to retain the safety of the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
In the end, what I have in OpenShift's diy folder is the Linux distribution of SHOUTcast (which can be extracted with 7-Zip), modified so that it can be reached through port forwarding as in this blog. To reach the server (having set up OpenShift's tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into that as well using the same command. Port forwarding means that instead of finding the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports rhc port-forward indicates. You could also be using your other favorite broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, such as a Barix box, there will probably be some other, trickier way of doing this.
I use config.vm.synced_folder to sync folders from the host to the VM, but I'd like to sync in the other direction. Is this possible using Vagrant/VirtualBox?
By default Vagrant uses VirtualBox's vboxsf to sync folders between host and guest.
It is two-way: if you make changes to the files in /vagrant_data in the guest, the corresponding files in the host's directory change too. You don't need to configure anything extra for the other direction.
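For example, assuming a Vagrantfile that maps ./host_data on the host to /vagrant_data in the guest (both names are placeholders), you can see the two-way behavior like this:
mkdir -p host_data                                    # host directory backing the synced folder
vagrant up
vagrant ssh -c 'touch /vagrant_data/from-guest.txt'   # create a file inside the guest
ls host_data                                          # the file shows up on the host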
Other options to sync files:
rsync
sshfs
NFS
See more => Synced Folders
If I understand correctly, you're looking to create a shared folder for Vagrant where files are being added from the guest machine and should show up in the host machine?
If that's the case, you're still going to have to create the host folder. I'm afraid Vagrant won't create the directory for you from a config.vm.synced_folder line in your Vagrantfile, but it will work fantastically once the host directory exists.
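In other words, create the host-side directory yourself before bringing the machine up (the path is a placeholder for whatever your config.vm.synced_folder line points at):
mkdir -p ./host_data    # the host folder referenced by the synced_folder line
vagrant reload          # re-mounts the synced folders now that the directory exists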
I have a CGI script that I know works (as far as the code is concerned), but which cannot be accessed through my website. My hosting provider simply states that I need to edit the .htaccess file, but I have no idea what options/handlers I need to set in order to make the contents of a directory execute as CGI programs (mine is compiled C++).
How is this done?
You can't on this service provider. A quick search of the Bluehost KB gave this: https://my.bluehost.com/cgi/help/48
Our LINUX web servers have the capability to run CGI scripts in your own "cgi-bin" directory. Scripts may be written in Perl, Python and CGI languages.
Here are some helpful tips to follow when installing scripts:
Upload to your cgi-bin directory to ensure proper file permission settings.
All scripts on our server must have permissions set to 755 (rwxr-xr-x). If you need help in changing script permissions, please see our article about setting file and user permissions.
Upload in ASCII transfer mode (and NOT BINARY mode)
The first line of each script must read: a) #!/usr/bin/perl (for Perl) b) #!/usr/bin/python (for Python)
Ensure the permissions are set to 755
However, there is nothing stopping you from just putting your executable in the cgi-bin directory and seeing if it runs, but this probably won't work.
In that case, you'd need to rebuild and relink your C++ program for the target server's environment, and I doubt that Bluehost would facilitate this -- just too much support hassle for the few dollars a month that you pay.
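For reference, on a generic Apache host that does allow per-directory overrides, enabling CGI from .htaccess usually comes down to two directives. A hedged sketch (the extension and program name are placeholders, and as the KB above says, Bluehost restricts CGI to cgi-bin):
cat >> .htaccess <<'EOF'
Options +ExecCGI
AddHandler cgi-script .cgi
EOF
chmod 755 myprogram.cgi    # the compiled C++ binary, given a .cgi extension here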