Google Cloud Single-Node File Server add another disk - google-cloud-platform

I made a fairly standard deployment of the Single-Node File Server on Google Cloud. It works fine: I can mount the file server's disk from other instances.
However, now I want to add another disk to the same file server. The documentation says I should use the following command to add another file system:
zfs create storagepool_name/file_system_name
I tried to run this command on the VM that is acting as the file server, but I get an error saying the zfs command is not found.
Now I can probably install zfs myself, but I wonder whether that will somehow collide with whatever the deployment has already set up on the machine.
Is installing and setting up zfs myself a problem? If so, how do I add another disk to the file server?

I figured out what went wrong with my setup of the Single-Node File Server.
First, the default deployment settings seem to choose xfs as the default file system instead of zfs. The file server I had was using xfs, as can be seen in the metadata of the instance itself.
Secondly, as user John Hanley commented on my question, even with zfs selected as the file system, only the root user has its PATH variable set up so that the zfs command can be used directly.
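For anyone else in the same spot, here is a minimal sketch of both checks, assuming an instance named fileserver-vm in zone us-central1-a (both names are placeholders):
# See which file system the deployment recorded in the instance metadata
gcloud compute instances describe fileserver-vm --zone us-central1-a --format="value(metadata.items)"
# If the deployment did choose ZFS, switch to root so zfs is on the PATH
sudo su -
zfs create storagepool_name/file_system_name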

Related

Check which ebextensions have been run

I have an environment in Amazon Elastic Beanstalk that is proving problematic. I'm trying to check which ebextensions have run because there is an oddity where I see a logrotate conf file created but the contents are not what I've written.
Does anyone know which log file I can find that information in? I've tried /var/log/eb-engine.log but that doesn't seem to have anything about running ebextensions. eb-activity and eb-commandprocessor are mentioned in the docs, but they don't exist on the instance.
Platform: Ruby 2.6 running on 64bit Amazon Linux 2/3.0.3
As Amazon Linux 2 is still being worked on, some log files aren't available yet, according to AWS Support.
However, you can see which ebextensions have been run by looking in cfn-init.log and cfn-init-cmd.log.
I was searching for the file names rather than the command names, so I couldn't see where the results were being logged.
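On the instance itself, something like the following pulls out the relevant lines, assuming the usual /var/log locations on Amazon Linux 2:
sudo grep -i ebextension /var/log/cfn-init.log
sudo less /var/log/cfn-init-cmd.log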

Pass zip file from UNC to Python code running inside Docker

I have a Python script running inside a Docker container. The app loads builds onto external hardware that is connected to the network. The builds are in zip format and are available on the local network share \\path\to\zip.zip. I run this Python script inside the Docker container to trigger loading the zip file.
The Python script is simple. All it takes is: python load.py IP path_to_zip
The script then unzips the contents, makes an FTP connection to the hardware, erases the existing contents, and then loads the contents of the zip file.
At the moment, the Docker container is not able to access the UNC path. The script runs fine in a normal environment (without Docker).
I've tried different ways to resolve the issue:
1. When I pass the UNC path from inside Docker (attaching a shell), it cannot find the UNC path.
2. I tried building the Docker image with the zip file included so that it is available, but I do not see the contents getting uploaded to the hardware. I am not sure why.
Can someone suggest the best way to accomplish point 1? That is my preferred option...
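One possible approach for point 1 (not from the thread, just a sketch assuming Docker Desktop on Windows): map the share to a drive letter, share that drive with Docker in its settings, and bind-mount it into the container. The drive letter, image name, and IP below are placeholders:
net use Z: \\path\to /persistent:yes
docker run --rm -v Z:/:/builds loader-image python load.py 192.168.1.50 /builds/zip.zip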

Is there a way to open and modify a python file in a virtual machine of google cloud platform?

I just started using the cloud to train my deep learning program. For now, every time I modify my local .py file I have to remove the old one on the remote virtual machine of Google Cloud Platform and upload the new one. I am just curious whether there is a way that I can actually open the .py file on the remote virtual machine through the command line? That would be much more efficient.
Thank you very much!
To edit a file on a machine you can SSH into, there are many potential solutions.
Use scp to copy files. E.g. scp mylocalfile username@my-host-address:~/myfolder
Use ssh mounting solutions: How do you edit files over SSH?
Edit using nano your-file-to-edit (my favorite) https://www.howtogeek.com/howto/42980/the-beginners-guide-to-nano-the-linux-command-line-text-editor/
Edit using vi or vim http://vim.wikia.com/wiki/Tutorial
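Since the machine is a Google Cloud VM, the gcloud wrappers around scp and ssh also work; the instance name, zone, and file name here are placeholders:
gcloud compute scp ./train.py my-instance:~/train.py --zone us-central1-a
gcloud compute ssh my-instance --zone us-central1-a
# then edit in place on the VM, e.g. nano train.py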

Hyperledger: get "/bin/bash: ./scripts/script.sh: No such file or directory" when running "./byfn -m up"

I'm new to Hyperledger and am just studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I am trying to build the first network using "first-network" in the fabric-samples. The ./byfn -m generate step is OK. But after typing ./byfn -m up, I get the
/bin/bash: ./scripts/script.sh: No such file or directory
error and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prohibit the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and could resolve it.
The shared network drive needs to be working for any directory on the local machine to be visible from the container.
Docker, for example, has the "Shared Drives" setting (usually C:\) under which all your byfn.sh paths must be present. The second condition is that you need to run the byfn.sh script as the same user who authorized sharing the drives with the containers. A password change in your Windows environment can break the already existing shared drives, causing problems when starting the containers.
Follow these steps :
In your docker terminal check the path $HOME. Type the command echo $HOME.
Make sure that your fabric-samples folder is located under the path given by $HOME.
Follow the steps for generating your first network.
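A quick sanity check of those two conditions from the Docker terminal might look like this (the path is only an example):
echo $HOME                                                # e.g. /c/Users/yourname
ls $HOME/fabric-samples/first-network/scripts/script.sh   # the file byfn.sh fails to find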
Or try the solution below.
Follow these steps :
Go to settings of docker.
Click on reset credentials.
Now check if the shared drives include the required drives or not.
If not, then include them, apply your changes, and restart Docker and the bash session where you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing .sh in both commands.

Running Shoutcast from Openshift Permission Denied Error

I've been following along with this blog to set up a Shoutcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting; when I visit the application's URL I instead get:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I checked the log file used in the action hook, I found:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using window's shoutcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using linux's shoutcast distribution)
I've read on several forums that OpenShift often resets the chmod file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (after using FileZilla to edit the file permissions). Since sc_serv or sc_serv.exe is the main application (a command-line application) that keeps the server going, I'm wondering how I can get around this odd permissions error.
start action hook (when I used window's shoutcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
start action hook (when I used linux's shoutcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux and I'm using Windows to edit the OpenShift repository, and I assume that the files extracted from the Linux distribution of Shoutcast are the same whether extracted on Windows or Linux, but I clearly can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether it's the Linux or Windows one), which essentially runs the whole service. I've tested the server myself on my own localhost and found it working perfectly, so I have no doubt that, with the right settings listed in this blog, it would work.
Edit: Solved
In order to have the permissions changed and kept that way they need to be edited from git using
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets everything (permission-wise) on every git push (to retain the safety of the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change will remain permanent.
git update-index --chmod=+x filename
git commit -m 'update file permissions etc...'
git push origin master
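Before pushing, a quick way to confirm that git now records the execute bit (the path is just my repo layout from above):
git ls-files --stage diy/sc_serv
# the mode column should read 100755 after the update-index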
In the end, what I have in OpenShift's diy folder is the Linux distribution of Shoutcast (which can be extracted with 7-Zip), modified so that it can be reached through port forwarding as in this blog. To reach the server (having set up OpenShift's tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into it as well using that same command. Port forwarding means that instead of finding the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports rhc port-forward indicates. You could also be using another favorite piece of broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, like a Barix box, there will probably be some way of doing this in some other tricky manner.