Vagrant, VirtualBox, Supervisord: When is the synced folder mounted?

I'm running a virtual machine with Supervisord to start and maintain several important background processes. I create the virtual machine with Vagrant and VirtualBox, and provision it with Puppet. When the machine starts up, Supervisord grabs all the .conf files in /etc/supervisor/conf.d and tries to run them. Unfortunately, when I run
vagrant up
Supervisord starts trying to run the files in conf.d immediately, before the synced folders are shared. Starting background processes like Xvfb works just fine, but starting my stat tracker, which resides within the synced folder, isn't possible. In fact, the Supervisord logs show multiple attempts to start the process, each complaining that it can't find the file, before finally giving up. Yet once the machine is fully running, I can SSH in and run the exact same command from the .conf file and start the process myself.
I have created an intermediary script that loops continuously, waiting for the synced folder to become available, and then starts the processes I want. But this way Supervisor has no way to make sure the process remains running, and it feels like a hack.
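The script is roughly this kind of wait-loop (the synced-folder path and process name here are placeholders):
#!/bin/bash
# Poll until the synced folder is mounted, then hand off to the real process.
SYNCED=/vagrant                      # placeholder synced-folder path
until [ -x "$SYNCED/stat-tracker" ]; do
  sleep 2                            # keep polling until Vagrant finishes mounting
done
exec "$SYNCED/stat-tracker"          # replace the shell with the real process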
Is there a cleaner way to do this? Maybe from within puppet or vagrant?

After some more googling, I found this article, which solved my problem: http://razius.com/articles/launching-services-after-vagrant-mount/
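For reference, the approach there (as I understand it) leans on Vagrant emitting a vagrant-mounted Upstart event once the synced folders are available. With autostart=false set in the program's Supervisor .conf, an Upstart job along these lines (file and program names are illustrative) defers the start until the mount exists:
# /etc/init/stat-tracker.conf (illustrative name)
description "Start the stat tracker once Vagrant mounts the synced folders"
start on vagrant-mounted
exec supervisorctl start stat-tracker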

Related

Run EC2 --user-data file only after instance has fully launched with Status OK

I have a --user-data file that downloads a few Python scripts, which run a Docker container as soon as the server is launched.
It seems that the EC2 launch processes interfere with the Python script / Docker container I'm running and destroy the connection to the container.
So far I've dealt with this issue by waiting 5 minutes before running the Python script, with a simple time.sleep(300). But this feels messy. Is there any way to check which launch processes are interfering? Or a cleaner solution might be to wait for the launch processes to complete, but I don't know what those processes are or how I would check for them.
Your question mentions building and running the Docker image at launch; I think that is a lot of expense for your EC2 start-up.
I suggest you externalize the build/push of the Docker image to something like CodeBuild/CodePipeline/ECR, and consider an EC2/ECS/Fargate solution that pulls the image and runs it with the ECS agent.
But if you want to stick with plain EC2, you can generate a custom AMI with your libraries and packages predefined, so that your user data only has to pull the image and start it.
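As a sketch of that last option: the user data shrinks to a pull-and-run. The registry URL, region, and image name below are placeholders, and the AMI is assumed to already have Docker and the AWS CLI baked in:
#!/bin/bash
# Illustrative user data for a custom AMI: log in to ECR, pull the prebuilt image, run it.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker run -d --restart unless-stopped 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest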

Switching Git branches while running in Docker container causes permission error

I'm running Docker 19 on Windows 10 for development. A container volume binds directly to a Git repo folder and serves the Django app in that folder. I use Mingw-w64 to run Git (aka Git Bash).
Occasionally, I'll do the following (or something similar to the following):
Request a page served by the Docker container. (To replicate an error, for example.)
Switch to a different branch.
Request a page served by the Docker container from the new branch.
Switch to a different branch.
On the last branch switch, Git freezes for a bit and then reports permission denied on a particular file. The file differs between the two branches, so Git is trying to change it.
Process Explorer tells me the files are held by the System process, so the only way to get it to let go is to restart.
My gut tells me the Django web process (manage.py runserver) is likely locking the file until the request connection is fully closed, and that the connection is lingering in an established state.
Is my gut right? If it is... why is the lock held by the System process and not Docker? Is there anything I can check before switching branches? Is there any way to prevent this from happening at all?
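For now, the workaround I'm considering is stopping the container before switching branches, so any handles on the bound files get released (the container name below is a placeholder):
docker stop django-dev          # stops runserver, releasing handles on the bound files
git checkout other-branch
docker start django-dev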

Vagrant Box with Trellis/Bedrock

I recently set up a Trellis/Bedrock/Sage environment. I like it so far, but I've been running into the issue that if I step away from my computer, I can’t reconnect to my local development environment and have to start my WordPress install up from scratch. Is there a way to “save” a Vagrant box so I can close my computer and not have to vagrant destroy, then vagrant up, each time?
Thanks,
You could probably use Save State directly in VirtualBox to accomplish what you want.
Also, I found this comment, which may also do the trick:
In order to keep the same VM around and not lose its state, use the vagrant suspend and vagrant resume commands. This will make your VM survive a host machine reboot with its state intact.
https://serverfault.com/users/210885/
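In practice the save/restore cycle is just:
vagrant suspend    # saves the VM state (including RAM) to disk and stops it
vagrant resume     # picks the VM back up exactly where it left off
vagrant halt       # alternative: clean shutdown; a later vagrant up boots it again without destroying anything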

Hyperledger: get "/bin/bash: ./scripts/script.sh: No such file or directory" when running "./byfn -m up"

I'm new to Hyperledger and am studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I'm trying to build the first network using "first-network" from the fabric-samples. ./byfn -m generate works fine, but after typing ./byfn -m up, I get the error
/bin/bash: ./scripts/script.sh: No such file or directory
and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prevent the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and was able to resolve it.
The shared network drive needs to be working for any directory on the local machine to be visible from the container.
Docker, for example, has the "Shared Drives" setting (usually C:\) under which all of your byfn.sh paths must be present. The second condition is that you must run the byfn.sh script as the same user who authenticated to share the drives with the containers. A password change in your Windows environment can break already-existing shared drives, hence causing problems when starting the containers.
Follow these steps:
In your Docker terminal, check the $HOME path by typing echo $HOME.
Make sure that your fabric-samples folder sits under the path in $HOME.
Follow the steps for generating your first network, as sketched below.
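Under those assumptions, the check looks like this in the Docker terminal (the clone location is illustrative):
echo $HOME                                # e.g. /c/Users/you in Git Bash
cd $HOME/fabric-samples/first-network     # assumes fabric-samples is cloned under $HOME
./byfn.sh -m generate
./byfn.sh -m up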
Or try the solution below.
Follow these steps:
Go to Docker's settings.
Click on Reset credentials.
Check whether the shared drives include the required drives.
If not, include them, apply your changes, and restart both Docker and the Bash session where you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing the .sh extension in both commands.

Trouble with Django-Kronos

I have a weird problem with Django-Kronos.
I've got it running successfully on my local machine and on our development server. However, on the production server, I can't get kronos to acknowledge my cron.py file. When I run installtasks, it runs but says "0 tasks installed". I've also tried running the tasks manually and kronos tells me the task doesn't exist.
We use git to push everything through to the server, so all the files and the structures are identical between the three locations. I've also checked and the cron.py file exists and has exactly the same content as the working servers.
The only differences between the servers are that the production server runs Postgres (the dev server runs SQLite) and Ubuntu 12.10, whereas the dev server is on 12.04.
Kronos is functioning properly, but it's not picking up our cron.py file for some reason...
Anyone got any ideas?!
Well, unfortunately, our solution was to scrap Django-Kronos altogether and create a custom management command, which we're running from the crontab.
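For anyone taking the same route: the replacement is just an ordinary management command invoked from cron. The command name, paths, and schedule below are illustrative:
# crontab entry calling a hypothetical custom management command named run_stats
*/15 * * * * /usr/bin/python /srv/myproject/manage.py run_stats >> /var/log/run_stats.log 2>&1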
This happens when one of the imports you are trying to make isn't available: your production system might be missing a Python package that is imported in your cron.py.
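One way to surface the swallowed ImportError is to import the module by hand on the production box (the app name is a placeholder):
echo "import myapp.cron" | python manage.py shell
# a missing dependency will now raise a visible ImportError instead of
# kronos silently installing 0 tasks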