I have a Python script running inside a Docker container. The app loads builds onto external hardware that is connected over the network. The builds are zip files available on a local network share, e.g. \\path\to\zip.zip, and the script inside the container triggers the loading of the zip file.
The Python script is simple. All it takes is: python load.py IP path_to_zip
The script unzips the archive, opens an FTP connection to the hardware, erases the existing contents, and then uploads the contents of the zip file.
At the moment the Docker container is not able to access the UNC path. The script runs fine in a normal environment (without Docker).
I've tried different ways to resolve the issue:
1. When I pass the UNC path from inside Docker (by attaching a shell), the container cannot find the UNC path.
2. I tried building the Docker image with the zip file included so that it is available, but I do not see the contents being uploaded to the hardware, and I am not sure why.
Can someone suggest the best way to make point 1 work? That is my preferred option...
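One thing I am considering (just a rough, untested sketch; the share path, credentials, volume name, image name, and IP below are placeholders, not my real setup) is mounting the share as a CIFS-backed Docker volume and passing the mounted path to the script instead of the raw UNC path:
# create a volume backed by the network share (local driver with cifs mount options)
docker volume create --driver local \
  --opt type=cifs \
  --opt device=//fileserver/builds \
  --opt o=username=myuser,password=mypass,vers=3.0 \
  builds_share
# run the loader with the share mounted inside the container
docker run --rm -v builds_share:/mnt/builds my-loader-image \
  python load.py 192.168.1.50 /mnt/builds/zip.zip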
This last week I have been trying to deploy a Flask app using AWS Elastic Beanstalk.
The main problem for me was including a very heavy library as part of the bundle (there is a 500 MB limit for uploading the bundle code).
Instead, I tried to use a requirements.txt file so the library would be downloaded directly on the server.
Unfortunately, every time I included the library (torch) in the requirements file, the deployment failed to install it.
On the PythonAnywhere server there is a console which allows you to access the virtual environment and simply type
pip install torch
which was very useful and convenient.
I am looking for something similar in AWS Elastic Beanstalk, so that I could install the library directly instead of relying on the requirements.txt file.
I have been at it for a few days now and can't make any progress.
Your help would be much appreciated.
Another question: is it possible to upload the venv to Amazon S3 and then access that folder from the Beanstalk environment?
It's not good practice to "manually" install your dependencies or configure your EB environment from inside the instance. This is only useful for testing and debugging purposes, so keep that in mind.
To get to your venv, you have to SSH into your EB instance, either with regular ssh or with the web-based clients available in the AWS EC2 console once you locate the EC2 instance behind your EB environment. Session Manager should work out of the box to let you log in to the instance.
Once you are logged in to the instance, activate your venv with:
# start bash
bash
# source venv
source /var/app/venv/staging-*/bin/activate
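With the venv active, you can then install packages into it directly from the instance, much like on PythonAnywhere (again, only for testing and debugging, since the change is not reproducible and will be lost when the instance is replaced):
pip install torch
# quick check that the package is importable
python -c "import torch; print(torch.__version__)"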
I'm considering purchasing one of the new MacBook M1s. Docker Desktop is apparently unworkable with their Rosetta 2 engine, and all of my development work relies on Docker Desktop and a local development environment that auto-reloads when files change.
I haven't done much with Docker remote hosts, but I see that this could be a stop-gap solution until Docker rewrites its engine. Google is failing me... can you keep files on your local machine synced up with a Docker remote host?
No, Docker doesn't do this. Instead, Docker packages your application code into an image; that image can be transferred to a repository (with Docker Hub being the most prominent option), and then run on the remote system, without necessarily needing to have the application code or the interpreter directly installed there. Beyond the image system, Docker has no direct ability to transfer or mount files from one system to another (you could do something like create an NFS-backed named volume, but you would need to run the NFS server yourself).
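As a rough sketch of that workflow (the image name, registry account, and port are made up for illustration):
# on your Mac: build the image and push it to a registry
docker build -t myuser/myapp:latest .
docker push myuser/myapp:latest
# on the remote host: pull the image and run it
docker pull myuser/myapp:latest
docker run -d --name myapp -p 8000:8000 myuser/myapp:latest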
For day-to-day development, using your language's native isolation system often works better than trying to simulate a local development environment with Docker. For Python, consider creating a virtual environment with a tool like venv or Pipenv (which manages a Pipfile). Python is reasonably platform-independent, so you shouldn't notice any trouble on Apple silicon vs. Intel.
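For example, a plain venv-based setup might look like this (a minimal sketch; the directory name and requirements file are just conventions, adjust to your project):
# create and activate a virtual environment next to your project
python3 -m venv .venv
source .venv/bin/activate
# install dependencies and run your app or dev server from here as usual
pip install -r requirements.txt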
Don't even consider using the Docker remote API. If you don't configure it perfectly, it's trivial to use it to root the host (and there are many instances of this in the wild). Even if it is configured correctly, you can't use it to mount files from your local system (a docker run -v bind-mount option is always interpreted relative to the Docker host it runs on). If you need to work directly on the remote host for whatever reason, use an ordinary ssh connection.
I made a fairly standard deployment of the Single-Node File Server on Google Cloud. It works fine as I can mount the file server's disk from other instances.
However, now I want to add another disk to the same file server. The documentation says I should use the following command to add another file system:
zfs create storagepool_name/file_system_name
I tried to run this command on the VM that is acting as the file server, but I get the error that the command zfs is not found.
Now I can probably install zfs myself, but I wonder whether that will somehow collide with whatever the deployment has already set up on the machine.
Is installing and setting up zfs myself a problem? If so, how do I add another disk to the file server?
I figured out what went wrong with my setup of the Single-Node File Server.
First, the default deployment settings seem to choose xfs as the default file system instead of zfs. The file server I had was using xfs, as can be seen in the metadata of the instance itself.
Secondly, as user John Hanley commented on my question, even with zfs selected as the file system, only the root user has its PATH variable set up properly to use the zfs command directly.
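So for the zfs case, running the documented command from a root shell was enough, without installing anything extra. A minimal sketch, using the storagepool_name/file_system_name placeholders from the documentation:
# switch to a root shell so root's PATH (which includes zfs) applies
sudo -i
zfs create storagepool_name/file_system_name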
I would like to use Google Container OS as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so you will be able to use the docker command from the command line without any prior configuration if you create a virtual machine from one of these images, and SSH into the machine.
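A quick way to confirm this once you've SSHed in (assuming the VM has outbound internet access to pull the test image):
docker version
docker run --rm hello-world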
As for docker-compose, this doesn't come pre-installed. However, you can install it, along with other relevant tools and programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment where you automatically gain root privileges.
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) You can run the following to retrieve and install docker-compose on the machine (substitute the latest version number you found in step 2 for the 1.18.0 shown below):
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
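You'll also likely need to make the downloaded file executable before it can be run:
chmod +x /usr/local/bin/docker-compose
docker-compose --version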
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the filesystem doesn't correspond between the two environments.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which configures the toolbox settings. This would allow you to expose the docker binary from the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run the command which docker. You will see something similar to /usr/bin/docker.
2) Open file /etc/default/toolbox
3) The TOOLBOX_BIND line specifies the paths from rootfs to be made available inside the toolbox environment. To ensure docker is available inside the toolbox environment, you could try adding an entry to the TOOLBOX_BIND section, for example --bind=/usr/bin/docker:/usr/bin/docker.
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary available in the toolbox environment, certain docker commands then fail with additional errors, because the pre-installed docker is configured to use particular configuration files and directories. Although it may be possible to make all of the required locations accessible from within the toolbox via /etc/default/toolbox, it may be simpler to install docker within the toolbox by following the instructions for installing docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
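As a rough sketch of that toolbox-side install (assuming the Debian docker.io package is acceptable for your purposes; the linked instructions use Docker's own apt repository instead):
# inside the toolbox (you already have root here)
apt-get update
apt-get install -y docker.io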
Alternatively, it's possible to simply install docker and docker-compose on a standard VM (i.e. without necessarily using a Google Container OS machine type) although the suitability of this depends on your use case.
I'm new to Hyperledger and am studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I am trying to build the first network using "first-network" in the fabric-samples. The ./byfn -m generate step is OK, but after typing ./byfn -m up I get the
/bin/bash: ./scripts/script.sh: No such file or directory
error and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prevent the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and was able to resolve it.
The shared network drive needs to be working for any directory on the local machine to be visible from the container.
Docker, for example, has a "Shared Drives" setting (usually C:\) under which all of your byfn.sh paths must be present. The second condition is that you need to run the byfn.sh script as the same user who authenticated when sharing the drives with the containers. Changing your Windows password can break drive shares that already exist, causing problems when starting the containers.
Follow these steps:
In your docker terminal check the path $HOME. Type the command echo $HOME.
Make sure that your fabric-samples folder is under the path given by $HOME (see the sketch after these steps).
Follow the steps for generating your first network.
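For example (the $HOME value shown is illustrative):
echo $HOME
# e.g. /c/Users/<your-user>
cd $HOME/fabric-samples/first-network
./byfn.sh -m generate
./byfn.sh -m up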
Or try the solution below.
Follow these steps:
Go to Docker settings.
Click on Reset credentials.
Now check whether the shared drives include the required drives.
If not, include them, apply the changes, and restart Docker and the bash session where you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing .sh in both commands.