I'm trying to create a Docker context that will automatically integrate with AWS's ECS.
I'm following this tutorial
The author just does:
docker context create ecs myecs and gets a "pick an integration" prompt, whereas I get an error saying it needs exactly 1 argument.
"docker context create" requires exactly 1 argument.
See 'docker context create --help'.
Usage: docker context create [OPTIONS] CONTEXT
Create a context
You need to install the Docker Compose CLI preview
The curl command below is from here: Docker docs
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
sudo docker context create ecs myecs
It didn't work without sudo for me for some reason.
After the script finished I had some odd errors:
cp: cannot stat '/tmp/tmp.d4QjhW8T6k/docker-compose': No such file or directory
and docker context create ecs myecs didn't work at first, but once I tried it with sudo it worked fine.
EDIT: . ~/.zshrc (or just close your terminal and open a new one) made it possible for me to run docker context create ecs myecs without sudo.
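Once the Compose CLI is installed, you can confirm the new ECS context exists and switch to it (the context name myecs matches the command above):
docker context ls
docker context use myecs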
Author of the blog/tutorial here. It looks like you don't have the prerequisites installed. In the blog I call out the prerequisites in pieces like this:
....In July, Docker released a beta for Docker Desktop that embedded these functionalities and, on September 15th, Docker released an updated experience in their Docker Desktop stable channel....
and then
...For now the only thing you need is Docker Desktop and an AWS account. For this test , I am using Docker Desktop (stable) version 2.5.0.1....
and finally
The core of this integration is built around a new tool dubbed Compose CLI (this is not to be confused with the original docker-compose CLI). This new CLI surfaces to the user as new functionalities in the docker command. While in Docker Desktop all this plumbing is completely hidden and available out of the box, if you are using a Linux machine you can set it up using either a script or a manual install. This new CLI is, essentially, a new version of the docker binary.
I'm eager to understand how we could make it clearer / more front and center that there was software to install and/or minimum software versions you had to use.
Thanks for trying it out!
If you're on Linux and you're running the docker context create ecs myecscontext command from the docs, then try enabling experimental features in Docker:
Edit /etc/docker/daemon.json
Set contents to
{
"experimental": true
}
Restart the Docker service: sudo systemctl restart docker
Exit your terminal and open a new one so that the changes take effect.
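To double-check that the daemon picked up the setting, you can ask Docker for the server's experimental flag; it should print true:
docker version --format '{{.Server.Experimental}}'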
Source1
Source2
I had the same issue, but after installing Docker Desktop the problem was resolved.
The server-side (engine-only) version doesn't have this kind of functionality.
Can anyone provide the steps to change the configuration of Apache Superset from SQLite to MySQL?
I have created superset_config.py to override the configuration.
After adding the property I am able to enable the Swagger URL.
I have added the SQLALCHEMY_DATABASE_URI = 'mysql://root:xxxxxx#127.0.0.1:3306/superset' property in the superset_config.py file,
but it is still connecting to SQLite.
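For reference, a minimal superset_config.py along these lines should point Superset at MySQL, assuming the mysqlclient driver is installed and MySQL is reachable at 127.0.0.1:3306; note that a SQLAlchemy URI uses @ (not #) between the credentials and the host, and the credentials below are placeholders:
# superset_config.py (placeholder credentials and database)
SQLALCHEMY_DATABASE_URI = 'mysql://superset_user:superset_pass@127.0.0.1:3306/superset'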
I wrote an article that can help you.
I had problems with both docker compose and a native installation. A closed port can be due to a networking problem or a problem with packages. host.docker.internal didn't work for me on Ubuntu 22. I would recommend not following the official doc and instead using a simpler approach: start with a single Docker image. Instead of running 5 containers via compose, run everything in one. Use the official Docker image, linked here: image. Then modify the Dockerfile as follows to install a custom DB driver:
FROM apache/superset
# switch to root so additional Python DB drivers can be installed
USER root
RUN pip install mysqlclient
RUN pip install sqlalchemy-redshift
# drop back to the unprivileged superset user
USER superset
The second step is to build a new image from the Dockerfile above. To avoid networking problems, start both containers (Superset and your database) on the same network; the easiest option is the host network. I used this on Google Cloud, for example as follows:
docker run -d --network host --name superset supers
Use the same command to start the container with your database, also with --network host. This solved my problems. More about it in the full step-by-step tutorial: medium or blog
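For illustration, the whole sequence might look roughly like this; the image name superset-custom and the MySQL password are assumptions, not values from the article:
# build the custom image from the Dockerfile above
docker build -t superset-custom .
# start the database and Superset on the host network so they can reach each other on localhost
docker run -d --network host --name mysql -e MYSQL_ROOT_PASSWORD=xxxxxx mysql:8
docker run -d --network host --name superset superset-custom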
On starting the SageMaker Studio server, I can only see a set of predefined kernels when I select a kernel for any notebook.
I create conda environments and persist them between sessions by pointing .condarc to a custom miniconda directory stored on EFS.
I want all notebooks to have access to environments stored in the custom miniconda directory. I can do that on the system terminal but can't seem to find a way to make the kernels available to notebooks.
I am aware of Lifecycle Configuration, but that seems to work only with notebook instances rather than SageMaker Studio.
Desired outcomes
Ideally making custom kernels persistently available to notebooks but if that isn't feasible or requires custom docker image, I am happy with running a script manually every time I run the server.
What I have tried so far:
I ran the following, which is a tweaked version of start.sh meant for a Lifecycle Configuration.
#!/bin/bash
set -e
# run the remaining commands as the sagemaker-user account
sudo -u sagemaker-user -i <<'EOF'
unset SUDO_UID
# custom miniconda install persisted on EFS under the user's home directory
WORKING_DIR=/home/sagemaker-user/.SageMaker/custom-miniconda/
source "$WORKING_DIR/miniconda/bin/activate"
# register every conda environment as a Jupyter kernel
for env in $WORKING_DIR/miniconda/envs/*; do
    BASENAME=$(basename "$env")
    source activate "$BASENAME"
    python -m ipykernel install --user --name "$BASENAME" --display-name "$BASENAME"
done
EOF
That didn't work and I couldn't access the kernels from the notebooks.
If you need a persistent custom kernel in SageMaker Studio, you can create an ECR repository and build a Docker image with custom environment configurations. This image can then be attached to SageMaker Studio notebooks. Reference link!
SageMaker Studio now also supports the use of lifecycle configurations. Reference link!
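As a rough sketch (the base image, package list, and kernel name are assumptions, not an official recipe), the custom image mainly needs ipykernel installed and a kernelspec registered so Studio can surface it in the kernel picker:
FROM python:3.10-slim
# install ipykernel plus whatever libraries the environment needs
RUN pip install --no-cache-dir ipykernel numpy pandas
# register the kernelspec that the kernel picker will show
RUN python -m ipykernel install --sys-prefix --name custom-env --display-name "custom-env"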
I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
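For example, a quick sketch of running a container as your own UID/GID instead of root (the image and paths are just placeholders), which also avoids the root-owned files mentioned in the question:
# run as the invoking user so files created on the bind mount are owned by you
docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/work -w /work busybox touch created-as-me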
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660, owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop; but on anything like a production system it deserves some real consideration of its security implications.
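If you accept that trade-off, adding yourself to the docker group looks like this (log out and back in afterwards for the group change to take effect):
sudo usermod -aG docker "$USER"
# after re-logging in, this should work without sudo
docker ps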
Within Google Container OS, I would like to use it as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add the docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so you will be able to use the docker command from the command line without any prior configuration if you create a virtual machine from one of these images, and SSH into the machine.
As for docker-compose, this doesn't come pre-installed. However, you can install it, and other relevant tools/programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment (here you automatically gain root privileges).
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) You can run the following to retrieve and install docker-compose on the machine (substitute the docker-compose version number for the latest version you retrieved in step 2):
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
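You will most likely also need to mark the downloaded binary as executable before it can be run:
chmod +x /usr/local/bin/docker-compose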
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the filesystem doesn't correspond between the two environments.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which allows you to configure the toolbox settings. This would let you provide access to the docker binary from the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run which docker. You will see something similar to /usr/bin/docker.
2) Open file /etc/default/toolbox
3) The TOOLBOX_BIND line specifies the paths from the rootfs to be made available inside the toolbox environment. To ensure docker is available inside the toolbox environment, you could try adding an entry to the TOOLBOX_BIND section, for example --bind=/usr/bin/docker:/usr/bin/docker (see the sketch below).
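For illustration only (the existing binds in your file will differ, so treat this as a sketch rather than a literal replacement), the edited line could end up looking something like the following; binding the Docker socket as well lets the client inside the toolbox reach the daemon:
TOOLBOX_BIND='--bind=/var/run/docker.sock:/var/run/docker.sock --bind=/usr/bin/docker:/usr/bin/docker'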
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary available in the toolbox environment, some docker commands then generate additional errors, because the pre-installed docker is configured to use certain configuration files and directories. Although it may be possible to make all of the required locations accessible from within the toolbox environment, it may be simpler to install docker within the toolbox by following the instructions for installing docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
Alternatively, it's possible to simply install docker and docker-compose on a standard VM (i.e. without necessarily using a Google Container OS machine type) although the suitability of this depends on your use case.
I decided to try and use Google Cloud Datalab for a small project that I'm working on rather than a Jupyter Notebook in an Anaconda environment on an AWS instance.
How can I install a package (for example OpenCV) onto the Datalab VM so that I don't have to reinstall it every time I restart my VM? Why do the packages disappear after every restart but the updated notebooks remain persistent? Any help answering these questions and clarifying how the Datalab VM works would be very helpful.
The notebooks are stored in a docker volume mount that represents a location on the persistent disk that is maintained across restarts of the VM.
The packages you install however are stored in the running container and hence lost on each restart.
You could create a custom docker image and use that instead. On the datalab create command, see the --image-name argument.
Here is an example of a Dockerfile you'll want to use:
FROM gcr.io/cloud-datalab/datalab:latest
# OpenCV is published on PyPI as opencv-python
RUN pip install opencv-python
Note that you'll need to build the Docker image using this Dockerfile and push the image to Google Container Registry. My memory is a bit fuzzy on this, but it is possible this image needs to be marked as public.
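As a sketch (the project ID, image name, and instance name below are placeholders), the workflow would be roughly:
# build the custom image and push it to Google Container Registry
docker build -t gcr.io/my-project/datalab-custom .
docker push gcr.io/my-project/datalab-custom
# create the Datalab VM from the custom image
datalab create --image-name gcr.io/my-project/datalab-custom my-datalab-instance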
Hope that helps!