I am using the command docker context use in order to set the active Docker context:
> docker context use aws-context
aws-context
However, the active Docker context does not change for some reason.
When I subsequently type docker context show, the activated context is still the default context:
> docker context show
default
When I list the existing contexts, the asterisk is still next to the default context:
> docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT
aws-context ecs (eu-west-1)
default * moby Current DOCKER_HOST based configuration tcp://192.168.99.100:2376
How can I change the Docker context?
If you have the DOCKER_HOST environment variable set, it will always take precedence over the newer docker context use workflow.
Type env | grep DOCKER in your shell to see if you have any Docker-specific variables set. Unset them by typing unset DOCKER_HOST. Other variables such as DOCKER_CONTEXT may also get in the way.
The docker context use command should work fine once that variable is out of the way.
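For example, a quick check and cleanup in the shell could look like this:
# See which Docker-related variables are set in the current shell
env | grep DOCKER
# Remove the ones that override the context mechanism
unset DOCKER_HOST
unset DOCKER_CONTEXT
# Switching contexts should now stick
docker context use aws-context
docker context show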
This is noted in the docs here:
The easiest way to see what a context looks like is to view the default context.
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current... unix:///var/run/docker.sock swarm
This shows a single context called “default”. It’s configured to talk to a Swarm cluster through the local /var/run/docker.sock Unix socket. It has no Kubernetes endpoint configured.
The asterisk in the NAME column indicates that this is the active context. This means all docker commands will be executed against the “default” context unless overridden with environment variables such as DOCKER_HOST and DOCKER_CONTEXT, or on the command-line with the --context and --host flags.
(bold added by me)
I also experienced problems with the DOCKER_CONTEXT environment variable.
Make sure that you are setting it correctly...
instead of:
pb#L1:~$ DOCKER_CONTEXT=example
pb#L1:~$ echo $DOCKER_CONTEXT
example
use:
pb#L1:~$ export DOCKER_CONTEXT=example
pb#L1:~$ echo $DOCKER_CONTEXT
example
Alternatively, if you are using the Docker extension in VS Code, modify settings.json:
Ctrl+Shift+P -> Open Workspace Settings (JSON)
{
"docker.context":"remote_workstation"
}
Related
I want to know how to use ENTRYPOINT in a Dockerfile to run a shell script that logs me in.
There’s no need for a custom entrypoint. Simply set the WANDB_API_KEY environment variable in a Kubernetes spec or via the -e flag passed to docker run.
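For example, a docker run invocation could look like this (the image name is just a placeholder):
docker run -e WANDB_API_KEY="$WANDB_API_KEY" my-training-image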
I was reading the Docker documentation about deploying Docker containers on AWS ECS: https://docs.docker.com/cloud/ecs-integration/. After I run the command docker context create ecs myecscontext and select the option AWS environment variables, every docker command that I try to run throws this message in my terminal: context requires credentials to be passed as environment variables. I've tried to set the AWS environment variables with the Windows set command but it doesn't work.
I've used them like this:
set AWS_SECRET_ACCESS_KEY=any-value
set AWS_ACCESS_KEY_ID=any-value
I've been searching for how to solve this problem and the only thing I've found is to set environment variables like I've already done. What do I have to do?
UPDATE:
I've found another way to set environment variables on Windows on this site: https://www.tutorialspoint.com/how-to-set-environment-variables-using-powershell
Instead of using set, I had to use the $env:VARIABLE_NAME = 'any-value' syntax to really update the vars.
Like this:
$env:AWS_ACCESS_KEY_ID = 'my-aws-access-key-id'
$env:AWS_SECRET_ACCESS_KEY = 'my-aws-secret-access-key'
I'm using elastic beanstalk to deploy a Django app. I'd like to SSH on the EC2 instance to execute some shell commands but the environment variables don't seem to be there. I specified them via the AWS GUI (configuration -> environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there some environment (or script I can run) to access an environment with all the properties set? Otherwise, I'm hardly able to run any command like python3 manage.py ... since there is no settings module configured (I know how to specify it manually but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part fixes quoted numbers.
Note this does not require sudo.
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
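A rough sketch of that approach, assuming Amazon Linux 2 (the hook and copy file names here are illustrative, and the hook script itself must be executable):
#!/bin/bash
# .platform/hooks/postdeploy/01_copy_env.sh
# Copy the centralized env file to a location you can read later over SSH
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_var
chmod 644 /opt/elasticbeanstalk/deployment/custom_env_var
After SSHing in, the properties can then be loaded with something like:
set -a
source /opt/elasticbeanstalk/deployment/custom_env_var
set +a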
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment properties in an eb script on Elastic Beanstalk, use this:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
{ "Ref" : "AWSEBEnvironmentName" }
So, I'm trying not to put sensitive information in the Dockerfile. A logical approach is to put the creds in the Elastic Beanstalk configuration (the GUI) as an ENV variable. However, docker build doesn't seem to be able to access the ENV variable. Any thoughts?
FROM jupyter/scipy-notebook
USER root
ARG AWS_ACCESS_KEY_ID
RUN echo ${AWS_ACCESS_KEY_ID}
I assume that for every deployment you create a new Dockerrun.aws.json file with the correct docker image tag for that deployment. At deployment stage, you can inject environment values which will then be used in the docker run command by the EB agent. So your docker containers can then access these environment variables.
Putting sensitive information (for a Dockerfile to use) can be either for allowing a specific step of the image to run (build time), or for the resulting image to have that secret still there at runtime.
For runtime, if you can use the latest docker 1.13 in a swarm mode configuration, you can manage secrets that way.
But the first case (build time) is typically for passing credentials to an http_proxy, and that can be done with --build-arg:
docker build --build-arg HTTP_PROXY=http://...
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don’t persist in the intermediate or final images like ENV values do.
In that case, you would not use ENV, but ARG:
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
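For example, with the ARG AWS_ACCESS_KEY_ID declared in the question's Dockerfile, the value could be supplied at build time roughly like this (the image tag is a placeholder, and the shell variable is assumed to be set):
# Pass the credential for the build only; it is not stored as an ENV value in the resulting image
docker build --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" -t my-notebook-image .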
I have two elastic-beanstalk environments on AWS: development and production. I'm running a glassfish server on each instance and it is requested that the same application package be deployable in the production and in the development environment, without requiring two different .EAR files. The two instances differ in size: the dev has a micro instance while the production has a medium instance, therefore I need to deploy two different configuration files for glassfish, one for each environment.
The main problem is that the file has to be in the glassfish config directory before the server starts, therefore I thought it could be better moving it while the container was created.
Of course each environment uses a docker container to host the glassfish instance, so my first thought was to configure an environment variable for the elastic-beanstalk. In this case
ypenvironment = dev
for the development environment and
ypenvironment = pro
for the production environment. Then in my DOCKERFILE I put this statement in the RUN command:
RUN if [ "$ypenvironment"="pro" ] ; then \
mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
elif [ "$ypenvironment"="dev" ] ; then \
mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
fi
Unfortunately, when the startup finishes, both GF_domain files are still in /var/app.
Then I read that the RUN command runs things BEFORE the container is fully loaded, maybe missing the elastic-beanstalk-injected variables. So I tried to move the code to the ENTRYPOINT directive. No luck again, the container startup fails. I also tried the
ENTRYPOINT ["command", "param"]
syntax, but it didn't work giving a
System error: exec: "if": executable file not found in $PATH
Thus I'm stuck.
You need:
1/ Not to use entrypoint (or at least use a sh -c 'if...' syntax): that is for runtime execution, not compile-time image build.
2/ to use build-time variables (--build-arg):
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image.
However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
In your case, your Dockerfile should include:
ARG ypenvironment
Then docker build --build-arg ypenvironment=dev ... myDevImage
You will build 2 different images (based on the same Dockerfile).
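In other words, something along these lines (the tags are illustrative):
docker build --build-arg ypenvironment=dev -t myapp:dev .
docker build --build-arg ypenvironment=pro -t myapp:pro .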
I need to be able to use the same EAR package for dev and pro environments,
Then you want your ENTRYPOINT, when run, to move a file depending on the value of an environment variable.
Your Dockerfile still needs to include:
ENV ypenvironment=""
But you need to run your one image with
docker run -e ypenvironment=dev ...
Make sure your script (referenced by your entrypoint) includes the if [ "$ypenvironment"="pro" ] ; then... you mention in your question, plus the actual launch (in foreground) of your app.
Your script needs to not exit right away, or your container would switch to exit status right after having started.
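As a rough sketch, such an entrypoint script could look like this (the GlassFish start command is illustrative; note the spaces around = so the test compares values rather than always evaluating to true):
#!/bin/sh
# entrypoint.sh
if [ "$ypenvironment" = "pro" ] ; then
    mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
elif [ "$ypenvironment" = "dev" ] ; then
    mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
fi
# Launch the server in the foreground so the container keeps running
exec /usr/local/glassfish/bin/asadmin start-domain --verbose
with the Dockerfile referencing it via:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]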
When working with Docker you must differentiate between build-time actions and run-time actions.
Dockerfiles are used for building Docker images, not for deploying containers. This means that all the commands in the Dockerfile are executed when you build the Docker image, not when you deploy a container from it.
The CMD and ENTRYPOINT commands are special build-time commands which tell Docker what command to execute when a container is deployed from that image.
Now, in your case a better approach would be to check if Glassfish supports environment variables inside domain.xml (or somewhere else). If it does, you can use the same domain.xml file for both environments, and have the same Docker image for both of them. You then differentiate between the environments by injecting run-time environment variables to the containers by using docker run -e "VAR=value" when running locally, and by using the Environment Properties configuration section when deploying on Elastic Beanstalk.
Edit: In case you can't use environment variables inside domain.xml, you can solve the problem by starting the container with a script which reads the runtime environment variables and puts their values in the correct places in domain.xml using sed, then starts your application as usual. You can find an example in this post.
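A minimal sketch of that idea, assuming a @MAX_POOL_SIZE@ placeholder in domain.xml (the placeholder, paths and start command are illustrative):
#!/bin/sh
# start.sh: substitute runtime environment values into domain.xml, then start the server
sed -i "s/@MAX_POOL_SIZE@/${MAX_POOL_SIZE}/g" /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
exec /usr/local/glassfish/bin/asadmin start-domain --verbose
The container would then be started with docker run -e MAX_POOL_SIZE=50 ... locally, or with the equivalent Environment Properties entry on Elastic Beanstalk.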