Podman MySQL container - access denied for user 'root'@'localhost' - Dockerfile

I am building a MySQL image using buildah bud -f .podman/MySQL.conf -t localhost/mysql:mushroom and the following Dockerfile (located at .podman/MySQL.conf):
FROM mysql:8.0
ENV MYSQL_ROOT_PASSWORD='password'
EXPOSE 3306
I start the container using:
podman run --rm -v mysql_data:/var/lib/mysql localhost/mysql:mushroom
After starting the container I run podman exec -it [ID] /bin/bash to get a shell inside the container.
Running mysql -p and entering the correct password returns Access denied for user 'root'@'localhost' (using password: YES).
I have confirmed that the env var MYSQL_ROOT_PASSWORD is correctly set.
I have tried setting the password in the podman run command as well (using -e MYSQL_ROOT_PASSWORD=password), and I have confirmed that the volume mysql_data doesn't exist when I start the container.
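For reference, the checks looked roughly like this (a sketch; [ID] is the running container's ID):
podman exec -it [ID] printenv MYSQL_ROOT_PASSWORD   # prints the expected password
podman volume ls                                    # mysql_data is not listed before the first run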
Any suggestions for other things to try?

All I can say is that it seems to work for me.
I used your example Dockerfile (the only thing I did was to trim all the whitespace it seems to have accidentally gained when you pasted it).
I saved it as Dockerfile and then just ran podman build . in that directory.
Starting the image in one terminal with podman run 8a0516eaa26e prints a load of log lines showing mysql startup and then ends with
[System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
In another terminal I ran podman exec -it happy_dijkstra /bin/bash (that was the auto-generated container name I got) and tried to log in to mysql with "password", and it worked. I have podman v3.4.2 here, but I would expect something as simple as this to have worked since v1. Are you sure a space or other odd character hasn't sneaked into the password you set?
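To recap, the full sequence on my machine was roughly the following (the image ID and container name will differ for you):
podman build .                              # build from the saved Dockerfile
podman run 8a0516eaa26e                     # image ID reported by the build
podman exec -it happy_dijkstra /bin/bash    # in a second terminal
mysql -p                                    # then enter "password"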

Everything worked after a reboot.
My best guess is that the mysql_data volume still existed somehow and held credentials from an earlier initialization.
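If you hit this again, a quick sketch of how to check for and remove a stale volume before starting the container (volume name taken from the run command above):
podman volume ls                    # is mysql_data already there?
podman volume inspect mysql_data    # shows where its data lives
podman volume rm mysql_data         # remove it so MySQL re-initializes with MYSQL_ROOT_PASSWORD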

Related

SSH to port exposed by container - permission denied

I have a docker container running and it's exposing port 22 to localhost port 1312. I am using the following command to run the container:
docker run -it -d -p 127.0.0.1:1312:22 -v /workspace/project:/root --name cpp_dep cpp_dep
Now, to build the project in CLion, it needs to be able to ssh into the container. I entered the container in interactive mode and ran "service ssh restart".
Now when I try to ssh into root@127.0.0.1 on port 1312, it asks for my password. But when I enter my sudo (root) password, it keeps saying permission denied.
Is it an issue with the ssh key? Which password should I use? Or is there any way to bypass the password?
I am running macOS.
Thanks in advance.
You may enter the container in interactive mode, use whoami to find the current user and passwd to change that user's password, then ssh in with the updated password.
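A rough walkthrough of that suggestion, using the container name and port mapping from the question:
docker exec -it cpp_dep /bin/bash   # get a shell in the container
whoami                              # shows the current user, e.g. root
passwd                              # set a new password for that user
exit
ssh -p 1312 root@127.0.0.1          # from the host, log in with the new password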
More details if you are interested:
The user running the container is determined by:
the USER instruction in your Dockerfile: https://docs.docker.com/engine/reference/builder/#user
the -u option of the docker run command: https://docs.docker.com/engine/reference/run/#user
By default it's root (uid = 0), but it depends on your settings.
User accounts are listed in the /etc/passwd file (password hashes live in /etc/shadow), and these files differ between the container and the host, so the same uid may have a different password inside the container. Manually resetting it with passwd in interactive mode is a workaround, but you may also set it in the Dockerfile like
RUN echo 'root:Docker!' | chpasswd  # NOTICE: unsafe!
This sets the password for root to "Docker!".
EDIT #1
As emphasized by David Maze in the comments, it's unsafe to store a plain-text password in the Dockerfile, since it's visible to anyone who gets the source file, and it's not uncommon for source files intended to be private to end up in a public GitHub repository by mistake. If the container needs to provide a public service, use build args (https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg) so the password can be supplied at build time instead.
Dockerfile:
ARG PASSWD
RUN echo "root:${PASSWD}" | chpasswd
build:
docker build --build-arg PASSWD=<secret stored safely> .
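For example, you can keep the value out of the Dockerfile and out of your shell history by reading it into a variable first (ROOT_PASSWD is just an illustrative name; note that build args are still visible via docker history):
read -rs ROOT_PASSWD                                         # prompt for the password without echoing it
docker build --build-arg PASSWD="$ROOT_PASSWD" -t cpp_dep .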

stellar docker image db init failing after entering password

First time trying to run the stellar docker image in persistent mode, and I'm receiving this error after entering & confirming the new password:
pq: password authentication failed for user "stellar"
docker cmd
docker run --rm -it -p "8000:8000" -v "/dev/stellar:/opt/stellar" --name stellar stellar/quickstart --testnet
I looked at trying to edit pg_hba.conf but I don't see the stellar user that has been configured.
Also, I verified the stellar-core.cfg has the correct db password as defined during setup.
I had exactly the same issue while I was using a password with special characters (too fancy for sed to process?). Then I recreated the '/opt/stellar' shared volume and used stellarpasswd as the new PostgreSQL password, and it worked! :)
Another piece of info, maybe more important: I had also added a stellar user to my host machine. When the default configurations get rsynced to '/opt/stellar', this might not have worked properly without that user on my host machine.
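Recreating the shared volume essentially means clearing the host directory mounted at /opt/stellar and letting the quickstart image redo its setup; a sketch using the paths from the question (this wipes the existing Stellar data):
docker stop stellar                                    # --rm removes the container once it stops
sudo rm -rf /dev/stellar && sudo mkdir -p /dev/stellar
docker run --rm -it -p "8000:8000" -v "/dev/stellar:/opt/stellar" --name stellar stellar/quickstart --testnet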

WSO2 Identity Server: How to enter the keystore and private key password in a Dockerized WSO2 identity server

After running the ciphertool.bat or ciphertool.sh script in the bin directory of WSO2 Identity server, the next time the server is started up, you are presented with a prompt that asks you for the keystore and private key password used to configure the WSO2 secure vault. Example:
C:\Program Files\WSO2\Identity Server\5.7.0\bin>wso2server.bat --start
JAVA_HOME environment variable is set to C:\Program Files\Java\jdk1.8.0_181
CARBON_HOME environment variable is set to C:\PROGRA~1\WSO2\IDENTI~1\570D0D~1.0\bin\..
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[Enter KeyStore and Private Key Password :]
I have a WSO2 identity server instance that is running in a Docker container. My passwords are encrypted so I need to provide a keystore/private key password on startup.
This presents an issue though:
I have to run my docker container with the -it flag in order to get an interactive shell in the container that allows me to type in the keystore and private key password. My docker run command looks like this: docker run -p 443:443 -it wso2-test. If I don't include the -it flag, WSO2 IS will never ask for the password, the passwords won't get resolved, and everything fails.
I don't want to use the -it flag because it forces user input and I'd like the containers to run independently.
In order to keep things as automated as possible, I want to provide the keystore and private key password right away when I run the wso2server.sh script (which is the entrypoint of my Dockerfile), rather than when the prompt is presented. Is this possible?
Ideally, a solution would have a Dockerfile entrypoint that looks something like this:
ENTRYPOINT ["wso2server.sh", "run", "KEYSTORE_PASSWORD"]
You should pass the keystore password as an environment variable to the docker run command.
docker run -e KEY_STORE_PASSWORD=wso2carbon secvault-test:latest
This environment variable should be read by the ENTRYPOINT command and written into a file named password-tmp under the $PRODUCT_HOME directory. Here's a sample Dockerfile with such an ENTRYPOINT:
$> cat Dockerfile
FROM ubuntu:16.04
RUN mkdir /opt/wso2is
WORKDIR /opt/wso2is
ENTRYPOINT ["/bin/sh", "-c", "echo $KEY_STORE_PASSWORD > password-tmp && wso2server.sh run"]
Security check:
Since the password is not baked into the docker image, we can safely push the image to a registry. However, you'll need to supply the environment variable whenever you spin up a new container, and note that a container's environment variables are visible via the docker inspect command.
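To see why that matters, anyone with access to the Docker daemon can read the password back out of a running container:
docker inspect -f '{{ .Config.Env }}' <container>    # prints KEY_STORE_PASSWORD=... in plain text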

Creating user in ubuntu from AWS

Using AWS (Amazon Web Services) I have created an Ubuntu 16.10 instance and I am able to login using a pem file like this:
ssh -i key.pem ubuntu@52.16.73.14.54
After I am logged in, I can see that I am able to execute:
sudo su
(with no password), however the file /etc/sudoers does NOT contain any reference to the current user: ubuntu.
How can I create another user with exactly the same behavior (without touching the sudoers file) from terminal in a NON interactive way?
I tried:
sudo useradd -m -c "adding a test user" -G sudo,adm -s /bin/bash testuser
But after I switch to "testuser", if I invoke:
sudo su
I have to provide a password, which is exactly what I want to avoid.
You can't do this without touching the sudo configuration, because the ubuntu user is specifically given passwordless access:
$ for group in `groups ubuntu`; do sudo grep -r ^[[:space:]]*[^#]*$group[[:space:]] /etc/sudoers* ; done
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers:%sudo ALL=(ALL:ALL) ALL
But what you can do is create a new sudoers file without touching any existing files. sudo is typically configured these days to read all the configuration files in a directory, usually /etc/sudoers.d/, precisely so that one failing config doesn't affect the rest of sudo.
In your case, you might want to give an admin group passwordless access rather than your user. Then you can grant access to other users in the future without changing the sudo config.
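A sketch of that approach (the admins group name is just an example; testuser is the user from the question):
sudo groupadd admins
sudo usermod -aG admins testuser
echo '%admins ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/91-admins
sudo chmod 0440 /etc/sudoers.d/91-admins
sudo visudo -cf /etc/sudoers.d/91-admins    # syntax-check the new file before relying on it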

Getting ember to run under docker on Windows Quickstart

Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the host Windows machine's browser and also from the Docker CLI via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again it just exits immediately
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure if one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
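In context, a minimal docker-compose.yml with those flags might look like this (the image name is a placeholder, since the tutorial's exact file isn't shown here):
version: "2"
services:
  ember:
    image: my-ember-cli        # placeholder; use the image the tutorial builds
    ports:
      - "4200:4200"
    stdin_open: True
    tty: True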
OK, finally resolved it. The issue with module resolution may have been long file name resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then from the terminal window I ran the commands to init and launch ember-server
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully, and then I was able to access the Ember page served at the IP:port specified earlier in the comments:
http://192.168.99.100:4200/