Description: I'm getting the following error when running a docker build. I thought the mqm group would be created automatically by default. The site linked below doesn't say otherwise. Can someone else try this?
System notes: VS Code (Docker build), Windows machine.
Error:
useradd: group 'mqm' does not exist
Reference site for instructions:
IBM MQ Customer Docker Image Instructions
Docker File:
FROM ibmcom/mq
USER root
RUN useradd alice -G mqm && \
echo alice:passw0rd | chpasswd
USER mqm
COPY 20-config.mqsc /etc/mqm/
Duplicate of ibmcom/mq docker image backward compatibility issue
From 9.1.5, the container does not use OS-based users or groups; this is to conform to cloud best practices. Instead, a file-based system is used, so that when you roll the container out into production in a cloud you can switch to an LDAP-based system.
The 9.1.5 container uses htpasswd, with the relevant file in /etc/mqm/.
For development, if you are not going to create new users, you can use the 9.1.5 container. If you want to create new users, you can either use 9.1.4 or earlier, or use htpasswd with bcrypt to create the users.
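For example, a minimal sketch of the htpasswd route (the file name mq.htpasswd is an assumption; check the image documentation for the exact path expected under /etc/mqm/):
# 1. On your build machine, create a bcrypt-hashed (-B) entry for user alice:
htpasswd -c -B -b mq.htpasswd alice passw0rd
# 2. In the Dockerfile, copy the file into the image instead of calling useradd:
#    COPY mq.htpasswd /etc/mqm/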
Apparently I was using a deprecated site that's linked from the Docker repo. I guess it's a problem with Docker and they can't remove it. Please follow the instructions here instead; I had no issue.
https://github.com/ibm-messaging/mq-container
Related
I'm using Appwrite on AWS (started with the pre-canned Appwrite marketplace and upgraded to 0.14.2.305).
In order to allow certificate generation, I need to update _APP_DOMAIN and _APP_DOMAIN_TARGET. However, no matter which value I put there, it is not "ingested" by the app (a container restart and a reboot of the server did not make any difference).
I also tried to read the values from the docker instance itself, but again, no value was read.
Ideas?
You need to restart the Docker containers :)
Just run docker compose up -d in your appwrite directory.
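A hedged sketch of the whole cycle (the domain values are placeholders, and the appwrite directory path is an assumption):
cd ~/appwrite   # assumption: the directory holding Appwrite's docker-compose.yml and .env
# Update the two variables in .env (the values below are placeholders):
sed -i 's/^_APP_DOMAIN=.*/_APP_DOMAIN=appwrite.example.com/' .env
sed -i 's/^_APP_DOMAIN_TARGET=.*/_APP_DOMAIN_TARGET=appwrite.example.com/' .env
# Recreate the containers so the new values are actually injected:
docker compose up -d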
I have a cloud formation template where I have all the resources and details for the project.
I have cfn-lint set up locally and it is running perfectly fine. However, when I push code changes, the build fails at the deployment stage because cfn-nag reports some simple issues that could be fixed.
I'm using a Windows machine and I need a way to run cfn-nag locally, so that I can check it just like cfn-lint and fix issues locally instead of waiting 40 minutes for the build to reach the deployment stage.
I referred to several posts online and found the two below helpful:
https://stelligent.com/2018/03/23/validating-aws-cloudformation-templates-with-cfn_nag-and-mu/
https://github.com/stelligent/cfn_nag
What is the difference between cfn-nag and cfn-lint, and why is lint not failing on what cfn-nag is complaining about?
The above links have some instructions for Ruby and Brew, but I'm using Node.js and felt lost. Please help.
cfn-nag looks for patterns in AWS CloudFormation templates that may indicate insecure infrastructure.
Ex:
IAM rules that are too permissive (wildcards)
Security group rules that are too permissive (wildcards)
Access logs that aren’t enabled
Encryption that isn’t enabled
cfn-lint scans the AWS CloudFormation template by processing a collection of rules, where every rule handles a specific check or validation of the template. It validates against the AWS CloudFormation resource specification.
This collection of rules can be extended with custom rules using the --append-rules argument.
Ex: whitespace, alignment (YAML), type checks, valid values for resource properties, and other best practices.
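To make the difference concrete, here is a hypothetical template fragment that cfn-lint accepts (it is structurally valid against the resource specification) but cfn-nag flags (the wildcards are insecure, not invalid):
# Hypothetical fragment; SomeRole is assumed to be defined elsewhere.
OverlyPermissivePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: allow-everything
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: "*"       # wildcard action: insecure, but valid syntax
          Resource: "*"     # wildcard resource: insecure, but valid syntax
    Roles:
      - !Ref SomeRole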
Those two links you provided above have all the information needed, just not directly for a Node.js developer using a Windows machine.
Step 1: Pull the docker image stelligent/cfn-nag.
Step 2: Add a script for cfn-nag to your package.json.
Ex:
"scripts" : {
"cfn:nag": "cfn-nag"
}
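Since there is no native cfn-nag binary to call on Windows, that script can delegate to the docker image instead; a hedged sketch (the template path and the /templates mount point are placeholders):
# Run cfn-nag from the image pulled in Step 1; on Windows cmd, replace $(pwd) with %cd%.
docker run --rm -v $(pwd):/templates stelligent/cfn-nag /templates/template.yaml
Wire that command into the "cfn:nag" script above and npm run cfn:nag will lint the template locally.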
If you're using docker-compose.yml, add the cfn-nag image details like below:
cfn-nag:
  image: "stelligent/cfn-nag"
  volumes:
    - ./path_of_cfn_file_to_copy:/path_to_copy_to
  command: ${COMMAND:-/path_to_copy_to/cfn_file}
Then set the script in package.json to run via docker-compose:
"cfn:nag": "docker-compose run --rm cfn-nag"
I'm trying to test a Django app managed by Docker. Since it's a development project only used by me, I'm using a sqlite3 database backend. However, because I'll be populating this test database with a lot of generated data, and because I don't fully trust Docker, I want to store this sqlite3 db file outside of the container in my home directory, to ensure it doesn't get deleted or lost.
However, by design, Docker makes it difficult for programs inside containers to access files outside of those containers. How do I update my Docker configuration to allow access to this one specific db file in my home directory?
You can mount a host directory into your docker container using the -v flag.
For details see this answer: https://stackoverflow.com/a/23455537/7695859.
docker run -v /host/directory:/container/directory -other -options image_name command_to_run
For a more detailed understanding, see these official docs:
Use volumes
Manage data in Docker
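Applied to this question, a minimal sketch (the image name, container path, and port are assumptions, not from the question):
# Keep the sqlite3 file under ~/django-data on the host, mounted at /app/data
# in the container; point DATABASES["default"]["NAME"] at /app/data/db.sqlite3.
docker run -v ~/django-data:/app/data -p 8000:8000 my-django-image \
    python manage.py runserver 0.0.0.0:8000
Deleting or rebuilding the container then leaves the database file untouched on the host.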
I'm looking to automatically deploy my app once we release a new version. We use CircleCI, so firing these commands shouldn't be a big deal.
cf login -a https://api.lyra-836.appcloud.swisscom.com -u myuser -p secret
cf push myapp
However, I don't want to expose my personal credentials (Passeport account) in our git repository. Is it possible to generate an API key for that purpose?
How do you handle that? I might also need to SSH into the instance to run some migration scripts after the deployment; the same goes there.
Currently, Swisscom's Application Cloud does not offer technical accounts, but you can easily create an additional account. Then add it to your org/space as a developer and it should be able to fulfill your needs.
CircleCI documentation has a section about handling secrets: Using CircleCI Environment Variables
Setting environment variables for all commands without adding them to git
Occasionally, you’ll need to add an API key or some other secret as an environment variable. You might not want to add the value to your git history. Instead, you can add environment variables using the Project settings > Environment Variables page of your project.
This documentation describes how to store encrypted stuff within your VCS.
If you prefer to keep your sensitive environment variables checked into git, but encrypted, you can follow the process outlined at circleci/encrypted-files.
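Putting it together, a hypothetical deploy step for .circleci/config.yml (CF_USER and CF_PASSWORD are names you would choose yourself under Project settings > Environment Variables, not CircleCI built-ins):
# Hypothetical CircleCI 2.x job step; credentials come from the project settings.
- run:
    name: Deploy to Swisscom Application Cloud
    command: |
      cf login -a https://api.lyra-836.appcloud.swisscom.com -u "$CF_USER" -p "$CF_PASSWORD"
      cf push myapp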
I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built on.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I logged into the failing container with oc debug and saw it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did on my local repo. Regardless, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details, any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
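A hedged sketch of such a run script (it assumes app.sh is the intended entry point; invoking it through bash sidesteps the lost executable bit entirely):
#!/bin/bash
# .s2i/bin/run -- custom S2I run script (a sketch, not from the blog post).
# app.sh does not need the x bit itself when started via bash.
exec /bin/bash /opt/app-root/src/app.sh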
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify them in the deployed pod, not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you want to change this part of the build instead of using a custom run script, I suggest you create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification, as it didn't for me, use git update-index --chmod=+x app.sh for it to work.
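You can check that the bit was actually recorded before pushing; the staged mode should read 100755:
git update-index --chmod=+x app.sh
git ls-files --stage app.sh   # expect: 100755 <hash> 0 app.sh
git commit -m "Make app.sh executable"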