When I try to run a simple program like this:
#include <mysql.h>

int main(void) {
    mysql_library_init(0, NULL, NULL);
    mysql_library_end();
    return 0;
}
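For context, the program is presumably compiled and linked against the embedded server library (libmysqld), along these lines (a sketch only; the exact flags come from mysql_config and the source file name is made up):

gcc -o embedded_test embedded_test.c $(mysql_config --cflags) $(mysql_config --libmysqld-libs)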
I get the following errors:
mysql_embedded: Can't find file: '/var/lib/mysql/mysql/plugin.frm' (errno: 13 - Permission denied)
2015-12-30 00:23:04 7fd31a6be740 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
mysql_embedded: File '/var/lib/mysql/auto.cnf' not found (Errcode: 13 - Permission denied)
Here is some info about the directory and files:
$ ls -l --time-style=long-iso /var/lib/
...
drwx------ 4 mysql mysql 4096 2015-12-26 12:58 mysql
...
$ sudo ls -l --time-style=long-iso /var/lib/mysql/mysql/plugin.frm
-rw-rw---- 1 mysql mysql 8586 2015-12-25 21:49 /var/lib/mysql/mysql/plugin.frm
$ sudo ls -l --time-style=long-iso /var/lib/mysql/auto.cnf
-rw-rw---- 1 mysql mysql 56 2015-12-25 21:49 /var/lib/mysql/auto.cnf
What's the problem? For reference, the libmysqld-dev version:
$ dpkg -s libmysqld-dev | grep 'Version'
Version: 5.6.27-0ubuntu1
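For what it's worth, the listings suggest the embedded server (running as the current user) simply cannot traverse /var/lib/mysql, which is mode 0700 and owned by mysql:mysql. A minimal sketch of one way around that, assuming it is acceptable to open the data directory up to the mysql group and to add the current user to that group:

sudo usermod -aG mysql "$USER"   # add the current user to the mysql group
sudo chmod g+rx /var/lib/mysql   # the directory is currently drwx------ (0700)
newgrp mysql                     # pick up the new group in the current shell

Note that the embedded server will also want write access (auto.cnf, the InnoDB files), so the cleaner alternative is to pass server options through mysql_library_init() that point --datadir at a directory owned by the current user.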
In a Dockerfile, the common way to copy a directory so that it is owned by a non-root user (e.g. UID 1000) is the following:
COPY --chown=1000:1000 /path/to/host/dir/ /path/to/container/dir
However, I want to use variables instead. For example
ARG USER_ID=1000
ARG GROUP_ID=1000
COPY --chown=${USER_ID}:${GROUP_ID} /path/to/host/dir/ /path/to/container/dir
But this is not possible. Is there a workaround?
Note: I know that a possible workaround is to copy the directory as root and then run chown on it (variables work fine with RUN). However, the size of the image will grow just because of the chown in a separate RUN layer.
You can create the user and group before running the COPY --chown:
mkdir -p test && cd test
mkdir -p path/to/host/dir/
touch path/to/host/dir/myfile
Create your Dockerfile:
FROM busybox
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN addgroup -g ${GROUP_ID} mygroup \
&& adduser -D myuser -u ${USER_ID} -g myuser -G mygroup -s /bin/sh -h /
COPY --chown=myuser:mygroup /path/to/host/dir/ /path/to/container/dir
Build the image
docker build -t example .
Or build it with a custom UID/GID:
docker build -t example --build-arg USER_ID=1234 --build-arg GROUP_ID=2345 .
And verify that the file was chown'ed
docker run --rm example ls -la /path/to/container/dir
total 8
drwxr-xr-x 2 myuser mygroup 4096 Dec 22 16:08 .
drwxr-xr-x 3 root root 4096 Dec 22 16:08 ..
-rw-r--r-- 1 myuser mygroup 0 Dec 22 15:51 myfile
Verify that it has the correct uid/gid:
docker run --rm example ls -lan /path/to/container/dir
total 8
drwxr-xr-x 2 1234 2345 4096 Dec 22 16:08 .
drwxr-xr-x 3 0 0 4096 Dec 22 16:08 ..
-rw-r--r-- 1 1234 2345 0 Dec 22 15:51 myfile
Note: there is an open feature-request for adding this functionality:
issue #35018 "Allow COPY command's --chown to be dynamically populated via ENV or ARG"
In my case, I used my own UID and GID numbers, and it works because I have the same non-root account in the DEV and PROD environments.
COPY --chown=1000:1000 /path/to/host/dir/ /path/to/container/dir
You can find the user and group IDs with the Linux id command.
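For example, to pass the current account's IDs straight into the build of the example image above:

docker build -t example \
  --build-arg USER_ID="$(id -u)" \
  --build-arg GROUP_ID="$(id -g)" .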
How do I put the output of a task into a resource?
- name: build-pkg-rpm
  public: true
  plan:
  - aggregate:
    - get: oregano-test-fedora
    - get: git-clone-resource
      trigger: true
      passed: [compile, build-docker-image-fedora]
  - task: create-rpm
    image: oregano-test-fedora
    config:
      platform: linux
      inputs:
      - name: git-clone-resource
      outputs:
      - name: srpm
        path: ../srpm
      run:
        path: .concourse/fedora/buildrpm.sh
        dir: git-clone-resource
  - put: srpm
    resource: copr-resource
    params:
      rpmbuild_dir: "srpm/rpmbuild/SRPMS"
      chroots: ["mageia-6-x86_64", "mageia-couldron-x86_64", "fedora-rawhide-x86_64", "fedora-25-x86_64"]
      enable_net: false
      max_n_bytes: 250000000
      project_id: 825
      regex: ".*oregano-.*\\.src\\.rpm$"
buildrpm.sh
#!/usr/bin/env bash
set -e
set -x
pwd 2>&1
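# Note: with `dir: git-clone-resource` in the task config this script starts inside the
# git-clone-resource input, so ../srpm resolves to a sibling srpm/ directory at the build root.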
RPMBUILD_DIR="$(pwd)/../srpm/rpmbuild/"
mkdir -p ${RPMBUILD_DIR}/{SOURCES,BUILD,RPMS,SRPMS,SPECS}
# fill all vars of the spec.in
./waf configure rpmspec
cp -v build/rpmspec/oregano.spec ${RPMBUILD_DIR}/SPECS/
# generate the distributable tar
./waf dist
cp -v oregano*.tar.xz ${RPMBUILD_DIR}/SOURCES/
cd ${RPMBUILD_DIR}
rpmbuild \
--define "_topdir %(pwd)" \
--define "_builddir %{_topdir}/BUILD" \
--define "_rpmdir %{_topdir}/RPMS" \
--define "_srcrpmdir %{_topdir}/SRPMS" \
--define "_specdir %{_topdir}/SPECS" \
--define "_sourcedir %{_topdir}/SOURCES" \
-ba SPECS/oregano.spec && echo "RPM was built"
pwd 2>&1
According to
https://github.com/starkandwayne/concourse-tutorial/tree/master/12_publishing_outputs
this should work, yet the directory in the put step is empty.
The documentation at https://concourse-ci.org/put-step.html does not seem to cover this topic in much detail.
I can see that the files are written properly:
Wrote: /tmp/build/be0f50d1/srpm/rpmbuild/SRPMS/oregano-0.84.3-1.fc25.src.rpm
Wrote: /tmp/build/be0f50d1/srpm/rpmbuild/RPMS/x86_64/oregano-0.84.3-1.fc25.x86_64.rpm
Wrote: /tmp/build/be0f50d1/srpm/rpmbuild/RPMS/x86_64/oregano-debuginfo-0.84.3-1.fc25.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.6mO3iF
+ umask 022
+ cd /tmp/build/be0f50d1/srpm/rpmbuild/BUILD
+ cd oregano
+ rm -rf /tmp/build/be0f50d1/srpm/rpmbuild/BUILDROOT/oregano-0.84.3-1.fc25.x86_64
+ exit 0
+ echo 'RPM was built'
RPM was built
+ pwd
/tmp/build/be0f50d1/srpm/rpmbuild
But then, when it comes to putting the task's output, it does not find any files:
base dir: "/tmp/build/put/srpm/rpmbuild/SRPMS"
error: Could not find any matches with that regex
caused by: WalkDir entry is useless
caused by: IO error for operation on /tmp/build/put/srpm/rpmbuild/SRPMS: No such file or directory (os error 2)
caused by: No such file or directory (os error 2)
When I hijack into the containers:
1: build #136, step: create-rpm, type: task
2: build #136, step: srpm, type: put
choose a container: 1
[root@f004c6a0-adee-4735-792f-8e237f645751 be0f50d1]# ls -al
total 0
drwxr-xr-x. 1 root root 44 May 31 11:43 .
drwxr-xr-x. 1 root root 24 May 31 11:43 ..
drwxr-xr-x. 1 root root 598 May 31 11:43 git-clone-resource
drwxr-xr-x. 1 root root 16 May 31 11:43 srpm
[root@f004c6a0-adee-4735-792f-8e237f645751 be0f50d1]# ls -al srpm/rpmbuild/
BUILD/ BUILDROOT/ RPMS/ SOURCES/ SPECS/ SRPMS/
[root@f004c6a0-adee-4735-792f-8e237f645751 be0f50d1]# ls -al srpm/rpmbuild/
BUILD/ BUILDROOT/ RPMS/ SOURCES/ SPECS/ SRPMS/
[root@f004c6a0-adee-4735-792f-8e237f645751 be0f50d1]# ls -al srpm/rpmbuild/SRPMS/
total 1232
drwxr-xr-x. 1 root root 58 May 31 11:43 .
drwxr-xr-x. 1 root root 70 May 31 11:43 ..
-rw-r--r--. 1 root root 1260666 May 31 11:43 oregano-0.84.3-1.fc25.src.rpm
choose a container: 2
/tmp/build/put # ls -al
total 0
drwxr-xr-x 1 root root 82 May 31 11:43 .
drwxr-xr-x 1 root root 6 May 31 11:43 ..
drwxr-xr-x 1 root root 414 May 31 11:31 git-clone-resource
drwxr-xr-x 1 root root 130 May 23 07:25 oregano-test-fedora
drwxr-xr-x 1 42949672 42949672 0 May 31 11:43 srpm
/tmp/build/put # #ls -al srpm/
/tmp/build/put # #ls -al srpm/
/tmp/build/put # #ls -al ../srpm
/tmp/build/put # ls -al ../srpm
ls: ../srpm: No such file or directory
/tmp/build/put # ls -al srpm
total 0
drwxr-xr-x 1 42949672 42949672 0 May 31 11:43 .
drwxr-xr-x 1 root root 82 May 31 11:43 ..
So why are neither the file structure nor the oregano-*.src.rpm available in the put step?
The full Concourse YAML is available here, though as far as I can tell it is not necessary: https://github.com/drahnr/oregano/blob/master/.concourse.yml
The issue was this:
- task: create-rpm
  image: oregano-test-fedora
  config:
    platform: linux
    inputs:
    - name: git-clone-resource
    outputs:
    - name: srpm
      path: ../srpm
path: ../srpm placed the output outside the build volume, and as such its content was lost. Note that dir does not do anything regarding inputs and outputs; it only changes the working directory for the execution of the script!
The default would have sufficed, which is equivalent to path: srpm (or path: "").
This works:
- task: create-rpm
  image: oregano-test-fedora
  config:
    platform: linux
    inputs:
    - name: git-clone-resource
    outputs:
    - name: srpm
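With that default, the layout inside the task container is the usual one; a sketch of the relevant paths (build id taken from the logs above):

/tmp/build/be0f50d1/git-clone-resource   # input; run.dir points the script here
/tmp/build/be0f50d1/srpm                 # declared output; handed to the put step as srpm/

which is why the put params can keep rpmbuild_dir: "srpm/rpmbuild/SRPMS".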
I can't create a minion from the map file and I have no idea what has happened. A month ago my script was working correctly; right now it fails. I tried to do some research but couldn't find anything about it. Could someone have a look at my DEBUG log? The minion is created on DigitalOcean, but the master server can't connect to it at all.
So I run:
salt-cloud -P -m /etc/salt/cloud.maps.d/production.map -l debug
The master is running on Ubuntu 16.04.1 x64, the minion also.
I use the latest SaltStack repository:
echo "deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main" >> /etc/apt/sources.list.d/saltstack.list
I tested both 2016.3.2 and 2016.3.3. Interestingly, the same script was working correctly 4 weeks ago, so I assume something must have changed.
ERROR:
Writing /usr/lib/python2.7/dist-packages/salt-2016.3.3.egg-info
* INFO: Running install_ubuntu_git_post()
disabled
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-minion.service to /lib/systemd/system/salt-minion.service.
* INFO: Running install_ubuntu_check_services()
* INFO: Running install_ubuntu_restart_daemons()
Job for salt-minion.service failed because a configured resource limit was exceeded. See "systemctl status salt-minion.service" and "journalctl -xe" for details.
start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
* ERROR: No init.d support for salt-minion was found
* ERROR: Fai
[DEBUG ] led to run install_ubuntu_restart_daemons()!!!
[ERROR ] Failed to deploy 'minion-zk-0'. Error: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 2293, in create_multiprocessing
local_master=parallel_data['local_master']
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 1281, in create
output = self.clouds[func](vm_)
File "/usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py", line 481, in create
ret = __utils__['cloud.bootstrap'](vm_, __opts__)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 527, in bootstrap
deployed = deploy_script(**deploy_kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1516, in deploy_script
if root_cmd(deploy_command, tty, sudo, **ssh_kwargs) != 0:
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 2167, in root_cmd
retcode = _exec_ssh_cmd(cmd, allow_failure=allow_failure, **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1784, in _exec_ssh_cmd
cmd, proc.exitstatus
SaltCloudSystemExit: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
[DEBUG ] LazyLoaded nested.output
minion-zk-0:
----------
Error:
Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
root@master-zk:/etc/salt/cloud.maps.d# salt '*' test.ping
minion-zk-0:
Minion did not return. [No response]
root@master-zk:/etc/salt/cloud.maps.d#
The master setting is located in your cloud configuration, somewhere in /etc/salt/cloud.profiles.d/, /etc/salt/cloud.providers.d/ or /etc/salt/cloud.d/. Just figure out where, and change the value salt to your master's IP.
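A quick way to locate it (only the paths above are assumed):

grep -rn "master" /etc/salt/cloud* 2>/dev/null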
I currently do this in my provider settings like this:
hit-vcenter:
  driver: vmware
  user: 'foo'
  password: 'secret'
  url: 'some url'
  protocol: 'https'
  port: 443
  minion:
    master: 10.1.10.1
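After redeploying the minion with the corrected master address it should accept the key and respond, for example (minion name taken from the output above):

salt-key -L
salt 'minion-zk-0' test.ping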
After installing and configuring the whenever-elasticbeanstalk gem, I'm seeing the following error in /var/log/cfn-init.log on my EC2 instance after running git aws.push from my local repo.
I am using AWS Elastic Beanstalk with Rails 4.
2014-10-21 08:08:37,602 [DEBUG] Running test for command cron_01_set_leader
2014-10-21 08:08:37,744 [DEBUG] Test command output:
2014-10-21 08:08:37,745 [DEBUG] Test for command cron_01_set_leader passed
2014-10-21 08:08:38,085 [ERROR] Command cron_01_set_leader (su -c "/usr/local/bin/bundle exec create_cron_leader --no-update" $EB_CONFIG_APP_USER) failed
2014-10-21 08:08:38,086 [DEBUG] Command cron_01_set_leader output: bash: /usr/local/bin/bundle: No such file or directory
Traceback (most recent call last):
I have added the whenever-elasticbeanstalk gem.
Below is my cron.config file content.
Any idea what I am doing wrong?
files:
  # Reload the cron on deployment
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh":
    mode: "00700"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      . /opt/elasticbeanstalk/containerfiles/envvars
      cd $EB_CONFIG_APP_CURRENT
      su -c "/usr/local/bin/bundle exec setup_cron" $EB_CONFIG_APP_USER
  # Add Bundle to the PATH
  "/etc/profile.d/bundle.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      export PATH=$PATH:/usr/local/bin
    encoding: plain
container_commands:
  cron_01_set_leader:
    test: test ! -f /opt/elasticbeanstalk/containerfiles/.cron-setup-complete
    leader_only: true
    cwd: /var/app/ondeck
    command: su -c "/usr/local/bin/bundle exec create_cron_leader --no-update" $EB_CONFIG_APP_USER
  cron_02_write_cron_setup_complete_file:
    cwd: /opt/elasticbeanstalk/containerfiles
    command: touch .cron-setup-complete
Which solution stack are you using? Can you give the exact name, something like "64bit Amazon Linux 2014.03 v1.0.9 running Ruby 2.1 (Puma)".
I think you will need to replace "/usr/local/bin/bundle" with the actual version of bundle that is used for the solution stack.
Can you just try using "bundle" instead of "/usr/local/bin/bundle"?
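If you can SSH into the instance, a quick check of where bundle actually lives on that stack might look like this (the candidate paths are just common guesses):

which bundle
ls -l /usr/local/bin/bundle /usr/bin/bundle /opt/rubies/*/bin/bundle 2>/dev/null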
mercurial-server runs on Ubuntu 12.04 LTS
myserver@ip:/etc$ hg --version
Mercurial Distributed SCM (version 2.0.2)
myserver@ip:/etc$ dpkg -s mercurial-server
Package: mercurial-server
Version: 1.2-1
....
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ ls -ltr
total 12
-rw-r--r-- 1 root root 180 Oct 10 2011 logging.rc
-rw-r--r-- 1 root root 139 Oct 10 2011 access.rc
-rw-r--r-- 1 root root 74 Mar 13 22:14 check.rc
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ cat check.rc
[hooks]
pretxncommit.author_check = /SOURCE/mercurial-server/validate.sh
#manually added here too
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ cat ~hg/repos/hgadmin/.hg/hgrc
# WARNING: when these hooks run they will entirely destroy and rewrite
# ~/.ssh/authorized_keys
[extensions]
hgext.purge =
[hooks]
changegroup.aaaab_update = hg update -C default > /dev/null
changegroup.aaaac_purge = hg purge --all > /dev/null
changegroup.refreshauth = python:mercurialserver.refreshauth.hook
pretxncommit.author_check = /SOURCE/mercurial-server/validate.sh
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ cat /SOURCE/mercurial-server/validate.sh
#!/bin/bash
echo "REMUSR:$REMOTE_USER"
echo "ATHR:`hg tip --template "{author}\n"`b"
exit 1
myserver@ip:~$ sudo -u hg cat ~hg/.ssh/authorized_keys
no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,command="/usr/share/mercurial-server/hg-ssh root/user1/user1.pub" ssh-rsa AAAAB3xOMN8ZiF user1@server.com
no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,command="/usr/share/mercurial-server/hg-ssh users/user2/user2.pub" ssh-rsa AAAAB3N..0HchQQw== user2@server.com
After this, from a local machine (Windows), I cloned a test project, changed, committed and pushed, and it succeeded without any error or message. I tried this with both the initial user/key and a user/key added via an hgadmin push.
D:\hg\testproj>hg push
pushing to ssh://hg#myserver.com/testproj
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
Works with
$ cat check.rc
[hooks]
pretxnchangegroup.author_check = /SOURCE/mercurial-server/validate.sh
It does not work with pretxncommit: that hook fires when a commit is created in the repository itself, whereas a push arrives on the server as a changegroup, so the server-side check has to use pretxnchangegroup.
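For reference, a minimal sketch of what an enforcing validate.sh could look like once the pretxnchangegroup hook is in place; the author-vs-SSH-user comparison is an assumption about the intended policy (mercurial-server exports REMOTE_USER, and Mercurial exports HG_NODE for changegroup hooks):

#!/bin/bash
# Hypothetical enforcing check (sketch, not the original always-failing test script):
# reject the push unless the author of the first incoming changeset matches the SSH user.
author="$(hg log -r "$HG_NODE" --template '{author}')"
if [ "$author" != "$REMOTE_USER" ]; then
    echo "author '$author' does not match remote user '$REMOTE_USER'" >&2
    exit 1
fi
exit 0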