Cannot start Guix daemon (failed installation)

I've followed all the steps of the Guix binary installation at https://www.gnu.org/software/guix/manual/html_node/Binary-Installation.html#Binary-Installation, but when I run
sudo ln -sf ~root/.guix-profile/lib/systemd/system/guix-daemon.service /etc/systemd/system/
sudo systemctl enable guix-daemon
it fails with:
Failed to execute operation: Too many levels of symbolic links
The contents of ~root/.guix-profile/lib/systemd/system/guix-daemon.service look correct, and only one symbolic link appears to be involved.
What's wrong?
Update: I solved it by copying the file as
sudo cp -f ~root/.guix-profile/lib/systemd/system/guix-daemon.service /etc/systemd/system/
instead. It seems systemd only follows a limited number of symbolic-link levels, and although only one link is visible here, the Guix profile path is itself a chain of links into /gnu/store, so the effective depth is larger than it looks.
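You can check how deep the chain actually goes with namei from util-linux, which lists every path component and follows each symlink it meets:
sudo namei -l ~root/.guix-profile/lib/systemd/system/guix-daemon.service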
Update: Next problem:
The command
sudo systemctl start guix-daemon
prints nothing on stdout, but the daemon is not running:
ps -fel|grep guix
returns nothing.
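To see why the unit did not start, the standard systemd diagnostics are the first place to look; nothing here is Guix-specific:
sudo systemctl status guix-daemon
sudo journalctl -u guix-daemon -e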

Related

"RUN true" in dockerfile

When I took over a project, I found a command "RUN true" in the Dockerfile.
FROM xxx
RUN xxx
RUN true
RUN xxx
I don't know what this command does; can anyone explain? In my opinion this command makes no sense, but I'm not sure whether it has some other use.
There is documentation about Creating Images where you can see this pattern:
RUN true \
&& dnf install -y --setopt=tsflags=nodocs \
httpd vim \
&& systemctl enable httpd \
&& dnf clean all \
&& true
@David Maze, I tested it. Dockerfile:
FROM centos:7.9.2009
RUN yum install tmux -y
RUN yum install not_exists -y
build log:
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM centos:7.9.2009
---> eeb6ee3f44bd
Step 2/3 : RUN yum install tmux -y
---> Running in 6c6e29ea9f2c
...omit...
Complete!
Removing intermediate container 6c6e29ea9f2c
---> 7c796c2b5260
Step 3/3 : RUN yum install not_exists -y
---> Running in e4b7096cc42b
...omit...
No package not_exists available.
Error: Nothing to do
The command '/bin/sh -c yum install not_exists -y' returned a non-zero code: 1
Modified Dockerfile:
FROM centos:7.9.2009
RUN yum install tmux -y
RUN yum install tree -y
build log:
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM centos:7.9.2009
---> eeb6ee3f44bd
Step 2/3 : RUN yum install tmux -y
---> Using cache
---> 7c796c2b5260
Step 3/3 : RUN yum install tree -y
---> Running in 180b32cb44f3
...omit...
Installed:
tree.x86_64 0:1.6.0-10.el7
Complete!
Removing intermediate container 180b32cb44f3
---> 4e905ed25cc2
Successfully built 4e905ed25cc2
Successfully tagged test:v0
You can see Using cache ---> 7c796c2b5260: even without a "RUN true" command, the cache for the first "RUN" is reused.
RUN true as a standalone command does absolutely nothing and it's safe to delete it.
/bin/true is a standard shell command. It reads no input, produces no output, and neither reads nor writes files; it just exits with a status code of 0 ("success"). Running it as a Docker step will have no effect on the final image other than inserting an additional layer into the docker history.
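You can verify its behavior in any shell; it produces no output and exits 0:
$ /bin/true; echo $?
0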
The one clever use I can think of for this is to cause a later part of a Dockerfile to re-run. Imagine a Dockerfile like
RUN some_expensive_command http://server-a.example.com/input1
RUN another_expensive_command http://server-b.example.com/input2
If the second input changes, you might want to rebuild this image. docker build --no-cache would re-run the first step too, though, and that could take longer than you want. Inserting a RUN true line between the two commands invalidates Docker's layer cache from that point on, while the first command is still served from cache.
# identical RUN line as before, from cache
RUN some_expensive_command http://server-a.example.com/input1
# not the same RUN line, so "executes" (but does nothing)
RUN true
# not running commands from cache any more
RUN another_expensive_command http://server-b.example.com/input2
I found an existing answer that explains it quite well. Quoting it here:
Running and thus creating a new container even if it terminates still keeps the resulting container image and metadata lying around which can still be linked to.
So when you run docker run ... /bin/true you are essentially creating a new container for storage purposes and running the simplest thing you can.
Docker 1.5 introduced the docker create command, so I believe you can now "create" containers without confusingly running something like /bin/true.
I also found a quick explanation on the best-practices GitHub page, under the '#chaining-commands' section:
The first and last commands of the block are special.
If you would like to prepend or append a one-line command to the block, you will have to edit two lines - one that you are adding and the first or last commands. The first command is on the same line as the RUN directive, whereas the last command lacks the trailing backslash.
Editing a line with a command that you don't want to change presents a risk of introducing a bug, and you also obscure the line's history. This can be mitigated by having both the first and last commands be true - they don't do anything.
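To make the quoted advice concrete, here is a small sketch (the package names are just placeholders). Without the anchors, appending a command forces you to also edit the previous line to add its trailing backslash; with true at both ends, it is a clean one-line addition:
# without anchors: appending a step also edits the line above it
RUN dnf install -y httpd \
 && dnf clean all
# with anchors: each real command owns exactly one line
RUN true \
 && dnf install -y httpd \
 && dnf clean all \
 && true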

VMware on Arch: Could not open /dev/vmmon: No such file or directory

I want to install VMware Workstation on Arch. I used the command yay -S vmware-workstation to install it. After the installation finished, I ran VMware, created a VM, started it, and got the error:
Could not open /dev/vmmon: No such file or directory.
Please make sure that the kernel module `vmmon' is loaded.
I tried installing linux-headers, but it still didn't work.
OS: Arch Linux, 5.10.56-1-lts.
Thanks for any help!
I ran into this problem too, and the solutions I found online ended up having their own problems that needed to be solved, which I'll document here.
To solve the Could not open /dev/vmmon error, you need to run:
sudo vmware-modconfig --console --install-all
If this returns an error about GLib not having support, you need to clone https://github.com/mkubecek/vmware-host-modules.git and build it:
$ git clone https://github.com/mkubecek/vmware-host-modules.git
$ cd vmware-host-modules
$ git checkout -b 16.2.1 origin/workstation-16.2.1
$ sudo make
If this returns an error containing fatal error: generated/autoconf.h: No such file or directory, you need to install linux-headers matching your kernel version. Probably also make sure your kernel is up to date, although I'm not sure whether that's necessary.
$ sudo pacman -S linux
$ sudo reboot
$ sudo pacman -S linux-headers
Now you should be able to run sudo make install in that cloned repo, which installs the modules required for the vmware-modconfig --console --install-all command to succeed, which should solve the issue.
So working backwards, the steps are:
Update your linux kernel and install the right linux-headers for it.
Clone this git repo, cd into it, git checkout -b 16.2.1 origin/workstation-16.2.1, and run sudo make install
Run sudo vmware-modconfig --console --install-all
More about can be found at my post here: https://bbs.archlinux.org/viewtopic.php?pid=2020372#p2020372
I also encountered the same problem. Most Stack Overflow pages suggest reinstalling VMware, but that's not a fair solution; in fact, it's not a solution at all. It's like pulling a tooth because it aches.
Some posts, and even official VMware posts, say to disable Secure Boot.
But after trying both, nothing changed, so I went into /dev and found that vmmon exists there.
And when I tried to load the module, it loaded successfully.
So from there I concluded that to solve this issue you should:
Either disable Secure Boot or sign the vmmon module.
cd to /dev and confirm vmmon is present.
Load vmmon using modprobe.
And, as always, happy coding!
The command sudo vmware-modconfig --console --install-all works, but every time I reboot the system the error comes back.
In my case, the cause of this problem was that vmmon wasn't loaded. So I just ran
sudo modprobe -v vmmon
and it worked.
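If the module has to be loaded again after every reboot, systemd can do it automatically at boot via modules-load.d. A minimal sketch, assuming the standard vmmon and vmnet module names (the file name vmware.conf is arbitrary):
# tell systemd to load the VMware modules at every boot
printf 'vmmon\nvmnet\n' | sudo tee /etc/modules-load.d/vmware.conf
# load them immediately, without rebooting
sudo modprobe -a vmmon vmnet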

AWS CodeDeploy agent install just fails - what am I missing?

I follow, verbatim, the instructions given, and I get an error almost like some utterly unrelated program is being invoked. For the record, it seemed to be working yesterday.
I am running this on Amazon Linux 2:
sudo yum update
sudo yum -y install ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-${AWS::Region}.s3.${AWS::Region}.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status
and here is what happens:
[root@ip-10-204-84-134 bin]# sudo ./install auto
./install: missing destination file operand after ‘auto’
Try './install --help' for more information.
[root@ip-10-204-84-134 bin]# sudo ./install --help
Usage: ./install [OPTION]... [-T] SOURCE DEST
or: ./install [OPTION]... SOURCE... DIRECTORY
or: ./install [OPTION]... -t DIRECTORY SOURCE...
or: ./install [OPTION]... -d DIRECTORY...
This install program copies files (often just compiled) into destination
locations you choose. If you want to download and install a ready-to-use
package on a GNU/Linux system, you should instead be using a package manager
like yum(1) or apt-get(1).
In the first three forms, copy SOURCE to DEST or multiple SOURCE(s) to
the existing DIRECTORY, while setting permission modes and owner/group.
In the 4th form, create all components of the given DIRECTORY(ies).
Mandatory arguments to long options are mandatory for short options too.
--backup[=CONTROL] make a backup of each existing destination file
-b like --backup but does not accept an argument
-c (ignored)
-C, --compare compare each pair of source and destination files, and
in some cases, do not modify the destination at all
-d, --directory treat all arguments as directory names; create all
components of the specified directories
-D create all leading components of DEST except the last,
then copy SOURCE to DEST
-g, --group=GROUP set group ownership, instead of process' current group
-m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x
-o, --owner=OWNER set ownership (super-user only)
-p, --preserve-timestamps apply access/modification times of SOURCE files
to corresponding destination files
-s, --strip strip symbol tables
--strip-program=PROGRAM program used to strip binaries
-S, --suffix=SUFFIX override the usual backup suffix
-t, --target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY
-T, --no-target-directory treat DEST as a normal file
-v, --verbose print the name of each directory as it is created
-P, --preserve-context preserve SELinux security context (-P deprecated)
-Z set SELinux security context of destination
file to default type
--context[=CTX] like -Z, or if CTX is specified then set the
SELinux or SMACK security context to CTX
--help display this help and exit
--version output version information and exit
The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX.
The version control method may be selected via the --backup option or through
the VERSION_CONTROL environment variable. Here are the values:
none, off never make backups (even if --backup is given)
numbered, t make numbered backups
existing, nil numbered if numbered backups exist, simple otherwise
simple, never always make simple backups
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Report install translation bugs to <http://translationproject.org/team/>
For complete documentation, run: info coreutils 'install invocation'
OK, the documentation is totally broken, but just by looking in that bucket I found an RPM and installed it:
[root@ip-10-204-84-134 bin]# sudo yum install -y https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/codedeploy-agent.noarch.rpm^C
[root@ip-10-204-84-134 bin]# sudo service codedeploy-agent status
The AWS CodeDeploy agent is running as PID 4572
[root@ip-10-204-84-134 bin]#
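A likely explanation of the original failure: ${AWS::Region} is CloudFormation substitution syntax, which the shell does not expand, so the wget never fetched the AWS script; and since the commands were run from a bin directory, ./install appears to have resolved to GNU coreutils' install, which matches the help text above. With a real region substituted, the documented sequence should work; a sketch assuming us-east-1:
sudo yum -y update
sudo yum -y install ruby wget
cd /home/ec2-user
# substitute your own region for us-east-1 in both places
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status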

How to solve celerybeat is down: no pid file found?

I have followed instructions from https://pythad.github.io/articles/2016-12/how-to-run-celery-as-a-daemon-in-production
It works fine for celeryd; however, when starting celerybeat it says no pid file was found.
I've used this tutorial on previous projects and it worked for both celeryd and celerybeat. The only difference in this project is that all the project files, including the Django project, are owned by root. I can't find any more details about the issue.
You also need to fix the ownership and permissions of the log and pid directories that celery writes to:
sudo chmod 755 /var/log/celery/ /var/run/celery/
sudo chown root:root /var/log/celery/ /var/run/celery/
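If those directories don't exist at all, create them first; note that on many distributions /var/run is a tmpfs, so anything created there by hand disappears on reboot and must be recreated by the init script or at boot. A minimal sketch, assuming the paths from the tutorial:
sudo mkdir -p /var/log/celery /var/run/celery
sudo chown root:root /var/log/celery /var/run/celery
sudo chmod 755 /var/log/celery /var/run/celery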

ldconfig seems non-functional under Alpine 3.3

I'm currently installing goczmq (https://github.com/zeromq/goczmq) in a golang:1.6.2-alpine Docker container, as follows:
wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.10.tar.gz
wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.10.tar.gz.sig
wget https://download.libsodium.org/jedi.gpg.asc
gpg --import jedi.gpg.asc
gpg --verify libsodium-1.0.10.tar.gz.sig libsodium-1.0.10.tar.gz
tar zxvf libsodium-1.0.10.tar.gz
cd libsodium-1.0.10
./configure; make check
sudo make install
sudo ldconfig
The process fails at ldconfig: there seems to be an ldconfig command, but I don't think it is actually functional. Any insights? Thanks in advance.
Alpine's version of ldconfig requires you to specify the target folder or library as an argument. Note that Alpine has no /etc/ld.so.conf file, nor does it recognize one if you create it.
Example with no target path:
$ docker run -ti alpine sh -c "ldconfig; echo \$?"
1
Example with target path:
$ docker run -ti alpine sh -c "ldconfig /; echo \$?"
0
However, even with that, there are frequently linking errors. Others suggest:
Creating the symbolic links manually
Installing glibc into your container.
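One further note that may let you skip ldconfig entirely; this is an observation about musl (Alpine's libc), not about the goczmq build specifically. musl's dynamic linker searches /lib, /usr/local/lib, and /usr/lib by default (or the paths listed in /etc/ld-musl-$ARCH.path), and libsodium's default autotools prefix installs into /usr/local/lib, so the library should be found without any ldconfig run. In a Dockerfile it is therefore common to tolerate the step failing:
# musl already searches /usr/local/lib, so the freshly installed library
# is found without ldconfig; run Alpine's ldconfig on the directory anyway
# and ignore any complaint
RUN make install && (ldconfig /usr/local/lib || true)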