I am using the SLURM job manager for dispatching jobs on a Linux cluster running Ubuntu Server 14.04.3. I noticed that sinfo reports every non-idle node as mixed, whether it is partially or fully allocated; idle nodes are correctly reported as idle. Below is the output of the sinfo command:
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 5 mix node[01-05]
compute* up infinite 1 idle node06
However, node04 is fully allocated, so sinfo should report its state as alloc, while node03 is partially allocated, as can be seen with the scontrol command:
scontrol show node node04
CPUAlloc=6 CPUErr=0 CPUTot=6 CPULoad=6.01 Features=(null)
Gres=(null)
NodeAddr=node04 NodeHostName=node04
OS=Linux RealMemory=64333 AllocMem=0 Sockets=1 Boards=1
State=ALLOCATED ThreadsPerCore=1 TmpDisk=0 Weight=1
BootTime=2016-04-11T16:38:52 SlurmdStartTime=2016-04-11T16:39:59
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
scontrol show node node03
CPUAlloc=1 CPUErr=0 CPUTot=6 CPULoad=1.01 Features=(null)
Gres=(null)
NodeAddr=node03 NodeHostName=node03
OS=Linux RealMemory=64333 AllocMem=0 Sockets=1 Boards=1
State=MIXED ThreadsPerCore=1 TmpDisk=0 Weight=1
BootTime=2016-04-11T16:38:38 SlurmdStartTime=2016-04-11T16:39:08
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
What is wrong with sinfo?
Thanks in advance for any suggestions!
In case anyone else has the same issue: this was resolved a couple of years ago:
https://bugs.schedmd.com/show_bug.cgi?id=611
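For anyone who wants to sanity-check their own cluster before and after upgrading, comparing sinfo's node-oriented CPU counts against scontrol is a quick test; this uses only standard sinfo/scontrol options, nothing site-specific:
# Per-node state plus allocated/idle/other/total CPUs
sinfo -N -o "%N %T %C"
# Cross-check a single node against scontrol
scontrol show node node04 | grep -E 'CPUAlloc|State'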
I have a fairly robust set of CI tests for a C++ library; these tests (around 50) run in the same docker image but on different machines.
On one machine ("A") all the memcheck (valgrind) tests pass (i.e. no memory leaks).
On the other ("B"), all tests produce the same valgrind error below.
51/56 MemCheck #51: combinations.cpp.x ....................***Exception: SegFault 0.14 sec
valgrind: m_libcfile.c:66 (vgPlain_safe_fd): Assertion 'newfd >= VG_(fd_hard_limit)' failed.
Cannot find memory tester output file: /builds/user/boost-multi/build/Testing/Temporary/MemoryChecker.51.log
The machines are very similar; both are Intel i7.
The only difference I can think of is that one is:
A. Ubuntu 22.10, Linux 5.19.0-29, docker 20.10.16
and the other:
B. Fedora 37, Linux 6.1.7-200.fc37.x86_64, docker 20.10.23
and perhaps some configuration of docker I don't know about.
Is there some configuration of docker that might cause the difference? Or of the kernel? Or some option in valgrind to work around this problem?
I know for a fact that on real machines (not docker) valgrind doesn't produce any memory error.
The options I use for valgrind are always --leak-check=yes --num-callers=51 --trace-children=yes --leak-check=full --track-origins=yes --gen-suppressions=all.
Valgrind version in the image is 3.19.0-1 from the debian:testing image.
Note that this isn't an error reported by valgrind; it is an error within valgrind itself.
Perhaps, after all, the only difference is that the Ubuntu version of valgrind is compiled in release mode and the error is just ignored. (<-- this doesn't make sense, valgrind is the same in both cases because the docker image is the same).
I tried removing --num-callers=51 or setting it at 12 (default value), to no avail.
I found a difference between the docker images and the real machines, and a workaround.
It has to do with the number of file descriptors.
(This was pointed out briefly in one of the threads on valgrind bug reports on Mac OS https://bugs.kde.org/show_bug.cgi?id=381815#c0)
Inside the docker image running in Ubuntu 22.10:
ulimit -n
1048576
Inside the docker image running in Fedora 37:
ulimit -n
1073741816
(which looks like a ridiculous number or an overflow)
In the Fedora 37 and the Ubuntu 22.10 real machines:
ulimit -n
1024
So, doing this in the CI recipe, "solved" the problem:
- ulimit -n # reports current value
- ulimit -n 1024 # workaround needed by valgrind in docker running in Fedora 37
- ctest ... (with memcheck)
I have no idea why this workaround works.
For reference:
$ ulimit --help
...
-n the maximum number of open file descriptors
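As an aside, if you control the docker invocation itself (rather than only the commands inside the CI job), the same cap can be applied when the container starts via docker's --ulimit flag. A sketch, reusing the debian:testing image mentioned above:
# Start the container with the fd limit already lowered to the bare-metal default
docker run --rm --ulimit nofile=1024:1024 debian:testing bash -c 'ulimit -n'
# prints 1024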
First off, "you are doing it wrong" with your Valgrind arguments. For CI I recommend a two stage approach. Use as many default arguments as possible for the CI run (--trace-children=yes may well be necessary but not the others). If your codebase is leaky then you may need to check for leaks, but if you can maintain a zero leak policy (or only suppressed leaks) then you can tell if there are new leaks from the summary. After your CI detects an issue you can run again with the kitchen sink options to get full information. Your runs will be significantly faster without all those options.
Back to the question.
Valgrind is trying to dup() some file (the guest exe, a tempfile or something like that). The fd that it gets is higher than what it thinks the nofile rlimit is, so it is asserting.
A billion files is ridiculous.
Valgrind will try to call prlimit RLIMIT_NOFILE, with a fallback call to rlimit, and a second fallback to setting the limit to 1024.
To really see what is going on you need to modify the Valgrind source (m_main.c, setup_file_descriptors, set the local show to True). With this change I see
fd limits: host, before: cur 65535 max 65535
fd limits: host, after: cur 65535 max 65535
fd limits: guest : cur 65523 max 65523
Otherwise with strace I see
2049 prlimit64(0, RLIMIT_NOFILE, NULL, {rlim_cur=65535, rlim_max=65535}) = 0
2049 prlimit64(0, RLIMIT_NOFILE, {rlim_cur=65535, rlim_max=65535}, NULL) = 0
(all the above on RHEL 7.6 amd64)
EDIT: Note that the above show Valgrind querying and setting the resource limit. If you use ulimit to lower the limit before running Valgrind, then Valgrind will try to honour that limit. Also note that Valgrind reserves a small number (8) of files for its own use.
I am following the Center for High Throughput Computing and Introduction to Configuration tutorials on the HTCondor website to set up a partitionable slot. Before any configuration I run
condor_status
and get the following output.
I update the file 00-minicondor in /etc/condor/config.d by adding the following lines at the end of the file.
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=4
SLOT_TYPE_1_PARTITIONABLE = TRUE
and reconfigure
sudo condor_reconfig
Now with
condor_status
I get this output as expected. Now, I run the following command to check everything is fine
condor_status -af Name Slotype Cpus
and find slot1@ip-172-31-54-214.ec2.internal undefined 1 instead of slot1@ip-172-31-54-214.ec2.internal Partitionable 4 61295, which is what I would expect. Moreover, when I try to submit a job that asks for more than 1 CPU, it does not allocate space for it as it should (it stays waiting forever).
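For reference, the kind of multi-CPU job I am testing with looks roughly like this (the file name and executable are just placeholders):
# Minimal test submit asking for 2 CPUs from the partitionable slot
cat > multi_cpu.sub <<'EOF'
executable   = /bin/sleep
arguments    = 60
request_cpus = 2
queue
EOF
condor_submit multi_cpu.sub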
I don't know if I made some mistake during the installation process or what could be happening. I would really appreciate any help!
EXTRA INFO: In case it can be of any help, I have installed HTCondor with the command
curl -fsSL https://get.htcondor.org | sudo /bin/bash -s -- --no-dry-run
on Ubuntu 18.04 running on an old p2.xlarge instance (it has 4 cores).
UPDATE: After rebooting the whole thing it seems to be working. I can now send jobs with different CPUs requests and it will start them properly.
The only issue I would say persists is that Memory allocation is not showing properly, for example:
But in reality it is allocating enough memory for the job (in this case around 12 GB).
If I run again
condor_status -af Name Slotype Cpus
I still get output I am not supposed to get.
But at least it is showing the correct number of CPUs (even if it just says undefined).
What is the output of condor_q -better when the job is idle?
I am trying to create a staking pool for Cardano. I got the node up and running, but cardano-cli is giving me a hard time. I have it installed: when I type cardano-cli version it returns the cardano-cli version info.
However, when I enter cardano-cli query utxo --mainnet --address $RECEIVER I get this error:
cardano-cli: Network.Socket.connect: <socket: 11>: does not exist (No such file or directory)
Could it be because the blockchain isn't fully synched?
I am running Windows 10 with VS and the node.
Yes, ivan p is correct. However, it's also important to mention that some Cardano node example scripts involving stake pools (such as creating a Shelley blockchain from scratch) had a bug for quite some time that essentially caused multiple nodes to point to the same socket. If you're attempting to run multiple nodes - which you should be while running a stake pool - double-check the precise path and location of each node's socket.
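For example, when running more than one node on the same host, giving each node its own socket (and port) avoids the collision; the paths and ports below are only placeholders:
# Each node gets its own --socket-path so cardano-cli can target them separately
cardano-node run --topology node1/topology.json --database-path node1/db --socket-path node1/node.socket --port 3001 --config node1/config.json &
cardano-node run --topology node2/topology.json --database-path node2/db --socket-path node2/node.socket --port 3002 --config node2/config.json &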
cardano-cli requires both the CARDANO_NODE_SOCKET_PATH variable set in the shell and the node server running, so that it can reach the node through the socket.
export CARDANO_NODE_SOCKET_PATH="/root/db/node.socket"
To query the balance of an address we need a running node and the environment variable CARDANO_NODE_SOCKET_PATH set to the path of the node.socket.
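Putting it together, a quick sanity check before running the query might look like this (the socket path must match whatever --socket-path the node was started with; the one below is just the example from above):
export CARDANO_NODE_SOCKET_PATH="/root/db/node.socket"
ls -l "$CARDANO_NODE_SOCKET_PATH"   # a socket file appears here once the node is up
cardano-cli query utxo --mainnet --address $RECEIVER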
I'm trying to use Torque's (5.1.1) qsub command to launch multiple OpenMPI processes, one process per node, and to have each process launch a single process on its own local node using MPI_Comm_spawn(). MPI_Comm_spawn() is reporting:
All nodes which are allocated for this job are already filled.
My OpenMPI version is 4.0.1.
I am following the instructions here to control the mapping of nodes:
Controlling node mapping of MPI_COMM_SPAWN
using the --map-by ppr:1:node option to mpiexec and a hostfile (programmatically derived from the ${PBS_NODEFILE} file that Torque produces). My derived file MyHostFile looks like this:
n001.cluster.com slots=2 max_slots=2
n002.cluster.com slots=2 max_slots=2
while the original ${PBS_NODEFILE} only has the node names, and no slot specifications.
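For completeness, the derived file is produced along these lines (a sketch, not my exact script; ${PBS_NODEFILE} can repeat a node name once per allocated core, so duplicates are collapsed first):
# Build MyHostFile from the Torque-generated node list, appending slot counts
sort -u "${PBS_NODEFILE}" | awk '{print $1" slots=2 max_slots=2"}' > MyHostFile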
My qsub command is
qsub -V -j oe -e ./tempdir -o ./tempdir -N MyJob MyJob.bash
The mpiexec command from MyJob.bash is
mpiexec --display-map --np 2 --hostfile MyNodefile --map-by ppr:1:node <executable>.
MPI_Comm_spawn() causes this error to be printed:
Data for JOB [22220,1] offset 0 Total slots allocated 1 <=====
======================== JOB MAP ========================
Data for node: n001 Num slots: 1 Max slots: 0 Num procs: 1
Process OMPI jobid: [22220,1] App: 0 Process rank: 0 Bound: socket 0[core 0[hwt 0]]:[B/././././././././.][./././././././././.]
=============================================================
All nodes which are allocated for this job are already filled.
There are two things that occur to me:
(1) "Total slots allocated" is 1 above, but I need at least two slots available.
(2) It may not be right to try to specify a hostfile to mpiexec when
using Torque (though it is derived from the Torque hostfile ${PBS_NODEFILE}). Maybe my derived hostfile is being ignored.
Is there a way to make this work? I've tried recompiling OpenMPI
without Torque support, hopefully preventing OpenMPI from interacting
with it, but it didn't change the error message.
Answering my own question: adding the argument -l nodes=1:ppn=2 to the qsub command reserves 2 processors on the node, even though mpiexec is launching only one process. MPI_Comm_spawn() can then spawn the new process on the second reserved slot.
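In other words, the submission becomes the same command as before plus the resource request:
qsub -V -j oe -e ./tempdir -o ./tempdir -N MyJob -l nodes=1:ppn=2 MyJob.bash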
I also had to compile OpenMPI without Torque support, since including it causes my hostfile argument to be ignored and the Torque-generated hostfile to be used.
I’m trying to run a model using TPUEstimator locally on a CPU first to validate that it works by setting use_tpu=False on the estimator initialization. When running train I get this error.
InternalError: failed to synchronously memcpy host-to-device: host 0x7fcc7e4d4000 to device 0x1deffc002 size 4096: Failed precondition: Unable to enqueue when not opened, queue: [0000:00:04.0 PE0 C0 MC0 TN0 Queue HBM_WRITE]. State is: CLOSED
[[Node: optimizer/gradients/neural_network/fully_connected_2/BiasAdd_grad/BiasAddGrad_G14 = _Recv[client_terminated=false, recv_device="/job:worker/replica:0/task:0/device:TPU:0", send_device="/job:worker/replica:0/task:0/device:CPU:0", send_device_incarnation=-7832507818616568453, tensor_name="edge_42_op...iasAddGrad", tensor_type=DT_FLOAT, _device="/job:worker/replica:0/task:0/device:TPU:0"]()]]
It looks like it’s still trying to use the TPU, as it says recv_device="/job:worker/replica:0/task:0/device:TPU:0". Why is it trying to use the TPU when use_tpu is set to False?
What optimizer are you using? This type of error can happen if you use a tf.contrib.tpu.CrossShardOptimizer and use_tpu is set to False. The optimizer is trying to shard the work across TPU cores but can’t because you’re running on your CPU.
It’s common practice to have a command line flag that sets whether the TPU is being used. This flag is used to toggle things like CrossShardOptimizer and use_tpu. For example, in the MNIST reference model:
if FLAGS.use_tpu:
  optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
https://github.com/tensorflow/models/blob/ad3526a98e7d5e9e57c029b8857ef7b15c903ca2/official/mnist/mnist_tpu.py#L102