"File is locked, but no other information is available" when trying to read from (NFS) folder with SAS on linux? - sas

I am trying to open a .sas file from a read-only NFS mount on CentOS 7 in SAS (v9.4, TS Level 1M5) with the Program Editor, and I get these errors:
ERROR: File is locked, but no other information is available. File=/path/to/sas/file.sas System Error Code = 37.
ERROR: File is in use, /path/to/sas/file.sas
I am not sure what could be causing this. I don't know of any process that could be accessing the file aside from the one I am opening it with, and I have no problems accessing the NFS share or opening the file with other applications on the server (e.g. the Pluma text editor).
Running lslocks on the server, none of these .sas files show up.
If relevant, the /etc/exports config for the NFS mount on the exporting server I'm trying to access from the client looks like...
...
/path/to/nfs/share myclientserver(ro,root_squash,sync) someotherserver(...) ...
...
Anyone with more experience have any ideas what could be going on here? Any other debugging info that should be added to this question?

We had a similar problem, though not on CentOS. Unmounting and remounting the share fixed it. Have you tried that?
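As a rough sketch of that suggestion (the mount point below is just a placeholder for wherever the share is mounted on the client; run as root), and since system error code 37 corresponds to ENOLCK ("No locks available"), it is also worth confirming the lock manager is reachable:

umount /path/to/nfs/share
mount /path/to/nfs/share                   # re-mounts via the existing /etc/fstab entry
rpcinfo -p | grep -E 'status|nlockmgr'     # rpc.statd and the NFS lock manager should be listed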

Related

Download File - Google Cloud VM

I am trying to download a copy of my MySQL history to keep on my local drive as a safeguard. When I select Download File, a dropdown menu appears and I am prompted to enter the file path for the download, but after all the variations I can think of, I keep receiving an error message.
Download File means that you are downloading a file from the VM to your local computer. Therefore the expected path is a file on the VM.
If instead you want to upload c:\test.txt to your VM, select Upload File and then enter c:\test.txt. The file will be uploaded to your home directory on the VM.
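So for this case, the path to type into the Download File prompt is a path that exists on the VM, for example (assuming the default mysql client history file and a placeholder username):

/home/your_user/.mysql_history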

Mount persistent disk with data to VM without formatting

I am trying to mount a persistent disk with data to a VM to use it in Google Datalab. So far no success, ideally I would like to see my files in the Datalab notebook.
First, I added the disk in VM settings with Read/Write mode.
Second, I ran lsblk to see what disks there are.
Then tried this: sudo mount -o ro /dev/sdd /mnt/disks/z
I got this error:
wrong fs type, bad option, bad superblock on /dev/sdd,
P.S. I had used the disk I want to mount on another VM and downloaded some data onto it. It was formatted as an NTFS disk.
Any ideas?
I would need to create a file system, as noted in the following document:
$ mkfs.vfat /dev/sdX2
By contrast, mkpart only creates a new partition, without creating a new file system on that partition:
Command: mkpart [part-type fs-type name] start end
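Putting those two steps together, a rough sketch looks like this (the device name and mount point are taken from the question; note that creating a file system wipes whatever is currently on the disk, so this does not preserve the existing NTFS data):

sudo parted --script /dev/sdd mklabel gpt mkpart primary fat32 0% 100%   # new partition table + partition, no file system yet
sudo mkfs.vfat /dev/sdd1                                                  # create the file system on that partition
sudo mkdir -p /mnt/disks/z
sudo mount /dev/sdd1 /mnt/disks/z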

Trace32 config file error

I am using the following settings to control TRACE32 CMM script execution from C# scripts.
Node="localhost"
Port="20000"
PackLen="1024"
Device="1"
I recently uninstalled and reinstalled TRACE32 and lost the config file. Now I am not able to execute even the T32_Init() function. Can someone give me the config file contents?
If TRACE32 still starts, but you just can't connect to it via the remote API, add the following lines to your TRACE32 configuration file (config.t32):
RCL=NETASSIST
PORT=20000
There must be an empty line before and after that block.
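For reference, a rough sketch of a complete config.t32 (the OS/ID/TMP/SYS block and the PBI=SIM simulator setting are assumptions; replace them with the values that match your installation and debug hardware; the remote API block mirrors the C# settings above):

OS=
ID=T32
TMP=C:\Temp
SYS=C:\T32

PBI=SIM

RCL=NETASSIST
PACKLEN=1024
PORT=20000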

Not getting IDQ log while running using infacmd command

We are running a shell script that runs a deployed IDQ mapping. I looked through the Unix directories to see if it created a mapping log file, but I can't see it anywhere.
I checked various directories under the <infa_home> folder but could not trace the log file.
If you have come across the same situation, please do let me know.
Which log are you looking for? Mapping / workflow / DIS / web service / Catalina?
All IDQ logs are stored under $HOME/infa_shared/log/disLogs/*
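As a sketch, a quick way to locate the right file there (the grep pattern is a placeholder for your deployed mapping name):

ls -lt $HOME/infa_shared/log/disLogs/ | head                      # most recently written logs first
grep -ril "your_mapping_name" $HOME/infa_shared/log/disLogs/      # find logs that mention the mapping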

Django ImageField upload to nfs. (No locks available)

I want to upload with a Django ImageField to an NFS storage, but I get this error:
[Errno 37] No locks available
This is in /etc/fstab:
173.203.221.112:/home/user/project/media/uploads/ /home/user/project/media/uploads nfs rw,bg,hard,lock,intr,tcp,vers=3,wsize=8192,rsize=8192 0 0
I also tried to patch Django to use flock() instead of lockf(), but it still doesn't work.
http://code.djangoproject.com/ticket/9400
Any idea what's wrong?
I had this messy issue once, and after losing a lot of time looking for an answer I found this solution: rpc.statd
I had to execute that command on both sides of the NFS share; in my case that was my computer and a virtual machine.
Some information about this command can be found here:
Linux command: rpc.statd - NSM status monitor
If that is not enough: sometimes when I faced this issue I had to start the statd service manually because it wasn't running. In that case the way to fix the problem is to execute this command on both sides of the NFS share:
service statd start
After executing the command on both sides, the locking problem should disappear.
Some more information on NFS software can be found here:
Archlinux wiki: NFS
You could check if nfslock is running on both the nfs server and client machines. It is responsible for managing the locks.
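As a sketch of that check on a CentOS/RHEL-style box (service names vary: nfslock under SysV init, rpc-statd under systemd), run on both the NFS client and the server:

service nfslock status                     # is the lock service running?
service nfslock start                      # start it if not
rpcinfo -p | grep -E 'status|nlockmgr'     # statd and the NFS lock manager should be registered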