UniVerse 10.3 on Linux, file creation isn't acting as expected

I'm seeing a really strange thing:
I am issuing the TCL command
CREATE-FILE FNAME 30 7 1
UniVerse is creating the file, but as a type 18 file?
Is this a bug in this version of UniVerse, or is there something they haven't documented?

From UniVerse Help:
The syntax of CREATE-FILE is slightly different in PICK, REALITY, and IN2 flavor accounts from that used in INFORMATION and IDEAL flavor accounts.
CREATE.FILE filename [dict.modulo [,dict.separation [,dict.type]]] [data.modulo [,data.separation [,data.type]]]
Thus, in your case,
CREATE-FILE FNAME 30 7 1
means: create the file with
a dictionary file with modulo 30, and
a data file with modulo 7.
The last 1 is ignored. Since you didn't set a data file type, UniVerse takes the default file type, 18.
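Following the syntax quoted above, the type belongs in the comma-separated group for each file, so to set it explicitly you would issue something like the following (the separation and type values here are only placeholders; substitute the file type you actually want):
CREATE-FILE FNAME 30,1,18 7,1,18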

Convert CSV to Gridded Binary

I am trying to convert a CSV text file with three columns and 572 rows to a gridded binary file (.bin) using gfortran.
I have two Fortran programs that I have written to achieve this.
The issue is that my binary file ends up way too large (9.6 GB), which is not correct.
I have a sneaking suspicion that my nx and ny values in ascii2grd.f90 are not correct and that is leading to the bad .bin file being created. With such a small list (only 572 rows), I am expecting the final .bin to be in the KB range, not GB.
temp.f90
!PROGRAM TO CONVERT ASCII TO GRD
program gridded
  real lon(572),lat(572),temp(572)
  open(2,file='/home/weather/data/file')
  open(3,file='/home/weather/out.dat')
  do 20 i=1,572
    read(2,*)lat(i),lon(i),temp(i)
    write(3,*)temp(i)
20 continue
  stop
end
ascii2grd.f90
!PROGRAM TO CONVERT ASCII TO GRD
program ascii2grd
  parameter(nx=26,ny=22,np=1)
  real u(nx,ny,np),temp1(nx,ny)
  integer :: reclen
  inquire(iolength=reclen)a
  open(12,file='/home/weather/test.bin',&
       form='unformatted',access='direct',recl=nx*ny*reclen)
  open(11,file='/home/weather/out.dat')
  do k=1,np
    read(11,*)((u(j,i,k),j=1,nx),i=1,ny)
10  continue
  enddo
  rec=1
  do kk=1,np
    write(12,rec=irec)((u(j,i,kk),j=1,nx),i=1,ny)
    write(*,*)'Processing...'
    irec=irec+1
  enddo
  write(*,*)'Finished'
  stop
end
Sample from out.dat
6.90000010
15.1999998
21.2999992
999.000000
6.50000000
10.1000004
999.000000
18.0000000
999.000000
20.1000004
15.6000004
8.30000019
9.89999962
999.000000
Sample from file
-69.93500 43.90028 6.9
-69.79722 44.32056 15.2
-69.71076 43.96401 21.3
-69.68333 44.53333 999.00000
-69.55380 45.46462 6.5
-69.53333 46.61667 10.1
-69.1 44.06667 999.00000
-68.81861 44.79722 18.0
-68.69194 45.64778 999.00000
-68.36667 44.45 20.1
-68.30722 47.28500 15.6
-68.05 46.68333 8.3
-68.01333 46.86722 9.9
-67.79194 46.12306 999.00000
I would suggest a general strategy like the following:
1. Read the CSV with Python/pandas (it could be many other things, although using Python will be nice for step 2, as you'll see). The important thing is that many other languages are more convenient than Fortran for reading a CSV, and that will let you check that step 1 is working before moving on.
2. Output to binary with numpy's tofile(). Also note that numpy defaults to 'C' order for arrays, so you may need to specify 'F' (Fortran) order.
3. I have a utility at GitHub called dataset2binary that automates this and may be of interest to you, or you could refer to the code at this answer. That is probably overkill, though, because you seem to just be reading one big array of the same datatype. Nevertheless, the code you'd want will be similar, just simpler.
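A minimal sketch of steps 1 and 2, assuming the whitespace-separated columns are lon, lat, and temp as in the sample file, and that the 572 values really do form one 26 x 22 grid (the paths, column names, and dtype here are only assumptions):
import numpy as np
import pandas as pd

# Step 1: read the three whitespace-separated columns (lon, lat, temp).
df = pd.read_csv('/home/weather/data/file', sep=r'\s+',
                 names=['lon', 'lat', 'temp'])

# Step 2: write the temperatures as raw 4-byte reals. 26*22 == 572, so this
# flat array is one full nx*ny record, in the same sequential order that the
# ascii2grd.f90 read loop fills u. Note that ndarray.tofile() always writes
# in C (row-major) order, so if you build a 2-D array first you may need
# flatten(order='F') before writing.
temps = df['temp'].to_numpy(dtype=np.float32)
temps.tofile('/home/weather/test.bin')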

How to check the max number of file open-write-close operations per second in Ubuntu

I have some files on disk. The files have fixed-size lines with the following format:
98969547,1236548896,1236547899,0a234505,1478889565
where 0a234505 is an IP address in hex format.
I should open a file, read one line of the file, and find the IP address. Then I create a directory on disk (if it does not exist) with the same name as the IP address, and create a file under that directory which holds the line.
The file name is today's date, e.g. 2017-02-09. If the directory and its file were created previously, I simply append the corresponding line to the end of the file.
My files contain a lot of lines, e.g. 100000 or more, so these steps must be repeated for every line.
My requirement is to process one file with 100000 lines in one second.
So what I need to understand is: what is the maximum number of file open-write-close operations per second in Ubuntu 16.04?
If the answer does not satisfy my requirement, how should I properly do this?
Put another way: if the OS limits do not allow such a huge number of open-write-close operations, is there another way to do this?
Programming language: C++
OS: Ubuntu 16.04, kernel 4.4.0-62-generic

ANT script to replace values in properties files

I have the below requirement. I have an env.properties file which consists of name/value pairs, and I have one more properties file that is checked out from SVN to the server machine where ANT is installed.
The env.properties file values will not change and remain constant. The example below shows 3 values, but in a real scenario it can contain roughly 20 to 30 values.
env.properties
DataProvider/JMS_Host/destination.value=127.0.0.1
DataProvider/JMS_Port/destination.value=8987
DataProvider/JMS_User/destination.value=admin
svn.properties
DataProvider/JMS_Host/destination.value=7899000--877##
DataProvider/JMS_Port/destination.value=
DataProvider/JMS_User/destination.value=##$%###
The properties file pulled from SVN (svn.properties) will contain the same names, but the values can differ or even be blank. The aim is to replace the values in the svn.properties file with the values from env.properties, so that the end result has the values from the env.properties file. Any help would be really appreciated. There is a similar request at the link below, but it only serves for a few values; with more than 20 to 30 tokens to replace, that would be an ugly way of implementing it.
enter link description here

Can I accumulate gcov line counts? (I don't have LCOV)

The gcov data files (*.gcda) accumulate the counts across multiple tests. That is a wonderful thing. The problem is, I can't figure out how to get the .gcov files to accumulate in the same way the .gcda files do.
I have a large project (53 headers, 54 .cpp files), and some headers are used in multiple .cpp files. The following example is radically simplified; the brute-force approach would take days of manual, tedious work if it were required.
Say for example I have xyz.hpp that defines the xyz class. On line 24 it defines the build() method that builds xyz data, and on line 35 it defines the data() method that returns a reference to the data.
Say I run my test suite, then I execute gcov on abc.cpp. The xyz.hpp.gcov report has a count of 5 for line 24 (build) and a count of zero for line 35 (data). Now I run gcov on def.cpp, and the xyz.hpp.gcov report has a count of zero for line 24 and a count of 7 for line 35. So, instead of accumulating the report information and having a count of 5 for line 24 (build) and 7 for line 35 (data), gcov replaces xyz.hpp.gcov each time, so all counts are reset. I understand why that's the default behavior, but I can't seem to override it. If I'm unable to accumulate the .gcov reports programmatically, I'll be forced to manually compare, say, a dozen different xyz.hpp.gcov files in order to assess the coverage.
It looks like LCOV is able to do this accumulation, but it takes weeks to get new software installed in my current work culture.
Thanks in advance for any help.

Fortran runtime error: Bad real number in item 1 of list input

I am getting the runtime error "Bad real number in item 1 of list input" for this sample problem. Please suggest the correct way.
      implicit double precision (a-h,o-x)
      parameter (ni=150)
      dimension x(ni)
      open(40,file='fortin')
      do 80 i=1,5
         read(40,*)x(i)
         write(*,*)i,x(i)
 80   continue
      stop
      end
The data in the fortin file is arranged in a column:
1.0
5.0
3.0
5.0
7.0
Your code expects only numbers, and it appears you have characters in the file. You can do one of two things to fix this:
Delete the words at the top of the fortin file
Add a single read(40,*) (no need for anything following it) before the loop, as sketched below
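For example, with the second option the start of the program might look like this (a minimal sketch assuming exactly one header line in fortin; add one extra dummy read per header line):
      implicit double precision (a-h,o-x)
      parameter (ni=150)
      dimension x(ni)
      open(40,file='fortin')
      read(40,*)             ! skip the header line before the data
      do 80 i=1,5
         read(40,*)x(i)
         write(*,*)i,x(i)
 80   continue
      stop
      end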
In my case, the problem lay in the data file, not the code.
My problem turned out to be that the file was in Unicode format. When I viewed it in vi, it looked fine. But when I viewed it in a viewer that does not support Unicode, such as Midnight Commander, it looked like a mess. The person who sent me the file later told me that he had saved the file as UTF-16.