I want to change the resolution of my NetCDF file from 1°x1° to 0.1°x0.1° using bilinear interpolation. I use remapbil but it doesn't work; it returns: Unsupported grid type: generic.
I have uploaded file1 and file2.
The command I use: cdo remapbil,infile1 infile2 ofile
The problem is that cdo does not like the presence of the variables lat_bounds and lon_bounds. If you delete these first with nco like this:
ncks -x -v lat_bounds,lon_bounds deseasonalized_2002-2019new_grace_inter.nc modified.nc
(it will give you a warning but it can be ignored).
Then you can use cdo to remap this file successfully. I tried this and it then worked:
cdo remapbil,r360x180 modified.nc test.nc
so I'm sure it will also work mapping one of your files to the other (don't forget to remove the bounds on both files first though).
The command should be cdo remapbil,infile1 infile2 ofile
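Putting the two steps together, here is a minimal sketch (the *_nobounds.nc names are just placeholders, and lat_bounds/lon_bounds are assumed to be the bounds variable names in both of your files):
ncks -x -v lat_bounds,lon_bounds infile1 infile1_nobounds.nc
ncks -x -v lat_bounds,lon_bounds infile2 infile2_nobounds.nc
cdo remapbil,infile1_nobounds.nc infile2_nobounds.nc ofile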
I have files with beautiful, glob-friendly pathnames such as:
/New XXXX_Condition-selected FINAL/677193 2018-06-08 Xxxx Event-Exchange_FINAL/Xxxxx Dome Yyyy Side/Xxxx_General016 #07-08.BMP
(the Xxx...Yyyy strings are placeholders, for privacy reasons). Of course the format is not fixed: the depth of the folder hierarchy can vary, but spaces, letters and symbols such as _, - and # can all appear, either as part of the path or part of the filename, or both.
My goal is to recurse all subfolders, find the .BMP files and convert them to JPG files, without having "double" extensions such as .BMP.JPG: in other words, the above filename must become
/New XXXX_Condition-selected FINAL/677193 2018-06-08 Xxxx Event-Exchange_FINAL/Xxxxx Dome Yyyy Side/Xxxx_General016 #07-08.JPG
I can use either bash shell tools or Python. Can you help me?
PS I have no need for the original files, so they can be overwritten. Of course a solution which doesn't overwrite them is also fine - I'll just follow up with a find . -name "*.BMP" -type f -delete command.
Would you please try:
find . -type f -iname "*.BMP" -exec mogrify -format JPG '{}' +
The command mogrify is a tool of ImageMagick suite and mogrify -format JPG file.BMP is equivalent to convert file.BMP file.JPG.
You can add the same options which are accepted by convert such as -quality.
The benefit of mogrify is it can perform the same conversion on multiple files all at once without specifying the output (converted) filenames.
If the command issues a warning: mogrify-im6.q16: length and filesize do not match, it means the image size stored in the BMP header does not match the actual size of the image data block.
If JPG files are properly produced, you may ignore the warnings. Otherwise you will need to repair the BMP files which cause the warnings.
If the input files and the output files have the same extension (for example, a JPG to JPG conversion with a resize), the original files are overwritten. If they have different extensions, as in this case, the original BMP files are not removed. You can remove them using find as well.
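Putting it together with the cleanup you mentioned, a minimal sketch (only run the delete after you have verified the JPG files):
find . -type f -iname "*.BMP" -exec mogrify -format JPG '{}' +
find . -type f -iname "*.BMP" -delete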
I have file names in the following format:
images.jpeg-028
I would like to transform the file name of every file in a directory from the format above to the following format:
images-028.jpeg
All the numbers at the end of the filenames are three digits long.
Based on this thread on another forum, I am thinking of something along the lines of:
for i in *; do mv "$i"\-(\d+) \-(\d+)"$i"; done
But am open to other Bash-based approaches.
Having done countless file renaming tasks using for loops in bash, I now find the rename utility invaluable.
This should work for your case:
rename 's/\.jpeg-(\d+)/-$1.jpeg/g' images.jpeg-*
Note: I'm referring to the File::Rename module from Perl, not the rename utility that is included on many Linux distros in the util-linux package.
If you already have the version from util-linux, you may want to read this:
Get the Perl rename utility instead of the built-in rename.
If you're set on using a pure bash solution, or you just don't want the hassle of installing rename, this should work:
for i in images.jpeg-*; do
mv "$i" "images-${i##*-}.jpeg"
done
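If the base name is not always images, a slightly more general sketch using the same parameter expansions (it assumes every name ends in .jpeg- followed by the digits):
for i in *.jpeg-*; do
  num=${i##*-}        # part after the last dash, e.g. 028
  base=${i%%.jpeg-*}  # part before .jpeg-, e.g. images
  mv "$i" "${base}-${num}.jpeg"
done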
I have a script that is pushing out some filesystem data to be uploaded to another system.
It would be very handy if I could tell what 'kind' of file each file actually is, because it will help with some querying later on down the road.
So, for example, say that my script is spitting out the following:
/home/myuser/mydata/myfile/data.log
/home/myuser/mydata/myfile/myfile.gz
/home/myuser/mydata/myfile/mod.conf
/home/myuser/mydata/myfile/security
/home/myuser/mydata/myfile/last
In the end, I'd like to see:
/home/myuser/mydata/myfile/data.log log
/home/myuser/mydata/myfile/myfile.gz gz
/home/myuser/mydata/myfile/mod.conf conf
/home/myuser/mydata/myfile/security security
/home/myuser/mydata/myfile/last last
There's got to be a way to do this with regular expressions and sed, but I can't figure it out.
Any suggestions?
EDIT:
I need to get this info via the command line. Looking at the answers so far, I obviously have not made this clear. So with the example data I provided, assume that the data is all being fed via greps and seds (the data is already sanitized). I need to be able to pipe the example data to sed/grep/awk/whatever in order to produce the desired results.
Print the last field, where fields are separated by any non-alphabetic character.
awk -F '[^[:alpha:]]' '{ print $0,$NF }'
/home/myuser/mydata/myfile/data.log log
/home/myuser/mydata/myfile/myfile.gz gz
/home/myuser/mydata/myfile/mod.conf conf
/home/myuser/mydata/myfile/security security
/home/myuser/mydata/myfile/last last
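Since you said the data is being fed from your script, you can pipe it straight in; for example (myscript here is just a placeholder for whatever produces the list):
myscript | awk -F '[^[:alpha:]]' '{ print $0, $NF }'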
This should work for you:
x='/home/myuser/mydata/myfile/security'
( IFS='/.' && arr=( $x ) && echo "${arr[@]:(-1):1}" )
security
x='/home/myuser/mydata/myfile/data.log'
( IFS='/.' && arr=( $x ) && echo "${arr[@]:(-1):1}" )
log
To extract the last element in a filename path:
filename=${path##*/}
To extract characters after a dot in a filename:
extension=${filename##*.}
But (my comment) rather than looking at the extension, it might be better to use file. See man file.
As others have already answered, to parse the file names:
extension="${full_file_name##*.}" # BASH and Kornshell/POSIX only
filename=$(basename "$full_file_name")
dirname=$(dirname "$full_file_name")
Quotes are needed if file names could have spaces, tabs, or other strange characters in them.
You can also test whether a file is a directory, a regular file, or a link with the test command (which is linked to [, so that test -f foo is the same as [ -f foo ]).
However, you said: "it would be very handy if i could tell myself what kind of file each file actually is".
In that case, you may want to investigate the file command. This command will return the file type as determined by some sort of magic file (traditionally in /etc/magic), but newer implementations can use the user's own scheme. It can tell the file type by the extension, by the magic number in the file's header, or by looking at the first few lines of the file (for example, looking for a regular expression like ^#! .*/bash$ on the first line).
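For example, applied to one of your paths (the output depends entirely on the file's actual contents, not its name):
file -b /home/myuser/mydata/myfile/security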
This prints the last field after splitting on slashes and dots:
awk -F '[/.]' '{ print $NF }'
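If you want the original path followed by the suffix, as in your example output, print both:
awk -F '[/.]' '{ print $0, $NF }'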
I have a big file that looks like this:
7f0c41d6-f9c6-47aa-a034-d40bc629c973.csv
159890
159891
24faaed6-62ee-4175-8430-5d73b09911c8.csv
159907
5bad221f-25ef-44fa-9086-fd152e697928.csv
642e4ac3-3d46-4b4c-b5c8-aa2fa54d0b04.csv
d0e145a5-ceb8-4d4b-ae47-11e0c9a6548d.csv
159929
ba678cbd-af57-493b-a69e-e7504b4bc328.csv
7750840f-9bf9-4a68-9f25-a2ba0968d481.csv
159955
159959
And I'm only interested in the *.csv files. Can someone point me to how to remove the entries that do not end with .csv?
Thank you.
grep "\.csv$" file
will pull out only those lines ending in .csv
Then if you want to put them in a different file;
grep "\.csv$" file > newfile
sed is your friend:
sed -i.bak '/\.csv$/!d' file
-i.bak : in-place edit; creates a backup file with a .bak extension
([0-9a-zA-Z-]*\.csv$)
This regex matches only the filenames ending with the .csv extension.
Hope this will help you.
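For example, combined with grep -E (where file is your big file, as in the answers above), it keeps only the matching lines:
grep -E '([0-9a-zA-Z-]*\.csv$)' file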
If you are familiar with the vim text editor (vim or vi is typically installed on many linux boxes), use the following vim Ex mode command to remove lines that don't match a particular pattern:
:v/<pattern>/d
For example, if I wanted to delete all lines that didn't contain "column" I would run:
:v/column/d
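Applied to this question, keeping only the lines that end in .csv:
:v/\.csv$/d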
Hope this helps.
If you do not want to save the file names in another file just to remove the unwanted files, then this may also be an added solution for your needs (understanding that this is an old question).
This single-line for loop uses the grep "\.csv" solution, so you don't need to manage multiple file names being saved here or there.
for f in *; do if [ ! "$(echo "${f}" | grep -Eo '\.csv$')" == ".csv" ]; then rm "${f}"; fi; done
As a visual aid to show you that it works as intended (removing all files except csv files), here is a quick and dirty screenshot showing the results using your sample output.
And here is a slightly shorter version of the single line command:
for f in *; do if [ ! "$(echo "${f}" | grep -o '\.csv$')" ]; then rm "${f}"; fi; done
And here is its sample output, using your sample's csv file names and some randomly generated text files.
The purpose of using such a loop with a conditional is to guarantee that you only rid yourself of the files you want gone (the non-csv files), and only in the current working directory, without parsing the ls command.
Hopefully this helps you and anyone else that is looking for a similar solution.
I have 114 files with the .dat extension to convert to Stata/SE and append, with a substantial number of variables (varying from 81 to 16800). I have reset the maximum number of variables to 32000 (set maxvar 32000), increased the memory (set mem 500m), and I was using the following algorithm to combine a large number of files and to generate several variables by extracting parts of the file names: http://www.ats.ucla.edu/stat/stata/faq/append_many_files.htm
The code looks as follows:
cd "C:\Users\..."
! dir *.dat /a-d /b >d:\Stata_directory\Products_batchfilelist.txt
file open myfile using "d:\Stata_directory\Products_batchfilelist.txt", read
file read myfile line
drop _all
insheet using `line', comma names
gen n = substr("`line'",10,1)
gen m = substr("`line'",12,1)
gen playersnum = substr("`line'",14,1)
save Products_merged.dta, replace
drop _all
file read myfile line
while r(eof)==0 {
insheet using `line', comma names
gen n = substr("`line'",10,1)
gen m = substr("`line'",12,1)
generate playersnum = substr("`line'",14,1)
save `line'.dta, replace
append using Products_merged.dta
save Products_merged.dta,replace
drop _all
file read myfile line
}
The problem is that although the variables n, m, playersnum extracted from the file names are present in each individual file, they disappear in the final "Products_merged.dta" file. Could anyone tell me what the problem could be and whether it is possible to solve it with Stata/SE?
I don't see an obvious problem with the code that would be causing this. It may have something to do with the limits in SE, but that is still unlikely in my mind (you would see an error if a command does something to exceed maxvar).
My only suggestion would be to put a couple of commands inside the append loop that will help you debug:
save `line'.dta, replace
append using Products_merged.dta
assert m!="" & n!="" & playersnum!=""
save Products_merged.dta,replace
This will do two things: ensure your variables exist after each new append (your first-order concern), and check that they are never blank (not your stated concern but a good check anyway).
If you post a couple of the files I could probably give a better answer.