How to check Symbolic Link to a directory? - c++

$ ls -l
total 4
drwxr-xr-x 2 t domain users 4096 Nov 3 17:55 original
lrwxrwxrwx 1 t domain users 8 Nov 3 17:56 symbolic -> original
Here symbolic is a symbolic link pointing to original folder.
Contents of original folder.
$ ls -l original/
total 8
-rw-r--r-- 2 t domain users 4096 Nov 3 17:55 mydoc.docx
I have a file path in my code like:
std::string fileName = "/home/Downloads/symbolic/mydoc.docx";
path filePath(fileName);
How to check if fileName is a symbolic link?
is_symlink(filePath) is returning false and read_symlink(filePath) is returning empty path.
I want to use canonical only if it is a symbolic link. Like this:
if(is_symlink(filePath)) --> This is returning false. Any other alternative?
{
newFilePath = canonical(filePath);
}

According to the man page, is_symlink needs a filesystem::path as its parameter, not a std::string. You may want to try that.

How do I convert s.st_dev to /sys/block/<name>

I want to determine whether a file is on an HDD or an SSD.
I found out that I could check the type of drive using the /sys/block info:
prompt$ cat /sys/block/sdc/queue/rotational
1
This has 1 if it is rotational or unknown. It is 0 when the disk is an SSD.
Now I have a file and want to know whether it is on an HDD or an SSD. I can stat() the file to get the device number:
struct stat s;
stat(filename, &s);
// what do I do with s.st_dev now?
I'd like to convert s.st_dev to a drive name as I have in my /sys/block directory, in C.
What functions do I have to use to get that info? Or is it available in some /proc file?
First of all, for the input file, we need to find the partition on which the file exists. You can use the following command for that:
df -P <file name> | tail -1 | cut -d ' ' -f 1
which will give you output something like this: /dev/sda3
Now you can apply the following command to determine HDD vs. SSD (use the drive name without the partition number, e.g. sda for /dev/sda3):
cat /sys/block/sda/queue/rotational
You can use popen in your program to get the output of these system commands.
Okay, I really found it!
So my first solution, reading the partitions, wouldn't work. It would give me sdc1 instead of sdc. I also found /proc/mounts, which includes some info about what's mounted where, but it would still not help me convert the value to sdc.
Instead, I found another solution, which is to look at the block devices and more specifically this softlink:
/sys/dev/block/<major>:<minor>
The <major> and <minor> numbers can be extracted using the functions of the same name in C (I use C++, but the basic functions are all in C):
#include <sys/types.h>
...
std::string dev_path("/sys/dev/block/");
dev_path += std::to_string(major(s.st_dev));
dev_path += ":";
dev_path += std::to_string(minor(s.st_dev));
That path is a soft link and I want to get the real path of the destination:
char device_path[PATH_MAX + 1];
if(realpath(dev_path.c_str(), device_path) == nullptr)
{
return true;
}
From that real path, I then break up the path in segments and search for a directory with a sub-directory named queue and a file named rotational.
advgetopt::string_list_t segments;
advgetopt::split_string(device_path, segments, { "/" });
while(segments.size() > 3)
{
std::string path("/"
+ boost::algorithm::join(segments, "/")
+ "/queue/rotational");
std::ifstream in;
in.open(path);
if(in.is_open())
{
char line[32];
in.getline(line, sizeof(line));
return std::atoi(line) != 0;
}
segments.pop_back();
}
The in.getline() is what reads the .../queue/rotational file. If the value is not 0, I consider the drive to be an HDD. If anything fails, I also assume an HDD. The only way my function returns false is if the rotational file exists and is set to 0.
My function can be found here. The line number may change over time, search for tool::is_hdd.
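For readers without the advgetopt and boost helpers, the same walk can be sketched with only the standard library (a sketch, not the author's exact function; like the original, it returns true, i.e. HDD, on any failure):

```cpp
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   // major()/minor() on glibc
#include <limits.h>
#include <stdlib.h>
#include <fstream>
#include <string>

// Resolve /sys/dev/block/<major>:<minor> to its real path, then walk
// the path upward until a queue/rotational file is found.
bool is_hdd(const std::string& filename)
{
    struct stat s;
    if (stat(filename.c_str(), &s) != 0)
        return true;  // can't stat: assume HDD

    std::string dev_path("/sys/dev/block/");
    dev_path += std::to_string(major(s.st_dev));
    dev_path += ':';
    dev_path += std::to_string(minor(s.st_dev));

    char device_path[PATH_MAX + 1];
    if (realpath(dev_path.c_str(), device_path) == nullptr)
        return true;  // dangling or missing link: assume HDD

    std::string dir(device_path);
    while (dir.find('/') != std::string::npos)
    {
        std::ifstream in(dir + "/queue/rotational");
        if (in.is_open())
        {
            int rotational = 1;
            in >> rotational;
            return rotational != 0;
        }
        dir.resize(dir.rfind('/'));   // drop the last path segment
    }
    return true;  // no rotational file found: assume HDD
}
```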
Old "Solution"
The file /proc/partitions includes the major & minor device numbers, a size, and a name. So I just have to parse that one and return the name I need. Voilà.
$ cat /proc/partitions
major minor #blocks name
8 16 1953514584 sdb
8 17 248832 sdb1
8 18 1 sdb2
8 21 1953263616 sdb5
8 0 1953514584 sda
8 1 248832 sda1
8 2 1 sda2
8 5 1953263616 sda5
11 0 1048575 sr0
8 32 976764928 sdc
8 33 976763904 sdc1
252 0 4096 dm-0
252 1 1936375808 dm-1
252 2 1936375808 dm-2
252 3 1936375808 dm-3
252 4 16744448 dm-4
As you can see in this example, the first two lines are the column names and an empty line. The Name column is what I was looking for.

Ansible: ios upgrade router: check "spacefree_kb" prior to image copy

I'm writing a playbook for an IOS upgrade of multiple switches and have most pieces working, with the exception of the free-flash check. Basically, I want to check whether there is enough flash space free prior to copying the image.
I tried using the gather facts module but it is not working how I expected:
from gather facts I see this:
"ansible_net_filesystems_info": {
"flash:": {
"spacefree_kb": 37492,
"spacetotal_kb": 56574
This is the check I want to do:
fail:
  msg: 'This device does not have enough flash memory to proceed.'
when: "ansible_net_filesystems_info | json_query('*.spacefree_kb')|int < new_ios_filesize|int"
From doing some research I understand that any value returned by a jinja2 template will be a string so my check is failing:
Pass integer variable to task without losing the integer type
The solution suggested in the link doesn't seem to work for me even with ansible 2.7.
I then resorted to store the results of 'dir' in a register and tried using regex_search but can't seem to get the syntax right.
(similar to this :
Ansible regex_findall multiple strings)
"stdout_lines": [
[
"Directory of flash:/",
"",
" 2 -rwx 785 Jul 2 2019 15:39:05 +00:00 dhcp-snooping.db",
" 3 -rwx 1944 Jul 28 2018 20:05:20 +00:00 vlan.dat",
" 4 -rwx 3096 Jul 2 2019 01:03:26 +00:00 multiple-fs",
" 5 -rwx 1915 Jul 2 2019 01:03:26 +00:00 private-config.text",
" 7 -rwx 35800 Jul 2 2019 01:03:25 +00:00 config.text",
" 8 drwx 512 Apr 25 2015 00:03:16 +00:00 c2960s-universalk9-mz.150-2.SE7",
" 622 drwx 512 Apr 25 2015 00:03:17 +00:00 dc_profile_dir",
"",
"57931776 bytes total (38391808 bytes free)"
]
]
Can anyone provide some insight to this seemingly simple task? I just want '38391808' as an integer from the example above (or any other suggestion). I'm fairly new to ansible.
Thanks in advance.
json_query wildcard expressions return a list. The tasks below
- set_fact:
    free_space: "{{ ansible_net_filesystems_info|
                    json_query('*.spacefree_kb') }}"

- debug:
    var: free_space
give the list
"free_space": [
37492
]
which can neither be converted to an integer nor compared to one. This is the reason for the problem.
The solution is simple. Just take the first element of the list and the condition will start working:
- fail:
    msg: 'This device does not have enough flash memory to proceed.'
  when: ansible_net_filesystems_info|
        json_query('*.spacefree_kb')|
        first|
        int < new_ios_filesize|int
Moreover, json_query is not necessary. The attribute spacefree_kb can be referenced directly:
- fail:
    msg: 'This device does not have enough flash memory to proceed.'
  when: ansible_net_filesystems_info['flash:'].spacefree_kb|
        int < new_ios_filesize|int
json_query has an advantage: see this example on a C9500:
[{'bootflash:': {'spacetotal_kb': 10986424.0, 'spacefree_kb': 4391116.0}}]
Yes, they changed flash: to bootflash:.

Import file from a folder based on a regular expression

I am working with DHS data, which involves various data files with a consistent naming located in different folders. Each folder contains data for a specific country and survey year.
I would like to import datasets whose name contains the component 'HR'; for example, ETHR41FL.DTA. The 'HR' part is consistent but the other components of the name vary depending on country and survey year. I need to work with one dataset at a time and then move on to the next, so I believe an automated search would be helpful.
Running the command below gives:
dir "*.dta"
42.6M 5/17/07 10:49 ETBR41FL.dta
19.4M 7/17/06 12:32 ETHR41FL.DTA
60.5M 7/17/06 12:33 ETIR41FL.DTA
10.6M 7/17/06 12:33 ETKR41FL.DTA
234.4k 4/05/07 12:36 ETWI41FL.DTA
I have tried the following approach which did not go through as desired and might not be the best or most direct approach:
local datafiles : dir . files "*.dta" //store file names in a macro
di `datafiles'
etbr41fl.dtaethr41fl.dtaetir41fl.dtaetkr41fl.dtaetwi41fl.dta
The next step, I think, would be to store the value of the macro datafiles above in a variable (since strupper seems to work with variables but not macros), then convert to uppercase and extract the string ETHR41FL.dta. However, I encounter a problem when I do this:
local datafiles : dir . files "*.dta" //store file names in a macro
gen datafiles= `datafiles'
invalid '"ethr41fl.dta'
If I try the command below it works but gives a variable of empty values:
local datafiles : dir . files "*.dta" //store file names in a macro
gen datafiles= "`datafiles'"
How can I store the components of datafiles into a new variable?
If this works I could then extract the required string using a regular expression and import the dataset:
gen targetfile= regexs(0) if(regexm(`datafiles', "[A-Z][A-Z][H][R][0-9][0-9][A-Z][A-Z]"))
However, I would also appreciate a different approach.
Following Nick's advice to continue working with local macros rather than putting filenames into Stata variables, here is some technique to accomplish your stated objective. I agree with Nick to ignore the capitalization of the filenames provided by Windows, which is a case-insensitive filesystem. My example will work with case-sensitive filesystems, but will match any upper- or lower- or mixed-case filenames.
. dir *.dta
-rw-r--r-- 1 lisowskiw staff 1199 Jan 18 10:04 a space.dta
-rw-r--r-- 1 lisowskiw staff 1199 Jan 18 10:04 etbr41fl.dta
-rw-r--r-- 1 lisowskiw staff 1199 Jan 18 10:04 ethr41fl.dta
-rw-r--r-- 1 lisowskiw staff 1199 Jan 18 10:04 etir41fl.dta
-rw-r--r-- 1 lisowskiw staff 1199 Jan 18 10:04 etkr41fl.dta
-rw-r--r-- 1 lisowskiw staff 1199 Jan 18 10:04 etwi41fl.dta
. local datafiles : dir . files "*.dta"
. di `"`datafiles'"'
"a space.dta" "etbr41fl.dta" "ethr41fl.dta" "etir41fl.dta" "etkr41fl.dta" "etwi41fl.dta"
. foreach file of local datafiles {
2. display "`file' testing"
3. if regexm(upper("`file'"),"[A-Z][A-Z][H][R][0-9][0-9][A-Z][A-Z]") {
4. display "`file' matched!"
5. // process file here
. }
6. }
a space.dta testing
etbr41fl.dta testing
ethr41fl.dta testing
ethr41fl.dta matched!
etir41fl.dta testing
etkr41fl.dta testing
etwi41fl.dta testing
You can use filelist (from SSC) to create a dataset of file names. You can then leverage the full set of Stata data management tools to identify the file you want to target. To install filelist, type in Stata's command window:
ssc install filelist
Here's a quick example with datasets that follow the example provided:
. filelist, norecur
Number of files found = 6
. list if strpos(upper(filename),".DTA")
+---------------------------------+
| dirname filename fsize |
|---------------------------------|
1. | . ETBR41FL.dta 12,207 |
2. | . ETHR41FL.DTA 12,207 |
3. | . ETIR41FL.DTA 12,207 |
4. | . ETKR41FL.DTA 12,207 |
5. | . ETWI41FL.DTA 12,207 |
+---------------------------------+
. keep if regexm(upper(filename), "[A-Z][A-Z][H][R][0-9][0-9][A-Z][A-Z]")
(5 observations deleted)
. list
+---------------------------------+
| dirname filename fsize |
|---------------------------------|
1. | . ETHR41FL.DTA 12,207 |
+---------------------------------+
.
. * with only one observation in memory, use immediate macro expansion
. * to form the file name to read in memory
. use "`=filename'", clear
(1978 Automobile Data)
. describe, short
Contains data from ETHR41FL.DTA
obs: 74 1978 Automobile Data
vars: 12 18 Jan 2016 11:58
size: 3,182
Sorted by: foreign
I find the question very puzzling as it is about extracting a particular filename; but if you know the filename you want, you can just type it directly. You may need to revise your question if the point is different.
However, let's discuss some technique.
Putting file names inside Stata variables (meaning, strictly, columns in the dataset) is possible in principle, but it is only rarely the best idea. You should keep going in the direction you started, namely defining and then manipulating local macros.
In this case the variable element can be extracted by inspection, but let's show how to remove some common elements:
. local names etbr41fl.dta ethr41fl.dta etir41fl.dta etkr41fl.dta etwi41fl.dta
. local names : subinstr local names ".dta" "", all
. local names : subinstr local names "et" "", all
. di "`names'"
br41fl hr41fl ir41fl kr41fl wi41fl
That's enough to show more technique, which is that you can loop over such names. In fact, with the construct you illustrate, you can do that anyway, and neither regular expressions nor anything else is needed:
. local datafiles : dir . files "*.dta"
. foreach f of local datafiles {
... using "`f'"
}
. foreach n of local names {
... using "et`n'.dta"
}
The examples here show a detail when giving literal strings, namely that " " are often needed as delimiters (and rarely harmful).
Note. Upper case and lower case in file names is probably irrelevant here. Stata will translate.
Note. You say that
. gen datafiles = "`datafiles'"
gives empty values. That's likely to be because you executed that statement in a locale where the local macro was invisible. Common examples are: executing one command from a do-file editor window and another from the main command window; executing commands one by one from a do-file editor window. That's why local macros are so named; they are only visible within the same block of code.
In this particular case you do not really need to use a regular expression.
The strmatch() function will do the job equally well:
local datafiles etbr41fl.dta ethr41fl.dta etir41fl.dta etkr41fl.dta etwi41fl.dta
foreach x of local datafiles {
if strmatch(upper("`x'"), "*HR*") display "`x'"
}
ethr41fl.dta
The use of the upper() function is optional.

How to Regex in a script to gzip log files

I would like to gzip log files but I cannot work out how to run a regex expression in my command.
My Log file look like this, they roll every hour.
-rw-r--r-- 1 aus nds 191353 Sep 28 01:59 fubar.log.20150928-01
-rw-r--r-- 1 aus nds 191058 Sep 28 02:59 fubar.log.20150928-02
-rw-r--r-- 1 aus nds 190991 Sep 28 03:59 fubar.log.20150928-03
-rw-r--r-- 1 aus nds 191388 Sep 28 04:59 fubar.log.20150928-04
script.
FUBAR_DATE=$(date -d "days ago" +"%Y%m%d ")
fubar_file="/apps/fubar/logs/fubar.log."$AUS_DATE"-^[0-9]"
/bin/gzip $fubar_file
I have tried a few variants of the regex but without success; can you see the simple error in my code?
Thanks in advance.
I did:
$ fubar_file="./fubar.log."${FUBAR_DATE%% }"-[0-9][0-9]"
and it worked for me.
Why not make fubar_file an array holding the matching log file names, and then use a loop to gzip them individually? Presuming AUS_DATE contains 20150928:
# FUBAR_DATE=$(date -d "days ago" +"%Y%m%d ") # not needed for gzip
fubar_file=( /apps/fubar/logs/fubar.log.$AUS_DATE-[0-9][0-9] )
for i in "${fubar_file[@]}"; do
gzip "$i"
done
or if you do not need to preserve the filenames in the array for later use, just gzip the files with a for loop:
for i in /apps/fubar/logs/fubar.log.$AUS_DATE-[0-9][0-9]; do
gzip "$i"
done
or, simply use find to match the files and gzip them:
find /apps/fubar/logs -type f -name "fubar.log.$AUS_DATE-[0-9][0-9]" -execdir gzip '{}' +
Note: all answers presume AUS_DATE contains 20150928.

parse command line arguments in a different order

I have a small C++ program that reads arguments from bash.
Let's say that I have a folder with some files with two different extensions.
Example: file1.ext1 file1.ext2 file2.ext1 file2.ext2 ...
if I execute the program with this command: ./readargs *.ext1
it will read all the files with the ext1.
if I execute ./readargs *.ext1 *.ext2 it will read all the .ext1 files and then all the .ext2 files.
My question is: how can I execute the program so that it reads in this order: file1.ext1 file1.ext2 file2.ext1 file2.ext2 ...? Can I handle that from the command line or do I need to handle it in the code?
If the names of your files are really of the form file1.ext1 file1.ext2 file2.ext1 file2.ext2, then you can sort them with
echo *.{ext1,ext2}|sort -u
the above gives the output:
$ ll | grep ext
23880775 0 -rw-r--r-- 1 user users 0 Apr 29 13:28 file1.ext1
23880789 0 -rw-r--r-- 1 user users 0 Apr 29 13:28 file1.ext2
23880787 0 -rw-r--r-- 1 user users 0 Apr 29 13:28 file2.ext1
23880784 0 -rw-r--r-- 1 user users 0 Apr 29 13:28 file2.ext2
$ echo *.{ext1,ext2} | sort -u
file1.ext1 file2.ext1 file1.ext2 file2.ext2
Then you copy the output and call your program. But if you do in fact need the files with .ext1 before the files with .ext2, then you have to either make sure that .ext1 sorts alphabetically before .ext2 or use another sorting criterion.
Optionally you could also adapt your executable to handle command line arguments in the correct order, but if you do already have an executable I'd recommend the first solution as work-around.
Edit: this glob also expands in lexical order:
$ echo *.ext[1,2]
file1.ext1 file1.ext2 file2.ext1 file2.ext2