vimrc to detect remote connection - regex

At the moment I have to hard-code the names of servers in my vimrc in order to make it behave differently on remote machines. This is done with a conditional statement using Vim's hostname() function. I want the conditional to be based on the status of the remote connection, not on the hostname.
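For illustration, this is roughly what I have now (the hostnames are made up):
if hostname() == "remotebox1" || hostname() == "remotebox2"
    " settings that only the remote machines should use
endif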
The first possible solution I found was using the following bash command in system():
cat /proc/$PPID/status | head -1 | cut -f2
This does not work because I use GNU screen, which keeps it from detecting my connection status properly.
The second possible solution I am exploring right now is who am i. This reliably shows whether a remote connection has been made, and from which client, but I have trouble getting it to work with system():
if substitute(system('who am i'), "theclient", ????, "") == ""
...
How could I get ???? to extract my client name somehow??
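For illustration, I imagine something in this direction (untested, and the pattern is just a guess):
" 'who am i' prints the connecting client in parentheses for remote
" logins, so match on that instead of a hard-coded client name:
let g:remoteSession = (system('who am i') =~ '(\S\+)')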
Even if the second solution works, allowing me to use the same .vimrc on many different remote machines, it is still tied to one client. I want the conditional to work in all remote sessions, regardless of the client name. So I am wondering: is this possible?

The following line allows me to create a variable that detects the remote connection status:
let g:remoteSession = ($STY == "")
Now you can guard the lines that should only apply in a remote session with:
if g:remoteSession
...
endif
On a side note, I do not know how expensive looking up an environment variable is compared to a global variable, but I am guessing the difference is negligible. Avoiding the system() call, however, is an optimization worth making in an environment like Cygwin, where fork() is inefficient.

Instead of adding conditional logic to a shared ~/.vimrc, you could alternatively source system-local settings. I use the following:
" Source system-specific .vimrc first.
if filereadable(expand('~/local/.vimrc'))
    source ~/local/.vimrc
endif
" Stop sourcing if inclusion guard exists.
if exists('g:loaded_vimrc')
    finish
endif
" Common settings of .vimrc here...
I find this more scalable than trying to maintain an ever-changing list of hostnames in a central location.
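For completeness, the machine-local ~/local/.vimrc can then set the inclusion guard itself when a machine should skip the shared settings entirely (a minimal sketch):
" ~/local/.vimrc on one particular machine
set background=dark      " whatever is specific to this host
let g:loaded_vimrc = 1   " makes the shared .vimrc finish early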

Related

Mounting ecryptfs using C++ mount function

I am trying to mount ecryptfs from within a C++ program. I can definitely mount it without it asking questions by issuing this command at the prompt:
sudo mount -t ecryptfs -o "rw,key=passphrase:passphrase_passwd=geoff,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,ecryptfs_enable_filename_crypto=n,no_sig_cache" ~/source/ ~/target/
Note that in reality, I am passing a full canonical path in case that matters.
But from within the program, calling the mount() function with the same arguments fails with errno == EINVAL:
mount("~/source/", "~/target/", "ecryptfs", MS_NODEV, "rw,key=passphrase:passphrase_passwd=geoff,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,ecryptfs_enable_filename_crypto=n,no_sig_cache")
The program does launch with root privileges and I have checked that I have CAP_SYS_ADMIN.
The mount() function returns -1 and sets errno to EINVAL.
Have I got the arguments correct? Is this maybe a privileges issue?
EDIT: I got it to work by executing mount externally via system(), but would still like to use the function because of reasons.
I believe this is because mount -t ecryptfs actually calls the helper executable mount.ecryptfs, which processes some of the options (in particular key=) itself. What actually gets passed to the kernel is different (you can see this by looking at /proc/mounts afterwards).
If you look closely at https://manpages.ubuntu.com/manpages/kinetic/en/man7/ecryptfs.7.html, key= and ecryptfs_enable_filename_crypto= are listed under "MOUNT HELPER OPTIONS" - the actual kernel module's options are ecryptfs_sig=(fekek_sig) and ecryptfs_fnek_sig=(fnek_sig).
So, if you want to bypass the helper and do the mount directly, you'd need to load the tokens into the kernel's keyring with keyctl (https://man7.org/linux/man-pages/man2/keyctl.2.html) and replace key= with the resulting token signatures, like mount.ecryptfs does.
It does appear that there is a libecryptfs, with functions in ecryptfs.h like ecryptfs_add_passphrase_key_to_keyring, which you can (presumably; not tested) use to do this in a way matching what mount.ecryptfs does.
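A rough, untested sketch of that approach (the paths and passphrase here are placeholders; link with -lecryptfs from ecryptfs-utils):
#include <cstdio>
#include <string>
#include <sys/mount.h>
extern "C" {
#include <ecryptfs.h>
}

int main() {
    char sig[ECRYPTFS_SIG_SIZE_HEX + 1] = {0};
    char passphrase[] = "passphrase_passwd";  // placeholder
    char salt[ECRYPTFS_SALT_SIZE] = {0};      // real code needs a proper salt

    // Derive the FEKEK from the passphrase and insert it into the
    // kernel keyring; sig receives the hex token signature.
    if (ecryptfs_add_passphrase_key_to_keyring(sig, passphrase, salt) < 0)
        return 1;

    // Hand the kernel module its own options (ecryptfs_sig=) instead
    // of the mount-helper option key=.
    std::string opts = std::string("ecryptfs_sig=") + sig +
        ",ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n";
    if (mount("/home/geoff/source", "/home/geoff/target",
              "ecryptfs", MS_NODEV, opts.c_str()) != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}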

FileName Port is not supported with connection or merge option

I need to create a CSV flat file and store it at a particular path on an FTP server.
The file name should be created dynamically with a timestamp. I have created the FileName port in Informatica and mapped it to an expression I created. When I run the workflow, I get the error below:
Severity Timestamp Node Thread Message Code Message
ERROR 28-06-2017 07:31:19 PM node01_oktst93 WRITER_1_*_1 WRT_8419 Flat File Target [NewOrders] FileName Port is not supported with connection or merge option.
Please help me resolve this without deleting the FileName port.
Thanks
If your requirement is to create a dynamic file during each session run, please check the steps below:
1) Connect the source qualifier to an expression transformation. In the expression transformation, create an output port (call it File_Name) and assign it the expression 'FileNameXXX'||to_char(sessstarttime, 'YYYYMMDDHH24MISS')||'.csv'
2) Now connect the expression transformation to the target, and connect the File_Name port of the expression transformation to the FileName port of the target file definition.
3) Create a workflow and run the workflow.
I have used sessstarttime, as it is constant throughout the session run. If you use sysdate, a new file will be created whenever a new transaction occurs during the session run.
The FileName port option doesn't work with the FTP target option. If you are simply using a local flat file, disable the 'Append if Exists' option at the session level.
Please refer to the Informatica KB article below:
https://kb.informatica.com/solution/11/Pages/102937.aspx
Late answer but may help some.
Since the FileName port option doesn't work with the FTP target option, another way is to:
1) Create a variable in the workflow.
2) Create an assignment task in between.
3) Set the $variable to the full path, i.e.
'/path/to_drop/file/name_of_file_'||to_char(SYSDATE, 'YYYYMMDD')||'.csv'
4) Use that $variable in your session under workflows, and add it in your mappings.

How to evaluate an expression to be used for a gdb monitor command?

Inside a scripted gdb session I want to use monitor <cmd> where cmd should contain the address of a symbol. For example:
monitor foobar &myVariable
should become:
monitor foobar 0x00004711
This is because the remote side cannot evaluate the expression. Whatever I tried, the string "&myVariable" was sent instead of the address.
I played around with convenience variables and stuff, but this is the only workaround I found:
# write the command into a file and execute this file
# didn't find a direct way to include the address into the monitor command
set logging overwrite on
set logging on command.tmp
printf "monitor foobar 0x%08x\n", &myVariable
set logging off
source command.tmp
Any ideas to solve this in a more elegant way?
The simplest way to do this is to use the gdb eval command, which was introduced for exactly this purpose. It works a bit like printf:
(gdb) eval "monitor foobar %p", &myVariable
(I didn't actually try this, so caution.)
If your gdb doesn't have eval, then it is on the old side. I would suggest upgrading.
If you can't upgrade, then you can also script this using Python.
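For example, something along these lines inside a gdb script (an untested sketch; the python block evaluates the address, then issues the command):
python
# evaluate the address on the host side, then build the command string
addr = gdb.parse_and_eval('&myVariable')
gdb.execute('monitor foobar 0x%x' % int(addr))
end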
If your gdb doesn't have Python, then it is either very old (upgrade!) or compiled without Python support (recompile!).
If you can't manage to get either of these features, then I am afraid the "write out a script and source it" approach is all that is left.

zmq ventilator/worker/sink paradigm not working w/ subprocess

I am trying to replicate the ventilator/workers/sink paradigm described in the ZMQ guide. I have the same Python ventilator, the same C++ worker, and the same Python sink as described in the ZMQ examples. I want to launch the ventilator, workers, and sink from one main Python script, so I created class wrappers around the ventilator and sink, and both of those classes subclass multiprocessing.Process. Since the C++ worker is a binary, I launch it with Python's subprocess.Popen call.
The order of starting all of this up is as follows:
h = subprocess.Popen('test') # test is the name of the binary
time.sleep(1)
s = sinkObj.start()
time.sleep(1)
v = ventObj.start()
What I am finding is that no data is getting through the system when I start up the components like this. However, if I start the C++ binary in its own shell, and only start the sinkObj and ventObj from the main python script, it works fine.
I apologize in advance if this is more of a Python question than a ZMQ question, but I haven't run into issues like this with Python's subprocess before. I have also tried using os.system() instead of subprocess, but I get the same issue. I put all the code at https://github.com/kkarrancsu/zmqtest if anybody is curious to test it out; there is a readme on that repo which tells you what the files are.
Any ideas on why this could be happening?
------------------------- UPDATE --------------------
I found that if I create a shell script which simply launches the C++ binary, and call that shell script with os.system('run_the_shell_script'), it works! So this means there is something wrong with the way I am using subprocess.Popen(...), but I can't seem to pinpoint what the issue is. I tried the shell=True flag, but it still hangs with that...
It's the name of the worker binary file that causes the problem.
There are two solutions:
1) Change the name of the binary file test to test_new, and do the same in your All.py file; then it will work as you desire.
2) Substitute subprocess.Popen('./test', shell=True) for subprocess.Popen('test', shell=True).
test is a standard Linux command. If you type the following in your shell:
$ echo $PATH
you may see that . is in the last position, if it is present at all. This means the shell searches the directories listed in $PATH in order, and only tries the current directory . after it has failed to find the binary everywhere else.
When you execute subprocess.Popen('test', shell=True), the shell finds the system's test before it ever tries the . directory, and so your worker binary is never executed.
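In other words, the fix in the launch script is just (a minimal sketch, using the file name from the question):
import subprocess

# An explicit path bypasses the $PATH lookup, so the shell cannot
# pick up the standard 'test' utility instead of the worker binary.
h = subprocess.Popen('./test', shell=True)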
As I see it, the ventilator and sink bind() to ports 6557 and 6558, and the C++ app connect()s to these ports. In this case, if you start the C++ app first, it will try to connect() to the endpoints, but as nothing is bound there, the attempt will drop silently.
In ZeroMQ the basic principle is "first bind, then connect": you should not connect() before you have bind()-ed something on the socket. Imagine bind() is the 'server' and connect() is the 'client'; you cannot connect a client to a non-existent server. In ZeroMQ every socket can play the 'server' role, but you should have only one bind()-ing socket per URL, while you can have multiple connect()s.
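Applied to the launcher from the question, that ordering would look like this (a sketch reusing the names from the question):
import time, subprocess

s = sinkObj.start()             # sink bind()s its PULL socket first
v = ventObj.start()             # ventilator bind()s its PUSH socket
time.sleep(1)                   # give the bind()s a moment
h = subprocess.Popen('./test')  # worker connect()s last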

Wmic /format switch invalid XSL?

I have a quick question that should be relatively simple for those who have more experience with the WMIC command processor than I do (and since I'm an absolute beginner, that's not hard :-)).
I fail to understand why the wmic /format switch works the way it does. I open up cmd.exe and type:
wmic process list brief /format:htable > processlist.html
This does exactly what I want, with no trouble at all. Whereas if I enter the interactive wmic prompt and try to execute the same command exactly as above...
wmic:root\cli>process list brief /format:htable > processlist.html
I receive the error: "Invalid XSL format (or) file name."
Note that I have already copied the XSL files from the wbem directory to the system32 directory.
Can someone explain why these two commands, which look exactly the same to me except that one is executed outside the wmic environment and the other from inside it, behave differently, with the latter one failing? I just fail to understand it.
Please advise so I can comprehend this a bit better! :-)
Try this
copy /y %WINDIR%\system32\wbem\en-US\*.xsl %WINDIR%\system32\
And then
wmic:root\cli>process list brief /format:htable.xsl > processlist.html
Note the presence of the extension after "htable"
You are attempting to use CMD.EXE > redirection while you are within the interactive WMIC context. That can't work.
You can use the WMIC /output:filename switch while in interactive mode. Each subsequent command will overwrite the output of the previous command. You can get multiple commands to go to the same file by using /append:filename instead. You can reset the output back to stdout using /output:stdout.
/output:processlist.html
process list brief /format:htable
/output:stdout
Did you try specifying a full path in the wmic:root\cli>process call? My bet is that the first command worked because it wrote the file to the current directory.