I am using RSE to connect to a Linux RDT server. I connect through SOCKS (SSH tunneling from another machine), but that's not the issue, because I have the same problem when I connect directly (when I am on the same LAN).
The settings are pretty straightforward, and I can connect to the remote project and save files.
However, when I try to build I get:
Error: Program "rm" not found in PATH
Needless to say, the environment variable PATH is defined, and yes, it includes /bin among others.
Can anyone guess what the problem is here?
After that, I get a "can not clean programmatically: build workspace path is not specified" message.
Why don't you simply use the absolute pathname, /bin/rm?
Also, spaces in PATH could give you problems.
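For instance, a quick sanity check in a shell on the remote machine (assuming a typical Linux setup) would be:

which rm
echo $PATH

If both look right there, keep in mind that the PATH seen by the build process spawned through RSE may still differ from the one in your interactive login shell.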
I am using VS Code for my C++ development with Docker (Linux), and I have an issue with my compile-to-edit work cycle.
Namely, I get errors with paths like this from the compiler:
In file included from /rootsrc/myproj/test.cpp:33:
<here is some error description>
And now if I hold Ctrl and left-click on the file reference, VS Code has no idea where that file is and gives me a "No matching result" error.
Is there some way to set up VS Code so that the component responsible for resolving path links in the Terminal knows to first cut off the Docker base dir and then prepend my host dir before resolving the link?
Thanks in advance :)
I'm not familiar with ngrok; I am reading a book on Django and am trying to set it up. Another question on here (ngrok command not found) said to put the executable in /usr/local/bin. I put it there, but when I run ./ngrok http 8000 it returns zsh: no such file or directory: ./ngrok
Some things I can add: I am using a virtual environment, and echo $PATH returns the following: /Users/justin/Desktop/djangoByExample/sm/env/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
The only thing I can think of is that because I am in a virtual environment it is not looking at /usr/local/bin on my machine, and that I need to put this ngrok executable somewhere else related to my virtualenv?
Not sure if I provided enough info, please let me know if anything is missing and thanks for any help.
Some unix 101:
A single dot ('.') refers to the current directory.
A double dot ('..') refers to the parent directory.
As a result, executing ./ngrok will look for ngrok in the current directory. If you moved it to /usr/local/bin but you are in /Users/justin, it will still look for /Users/justin/ngrok.
You can execute a program located in any directory listed in $PATH by using just the program name, with no directory reference:
ngrok
That's it.
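A quick way to confirm this (assuming the binary really did end up in /usr/local/bin and is executable):

ls -l /usr/local/bin/ngrok
ngrok http 8000

The first command verifies the file exists and has execute permission; the second runs it by name, letting the shell find it through $PATH.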
I'm working on a Makefile project in Visual Studio, modifying code on my laptop and remotely building on a Linux server which I connect to via SSH.
I configured my project Property pages as such:
General: https://i.stack.imgur.com/3WdP6.png
Debugging: https://i.stack.imgur.com/zI5ua.png
Remote Build > Build command line: cd $(RemoteProjectDir) && echo password | sudo -S make
In the remote project directory I already have every file of the project, the Makefile too. When pressing Compile I expect VS to copy the changed code from the local directory, file by file, to the remote one, but the only error I get is this:
Linux.Makefile.Target(108,5): error : Cannot copy \foo\bar\file.cpp remotely to /home/user/projects/MyProject/file.cpp
At line 108 of the .targets file there is this tag:
<CopySources
    Sources="@(FinalSourcesToCopyRemotely)"
    AdditionalSources="@(AdditionalSourcesToCopyRemotely)"
    ProjectDir="$(ProjectDir)"
    RemoteProjectDir="$(RemoteProjectDir)"
    RemoteTarget="$(ResolvedRemoteTarget)"
    IntermediateDir="$(IntDir)"
    RemoteProjectDirFile="$(RemoteProjectDirFile)"
    UpToDateFile="$(CopySourcesUpToDateFile)"
    LocalRemoteCopySources="$(LocalRemoteCopySources)">
  <Output TaskParameter="ResolvedRemoteProjectDir" PropertyName="_ResolvedRemoteProjectDir" />
</CopySources>
Can this file be the cause of the problem for some reason? Is it a good idea to tinker with a .targets file?
I've already remotely built another project with the same configuration and a similar Makefile (adapted for paths and file names) and it worked just fine.
[EDIT]: I've added the command echo password | sudo -S make to deal with the password prompt from sudo; this worked in the other project, and I still get the error.
You can't use sudo when debugging, so I'd guess you can't use sudo when building either. The problem is that sudo prompts for a password and VSLinux can't handle that. One option is to configure sudo so it doesn't request a password, but that's not advised. Can you change your setup so it doesn't require sudo?
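For reference, the not-advised passwordless option would be a sudoers entry roughly like this (a sketch; the user name and the path to make are assumptions, and sudoers should only ever be edited via visudo):

user ALL=(ALL) NOPASSWD: /usr/bin/make

Limiting the exemption to make is less dangerous than a blanket NOPASSWD: ALL, but it still weakens security, which is why it isn't advised.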
Long story short: if you get this error, just create another project with the exact same properties as the first one; that'll do.
The problem was probably due to the fact that I changed the properties several times before landing on the right ones, which could have created some configuration(-ish) file that was left unchanged by the later modifications.
I have access to a server where there is a lot of data. I can't copy all of the data to my computer.
I can't compile the program I want on the server because the server doesn't have all the libs I need.
I don't think the server admin would be very happy to see me coming and asking him to install some libs just for me...
So, I am trying to figure out if there is a way to open a file, as with
FILE *fopen(const char *filename, const char *mode);
or
void std::ifstream::open(const char* filename, ios_base::openmode mode = ios_base::in);
but over an SSH connection, and then read the file as I do in a usual program.
Both my computer and the server are running Linux.
I assume you are working on your Linux laptop and the remote machine is some supercomputer.
First, some non-technical advice: ask permission before accessing the data remotely. In some workplaces you are not allowed to do that, even if it is technically possible.
You could sort of use libssh for that purpose, but you'll need to do some coding and to read its documentation.
You could consider using some FUSE file system (on your laptop), e.g. sshfs; you would then be able to access the supercomputer's files as, say, /sshfilesystem/foo.bar. It is probably the slowest solution, and probably not a very reliable one. I don't really recommend it.
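For instance, mounting would look roughly like this (a sketch; the host name and paths are placeholders):

mkdir -p ~/remote-data
sshfs user@supercomputer:/data ~/remote-data

Your program can then fopen files under ~/remote-data as if they were local; unmount with fusermount -u ~/remote-data when done.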
You could ask permission to use NFS mounts.
Maybe you might consider HTTPS access (if the remote computer provides it for your files) using some HTTP/HTTPS client library like libcurl (or, the other way round, some HTTP/HTTPS server library like libonion).
And you might (but ask permission first!) use some TLS connection (e.g. manually start a server-like program on the remote supercomputer), perhaps through OpenSSL or libgnutls.
Lastly, you could consider installing (i.e. politely asking for the installation of) some database software (e.g. a PostgreSQL, MariaDB, Redis, or MongoDB server) on the remote computer and making your program a database client application...
BTW, things might be different if you are accessing a few dozen terabyte-sized files randomly (each run reading a few kilobytes inside them), or a million files of which a given run accesses only a dozen with sequential reads, each file of reasonable size (a few megabytes). In other words, DNA data, video films, HTML documents, source code, ... are all different cases!
Well, the answer to your question is no, as already stated several times (unless you want to implement ssh yourself, which is beyond the scope of sanity).
But as you also describe your real problem, you're probably just asking the wrong question, so here are some alternatives:
Alternative 1
Link the library you want to use statically to your binary. Say you want to link libfoo statically:
Make sure you have libfoo.a (the object archive of your library) in your library search path. Often, development packages for a library provided by your distribution already contain it; if not, compile the library yourself with options that enable the creation of the static library.
Assuming the GNU toolchain, build your program with the following flags: -Wl,-Bstatic -lfoo -Wl,-Bdynamic (instead of just -lfoo)
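A minimal sketch of the resulting link line (program and library names are placeholders):

gcc main.c -o myprog -Wl,-Bstatic -lfoo -Wl,-Bdynamic

The trailing -Wl,-Bdynamic switches the linker back to dynamic mode, so default libraries such as libc are still linked dynamically.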
Alternative 2
Create your binary as usual (linked against the dynamic library) and put that library (libfoo.so) e.g. in ~/lib on the server. Then run your binary there with LD_LIBRARY_PATH=~/lib ./a.out.
You can copy parts of a file to your computer over an SSH connection:
copy part of the source file to a temporary file using the dd command
copy the temporary file to your local box using scp or rsync
You can create a shell script to automate this if you need to do it multiple times.
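For example (a sketch; host, paths, and offsets are placeholders), this extracts a 10 MiB slice starting 100 MiB into the file and then fetches it:

ssh user@server "dd if=/data/big.dat of=/tmp/part.dat bs=1M skip=100 count=10"
scp user@server:/tmp/part.dat .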
Instead of fopen on a path, you can use popen on an ssh command. (Don't forget that FILE * streams obtained from popen are closed with pclose and not fclose).
You can simplify the interface by writing a function which wraps popen. The function accepts just the remote file name, and then generates the ssh command to fetch that file, properly escaping everything, like spaces in the file name, shell meta-characters and whatnot.
FILE *stream = popen("ssh user@host cat /path/to/remote/file", "r");
if (stream != 0) {
    /* ... */
    pclose(stream);
}
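A minimal sketch of such a wrapper (remote_fopen is a hypothetical name; the quoting shown survives spaces in the file name but not embedded single quotes, so a robust version needs full shell escaping):

#include <stdio.h>

/* Hypothetical wrapper: open a remote file for reading via ssh.
   The returned stream must be closed with pclose, not fclose. */
FILE *remote_fopen(const char *host, const char *path)
{
    char cmd[1024];
    /* The inner quotes protect spaces in path on the remote end,
       but not embedded single quotes. */
    int n = snprintf(cmd, sizeof cmd, "ssh %s \"cat '%s'\"", host, path);
    if (n < 0 || n >= (int)sizeof cmd)
        return NULL;               /* command too long */
    return popen(cmd, "r");
}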
popen has some drawbacks because its argument is processed as a shell command. And because the argument to ssh is itself a shell command processed on the remote end, this raises issues of double escaping: a shell command passed through another shell command.
To do something more robust, you can create a pipe using pipe, then fork and exec* the ssh process, installing the write end of the pipe as its stdout, and use fdopen to create a FILE * stream on the reading end of the pipe in the parent process. This way, there is accurate control over the arguments which are handed to the process: at least locally, you're not running a shell command.
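A minimal sketch of that approach (error handling abbreviated; a complete version would also waitpid for the child after closing the stream):

#include <stdio.h>
#include <unistd.h>

/* Run "ssh host cat path" without a local shell, returning a stream
   on its standard output. host and path are passed to ssh as separate
   arguments, so no local shell escaping is needed. */
FILE *remote_open(const char *host, const char *path)
{
    int fds[2];
    if (pipe(fds) == -1)
        return NULL;

    pid_t pid = fork();
    if (pid == -1) {
        close(fds[0]);
        close(fds[1]);
        return NULL;
    }
    if (pid == 0) {                   /* child: becomes the ssh process */
        dup2(fds[1], STDOUT_FILENO);  /* pipe's write end is its stdout */
        close(fds[0]);
        close(fds[1]);
        execlp("ssh", "ssh", host, "cat", path, (char *)NULL);
        _exit(127);                   /* only reached if exec failed */
    }
    close(fds[1]);                    /* parent reads from the pipe */
    return fdopen(fds[0], "r");
}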
You can't directly(1) open a file over ssh with fopen() or ifstream::open. But you can leverage the existing ssh binary. Simply have your program read from stdin, and pipe the file to it via ssh:
ssh that_server cat /path/to/largefile | ./yourprogram
(1) Well, if you mount the remote system using sshfs you can access the files over ssh as if they were local.
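For illustration, a minimal sketch of a program written to consume the file from stdin this way (it merely counts the bytes it receives; real processing would go in the loop):

#include <stdio.h>

int main(void)
{
    char buf[65536];
    size_t n, total = 0;

    /* stdin is the remote file, piped in by: ssh ... cat ... | ./yourprogram */
    while ((n = fread(buf, 1, sizeof buf, stdin)) > 0)
        total += n;

    printf("read %zu bytes\n", total);
    return 0;
}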
I am trying to configure HTTPS based on this tutorial:
Configuring HTTPS for your Elastic Beanstalk Environment
I am stuck at the following section:
To set the OpenSSL_HOME variable
Enter the path to the OpenSSL installation:
c:\ set OpenSSL_HOME=path_to_your_OpenSSL_installation
My openSSL is installed in c:\OpenSSL, so would I write set OpenSSL_HOME=C:\ OpenSSL?
Do I enter such command in Command Prompt?
Finally this step:
To include OpenSSL in your path
Open a terminal or command interface and enter the appropriate command for your operating system:
c:\ set Path=OpenSSL_HOME\bin;%Path%
My %Path% here would be what?
My openSSL is installed in c:\OpenSSL, so would I write set OpenSSL_HOME=C:\ OpenSSL?
Yes, but without the space after C:\:
set OpenSSL_HOME=C:\OpenSSL
Do I enter such command in Command Prompt?
You can. Do note, however, that with this approach, you would be modifying the OpenSSL_HOME environment variable for that particular command window only, and it would be accessible only to processes that are run from that same window. As soon as you close the window, your variable disappears.
If you need to make it persistent, especially through reboots, you have to configure the OS's global environment instead. On Windows, right-click on My Computer, go to Properties, Advanced system settings, Environment Variables, and add a new entry for your variable.
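Alternatively, the setx command makes the variable persistent from the command line; note it affects newly opened command windows, not the current one:

setx OpenSSL_HOME C:\OpenSSL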
My %Path% here would be what?
That is an existing environment variable. You are modifying the existing Path, and by keeping %Path% at the end of your assignment, you preserve the existing entries so that everything already on the Path can still be found.
First, note that the example in the documentation is wrong. It should be this instead:
c:\ set Path=%OpenSSL_HOME%\bin;%Path%
With that said, let's say for example that Path already contains the value C:\Windows\;etc. After the assignment, the new Path will be C:\OpenSSL\bin;C:\Windows\;etc.