Compile and run OpenMPI program - c++

The cluster I am using has several host types: different distributions/versions of Linux, some 32-bit, some 64-bit, and different versions of GCC. I know that I should compile my program with the platform-specific MPI wrapper for GCC. This step is more or less clear to me.
My program uses a fixed number of hosts, and each host runs exactly one process. Shared-memory threads are handled by TBB, so basically I need MPI only for distributing work between hosts.
The last step would be to run the program on all hosts. It turns out that this is the part I am not sure how to do, and my IT folks couldn't help me.
What I have is a list of host IP addresses (local addresses, to be precise, something like 192.168.1.xxx) and a user name and password for each host. What are the steps to run my program on all hosts, provided it was compiled with the platform-specific compiler and copied to each host? Any help appreciated.

You need passwordless SSH access to all machines, a hostfile, and the executable on all machines.
Make sure the executable has the same (relative) path on all machines.
Hostfile (on the master machine):
# my_hostfile
192.168.0.205
192.168.0.208
Command for Open MPI:
mpirun --hostfile my_hostfile programname
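Since you mentioned exactly one process per host, you can also cap the slot count per host in the hostfile and request one process per entry (this is standard Open MPI hostfile syntax; the addresses and program name are placeholders):
# my_hostfile: at most one MPI process per host
192.168.0.205 slots=1
192.168.0.208 slots=1
Then launch with one process per listed host:
mpirun -np 2 --hostfile my_hostfile ./programname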
For passwordless SSH access, create a master key under ~/.ssh:
ssh-keygen -t rsa
Add the (one-line) content of ~/.ssh/id_rsa.pub from your master machine as a new line in ~/.ssh/authorized_keys on your target machines. (Instead of RSA you can use another SSH cryptosystem.)
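If ssh-copy-id is available on the master machine, it automates that last step (the user name and addresses below are placeholders):
ssh-copy-id user@192.168.0.205
ssh-copy-id user@192.168.0.208
It prompts for the password once per host and appends the public key to that host's ~/.ssh/authorized_keys.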

Related

Key not present on Coral Dev Board when configuring MDT with macOS host machine

I am currently following the Coral Dev Board configuration guide using a macOS machine running Catalina as my host machine. As per the instructions, to use the MDT command line tools on my specific host machine I must manually configure mdt. I followed the guide step by step but end up with the following error when I try to connect to the board.
Waiting for a device...
Connecting to jumbo-goose at 192.168.0.78
Key not present on jumbo-goose -- pushing
It looks like you're trying to connect to a device that isn't connected
to your workstation via USB and doesn't have the SSH key this MDT generated.
To connect with `mdt shell` you will need to first connect to your device
ONLY via USB.
Cowardly refusing to attempt to push a key to a public machine.
I would greatly appreciate it if someone could help me debug this issue. I have reflashed my device a few times, so I am unsure where the issue is coming from.
On the Coral Dev Mini I had similar issues with generating SSH keys, especially when I switched from a Windows to a Linux machine. The easiest way (though not as secure) is to follow these steps and allow SSH with a password. First you will need to connect with a USB cable.
https://serverpilot.io/docs/how-to-enable-ssh-password-authentication/
To edit, use sudo nano /etc/ssh/sshd_config
The key is to set: PasswordAuthentication yes
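After saving the change, restart the SSH daemon so it takes effect; the exact service name varies by image (a sketch, assuming a systemd-based distribution):
sudo systemctl restart ssh    # on some images the service is named sshd instead
You should then be able to log in over the network with a password, e.g. ssh mendel@192.168.0.78 (on Mendel Linux the default user is mendel; substitute your board's address).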
Check that the sha256sum of the key file transferred to the dev board is the same as on macOS; mine didn't match, i.e. the cut-and-paste wasn't quite right. With the key file transferred manually using an SD card, it worked first time with the mdt devices/mdt shell commands.
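To compare the two, hash the file on both sides (the file name below is a placeholder for whichever key file you copied):
shasum -a 256 mdt.pub    # on the macOS host
sha256sum mdt.pub        # on the dev board
The two digests should be identical; if they differ, the copy was corrupted.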

How to fix error when creating new ssh connection?

I'm trying to cross-compile a simple HelloWorld app in C++ on Windows 10 for a Raspberry Pi 3. I installed a toolchain to set this up. But so far, when creating a new SSH connection I always get an error such as "Access denied" or "No connection could be made because the target machine actively refused it".
I have checked many cross-compiling tutorials, but no success so far.
I think you are mixing up different things here.
Cross-compiling means compiling (and linking) the software for the embedded target on the host computer (in your case Windows 10). You don't need to SSH into the target for this. You'll likely need to configure your build like this:
./configure --host=arm-linux --build=amd64-pc-linux-gnu
The host argument is where the binary should run, and the build argument is where the binary is built.
However, I suspect that you've successfully built the software on your Windows 10 computer and are now trying to copy it onto the embedded device. In that case, you must make sure that:
The embedded device is connected to the network
It's running an SSH daemon (likely OpenSSH's sshd)
It allows your user to connect (typically, on a default installation, root is not allowed to connect; you'll need to set PermitRootLogin to yes in /etc/ssh/sshd_config)
(Optionally) You generate a key pair on the host (via ssh-keygen) and copy the public key into your embedded user's .ssh/authorized_keys file to allow password-less login
Please refer to the SSH man page.
With all the above in place, you can then scp build/mySoftware root@myDevice:/usr/local/bin without the Access Denied message.
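As a concrete sketch with a common GCC cross-toolchain for the Pi 3 (the toolchain prefix, file names, user, and address below are all assumptions; substitute your own):
# on the build host: compile and link for ARM
arm-linux-gnueabihf-g++ -o hello hello.cpp
# copy the binary to the device and run it there
scp hello pi@192.168.1.10:/home/pi/
ssh pi@192.168.1.10 ./hello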

Hyperledger: get "/bin/bash: ./scripts/script.sh: No such file or directory" when running "./byfn -m up"

I'm new to Hyperledger and just studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I am trying to build the first network using "first-network" in the fabric-samples. The ./byfn -m generate is OK. But after typing ./byfn -m up, I get the
/bin/bash: ./scripts/script.sh: No such file or directory
error and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prevent the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and was able to resolve it.
The shared network drive needs to be working for any directory on the local machine to be visible from the container.
Docker, for example, has the "Shared Drives" setting (usually C:\) under which all your byfn.sh paths must be present. The second condition is that you need to run the byfn.sh script as the same user who was authenticated when sharing the drives with the containers. A password change in your Windows environment can break the existing drive shares with the containers, hence causing problems starting them.
Follow these steps :
In your Docker terminal check the path $HOME. Type the command echo $HOME.
Make sure that your fabric-samples folder is under the same path as the variable $HOME (see the sketch after these steps).
Follow the steps for generating your first network.
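A minimal sketch of those checks in the Docker terminal (the clone location of fabric-samples is an assumption; adjust to wherever you cloned it):
echo $HOME                              # e.g. /c/Users/you
cd $HOME/fabric-samples/first-network   # byfn.sh lives here
./byfn.sh -m generate
./byfn.sh -m up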
or try the below solution.
Follow these steps :
Go to settings of docker.
Click on reset credentials.
Now check if the shared drives include the required drives or not.
If not, then include them apply your changes and restart your docker and your bash where you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing the .sh in both commands.

VSTS Task: Windows machine file copy: system error 53

I'm trying to make a release from VSTS to a VM (running on AWS) that is running IIS. For that I use three tasks.
Windows Machine File Copy
Manage IIS App
Deploy IIS App
Before the release I'm running a build pipeline that gives me an artifact containing the web app (webapp.zip).
When I manually put it on the server I can run steps 2 and 3 of my release and the application works. The problem is that I can't get the Windows Machine File Copy task to work. It always throws an exception giving 'System Error 53: The network path was not found'. Of course the machines are not domain-joined, because I'm running my release on VSTS and need the files on an AWS VM. I tried opening port 445 (for file sharing) and made sure the user has rights to the destination path on the target machine.
So my question is: how can I actually move the files from VSTS to the AWS VM if the two machines are not joined?
Use an FTP Upload or cURL Upload step/task instead.
Regarding how to create FTP site, you can refer to this article: Creating a New FTP Site in IIS 7.
Disclaimer: this answer merely explains how to fulfill the requirements to use tasks of Windows Machine File Copy and Manage/Deploy IIS tasks.
Please always be concerned about security of your target hosts, its hardening and security assessment is absolutely necessary.
As noted in the comments, you need to protect the deployment channel from the outside world.
Answer:
In order to use the Windows Machine File Copy task you need to:
on the target machine (the one running IIS), enable File and Printer Sharing by running the following command from an administrative command prompt:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
ensure that PowerShell 4 or more recent is installed on the target machine; the following, executed from a PS command prompt, prints the version installed on the local machine:
PS> $PSVersionTable.PSVersion
To get PowerShell 5 you could, for example, install WMF 5;
on the target machine you must have .NET Framework 4.5 or more recent installed;
The other two tasks (Manage/Deploy IIS App) both require you to enable a WinRM HTTPS listener on the target machine. For a development deployment scenario you could follow these steps:
download the ConfigureWinRM.ps1 PowerShell script from the official VSTS Tasks GitHub repository;
enable the RemoteSigned PowerShell execution policy from an administrative PowerShell command prompt:
PS> Set-ExecutionPolicy RemoteSigned
run the script with the following arguments:
PS> ConfigureWinRM.ps1 FQDN https
Note that FQDN is the complete domain name of your machine as it is reached by the VSTS task, e.g. myhostname.domain.example.
Note also that this script downloads two executables (makecert.exe and winrmconf.cmd) from the Internet, so the machine must have an Internet connection. Otherwise, just download those two files, place them next to the script, and comment out the Download-Files invocation in the script.
Now you have enabled a WinRM HTTPS listener with a self-signed certificate. Remember to use the "Test Certificate" option (which, ironically, means not to test the certificate; a better name would have been "Skip CA Check") for those two tasks.
In production deployment scenario you may want to use instead a certificate which is properly signed.
Windows Machine File Copy is designed for internal networks; enabling it over the internet would open your server up to attack. FTP would also pose a significant security risk unless managed properly.
The easiest way forward would be to run an agent on the VM in AWS that you want to release to. The agent will then download the artifacts to the AWS VM and run whatever tasks you need for the installation.
This allows you to run tasks on the local machine without opening it up to security risks.
If you have multiple machines to manage in AWS, you can easily create a local network that allows your single agent to use Windows Machine File Copy to push files to multiple VMs without risk.

C++ executable not working on linux based hosting server

I created a static executable for a CGI application on CentOS 64-bit. The program uses the cgicc lib. I then ran the executable on the same machine (where I built it) as well as on another CentOS 64-bit machine (where the cgicc lib doesn't exist). On both machines it ran successfully. But I have a web hosting server where the same executable is not working. The web hosting server is a Linux (64-bit) machine, but I'm not sure of the exact Linux flavor. In the log I found an internal server error. I even checked that the executable has 755 permissions. Can someone help me find the reason? Thanks in advance.
My first thought is that your hosting server may have a different kind of CPU. Different CPUs have different instruction sets, so a different C++ compiler may be needed to produce the binaries. For example, if you want to run a program on an embedded system with an ARM chip, you need a cross compiler for it.
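You can check this directly if you have shell access to the hosting server (a sketch; the executable name is a placeholder):
uname -m        # CPU architecture of the hosting server
file ./my.cgi   # architecture the executable was built for
If both report x86_64, the architecture matches, and the web server's error log should show the real cause of the internal server error instead.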