Error while installing COMSOL on Amazon AMI - amazon-web-services

I am trying to run COMSOL on Amazon AWS.
While following the guide RunningCOMSOLOnTheAmazonCloud.pdf,
I get errors when trying to install COMSOL on the AMI. When I run the install command in the remote terminal: cd ~/Private/comsol/COMSOL43b_dvd/setup ami, I get this error:
bash: cd: /root/Private/comsol/COMSOL43b_dvd/setup: No such file or directory

The issue seems to be that you are trying to navigate to a non-existent directory.
Please check that the directory exists:
ls /root/Private/comsol/COMSOL43b_dvd/
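If it does not, check each level of the path to see where the DVD contents actually ended up (a rough sketch; the exact locations depend on how the files were uploaded):
ls ~/Private                  # does the Private folder exist for the current user?
ls ~/Private/comsol           # does the comsol folder exist inside it?
find / -maxdepth 4 -type d -name "COMSOL43b_dvd" 2>/dev/null   # look for the DVD folder elsewhere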

I assume you're at part 2 on page 25 of this document. The instructions there are not very well worded but I think I can see what's going wrong:
2 Install COMSOL:
cd ~/Private
/comsol/COMSOL43b_dvd/setup ami
Answer the following questions:
Enter the installation directory: (press enter for default: /home/ec2-user/Private/)
Accept the license agreement (press the space bar to flip the pages until the last one, and answer yes to the question if you agree).
Enter the path to the license file: (press enter for default: 1718@localhost)
The tilde (~) is shorthand for the current user's home directory, which is usually of the form /home/user. If you are logged in as the root user, the home directory is /root. It looks like you are still the root user, when you should be ec2-user, which is Amazon's default username for most of their Linux EC2 instances. (The suggested default installation directory of /home/ec2-user/Private/ is what makes me think that.)
Try running su - ec2-user to change to the right user, then type pwd to get the console to print out the directory you're in. You should be in the home directory.
Use the ls command to list all the contents of the directory you are in, or ls -l to give you a more detailed list.
You can also use the tab key on your keyboard to autocomplete file and directory names if they exist. Pressing tab twice will give you a list of options. If pressing tab doesn't complete the name, or show a list of options after a double tap, then the path does not exist.
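Putting that together, the whole sequence would look roughly like this (a sketch only; it assumes the DVD files were uploaded under ec2-user's home and that the setup script sits inside the COMSOL43b_dvd directory, as the PDF's wrapped command suggests):
su - ec2-user                         # switch from root to the ec2-user account
pwd                                   # should print /home/ec2-user
ls ~/Private/comsol/COMSOL43b_dvd/    # confirm the installer files are really there
cd ~/Private/comsol/COMSOL43b_dvd/
./setup ami                           # then answer the installer's questions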

Related

The .Xauthority file does not exist; hence the display over a local SSH connection from the GCP compute engine is not working

Explaining all that has been tried and double-checked.
Setup on the local Windows machine:
Xming is installed and running.
In ssh_config, ForwardX11 is set to yes.
In the VS Code remote connection config, ForwardX11 is set to yes.
Setup on the GCP compute engine with Debian Linux 9 and 1 GPU (free tier):
xauth is installed.
In the sshd_config file, the following is set:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
The SSH server has been restarted to ensure the above settings are read.
From the local workstation I run gcloud compute ssh --ssh-flag="-X" tensorflow-2-vm (the instance name), and the response is:
/usr/bin/xauth: file /home/user/.Xauthority does not exist,
So I attempted the following on the remote compute engine (instance name tensorflow-2-vm, user trapti_kalra):
trapti_kalra@tensorflow-2-vm:~$ xauth list
xauth: file /home/trapti_kalra/.Xauthority does not exist
trapti_kalra@tensorflow-2-vm:~$ mv .Xauthority old.Xauthority
mv: cannot stat '.Xauthority': No such file or directory
trapti_kalra@tensorflow-2-vm:~$ touch ~/.Xauthority
trapti_kalra@tensorflow-2-vm:~$ xauth generate :0 . trusted
xauth: (argv):1: unable to open display ":0".
trapti_kalra@tensorflow-2-vm:~$ sudo xauth generate :0 . trusted
xauth: file /root/.Xauthority does not exist
xauth: (argv):1: unable to open display ":0".
So it looks like something is missing; any help will be appreciated. This was working with an EC2 server before I moved to GCP.
Create a new file: touch ~/.Xauthority
Log out and back in again with your SSH session. (I'm using MobaXterm.)
It then writes the needed entries.
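As a quick check (assuming X11 forwarding is enabled on the server side), reconnect with forwarding requested and the file should get its entry:
ssh -X user@host    # reconnect with X11 forwarding requested (user@host is a placeholder)
xauth list          # should now print a MIT-MAGIC-COOKIE-1 entry for the forwarded display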
You logged into your Linux server over SSH and got the following error:
.Xauthority does not exist
Solution:
Go into the /etc/ssh/sshd_config file and remove the # sign at the beginning of the three lines below:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Then run systemctl restart sshd.
Log in again and you will not get the error.
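To double-check that the change took effect, something like the following should work on most systemd-based distributions (the grep pattern is just one way of inspecting the file):
sudo grep -E '^X11(Forwarding|DisplayOffset|UseLocalhost)' /etc/ssh/sshd_config   # the lines must not start with '#'
sudo systemctl restart sshd
ssh -X user@host 'echo $DISPLAY'   # should print something like localhost:10.0 once forwarding works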
There are many solutions to this problem; it can also depend on what machine you are connecting from. If you come from a Linux box, enabling sshd config options like:
X11Forwarding yes
could be enough.
When you use a MacBook, however, the scenario is different. In that case you need to install XQuartz with brew:
brew install xquartz
And after this start it:
xQuartz &
Once this is done, the XQuartz logo appears in your menu bar; you can right-click the icon and start a terminal from the Applications menu. In that terminal, run echo $DISPLAY. This should give you the output:
:0
If you use another terminal such as iTerm, you can export this value there with export DISPLAY=:0. As long as XQuartz is still running, the other terminal should be able to keep using it.
After this you can SSH into the remote machine and check if the display variable is set:
$ ssh -Y anldisr@my-remote-machine
$ echo $DISPLAY
localhost:11.0
It took me an hour to figure this out; hope it helps someone. :)
This also happened when I added a new user to the remote machine without giving the user sudo privileges during creation.
To resolve it, I used the root user (or another sudo-privileged user) to grant sudo to the new user, then exited the new user's session and SSH'd into the server again.
> $ sudo usermod -aG sudo [newUser]

View contents of directory in Google Cloud

Does anyone know how to view the contents of a directory on a gcloud instance?
I ran
gcloud compute ssh --zone=us-west1-b cs231-vm
from PowerShell and connected to my instance.
I am trying to navigate like this:
cd cs231n/datasets
according to a tutorial here:
http://cs231n.github.io/assignments2018/assignment1/
But it says no such file or directory, so I want to know what is in the current directory. I tried ls and dir but got nothing.
ls and dir definitely work on gcloud; it seems you might have missed a few steps of downloading the folder/data. Please check whether you have completed the first-time setup from http://cs231n.github.io/gce-tutorial/
You can also use 'View gcloud command' by clicking the SSH dropdown on the list of VM instances page. Additionally, you can pass --project='project-name' to your gcloud ssh command.
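For example, the full command might look like this (the project name here is just a placeholder):
gcloud compute ssh --zone=us-west1-b --project=my-cs231n-project cs231-vm
ls -la ~    # once connected, list everything in the home directory, including hidden files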

Cannot chmod file on OpenShift Online v3: Operation not permitted

I am migrating a Django application from OpenShift v2 to v3 (in case you don't know, Red Hat is shutting down v2 on September 30th; see https://blog.openshift.com/migrate-to-v3-v2-eol/).
So I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/. I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I logged into the failing container with oc debug and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details, any S2I builder image will gladly use your custom supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
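As an illustration (not from the original post), a minimal .s2i/bin/run could simply delegate to the repository's app.sh through bash, which sidesteps the executable-bit problem entirely:
#!/bin/bash
# hypothetical .s2i/bin/run: start the app without relying on app.sh being executable
exec /bin/bash /opt/app-root/src/app.sh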
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you want to change this part of the build instead of using a custom run script, I suggest you create .s2i/bin/assemble in your project's source code and make it look something like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification (as it didn't for me), you have to use git update-index --chmod=+x app.sh for it to work.
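In other words, the fix is made in your local clone and committed, roughly like this (the commit message is just an example):
chmod +x app.sh                       # set the executable bit locally
git update-index --chmod=+x app.sh    # make sure git records the mode change
git commit -m "Make app.sh executable"
git push                              # then trigger a new build in OpenShift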

EC2 FTP User Directory change

Our EC2 instance has an FTP user that was set up successfully through the vsftpd program with an original home directory of /home/user/. I followed the instructions in this Stack Overflow post and had the user's shell set to /bin/false.
What I'm looking to do is make the FTP user's login only accessible to a particular directory, a folder in the html directory: /var/www/html/website.com/userfolder
What I've done:
Added the user to a group ftpgroup
Authorized access and ownership of the new directory to user:ftponly
Changed the user's home directory in /etc/passwd
Added .ssh/authorized_keys with the user's key in the new directory
Changed ChrootDirectory in /etc/ssh/sshd_config to the new directory
Changed the permissions on the directory with chmod -R 775 and ownership to user:ftpgroup
Mounted /var/www/html/website.com/userfolder
Before these changes I was able to access the FTP server; now, upon attempted access, I receive the following errors from the FileZilla client:
Error: Disconnected: No supported authentication methods available
(server sent:publickey) ... Status: Connection attempt failed with
"ETIMEDOUT - Connection attempt timed out"
Since it was working before, I'm thinking it might have something to do with permissions; I'm just unsure of what else to change.
Thanks for any insight.
This worked for me.
After creating the user with vsftpd, the user now has access to the directory via FileZilla.
I then added a link from the /home/{user} directory to the /var/www/html/{user} directory.
The user can upload files to the home directory, and they can be viewed from the html directory.
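A rough sketch of that hack, assuming a user named ftpuser and the site path from the question (both names are placeholders), so that files uploaded to the home directory show up under the web root:
sudo ln -s /home/ftpuser /var/www/html/website.com/userfolder   # web path now points at the FTP home (assumes userfolder does not already exist)
sudo chown -R ftpuser:ftpgroup /home/ftpuser                    # make sure the FTP user can write there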
This is a simple hack. Let me know if this solves your problem.

Running lynx via sudo

I am trying to run Lynx as the apache user via sudo, but it seems that Lynx tries to access my home directory:
$ sudo -u apache lynx
/home/ssmirnov/: No such directory
My home directory has these permissions: drwx------
Can you advise me how to run Lynx as another user?
You might try using sudo's -H option. It sets $HOME to the home directory of the user you're trying to run as; perhaps Lynx is looking for a file there. (It doesn't seem to have a problem on my machine... but eh.)
-i might work as well; it basically sets the environment up as if the user had logged in, including cd'ing to their home directory. Note, that means starting the shell specified for that user, running login scripts, and all that. If the user's not allowed to log in, this will likely fail.
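For example (assuming the apache user has a valid home directory set in /etc/passwd):
sudo -H -u apache lynx http://example.com    # -H points $HOME at apache's home instead of yours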
If you want to run it from your home directory, for example to download something to that location, you'll of course have to grant access to apache somehow. This can be done on ext* filesystems on most modern Linux systems (without granting everyone access) by saying something like setfacl -m u:apache:rwx $HOME. In a pinch, you could temporarily put apache in your group and grant group rwx permissions on your home directory... but unless this is your home machine, I wouldn't do that.