VirtualBox: no shared folder access

I am running a Windows 7 host and installed VirtualBox on it several months ago. All of a sudden I am no longer able to access the shared folder I have been using. If I open a terminal and type cd /media/shared/networks, I get the error bash: cd: /media/shared/networks: No such file or directory. There is a folder named shared on my Windows desktop and a folder named networks within that shared folder. I am not sure where the issue is. If I type cd /media/shared, pwd returns /media/shared, but ls returns nothing; the folder shows as empty. Has anyone else run into this type of problem, and if so, how did you fix it?

Check in /media/sf_networks instead.
https://www.virtualbox.org/manual/ch04.html#sf_mount_auto
In short:
With Linux guests, auto-mounted shared folders are mounted into the /media directory, along with the prefix sf_. For example, the shared folder myfiles would be mounted to /media/sf_myfiles.
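If auto-mounting is enabled, the share should therefore show up as /media/sf_networks rather than under /media/shared. If you want the old /media/shared/networks path back, you can usually mount it there by hand; a minimal sketch, assuming the share is named networks in the VM settings and the Guest Additions are installed in the guest:
ls /media/sf_networks                         # auto-mounted location
sudo mkdir -p /media/shared/networks          # recreate the old mount point
sudo mount -t vboxsf networks /media/shared/networks
You may also need to add your user to the vboxsf group (sudo usermod -aG vboxsf $USER) to read the auto-mounted folder without sudo.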


How to convert a client-based file path to a server-based path when creating a symlink on an NFS drive?

I am developing NFS server modules (ProcedureSYMLINK) and ran into a path-conversion issue when creating symlinks.
After starting the NFS server service, I connected to the server and mounted it as a drive on a Linux client.
Then I ran the command below to create a symbolic link within the NFS drive using full paths and debugged it on the server side. But the target file path is not given in server terms.
ln -s /mnt/nfs/1.bin /mnt/nfs/symlink/1.lnk
Let me give an example to clarify the question.
The base directory path on the NFS server is /usr/nfs, so I started the server with:
./nfs_server /usr/nfs
I mounted the NFS share on the Ubuntu client with:
sudo mount -t nfs -o vers=3,proto=tcp,port=2049 192.168.1.37:/usr/nfs /mnt/nfs
After that, I created the symbolic link:
ln -s /mnt/nfs/1.bin /mnt/nfs/symlink/1.lnk
/mnt/nfs/1.bin : target path
/mnt/nfs/symlink/1.lnk : symlink path
After entering the command above, I debugged on the server side; in the ProcedureSYMLINK function I could see the state of the variables.
I could safely get the symlink path in server terms, but the target path was not in server terms. It was still /mnt/nfs/1.bin.
There is actually no way to get the client's mount base path (/mnt/nfs) on the server, right?
If I knew that base path, I could compute the target file path in server terms (it should be /usr/nfs/1.bin), but I don't know the client's base path, so there is no way to calculate it.
Does anyone know?
I am using NFS v3.
Two points which hopefully will help answer your questions:
1. In NFS, symlinks are always resolved by the client. From the perspective of the NFS server, a symlink is just a special type of file whose content (that is, where the symlink points to) is treated as an opaque blob.
2. The mount point on the client (/mnt/nfs in your example) is purely a client-side matter; there is no provision in the NFS protocol for letting the server know it.
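As a practical consequence (my own sketch, not part of the original answer): if the link target is stored as a relative path, the same opaque string resolves correctly on both sides, because neither the client nor the server needs to know the other's mount point.
ln -s ../1.bin /mnt/nfs/symlink/1.lnk    # run on the client
# The server stores the literal string "../1.bin"; relative to /usr/nfs/symlink/
# it resolves to /usr/nfs/1.bin, while on the client, relative to /mnt/nfs/symlink/,
# it resolves to /mnt/nfs/1.bin.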

GCP: copy files from a VM to local

I'm trying to copy files from my VM to my local computer.
I can do this with the standard command:
sudo gcloud compute scp --recurse orca-1:/opt/test.txt .
However, when downloading the log files they transfer, but they're empty (empty files are created with the same names).
I'm also unable to use the Cloud Shell 'Download' UI button, because it gives No such file even though the absolute file path is correct (cat /path returns the data).
I understand it's somehow a permissions issue with the log files?
Thanks for the replies to my thread above; I figured out it was a permissions issue on my files.
Interestingly, the first time I ran the commands it did not throw any errors or permission errors -- it downloaded all the expected files, but they were empty. When testing again, it did throw permission errors. I then modified the files in question to have public read permissions, and the download succeeded.
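For anyone hitting the same thing: sudo in front of gcloud only elevates the local command, not the read on the VM, so files that the SSH user cannot read may fail or come back empty. A rough sketch of the fix described above (instance name and path taken from the question):
gcloud compute ssh orca-1 --command 'sudo chmod a+r /opt/test.txt'
gcloud compute scp --recurse orca-1:/opt/test.txt .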

postgresql-10: The program "initdb" is needed by pg_ctl but was not found in the same directory as "pg_ctl". Check your installation.

I am new to this forum. I am facing an issue where I get the error below.
sudo -u shahid ./pg_ctl -D /root/pgsql10x/data/ initdb
invalid binary "/root/pgsql10x/bin/pg_ctl"
invalid binary "/root/pgsql10x/bin/pg_ctl"
invalid binary "/root/pgsql10x/bin/pg_ctl"
The program "initdb" is needed by pg_ctl but was not found in the same directory as "pg_ctl".
Check your installation.
I am trying to run this from root.
I built version 10 from source on CentOS 7; I downloaded it directly from the PostgreSQL site.
I am not facing this problem when I run as a non-root user.
I have all the files in the bin directory (screenshot omitted).
Finally, I was able to resolve the issue.
The problem was that the source build was placed directly under the /root/ directory, as /root/pgsql10x/, while logged in as root. Most likely /root is not readable by the unprivileged user the command runs as (shahid), so pg_ctl could not locate and validate its companion binaries there.
When I placed it in the /app/ directory instead, as /app/pgsql10x/, things started working normally: the database got created and ran without any problem (from the root login).
-Shahid
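For reference, a sketch of the same calls after the move (binary and data paths are assumed from the question, with the data directory relocated alongside the binaries):
sudo -u shahid /app/pgsql10x/bin/pg_ctl -D /app/pgsql10x/data initdb
sudo -u shahid /app/pgsql10x/bin/pg_ctl -D /app/pgsql10x/data start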
The error comes from the fact that you are trying to start the cluster as the root user. You'll have to switch to the postgres user and try starting the cluster again:
su - postgres
pg_ctl start

How do I set the beanstalk .ebextensions .config "sources" key "target directory" to the current bundle directory

I'm working in a Python 2.7 Elastic Beanstalk environment.
I'm trying to use the sources key in an .ebextensions .config file to copy a tgz archive to a directory in my application root -- /opt/python/current/app/utility. I'm doing this because the files in this folder are too big to include in my GitHub repository.
However, it looks like the sources key is executed before the ondeck symbolic link is created to the current bundle directory, so I can't reference /opt/python/ondeck/app when using the sources command: it creates the folder, and then Beanstalk errors out when trying to create the ondeck symbolic link.
Here are copies of the .ebextensions/utility.config files I have tried:
sources:
  /opt/python/ondeck/app/utility: http://[bucket].s3.amazonaws.com/utility.tgz
The above successfully copies to /opt/python/ondeck/app/utility, but then Beanstalk errors out because it can't create the symbolic link from /opt/python/bundle/x --> /opt/python/ondeck.
sources:
  utility: http://[bucket].s3.amazonaws.com/utility.tgz
The above copies the folder to /utility right off the root, in parallel with /etc.
You can use container_commands instead of sources, as they run after the application has been set up.
With container_commands you won't be able to use sources to fetch and extract your files automatically, so you will have to use commands such as wget or curl to get your files and untar them afterwards.
Example: curl http://[bucket].s3.amazonaws.com/utility.tgz | tar xz
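Put together, an .ebextensions entry could look roughly like this (the bucket URL is the placeholder from the question, and the key name is made up; the relative utility path relies on container_commands running from the staging directory of the app bundle):
container_commands:
  01_fetch_utility:
    command: "mkdir -p utility && curl -s http://[bucket].s3.amazonaws.com/utility.tgz | tar xz -C utility"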
In my environment (PHP) there is no transient ondeck directory, and the current directory where my app is eventually deployed is recreated after commands are run.
Therefore, I needed to run a script post-deploy. Searching revealed that I can put a script in /opt/elasticbeanstalk/hooks/appdeploy/post/ and it will run after deploy.
So I download and extract my files from S3 to a temporary directory in the simplest way, using sources. Then I create a file that copies my files over after the deploy, and put it in the post-deploy hook directory.
sources:
  /some/existing/directory: https://s3-us-west-2.amazonaws.com/my-bucket/vendor.zip
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_move_my_files_on_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      mv /some/existing/directory /var/app/current/where/the/files/belong

Bamboo SCP plug-in: how to find directory

I am trying to upload a file to a remote server using the SCP task. I have OpenSSH configured on the remote server in question, and I am running the Bamboo build server on an Amazon EC2 instance with Windows Server 2008 R2 and Cygwin.
My question is about finding the directory I wish to use. I want to upload the entire contents of C:\doc using SCP. The documentation notes that I must use the local path relative to the Bamboo working directory rather than an absolute directory name.
By running pwd during the build plan I found that the working directory is /cygdrive/c/build-dir/CDP-DOC-JOB1. So to get to doc, I can run cd ../../doc. However, when I set my working directory under the SCP configuration to ../../doc/** (using this pattern-matching guide), I get the message There were no files to upload. in the log.
C:\doc contains subfolders as well as a text file in the root directory.
(Screenshots of the SCP task configuration and the Cygwin directory listing omitted.)
You may add a first "script" task running a Windows shell that copies everything from C:\doc to some local directory, and then run the SCP task to copy the contents of this new directory to your remote server:
mkdir doc
xcopy c:\doc .\doc /E /F
The pattern for the copy should then be /doc/**