When I upload files to the bucket I use the command gsutil -u absolute-bison-xxxxx cp $FILE gs://bucket_1, which works fine.
I am running downstream programs whose output I want saved to the same bucket, but when I, for instance, pass -output gs://bucket_1/file.out to specify the output location, the program does not recognise the bucket as a place to store the output. How do I set the path to the bucket?
From the command you posted in your question, I think you are using the Cloud SDK, as described in the documentation, to copy files from your computer to your bucket.
If that is the case, following Kolban’s comment, you could use Cloud Storage FUSE to mount your Google Cloud Storage bucket as a file system.
In order to use Cloud Storage FUSE, you should be running an OS based on Linux kernel version 3.10 or newer, or OS X 10.10.2 or newer, as stated in the installation notes; you can also run it from Google Compute Engine VMs.
You can install it by following the installation notes, or by following this YouTube tutorial.
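As an illustration, here is a minimal sketch of mounting the bucket from the question on a Linux machine; the mount point path is an assumption, and you may need sudo for the mkdir:

# create a mount point and mount the bucket (bucket name from the question)
mkdir -p /mnt/bucket_1
gcsfuse bucket_1 /mnt/bucket_1
# downstream programs can now write to the bucket through the mount,
# e.g. -output /mnt/bucket_1/file.out
# unmount when finished
fusermount -u /mnt/bucket_1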
For more information about using Cloud Storage FUSE, or to file an issue, go to the Google Cloud GitHub repository.
For a fully-supported file system product in Google Cloud, see Filestore.
You could use Filestore, as recommended in the documentation; to use it from your machine, you could follow this guide.
On SSH, I've tried using gsutil -m cp filename* Desktop to copy a file from the VM instance to my computer's desktop, following Google Cloud's own example in its documentation. I got a message saying that the file was copied successfully, but no mention of anything being downloaded, and I don't see the file on my desktop. I've tried specifying the full desktop path instead of just 'Desktop', but SSH does not recognize the path.
Is there a way I can directly download files from the VM Instance to my desktop without having to go through a Google Cloud bucket?
In my opinion this is the easiest way to get files directly to your local machine, in just a few steps.
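For reference, the same result can be achieved from a terminal on your own computer (not from the SSH session, where Desktop is just a directory on the VM's own filesystem) using gcloud compute scp; this is a sketch, with the instance name, zone, and file name as placeholders:

# run this on your local machine, not inside the VM
gcloud compute scp my-instance:~/filename ~/Desktop --zone=us-central1-a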
I'm trying to automate deploying code to my 3 GCE Linux VMs. I read the article Scripting with gcloud: a beginner's guide to automating GCP tasks, which shows how to make a script. I assume that means saving the code as a .sh file (it even has a shebang on top), but how do I run it? Do I type the script file name in the Google Cloud SDK Shell? I tried that and it does not seem to work. Can someone help me? I would really appreciate it.
Here is an image of my Google Cloud shell where I am trying to run the script files.
You're able to install the Google Cloud SDK on a variety of operating systems, such as Linux, macOS, and Windows, and after that you'll be able to use the same commands like gcloud, gsutil, and bq. Scripting, however, relies on the command-line interpreter: you can use bash on Linux and macOS, but on Windows you have cmd and PowerShell instead. The examples provided in the article you mentioned and in the documentation Scripting gcloud CLI commands with bash are written for bash on Linux and macOS, so the error messages you got were expected: you can't run .sh scripts natively on Windows, as #Pievis mentioned in the comment section.
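As an illustration, a script in the spirit of that article could look like the sketch below; the instance names, zone, and directory are placeholders, not taken from the article:

#!/bin/bash
# copy a local build directory to three VMs in a loop (placeholder names)
for vm in vm-1 vm-2 vm-3; do
  gcloud compute scp --recurse ./app "$vm":~/app --zone=us-central1-a
done

On Linux or macOS you would save this as, say, deploy.sh, make it executable with chmod +x deploy.sh, and run it with ./deploy.sh from bash, which is exactly what the Google Cloud SDK Shell on Windows cannot do.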
As a possible workaround you can install the Windows Subsystem for Linux (WSL) on Windows 10 (usually you can choose between WSL 2 and WSL 1, depending on the build version of your Windows 10) to get some interoperability between Windows and Linux.
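With WSL in place (and the Cloud SDK installed inside the Linux distribution), the same script could then be started from a Windows prompt roughly like this, assuming the deploy.sh name from the sketch above:

wsl bash ./deploy.sh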
If you need to transfer files to your VM instances, please follow the documentation Transferring files to VMs.
If you are interested in automation on GCP, please have a look at the documentation Infrastructure as code, which covers how to "automate repeatable tasks like provisioning, configuration, and deployments".
I am currently hosting my ASP.NET web application on AWS. I have searched for the best AWS storage options for a Windows environment and found that the AWS file share service, FSx, suits our needs.
One of the required features in my app is the ability to create symbolic links on the network shared folder. In my local environment I have Active Directory and a network shared folder, and I applied the following steps to enable symbolic links on my Windows 10 PC, which works:
1- Enable remote-to-remote symbolic links using this cmd command:
fsutil behavior set SymlinkEvaluation R2R:1
2- Check if the feature is enabled:
fsutil behavior query SymlinkEvaluation
The result is:
Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to remote symbolic links are enabled.
3- Apply this command to gain access to the target directory:
net use y: "\\share\Public\" * /user:UserName /persistent:yes
4- Create the symbolic link using this command:
mklink /D \\share\Public\Husam\symtest \\share\Public
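If the command succeeds, the new link should show up as a directory symbolic link when you list the parent folder; roughly:

dir \\share\Public\Husam
REM the link should appear in the listing as:  <SYMLINKD>  symtest [\\share\Public]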
It works fine on my local network with Active Directory.
On AWS I have an EC2 Windows VM joined to the AWS managed domain, the same domain I created the FSx file system with. I logged in to the machine as the domain administrator and gave this user share and security permissions on the shared folder using the Windows File Shares GUI tool.
When I try to create the symbolic link I get: Access Denied
mklink /d \\fs-432432fr34234a.myad.com\share\Husam\slink \\fs-432432fr34234a.myad.com\share
Access Denied
Any suggestions? Is there a way to add this permission in Active Directory?
It looks to me like mklink is not supported by Amazon FSx. I can mklink to my heart's content on my EBS volume, but not on the FSx share. Also, when I mount the share in Linux and run ln -s test1 test2, I get:
ln: failed to create symbolic link 'test2': Operation not supported
I found a comment that said: "in the GPO you can change it in Computer Configuration > Administrative Templates > System > Filesystem and configure 'Selectively allow the evaluation of a symbolic link'" (deru, May 11 '17). I don't think it will help, though, because I can mklink on EBS.
This is a problem for me, as my ASP.NET web app also uses mklink during its setup. My solution is to use a Windows container for my web app and then use docker-compose to put the links into the FSx file system. I initially thought I wanted to do the docker-compose build on the FSx volume, but that was a terrible idea, because the EBS volume is way faster.
I was getting the same error messages reported above. I consulted with the AWS contacts available to the company I work for, and they confirmed that as of right now, FSx for Windows File Server does not support symbolic links.
I have an annotated dataset in a Cloud Storage bucket. I am trying to use this dataset on a Linux virtual machine in Google Cloud. Please help with the steps.
If you have the gsutil tool installed on your VM (which you can install easily by following this documentation), you can download the files to your VM using the following command:
gsutil cp -r gs://<YOUR-BUCKET-NAME>/<YOUR-FOLDER-PATH>/* <DESTINATION-DIRECTORY>
This is the most straightforward approach; however, if you need to automate the process, you could use the Cloud Storage Client Libraries and do it in code.
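If a scheduled shell script is enough for your automation, another option (plain gsutil rather than the client libraries) is gsutil rsync, which keeps a local directory in sync with a bucket folder; a sketch reusing the placeholder names above:

# one-way sync from the bucket folder to a local directory
gsutil -m rsync -r gs://<YOUR-BUCKET-NAME>/<YOUR-FOLDER-PATH> <DESTINATION-DIRECTORY>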
I am struggling with how to upload the Cloud Code files I had on Parse.com to my Parse Server hosted on AWS EB.
So far I have:
A Parse Server hosted on AWS EB. To host it on AWS I used the Orange Deploy Button, which basically makes everything easier, so people don't have to install the Parse Server locally and upload it to AWS later.
An iOS app written in Objective-C, connected to the Parse Server and working perfectly.
Parse Dashboard running locally on my Mac, connected to the Parse Server on AWS.
The only thing I still need is to upload all my Cloud Code files to the Parse Server. How can I do this? I have researched a lot on Google, Stack Overflow, etc. without success. There is some information, but it's unclear. Thanks in advance.
Finally, and thanks to Ran Hassid, I now have a fully functional Parse Server on AWS with Cloud Code. For those who are in the same situation I was in, here is the answer to my question:
1. Go to this link and follow all the steps. (By the time I asked the question, the information provided at this AWS link wasn't as clear as it is now; they have since improved the explanations.)
2. After you finish all the steps from the link, you will have a working Parse Server on AWS.
3. Now the Cloud Code part. Create a folder on your Mac or PC wherever you like, say on the desktop, and call it Parse Server AWS (you can call it whatever you want).
4. Install the EB CLI, the command-line interface used from Terminal (on Mac) or the equivalent on Windows to work with the Parse Server you just set up on AWS (similar to Cloud Code with the Parse CLI). The easy way to install it is to run this command:
brew install awsebcli
5. Open Terminal on Mac (or the equivalent on Windows) and go to the folder you created in step 3.
6. Run the next command. It will ask you to select the location of your Parse Server, and then its name:
eb init
7. Now this command. It will download all the files of your Parse Server from AWS into the folder you are in:
eb labs download
8. You will now have a folder called cloud where you can put all your Cloud Code files.
9. When you finish, just run this command:
eb deploy
Now you have your Parse Server with all your Cloud Code files working on AWS.
Whenever you need to change your Cloud Code files, just edit the local files inside the folder created in step 3 and run the command from step 9 again, exactly as you used to do with the parse deploy command.
Hopefully this information will help many people as it helped me.
Happy coding!
parse-server cloud code is a bit different from Parse.com cloud code. On Parse.com we used the Parse CLI to modify and deploy our cloud code (parse deploy ...); in parse-server, your cloud code lives under ./cloud/main.js in your parse project. The main.js file is your cloud code endpoint, and by default it is located under the cloud folder of your project. If you really want to, you can change this path, but to keep it simple use the default location.
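In other words, a minimal parse-server project is laid out roughly like this (index.js and package.json are the usual entry points of the parse-server-example project and are assumptions here, not something from the answer above):

parse-server-example/
  cloud/
    main.js      <- your Cloud Code endpoint
  index.js
  package.json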
Now, about deployment: in parse-server you need to redeploy your Parse Server whenever you modify your cloud code. Another option is to edit your cloud code remotely, but from my point of view it's better to redeploy.