Linux n00b here having trouble pulling a file from the server to my local Windows 7 Professional 64-bit machine. I am using Wowza to stream live video, and I am recording these live videos to my Google Cloud instance, located here:
/usr/local/WowzaStreamingEngine/content/myStream.mp4
When I ssh:
gcutil --project="myprojectname" pull "my instance" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "/folder1"
I receive a permission denied error. When I try saving one folder deeper on my local machine, i.e. "/folder1/folder2", the error returned is "file or directory not found". I've checked that I have write permissions set on my local Windows 7 machine, so I do not think it is a permissions issue. Again, apologies for the n00b question; I've just been stuck here for hours.
Thx,
~Greg
Comment added 7/18:
I enter the following through ssh:
gcutil --project="Myproject" pull "instance-1" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "/content"
By entering this, I'm expecting the file myStream.mp4 to be copied to my C:/content folder. The following is returned: Warning: Permanently added '107.178.218.8' (ECDSA) to the list of known hosts. Enter passphrase for key '/home/Greg/.ssh/google_compute_engine':
Here I enter the passphrase, and the following error is returned: "/content: Permission denied". I have write access set up on this folder. Thanks! – Greg
To answer the question about using Cygwin, I'm not familiar with Cygwin and I do not believe it was used in this instance. I ran these commands through the Google Cloud SDK shell which I installed per the directions found here: https://developers.google.com/compute/docs/gcutil/.
What I am doing:
After setting up my Google Cloud instance, I open the Google Cloud SDK shell and enter the following:
gcutil --service_version="v1" --project="myproject" ssh --zone="us-central1-a" "instance-1"
I am then prompted for a passphrase, which I create, and then I run the following:
curl http://metadata/computeMetadata/v1/instance/id -H "X-Google-Metadata-Request:True"
This provides the password I use to log in to the Wowza live video streaming engine. All of this works beautifully: I can stream video and record it to the following location: /usr/local/WowzaStreamingEngine/content/myStream.mp4
Next I attempt to save the .mp4 file to my local drive and that is where I'm having issues. I attempt to run:
gcutil --project="myproject" pull "instance-1" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "C:/content"
I also tried C:/content, C:\content, and C:\\content.
These attempts threw the following error:
Could not resolve hostname C: Name or service not known
Thanks again for your time, I know it is valuable, I really appreciate you helping out a novice.
Update: I believe I am close, thanks to your help. I switched to the local C: drive and entered the command as you displayed in your answer update. It now returns a new, not-before-seen error:
Error: API rate limit exceeded
I did some research on S.O., and some suggestions were that billing is not enabled or that the relevant API is not enabled, and that I could solve this by turning on Google Compute Engine. Billing has been enabled on my project for a few weeks now. In terms of Google Compute Engine, below are what I believe to be the relevant items turned on:
User Info: Enabled
Compute: Read Write
Storage: Full
Task Queue: Enabled
BigQuery: Enabled
Cloud SQL: Enabled
Cloud Database: Enabled
The test video I recorded was short and small in size. I also have not done anything else with this instance, so I am at a loss as to why I am getting the API rate limit exceeded error.
I also went to the Google APIs console. I see very limited usage reported, so, again, I am not sure why I am exceeding the API limit. Perhaps I do not have something set appropriately in the APIs console?
I'm guessing you're using Cygwin here (please correct me if I'm wrong).
The root directory for your Cygwin installation is most likely C:\cygwin (see FAQ) and not C: so when you say /content on the command line, you're referring to C:\cygwin\content and not C:\content.
Secondly, since you're likely running as a regular user (and not root) you cannot write to /content so that's why you're getting the permission denied error.
Solution: specify the target directory as C:/content (or C:\\content) rather than /content.
Update: from the update to the question, you're using the Google Cloud SDK shell, not Cygwin, so the above answer does not apply. The reason you're seeing the error:
Could not resolve hostname C: Name or service not known
is because gcutil (like ssh) parses destinations that include : as having the pattern [hostname]:[path]. Thus, you should avoid : in the destination, which means dropping the drive spec.
In this case, the following should suffice, assuming that you're currently at a prompt that looks like C:\...>:
gcutil --project=myproject pull instance-1 /usr/local/WowzaStreamingEngine/content/myStream.mp4 \content
If not, first switch to the C: drive by issuing the command:
C:
and then run the above command.
Note: I removed the quotes from the command line because you don't need them when the parameters don't contain spaces.
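If you'd rather sidestep the drive spec entirely, another option (just a sketch, assuming gcutil pull accepts a relative destination the way scp does) is to switch into the target folder first and pull into the current directory:
C:
cd \content
gcutil --project=myproject pull instance-1 /usr/local/WowzaStreamingEngine/content/myStream.mp4 .
Here the trailing . means "the directory I am currently in", so no colon ever appears in the destination.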
Related
For a project, I have to create a Dataproc cluster that uses one of the outdated image versions (for example, 1.3.94-debian10) that contain the vulnerabilities in the Apache Log4j 2 utility. The goal is to get the related alert (DATAPROC_IMAGE_OUTDATED), in order to check how SCC works (it is just for a test environment).
I tried to run the command gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10 but got the following message: ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster. That makes sense, in order to protect the cluster.
I did some research and discovered that I will have to create a custom image with said version and generate the cluster from that. The thing is, I have tried to read the documentation and find some tutorial, but I still can't understand how to start or how to run the file generate_custom_image.py, for example, since I am not comfortable with the Cloud Shell (I prefer the console).
Can someone help? Thank you
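For reference, a rough sketch of how that script is typically invoked is below (flag names come from the GoogleCloudDataproc/custom-images repository; the image name, zone, customization script, and GCS bucket are placeholders, and it is possible the blocked 1.3.94 version is rejected by this path as well):
python generate_custom_image.py \
    --image-name=dataproc-1-3-94-custom \
    --dataproc-version=1.3.94-debian10 \
    --customization-script=customization.sh \
    --zone=us-east1-b \
    --gcs-bucket=gs://my-staging-bucket
The resulting image can then, if I remember correctly, be passed to gcloud dataproc clusters create via --image=... instead of --image-version=....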
The help definition is not clear to me:
Token used to route traces of service requests for investigation of issues.
Could you provide a simple example of how to use it?
I tried:
gcloud compute instances create vm3 --trace-token xyz123
I can find the "vm3" string in the logs, but not my token xyz123.
The only use of it seems to be in grep:
history| grep xyz123
The --trace-token flag is intended to be used by support agents when there is an error that is difficult to track down from the logs. The Google Cloud Platform Support agent provides a time-bound token, which expires after a specified period, and asks the user to run the command for the specific product in which the user is facing the issue. It then becomes easier for the support agent to trace the error using that --trace-token.
For example :
A user faced an error while creating a Compute Engine instance and contacted the Google Cloud Platform Support team. The support agent inspected the logs and other resources but could not find the root cause of the issue. The support agent then provides a --trace-token and asks the user to run the command below with the provided --trace-token.
--trace-token = abcdefgh
Command : gcloud compute instances create my-vm --trace-token abcdefgh
After the user runs the above command, the support agent can dig deeper and find the error with the help of the --trace-token.
Please note that when the --trace-token flag is used, the content of the trace may include sensitive information such as auth tokens and the contents of any accessed files. Hence, it should only be used for manual testing and not in production environments.
Some of my GCP instances are behaving in a way similar to what is described in the link below:
Google Cloud VM Files Deleted after Restart
The session sometimes gets disconnected after a short period of inactivity. On reconnecting, the machine is as if it were freshly installed (not on restarts, as in the above link). All the files are gone.
As you can see in the attachment, it creates the profile directory afresh when the session is reconnected. Also, none of the installations I have made are there; everything is lost, including the root installations. Fortunately, I have been logging all my commands and file setups manually on my client, so no work is lost, but I would like to know what is happening and resolve this for good.
This has now happened a few times.
A point to note is that if I get a clean exit, e.g. if I properly log out or exit from the SSH session, I get the machine back as I left it when I reconnect. The issue is only there when the session disconnects itself. There have also been instances where the session disconnected and I was able to connect back fine.
The issue is not there on all my VMs.
Regarding the suggestions from the link I posted above:
I am not connected to Cloud Shell; I am SSHing into the machine using the Chrome extension.
I have not manually mounted any disks (AFAIK).
I have checked the logs from gcloud compute instances get-serial-port-output --zone us-east4-c INSTANCE_NAME. I could not really make much of it. Is there anything I should look for specifically?
Any help is appreciated.
Please find the links to the logs, as suggested by @W_B.
Below is from the 8th, when the machine was restarted and the files were deleted:
https://pastebin.com/NN5dvQMK
It happened again today. I didn't run the command immediately that time; the file below is from afterwards, though:
https://pastebin.com/m5cgdLF6
The one below is from after logout today:
https://pastebin.com/143NPatF
Please note that I have replaced the user ID, system name, and many numeric values in general using regexes, so there is a slight chance that the times and other values have changed. I am not sure if that would be a problem.
I have added a screenshot of the current config from the UI.
Using a locally attached SSD seems to be the cause. It is explained here:
https://cloud.google.com/compute/docs/disks/local-ssd#data_persistence
You need to use a "persistent disk"; otherwise it will behave just as you describe.
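A minimal sketch of what that means in practice (the disk name, size, and zone are placeholders): create a persistent disk and attach it to the instance, then keep anything you care about on it rather than on the local SSD.
# create a standalone persistent disk; it survives independently of the VM
gcloud compute disks create my-data --size=100GB --zone=us-east4-c
# attach it to the existing instance
gcloud compute instances attach-disk INSTANCE_NAME --disk=my-data --zone=us-east4-c
You still need to format and mount the disk inside the VM afterwards, as described in the persistent disk documentation.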
I accidentally messed up the permissions of the file system, so attempting to use sudo (e.g. to read protected files) shows the message: sudo: /usr/local/bin/sudo must be owned by uid 0 and have the setuid bit set.
This answer (https://askubuntu.com/a/471503) suggests logging in as root to fix it; however, I didn't set up a root password before, and this answer (https://stackoverflow.com/a/35017164/4343317) suggests using sudo passwd. Obviously, I am stuck in a loop between the two answers above.
How can I read/get the files from the Google Compute Engine disk without logging in to the VM (I have full control of the VM instance and the disk)? Is there another, "higher" way to log in as root (such as from the gcloud tool or the Google Cloud interface) to access the VM disk externally?
Thanks.
It looks like the following recipe may be of value:
https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk
What this article says is that you can shut down your VM, detach its boot disk, and then attach it as a data disk to a second VM. On that second VM you will have the ability to make changes. However, if you don't know what changes are needed to restore the system to sanity, then, as @John Hanley says, you might want to use this mounting technique to copy off your work, destroy the tainted VM, recreate a fresh one, copy your work back in, and start from there.
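In gcloud terms the recipe boils down to roughly the following (a sketch only; the instance, disk, and zone names are placeholders, and the device path inside the rescue VM may differ):
# stop the broken VM so its boot disk can be detached
gcloud compute instances stop broken-vm --zone=us-central1-a
gcloud compute instances detach-disk broken-vm --disk=broken-vm-boot --zone=us-central1-a
# attach the same disk to a healthy "rescue" VM as a data disk
gcloud compute instances attach-disk rescue-vm --disk=broken-vm-boot --zone=us-central1-a
Then, inside the rescue VM, mount the disk (for example, sudo mount /dev/sdb1 /mnt) and either copy your files out or fix the ownership and setuid bit on the sudo binary before moving the disk back.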
Every time I stop a VM instance from the VM instance details page, or do
sudo reboot
from inside the VM,
when I then try to connect to the VM using SSH, I get this message:
The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item. Please follow the steps here to update.
How can I fix this error and connect to the VM?
P.S.: the mentioned steps are on this page, and I can't follow them because I can't connect to the VM.
On the page you linked to with instructions, there are TWO primary ways to add the "Guest Environment", which should stop the error you're encountering. I encountered the same error by trying to SSH in as "root". You said that you tried the first method, but first you must gain access with a username that is loaded into the virtual machine via your metadata:
Have you tried using a different username to gain SSH access to the server? (Click the gear at the top right of the browser-based SSH terminal, select "Change Linux Username", and put in a different name besides "root"; your Google account name will usually work.) If this works, then you CAN follow the instructions here.
If you have tried changing the username but get the same error, have you tried following the instructions that basically let you prepend scripts before the virtual machine starts? Those instructions are here.
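If it helps, that second method amounts to setting a startup script in the instance metadata that installs the guest environment at boot. A very rough sketch (assuming a Debian-based image whose Google package repository is already configured; the package names are an assumption, so follow the linked instructions for your exact distro):
gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install -y google-compute-engine google-guest-agent'
After resetting the instance and letting the script run, the deprecated 'sshKeys' warning should go away and normal SSH key handling should work again.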