I am currently having a strange issue that I am hoping I can get some help with.
I am attempting to start GCE instances with a startup script stored in Google Cloud Storage. Regardless of whether I launch the instance from the command line or the web UI, and even though the config shows the appropriate metadata pair, my startup script does not execute.
I can see in my instance details that the metadata entry for the script URL exists.
But when I look in the logs, I get the following: "INFO No startup scripts found in metadata"
Anyone have any advice?
Apologies; I figured out the problem.
startup_script_url != startup-script-url
Use hyphens, not underscores.
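For example, a create command along these lines should pick up the script (the instance name, zone, and bucket path are placeholders):

# Boot a VM that fetches its startup script from a Cloud Storage object.
# The metadata key must be startup-script-url (hyphens).
gcloud compute instances create my-instance \
    --zone=us-east1-b \
    --metadata=startup-script-url=gs://my-bucket/startup.sh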
I am trying to create a VM instance in Google Cloud Platform and am getting an error in the process, which I am trying to resolve.
Error: **Could not fetch a resource:
Invalid value for field 'resource.networkInterfaces[0].subnetwork': 'https://compute.googleapis.com/compute/v1/projects/xxx/regions/us/subnetworks/10.128.0.0/20'. The URL is malformed.**
Anyone, please guide me. My intention is to automate VM creation and keep it simple by putting it all together in a Bash script.
The error indicates that the URL is malformed. This is probably because the subnetwork does not exist as you wrote it: the value after /subnetworks/ should be the subnet's name, not its IP range.
One way to fix it is to look at the documentation for the right way to write the command. Also make sure that the subnet exists in your GCP project (you can list the subnets to check, as shown after the link below).
https://cloud.google.com/vpc/docs/create-use-multiple-interfaces
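For instance, a quick way to confirm the subnet's real name is to list the subnets in the region (the region below is just an example):

# List the subnet names, regions and ranges visible in the current project.
gcloud compute networks subnets list --regions=us-central1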
The easy way to avoid typos is to start creating the VM in the console the first time (you don't actually have to create it, just fill in the form). At the bottom of the page you will see a line that says "Equivalent REST or command line"; click "command line" to see exactly the CLI command equivalent to the VM you are configuring. Use this command in your CLI console or script.
Clicking "command line" will return something like:
gcloud compute instances create VM_NAME \
--network=NETWORK_NAME \
--subnet=SUBNET_NAME \
--zone=ZONE
with all the parameters already filled in for you.
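As a rough illustration, the generated command ends up looking something like this, with the subnet referenced by its name rather than by its CIDR range (all names below are placeholders):

# Create the VM on an existing custom network/subnet, identified by name.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --network=my-network \
    --subnet=my-subnet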
I want to create users in Windows Server on Google Cloud during instance creation. I searched the Google Cloud documentation and other sites but could not find answers. I am aware of startup scripts, but those run every time the machine boots up. Please help.
You can use GCP startup script to do it. Please have a look at the documentation Running startup scripts. For example, you can easily add a user John and add him to the group Remote Desktop Users by using metadata:
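A minimal sketch of that metadata, set here via gcloud (the instance name and zone are placeholders; windows-startup-script-cmd is the metadata key Compute Engine reads for cmd startup scripts on Windows VMs):

# Add a cmd startup script that creates the user and grants RDP access.
gcloud compute instances add-metadata my-windows-vm \
    --zone=us-central1-a \
    --metadata=windows-startup-script-cmd='net user John fadf24as.FD* /add & net localgroup "Remote Desktop Users" John /add'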
and, as a result, you'll be able to log in via RDP to your VM instance with the username John and the password fadf24as.FD*.
By default, such a script will be executed during each start cycle of the VM instance:
Compute Engine lets you create and run your own startup scripts on
your virtual machine (VM) instances to perform automated tasks every
time your instance boots up.
To change this default behavior, you can add an additional step, such as creating a folder or file and using it as a flag: if the folder or file already exists, the rest of the script is skipped. For this, PowerShell is more suitable than cmd, and the final script could be uploaded from a Google Cloud Storage bucket. A minimal cmd version of the guard is sketched below.
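As a sketch of that flag idea written inline in cmd (marker-file path and names are illustrative; a PowerShell script fetched from a bucket would follow the same pattern):

# Run the one-time user setup only if the marker file is absent,
# then create the marker so later boots skip it.
gcloud compute instances add-metadata my-windows-vm \
    --zone=us-central1-a \
    --metadata=windows-startup-script-cmd='if not exist C:\provisioned.flag (net user John fadf24as.FD* /add & net localgroup "Remote Desktop Users" John /add & echo done > C:\provisioned.flag)'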
I'm currently using the "Ruby 2.6 running on 64bit Amazon Linux 2/3.0.2" image, and by looking inside the EC2 instance at /var/log/eb-engine.log (the "eb logs" command won't show me this), I see a recurring error:
[ERROR] failed to parse JSON file
/opt/elasticbeanstalk/deployment/app_version_manifest.json with error:
json: cannot unmarshal string into Go struct field
AppVersionManifest.Serial of type uint64
When I check that file, I do not know what is wrong with it, or what is preventing that file from being parsed, if that is actually the problem:
{ "RuntimeSources":{"my_api":{"my_api-source_alfa0.2":"s3url":""}}},"DeploymentId":9,"Serial":"23","VersionLabel":"my_api-source_alfa0.2"}
The serial "23" seems pretty parsable to me. Please help!
What causes this
I believe this is a bug.
In some cases, this can occur if you try to terminate or rebuild your Elastic Beanstalk environment and the operation fails to delete your AWSEBSecurityGroup.
There are reports (see comments) of other causes besides this.
How to fix it
The AWS document How do I terminate or rebuild my AWS Elastic Beanstalk environment when the AWSEBSecurityGroup fails to delete? describes how to resolve this, but I excerpted the main steps below, in case that link ever breaks:
1. Open the AWS CloudFormation console.
2. From the Stack Name column, choose the stack that failed to delete.
   Note: The Status column of your stack shows DELETE_FAILED.
3. From the Actions menu, choose Delete Stack.
4. In the Delete Stack pop-up window, choose AWSEBSecurityGroup, and then choose Yes, Delete.
5. Terminate or rebuild the Elastic Beanstalk environment.
The linked docs have other steps if you prefer the CLI or have a more complex setup.
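For reference, the CLI equivalent of the console steps above is roughly a retry of the deletion that retains the stuck security group (the stack name is a placeholder; --retain-resources only applies to stacks already in DELETE_FAILED):

# Retry deleting the failed stack while leaving AWSEBSecurityGroup behind.
aws cloudformation delete-stack \
    --stack-name awseb-e-xxxxxxxxxx-stack \
    --retain-resources AWSEBSecurityGroup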
Then what?
After you've deleted the group and rebuilt your environment, you won't get the app_version_manifest.json error any more. Deploy your app.
Once it's done, if you SSH in and run…
cat /opt/elasticbeanstalk/deployment/app_version_manifest.json
…you'll notice that Serial is now correctly represented as a JSON number.
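For comparison, a manifest that parses cleanly looks roughly like this (the values are illustrative); the key difference is that Serial is a bare number rather than a quoted string:

{"RuntimeSources":{"my_api":{"my_api-source_alfa0.2":{"s3url":""}}},"DeploymentId":10,"Serial":24,"VersionLabel":"my_api-source_alfa0.2"}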
I am having some of my GCP instances behave in a way similar to what is described in the below link:
Google Cloud VM Files Deleted after Restart
At times the session gets disconnected after a short period of inactivity. On reconnecting, the machine is as if it were freshly installed (not on restarts, as in the link above). All the files are gone.
As you can see in the attachment, it creates the profile directory afresh when the session is reconnected. Also, none of the installations I have made are there. Everything is lost, including the root installations. Fortunately, I have been logging all my commands and file setups manually on my client, so nothing is lost, but I would like to know what is happening and resolve this for good.
This has now happened a few times.
A point to note is that if I get a clean exit, i.e. if I properly log out or exit from the SSH session, I get the machine back as I left it when I reconnect. The issue is there only when the session disconnects by itself. There have also been instances where the session disconnected and I was able to connect back fine.
The issue is not there on all my VMs.
From the suggestions from the link I have posted above:
I am not connected to Cloud Shell; I am SSHing into the machine using the Chrome extension.
I have not manually mounted any disks (as far as I know).
I have checked the logs from gcloud compute instances get-serial-port-output --zone us-east4-c INSTANCE_NAME. I could not really make much of it. Is there anything I should look for specifically?
Any help is appreciated.
Please find the links to the logs as suggested by @W_B.
Below is from the 8th, when the machine was restarted and the files were deleted:
https://pastebin.com/NN5dvQMK
It happened again today. I didn't run the command immediately this time; the file below is from afterwards, though:
https://pastebin.com/m5cgdLF6
The one below is from after logging out today:
https://pastebin.com/143NPatF
Please note that I have replaced the user id, system name and a lot of numeric values in general using regexp. So, there is a slight chance that the time and other values have changed. Not sure if that would be a problem.
I have added the screenshot of the current config from the UI
Using a locally attached SSD seems to be the cause. It is explained here:
https://cloud.google.com/compute/docs/disks/local-ssd#data_persistence
You need to use a "persistent disk"; otherwise it will behave just as you describe.
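If you want to verify what a given instance is using, something along these lines shows the attached disks; persistent disks report type PERSISTENT, while local SSDs report SCRATCH (the instance name and zone are whatever yours are):

# Inspect the disks attached to the instance, including their type.
gcloud compute instances describe INSTANCE_NAME \
    --zone=us-east4-c \
    --format="yaml(disks)"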
My Issue
I just deployed my first application to AWS Beanstalk. I have logging in my application using logback. When I download all logs from AWS, I get a huge bundle:
Not only that, but it is pretty annoying to log in, navigate to my instance, download a big zip file, extract it, navigate to my log, open it, then parse for the info I want.
The Question
I really only care about a single one of the log files on AWS - the one I set up my application to create.
What is the easiest way to view only the log file I care about? Best solution would display only the one log file I care about in a web console somewhere, but I don't know if that is possible in AWS. If not, then what is the closest I can get?
You can use the EB console to display logs, or the eb logs command-line tool. By default, each will only show the last 100 lines of each log file. You could also script ssh or scp to just retrieve a single log file.
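For example, if you know where your logback appender writes on the instance, a single scp is enough (the key, host, and remote path below are placeholders for your own values):

# Pull just the application log file off the instance.
scp -i ~/.ssh/my-eb-keypair.pem \
    ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/var/app/current/log/application.log .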
However, the best solution is probably to publish your application log file to a service like Papertrail or Loggly. If and when you move to a clustered environment, retrieving and searching log files across multiple machines will be a headache unless you're aggregating your logs somehow.