I began putting together a project yesterday and decided that I'd like to use Cloud9 as the development IDE. When I was setting up my dev environment, I chose to create a new EC2 instance for the environment (t2.micro) and set the cost-saving setting to 30 minutes (so that the environment auto-hibernates after inactivity). I then proceeded to use Cloud9 as I had in the past, which included some changes such as upgrading the version of Node.js and installing Django. Everything worked great until I went to bed. When I woke up and opened my environment again this morning, the instance had been relaunched and none of the changes I made had persisted, so I needed to do the updates/installations all over again.
Is there a way I can avoid this without having to turn off auto-hibernate (or is the root issue something else, and if so, how can I address it)? I don't particularly want to waste a bunch of compute time having my instance just sitting there idly, but it's really annoying having to spend a chunk of my morning re-configuring everything that I did yesterday.
Are you setting the default Node version with nvm? If you manually set a Node version with the terminal but don't set it to default, the Node version will only apply to the one terminal session (it won't even persist to a new terminal tab).
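If that's the case, a minimal sketch of making the version stick, assuming nvm was used for the upgrade (the question doesn't say how Node was upgraded, and the version number here is only an example):

```
nvm install 16          # install the version you want (16 is just an example)
nvm alias default 16    # make it the default for every new shell session
nvm use default         # switch the current shell over as well
```

With the default alias set, every new terminal should pick that version up, provided the home directory itself survives the restart.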
Some of my GCP instances are behaving in a way similar to what is described in the link below:
Google Cloud VM Files Deleted after Restart
At times the session gets disconnected after a short period of inactivity. On reconnecting, the machine is as if it were freshly installed (not on restarts, as in the link above). All the files are gone.
As you can see in the attachment, the profile directory is created fresh when the session is reconnected. Also, none of the installations I have made are there; everything is lost, including the root installations. Fortunately, I have been manually logging all my commands and file setups on my client, so nothing is really lost, but I would like to know what is happening and resolve this for good.
This has now happened a few times.
One point to note: if I get a clean exit, e.g. if I properly log out or exit from the SSH session, I get the machine back just as I left it when I reconnect. The issue occurs only when the session disconnects on its own, though there have also been instances where the session disconnected and I was able to connect back fine.
The issue does not occur on all my VMs.
Regarding the suggestions from the link I posted above:
I am not connecting through Cloud Shell; I am SSHing into the machine using the Chrome extension.
I have not manually mounted any disks (AFAIK).
I have checked the logs from gcloud compute instances get-serial-port-output --zone us-east4-c INSTANCE_NAME, but I could not really make much of them. Is there anything I should look for specifically?
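For reference, this is roughly how the output can be pulled and filtered; the keywords are only a guess at what might be worth scanning for:

```
gcloud compute instances get-serial-port-output INSTANCE_NAME --zone us-east4-c \
    | grep -iE 'error|fail|disk|mount|oom'
```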
Any help is appreciated.
Please find below the links to the logs, as suggested by @W_B.
The log below is from the 8th, when the machine was restarted and the files were deleted:
https://pastebin.com/NN5dvQMK
It happened again today. I didn't run the command immediately that time, but the file below is from afterwards:
https://pastebin.com/m5cgdLF6
The one below is from after I logged out today:
https://pastebin.com/143NPatF
Please note that I have replaced the user ID, system name, and many numeric values in general using regexes, so there is a slight chance that the timestamps and other values have changed. Not sure if that would be a problem.
I have added a screenshot of the current config from the UI.
Using a locally attached SSD seems to be the cause; it is explained here:
https://cloud.google.com/compute/docs/disks/local-ssd#data_persistence
You need to use a "persistent disk"; otherwise it will behave just as you describe.
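As a rough sketch of the fix (the disk name and size are placeholders, and the zone is taken from the question), a persistent disk can be created and attached with gcloud, then formatted and mounted from inside the VM:

```
# Create a persistent disk (name and size are placeholders)
gcloud compute disks create my-data-disk --size=100GB --zone=us-east4-c

# Attach it to the existing instance
gcloud compute instances attach-disk INSTANCE_NAME --disk=my-data-disk --zone=us-east4-c
```

Data on a persistent disk survives stops and restarts, which is exactly what the local SSD does not guarantee.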
I recently restarted my AWS instance and got a new IP address, but after I restarted both Jenkins and the instance, the information about my previous jobs was no longer shown in Jenkins.
I checked the path and it still exists on the instance, but the jobs are not shown in the web UI. I tried to create another project and it was still created in the same path; only the newly created project shows up there. Any suggestions on how to recover my missing projects?
FYI: I have lots of old plugins that report "xxx failed to load", so I do not know if that is causing it.
One of my plugins did not match, and all the plugins that depend on it failed to show in the Installed section of the plugin manager. So I removed all the plugins by deleting them directly from the plugin folder, checked which versions were in my previous working copy, and downloaded the same versions of the plugins. After that, all the jobs came back on screen.
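As a rough sketch of what that looked like on the box, assuming the default JENKINS_HOME of /var/lib/jenkins (the plugin name, version, and backup path are placeholders):

```
# Plugins live as .jpi/.hpi archives under $JENKINS_HOME/plugins
cd /var/lib/jenkins/plugins
ls *.jpi *.hpi

# Move the mismatched plugin (and its exploded directory) out of the way
mkdir -p /tmp/plugins-bak
mv some-plugin.jpi some-plugin/ /tmp/plugins-bak/

# Download the version that matched the previous working copy
wget https://updates.jenkins.io/download/plugins/some-plugin/1.2.3/some-plugin.hpi

# Restart so Jenkins re-scans the plugin directory
sudo systemctl restart jenkins
```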
I recently added a few new DAGs to production Airflow and as a result decided to scale up the number of nodes in the Composer pool. After doing so I got the error: Can't decrypt _val for key=<KEY>, invalid token or value. This now happens for every single DAG that uses Variables. It's not the same key either; it depends on which Variables the DAG needs.
I immediately scaled Composer back down to 3 nodes, but the problem persisted.
I have tried re-saving all of the Variables, recreating them in the UI (which says they are all valid), and recreating them via the CLI (which lists every single one as invalid).
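For reference, the CLI checks were along these lines through gcloud (environment name, location, and the variable key/value are placeholders; this assumes the Airflow 1.x variables CLI that Composer exposed at the time):

```
# Read a variable back through the environment's Airflow CLI
gcloud composer environments run my-environment --location us-central1 \
    variables -- --get my_variable_key

# Re-create / overwrite a variable from the CLI
gcloud composer environments run my-environment --location us-central1 \
    variables -- --set my_variable_key my_value
```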
I have also tried updating the configuration to try to reboot the server, and manually stopping the VM instances.
Composer also seems to block the ability to update the Fernet key, so I can't try using a new one. For some reason the permanent key Composer has assigned now appears to be invalid.
Is there anything else that can be done to remedy that problem short of recreating the environment?
I managed to fix this problem by adding a new Python package. It seems that adding a package is the only way to really "reboot" the environment. The reboot invalidated all of my Variables and Connections when it had finished, but I was able to just add those back in rather than having to recreate the entire environment.
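Concretely, the "add a package" step was along these lines (environment name, location, and the package/version are placeholders; any small PyPI package should do):

```
# Adding or updating a PyPI package forces Composer to rebuild the environment image
gcloud composer environments update my-environment \
    --location us-central1 \
    --update-pypi-package=six==1.12.0
```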
Heard back about this issue: according to Google, Composer creates a custom image for the environment and passes one to each node, and if that gets corrupted during scaling then the only way to fix it is to add a new Python package so it rebuilds the image. Incidentally, version 1.3.0 of Composer is much better, as the scheduler is restarted every 10 minutes, which should solve some of the later issues I experienced.
I have set up an Elastic Beanstalk deployment of Drupal to host a Drupal-built website.
When I start up my EC2 instance, I go through the installation steps of setting up Drupal.
However, when the instance is restarted or stopped, bringing it back up returns me to this installation page!
How can I configure the instance so that these installation steps do not need to be repeated even when the instance goes down? This is quite worrying, as I am looking to host my website this way.
Any help on this issue would be greatly appreciated!
This screen means that Drupal can't find the database, or it can find it but the installation is not done yet (the required tables have not been created). So if you have gone through the installation and you see this screen again, it can mean one of two things:
Your database configuration is lost, so Drupal can't find the DB.
The DB configuration file is OK, but the DB itself is lost, so the setup must be done again.
So first check which problem applies in your case and then solve it. Most likely, since you are installing on the instance, the DB configuration is recorded and then lost when your box goes away. If so, find a way to make a permanent change to the config file (it should be /sites/default/settings.php).
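A quick way to tell which of the two you are hitting after a restart (the paths assume a typical Drupal docroot under /var/www/html, and the DB host/user/name are placeholders):

```
# 1. Is the database configuration still present in settings.php?
grep -F -A 10 '$databases' /var/www/html/sites/default/settings.php

# 2. Is the database itself still reachable, and does it still contain Drupal's tables?
mysql -h YOUR_DB_HOST -u YOUR_DB_USER -p -e 'SHOW TABLES;' YOUR_DB_NAME
```

If the settings.php entry is missing, the configuration is being wiped along with the instance; if it is there but the tables are gone, the database itself is not persistent. Keeping the database outside the instance (for example in RDS) and the credentials in your deployment bundle is one common way to make both survive.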
I've been using a Windows 2008R2 EC2 instance for some time. As of today, it still works. I started working with the AWS API, and I was unable to start my instance using the API, the error message being "not authorized for images", specifically : An error occurred (AuthFailure) when calling the RunInstances operation: Not authorized for images: [ami-088dab1e]
That's when I learned about deprecation.
From what I read, what this means is that the AMI being used is no longer publicly available. When using the API call "describe-images", this image cannot be queried. While it apparently can still be used from the console, the API simply doesn't support it and will not start an instance using that image ID. On the console, the AMI description reads : Cannot load details for ami-088dab1e. You may not be permitted to view it.
I understand how to find a new image and I think I understand how to launch my instance using a new image. However, I have lots of custom software installed on this instance. So before I try it, I want to know if I will lose that custom software installation if I launch my existing instance with a new AMI. I'm hoping that my custom software won't change, but I'm skeptical. I don't want to fire up a brand new version of Windows and start from scratch. Mostly, I don't want to lose what I've already got.
I know this is a basic question, but I've looked all over, and I haven't yet found a straightforward answer. I was hoping y'all would know. Thanks.
I think I've found an answer here: AWS EC2 new instance from image AMI
When launching an instance from an Amazon Machine Image (AMI), the disks will contain an exact copy of the disk at the time that the AMI was created.
In other words, if I start a new instance, I'll lose my installed software. WRONG!
Launching != starting. More editing to come once I get this completely figured out.
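A sketch of the distinction with the CLI (the instance ID is a placeholder; the AMI ID is the one from the error above, and note that the error came from the RunInstances call, i.e. the launch path, not from StartInstances):

```
# "Starting" boots the stopped instance you already have; its EBS volumes,
# and therefore the installed software, are left untouched.
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# "Launching" creates a brand-new instance whose disks are an exact copy of the
# AMI at the time the AMI was created; none of your later changes are on it.
aws ec2 run-instances --image-id ami-088dab1e --instance-type t2.micro
```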
So, given that updated Windows images are created and deprecated all the time, and the Windows OS is constantly updated by Microsoft, one must wonder how a static Windows image can be used with other software. It seems like far more trouble than it's worth if you've got to constantly reinstall your software to keep your Windows system up to date.
Amazon recently came up with a solution for that, here: Patching Windows
I don't know how to do it yet, but this seems like exactly what I need in order to keep Windows up to date, and keep my installed software intact.