I had Neo4j running on an m3.medium instance, only to realize that I was being charged for AWS usage. No issue there; since I am just experimenting at this point, I'd like to run Neo4j on a t2.micro instance. I followed the AWS instructions to resize to a t2.micro, and now I cannot access the Neo4j server. My Neo4j stack is up and running, but I get a 503 Service Unavailable error.
What am I missing?
Neo4j should run fine on a t2.micro; I even have it running on a Raspberry Pi for demo purposes. You just need to take care to set the heap size and page cache size correctly. Try 512M for the heap and 200M for the page cache, leaving roughly 300M for the system.
If all memory is occupied, sshd cannot allocate memory for new connections.
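For reference, a minimal sketch of applying that split by editing neo4j.conf from Python. The property names below are the Neo4j 3.x ones (dbms.memory.heap.* and dbms.memory.pagecache.size); older versions use different keys, and the config path is an assumption that depends on how Neo4j was installed:

```python
# Sketch: pin Neo4j heap and page cache so a t2.micro (1 GB RAM) is not exhausted.
# Assumes Neo4j 3.x property names and a Debian-style install path -- adjust for your setup.
CONF = "/etc/neo4j/neo4j.conf"

settings = {
    "dbms.memory.heap.initial_size": "512m",
    "dbms.memory.heap.max_size": "512m",
    "dbms.memory.pagecache.size": "200m",
}

with open(CONF) as f:
    lines = f.readlines()

seen = set()
out = []
for line in lines:
    # Normalize "key=value" (possibly commented out) down to the bare key.
    key = line.split("=", 1)[0].replace("#", "").strip()
    if key in settings:
        out.append(f"{key}={settings[key]}\n")   # overwrite or uncomment existing entries
        seen.add(key)
    else:
        out.append(line)

# Append any keys that were not present in the file at all.
out.extend(f"{k}={v}\n" for k, v in settings.items() if k not in seen)

with open(CONF, "w") as f:
    f.writelines(out)
```

Restart the Neo4j service afterwards so the new limits take effect.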
After installing clamscan/clamav on an Ubuntu 18.04 AWS EC2 instance, I can't log in to my server with SSH, and the website on that server no longer shows up in the browser. I have rebooted, but it's still not working. How do I fix this?
Common reasons for unresponsive instances are exhausted memory and a corrupted file system.
Since you are using a t2.micro, which has only 1GB of RAM and, by default, an 8GB disk, it's possible that your instance is simply too small to run your workloads. In that case, a common solution is to upgrade it to, e.g., a t2.medium (4GB of RAM), but such a change falls outside the free tier.
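If you go the resize route, the change is a stop/modify/start cycle. A hedged boto3 sketch (the instance ID and target type are placeholders, and t2.medium is not free-tier):

```python
# Sketch: resize a stopped EC2 instance with boto3.
# INSTANCE_ID and the target instance type are placeholders.
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical instance ID

ec2 = boto3.client("ec2")

ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# The instance type can only be changed while the instance is stopped.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": "t2.medium"},
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```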
Alternatively, you can reinstall your application on a new t2.micro, but this time set up the CloudWatch Agent to monitor RAM and disk use. By default these are not monitored. Monitoring them on the new instance will give you insight into how much RAM, disk, and other resources your applications actually use.
The metrics collected in CloudWatch will help you judge what is causing your instance to freeze.
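Once the agent is publishing, here is a hedged boto3 sketch for pulling the memory metric back out. The CWAgent namespace and mem_used_percent metric name are the agent's defaults; adjust them if you customized the agent config, and the instance ID is a placeholder:

```python
# Sketch: read the CloudWatch Agent's memory metric for one instance over the last 6 hours.
from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch")

resp = cw.get_metric_statistics(
    Namespace="CWAgent",                 # default namespace used by the CloudWatch Agent
    MetricName="mem_used_percent",       # default memory metric name
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,                          # 5-minute buckets
    Statistics=["Average", "Maximum"],
)

# Print the datapoints in time order to spot memory spikes before a freeze.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```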
I'm using a free-tier EC2 instance (t2.micro) with the default EBS volume, and I host a website in a Docker container.
Occasionally, when I run out of storage, I have to bulk-delete my dangling Docker images. After deleting them and rebuilding the container, the instance hangs (while installing the npm modules), and I'm not even able to SSH into my machine for almost 1-2 hours.
From some research, I gathered this has something to do with burst credits, but on inspecting my EBS burst balance I still have 60 credits left, and I have around 90 CPU credits.
I'm not sure why this unresponsiveness is happening; the instance even stops serving the website it hosts for 1-2 hours afterwards.
For reference, this is my Dockerfile.
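As an aside on the cleanup step itself, the dangling-image sweep can also be scripted instead of done by hand. A minimal sketch using the Docker SDK for Python (assumes the docker package is installed and the local Docker daemon is reachable):

```python
# Sketch: remove dangling images via the Docker SDK instead of shelling out.
import docker

client = docker.from_env()

# Prune only dangling (untagged) images and report how much space was freed.
result = client.images.prune(filters={"dangling": True})
print("Reclaimed bytes:", result.get("SpaceReclaimed", 0))
```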
Do you have any suggestions on an instance type for ZooKeeper deployed on AWS?
By the way, I use ZooKeeper mainly for ClickHouse, which means it is used to transmit metadata and log entries (with autopurge enabled).
Do not use HDD. EBS GP2 is OK.
You need at least 4GB of RAM for normal ZK operation (without an OOM every day) -- e.g. EC2 a1.large / c4.large (with 16GB of swap).
ZK is an in-memory DB, so if your ZK database (snapshot) is around 100MB, 4GB is OK; if it's bigger, you need more memory.
So go with an EC2 large or xlarge.
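To sanity-check which side of that rule of thumb you are on, a small sketch that reports the size of the latest snapshot. The dataDir path below is an assumption -- use whatever dataDir is set in zoo.cfg; snapshots live in its version-2 subdirectory:

```python
# Sketch: report the size of the latest ZooKeeper snapshot to help size the heap/instance.
# /var/lib/zookeeper is an assumed dataDir -- substitute the dataDir from your zoo.cfg.
import glob
import os

DATA_DIR = "/var/lib/zookeeper/version-2"

snapshots = glob.glob(os.path.join(DATA_DIR, "snapshot.*"))
if not snapshots:
    raise SystemExit("no snapshots found -- check dataDir in zoo.cfg")

latest = max(snapshots, key=os.path.getmtime)
size_mb = os.path.getsize(latest) / (1024 * 1024)
print(f"latest snapshot: {latest} ({size_mb:.1f} MB)")

# Per the rule of thumb above: a ~100MB snapshot fits comfortably in a 4GB box;
# a noticeably larger snapshot calls for more memory.
```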
I would like to shrink the SSD boot disk of my Google Cloud instance from 500 GB to 150 GB. The instance has Ubuntu 16.04.5 LTS and Plesk Onyx installed, and a web and mail server are running on it, which is currently my biggest problem.
My idea is to create a new instance and attach a mirror of the current disk as the boot disk of the new instance. But how do I mirror the disk without downtime for the mail and web server? Or, if I have to stop both services, what is the best way to mirror the disk?
Any experiences? Or tips?
The 500 GB SSD is more expensive than we thought, which is why we have to reduce the disk size.
Thanks
To avoid downtime, I can suggest the following action plan:
Deploy a new instance with the required parameters.
Perform a migration to the new instance. You can find documentation here, and while it may seem complex, when you have two instances with the same Plesk version and the same list of installed components, it is a pretty straightforward process.
When the migration is finished, switch the public IP (or IPs) over to the new instance.
Make sure that everything works fine and get rid of the overpriced instance.
We're looking for the best way to deploy a small production Cassandra cluster (Community edition) on EC2. For performance reasons, most recommendations are to avoid EBS.
But when deploying the DataStax-provided AMI with ephemeral storage, the instance dies permanently whenever the ephemeral storage is wiped; a manual stop/start, or one triggered by AWS for maintenance, renders the instance unusable.
OpsCenter fails to fix the instance after a reboot and the instance does not recover on its own.
I'd expect the instance to launch itself back up, run some script to detect that the ephemeral storage was wiped, and sync with the cluster. Since it does not, the AMI looks appropriate only for dev tasks.
Can anyone help us understand what the alternative is? Thanks to replication we can live with the momentary loss of a node, but if a node never recovers and a new cluster is required, this looks like a dead end for a production environment.
Is there a way to install Cassandra on EC2 so that it recovers from the loss of ephemeral storage?
If we buy a license for the Enterprise edition, will this problem go away?
Does this mean that, despite the lower performance, EBS-optimized instances with PIOPS volumes are the best way to run Cassandra on AWS?
Is the recommendation simply to avoid stopping and starting the instance and hope that AWS never retires or reallocates the host machine? What is the recommended approach in that case?
What about AWS rolling updates? Upgrading one machine (killing it), starting it again, and then proceeding to the next machine would erase all cluster data, since the machines come back responsive even though Cassandra on them does not. That way a small (e.g. 3-node) cluster can be destroyed.
Has anyone had a good experience with paid services such as Instaclustr?
Newer docs from DataStax actually indicate that EBS-optimized instances backed by gp2 SSD volumes can be used for production workloads. With EBS-backed storage you can easily take snapshots, which virtually eliminates the chance of data loss on a node, and nodes can be migrated to a new host with a simple stop/start.
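For the snapshot part, a hedged boto3 sketch (the volume ID is a placeholder, and it's common practice to flush memtables on the node first, e.g. with nodetool flush, so the on-disk SSTables are consistent with what the snapshot captures):

```python
# Sketch: take an EBS snapshot of a Cassandra data volume with boto3.
# VOLUME_ID is a placeholder for the node's data volume.
import boto3

VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical data volume

ec2 = boto3.client("ec2")

snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="cassandra data volume backup",
)

# Wait until the snapshot is fully captured before relying on it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("snapshot ready:", snapshot["SnapshotId"])
```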
With ephemeral storage you basically have to plan around failure; consider what happens if your entire cluster is in a single region (SimpleSnitch) and that region goes down.
http://docs.datastax.com/en/cassandra/3.x/cassandra/planning/planPlanningEC2.html