Amazon Elasticsearch 1.5 hangs while re-indexing

I'm using a micro ES instance from Amazon with 2 nodes.
However, while I'm re-indexing my data (around 300,000 docs, 300 MB), the instance becomes unresponsive several times. It usually hangs when something tries to read from the instance at the same time.
I'm using this instance in production for my website, and this issue is causing me big headaches.
Is anyone experiencing the same issues? Would it help if I moved to:
1) a larger instance?
2) a 2.x version?
Thank you

The only time I've had issues with ES queries being unresponsive during re-indexing is when resources are exhausted, so I would advocate a larger instance.
You should use CloudWatch metrics to determine the resource usage on your current instance, both during a period when it's running well and during your re-index. Use this information to choose the best instance type; the pricing table at https://aws.amazon.com/elasticsearch-service/pricing/ will give you an idea of what resources each type provides.
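As a rough sketch of how you might pull those metrics with boto3 (the domain name and account ID below are placeholders; JVMMemoryPressure and FreeStorageSpace are also worth checking):

```python
import boto3
from datetime import datetime, timedelta

# Pull CPU utilization for an Amazon ES domain over the last 24 hours.
# Amazon ES publishes metrics in the "AWS/ES" namespace, keyed by
# DomainName and ClientId (your AWS account ID).
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ES",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "DomainName", "Value": "my-domain"},   # placeholder domain
        {"Name": "ClientId", "Value": "123456789012"},  # placeholder account ID
    ],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=300,  # 5-minute buckets
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```

Compare the values from a quiet period against those during the re-index; if the re-index pins the CPU or memory pressure, that points to the instance size rather than a configuration problem.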


Availability of V100 and P100 on Google Compute Engine

Description
For some time I've been trying to set up or reserve a virtual machine for machine learning with my personal account (which I've been using for some months): an n1 instance with around 8 or more GB of RAM and either a P100 or a V100. By now I've tried at least half of all zones with P100/V100 availability, and I always get a resource error like this one:
Operation type [insert] failed with message "The zone 'projects/lexical-list-285719/zones/us-central1-c' does not have enough resources available to fulfill the request. Try a different zone, or try again later."
In short: no resources available in zone-x. I recently switched from the free trial.
Questions:
A) Is that common?
B) Is there a fix?
C) What (if anything) can I do to get a machine with these specifications, or similar performance?
I know this is because the zone doesn't have these resources available and that I'm supposed to try switching. I'm also aware of managed instance groups. But it can't be that difficult, can it?
Is Google that booked out?
Possible Solutions
My current ideas for fixing it:
a multi-zone managed instance group (I still have to check whether my project is compatible with that)
a Cloud Shell script that iterates through all available zones (I'd need to research how shell scripts work; see the sketch below)
Anyone with experience on this topic who can share their experience with these solutions, or with better ones, would be very much appreciated.
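Something like the following might work as a starting point for the iteration idea, a minimal Python sketch that shells out to gcloud (instance name, machine type, accelerator, and zone list are all placeholders):

```python
import subprocess

# Try to create the same instance in each candidate zone until one succeeds.
# Adjust machine type, accelerator, and image options to your actual setup.
ZONES = ["us-central1-a", "us-central1-c", "us-west1-b", "europe-west4-a"]

def try_create(zone: str) -> bool:
    result = subprocess.run(
        ["gcloud", "compute", "instances", "create", "ml-box",
         "--zone", zone,
         "--machine-type", "n1-standard-8",
         "--accelerator", "type=nvidia-tesla-v100,count=1",
         "--maintenance-policy", "TERMINATE"],  # required for GPU instances
        capture_output=True, text=True,
    )
    return result.returncode == 0

for zone in ZONES:
    if try_create(zone):
        print(f"Success in {zone}")
        break
    print(f"No capacity in {zone}, trying the next zone...")
```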
A good answer for me would not include any of the following:
Zone switching (tried that)
A smaller machine (tried that, and the project doesn't work on too small a machine)
Reserving (tried that)
Waiting (already know about that, and it doesn't help if I want a machine right now)
Though I'd recommend that anyone with a less persistent or urgent issue do just those.
It's not an issue on your side; events like this happen from time to time.
This error message means that there are no resources (CPU/RAM/GPU) available on Google's side in that particular zone. You can find more details in the Troubleshooting VM creation section of the documentation, under Resource availability:
Resource errors occur when you try to request new resources in a zone
that cannot accommodate your request due to the current unavailability
of a Compute Engine resource, such as GPUs or CPUs.
Resource errors only apply to new resource requests in the zone and do
not affect existing resources. Resource errors are not related to your
Compute Engine quota and only apply to the resource you specified in
your request at the time you sent the request, not to all resources in
the zone.
Resource availability depends on user demand and is therefore dynamic.
There are a few ways to solve this issue:
Try to create your instance in another zone where GPUs are available (request a quota increase if needed).
Wait for a while and try again.
Request a smaller VM if possible; later you can try requesting a bigger one (the same principle as for quota requests).
Reserve resources for your VM by following the documentation, to avoid this issue in the future (extra payment required; see the sketch below).
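For the last option, a hedged sketch of creating a reservation via gcloud (the reservation name, zone, machine type, and accelerator below are placeholders):

```python
import subprocess

# Reserve capacity ahead of time so the zone holds resources for you.
subprocess.run(
    ["gcloud", "compute", "reservations", "create", "ml-reservation",
     "--zone", "europe-west4-a",
     "--vm-count", "1",
     "--machine-type", "n1-standard-8",
     "--accelerator", "type=nvidia-tesla-v100,count=1"],
    check=True,
)
```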
I had the same issue while trying to create V100s; I was able to get it working by switching zones to europe-west4.
What I tried, if you're curious: all the sub-zones in us-central1 (failed), one sub-zone in us-west1 (failed), and finally europe-west4 (success).
This tells me it's due to the zones not having the GPU available. I really wish Google wouldn't list a GPU as an option in a zone where it can't actually provision one, or would at least provide another way of knowing.

Is there an `initial_workers` (cluster.yaml) replacement mechanism in ray tune?

Let me briefly describe my use case. Assume I want to spin up a cluster with 10 workers on AWS:
In the past I always used the initial_workers: 10, min_workers: 0, max_workers: 10 options (cluster.yaml) to initially spin up the cluster to full capacity and then exploit the automated downscaling of the cluster based on idle time. So at the end of the job, when almost all trials have terminated and the full capacity of the cluster is no longer needed, nodes are automatically removed.
Now, with the initial_workers option gone (#12444), it is not really clear to me how to accomplish the same downscaling behavior.
I experimented with the programmatic way to request resources (ray.autoscaler.sdk.request_resources) before and after tune.run, but this seems to be the same as setting the min_workers field, and I can only downscale the cluster after all jobs have terminated.
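For reference, a minimal sketch of that programmatic approach (the trainable and the bundle count of 10 are placeholders):

```python
import ray
from ray import tune
from ray.autoscaler.sdk import request_resources

def my_trainable(config):
    # Placeholder for the actual training function.
    tune.report(score=0.0)

ray.init(address="auto")  # connect to the running cluster

# Ask the autoscaler up front for enough capacity for 10 single-GPU trials.
request_resources(bundles=[{"GPU": 1}] * 10)

tune.run(my_trainable, num_samples=100, resources_per_trial={"gpu": 1})

# Each call overrides the previous one, so an empty request lets idle
# nodes scale down again after the run.
request_resources(bundles=[])
```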
I also tried setting upscaling_speed, but for some reason upscaling is very slow and seems to add only one node at a time (I am requesting GPUs). There is also always only one pending task, which I do not really understand yet (unfortunately, I don't really have the time to investigate this fully :()
Currently I am using the programmatic way described above, which works fine, but then I have a lot of idle resources at the end of the job that run for hours before I can downscale.
It would be great if someone could point me in the right direction to solve this.
Thx
With Ray version 1.3.0, the autoscaler issues I observed seem to be resolved, and the cluster now scales with the pending trials as expected (using AWS EC2 g4dn instances). So there is no need for the initial_workers option anymore.

not have enough resources available to fulfil the request try a different zone

All of my machines in the different zones have the same issue and cannot run:
"Starting VM instance "home-1" failed.
Error:
The zone 'projects/extreme-pixel-208800/zones/us-west1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later."
I am having the same issue. I emailed Google and found out this has nothing to do with quota. However, you can try to decrease your instance's requirements (e.g. less RAM, fewer CPUs or GPUs), as sketched below. It might work if you are lucky.
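If you go that route, one hedged sketch driving gcloud from Python (the instance is likely already stopped in this situation, but the stop call is included for completeness; all names and types are placeholders):

```python
import subprocess

# Downsize an instance so the zone is more likely to have capacity for it.
for cmd in (
    ["gcloud", "compute", "instances", "stop", "home-1",
     "--zone", "us-west1-b"],
    ["gcloud", "compute", "instances", "set-machine-type", "home-1",
     "--zone", "us-west1-b", "--machine-type", "n1-standard-4"],
    ["gcloud", "compute", "instances", "start", "home-1",
     "--zone", "us-west1-b"],
):
    subprocess.run(cmd, check=True)
```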
Secondly, if you email Google, you will get a message based on the following template:
Good day! This is XX from Google Cloud Platform Support and I'll be
glad to help you from here. First, my apologies that you’re
experiencing this issue. Rest assured that the team is working hard to
resolve it.
Our goal is to make sure that there are available resources in all
zones. This type of issue is rare, when a situation like this occurs
or is about to occur, our team is notified immediately and the issue
is investigated.
We recommend deploying and balancing your workload across multiple
zones or regions to reduce the likelihood of an outage. Please review
our documentation [1] which outlines how to build resilient and
scalable architectures on Google Cloud Platform.
Again, we want to offer our sincerest apologies. We are working hard
to resolve this and make this an exceptionally rare event. I'll be
keeping this case open for one (1) business day in case you have
additional question related to this matter, otherwise you may
disregard this email for this ticket to automatically close.
All the best,
XXXX Google Cloud Platform Support
[1] https://cloud.google.com/solutions/scalable-and-resilient-apps
So, if you ask me how long you should expect to wait and when this issue is likely to happen:
On average I waited 1.5-3 days.
During weekends (Friday through Sunday), daytime EST, GCP has a high probability of unavailable resources.
Usually when one instance has this issue, the others do too. For me, retrying in different regions was a waste of time (but maybe I just didn't have any luck).
The error message "The zone 'projects/[...]' does not have enough resources available to fulfill the request. Try a different zone, or try again later." is always in reference to a shortage of resources in a zone.
Google recommends spreading your workload across different zones to reduce the impact of these issues. Otherwise, there isn't much to do other than wait or try another zone/region.
I faced this issue yesterday [01/Aug/2020] when my GCP free credit ran out, and the steps below helped me work around it.
I was in the asia-south-c zone and moved to a us zone:
Go to Google Cloud Platform >>> Compute Engine.
Go to Snapshots >>> create a snapshot >>> select your Compute Engine instance.
Once the snapshot is complete, click on it.
You'll end up under "Snapshot details". There, at the top, just click Create instance. Here you are basically creating an instance with a copy of your disk.
Select your new zone, don't forget to attach GPUs and all your previous settings, and give it a new name.
Click Create; that's it, your image should now be running in the new zone.
No worries about losing your configuration, either.
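The same migration can also be scripted; here's a hedged sketch driving gcloud from Python (disk, snapshot, and instance names, zones, and GPU flags are placeholders that must match your actual setup):

```python
import subprocess

def run(*cmd):
    # Thin wrapper that raises if a gcloud command fails.
    subprocess.run(cmd, check=True)

# 1) Snapshot the boot disk in the old zone.
run("gcloud", "compute", "disks", "snapshot", "home-1",
    "--zone", "us-west1-b", "--snapshot-names", "home-1-snap")

# 2) Create a new disk from the snapshot in the target zone.
run("gcloud", "compute", "disks", "create", "home-1-disk",
    "--source-snapshot", "home-1-snap", "--zone", "us-central1-a")

# 3) Boot a new instance from that disk, re-attaching the GPU.
run("gcloud", "compute", "instances", "create", "home-1-new",
    "--zone", "us-central1-a",
    "--machine-type", "n1-standard-8",
    "--accelerator", "type=nvidia-tesla-v100,count=1",
    "--maintenance-policy", "TERMINATE",
    "--disk", "name=home-1-disk,boot=yes")
```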

Recover session after Network Error in AWS

I'm a beginner AWS user, and I'm using an EC2 instance for MCMC sampling, which takes several hours. Unfortunately, I had a network problem in the middle of the sampling and got this message:
Network error: Software caused connection abort
So I had to reboot the instance, losing all of my work (but not my data).
Is there a way to set up the instance to avoid this issue?
Thank you in advance
I'm unsure what MCMC sampling means, but I'll try to guess.
The only way not to lose information in such cases is to store it in reliable storage, e.g. S3.
If you mean long calculations, then you need to parallelize them, or at least subdivide them into smaller chunks, and then store the queue, its status, and the intermediate results in that reliable storage. Perhaps the code has to be modified. If your calculations can be parallelized, you may want to look into SQS and spot instances; sometimes you can save a lot of money.
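A minimal checkpointing sketch along those lines, using boto3 (the bucket name and key scheme are placeholders); after a crash you resume from the last uploaded state instead of starting over:

```python
import pickle
import boto3

s3 = boto3.client("s3")
BUCKET = "my-mcmc-checkpoints"  # placeholder bucket name

def save_checkpoint(state, step):
    # Persist the sampler state periodically so a dropped session costs
    # at most one checkpoint interval of work.
    s3.put_object(Bucket=BUCKET, Key=f"chain/step-{step:08d}.pkl",
                  Body=pickle.dumps(state))

def load_latest_checkpoint():
    # Resume from the most recent checkpoint, if any exists.
    objs = s3.list_objects_v2(Bucket=BUCKET, Prefix="chain/").get("Contents", [])
    if not objs:
        return None
    latest = max(objs, key=lambda o: o["LastModified"])
    body = s3.get_object(Bucket=BUCKET, Key=latest["Key"])["Body"].read()
    return pickle.loads(body)
```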
If my guess is incorrect, then please clarify.
Instead of restarting, rebooting the instance will fix this issue most of the time: an instance reboot persists any data on its instance store volumes.

AWS autoscaling an existing instance

This question has a conceptual part and a practical part.
Conceptually, I'd like to know whether using the autoscaling functionality is equivalent to simply multiplying the compute power by the number of added instances.
Practically, how does this work? I have one running instance, with its database sitting on an LVM volume composed of multiple EBS volumes, and similarly for all the website data. Judging from the load on the instance, I either need to upgrade to a more powerful instance or introduce this autoscaling. Is each new instance a copy of the running server? If so, how is the database (etc.) kept consistent?
I've read through the AWS documentation and still don't have the full picture. I could set up an autoscaling group, which would probably clear up my doubts, but I am very leery of doing this with a production server.
Any nudges in the right direction would be welcome.
Normally, if you have a solution that uses a database and several machines, the database is typically not on any of those machines but is hosted separately, with each worker machine pointing at the same database. Since you are already on the AWS platform, DynamoDB or RDS are both good options for this.
In theory, for some applications, upgrading the size of the single machine will give you the same power as adding several smaller machines, but increasing the size of a single machine, while usually the easiest thing to do at first, is not autoscaling and has its own drawbacks. Here are some things to consider:
Using multiple machines instead of one big one gives you some fault tolerance. One or more machines can go down, and if your solution is properly designed, new machines will spin up to replace them.
Increasing the size of a single-machine solution means you are probably paying too much. If you size that single machine big enough to handle peak workloads, then at other times (maybe most of the time) you are paying for a bigger machine than you need. If you set up your autoscaling solution properly, more machines come online in response to increasing demand and then terminate when that demand decreases; you only pay for the power you need, when you need it.
When your solution is designed in this manner, you need to think of all of the worker machines as ephemeral, likely to disappear at any time, so you need to build your solution differently. Besides using a hosted database (like DynamoDB or RDS), you also should not store any data on the machines in your autoscaling group that doesn't also live somewhere else. For example, if part of your app allows users to upload images, you don't store them on the instances; you store them in S3. The same applies to any other new data that comes in.
You need to be able to figuratively 'pull the plug' at any instant on any of the machines in your ASG without losing data.
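For the image-upload example above, a minimal sketch of writing to S3 instead of local disk (the bucket name and key scheme are placeholders):

```python
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-user-uploads"  # placeholder bucket name

def handle_upload(file_bytes: bytes, content_type: str) -> str:
    # Store the upload in S3 rather than on the instance's local disk, so
    # any machine in the autoscaling group can be terminated safely.
    key = f"uploads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=file_bytes,
                  ContentType=content_type)
    return key  # persist this key in the shared database, not on the instance
```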
Ultimately, a properly set up autoscaling solution will likely serve you better. But without doubt it is simpler to just 'buy a bigger machine', and the extra money you spend running that bigger machine may be more than offset by the time and effort you don't have to spend re-architecting your solution to run properly in an autoscaling environment. The unique requirements of your solution will ultimately decide which approach is better.