Is it possible to have an AppFabric Cache configuration where I have named caches that use some (but not all) of the servers in a cluster?
Specifically, I want to have 3 different named caches: one will span servers 1 and 2; the second, servers 3 and 4; and the third, servers 1, 2, 3 and 4.
Servers 1 and 2 are physically at location A, while servers 3 and 4 are at another location, B.
Is this kind of configuration viable?
There's no way to do this - when you create a named cache, you can't limit it to a subset of the servers in your cluster. This is the correct design for maintaining redundancy: if you have a cache on servers 1 and 2, and server 1 goes down, you'd want your cache to move to, say, servers 2 and 4 so you maintain the cache's availability.
You might be able to approximate this with regions, since AppFabric supports regions that are pinned to specific machines. Perhaps you could clone regions across servers as you create them.
I am a junior DevOps engineer and have a very basic question.
My team is currently working on provisioning an AWS OpenSearch cluster. Due to the nature of our problem, we require the storage-optimized instances. In the Amazon documentation I found that they recommend a minimum of 3 nodes. The required storage size is known to me, and in the OpenSearch Service pricing calculator I found that I can choose either 10 i3.large instances or 5 i3.xlarge ones. I checked the prices; they are the same.
So my question is: when faced with such a problem, do I choose fewer, bigger instances or a bigger number of smaller instances? I am particularly interested in the reasoning.
Thank you!
Each VM has some overhead for the OS, so 10 smaller instances would have less compute and RAM available for ES in total than 5 larger instances. Also, if you leave the default index settings (5 primary shards, 1 replica) and actively write to only 1 index at a time, you'll effectively have only 5 nodes indexing data for you (and those nodes will have less bandwidth because they are smaller).
So I would usually recommend running a few larger instances instead of many smaller ones. There are some special cases where this isn't true (like a concurrent-search-heavy cluster), but for those I'd recommend going with even larger instances in the first place.
My use case is as follows:
We have about 500 servers running in an autoscaling EC2 cluster that need to access the same configuration data (laid out in a key/value fashion) several million times per second.
The configuration data isn't very large (1-2 GB) and doesn't change much (a few dozen updates/deletes/inserts per minute during peak time).
Latency is critical for us, so the data needs to be replicated and kept in memory on every single instance running our application.
Eventual consistency is fine. However, we need to make sure that every update is propagated at some point (knowing that the servers can be shut down at any time).
Update propagation across the servers should be reliable and easy to set up (we can't have static IPs for our servers, and we don't want to go the route of "faking" multicast on AWS, etc.).
Here are the solutions we've explored in the past:
Using regular Java maps and our custom-built system to propagate updates across the cluster (obviously, it doesn't scale that well).
Using EhCache and its replication feature. But setting it up on EC2 is very painful and somewhat unreliable.
Here are the solutions we're thinking of trying out:
Apache Ignite (https://ignite.apache.org/) with a REPLICATED strategy.
Hazelcast's Replicated Map feature. (http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#replicated-map)
Apache Geode on every application node. (http://geode.apache.org/)
I would like to know whether each of those solutions would work for our use case and, if so, what issues I'm likely to face with each of them.
Here is what I found so far:
Hazelcast's Replicated Map is fairly recent and still a bit unreliable (async updates can be lost when scaling down).
It seems like Geode only became "stable" fairly recently (even though it has supposedly been in development since the early 2000s).
Ignite looks like it could be a good fit, but I'm not too sure how its S3-based discovery system will work out if we keep adding/removing nodes regularly.
Thanks!
Geode should work for your use case. You should be able to use a Geode replicated region on each node, and you can choose to do synchronous OR asynchronous replication. In case of failures, the replicated region gets an initial copy of the data from an existing member in the system, while making sure that no in-flight operations are lost.
In terms of configuration, you will have to start a couple/few member discovery processes (Geode locators) and point each member at these locators. (We recommend that you start one locator per AZ, across 3 AZs, to protect against network partitioning.)
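For illustration, here is a minimal sketch of a peer member using Geode's Java API; the locator addresses and the region name are placeholders I've made up, not part of the answer above:

```java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class GeodeReplicatedExample {
    public static void main(String[] args) {
        // Join the cluster via the locators (placeholder addresses, one per AZ).
        Cache cache = new CacheFactory()
                .set("locators", "10.0.1.10[10334],10.0.2.10[10334],10.0.3.10[10334]")
                .create();

        // REPLICATE keeps a full copy of the region in memory on every member.
        Region<String, String> config = cache
                .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
                .create("config");

        config.put("feature.flag", "on");               // propagated to all members
        System.out.println(config.get("feature.flag")); // served from local memory
    }
}
```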
Geode/GemFire has been stable for a while, powering low-latency, high-scalability reservation systems at the Indian and Chinese railways, among other users, for a very long time.
Disclosure: I am a committer on Geode.
Ignite provides native AWS integration for discovery over S3 storage: https://apacheignite-mix.readme.io/docs/amazon-aws. It solves the main issue: you don't need to change the configuration when instances are restarted. In a nutshell, any node that successfully joins the topology writes its coordinates to a bucket (and removes them when it fails or leaves). When you start a new node, it reads this bucket and connects to one of the listed addresses.
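As a rough sketch, assuming Ignite's standard Java API (the bucket name and credentials below are placeholders), S3-based discovery plus a REPLICATED cache looks something like this:

```java
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;

public class IgniteS3DiscoveryExample {
    public static void main(String[] args) {
        // Nodes register their addresses in this S3 bucket (placeholder name).
        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
        ipFinder.setBucketName("my-ignite-discovery-bucket");
        ipFinder.setAwsCredentials(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        // REPLICATED mode keeps a full copy of the cache on every node.
        CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("config");
        cacheCfg.setCacheMode(CacheMode.REPLICATED);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);
        cfg.setCacheConfiguration(cacheCfg);

        Ignite ignite = Ignition.start(cfg);
        IgniteCache<String, String> cache = ignite.getOrCreateCache("config");
        cache.put("feature.flag", "on"); // propagated to every node
    }
}
```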
Hazelcast's Replicated Map will not work for your use case. Note that it is a map that is replicated across all the cluster's nodes, not onto the client nodes/servers. Also, as you said, it is not fully reliable yet.
Here is the Hazelcast solution:
Create a Hazelcast cluster with a set of nodes depending upon the size of the data.
Create a distributed map (IMap) and tweak the count and eviction configurations based on the size/number of key/value pairs. The data gets partitioned across all the nodes.
Set up the backup count based on how critical the data is and how much time it takes to pull the data from the actual source (DB/files). Distributed maps have 1 backup by default.
On the client side, set up a NearCache and attach it to the distributed map. This NearCache will hold the key/value pairs on the local/client side itself, so get operations complete in milliseconds. (A minimal sketch follows the considerations below.)
Things to consider with the NearCache solution:
The first get operation will be slower, as it has to go over the network to fetch the data from the cluster.
Cache invalidation is not fully reliable, as there is a delay in synchronization with the cluster and you may end up reading stale data. Again, this is the same across all the cache solutions.
It is the client's responsibility to set up timeouts and invalidation of NearCache entries so that future pulls get fresh data from the cluster. This depends on how often the data gets refreshed or a value is replaced for a key.
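For illustration, a minimal client-side sketch of the NearCache step, assuming Hazelcast 3.x; the map name, cluster address, and TTL are placeholder assumptions (the backup count from step 3 would be set in the members' MapConfig, not here):

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class HazelcastNearCacheExample {
    public static void main(String[] args) {
        // Client-side NearCache for the distributed map (placeholder name).
        NearCacheConfig nearCache = new NearCacheConfig("config-map");
        nearCache.setInMemoryFormat(InMemoryFormat.OBJECT);
        nearCache.setInvalidateOnChange(true); // evict local entry when the cluster value changes
        nearCache.setTimeToLiveSeconds(300);   // safety net against missed invalidations

        ClientConfig clientConfig = new ClientConfig();
        clientConfig.addNearCacheConfig(nearCache);
        clientConfig.getNetworkConfig().addAddress("10.0.1.10:5701"); // placeholder address

        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        IMap<String, String> map = client.getMap("config-map");

        map.put("feature.flag", "on");      // goes to the cluster
        String v = map.get("feature.flag"); // repeated gets are served from the NearCache
        System.out.println(v);
    }
}
```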
We are looking into getting an ELK stack set up on Amazon, but we don't really know what machines we need to handle it smoothly.
Now I know it will become obvious if it doesn't run smoothly, but we still hoped to get an idea of what we would need for our situation.
So we have 4 servers that generate log files in a custom format: about ~45 million lines of logs each day, producing about 4 files of 600 MB (gzipped), so around ~24 GB of logs each day.
Now we are looking into the ELK stack and would like the Kibana dashboards to display real-time data, so I was thinking of shipping logs via syslog to Logstash.
4 Servers -> Rsyslog (on those 4 servers) -> Logstash (AWS) -> ElasticSearch (AWS) -> Kibana (AWS)
So now we need to figure out what kind of hardware we would need in AWS to handle this.
I read somewhere that the minimum is 3 master nodes for Elasticsearch and 2 data nodes.
So that would total 5 servers + 1 server for Kibana and 1 for Logstash?
So I would need a total of 7 servers to get started, but that seems kind of overkill?
I would like to keep my data for 1 month, 31 days at most, so I would have around ~1.4 TB of raw log data in Elasticsearch (~45 GB × 31).
But since I don't really have a clue what the best setup would be, any hints/tips/info would be welcome.
Also, a system or tool that would handle this (node failures, etc.) for me could be useful.
Thanks in advance,
darkownage
Here's how I've architected my cloud clusters:
3 Master nodes - these nodes coordinate the cluster, and keeping three of them helps tolerate failure. Ideally they are spread across availability zones. They can be fairly small and ideally receive no requests - their only job is to maintain the cluster. In this case, set discovery.zen.minimum_master_nodes = 2 to maintain quorum. These IPs, and only these IPs, are what you should provide to all cluster nodes in discovery.zen.ping.unicast.hosts.
Indexes: you should probably take advantage of daily indexes - see https://www.elastic.co/guide/en/elasticsearch/guide/current/time-based.html. This will make more sense below, but it will also be beneficial if you begin to scale up - you can increase the shard count over time without re-indexing.
Data Nodes: depending on your scale and performance requirements there are a few options - i2.xlarge or d2.xlarge will work well, but r3.2xlarge is also a good option. Make sure to keep the JVM heap under 30 GB. Keep the data paths on ephemeral drives local to the instances - EBS is not really ideal for this use case, but depending on your requirements it might be sufficient. Be sure you have multiple data nodes so the replica shards can be split across availability zones. As your data requirements increase, just scale these up.
Hot/Warm: depending on the use case, it is sometimes beneficial to split your data nodes into hot/warm tiers (fast SSD / slow HDD). This is mainly because all writes happen in real time while the majority of reads are of the past few hours, so if you can move yesterday's data onto cheaper, slower drives it helps out quite a bit. This is a little more involved, but you can read more at https://www.elastic.co/blog/hot-warm-architecture. It requires adding some tags and running curator nightly, but it is generally worth it due to the cost savings of moving largely unsearched data off of more expensive SSDs.
In production, I run ~20 r3.2xlarge for the hot tier and 4-5 d2.xlarge for the warm tier with a replication factor of 2 - this allows roughly a TB per day of ingest and a decent amount of retention. We scale hot for volume and warm for retention.
Overall - good luck! It's a fun stack to build and operate once everything is running smoothly.
PS - Depending on the time/resources you have available, you can run the managed Elasticsearch service on AWS, but the last time I looked it was ~60% more expensive than running it on your own instances, and YMMV.
Seems like you need something to get started with the ELK stack on AWS.
Have you tried these two CloudFormation scripts? They will ease your installation process and help you set up your environment in one go:
ELK-Cookbook - CloudFormation Script
ELK-Stack with Google OAuth in Private VPC
Comment below if this doesn't solve your problem.
I'm running Django applications on Webfaction and an AWS EC2 micro instance (613 MB of RAM). For the past 2-3 months I've been facing memory-over-limit issues (currently only 4-5 users are using the application), and because of them MySQL and Apache processes are getting killed. I've taken the following steps to reduce memory consumption:
Removed ".all()" Django queries.
Added 1.5 GB of swap space.
Apache Configuration changed to:
StartServers 4
MinSpareServers 2
MaxSpareServers 4
MaxClients 7
ServerLimit 7
MaxRequestsPerChild 0
MySQL -> my.cnf changed to:
slow-query-log=1
max_connections=45
query_cache_size=16M
table_cache=128
tmp_table_size=32M
max_heap_table_size=33554432
Installed "Dozer" to find memory leaks (it is not reporting any problems).
Could somebody please let me know what else can be done to reduce memory consumption?
Also, how can I track the time taken by a Django filter query?
Do you already use django-debug-toolbar? It helps you track long or unwanted queries - this is for the local environment.
For the hosted application, make sure you have DEBUG set to False. Django keeps track of all queries indefinitely when debug mode is enabled.
If that doesn't help, look for global/class attributes that hold big data structures and move those to the cache/DB.
Also make sure not to sort long lists in your views/forms or iterate through very long querysets, as it all gets loaded into memory at once. Try to do it in small batches instead.
Just a quick question.
If I wanted to set up load balancing for my Railo/ColdFusion application with the fewest points of failure, which of the following two setups would be optimal, and why? Or would they be the same?
01) Set up 4 VPS instances, each with 1 GB RAM
OR
02) Set up 1 dedicated box with 4 GB RAM, running 4 Railo/CF instances each allocated 1 GB RAM
Thanks
If you have 1 dedicated box then you have ONE single point of failure.
Luis
You need to analyze the application(s) you intend to run on this cluster and evaluate whether you need to be able to easily reallocate resources (CPU and RAM) to a particular server. This type of need arises when you want to direct all traffic from a certain place or of a certain type to specific servers. Examples include search indexers, landing pages of marketing campaigns, and splitting traffic by region using IP addresses. In each case you might want to give the server receiving this traffic more resources - and since resources are finite, at the expense of the others in the cluster.