Having a very specific access pattern for my data, I wonder about the expected mapreduce performance of Cassandra. These are my requirements:
There will be 10 million documents (e.g. JSON, a couple of KB each) in my database.
There will be occasional updates of the documents.
Users want to create results from the whole dataset, which requires processing each document.
Users will want to do this in a semi-interactive fashion, trying out the effects of changes they make to the per-document processing. Waiting a couple of minutes for the result is OK.
Users would like to be able to spend money (scaling up or out) to increase processing speed when more interactivity is desired.
There will not be large user numbers; processing needs to be done a couple of times per hour, maybe.
Durability is not a primary concern, as the data is replicated from a source system anyway.
This sounds like a good job for Cassandra and MapReduce, but given that MapReduce is intended to run as a background job rather than semi-interactively, I wonder what performance I can expect from Cassandra.
My other options are plain MySQL with the documents stored as CLOBs, or partitioned Redis.
Can anyone provide clues on how to estimate the speed possibilities?
At a high / theoretical level I know exactly the type of architecture I want to build and how it would work, but I'm attempting to construct this as cheaply as possible using AWS services and my lack of familiarity with the offerings of AWS has me running in circles.
The Data
We run a video streaming platform. On busy nights we have about 100 simultaneous live streams going with upwards of 30,000 viewers. We expect this number to rise to 100,000 in the next few years. A live stream lasts, on average, 2 hours.
We send a heartbeat from our player every 10 seconds with information about the viewer -- how much data they've viewed, how much data they've buffered, what quality they're streaming, etc.
These heartbeats are sent directly to an AWS Kinesis endpoint.
Finally, we want to retain all past messages for at least 5 years (hopefully longer) so that we can look at historic analytics.
Some back of the envelope calculations suggest we will have 0.1 * 60 * 60 * 2 * 100000 * 365 * 5 = 131 billion heartbeat messages five years from now.
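For what it's worth, here is that back-of-the-envelope calculation written out as a small Python sketch; the inputs are the figures above, not measured values:

```python
# Back-of-the-envelope estimate of total heartbeat messages after five years.
msgs_per_second_per_viewer = 1 / 10      # one heartbeat every 10 seconds
stream_seconds = 60 * 60 * 2             # a stream lasts ~2 hours on average
viewers = 100_000                        # projected peak concurrent viewers
nights_per_year = 365
years = 5

total_messages = (msgs_per_second_per_viewer * stream_seconds
                  * viewers * nights_per_year * years)
print(f"{total_messages:,.0f} messages")  # ~131,400,000,000
```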
Our Old Pipeline
Our old system had a single Kinesis consumer. Aggregate data was stored in DynamoDB. Whenever a message arrived we would read the record from DynamoDB, update the record, then write the new record back. This read-update-write loop limited the speed at which we could process messages and made it so that each message coming in was dependent on the messages before it, so they could not be processed in parallel.
Part of the reason for this setup is that our message schema was not well designed from the outset. We send the timestamp at which the message was sent, but we do not send the "amount of video watched since the last heartbeat". As a result, to compute total viewer time we need to look up the last heartbeat message sent by this player, subtract the timestamps, and add that value. Similar issues exist with many other metrics.
Our New Pipeline
We've begun to run into scaling issues. During our peak hours analytics can be delayed by as much as four hours while waiting for a backlog of messages to be processed. If this backlog reaches 24 hours Kinesis will start deleting data. So we need to fix our pipeline to remove this dependency on past messages so we can process them in parallel.
The first part of this was updating the messages sent by our players. Our new specification includes only metrics that can be trivially summed, with no subtraction. So we can just keep adding to the "time viewed" metric, for instance, without any regard to past messages.
The second part of this was ensuring that Kinesis never backs up. We dump the raw messages to S3 as quickly as they arrive, with no processing (Kinesis Data Firehose), so that we can crunch analytics on them at our leisure.
Finally, we now want to actually extract information from these analytics as quickly as possible. This is where I've hit a snag.
The Questions We Want to Answer
As this is an analytics pipeline, our questions mostly revolve around filtering these messages and then aggregating fields across the remaining messages (likely with grouping). For instance (a sketch of one such query appears after this list):
How many Android users watched last night's stream in HD? (FILTER by stream and OS)
What's the average bandwidth usage among all users? (SUM and COUNT, with later division of the final aggregates which could be done on the dashboard side)
What percent of users last year were on any Apple device (iOS, tvOS, etc)? (COUNT, grouped by OS)
What's the average time spent buffering among Android users for streams in the past year? (a mix of all of the above)
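To make the shape of these questions concrete, here is a hedged sketch of what the first one might look like as an Athena query issued from Python with boto3. The table and column names (heartbeats, viewer_id, stream_id, os, quality, dt) and the results bucket are hypothetical placeholders, not our actual schema:

```python
import boto3

athena = boto3.client("athena")

# "How many Android users watched last night's stream in HD?"
# Hypothetical table/column names; dt is assumed to be a date partition column.
query = """
SELECT COUNT(DISTINCT viewer_id) AS hd_android_viewers
FROM heartbeats
WHERE dt = '2020-01-01'
  AND stream_id = 'last-nights-stream'
  AND os = 'android'
  AND quality = 'hd'
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},                  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)
print(response["QueryExecutionId"])  # poll get_query_execution / get_query_results for the answer
```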
Options
AWS Athena would allow us to query the data in S3 directly as if it were an ANSI SQL table. However, from what I've read, Athena can be incredibly slow unless the data is properly formatted. Some benchmarks I've seen show that processing 1.1 billion rows of CSV data can take up to 2 minutes. I'm looking at processing 100x that much data
AWS EMR and AWS Redshift sound like they are built for this purpose, but are complicated to set up and have a high base cost to run (requiring an EC2 cluster to remain active at all times). AWS Redshift also requires data be loaded into it, which sounds like it might be a very slow process, delaying our access to analytics
AWS Glue sounds like it may be able to take the raw messages as they arrive in S3 and convert them to Parquet files for more rapid querying via Athena
We could run a job to regularly batch messages to reduce the total number that must be processed. While a stream is live we'll receive one message every 10 seconds, but we really only care about the totals for a given viewer. This means that when a 2-hour stream concludes we can combine the 720 messages we've received from that player into a single "summary" message about the viewer's experience during the whole stream (a rough sketch of this follows). This would massively reduce the amount of data we need to process, but exactly how and when to trigger this process isn't clear to me
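As a rough illustration of that summarization step, here is a hedged PySpark sketch that could run after a stream ends. The field names (viewer_id, seconds_viewed, seconds_buffered) and S3 paths are assumptions, not our actual schema:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("heartbeat-rollup").getOrCreate()

# Read the raw heartbeats that Firehose dumped for one finished stream (hypothetical path).
raw = spark.read.json("s3://heartbeat-raw/stream_id=last-nights-stream/")

# Collapse the ~720 heartbeats per viewer into a single summary row per viewer.
summary = (
    raw.groupBy("viewer_id", "stream_id")
       .agg(
           F.sum("seconds_viewed").alias("total_seconds_viewed"),
           F.sum("seconds_buffered").alias("total_seconds_buffered"),
           F.count("*").alias("heartbeat_count"),
       )
)

# Write the much smaller summary set as Parquet for Athena/Spectrum to query.
summary.write.mode("overwrite").parquet("s3://heartbeat-summaries/stream_id=last-nights-stream/")
```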
The Ideal Architecture
This is a Big Data problem. The generic solution to Big Data problems is "don't take your data to your query, take your query to your data". If these messages were spread across 100 small storage nodes then each node could filter, sum, and count the subset of data they hold and pass these aggregates back to a central node which sums the sums and sums the counts. If each node is only operating on 1/100th of the data set then this kind of processing could theoretically be incredibly fast.
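In plain Python terms, that "take the query to the data" idea boils down to something like the following toy sketch, where each worker reduces its own shard to a tiny partial aggregate and only those partials travel to the central node (field names and data are made up):

```python
from multiprocessing import Pool

def partial_aggregate(shard):
    """Each "node" sees only its own shard and returns a tiny partial aggregate."""
    bandwidth_sum = sum(m["bandwidth"] for m in shard)   # hypothetical field
    count = len(shard)
    return bandwidth_sum, count

def average_bandwidth(shards):
    with Pool() as pool:
        partials = pool.map(partial_aggregate, shards)    # filter/sum/count happens "on the nodes"
    total = sum(s for s, _ in partials)                   # the central node sums the sums...
    count = sum(c for _, c in partials)                   # ...and sums the counts
    return total / count

if __name__ == "__main__":
    shards = [[{"bandwidth": 2.5}, {"bandwidth": 3.0}], [{"bandwidth": 4.0}]]  # toy data
    print(average_bandwidth(shards))
```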
My Confusion
While I have a theoretical understanding of the "ideal" architecture, it's not clear to me if AWS works this way or how to construct a system that will function well like this.
S3 is a black box. It's not clear if Athena queries are run on individual nodes and aggregates are further reduced elsewhere, or if there's a system reading all of the data and aggregating it in a central location
Redshift requires the data be copied into a Redshift database. This doesn't sound fast, nor distributed
It's unclear to me how EMR works or if it will suit my purpose. Still researching
AWS Glue seems like it may need to be triggered by some event?
Parquet files seem to be like CSVs, where multiple records reside in a single file. Meanwhile I'm dumping one record per file. But perhaps there's a way to fix that? e.g. batching files every minute or every 5 minutes?
RDS or a similar service might be really good for this (indexing and whatnot) but would require a guaranteed schema (or necessitate migrating if our message schema changed) which is a concern. Migrating terabytes of data if we change our message schema sounds out of the question
Finally, along with wanting to get analytics results in as "real time" as possible (ideally we want to know within 1 minute when someone joins or leaves a stream), we want the dashboards to load quickly. Waiting 30 seconds to see the count of live viewers is horrendous. Dashboards should load in 2 seconds or less (ideally)
The plan is to use QuickSight to create dashboards (our old system had a hack-y Django app that read from our DynamoDB aggregates table, but I'd like to avoid creating more code for people to maintain)
I expect you are going to get a lot of different answers and opinions from the broad set of experts you have pinged with this. There is likely no single best answer to this as there are a lot of variables. Let me give you my best advice based on my experience in the field.
Kinesis to S3 is a good start and not moving data more than needed is the right philosophy.
You didn't mention Kinesis Data Analytics, and this could be a solution for SOME of your needs. It is best for questions about what is happening in the data feed right now. The longer-timeframe questions are better suited to the tools you mention. If you aren't too interested in what has happened in the past 10 minutes (or so), it may be fine to omit.
S3 organization will be key to performing any analytics directly on the data there. You mention Parquet formatting, which is good, but partitioning is far more powerful. Organizing the S3 data into "days" or "hours" of data and setting up partitioning on that basis can greatly speed up any query that is limited to a window of time (don't read what you don't need).
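For example, a hedged sketch of what "partitioned by day" might look like: Hive-style dt= prefixes in S3 plus an Athena/Glue table declared with that partition. The bucket, database, and column names are made up, and the Glue database is assumed to already exist:

```python
import boto3

athena = boto3.client("athena")

# Hive-style layout: s3://heartbeat-summaries/dt=2020-01-01/part-0000.parquet, etc.
create_table = """
CREATE EXTERNAL TABLE IF NOT EXISTS analytics.heartbeats (
  viewer_id string,
  stream_id string,
  os string,
  total_seconds_viewed bigint
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://heartbeat-summaries/'
"""

athena.start_query_execution(
    QueryString=create_table,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)

# New day prefixes still need to be registered as partitions
# (MSCK REPAIR TABLE or ALTER TABLE ... ADD PARTITION), typically on a schedule.
# Queries that filter on dt (e.g. WHERE dt BETWEEN '2020-01-01' AND '2020-01-07')
# then read only those prefixes instead of the whole bucket.
```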
Important safety note on S3: S3 is an object store, and as such there is overhead for each object you reference. Having many small objects (10,000+) treated as a single set of data is going to be slow no matter what solution you go with. You need to fix this before you go forward with any solution. It takes upwards of 0.5 seconds to look up an object in S3, and if the file is small the transfer time is next to nothing. Now multiply 0.5 seconds by all the objects you have and see how long it will take to read them. This is not a function of the downstream tool you choose but of the S3 organization you have. S3 objects that are part of a Big Data solution should be at least 100 MB in size so they don't suffer greatly from the object lookup time. The choice of Parquet or CSV files is moot without addressing object size and partitioning first.
Athena is good for occasional queries, especially if the date ranges are limited. Is this the query pattern you expect? As you say, "move the compute to the data", but if you use Athena for large cross-sectional analytics where a large percentage of the data needs to be read, you are just moving the data to Athena every time you execute that query. Don't stop thinking about data movement at the point where the data is stored - think about the data movement required to do the analytics as well.
So a big question is how much data is needed, and how often, to support your analytics workloads and BI functions? This is the end result you are looking for. If a high percentage of the data is needed frequently, then a warehouse solution like Redshift with the data loaded to disk is the right answer. The data load time into Redshift is quite fast, as it loads from S3 in parallel (S3 is a cluster and Redshift is a cluster, so parallel loads are possible). If loading all your data into Redshift is what you need, then the load time is not your main concern - the cost is. It's a big, powerful tool with a price tag to match. The new RA3 instance type bends this curve down significantly for large-data-size clusters, so it could be a possibility.
Another tool you haven't mentioned is Redshift Spectrum. This brings several powerful technologies together that could be important to you. First is the power of Redshift with the ability to choose a smaller cluster size than would normally be used for your data size. Second, S3 filtering and aggregation technology allows Spectrum to perform actions on the data in S3 (yes, the initial compute steps of the query are performed inside S3, potentially greatly reducing the data moved to Redshift). If your query patterns support this data reduction in S3, then the data movement will be small and the Redshift cluster can be small (cheap) too. This can be a powerful compromise point for IoT-style solutions like yours, since complex data models and joining are not needed.
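To make the Spectrum idea concrete, here is a hedged sketch of the setup, driven from Python with psycopg2. The IAM role, Glue catalog database, table, and column names are placeholders, and this assumes the external table already exists in the catalog (e.g. created via Athena/Glue as above):

```python
import psycopg2

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",  # placeholder endpoint
                        port=5439, dbname="analytics", user="admin", password="...")
conn.autocommit = True
cur = conn.cursor()

# Point Redshift at the existing Glue/Athena catalog so Spectrum can see the S3 data.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG
    DATABASE 'analytics'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
""")

# The heavy filtering/aggregation on spectrum.heartbeats runs in the S3 layer;
# only the reduced result lands on the (small) Redshift cluster.
cur.execute("""
    SELECT os, SUM(total_seconds_viewed)
    FROM spectrum.heartbeats
    WHERE dt = '2020-01-01'
    GROUP BY os
""")
print(cur.fetchall())
```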
You bring up Glue and conversion to Parquet. These can be good to do, but as I mentioned before, partitioning of the data in S3 is usually far more powerful. The value of Parquet increases as the width of your data increases. Parquet is a columnar format, so it has an advantage when only a subset of "columns" is needed. The downsides are the conversion time/cost and the loss of easy human readability (which can be huge during debugging).
EMR is another choice you mention, but I generally advise clients against going with EMR unless they need the flexibility it brings to the analytics and they have the skills to use it well. Without these, EMR tends to be an unneeded cost sink.
If this is really going to be a Big Data solution, then RDS (and Aurora) are not good choices. They are designed for transactional workloads, not analytics. The data size and analytics will not fit well or be cost effective.
Another tool in the space is S3 Select. Not likely what you are looking for but something to remember exists and can be a tool in the toolbox.
Hybrid solutions are common in this space if there are variable needs based on some factor. A common one is time of day - no one is running extensive reports at 3am, so the needed performance is much less. Another is user group - some groups need simple analytics while others need much more power. Another factor is timeliness of data - does everyone need "up to the second" information, or is daily information sufficient? Trying to have one tool that does everything for everybody, all the time, is often a path to an expensive, oversized solution.
Since Redshift Spectrum and Athena can point at the same S3 data (well organized, since both will benefit), both tools can coexist on the same data. Also, since Redshift is ideal for sifting through huge mounds of data, it is ideal for producing summary tables and then writing them (in partitioned Parquet) to S3 for tools like Athena to use. All these cloud services can be run on schedules, and this includes Redshift and EMR (Athena is query on demand), so they don't need to run all the time. Redshift with Spectrum can run a few hours a day to perform deep analytics and summarize data for writing to S3. Your data scientists can also use Redshift for their hardcore work while Athena supports dashboards using the daily summary data and Kinesis Data Analytics as a source.
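For the "Redshift writes summary tables back to S3 for Athena" part, a hedged sketch of the kind of statement involved. Table, bucket, and IAM role names are placeholders; I believe Redshift's UNLOAD supports Parquet output and PARTITION BY, but verify against the current docs for your cluster version:

```python
# This SQL would be executed on the Redshift cluster (e.g. via psycopg2 or a scheduled query).
UNLOAD_DAILY_SUMMARY = """
UNLOAD ('SELECT dt, stream_id, os, SUM(total_seconds_viewed) AS seconds_viewed
         FROM heartbeats
         WHERE dt = ''2020-01-01''
         GROUP BY dt, stream_id, os')
TO 's3://heartbeat-daily-summaries/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET
PARTITION BY (dt)
"""
# Athena (and QuickSight via Athena) can then read the partitioned Parquet summaries cheaply.
```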
Lastly, you bring up a 2-second requirement for dashboards. This is definitely possible with QuickSight backed by Redshift or Athena, but it won't be met for arbitrarily complex / data-intensive queries. To meet it you will need the engine to have enough horsepower to produce the data in question. Redshift with local data storage is likely the fastest (Redshift Spectrum with some data pruning done in S3 wins in some cases) and Athena is the weakest / slowest. But the power doesn't matter if the work is small - your query workload will be a huge deciding factor. The fastest option is to load the needed data into QuickSight's own storage (SPICE), but this is another localized / summarized version of the data, so timeliness is again a factor (how often is it updated?).
Based on designing similar systems and a bunch of guesses as to what you need I'd recommend that you:
Fix your object size (Kinesis can be configured to do this)
Partition your data by day
Set up a small Redshift cluster (4 x dc2.large) and use Spectrum to address the data in S3
Connect Quicksight to Redshift
Measure the performance (and cost) and compare to requirements (there will likely be gaps)
Adjust the solution (summary tables in S3, Athena, SPICE, etc.) to meet your goals
The alternative is to hire someone who has set up such systems before and have them review the requirements in detail and make a less "guess-based" recommendation.
I would look into Druid. Not an AWS offering, but easily runs on AWS, with good integration with S3 and Kinesis.
Capable of reading from Kinesis at high speed and making the data available for querying right away. It can also flatten and transform the data as it reads it.
Capable of doing rollups/aggregation/compaction during ingestion (and further reducing data asynchronously). From what you wrote, it seems to me that it could easily reduce the number of rows in the DB by a very large factor.
Capable of fast queries, using standard SQL.
Smart partitioning of the data to scan only the relevant dates.
The downside is that you will need to keep a cluster up and running for ingestion and for querying. It is pretty scalable, so you can start small.
On the upside, you're not using 10 different technologies (Athena/Glue/EMR/etc.)
You might want to consider contacting Imply, which can ease the deployment.
A common approach many companies take is to do the heavy lifting in Athena or BigQuery (or some other distributed SQL environment), aggregate intermediate results into multiple indexed and partitioned Postgres/MySQL/Redshift/ClickHouse tables, and then connect their APIs to read from those tables. Of course, this works fine except that as the amount of intermediate aggregated data grows, the table indices grow too, and problems like cumulative sums or sorting become less and less efficient.
For the problem at hand, I think AWS Lambda can help a lot. Lambda provides a very feasible serverless approach to large, granular data problems (if used correctly). For instance, assume your pipeline partitions the incoming stream by YYYYMMDDHHMM and stores it under an S3 path that has a Lambda listening to it (as a trigger function); then data ingest and aggregation become essentially simultaneous processes. As soon as a minute is up, a new instance of the same Lambda function takes care of the data landing in partition YYYYMMDDHHMM+1. This way, you can run thousands of simultaneous processes, with a good number of Lambda functions doing the same thing in parallel. Of course, this is a rough picture, but I think it can help greatly.
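A hedged sketch of such a trigger function; the bucket names, key layout, and message fields are all assumptions for illustration:

```python
import json
from collections import defaultdict

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by S3 object-created events under a minute-partition prefix."""
    totals = defaultdict(float)
    key = None
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]           # e.g. raw/202001011230/part-001.json
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for line in body.splitlines():
            msg = json.loads(line)
            totals[msg["viewer_id"]] += msg["seconds_viewed"]   # hypothetical fields

    # Write the per-viewer partial aggregate for this minute back to S3 for later merging.
    partition = key.split("/")[1]
    s3.put_object(
        Bucket="heartbeat-aggregates",                # hypothetical bucket
        Key=f"minute={partition}/{context.aws_request_id}.json",
        Body=json.dumps(totals),
    )
```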
I've been researching different approaches to streaming data to a real-time dashboard. One way I have done this in the past is using a star schema with dimension and fact tables, i.e. an implementation of aggregate tables. For example, the dashboard would contain multiple charts: total sales for the day, total sales by product, total sales by manufacturer, etc.
But what if this needed to be real-time? What if the data needs to stream to these charts and do the analytical processing real-time?
I've been looking into solutions like Kinesis streams and Kafka, but I may be missing something obvious. For example, consider the following example. A company runs a website where they sell pies. The company has a backend dashboard where they keep track of all data and analytics related to sales, users, orders, etc.
Customer places an order through the website
The relational (mysql) database receives this new order
The charts and analytical data updates real-time on the backend, for example total sales for the day, or total sales for the year by user.
If the scenario is that this data needs to be streamed, what is the best approach? Aggregate tables seem like the obvious choice, but they would be updated periodically rather than in real time. Kinesis/Kafka feels like it would fit somewhere in here. The other option would be something like Redshift, but it's pretty pricey and still may not be the best way to address the issue and scale.
Here is an example of a chart that would need to be updated in real time and that could suffer if we just ran plain aggregate SQL queries when there are tons and tons of rows to scan.
For "always up-to-date" reports like this (sales, users, orders, etc.) that don't need live updates with near-zero latency, streaming processing might be overkill, and a ROLAP-like approach seems more optimal in terms of effort versus result.
You mentioned Redshift; if you are already prepared to mirror your data for analytics purposes and the only problem is the price, you can consider free open-source alternatives that can handle OLAP (aggregate) queries in real time (like Yandex ClickHouse, or maybe MongoDB in some cases).
A lot depends on the dataset size; unless you have really big data that needs to be aggregated (hundreds of GB), you can keep using MySQL and apply some tricks:
use a separate MySQL replica with high IOPS for analytics and replicate only the tables needed to build your reports; possibly use another table engine that is more suitable for analytical queries. Set up indexes specifically for these queries, to avoid full table scans when you only need numbers for the last few weeks.
pre-calculate metrics for previous periods (a materialized-view-like approach), refresh them on a schedule (say, daily), and then combine the pre-calculated aggregates with on-the-fly aggregates for only the most recent period, so you get current report data without scanning the whole facts table each time (see the sketch after this list).
use a data visualization backend that can efficiently cache report data in memory, to prevent overloading the SQL DB with many similar queries (if the same report or dashboard is displayed to 100 users, the SQL DB load will be the same as for 1). BTW, I develop a solution like that (I can't advertise it here as it is a commercial product).
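A hedged sketch of that second trick, with made-up table and column names (daily_sales is the pre-calculated aggregate table, orders is the facts table):

```python
# Combine yesterday-and-older pre-calculated aggregates with an on-the-fly
# aggregate over only today's rows, so the big facts table is never fully scanned.
SALES_REPORT_SQL = """
SELECT day, total_sales
FROM daily_sales                -- refreshed on a schedule, e.g. nightly
WHERE day < CURRENT_DATE

UNION ALL

SELECT DATE(order_ts) AS day, SUM(amount) AS total_sales
FROM orders                     -- only today's slice is aggregated on the fly
WHERE order_ts >= CURRENT_DATE
GROUP BY DATE(order_ts)
"""
# Run against the analytics replica; the dashboard reads the combined result.
```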
This is a typical trade-off for most architects. Amazon Redshift offers exemplary read optimisations, but the AWS stack comes at a price. You may try Cassandra, but it comes with its own set of challenges. When it comes to analytics, I never recommend going real time, for the reasons elaborated below.
Doing analytics in real time is not desirable, especially using MySQL
The solution to the above is to segregate the transactional and analytical infrastructure. This involves cost, but it will make sure you don't have to spend time on housekeeping once you scale. MySQL is a row-based RDBMS mostly used for storing transactional data. Being row-based, it optimises writes, i.e. writes are almost real time, and it compromises on reads as a result. When I say this, I am referring to a typical analytics dataset running into millions of records per day. If your dataset is not that voluminous, you might still be able to render a graph showing transactional status. But since you're referring to Kafka, I assume the dataset is very large.
A real-time dashboard with visualisations gives a bad customer experience
Considering the above point, even if you go for a warehouse / read-optimised infrastructure, you need to understand how the visualisations work. If 100 people access the dashboard at the same time, 100 connections will be made to the database, all fetching the same data, putting it in memory, applying the calculations, parameters and filters defined in your dashboard, fitting the refined dataset into the visualisation, and then rendering the dashboard. Until then, the dashboard will simply freeze. A poorly constructed query, inefficient use of indexes, etc. will make matters even worse.
The above problems will amplify more and more with the increase in your dataset. Good practices to achieve what you need would be:
Have a near-real-time system (delay of 1 hr, 30 mins, 15 mins, etc.) rather than an absolutely real-time one. This lets you create a flat file with the data already fetched into memory. Your dashboard simply reads this data and is extremely fast in responding to filters, etc. Also, multiple connections to the database are avoided.
Have a data structure, database/warehouse optimised for reads.
For these types of operational analytics use-cases where the real-time nature of the data is critical, you're completely correct that most "traditional" methods can be quite clumsy, especially as your data size increases. A quick overview of your options:
Historical Approach (TLDR– Meh)
Up until about 5 years ago, the de facto way to do this looked something like
Set up a primary OLTP database that will handle the data in its raw form and have stricter guarantees on performance or ACID properties. Usually this is something SQL-esque, e.g. MySQL or PostgreSQL.
Set up a secondary OLAP database that is meant for serving offline (aka non user-facing) queries. This could also be a SQL-esque db but its schema would be drastically different because it stores the data in enriched form.
Set up some mechanism by which you can keep these 2 in sync. This pretty much boils down to either a) changing your application to always write to both databases and performing the necessary data enrichment or b) building a stand-alone application that reads from your OLTP database, performs the necessary transformations and enrichment and writes to your OLAP database
Plug your dashboard into your OLAP database which will have a schema and indexes optimized for the kind of queries you want.
Using your example about the pie store, the OLTP database would be used to store the purchases of all the pies and reference things like customer ids, billing information, delivery information, etc. In contrast, the OLAP database might just maintain a table with a schema
purchase_totals(day: Date, weekNumber: int, dayOfWeek: int, year: int, total: float)
While the weekNumber, dayOfWeek, and year are technically redundant, they make your queries faster! With the proper indexes on these fields, your dashboard has turned into 5 simple (and fast!) aggregation queries with a group by and sum, and then the differences week-over-week or year-over-year can be computed on the client side. As long as your dashboard refreshes every minute or so, you have near-real-time data at your fingertips.
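For instance, the "total sales per day this week" chart might come from a query like this sketch against the purchase_totals schema above; the suggested index and the parameter style are assumptions for illustration:

```python
# Assumes an index on (year, weekNumber) so this touches only ~7 rows.
WEEKLY_SALES_SQL = """
SELECT day, SUM(total) AS total_sales
FROM purchase_totals
WHERE year = %(year)s
  AND weekNumber = %(week)s
GROUP BY day
ORDER BY day
"""

params = {"year": 2020, "week": 14}
# cursor.execute(WEEKLY_SALES_SQL, params)  -- run against the OLAP database;
# week-over-week differences can then be computed on the dashboard side.
```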
Current Approach (TLDR– Ok)
The recent trends in computing, database technologies, and data science/analytics have led to improvements to the above process, namely by replacing certain components of it. The changes include
Making the OLTP db, the OLAP db, or both a NoSQL database (Mongo usually being the most popular). The pro here is that you have a more flexible schema which won't break if something upstream changes (say, you start selling cakes in addition to pies).
Keeping the SQL db but shifting to cloud provider solution like AWS RDS or Google Cloud SQL. This fundamentally doesn't change anything about the architecture, but it does significantly reduce your operational burden.
Using hard-to-maintain ETL pipelines on top of streaming platforms like Kafka or AWS Kinesis to act as the middle layer between OLAP and OLTP.
Using dedicated tools for data cleaning and transformation as you plan out how to do your ETL
Using dedicated visualization tools on top of your OLAP db (think Tableau)
Using a pull-based approach for getting data out of your OLTP db or your application directly instead of waiting for it to eventually reach your OLAP db. This is helpful for online services because it actually gives you both the data you want AND confirmation that the service is alive and running well (because it just served your request for data). Systems like Prometheus are quite popular for this now.
We have a use case, where we are downloading large volumes (order of 100 gigabytes per day) of data from hundreds of data sources, massaging and processing this data and then exposing this data to our customers via RESTful API. Today the base data size is ca. 20TB and expected to grow heavily in the future.
For the massaging/processing part, we believe Spark can be a very good choice for us. Now, for exposing the processed/massaged data through an API, one option is to store the processed data in a read-only database like ElephantDB and have web services talk to ElephantDB (at least this is what Nathan proposes in his Big Data book). I was just wondering what the implications would be if our web services used SparkSQL to access the processed data from Spark instead. What could be the architecture/design dangers in this case?
Everybody is talking about how fast Spark is and about using SparkSQL for interactive queries. But is it ready to serve a large volume of web service queries via SparkSQL, where we have very strict latency SLAs and need to serve hundreds or thousands of web service requests per second? If Apache Spark could handle this, we could avoid maintaining yet another system like ElephantDB or Cassandra.
Would like to hear from the experts on this board.
If the results are stored in files, you have no indexes, and SparkSQL also doesn't create indexes. The only thing that can be somewhat fast is reading columns from Parquet files and caching tables.
But in general it's not a good use case to use SparkSQL to serve web requests simply because Spark wasn't made for that.
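To illustrate the "somewhat fast" path mentioned above, this is roughly what column pruning plus caching looks like; the paths and column names are made up:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("serving-cache-sketch").getOrCreate()

# Read only the needed columns from the processed Parquet output and pin them in memory.
df = (spark.read.parquet("s3a://processed-output/")      # hypothetical path
          .select("customer_id", "metric", "value"))
df.cache()
df.createOrReplaceTempView("processed_metrics")

# Repeated SparkSQL queries now hit the cached, column-pruned data...
result = spark.sql(
    "SELECT metric, SUM(value) FROM processed_metrics WHERE customer_id = 42 GROUP BY metric"
)
result.show()
# ...but each one is still a Spark job, with job-scheduling overhead on every request.
```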
So you're batch processing the raw data, yes?
The ideal way would be to store the outcome in a key-value store, as you mention with ElephantDB; Project Voldemort has also been shown to be a good fit as read-only storage.
I recommend you to read this article (combining batch and realtime layers) by Nathan Marz: How to beat the CAP theorem
It has, however, been questioned by Jay Kreps in his article Questioning the Lambda Architecture. The main concern (with the Lambda architecture) is that it is problematic to maintain the "same" system logic in two different distributed systems and have them produce the same result.
But since you are using Spark, you can use the same logic with Spark Streaming, which was not "on the market" when Nathan Marz and Jay Kreps wrote their articles.
You can still use SparkSQL to query the raw data interactively, but since Spark was first implemented for scheduled batch jobs, this is not its perfect use case. As you've probably noticed, it takes some time to submit Spark jobs, and this overhead "kills" the idea of fast queries.
Please look into github.com/spark-jobserver/spark-jobserver. The job server supports sub-second low-latency jobs via long-running job contexts, and it can share Spark RDDs between different jobs, which can be very efficient for running different interactive logic on the same dataset. You can combine machine learning results and ad-hoc (SparkSQL) queries via HTTP requests. Read more about spark-jobserver; there are talks about it online from various Spark Summits.
I am building an application (using Django's ORM) that will ingest a lot of events, let's say 50/s (1-2k per msg). Initially some "real time" processing and monitoring of the events is in scope so I'll be using redis to keep some of that data to make decisions, expunging them when it makes sense. I was going to persist all of the entities, including events in Postgres for "at rest" storage for now.
In the future I will need "analytical" capability for dashboards and other features. I want to use Amazon Redshift for this. I considered just going straight for Redshift and skipping Postgres. But I also see folks say that it should play more of a passive role. Maybe I could keep a window of data in the SQL backend and archive to Redshift regularly.
My question is:
Is it even normal to use something like Redshift as a backend for web applications, or does it typically play more of a passive role? If not, is it realistic to think I can scale Postgres enough to start with only that for the event data? And if not, does the "window of data and archival" method make sense?
EDIT Here are some things I've seen before writing the post:
Some say "yes, go for it" regarding the "should I use Redshift for this" question.
Some say "eh, not performant enough for most web apps" and are in the "front it with a Postgres database" camp.
Redshift (ParAccel) is an OLAP-optimised DB, based on a fork of a very old version of PostgreSQL.
It's good at parallelised read-mostly queries across lots of data. It's bad at many small transactions, especially many small write transactions as seen in typical OLTP workloads.
You're partway in between. If you don't mind a data loss window, then you could reasonably accumulate data points and have a writer thread or two write batches of them to Redshift in decent sized transactions.
If you can't afford any data loss window and expect to be processing 50+ TPS, then don't consider using Redshift directly. The round-trip costs alone would be horrifying. Use a local database - or even a file based append-only journal that you periodically rotate. Then periodically upload new data to Redshift for analysis.
A few other good reasons you probably shouldn't use Redshift directly:
OLAP DBs with column store designs often work best with star schemas or similar structures. Such schemas are slow and inefficient for OLTP workloads as inserts and updates touch many tables, but they make querying the data along various axes for analysis much more efficient.
Using an ORM to talk to an OLAP DB is asking for trouble. ORMs are quite bad enough on OLTP-optimised DBs, with their unfortunate tendency toward n+1 SELECTs and/or wasteful chained left joins, tendency to do many small inserts instead of a few big ones, etc. This will be even worse on most OLAP-optimised DBs.
Redshift is based on a painfully old PostgreSQL with a bunch of limitations and incompatibilities. Code written for normal PostgreSQL may not work with it.
Personally I'd avoid an ORM entirely for this - I'd just accumulate data locally in an SQLite or a local PostgreSQL or something, sending multi-valued INSERTs or using PostgreSQL's COPY to load chunks of data as I received it from an in-memory buffer. Then I'd use appropriate ETL tools to periodically transform the data from the local DB and merge it with what was already on the analytics server.
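A hedged sketch of that "accumulate locally, then COPY in chunks" idea with psycopg2; the table and column names are made up, and the payload is assumed to contain no tabs or newlines:

```python
import io
import psycopg2

conn = psycopg2.connect("dbname=events_local")   # local staging PostgreSQL, not Redshift

def flush(batch):
    """Write one accumulated batch of (ts, user_id, payload) rows with a single COPY."""
    buf = io.StringIO()
    for ts, user_id, payload in batch:
        buf.write(f"{ts}\t{user_id}\t{payload}\n")   # payload assumed tab/newline-free
    buf.seek(0)
    with conn.cursor() as cur:
        cur.copy_expert("COPY events (ts, user_id, payload) FROM STDIN", buf)
    conn.commit()

# The ingest loop appends incoming events to an in-memory list and calls flush()
# every few thousand events; a separate ETL job later merges this table into Redshift.
```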
Now forget everything I just said and go do some benchmarks with a simulation of your app's workload. That's the only really useful way to tell.
In addition to Redshift's slow transaction processing (by modern DB standards) there's another big challenge:
Redshift only supports serializable transaction isolation, most likely as a compromise to support ACID transactions while also optimizing for parallel OLAP mostly-read workloads.
That can result in all kinds of concurrency-related failures that would not have been failures on a typical DB that supports read-committed isolation by default.
I have a website set up on an EC2 instance which lets users view info from 4 of their social networks.
Once a user joins, the site should update their info every night, to show up-to-date and relevant information the next day.
Initially we had a cron-job which went through each user and did the necessary calls to the APIs and then stored the data on the DB (amazon rds instance).
This operation takes between 2 and 30 seconds per person, which means doing it one by one would take days to complete an update.
I was looking at MapReduce and would like to know if it would be a suitable option for what I'm trying to do, but at the moment I can't tell for sure.
Would I be able to give an .sql file to MapReduce, with all the records I want to update + a script that tells MapReduce what to do with each record and have it process them all simultaneously?
If not, what would be the best way to go about it?
Thanks for your help in advance.
I am assuming each user's data is independent of the other users' data, which seems logical to me. If that's not the case, please ignore this answer.
Since you have mutually independent data (that is, each user's data is independent from other users') there is no need to use MapReduce. MR is just a paradigm in programming that simplifies data manipulation when the data is not independent (map prepares the data, then there is sorting phase, then reduce pulls the results from the sorted records).
In your case, if you want to use more computers, just split the load between them - each computer should process ~10,000 users per hour (a very rough estimate). Then users can either be distributed among the computers beforehand, or they can be requested in chunks of 1,000 or so, so that machines that finish sooner can process more users (a rough single-machine sketch of this chunking follows).
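A minimal sketch of that chunked fan-out on a single machine; the per-user refresh function and the user list are stand-ins for your actual API calls and DB query:

```python
from multiprocessing import Pool

def refresh_user(user_id):
    # Stand-in for the real work: call the four social-network APIs for this
    # user (2-30 s) and write the results to the RDS database.
    return user_id

def chunks(items, size=1000):
    """Yield successive fixed-size chunks of the user list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    user_ids = list(range(100_000))          # stand-in for "SELECT id FROM users"
    with Pool(processes=64) as pool:         # the work is I/O-bound, so many workers is fine
        for chunk in chunks(user_ids):
            pool.map(refresh_user, chunk)    # workers that finish early pick up the next users
```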
BUT there is an added bonus in using MR framework (such as Hadoop), even if you only use one phase (map only). It does the error handling for you (nodes failing, jobs failing,...) and it takes care of distributing the input among the nodes.
I'm not sure if MR is worth all the trouble to set it up, depends on your previous experience - YMMV.
If my understanding is correct, were this application to be implemented as MapReduce, all the processing would be done in the map phase, and reduce would simply output the map phase's result.
So if I were to implement this, I would just divide the job across multiple EC2 instances, with each instance processing a given range of records in your SQL data. This assumes you have a good idea of how to divide the data among the instances.
The advantage is that you don't pay the price of Elastic MapReduce and you avoid any possible MapReduce overhead.