Configuring external data source for Elastic MapReduce

We want to use Amazon Elastic MapReduce on top of our current DB (we are using Cassandra on EC2). Looking at the Amazon EMR FAQ, it should be possible:
Amazon EMR FAQ: Q: Can I load my data from the internet or somewhere other than Amazon S3?
However, when creating a new job flow, we can only configure an S3 bucket as the input data origin.
Any ideas/samples on how to do this?
Thanks!
P.S.: I've seen the question How to use external data with Elastic MapReduce, but the answers do not really explain how to do or configure it, only that it is possible.

How are you processing the data? EMR is just managed Hadoop; you still need to write a process of some sort.
If you are writing a Hadoop MapReduce job, then you are writing Java, and you can use the Cassandra APIs to access the data (a rough sketch of the job driver follows below).
If you want to use something like Hive, you will need to write a Hive storage handler to use data backed by Cassandra.
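For the MapReduce route, the driver configuration might look roughly like the sketch below. It relies on the org.apache.cassandra.hadoop classes (ColumnFamilyInputFormat and ConfigHelper) that shipped with older Cassandra releases; the keyspace, column family, host, and partitioner are placeholders, and method names can differ between Cassandra versions, so treat this as a starting point rather than a drop-in driver.

import java.nio.ByteBuffer;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CassandraEmrJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "cassandra-input-example");
        job.setJarByClass(CassandraEmrJob.class);

        // Read rows from Cassandra instead of S3/HDFS.
        job.setInputFormatClass(ColumnFamilyInputFormat.class);

        // Point the input format at the Cassandra cluster running on EC2
        // (placeholder keyspace, column family, host, and partitioner).
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "my_keyspace", "my_column_family");
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "10.0.0.10");
        ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");

        // Ask for all columns of every row.
        SlicePredicate predicate = new SlicePredicate().setSlice_range(
                new SliceRange(ByteBuffer.wrap(new byte[0]), ByteBuffer.wrap(new byte[0]), false, Integer.MAX_VALUE));
        ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);

        // Mapper/reducer classes and the output path (HDFS or S3) go here as usual.
        // job.setMapperClass(MyCassandraMapper.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}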

Try using scp to copy files to your EMR instance:
my-desktop-box$ scp mylocaldatafile my-emr-node:/path/to/local/file
(or use ftp, or wget, or curl, or anything else you want)
then log into your EMR node with ssh and load the file into HDFS:
my-desktop-box$ ssh my-emr-node
my-emr-node$ hadoop fs -put /path/to/local/file /path/in/hdfs/file

Related

Is it possible to run HBase on AWS but have it store/point to HDFS?

I just wanted to know whether this question is even relevant.
I tried reading through many blogs but could not reach a conclusion.
Yes, you can run HBase on Amazon EMR, and you can choose either S3 (via EMRFS) or native HDFS on the cluster:
It utilizes Amazon S3 (with EMRFS) or the Hadoop Distributed Filesystem (HDFS) as a fault-tolerant datastore.
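If you pick the S3 option, the storage location is just cluster configuration. A rough sketch with the AWS CLI, using a placeholder bucket, release label, and instance settings (the classification keys here follow the EMR HBase-on-S3 documentation, so verify them against the release you use):

aws emr create-cluster \
  --name "hbase-on-s3" \
  --release-label emr-6.10.0 \
  --applications Name=HBase \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations '[
    {"Classification": "hbase", "Properties": {"hbase.emr.storageMode": "s3"}},
    {"Classification": "hbase-site", "Properties": {"hbase.rootdir": "s3://my-bucket/hbase"}}
  ]'

Leave those configurations off (or set the storage mode back to hdfs) and HBase writes to the cluster's local HDFS instead.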

Is it a good practice to have an AWS EMR standing cluster always running structured streaming?

I have a Spark Structured Streaming job which takes data as input from AWS MSK (Kafka) and writes to AWS S3. Is it a good idea to have a standing AWS EMR cluster always running this job, or are there better ways to manage this infrastructure?
Please let me know if you need further details.
You need some worker pool that is consuming and writing, one way or another.
Your options include running Spark on YARN on EMR or on EKS instead, or not using Spark at all and running the Kafka Connect S3 sink connector on an EC2/EKS cluster (a minimal Spark sketch follows).
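Whichever platform you choose, the standing job itself stays small. A minimal Structured Streaming sketch (Java API) with placeholder broker addresses, topic, and bucket paths:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.Trigger;

public class MskToS3 {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("msk-to-s3").getOrCreate();

        // Consume from MSK (placeholder brokers/topic).
        // Requires the spark-sql-kafka-0-10 package on the classpath.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "b-1.my-msk:9092,b-2.my-msk:9092")
                .option("subscribe", "my-topic")
                .load()
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp");

        // Write micro-batches to S3 as Parquet; the checkpoint directory is what
        // lets a replacement cluster resume where the old one left off.
        StreamingQuery query = events.writeStream()
                .format("parquet")
                .option("path", "s3://my-bucket/events/")
                .option("checkpointLocation", "s3://my-bucket/checkpoints/msk-to-s3/")
                .trigger(Trigger.ProcessingTime("1 minute"))
                .start();

        query.awaitTermination();
    }
}

Because the checkpoint lives in S3, you can tear down a transient EMR cluster and bring the job back up on EMR, EKS, or plain EC2 without losing your place in the topic.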

Connect to "on-premise" postgresql database with AWS glue

I have a PostgreSQL database which is in effect "on premise", but I have credentials and a JDBC connection string. I want to read the table in AWS Glue and use it in a job as a source, and write to S3.
But it is asking for a VPC, which I don't understand. Can I hard-code the connection in the job? This seems like such a basic task for an ETL environment. What am I missing?
Glue can connect to any database using JDBC, and it is a good toolbox for fast-tracking PySpark coding.
Basically, you need to understand where you are physically located in the AWS environment, and identify or create a VPC. From there, set up your network ACLs and security group so the Glue job can reach the database (a rough sketch of the underlying read follows).
Good luck!
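Under the hood, a Glue connection to PostgreSQL is essentially a Spark JDBC read, which is why the VPC matters: the job's network interfaces have to be able to reach the database host and port. As a rough sketch of that read (plain Spark Java API rather than Glue's DynamicFrame API, with a placeholder host, table, and credentials):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OnPremPostgresRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("onprem-postgres-to-s3").getOrCreate();

        // JDBC read against the on-premises database (placeholder host/db/table).
        // The job's VPC, security group, and routing must allow traffic
        // from the job to this host and port.
        Dataset<Row> table = spark.read()
                .format("jdbc")
                .option("url", "jdbc:postgresql://onprem-db.example.com:5432/mydb")
                .option("dbtable", "public.my_table")
                .option("user", "etl_user")
                .option("password", "secret")        // better: pull from Secrets Manager
                .option("driver", "org.postgresql.Driver")
                .load();

        // Land the extract in S3 (placeholder bucket/prefix).
        table.write().mode("overwrite").parquet("s3://my-bucket/exports/my_table/");
    }
}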

Uploading a file to S3, then processing it in EMR, and finally transferring to Redshift

I am new to this forum and to this technology and am looking for your advice. I am working on a POC, and below are my requirements. Could you please guide me on how to achieve this?
Copy data from NAS to S3.
Use S3 as a source in an EMR job, with the target being S3/Redshift.
Any link or PDF would also be helpful.
Thanks,
Pardeep
You're asking a lot here and there's not much info on your use case to go by, so I'm going to be very general in my answer; hopefully it at least points you in the right direction.
You can use Lambda to copy data from your NAS to S3. Assuming your NAS is on-premises and you have a VPN into your VPC, or even Direct Connect configured, you can use a VPC-enabled Lambda function to read from the on-premises NAS and write to S3 (a rough sketch follows this answer).
If your NAS is running on EC2, the above remains the same, except there's no need for a VPN or Direct Connect.
Are you looking to kick off the EMR job from Lambda? You can use S3 as a source for EMR and output to S3, either from within Lambda or via other means.
If you can provide more info on your use case we could probably give you a better quality answer.
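As a rough sketch of the upload half of such a Lambda (AWS SDK for Java v1; how you actually reach the NAS depends on the protocol, so that part is a placeholder, as are the bucket and key names):

import java.io.File;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class NasToS3Handler implements RequestHandler<String, String> {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public String handleRequest(String nasPath, Context context) {
        // Placeholder: assumes the NAS file is reachable as a local path from
        // inside the VPC (e.g. via a mount exposed to the function). Fetching
        // over NFS/SMB/HTTP would replace this step.
        File localFile = new File(nasPath);

        // Upload to S3 (placeholder bucket and key).
        s3.putObject("my-ingest-bucket", "raw/" + localFile.getName(), localFile);
        return "uploaded " + localFile.getName();
    }
}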
Copy data from NAS to S3.
It really depends on the amount of data and the frequency at which you run the copy job. If the data is in GBs, then you can install the AWS CLI on a machine where the NFS share is attached. AWS CLI commands like cp are multithreaded and can easily copy your datasets to S3 (a quick sketch follows the links below). You might also enable S3 Transfer Acceleration to speed things up. Having AWS Direct Connect to your company network can also speed up transfers from on-premises to AWS.
http://docs.aws.amazon.com/cli/latest/topic/s3-config.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://aws.amazon.com/directconnect/
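A quick sketch of that CLI approach, with placeholder paths and bucket names (max_concurrent_requests is the setting behind the multithreading mentioned above):

# Raise the parallelism used by s3 cp/sync (see the s3-config doc linked above).
aws configure set default.s3.max_concurrent_requests 20

# Copy everything under the NFS mount to S3 (placeholder paths).
aws s3 cp /mnt/nas/exports s3://my-ingest-bucket/raw/ --recursive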
If the data is in TBs (probably distributed across multiple volumes), then you might have to consider physical transfer options like AWS Snowball, AWS Import/Export, or AWS Snowmobile, depending on the use case.
https://aws.amazon.com/cloud-data-migration/
Use S3 as a source in EMR Job with target to S3/Redshift.
Again, as there are a lot of applications on EMR, there are a lot of choices. Redshift supports COPY/UNLOAD commands to and from S3, which any application can make use of. If you want to use Spark on EMR, then installing the Databricks spark-redshift driver is a viable option (a sketch follows the links below).
https://github.com/databricks/spark-redshift
https://databricks.com/blog/2015/10/19/introducing-redshift-data-source-for-spark.html
https://aws.amazon.com/blogs/big-data/powering-amazon-redshift-analytics-with-apache-spark-and-amazon-machine-learning/
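A rough sketch of the spark-redshift write path (Java API; the cluster endpoint, table, IAM role, and temp bucket are placeholders, and the option names come from the databricks/spark-redshift README, so check them against the connector version you install):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class S3ToRedshift {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("s3-to-redshift").getOrCreate();

        // Read the staged data from S3 (placeholder path).
        Dataset<Row> staged = spark.read().parquet("s3://my-bucket/staged/orders/");

        // The connector unloads the DataFrame to the temp dir and then issues a
        // Redshift COPY behind the scenes.
        staged.write()
                .format("com.databricks.spark.redshift")
                .option("url", "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev?user=etl&password=secret")
                .option("dbtable", "public.orders")
                .option("tempdir", "s3n://my-bucket/tmp/")
                .option("aws_iam_role", "arn:aws:iam::123456789012:role/redshift-copy-role")
                .mode(SaveMode.Append)
                .save();
    }
}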

How to use external data with Elastic MapReduce

From Amazon's EMR FAQ:
Q: Can I load my data from the internet or somewhere other than Amazon S3?
Yes. Your Hadoop application can load the data from anywhere on the internet or from other AWS services. Note that if you load data from the internet, EC2 bandwidth charges will apply. Amazon Elastic MapReduce also provides Hive-based access to data in DynamoDB.
What are the specifications for loading data from external (non-S3) sources? There seems to be a dearth of resources around this option, and it doesn't appear to be documented in any form.
If you want to do it "the Hadoop way", you should implement a DFS over your data source, or put references to your source URLs into a file that serves as the input for the MR job (see the mapper sketch below).
At the same time, Hadoop is about moving code to the data. Even EMR over S3 is not ideal from this perspective, since EC2 and S3 are different clusters, so it is hard to imagine effective MR processing when the data source is physically outside the data center.
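The URL-list approach is easy to sketch: the MR input is a plain text file with one source URL per line, and each mapper fetches its URL over HTTP. A rough Java mapper, assuming the sources are plain HTTP endpoints:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Input: a text file of URLs, one per line. Output: (url, line-of-content) pairs.
public class UrlFetchMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable offset, Text urlLine, Context context)
            throws IOException, InterruptedException {
        String url = urlLine.toString().trim();
        if (url.isEmpty()) {
            return;
        }
        // Fetch the external source directly from the mapper; EC2 bandwidth
        // charges apply, as the FAQ quoted above notes.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(url).openStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                context.write(new Text(url), new Text(line));
            }
        }
    }
}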
Basically, what Amazon is saying is that you can programmatically access any content from the internet, or from any other source, in your code. For example, you can access a CouchDB instance via any HTTP-based client API.
I know that the Cassandra package for Java has a source package named org.apache.cassandra.hadoop, and there are two classes in it that are needed for getting data out of Cassandra when you are running AWS Elastic MapReduce.
Essential classes: ColumnFamilyInputFormat.java and ConfigHelper.java
Go to this link to see an example of what I'm talking about.