How to create a role for a long-running Glue Redshift job? - amazon-web-services

Long-running Glue jobs that export data from Redshift to S3 fail with S3ServiceException: The provided token has expired. Amazon describes using a custom role as a workaround (here), but they do not provide any example. Could somebody provide a CloudFormation snippet? What should such a role look like? If I use a Glue job, should I add actions for DynamoDB or an EMR cluster to the role policy?
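For reference, a minimal sketch of such a role in CloudFormation might look like the following. This is an assumption based on the standard Glue service-role pattern (the logical name and bucket name are placeholders), not the exact role from the linked AWS article; adjust the trusted service and actions to whatever the workaround actually requires, only add DynamoDB or EMR actions if your job touches those services, and note that the role's MaxSessionDuration property may also be relevant if the workaround concerns credential lifetime.

# Hedged sketch: Glue job role with S3 access (names are placeholders)
GlueRedshiftExportRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: glue.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole
    Policies:
      - PolicyName: s3-export-access
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - s3:GetObject
                - s3:PutObject
                - s3:DeleteObject
                - s3:ListBucket
              Resource:
                - arn:aws:s3:::my-export-bucket
                - arn:aws:s3:::my-export-bucket/*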

Related

Does data in Amazon S3 go over the public internet when I use a Glue job?

I'm using AWS services to create a data pipeline.
I have data stored in an Amazon S3 bucket, and I plan to use a Glue crawler to crawl the data under a prefix to extract the metadata, and afterwards a Glue job to do the ETL and save the data to another bucket.
My question is: over which network do these services work and communicate with each other? Is it possible that the data will be moved from Amazon S3 to Glue over the public internet?
Is there any link to AWS documentation that explains which networks AWS services use when they transfer data between each other?
You need to grant explicit permission to any resource that should be able to access your S3 bucket.
IAM roles: using a policy, create a role and attach that role to the AWS resource.
A bucket policy is another mechanism to grant access.
By default everything is private; unless you grant access, nothing is accessible from the internet.
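For example, an identity-based policy attached to the role of a Glue crawler or job might look like the sketch below (the bucket name is a placeholder, and the exact actions depend on what the resource needs to do):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-data-lake-bucket",
        "arn:aws:s3:::my-data-lake-bucket/*"
      ]
    }
  ]
}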

Create AWS QuickSight resources with AWS CLI commands

I am trying to create a script that will provision a QuickSight account and configure the following parameters:
Subscription type
SPICE Capacity
VPC connection
QuickSight access to AWS services
From the AWS CLI QuickSight documentation, I couldn't find a way to create the account, choose the subscription type, or change the SPICE capacity.
What am I missing?
I believe you are looking for the 'register-user' command from the CLI.
https://docs.aws.amazon.com/cli/latest/reference/quicksight/register-user.html
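For illustration, registering a user might look something like the command below (the account ID, user name, and email are placeholders); note that this registers users in an existing account rather than creating the subscription or changing SPICE capacity:

aws quicksight register-user \
  --aws-account-id 111122223333 \
  --namespace default \
  --identity-type QUICKSIGHT \
  --user-name analyst \
  --email analyst@example.com \
  --user-role READER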

Can Amazon Athena work without the Glue catalog?

Can I use Informatica EDC instead of the Glue catalog in AWS?
Is AWS Athena tightly coupled with the Glue catalog?
Did you check here: https://docs.aws.amazon.com/athena/latest/ug/glue-upgrade.html?
It looks like you need to perform the AWS Glue upgrade, and also add policies so that Athena can pull catalog information. The FAQ is available here: https://docs.aws.amazon.com/athena/latest/ug/glue-faq.html. I have not worked on this scenario yet, but I am working on Glue with Redshift.
In the FAQ, it's mentioned as follows:
Why do I need to add AWS Glue policies to Athena users?
Before you upgrade, Athena manages the data catalog, so Athena actions must be allowed for your users to perform queries. After you upgrade to the AWS Glue Data Catalog, Athena actions no longer apply to accessing the AWS Glue Data Catalog, so AWS Glue actions must be allowed for your users. Remember, the managed policy for Athena has already been updated to allow the required AWS Glue actions, so no action is required if you use the managed policy.
What happens if I don’t allow AWS Glue policies for Athena users?
If you upgrade to the AWS Glue Data Catalog and don't update a user's customer-managed or inline IAM policies, Athena queries fail because the user won't be allowed to perform actions in AWS Glue. For the specific actions to allow, see Step 2 - Update Customer-Managed/Inline Policies Associated with Athena Users.
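For illustration, the AWS Glue actions Athena users typically need are catalog read permissions along the lines of the sketch below; this is an assumption rather than the exact list from the linked Step 2, so check the documentation for the authoritative set.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetPartition",
        "glue:GetPartitions",
        "glue:BatchGetPartition"
      ],
      "Resource": "*"
    }
  ]
}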

How to use Spark to read data from one AWS account and write to another AWS account?

I have Spark jobs running on an EKS cluster to ingest AWS logs from S3 buckets.
Now I have to ingest logs from another AWS account. I have managed to use the settings below to successfully read data from the other account with the Hadoop AssumedRoleCredentialProvider.
But how do I save the DataFrame back to S3 in my own AWS account? There seems to be no way to set the Hadoop S3 config back to my own AWS account.
// Assume a role in the other AWS account for S3A access
spark.sparkContext.hadoopConfiguration.set("fs.s3a.assumed.role.external.id", "****")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.assumed.role.credentials.provider", "com.amazonaws.auth.InstanceProfileCredentialsProvider")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.assumed.role.arn", "****")

// Reading from the cross-account bucket works
val data = spark.read.json("s3a://cross-account-log-location")
data.count

// Changing back to InstanceProfileCredentialsProvider before writing does not work
spark.sparkContext.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.InstanceProfileCredentialsProvider")
data.write.parquet("s3a://bucket-in-my-own-aws-account")
As per the Hadoop documentation, different S3 buckets can be accessed with different S3A client configurations, using per-bucket configuration options that include the bucket name.
E.g.: fs.s3a.bucket.<bucket name>.access.key
Check the below URL: http://hadoop.apache.org/docs/r2.8.0/hadoop-aws/tools/hadoop-aws/index.html#Configurations_different_S3_buckets
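Applied to this question, a minimal sketch could scope the assumed-role settings to the cross-account bucket only, so the default instance-profile credentials still apply to your own bucket (the bucket names are taken from the question; the role ARN and external ID stay as placeholders):

// Per-bucket S3A options: only the cross-account bucket assumes the remote role
val hc = spark.sparkContext.hadoopConfiguration
hc.set("fs.s3a.bucket.cross-account-log-location.aws.credentials.provider",
  "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
hc.set("fs.s3a.bucket.cross-account-log-location.assumed.role.arn", "****")
hc.set("fs.s3a.bucket.cross-account-log-location.assumed.role.external.id", "****")
hc.set("fs.s3a.bucket.cross-account-log-location.assumed.role.credentials.provider",
  "com.amazonaws.auth.InstanceProfileCredentialsProvider")

// The bucket in your own account keeps the instance-profile credentials
hc.set("fs.s3a.bucket.bucket-in-my-own-aws-account.aws.credentials.provider",
  "com.amazonaws.auth.InstanceProfileCredentialsProvider")

val data = spark.read.json("s3a://cross-account-log-location")
data.write.parquet("s3a://bucket-in-my-own-aws-account")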

Traditional Data Lake vs AWS Lake Formation

I have been setting up data lakes for clients where we load the data from on-premises or other sources into S3 (the data lake). We create an AWS Glue catalog on this raw data to define the schemas.
The next step is to use either EMR or AWS Glue for data cleansing and load the transformed data into RDS / Redshift / S3 as the final target.
The jobs can be scheduled using Data Pipeline, Glue jobs, or an AWS Lambda event trigger, depending on the use case / service used.
Analysts and other users are given the required data / S3 bucket access via IAM for QuickSight visualizations, for querying the data using Athena, Drill, etc., or for ML applications in SageMaker.
My question is: how is AWS Lake Formation different from the traditional data lake described above?
As far as I can tell, AWS Lake Formation makes all the above services, such as S3, the Glue catalog, the Glue ETL code generator, a job scheduler, etc., available in a single window, with some more advanced security for users / data (record / column level) that can be configured from within the Lake Formation console.
Is there anything else that makes Lake Formation stand out from a traditional cloud-based data lake?
Thanks
AWS Lake Formation is primarily a permission-control layer coupled with AWS Glue: it basically provides the catalog together with permission control. Lake Formation frees you from managing IAM permissions and instead provides its own grant-based, fine-grained permission control using simple database-style grants.
Lake Formation still has some challenges with regard to integration with some data services like EMR (it requires additional IAM policies).
But overall, using Lake Formation with S3 and Glue ETL provides everything needed to build a data lake.
Lake Formation could still benefit from an improved UI and better data discovery.
You can use Lake Formation to implement a traditionally styled data lake, or make it more modular and provide support across multiple AWS accounts.
Your understanding is correct: Lake Formation is essentially a permissions model over the Glue catalog that allows close integration with the other AWS data lake tools (Athena, S3, Glue, EMR, etc.), along with some additional features like blueprints (for syncing data from an RDBMS to S3), jobs (for ETL), and crawlers (for data discovery).
Lake Formation allows easier permission management for "user" IAM roles in your environment by letting them be centrally managed through the Lake Formation UI and API. Instead of having to update individual IAM/bucket policies each time a role needs new access, Lake Formation allows you to onboard a single "service" IAM role with bucket access and then grant database/table/column-level access to the user IAM roles that need it.
The user roles essentially assume the service role to perform their operations (it might not be an assume-role call exactly, as this is an AWS black box). So Lake Formation saves you the hassle of managing permissions for all user IAM roles via a mess of IAM/bucket policies.
It also makes it easier to share data with cross-account resources if your setup requires it.
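As an illustration of the grant model described above, a hedged sketch of granting a user role SELECT on a table via the CLI might look like this (the account ID, role, database, and table names are placeholders):

aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::111122223333:role/analyst-role \
  --resource '{"Table": {"DatabaseName": "sales_db", "Name": "orders"}}' \
  --permissions SELECT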