Unable to add bulk contacts in AWS SES using the CLI

I am trying to run a simple command from the AWS documentation to import a contacts.csv file from an S3 bucket into SES, but I keep getting the following error:
An error occurred (InternalFailure) when calling the CreateImportJob operation (reached max retries: 15): None
I have AWS CLI 2.1.38 installed.
The command I used, taken from the documentation, to import bulk contacts from an S3 object into SES is:
aws sesv2 create-import-job \
--import-destination "{\"ContactListDestination\": {\"ContactListName\":\"ExampleContactListName\", \"ContactListImportAction\":\"PUT\"}}" \
--import-data-source "{\"S3Url\": \"s3://wwsexampletest/email-test\",\"DataFormat\": \"CSV\"}" --profile sameer
Let me know if you can run this command in your account. FYI, my SES account is out of the sandbox.
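For anyone trying to reproduce this, below is a sketch of the same call with an explicit region plus a follow-up status check. The us-east-1 region is an assumption, the bucket, list name, and profile are the placeholders from the command above, and the single-quoted JSON assumes a POSIX shell:
aws sesv2 create-import-job \
  --import-destination '{"ContactListDestination": {"ContactListName": "ExampleContactListName", "ContactListImportAction": "PUT"}}' \
  --import-data-source '{"S3Url": "s3://wwsexampletest/email-test", "DataFormat": "CSV"}' \
  --region us-east-1 --profile sameer

# If the call succeeds it returns a JobId, which can then be inspected with:
aws sesv2 get-import-job --job-id <JobId> --region us-east-1 --profile sameer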

Related

gsutil rsync with s3 buckets gives InvalidAccessKeyId error

I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when I try:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed which is working fine with the same AccessKeyId and listing buckets as well as objects in the bucket.
Any idea what I am doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both sets of credentials - Google and AWS S3. On the GCP side you need to add the Amazon S3 credentials to ~/.aws/credentials, or you can store your AWS credentials in the .boto configuration file used by gsutil. However, when you're accessing an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file to override other credentials, such as any that are stored in ~/.boto.
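As a minimal sketch (placeholder key values, default profile assumed), the ~/.aws/credentials entry would look like this; the same two keys can instead go under a [Credentials] section in ~/.boto:
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx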
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. In addition, if you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you have set up the VPN, IAM credentials, and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.

AWS Error Message: Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4

I am facing the following error while writing to an S3 bucket using PySpark.
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: A0B0C0000000DEF0, AWS Error Code: InvalidArgument, AWS Error Message: Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.,
I have applied server-side encryption using AWS KMS service on the S3 bucket.
I am using the following spark-submit command:
spark-submit --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 --jars sample-jar sample_pyspark.py
This is the sample code I am working with:
from pyspark import SparkContext
from pyspark.sql import SQLContext, SparkSession

spark_context = SparkContext()
sql_context = SQLContext(spark_context)
spark = SparkSession.builder.appName('abc').getOrCreate()
hadoopConf = spark_context._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
# 'source_data' is an existing Spark DataFrame
source_data.coalesce(1).write.mode('overwrite').parquet("s3a://sample-bucket")
Note: I tried to load the Spark DataFrame into an S3 bucket without server-side encryption enabled, and it was successful.
The error seems to be telling you to enable V4 S3 signatures on the Amazon SDK. One way to do it is from the command line:
spark-submit --conf spark.driver.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
--conf spark.executor.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
... (other spark options)
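For completeness, combining that with the spark-submit invocation from the question would look roughly like this (same packages and jars as above; treat it as a sketch rather than a verified command):
spark-submit \
  --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 \
  --jars sample-jar \
  --conf spark.driver.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
  --conf spark.executor.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
  sample_pyspark.py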
That said, I agree with Steve that you should use a more recent hadoop library.
References:
Amazon s3a returns 400 Bad Request with Spark

AWS CLI returns NoSuchBucket but I can see the bucket in the console

I am trying to learn Terraform and running through some examples from its documentation. I am trying to move a file from my PC to an S3 bucket. I created the bucket with this command:
aws s3api create-bucket --bucket=terraform-serverless-example --region=us-east-1
If I then check in the console or call s3api list-buckets, I can see that the bucket exists and is functional. I can also upload using the console. However, when I try to run this command:
aws s3 cp example.zip s3://terraform-serverless-example/v1.0.0/example.zip
It returns this error:
upload failed: ./code.zip to s3://test-verson1/v1.0.0/code.zip An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
I have tried specifying the region as a flag, have made sure the permissions are correct, and so on, and it is all set up as it should be. I can't figure out why this happens and would appreciate any help.
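As a diagnostic sketch (bucket name taken from the question, region value an assumption), it can help to confirm which region the bucket actually lives in and then pass that region explicitly on the copy:
# LocationConstraint comes back null for us-east-1
aws s3api get-bucket-location --bucket terraform-serverless-example

# Retry the upload with the region stated explicitly
aws s3 cp example.zip s3://terraform-serverless-example/v1.0.0/example.zip --region us-east-1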

AWS S3 CLI with Transfer Acceleration download command throws a 400 error when calling the HeadObject operation: Bad Request

I am trying to upload to and download from S3 with a Transfer Acceleration endpoint using the AWS CLI from an EC2 instance.
aws s3 cp /home/centos/<FOLDER_NAME> s3://<BUCKET_NAME>/<KEY_NAME> --region
ap-south-1 --endpoint-url http://<S3-Transfer-Acc-endpoint> [--recursive]
I am able to upload files and folders successfully, and downloading folders with --recursive also works.
But when I download a single large file from the bucket using the following command:
aws s3 cp s3://<BUCKET_NAME>/<KEY_NAME> /home/centos/<KEY_NAME> --region
ap-south-1 --endpoint-url http://<S3-Transfer-Acc-endpoint>
I get the following error:
fatal error: An error occurred (400) when calling the HeadObject operation:
Bad Request
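One thing worth trying, assuming Transfer Acceleration is already enabled on the bucket, is letting the CLI build the accelerate endpoint itself instead of passing --endpoint-url by hand. This is only a sketch of that approach:
# Tell the CLI to use the <bucket>.s3-accelerate.amazonaws.com endpoint for the default profile
aws configure set default.s3.use_accelerate_endpoint true

# Then run the plain copy command without --endpoint-url
aws s3 cp s3://<BUCKET_NAME>/<KEY_NAME> /home/centos/<KEY_NAME> --region ap-south-1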

AWS SNS error - Invalid parameter while publishing message using aws-cli

I am working with AWS SNS and completed the initial setup as per the AWS documentation. I just needed to test it using the AWS CLI, so I used the following command to publish a test message to an SNS topic from my local PC:
aws sns publish --topic-arn "arn:aws:sns:us-east-1:xxxxxxxxxxx:test-notification-service" --message "Hello, from SNS"
However, I got stuck on the following generic error. It just says InvalidParameter. I have configured ~/.aws/credentials as needed.
An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: TopicArn
The issue is a cross-region mismatch: your AWS CLI default region might be different from the region where your SNS topic is located.
Check your AWS CLI region and make sure it matches the region of your SNS topic.
To check your region in AWS CLI use:
aws configure get region
To configure your AWS region you can use the command:
aws configure set region <region-name>
https://docs.aws.amazon.com/cli/latest/reference/configure/set.html
Alternatively, you can just add the --region us-east-1 parameter to your command:
aws sns publish --topic-arn "arn:aws:sns:us-east-1:xxxxxxxxxxx:test-notification-service" --message "Hello, from SNS" --region us-east-1
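As a quick sanity check (region taken from the ARN in the question), you can also confirm the topic really exists in that region:
# The topic ARN should be listed in the region it was created in
aws sns list-topics --region us-east-1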