aws pinpoint update-apns-sandbox-channel command results in: missing credentials

aws --version
aws-cli/1.16.76 Python/2.7.10 Darwin/16.7.0 botocore/1.12.66
I'm trying to programmatically add an APNS_SANDBOX channel to a Pinpoint app. I'm able to do this successfully via the Pinpoint console, but not with the aws cli or a Lambda function, which is the end goal. Changes to our Test/Prod environments can only be made via CodePipeline, but for testing purposes I'm trying to achieve this with the aws cli.
I've tried both aws cli (using the root credentials) and a lambda function -- both result in the following error:
An error occurred (BadRequestException) when calling the UpdateApnsSandboxChannel operation: Missing credentials
I have tried setting the Certificate field in the UpdateApnsSandboxChannel JSON object both as the path to the .p12 certificate file and as a string value extracted with the openssl tool.
Today I worked with someone from aws support, and they were not able to figure out the issue after trying to debug for a couple of hours. They said they would send an email to the pinpoint team, but they did not have an ETA on when they might respond.
Thanks

I ended up getting this to work successfully -- This is why it was failing:
I was originally making the CLI call with the following request object, as this is what is included in the documentation:
aws pinpoint update-apns-sandbox-channel --application-id [physicalID] --cli-input-json file:///path-to-requestObject.json
{
    "APNSSandboxChannelRequest": {
        "BundleId": "com.bundleId.value",
        "Certificate": "P12_FILE_PATH_OR_CERT_AS_STRING",
        "DefaultAuthenticationMethod": "CERTIFICATE",
        "Enabled": true,
        "PrivateKey": "PRIVATEKEY_FILE_PATH_OR_AS_STRING",
        "TeamId": "",
        "TokenKey": "",
        "TokenKeyId": ""
    },
    "ApplicationId": "Pinpoint_PhysicalId"
}
After playing around with it some more, I got it to work by removing BundleId, TeamId, TokenKey, and TokenKeyId. I believe those fields are only needed when using a .p8 key (token-based authentication).
{
    "APNSSandboxChannelRequest": {
        "Certificate": "P12_FILE_PATH_OR_CERT_AS_STRING",
        "DefaultAuthenticationMethod": "CERTIFICATE",
        "Enabled": true,
        "PrivateKey": "PRIVATEKEY_FILE_PATH_OR_AS_STRING"
    },
    "ApplicationId": "Pinpoint_PhysicalId"
}
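For anyone hitting the same thing from a Lambda function (the original end goal), here is a minimal boto3 sketch of the same working request. The file paths and application ID are placeholders, and it assumes the certificate and private key have already been extracted from the .p12 as PEM strings with openssl:

# Hedged sketch of the same call from Python/boto3 (e.g. inside a Lambda).
# Paths and the application ID are placeholders, not values from this post.
import boto3

def enable_apns_sandbox(application_id, cert_pem_path, key_pem_path):
    pinpoint = boto3.client("pinpoint")
    # boto3 takes the certificate and private key contents as strings,
    # so read the PEM files extracted from the .p12 beforehand.
    with open(cert_pem_path) as f:
        certificate = f.read()
    with open(key_pem_path) as f:
        private_key = f.read()
    return pinpoint.update_apns_sandbox_channel(
        ApplicationId=application_id,
        APNSSandboxChannelRequest={
            "Certificate": certificate,
            "DefaultAuthenticationMethod": "CERTIFICATE",
            "Enabled": True,
            "PrivateKey": private_key,
        },
    )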

Related

Phillips-Labs terraform-aws-github-runner not creating ec2 instance

I am trying to set up self-hosted runners for GitHub using Terraform with the Phillips-Labs terraform-aws-github-runner module. I see the GitHub webhook sending/receiving messages, the SQS queue receiving messages, and those messages being retrieved. The scale-up lambda is firing and I see the following logs:
2023-01-31 11:50:15.879 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119051 scaleUp] Received workflow_job from {my-org}/terraform-aws-github-self-hosted-runners
{}
2023-01-31 11:50:15.880 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119084 scaleUp] Received event
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
2023-01-31 11:50:16.188 DEBUG [gh-auth:22b11002-76d2-5596-9451-4c51746730c2 index.js:118486 createAuth] GHES API URL: {"runnerType":"Org","runnerOwner":"my-org","event":"workflow_job","id":"11002102910"}
2023-01-31 11:50:16.193 WARN [scale-runners:22b11002-76d2-5596-9451-4c51746730c2 index.js:118529 Runtime.handler] Ignoring error: error:1E08010C:DECODER routines::unsupported
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
I do not see any EC2 instances being created. I suspect the GHES API URL: should have a value after it, but I'm not certain. Also, the final log line says it is ignoring an error...
I have confirmed my private key pem file is stored as a multi-line secret in secrets manager.
Any advice would be much appreciated!
It looks like not all of the permissions needed by the GitHub app are documented. I needed to add a subscription to the Workflow run event.

how to add shareIdentifier to an AWS EventBridge rule for scheduled execution of an AWS Batch job

I configured an AWS EventBridge rule (via the web GUI) for running an AWS Batch job - the rule is triggered, but I am getting the following error after invocation:
shareIdentifier must be specified. (Service: AWSBatch; Status Code: 400; Error Code: ClientException; Request ID: 07da124b-bf1d-4103-892c-2af2af4e5496; Proxy: null)
My job uses a scheduling policy and needs shareIdentifier to be set, but I don't know how to set it. Here is a screenshot from the configuration of the rule:
There are no additional settings for further job arguments/parameters; the only thing I can configure is retries. I also checked the aws-cli command for putting a rule (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/events/put-rule.html), but it doesn't seem to have any additional settings. Any suggestions on how to solve this? Or working examples?
Edited:
I ended up using the Java SDK for AWS Batch: https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-batch. I have a scheduled method that periodically spawns jobs with the following piece of code:
import com.amazonaws.services.batch.AWSBatch;
import com.amazonaws.services.batch.AWSBatchClientBuilder;
import com.amazonaws.services.batch.model.SubmitJobRequest;
import com.amazonaws.services.batch.model.SubmitJobResult;

AWSBatch client = AWSBatchClientBuilder.standard().withRegion("eu-central-1").build();
SubmitJobRequest request = new SubmitJobRequest()
        .withJobName("example-test-job-java-sdk")
        .withJobQueue("job-queue")
        .withShareIdentifier("default")
        .withJobDefinition("job-type");
SubmitJobResult response = client.submitJob(request);
log.info("job spawn response: {}", response);
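For anyone scripting this in Python rather than Java, a roughly equivalent boto3 sketch (the queue, job definition, and share identifier names are placeholders mirroring the Java example above):

import boto3

# Hedged boto3 equivalent of the Java snippet above; names are placeholders.
batch = boto3.client("batch", region_name="eu-central-1")
response = batch.submit_job(
    jobName="example-test-job-boto3",
    jobQueue="job-queue",
    shareIdentifier="default",  # required when the queue uses a scheduling policy
    jobDefinition="job-type",
)
print("job spawn response:", response)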
Have you tried providing additional settings to your target via the input transformer, as referenced in the AWS docs, AWS Batch Jobs as EventBridge Targets?
FWIW I'm running into the same problem.
I had a similar issue; from both the CLI and the GUI I just couldn't find a way to pass ShareIdentifier from an EventBridge rule. In the end I had to use a state machine (Step Function) instead:
"States": {
"Batch SubmitJob": {
"Type": "Task",
"Resource": "arn:aws:states:::batch:submitJob.sync",
"Parameters": {
"JobName": <name>,
"JobDefinition": <Arn>,
"JobQueue": <QueueName>,
"ShareIdentifier": <Share>
},
...
As you can see, it handles ShareIdentifier fine.

AWS Step Function keyerror

I am following this guide to send approval emails to myself: https://aws.amazon.com/blogs/aws/using-callback-urls-for-approval-emails-with-aws-step-functions/
The code in this guide is exactly the same as mine, and I have given this input to the step function:
{
    "name": "TestName"
}
Every time I try to run the step function, I get the following error:
Error
KeyError
Cause
{
    "errorMessage": "'urls'",
    "errorType": "KeyError",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 35, in lambda_handler\n urls = json.loads(response['Payload'].read())['urls']\n"
    ]
}
It's referring to this line: urls = json.loads(response['Payload'].read())['urls']
This line is part of the code in the AWS Lambda function.
What does this error mean, and what can I do to fix it?
I have never tested that Lambda/Python end-to-end doc, so I cannot tell you whether it works. However, this one definitely works. It invokes multiple AWS services via a Lambda function and does include sending email messages. It uses the AWS SDK for Java V2.
Using AWS Step Functions and the AWS SDK for Java to build workflows that sends notifications over multiple channels
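As for the KeyError itself: line 35 indexes the 'urls' key in the payload returned by the invoked Lambda, so the error means that payload did not contain 'urls' -- typically because the invoked function failed and returned an errorMessage/errorType body instead. A hedged debugging sketch (the helper name is mine, not from the blog post) that logs the payload before indexing it:

import json

def extract_urls(response):
    # Log the raw payload so you can see what the invoked Lambda actually returned.
    payload = json.loads(response["Payload"].read())
    print("Raw payload from invoked Lambda:", json.dumps(payload))
    if "urls" not in payload:
        # Typical cause: the invoked function raised an exception, so the payload
        # holds errorMessage/errorType instead of the expected keys.
        raise RuntimeError("Invoked Lambda did not return 'urls': %s" % payload)
    return payload["urls"]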

running amplify init on the terminal gives an error

I am getting this error while doing amplify init. The main goal is to develop authentication through aws-cognito, which uses aws-amplify.
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use default
init failed
Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27) {
message: 'read ECONNRESET',
errno: 'ECONNRESET',
code: 'NetworkingError',
syscall: 'read',
region: 'us-east-1',
hostname: 'amplify.us-east-1.amazonaws.com',
retryable: true,
time: 2020-04-16T12:09:59.975Z
You may try the following strategies to eliminate the problem you are facing:
1. This looks like a network problem, judging by the logs from your terminal, so if you have a jittery connection I would recommend trying the same thing on a stable internet connection.
2. I would recommend doing an amplify delete in case there is some misconfiguration from the last time you did an amplify init, though the chances of this are low.
3. Check your AWS environment variables or configuration file; maybe the credentials of your AWS account are missing. Try running aws configure and resetting the values of your key, secret, and region (see the sketch below for a quick way to verify that the credentials resolve).
I hope the above suggestions help you somehow.
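Regarding the third point, here is a hedged sanity check in Python/boto3 that the chosen profile actually resolves to working credentials ("default" is just the profile picked in the prompt above):

import boto3

# Hedged sanity check: does the "default" profile resolve to usable credentials?
session = boto3.Session(profile_name="default")
creds = session.get_credentials()
if creds is None:
    print("No credentials found - run `aws configure` for this profile.")
else:
    identity = session.client("sts").get_caller_identity()
    print("Credentials resolve to account:", identity["Account"])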

Spark is inventing its own AWS secretKey

I'm trying to read an S3 bucket from Spark, and up until today Spark has always complained that the request returns 403:
hadoopConf = spark_context._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3a.access.key", "ACCESSKEY")
hadoopConf.set("fs.s3a.secret.key", "SECRETKEY")
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
logs = spark_context.textFile("s3a://mybucket/logs/*")
Spark was saying .... Invalid Access key [ACCESSKEY]
However with the same ACCESSKEY and SECRETKEY this was working with aws-cli
aws s3 ls mybucket/logs/
and in python boto3 this was working
resource = boto3.resource("s3", region_name="us-east-1")
resource.Object("mybucket", "logs/text.py") \
.put(Body=open("text.py", "rb"),ContentType="text/x-py")
so my credentials ARE valid and the problem is definitely something with Spark..
Today I decided to turn on the "DEBUG" log for all of Spark and, to my surprise... Spark is NOT using the [SECRETKEY] I provided but instead... adds a random one???
17/03/08 10:40:04 DEBUG request: Sending Request: HEAD https://mybucket.s3.amazonaws.com / Headers: (Authorization: AWS ACCESSKEY:[RANDON-SECRET-KEY], User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.65-b01/1.8.0_65, Date: Wed, 08 Mar 2017 10:40:04 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
This is why it still returns 403! Spark is not using the key I provide with fs.s3a.secret.key but instead invents a random one??
For the record, I'm running this locally on my machine (OSX) with this command:
spark-submit --packages com.amazonaws:aws-java-sdk-pom:1.11.98,org.apache.hadoop:hadoop-aws:2.7.3 test.py
Could someone enlighten me on this?
(updated, as my original answer was downvoted as clearly unacceptable)
The AWS auth protocol doesn't send your secret over the wire. It signs the message with it. That's why what you see isn't what you passed in.
For further information, please reread.
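To make the "signing" point concrete: in the SigV2-style header shown in the log above, Authorization is "AWS ACCESSKEY:signature", where the signature is a base64-encoded HMAC-SHA1 of the request's string-to-sign computed with your secret key. A hedged illustration (the string-to-sign below is simplified, not the exact one Spark built):

import base64
import hashlib
import hmac

# Illustration only: how a SigV2-style signature is derived from the secret key.
secret_key = b"SECRETKEY"
string_to_sign = b"HEAD\n\n\nWed, 08 Mar 2017 10:40:04 GMT\n/mybucket/"

signature = base64.b64encode(
    hmac.new(secret_key, string_to_sign, hashlib.sha1).digest()
).decode()
print("Authorization: AWS ACCESSKEY:" + signature)  # looks nothing like the secret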
I ran into a similar issue. Requests that were using valid AWS credentials returned a 403 Forbidden, but only on certain machines. Eventually I found out that the system time on those particular machines was 10 minutes behind. Synchronizing the system clock solved the problem.
Hope this helps!
This random key is very intriguing. Maybe the AWS SDK is getting the credentials from the OS environment.
In Hadoop 2.8, the default AWS provider chain shows the following list of providers:
BasicAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, SharedInstanceProfileCredentialsProvider
Order, of course, matters! The AWSCredentialProviderChain gets the keys from the first provider that supplies them:
if (credentials.getAWSAccessKeyId() != null &&
        credentials.getAWSSecretKey() != null) {
    log.debug("Loading credentials from " + provider.toString());
    lastUsedProvider = provider;
    return credentials;
}
See the code in "GrepCode for AWSCredentialProviderChain".
I faced a similar problem using profile credentials. The SDK was ignoring the credentials inside ~/.aws/credentials (as a good practice, I encourage you not to store credentials inside the program in any way).
My solution...
Set the credentials provider to use ProfileCredentialsProvider
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com") # yes, I am using central eu server.
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', 'com.amazonaws.auth.profile.ProfileCredentialsProvider')
Folks, go for role-based IAM configuration ... that opens up the S3 access policies, which should be added to the default EMR role.