I'm getting an error creating an AWS AppSync Authenticated DataSource

I'm working through the Build On Serverless|S2 E4 video and I've gotten to the point of creating an authenticated HTTP data source using the AWS CLI. I'm getting this error:
Parameter validation failed:
Unknown parameter in httpConfig: "authorizationConfig", must be one of: endpoint
I think I'm using the same information provided in the video, repository and gist, updated for my own AWS account. It seems like it's some kind of formatting or missing-information error, but I'm just not seeing the problem.
When I remove the "authorizationConfig" property from state-machine-datasource.json, the command works.
I've reviewed the code against the information in the video as well as the documentation and examples (here and here) provided by AWS.
This is the command I'm running.
aws appsync create-data-source --api-id {my app sync app id} --name ProcessBookingStateMachine \
    --type HTTP --http-config file://src/backend/booking/state-machine-datasource.json \
    --service-role-arn arn:aws:iam::{my account}:role/AppSyncProcessBookingState --profile default
This is my state-machine-datasource.json:
{
  "endpoint": "https://states.us-east-2.amazonaws.com",
  "authorizationConfig": {
    "authorizationType": "AWS_IAM",
    "awsIamConfig": {
      "signingRegion": "us-east-2",
      "signingServiceName": "states"
    }
  }
}
Thanks,

I needed to update my AWS CLI to the latest version. The authenticated HTTP data source is something fairly new, I guess.
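For anyone hitting the same thing, checking the installed version and upgrading looks roughly like this (the pip command assumes a pip-managed CLI v1; the v2 bundle uses its own installer):
aws --version
pip install --upgrade awscli
# then re-run the aws appsync create-data-source command above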

Related

Unsure how to configure credentials for AWS Amplify CLI user - ready to ditch Amplify

I have a React Amplify app. All I want to do is work on it, push the changes to Amplify, etc. These are all standard and basic commands like amplify push.
The problem is that shortly after starting to work on my app (a month or two), I was no longer allowed to push, pull, or work on the app from the command line. There is no explanation, and the only error is this ...
An error occurred during the push operation: /
Access Denied
✅ Report saved: /var/folders/8j/db7_b0d90tq8hgpfcxrdlr400000gq/T/storygraf/report-1658279884644.zip
✔ Done
The logs created from the error show this.
error.json
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-20T01:20:01.876Z",
  "requestId": "DRFVQWYWJAHWZ8JR",
  "extendedRequestId": "hFfxnwUjbtG/yBPYG+GW3B+XfzgNiI7KBqZ1vLLwDqs/D9Qo+YfIc9dVOxqpMo8NKDtHlw3Uglk=",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 60.622127086356855
}
I have two users in my .aws/credentials file. One is the default (which is my work account). The other is called "personal". I have tried to push with
amplify push
amplify push --profile default
amplify push --profile personal
It always results in the same.
I followed the procedure located here under the title "Create environment variables to assume the IAM role and verify access" and entered a new AWS_ACCESS_KEY_ID and a new AWS_SECRET_ACCESS_KEY. When I then run the command ...
aws sts get-caller-identity
It returns the correct ARN. However, there is an AWS_SESSION_TOKEN variable that the docs say needs to be set, and I have no idea what that is.
Running amplify push under this new profile still results in an error.
I have also tried
AWS_PROFILE=personal aws sts get-caller-identity
Again, this results in the correct settings, but the amplify push still fails for the same reasons.
At this point, I'm ready to drop it and move to something else. I've been debugging this for literally months now, and it would be far easier to set up a standard React app on S3 and stand up my resources manually without dealing with this.
Any help is appreciated.
This is the same issue for me. There seems to be no way to reconfigure the CLI once its authentication method is set to a profile. I'm trying to change it back to Amplify Studio and have not been able to crack the code on updating it. Documentation in this area is awful.
In the amplify folder there is a .config directory. There are three files:
local-aws-info.json
local-env-info.json
project-config.json
project-config.json is required, but the local-* files maintain state for your local configuration. Delete these and you can re-init the project and reauthenticate the Amplify CLI for the environment.
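A minimal sketch of that, assuming you run it from the project root (back the files up first if you're unsure):
rm amplify/.config/local-aws-info.json amplify/.config/local-env-info.json
amplify init
# amplify init will prompt again for how to authenticate the CLI for this environment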

Elasticsearch 6.3 (AWS) snapshot restore progress ERROR: "/_recovery is not allowed"

I take manual snapshots of an Elasticsearch index
These are stored in a snapshot repo on S3
I have created a new ES cluster, also version 6.3
I have connected the new cluster to the S3 snapshot repo via python script method mentioned in this blog post: https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb
I have confirmed that the new cluster has access to the snapshot repo via the GET /_snapshot/manual-snapshot-repo/_all?pretty command
I have initiated a snapshot restore to this new cluster via:
POST /_snapshot/manual-snapshot-repo/snapshot_name/_restore
{
"indices": "reports",
"ignore_unavailable": false,
"include_global_state": false
}
It is clear that this operation has at least partially succeeded, as the cluster status has gone from "green" to "yellow" and a GET request to /_cluster/health yields information that suggests actions are occurring on an otherwise empty cluster... not to mention storage is starting to be utilized (when viewing cluster health on AWS).
I would very much like to monitor the progress of the restore operation.
Elasticsearch docs suggest to use the Recovery API. Docs Link: https://www.elastic.co/guide/en/elasticsearch/reference/6.3/indices-recovery.html
It is clear from the docs that GET /_recovery?human or GET /my_index/_recovery?human should yield restore progress.
However, I encounter the following error:
"Message": "Your request: '/_recovery' is not allowed."
I get the same message when attempting the GET command in the following ways:
Via Kibana dev tools
Via chrome address bar (It's just a GET operation after all)
Via Advanced REST Client (a Chrome app)
I have not been able to locate any other mention of this particular error message.
How can I utilize the GET /_recovery?human command on my Elasticsearch 6.3 clusters?
Thank you!
The Amazon-managed Elasticsearch service does not have all the endpoints available.
For version 6.3 you can check this link for the endpoints available; _recovery is not on the list, which is why you get that message.
Without the _recovery endpoint you will need to rely on _cluster/health.
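For example, you can watch the restored index's shards go from initializing to active with index-level cluster health (the domain endpoint here is a placeholder; the index name comes from the restore request above):
curl -s 'https://<your-es-domain-endpoint>/_cluster/health/reports?level=shards&pretty'
# active_shards_percent_as_number and the initializing/unassigned shard counts give a rough sense of restore progress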

“Create new version” ignores custom service account

I'm trying to deploy a new version of a model to AI Platform, it's a custom prediction routine. I've managed to deploy just fine when I have all the resources in the same GCP project, but when I try to deploy and I point the GCS files to a bucket in a different project, it fails to deploy. So I'm trying to pass which service account to use when creating the version, but it keeps ignoring it.
That's the message I get:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json returned "Field: version.deployment_uri Error: The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-*****@cloud-ml.google.com.iam.gserviceaccount.com.". Details: "[{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'version.deployment_uri', 'description': 'The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-******@cloud-ml.google.com.iam.gserviceaccount.com.'}]}]
My request looks like
POST https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json
{
  "name": "v1",
  "deploymentUri": "gs://[bucket-gcp-project-2]",
  "pythonVersion": "3.5",
  "runtimeVersion": "1.13",
  "package_uris": "gs://[bucket-gcp-project-2]/model.tar.gz",
  "predictionClass": "predictor.Predictor",
  "serviceAccount": "my-service-account@[gcp-project-1].iam.gserviceaccount.com"
}
The service account has access in both projects
Specifying a service account is documented as a beta feature. Try using the gcloud SDK, e.g.:
gcloud components install beta
gcloud beta ai-platform versions create v1 \
--service-account my-service-account@[gcp-project-1].iam.gserviceaccount.com ...
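A fuller invocation might look something like the following, mirroring the values from the JSON request above (these flags exist on the beta surface, but double-check them against your SDK version):
gcloud beta ai-platform versions create v1 \
  --model [model_name] \
  --origin gs://[bucket-gcp-project-2] \
  --package-uris gs://[bucket-gcp-project-2]/model.tar.gz \
  --prediction-class predictor.Predictor \
  --runtime-version 1.13 \
  --python-version 3.5 \
  --service-account my-service-account@[gcp-project-1].iam.gserviceaccount.com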

Permissions Issue with Google Cloud Data Fusion

I'm following the instructions in the Cloud Data Fusion sample tutorial and everything seems to work fine, until I try to run the pipeline right at the end. Cloud Data Fusion Service API permissions are set for the Google-managed service account as per the instructions. The pipeline preview function works without any issues.
However, when I deploy and run the pipeline it fails after a couple of minutes. Shortly after the status changes from provisioning to running the pipeline stops with the following permissions error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X.",
    "reason" : "forbidden"
  } ],
  "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X."
}
xxxxxxxxxxx-compute@developer.gserviceaccount.com is the default Compute Engine service account for my project.
"Project X" is not one of mine though, I've no idea why the pipeline startup code is trying to create a bucket there, it does successfully create temporary buckets ( one called df-xxx and one called dataproc-xxx) in my project before it fails.
I've tried this with two separate accounts and get the same error in both places. I had tried adding storage/admin roles to the various service accounts to no avail but that was before I realized it was attempting to access a different project entirely.
I believe I was able to reproduce this. What's happening is that the BigQuery Source plugin first creates a temporary working GCS bucket to export the data to, and I suspect it is attempting to create it in the Dataset Project ID by default, instead of your own project as it should.
As a workaround, create a GCS bucket in your account, and then in the BigQuery Source configuration of your pipeline, set the "Temporary Bucket Name" configuration to "gs://<your-bucket-name>".
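A minimal sketch of that workaround, assuming gsutil is authenticated against your own project (bucket name and region are placeholders):
gsutil mb -p <your-project-id> -l <region> gs://<your-bucket-name>
# then set "Temporary Bucket Name" in the pipeline's BigQuery Source properties to gs://<your-bucket-name>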
You are missing the permissions setup steps after you create an instance. The instructions for giving your service account the right permissions are on this page: https://cloud.google.com/data-fusion/docs/how-to/create-instance
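The binding that page walks through amounts to something like the following; the service-account name and role here are illustrative placeholders, so use the exact ones the linked doc gives for your instance:
gcloud projects add-iam-policy-binding <your-project-id> \
  --member serviceAccount:service-<project-number>@gcp-sa-datafusion.iam.gserviceaccount.com \
  --role roles/datafusion.serviceAgent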

Deploy Google Cloud Function from Cloud Function

Solved/invalid - see below
I'm trying to deploy a Google Cloud Function from a Google Cloud Function on demand.
However, whatever I try, I get a 403 Forbidden:
HttpError 403 when requesting https://cloudfunctions.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/functions?alt=json returned "The caller does not have permission"
I ended up granting the Cloud Function's service account the Project Owner role to make sure it can do anything, yet I still get the same error.
Is this limited intentionally (for example to avoid fork bombs or something) or am I doing something wrong?
Has anyone been able to make this work?
For the record: I ran the same (Python) function locally with Flask using my own account, and then it deploys the new cloud function perfectly, so the code itself seems to be OK.
Update
Code snippet of how I'm trying to deploy the cloud function:
from googleapiclient import discovery

# Build the Cloud Functions API client
cf_client = discovery.build('cloudfunctions', 'v1')
location = "projects/{MYPROJECT}/locations/europe-west1"
request = {
    "name": "projects/{MYPROJECT}/locations/europe-west1/functions/hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "description": "Hopper hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "entryPoint": "pubsub_trigger",
    "runtime": "python37",
    "availableMemoryMb": 256,
    "timeout": "60s",
    "sourceArchiveUrl": "gs://staging.{MYPROJECT}.appspot.com/deployment/hopper.zip",
    "eventTrigger": {
        "eventType": "providers/cloud.pubsub/eventTypes/topic.publish",
        "resource": "projects/{MYPROJECT}/topics/hopper-test-input"
    },
    "environmentVariables": {
        "HOPPER_ID": "hopper--2376cd24d318cd2d42f000f4f1c31a8f"
    }
}
# Create the function in the target location
response = cf_client.projects() \
    .locations() \
    .functions() \
    .create(location=location, body=request) \
    .execute()
Update
I feel like such an idiot... it turns out that for some reason I deployed the master function in a different project than the project I gave permissions on. No wonder it didn't work.
The correct answer should be: check that everything is indeed running how/where you expect it to be. Everything was configured correctly, and deploying a Cloud Function from a Cloud Function is not a problem. The project was incorrect, due to a different default project being set on the gcloud utility.
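In case it helps someone else, verifying (and, if needed, switching) the default project gcloud points at is just this — the project ID is a placeholder:
gcloud config get-value project
gcloud config set project <my-project-id>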