AWS Amplify: How to delete the environment, when resources are already partially deleted? - amazon-web-services

TL;DR: How do I delete an Amplify environment when some of the service's resources have already been deleted manually in the console?
So, I took a course on egghead to learn the AWS Amplify CLI. Unfortunately, it doesn't teach you how to delete the environment (otherwise it's great, though!). My Google search back then said I would have to delete the resources manually, so I tried (and did) that for the resources I used. I deleted the user account for the CLI (🤦🏻‍♂️), "deleted" the Cognito user pool (it still shows up in amplify status), and deleted the DynamoDB table and the AppSync API (which also still shows up).
Now, as I mentioned, when I run amplify status I get:
| Category | Resource name | Operation | Provider plugin |
| -------- | --------------- | --------- | ----------------- |
| Auth | cognito559c5953 | No Change | awscloudformation |
| Api | AmplifyTodoApp | No Change | awscloudformation |
I wondered - since I thought I deleted them - do they still exist?
So I googled some more. Now it turns out there is also the command amplify delete which automatically deletes all resources associated with your amplify project. Since I deleted the account that I used for the project, that command throws:
The security token included in the request is invalid.
Is there any way I can delete these resources without the user? Are these resources even still online (since I manually deleted them and they do not show up in the console online - even in the CloudFront console)? Or will I have to delete my whole AWS account? I don't want to end up with a big bill one day for these resources.
EDIT: I also deleted the S3 bucket.
EDIT 2: So I managed to use another profile (by changing local-aws-info.json) so I don't get the security request failed error any more. Now I get the error:
Missing region in config
amplify status still yields the same response.

The Amplify CLI determines the status by diffing the amplify/#current-cloud-backend and amplify/backend folders inside your project, so what you see when you run amplify status isn't accurate in your case.
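If you want to see roughly what that comparison is based on, here is a minimal sketch, assuming you run it from the project root (the two folder paths are the ones named above):
diff -r "amplify/#current-cloud-backend" "amplify/backend"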
If you have created multiple environments (in different regions), make sure that you delete them too. The easiest way to delete them, if you can't use amplify delete, is to go to CloudFormation in the region where you created the environment and delete the root stack, which ensures that all the resources created by that stack are removed.
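If you prefer the AWS CLI over the console, a minimal sketch, assuming the region is us-east-1 and a hypothetical root stack named amplify-myapp-dev-123456 (substitute the stack name shown in your CloudFormation console; Amplify root stacks follow the amplify-<app>-<env>-<id> pattern):
# find the root stack
aws cloudformation list-stacks --region us-east-1 --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE
# delete it and wait until the deletion finishes
aws cloudformation delete-stack --region us-east-1 --stack-name amplify-myapp-dev-123456
aws cloudformation wait stack-delete-complete --region us-east-1 --stack-name amplify-myapp-dev-123456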
PS: The CLI creates roles for auth and unauth users when the project is initialized and creates policies for the resources (they don't cost anything if they just sit there). You could delete them if you don't want them hanging around.

When some resources have been deleted manually (S3 & CloudFormation), then
$ amplify delete
gives the following:
Unable to remove env: dev because deployment bucket amplify-amplifyAPPName-dev-XYZ-deployment does not exist or has been deleted.
Stack has already been deleted or does not exist
Please look at this:
C:user\samadhan\Amplify-Projects\amplifyapp-demo>amplify delete
? Are you sure you want to continue? This CANNOT be undone. (This will delete all the environments of the project from the cloud and wipe out all the local files created by Amplify CLI) Yes
- Deleting resources from the cloud. This may take a few minutes...
Deleting env: dev.
Unable to remove env: dev because deployment bucket amplify-amplifyinitdemo-dev-131139-deployment does not exist or has been deleted.
Stack has already been deleted or does not exist
\ Deleting resources from the cloud. This may take a few minutes...
App dfwx13s2bgtb1 not found.
App dfwx13s2bgtb1 not found.
√ Project already deleted in the cloud.
Project deleted locally.
The Amplify app was still showing in the Amplify Console, and I was unable to delete it from the console.
Please take a look:
Solution:
You can fix this issue using the AWS CLI.
Step 1) Make sure the AWS CLI is configured with the same AWS account. If not, create an IAM user and configure it with the same region.
C:user\samadhan\Amplify-Projects\amplifyapp-demo>aws configure
AWS Access Key ID [****************HZHF]: ****************ICHK
AWS Secret Access Key [****************iBJl]:****************SnaX
Default region name [ap-south-1]: ap-south-1
Default output format [json]: json
Step 2) Use the following AWS CLI commands.
C:user\samadhan\Amplify-Projects\amplifyapp-demo>aws amplify help
Available Commands
******************
* create-app
* create-backend-environment
* create-deployment
* delete-app
* delete-backend-environment
* get-app
* list-apps
* list-backend-environments
C:user\samadhan\Amplify-Projects\amplifyapp-demo>aws amplify list-apps
{
    "apps": [
        {
            "appId": "d39pvb2qln4v7l",
            "appArn": "arn:aws:amplify:ap-south-1:850915XXXXX:apps/d39pvb2qln4v7l",
            "name": "react-amplify-demo-project",
            "tags": {},
            "platform": "WEB",
            "createTime": 1640206703.371,
            "updateTime": 1640206703.371,
            "environmentVariables": {
                "_LIVE_PACKAGE_UPDATES": "[{\"pkg\":\"@aws-amplify/cli\",\"type\":\"npm\",\"version\":\"latest\"}]"
            }
        },
        {
            "appId": "d2jsl78ex1asqy",
            "appArn": "arn:aws:amplify:ap-south-1:85091xxxxxxxx:apps/d2jsl78ex1asqy",
            "name": "fullstackapp",
            "tags": {},
            "platform": "WEB",
            "createTime": 1640250148.974,
            "updateTime": 1640250148.974,
            "environmentVariables": {
                "_LIVE_PACKAGE_UPDATES": "[{\"pkg\":\"@aws-amplify/cli\",\"type\":\"npm\",\"version\":\"latest\"}]"
            }
        }
    ]
}
Step 3) Use the following CLI command to delete the app or an app environment.
C:user\samadhan\Amplify-Projects\amplifyapp-demo>aws amplify delete-app --app-id d39pvb2qln4v7l
{
    "app": {
        "appId": "d39pvb2qln4v7l",
        "appArn": "arn:aws:amplify:ap-south-1:8509xxxxx:apps/d39pvb2qln4v7l",
        "name": "react-amplify-demo-project",
        "repository": "https://gitlab.com/samadhanfuke/react-amplify-demo-project",
        "platform": "WEB",
        "createTime": 1639077857.194,
        "updateTime": 1639077857.194,
        "iamServiceRoleArn": "arn:aws:iam::850915xxxx:role/amplifyconsole-backend-role",
        "environmentVariables": {
            "_LIVE_UPDATES": "[{\"name\":\"Amplify CLI\",\"pkg\":\"@aws-amplify/cli\",\"type\":\"npm\",\"version\":\"latest\"}]"
        },
        "defaultDomain": "d39pvb2qln4v7l.amplifyapp.com",
        "enableBranchAutoBuild": false,
        "enableBranchAutoDeletion": false,
        "enableBasicAuth": false,
        "customRules": [
            {
                "source": "/<*>",
                "target": "/index.html",
                "status": "404-200"
            }
        ],
        "productionBranch": {
            "lastDeployTime": 1639078272.607,
            "status": "SUCCEED",
            "branchName": "preview"
        },
        "buildSpec": "version: 1\nbackend:\n phases:\n # IMPORTANT - Please verify your build commands\n build:\n commands:\n - '# Execute Amplify CLI with the helper script'\n - amplifyPush --simple\nfrontend:\n phases:\n build:\n commands: []\n artifacts:\n # IMPORTANT - Please verify your build output directory\n baseDirectory: /\n files:\n - '**/*'\n cache:\n paths: []\n",
        "customHeaders": "",
        "enableAutoBranchCreation": false
    }
}
The Amplify app and its environment have now been deleted successfully.
Check the Amplify Console to confirm.
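If you only need to remove a single backend environment rather than the whole app, the delete-backend-environment subcommand from the list above can be used instead. A minimal sketch, assuming a hypothetical app ID d2jsl78ex1asqy and an environment named dev:
aws amplify delete-backend-environment --app-id d2jsl78ex1asqy --environment-name dev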

As of 9/26/2022, several updates have been released that fix issues with deleting apps/backends, including cases where the S3 bucket or CloudFormation stack was already deleted.
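So if you are on an older CLI version, it may be worth updating before retrying the delete; a minimal sketch, assuming the CLI was installed through npm:
npm install -g @aws-amplify/cli
amplify delete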

Note that deleting the Amplify application as documented here does not remove the resources created in S3. You need to delete these manually.
The content in the bucket amplify-{project name}-{env name}-{some id}-deployment is created and updated when you run amplify init, amplify push, among others. It appears to be used as the remote synchronisation directory.
The S3 buckets will be recreated by the Amplify root CloudFormation stack whenever you create a new env or run amplify init.
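If you want to remove a leftover deployment bucket from the command line, a minimal sketch, assuming a hypothetical bucket named amplify-myproject-dev-123456-deployment following the pattern above (rb --force deletes all objects and then the bucket itself):
aws s3 rb s3://amplify-myproject-dev-123456-deployment --force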

Related

Unsure how to configure credentials for AWS Amplify cli user - ready to ditch Amplify

I have a React Amplify app. All I want to do is work on it, push the changes to Amplify, etc. These are all standard and basic commands like amplify push.
The problem is that shortly after starting to work on my app (a month or two), I was no longer allowed to push, pull, or work on the app from the command line. There is no explanation, and the only error is this ...
An error occurred during the push operation: /
Access Denied
✅ Report saved: /var/folders/8j/db7_b0d90tq8hgpfcxrdlr400000gq/T/storygraf/report-1658279884644.zip
✔ Done
The logs created from the error show this.
error.json
{
    "message": "Access Denied",
    "code": "AccessDenied",
    "region": null,
    "time": "2022-07-20T01:20:01.876Z",
    "requestId": "DRFVQWYWJAHWZ8JR",
    "extendedRequestId": "hFfxnwUjbtG/yBPYG+GW3B+XfzgNiI7KBqZ1vLLwDqs/D9Qo+YfIc9dVOxqpMo8NKDtHlw3Uglk=",
    "statusCode": 403,
    "retryable": false,
    "retryDelay": 60.622127086356855
}
I have two users in my .aws/credentials file. One is the default (which is my work account). The other is called "personal". I have tried to push with
amplify push
amplify push --profile default
amplify push --profile personal
It always results in the same error.
I followed the procedure located here under the title "Create environment variables to assume the IAM role and verify access" and entered a new AWS_ACCESS_KEY_ID and a new AWS_SECRET_ACCESS_KEY. When I then run the command ...
aws sts get-caller-identity
It returns the correct ARN. However, there is an AWS_SESSION_TOKEN variable that the docs say needs to be set, and I have no idea what that is.
Running amplify push under this new profile still results in an error.
I have also tried
AWS_PROFILE=personal aws sts get-caller-identity
Again, this results in the correct settings, but the amplify push still fails for the same reasons.
At this point, I'm ready to drop it and move to something else. I've been debugging this for literally months now, and it would be far easier to set up a standard React app on S3 and stand up my resources manually without dealing with this.
Any help is appreciated.
This is the same issue for me. There seems to be no way to reconfigure the CLI once its authentication method is set to profile. I'm trying to change it back to amplify studio and have not been able to crack the code on updating it. Documentation in this area is awful.
In the amplify folder there is a .config directory. There are three files:
local-aws-info.json
local-env-info.json
project-config.json
project-config.json is required, but the local-* files maintain state for your local configuration. Delete these and you can re-init the project and re-authenticate the Amplify CLI for the environment.
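A minimal sketch of that reset, assuming you run it from the project root; amplify init will then prompt you for credentials (profile or access keys) again:
rm amplify/.config/local-aws-info.json amplify/.config/local-env-info.json
amplify init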

“Create new version” ignores custom service account

I'm trying to deploy a new version of a model to AI Platform, it's a custom prediction routine. I've managed to deploy just fine when I have all the resources in the same GCP project, but when I try to deploy and I point the GCS files to a bucket in a different project, it fails to deploy. So I'm trying to pass which service account to use when creating the version, but it keeps ignoring it.
That's the message I get:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json returned "Field: version.deployment_uri Error: The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-*****@cloud-ml.google.com.iam.gserviceaccount.com.". Details: "[{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'version.deployment_uri', 'description': 'The provided GCS prefix [gs://[bucket-gcp-project-2]/] cannot be read by service account service-******@cloud-ml.google.com.iam.gserviceaccount.com.'}]}]
My request looks like
POST https://ml.googleapis.com/v1/projects/[gcp-project-1]/models/[model_name]/versions?alt=json
{
    "name": "v1",
    "deploymentUri": "gs://[bucket-gcp-project-2]",
    "pythonVersion": "3.5",
    "runtimeVersion": "1.13",
    "package_uris": "gs://[bucket-gcp-project-2]/model.tar.gz",
    "predictionClass": "predictor.Predictor",
    "serviceAccount": "my-service-account@[gcp-project-1].iam.gserviceaccount.com"
}
The service account has access in both projects
Specifying a service account is documented as a beta feature. Try using the gcloud SDK, e.g.:
gcloud components install beta
gcloud beta ai-platform versions create v1 \
--service-account my-service-account@[gcp-project-1].iam.gserviceaccount.com ...
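For completeness, a hedged sketch of a fuller version-create call assembled from the request body in the question; treat the exact flag combination as an assumption and check gcloud beta ai-platform versions create --help for your SDK version:
gcloud components install beta
# flag values below are taken from the question's request body; adjust as needed
gcloud beta ai-platform versions create v1 \
  --project [gcp-project-1] \
  --model [model_name] \
  --origin gs://[bucket-gcp-project-2] \
  --package-uris gs://[bucket-gcp-project-2]/model.tar.gz \
  --prediction-class predictor.Predictor \
  --runtime-version 1.13 \
  --python-version 3.5 \
  --service-account my-service-account@[gcp-project-1].iam.gserviceaccount.com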

Permissions Issue with Google Cloud Data Fusion

I'm following the instructions in the Cloud Data Fusion sample tutorial and everything seems to work fine, until I try to run the pipeline right at the end. Cloud Data Fusion Service API permissions are set for the Google managed Service account as per the instructions. The pipeline preview function works without any issues.
However, when I deploy and run the pipeline it fails after a couple of minutes. Shortly after the status changes from provisioning to running the pipeline stops with the following permissions error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
    "code" : 403,
    "errors" : [ {
        "domain" : "global",
        "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X.",
        "reason" : "forbidden"
    } ],
    "message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X."
}
xxxxxxxxxxx-compute@developer.gserviceaccount.com is the default Compute Engine service account for my project.
"Project X" is not one of mine, though; I've no idea why the pipeline startup code is trying to create a bucket there. It does successfully create temporary buckets (one called df-xxx and one called dataproc-xxx) in my project before it fails.
I've tried this with two separate accounts and get the same error in both places. I had tried adding storage/admin roles to the various service accounts to no avail but that was before I realized it was attempting to access a different project entirely.
I believe I was able to reproduce this. What's happening is that the BigQuery Source plugin first creates a temporary working GCS bucket to export the data to, and I suspect it is attempting to create it in the Dataset Project ID by default, instead of in your own project as it should.
As a workaround, create a GCS bucket in your account, and then, in the BigQuery Source configuration of your pipeline, set the "Temporary Bucket Name" configuration to "gs://<your-bucket-name>".
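A minimal sketch of that workaround from the command line, assuming a hypothetical bucket name my-df-temp-bucket in your own project my-project-id (pick a location that matches your pipeline):
gsutil mb -p my-project-id -l us-central1 gs://my-df-temp-bucket
# then set "Temporary Bucket Name" in the BigQuery Source to gs://my-df-temp-bucket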
You are missing the permission-setup steps after you create an instance. The instructions for giving your service account the right permissions are on this page: https://cloud.google.com/data-fusion/docs/how-to/create-instance

Changing Storage class from Multi-Regional to Coldline in Google Cloud Platform

I just finished my 1 year free trial with Google Cloud Platform and I am now being billed.
When I set my first project up, it looks like I set it up as Multi-Regional. I would only use Google Cloud Storage in the event of a catastrophic failure in my home where I lose data on both internal and external hard drives (i.e. fire, etc.). I believe for this type of backup, I only need Coldline storage. I did change my project over to Coldline, but it looks like it only changes new data, not the original stored data, because I am still being charged for Multi-Regional storage.
From what I understand, I have to change the Object Storage Class either by overwriting the data using "gsutil rewrite -s [STORAGE_CLASS] gs://[PATH_TO_OBJECT]" or by Object Lifecycle Management. I could not figure out how to do either, so I need help doing this (I am not even sure where to type these commands or which approach to use (I am not a programmer!!)).
I also saw in another post that my gsutil command needs to be up to date (4.22 or higher). How do I check this? I also saw in this post that the [PATH_TO_OBJECT] is My Bucket. I see a Project Name, Project ID, and Project number. Which of these (if any) are used in that field for My Bucket?
Thank you for any help
I also saw in another post that my gsutil command needs to be up to date (4.22 or higher). How do I check this?
Get the gsutil version:
gsutil version
Update the Cloud SDK which includes gsutil:
Windows:
Open a command prompt with Administrator rights
gcloud components update
Linux:
gcloud components update
I see a Project Name, Project ID, and Project number. Which of these (if any) are used in that field for My Bucket?
Use the PROJECT_ID. To get a list of the projects that you have access to, use the following command, which lists each project:
gcloud projects list
To see which is your default project:
gcloud config list project
If the default project is blank or the wrong one, use the following command.
To set the default project:
gcloud config set project [PROJECT_ID]
From what I understand, I have to change the Object Storage Class either by overwriting the data
Assuming your bucket name is mybucket.
STEP 1: Change the default storage class for the bucket:
gsutil defstorageclass set coldline gs://mybucket
STEP 2: Change the storage class for each object manually. This is an option if you want to just select a few files.
gsutil rewrite -s coldline gs://mybucket/objectname
STEP 3: Verify the existing lifecycle policy. Change step 4 accordingly if an existing policy exists.
gsutil lifecycle get gs://mybucket
STEP 4: Change the lifecycle of the bucket. This policy will move all files older than 7 days to coldline storage.
POLICY (write to lifecycle.json):
{
    "lifecycle": {
        "rule": [
            {
                "action": {
                    "type": "SetStorageClass",
                    "storageClass": "COLDLINE"
                },
                "condition": {
                    "age": 7,
                    "matchesStorageClass": [
                        "MULTI_REGIONAL",
                        "STANDARD",
                        "DURABLE_REDUCED_AVAILABILITY"
                    ]
                }
            }
        ]
    }
}
Command:
gsutil lifecycle set lifecycle.json gs://mybucket
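If you would rather move everything that is already in the bucket right away, instead of waiting for the lifecycle rule to catch up, here is a sketch assuming the same bucket name mybucket (-m runs the rewrites in parallel, -r recurses through the bucket):
gsutil -m rewrite -s coldline -r gs://mybucket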

Deploy Google Cloud Function from Cloud Function

Solved/invalid - see below
I'm trying to deploy a Google Cloud Function from a Google Cloud Function on demand.
However, whatever I try, I get a 403 Forbidden:
HttpError 403 when requesting https://cloudfunctions.googleapis.com/v1/projects/MY_PROJECT/locations/MY_REGION/functions?alt=json returned "The caller does not have permission"
I ended up granting the cloud function service account Project Owner role to make sure it can do anything, yet still I get the same error.
Is this limited intentionally (for example to avoid fork bombs or something) or am I doing something wrong?
Has anyone been able to make this work?
For the record: I ran the same (Python) function locally with Flask using my own account, and then it deploys the new cloud function perfectly, so the code itself seems to be OK.
Update
Code snippet of how I'm trying to deploy the cloud function:
from googleapiclient import discovery

# Build the Cloud Functions API client (uses the application default credentials)
cf_client = discovery.build('cloudfunctions', 'v1')

# Parent location under which the new function is created
location = "projects/{MYPROJECT}/locations/europe-west1"

request = {
    "name": "projects/{MYPROJECT}/locations/europe-west1/functions/hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "description": "Hopper hopper--2376cd24d318cd2d42f000f4f1c31a8f",
    "entryPoint": "pubsub_trigger",
    "runtime": "python37",
    "availableMemoryMb": 256,
    "timeout": "60s",
    "sourceArchiveUrl": "gs://staging.{MYPROJECT}.appspot.com/deployment/hopper.zip",
    "eventTrigger": {
        "eventType": "providers/cloud.pubsub/eventTypes/topic.publish",
        "resource": "projects/{MYPROJECT}/topics/hopper-test-input"
    },
    "environmentVariables": {
        "HOPPER_ID": "hopper--2376cd24d318cd2d42f000f4f1c31a8f"
    }
}

# Create the function under the given parent location
response = cf_client.projects() \
    .locations() \
    .functions() \
    .create(location=location, body=request) \
    .execute()
Update
I feel like such an idiot... it turns out that for some reason I deployed the master function in a different project than the project I gave permissions on. No wonder it didn't work.
The correct answer should be: check that everything is indeed running how/where you expect it to be. Everything was configured correctly, and deploying a CF from a CF is not a problem. The project was incorrect, due to a different default project being set on the gcloud utility.
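A quick sanity check along those lines, assuming the gcloud SDK is installed; this shows which project your commands default to and which functions actually exist in the project you expect (MY_PROJECT is a placeholder):
gcloud config get-value project
gcloud functions list --project MY_PROJECT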