I ran into several issues while working through this workshop, including:
a. Amazon Rekognition not working, and
b. DynamoDB tables not being updated when images were uploaded.
To avoid being charged for AWS resources, I deleted all of them using the amplify delete command. Now, if I run the amplify publish command, I get the following error:
**StackTrace:**
You are not working inside a valid amplify project.
Use 'amplify init' in the root of your app directory to initialize your project with Amplify
Error: You are not working inside a valid amplify project.
How can I recreate all the AWS resources without going through the amplify init process again?
Problem Statement:
The Tailwind Next.js starter template cannot be deployed properly on AWS using GitHub Actions. The deployment process involves pushing the exported files to S3 and serving them via S3 + CloudFront + Route 53.
One of my domains (example: https://domainA.com) works by simply uploading the files to S3 without exporting them: using GitHub Actions, I push the files to S3 and then connect the bucket to CloudFront using an Origin Access Identity. This works as expected.
However, another of my domains (example: https://domainB.com) doesn't work and returns an Access Denied error. (I checked the bucket policy: it allows access to the S3 bucket, and the bucket is publicly accessible.)
I want to solve the above error; please suggest options.
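For comparison while debugging the Access Denied error: when CloudFront reaches a bucket through an Origin Access Identity, the bucket policy must grant s3:GetObject to that OAI on the object paths (the bucket ARN with /*), not just on the bucket itself. Below is a minimal sketch of such a policy built as JSON in Python; the bucket name and OAI ID are placeholders, not values from the question:

```python
import json

# Placeholder values -- substitute your own bucket name and OAI ID.
BUCKET = "domainb-site-bucket"
OAI_ID = "E2EXAMPLEOAI"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        # The OAI is addressed as a special CloudFront IAM principal.
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        # Note the trailing /*: GetObject applies to objects, not the bucket ARN.
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(policy, indent=2))
```

A policy that grants access only on arn:aws:s3:::bucket (without /*) will still deny GetObject calls, which is one common way a bucket can "allow access" yet return Access Denied through CloudFront.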
Now, coming to another problem: I have realized that the files in S3 should be the exported output files, so I now export the files and push them to S3 using GitHub Actions. CloudFront is connected to the S3 bucket using an OAI or public origin access. Once everything is set up correctly, I can route to my domain, but the site does not work properly. I am assuming the system is unable to locate additional files it needs from S3.
How can I solve this error as well?
Issue: "The Tailwind Next.js starter template cannot be deployed properly on AWS using GitHub Actions. The deployment process involves pushing the exported files to S3 and serving them via S3 + CloudFront + Route 53. The domain (https://domainB.com) gives an Access Denied error despite the S3 bucket being publicly accessible and allowing access."
Solution:
The issue is caused by Next.js routing when hosted on S3, CloudFront, and Route 53. The static export produces a separate file for each route, so a request for a clean URL does not match any S3 object key and the system cannot locate the corresponding file in S3.
To resolve this issue, there are several options to consider:
Amplify: a CI/CD and hosting service managed by AWS that can deploy Next.js apps to your AWS account.
Serverless Next.js (sls-next) Component: a Serverless Framework component that deploys Next.js apps to your AWS account via Serverless Inc.'s deployment infrastructure. (I used this option.)
SST: a platform for deploying Next.js apps to your AWS account.
Terraform: an infrastructure-as-code tool that can also be used to deploy Next.js apps to your AWS account.
By choosing one of the above options, you can deploy your Next.js starter template on AWS using GitHub Actions.
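Alternatively, if you keep the plain S3 + CloudFront setup, the missing-file symptom can often be fixed with a small viewer-request rewrite at the edge. Real CloudFront Functions are written in JavaScript; the sketch below only models that logic in Python, assuming the default Next.js static export that emits one .html file per route:

```python
def rewrite_uri(uri: str) -> str:
    """Model of a CloudFront viewer-request rewrite for a Next.js static export.

    S3 serves only exact object keys, so a clean route URL like /about must be
    mapped to the exported file /about.html (or /dir/ to /dir/index.html).
    """
    if uri.endswith("/"):
        # Directory-style URL: serve the index page inside it.
        return uri + "index.html"
    last_segment = uri.rsplit("/", 1)[-1]
    if "." not in last_segment:
        # No file extension: treat it as a page route and append .html.
        return uri + ".html"
    # Asset requests (e.g. /_next/static/chunks/main.js) pass through unchanged.
    return uri
```

With a mapping like this in place (plus index.html set as the CloudFront default root object), clean URLs resolve to the exported files while asset requests under /_next/ are untouched.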
Currently my Amplify project has the API, Authentication, File Storage, and Functions categories. Everything works perfectly fine until I add Analytics!
I get the following error when I add Analytics to my Amplify project (React Native) and try to push GraphQL changes with amplify push:
Failed to create models in the cloud: Modelgen job creation failed
Everything is in the same AWS Region, us-east-1.
Many suggestions on the web say this error comes from the credentials being used. I have verified numerous times that it does not: my Amplify credentials and config file match my AWS credentials exactly. The aws_access_key_id, aws_secret_access_key, and region in the credentials and config files are aligned with the Amplify user.
I also verified my IAM user's permissions; it has the following policies attached:
AdministratorAccess
AdministratorAccess-Amplify
AmazonMobileAnalyticsFullAccess
When I revert my changes by removing Analytics from the project with amplify remove analytics, the error no longer occurs when I push GraphQL changes.
Does anyone have an idea what could be causing this error?
I have added an S3 Lambda trigger in my AWS Amplify project. However, when I try to remove that Lambda trigger using amplify remove function, it shows the following error:
Resource cannot be removed because it has a dependency on another resource
Dependency: S3 - s3xxxxxxxx
An error occurred when removing the resources from the local directory
The AWS Amplify documentation does not have a clear guide for removing Lambda functions. So, how can I remove the function without removing the S3 resource?
Since the trigger was created on the S3 resource, that trigger must be removed first by running amplify update storage. Keep the options you configured previously, and when the Amplify CLI prompts you to select an option, choose Remove the trigger.
Then run amplify push to sync the local changes with the cloud.
Now, running amplify remove function again and choosing the S3 trigger function will execute without an error. Just remember to run another amplify push at the end to sync and actually remove the function.
I have a React application with AWS Amplify as its backend. I'm using an AppSync API and a DynamoDB database to save data. The AppSync API is the only category that I provisioned in my project.
| Category | Resource name | Operation | Provider plugin |
| --- | --- | --- | --- |
| Api | testAPI | No Change | awscloudformation |
I need to clone this same AWS Amplify backend to another AWS account easily.
Yes, I could create another Amplify project and provision resources one by one. But is there any other easy method to move this Amplify backend to another AWS account?
I found a solution through this GitHub issue thread (https://github.com/aws-amplify/amplify-cli/issues/3350), but I'm not 100% sure whether this is the recommended method to migrate Amplify resources.
These are the steps that I followed.
First, I pushed the project into a GitHub repo. This will push only the relevant files inside the amplify directory. (Amplify automatically populates .gitignore when we initialize our backend using amplify init).
Clone this repo to a new directory.
Next, I removed the amplify/team-provider-info.json file.
Run amplify init; you can choose your new AWS profile, or enter the accessKeyId and secretAccessKey for the new AWS account. (Refer to this guide to create and save an IAM user with AWS Amplify access.)
This will create backend resources locally. Now to push those resources, you can execute amplify push.
If you want to export the Amplify backend using a CDK pipeline, you can refer to this guide: https://aws.amazon.com/blogs/mobile/export-amplify-backends-to-cdk-and-use-with-existing-deployment-pipelines/
I want to delete S3 files from a bucket by creating a Bamboo plan/script that deletes the file when run.
I tried creating a plan and then creating a task, but the task list shows no option for Amazon S3 Object.
I have referred to the URL below and followed the steps:
https://utoolity.atlassian.net/wiki/spaces/TAWS/pages/19464196/Using+the+Amazon+S3+Object+task+in+Bamboo
Is there any other way I can create a Bamboo plan and delete files from S3?
The link in the question is to a paid third-party Bamboo plugin (link here), which is not installed by default.
You currently have two options for AWS and Bamboo integration:
Purchase the Tasks for AWS Bamboo plugin.
Create a script task that uses the AWS S3 REST API or an AWS SDK to achieve what you are trying to do (Amazon's DeleteObject operation).
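The second option can be sketched with boto3, the AWS SDK for Python. This is a minimal sketch rather than a drop-in Bamboo task: the bucket name and keys are whatever your plan supplies, and the optional s3 argument exists only so the function can be exercised without real AWS credentials:

```python
def delete_s3_objects(bucket, keys, s3=None):
    """Delete the given keys from an S3 bucket, batching up to 1000 keys per
    call (the DeleteObjects API limit). Returns the list of deleted keys.

    `s3` may be any client exposing delete_objects (useful for testing);
    by default a real boto3 client is created.
    """
    if s3 is None:
        import boto3  # only needed when talking to real AWS
        s3 = boto3.client("s3")
    deleted = []
    for i in range(0, len(keys), 1000):
        batch = [{"Key": k} for k in keys[i:i + 1000]]
        resp = s3.delete_objects(Bucket=bucket, Delete={"Objects": batch})
        deleted.extend(obj["Key"] for obj in resp.get("Deleted", []))
    return deleted
```

In a Bamboo script task you would configure AWS credentials in the build environment and then call, for example, delete_s3_objects("my-bucket", ["path/to/file"]).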