Hello guys, I'm trying to deploy my Next.js app on Amplify by connecting my GitHub repository. If I deploy it in us-east-1 (the default region), AWS creates all the services (S3 bucket, CloudFront, and Amplify) and everything works. But if I try to do the same in Europe, in eu-south-1, AWS creates only the Amplify service, without any S3 bucket or CloudFront distribution. So I get a 404 Not Found error and no site (a blank screen), because there is nothing on S3; if I change the redirect to 200, I see an AccessDenied error instead. In S3 and CloudFront there is nothing linked to the app deployed in Europe; there is only the deployment in the USA.
I think it's a policy problem, but I really don't know how to solve it.
Related
Problem Statement:
The Tailwind Next.js starter template fails to deploy properly on AWS using GitHub Actions. The deployment process involves pushing the export files to S3 and serving them using S3 + CloudFront + Route 53.
One of my domains (for example https://domainA.com) works by just pushing the files to S3 without exporting them: using GitHub Actions, I upload the files to S3 and then connect the bucket to CloudFront using an origin access identity. This works as expected.
But another of my domains (for example https://domainB.com) doesn't work and gives an Access Denied error, even though I checked the bucket policy: it allows access to the S3 bucket, and the bucket is publicly accessible.
I want to solve the above error; please suggest options.
Now, coming to another problem: I have realized that the files in S3 should be the exported output files, so I now export these files to the S3 locations using GitHub Actions. CloudFront is connected to the S3 bucket using an OAI or public origin access. Once everything is set up correctly, I can route to my domain, but the site doesn't work properly. I am assuming that it is unable to locate additional files from S3 that it needs.
How can I also solve this error?
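(For context, here is a minimal sketch of the kind of GitHub Actions workflow described above. The bucket name, region, and secret names are placeholders, not taken from the original post, and `npm run build` assumes a static-export setup; older Next.js versions used `next export` instead.)

```yaml
# Hypothetical workflow: build the Next.js static export and sync it to S3.
name: Deploy static export to S3
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build   # with `output: 'export'` this writes the site to ./out
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1                                  # placeholder region
      - run: aws s3 sync ./out s3://my-site-bucket --delete      # placeholder bucket
```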
Issue: "The Tailwind Next.js starter template fails to deploy properly on AWS using GitHub Actions. The deployment process involves pushing the export files to S3 and serving them using S3 + CloudFront + Route 53. The domain (https://domainB.com) gives an Access Denied error despite the S3 bucket being publicly accessible and allowing access."
Solution:
The issue is caused by how Next.js routing interacts with S3, CloudFront, and Route 53. A static export splits the app into an individual HTML file per route, so a request for an extensionless path such as /about doesn't match the object key that actually exists in the bucket (/about.html or /about/index.html), and the additional files can't be located in S3.
To resolve this issue, there are several options that can be considered:
Amplify: a managed AWS CI/CD and hosting service that deploys Next.js apps to your AWS account.
Serverless Next.js (sls-next) Component: a Serverless Framework component that allows you to deploy Next.js apps to your AWS account via Serverless Inc.'s deployment infrastructure. (I used this option.)
SST: a framework for deploying Next.js apps to your AWS account.
Terraform: an infrastructure-as-code tool that can be used to deploy Next.js apps to your AWS account.
By choosing one of the above options, you can effectively deploy your Next.js starter template on AWS using GitHub Actions.
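If you would rather keep the plain S3 + CloudFront setup, the route-matching problem described above can also be handled at the edge. Below is a minimal sketch (not part of the original answer) of a CloudFront Function, attached to the distribution's viewer-request event, that maps extensionless routes onto the .html objects a Next.js static export produces:

```js
// Viewer-request CloudFront Function (CloudFront's JavaScript runtime).
// Rewrites extensionless URIs to the object keys `next export` writes to S3.
function handler(event) {
    var request = event.request;
    var uri = request.uri;
    if (uri.charAt(uri.length - 1) === '/') {
        request.uri = uri + 'index.html';   // /blog/  -> /blog/index.html
    } else if (uri.indexOf('.') === -1) {
        request.uri = uri + '.html';        // /about  -> /about.html
    }
    return request;
}
```

Whether a route lands in /about.html or /about/index.html depends on the export's `trailingSlash` setting, so adjust the rewrite to match your output.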
When hosting on AWS Amplify, the S3 images are not working and return:
"url" parameter is valid but upstream response is invalid
Everything works fine if I deploy via Vercel. However, when I try to deploy on AWS Amplify, images from the S3 bucket do not show up.
Could you please let me know what the issue is?
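No resolution was posted in the thread, but this particular error comes from the Next.js image optimizer failing to fetch the upstream image, and one common cause is that the S3 host isn't allow-listed in next.config.js. Here is a sketch of the relevant section, with a placeholder bucket hostname (on older Next.js versions, `images.domains` plays the same role):

```js
// next.config.js — the hostname below is a placeholder for your bucket's URL.
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'my-bucket.s3.us-east-1.amazonaws.com', // placeholder
        pathname: '/**',
      },
    ],
  },
};
```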
I am hosting an application as an S3 static website served through CloudFront in one AWS account, and it works fine. But after migrating to another AWS account I get an error, even though all the files are in place in S3 (in the migration account).
When I hit the URL, I get XML code on the screen.
Can anyone help me with this?
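No resolution appears in the thread, but raw XML in the browser is usually S3's own error or listing response, which suggests the distribution in the new account isn't pointed at a correctly configured origin; for example, the default root object or the bucket policy/OAI wiring wasn't recreated during the migration. For reference, here is a minimal CDK sketch of the standard private-bucket setup (all names are placeholders):

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class SiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Keep the bucket private; CloudFront reaches it through an OAI.
    const bucket = new s3.Bucket(this, 'SiteBucket', {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
    });

    new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultRootObject: 'index.html', // without this, hitting "/" returns an S3 XML error
      defaultBehavior: {
        // S3Origin creates the OAI and grants it read access in the bucket policy.
        origin: new origins.S3Origin(bucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
      },
    });
  }
}
```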
I originally created an S3 bucket with a CloudFront distribution to display the HTML code in the bucket at a publicly hosted domain. I deleted the contents of the S3 bucket and uploaded new files. The endpoint for the S3 bucket displays fine on the web, but the hosted URL no longer works; I get a 404 error.
Failed to load resource: the server responded with a status of 404 ()
Could you please provide more details of your CloudFront distribution settings? I wrote an article explaining how CloudFront and S3 work and how you can deploy React applications step by step! It might be useful for you.
Please have a look at: Medium - How to deploy single page applications to Cloudfront
I discovered my error. Instead of selecting the origin from the drop-down menu, I had entered the endpoint of the bucket. It works now.
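A related gotcha for anyone who fixes the origin and still sees stale responses after replacing bucket contents: CloudFront caches objects, so an invalidation may be needed once the new files are uploaded. Here is a sketch using the AWS SDK for JavaScript v3 (the distribution ID is a placeholder):

```js
import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

const client = new CloudFrontClient({});
await client.send(new CreateInvalidationCommand({
  DistributionId: 'E1234567890ABC',          // placeholder distribution ID
  InvalidationBatch: {
    CallerReference: Date.now().toString(),  // must be unique per request
    Paths: { Quantity: 1, Items: ['/*'] },   // invalidate all cached paths
  },
}));
```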
I am working on an AWS Elastic Beanstalk app that uploads files to an AWS S3 bucket. The Beanstalk app is a .NET Core Web API app, I've followed this guide (http://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-netcore.html) and have a credentials file on disk for local development with my shared access key and secret. These are the key and secret of the user that I created an S3 bucket with. That user has full access to S3 through IAM. In local development, the application uploads to S3 without a hiccup.
When I deploy the app to the Elastic Beanstalk platform, the upload to S3 doesn't work in the Elastic Beanstalk environment; the local version is still fine.
I deployed the app to AWS Elastic Beanstalk using the AWS Toolkit for Visual Studio and specified during the creation process that the app should have S3 full access. I have since gone into the instance's role configuration and verified that it does in fact have S3 full access as a permission. When attempting the upload, I get an exception that the server terminated the connection abnormally after a timeout. Is there a step or configuration piece I'm missing? Is there a way I can specify the same access key and secret I use locally on the Beanstalk app so I can test it? I haven't found a way to give it credentials from a file or the like.
Thanks,
Sam
For anybody who comes looking with a similar issue: it turned out that my S3 bucket and EB app were in different regions, which caused a network issue between the two.
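For reference, the guide linked above configures the AWS SDK for .NET through appsettings.json; pinning the region there (or creating the bucket in the same region as the environment) avoids this kind of cross-region mismatch. A sketch, with placeholder values:

```json
{
  "AWS": {
    "Profile": "default",
    "Region": "us-east-1"
  }
}
```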