I am working on an AWS Elastic Beanstalk app that uploads files to an AWS S3 bucket. The Beanstalk app is a .NET Core Web API app. I've followed this guide (http://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-netcore.html) and have a credentials file on disk for local development containing my access key ID and secret access key. These are the credentials of the user I created the S3 bucket with, and that user has full access to S3 through IAM. In local development, the application uploads to S3 without a hiccup.
When I deploy the app to the Elastic Beanstalk platform, however, the upload to S3 fails; the local version still works fine.
I deployed the app to AWS Elastic Beanstalk using the AWS Toolkit for Visual Studio and specified during creation that the app should have S3 full access. I have since gone into the instance's role configuration and verified that it does in fact have S3 full access as a permission. When attempting the upload, I get an exception saying the server terminated the connection abnormally after a timeout. Is there a step or configuration piece I'm missing? Is there a way I can give the Beanstalk app the same access key and secret I use locally so I can test it? I haven't found a way to supply it credentials from a file or the like.
Thanks,
Sam
For anybody who comes looking with a similar issue: it turned out that my S3 bucket and EB app were in separate regions, which caused the connection between the two to fail.
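For reference, the general fix is to build the S3 client against the region the bucket actually lives in rather than relying on the environment's default. The original app here is .NET, but the idea is the same in every SDK; below is a minimal AWS SDK for Java sketch, where the region and bucket name are placeholder assumptions:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3UploadExample {
    public static void main(String[] args) {
        // Pin the client to the bucket's region instead of inheriting
        // the Beanstalk environment's region.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1) // assumption: the bucket's region
                .build();

        // Hypothetical bucket and key, for illustration only.
        s3.putObject("my-example-bucket", "uploads/test.txt", "hello from Beanstalk");
    }
}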
Problem Statement:
The Tailwind Next.js starter template cannot be deployed properly on AWS using GitHub Actions. The deployment process involves pushing the exported files to S3 and serving them via S3 + CloudFront + Route 53.
One of my domains (for example, https://domainA.com) works by just pushing the files to S3 without exporting them: using GitHub Actions, I push the files to S3 and then connect the bucket to CloudFront using an origin access identity (OAI). This works as expected.
But another of my domains (for example, https://domainB.com) doesn't work and gives an Access Denied error, even though I checked the bucket policy, it allows access to the S3 bucket, and the bucket is publicly accessible.
I want to solve the above error; please suggest options.
Now, coming to another problem: I have realized that the files in S3 should be the exported output files, so I now export the files to the S3 locations using GitHub Actions. CloudFront is connected to the S3 bucket using an OAI or public origin access. Once everything is set up correctly, I am able to route to my domain, but the site does not work properly. I am assuming the system is unable to locate additional files it needs from S3.
How can I solve this error as well?
Issue: "The tailwind nextjs starter template is unable to be deployed properly on AWS using Github Actions. The deployment process involves pushing the export files to S3, and displaying them using S3 + Cloudfront + Route53. The domain (https://domainB.com) gives an access denied issue despite the S3 bucket being publicly accessible and allowing access."
Solution:
The issue is caused by Next.js dynamic routing when the site is served from S3, CloudFront, and Route 53. The export step splits the site into an individual file per route, and the system is unable to locate these additional files in S3. (Note that when a requested key does not exist and the caller lacks the s3:ListBucket permission, S3 returns 403 Access Denied rather than 404 Not Found, which is why a missing file surfaces as an Access Denied error.)
To resolve this issue, there are several options that can be considered:
Amplify: a managed AWS CI/CD and hosting service that can build and deploy Next.js apps to your AWS account.
Serverless Next.js (sls-next) Component: A Serverless Framework Component that allows you to deploy Next.js apps to your AWS account via Serverless Inc's deployment infrastructure. (I used this option)
SST: a framework (sst.dev) for deploying Next.js apps to your AWS account.
Using Terraform: An infrastructure as code tool that can be used to deploy Next.js apps to your AWS account.
By choosing one of the above options, you can effectively deploy your Next.js starter template on AWS using GitHub Actions.
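Separately, as a first check for the Access Denied error on domainB: when CloudFront reads the bucket through an origin access identity, the bucket does not need to be publicly accessible at all; it only needs a bucket policy that grants the OAI read access, along these lines (the OAI ID and bucket name here are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-domainb-bucket/*"
    }
  ]
}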
If I am a member of an organization on AWS and I use my member account to deploy an Elastic Beanstalk app to a specific region via the CLI, will it show up for the management account as well? Or do I need to change something in the settings somewhere so that the app is uploaded to the organization's AWS account and not just my own console?
Currently, I have used my member account keys in the CLI to create and deploy an Elastic Beanstalk app in a specific region, and it is working fine. However, the organization's management account cannot see the environment in its console.
Any clarification would be very much appreciated, as I am not an expert in AWS and have only used it as an IAM user before to deploy apps.
I am new to AWS. I have developed a Spring Boot application that uploads files to an S3 bucket.
I have created an IAM user and assigned it AmazonS3FullAccess. I am using that user's access key and secret in my Spring Boot application to upload files. It works fine on my localhost, but it is not working on the AWS Elastic Beanstalk instance: I get a permission denied exception.
In the Configuration of your Elastic Beanstalk application, in the Security section, there should be an IAM instance profile configured.
Once you've identified the role that it is using, you need to open the Identity and Access Management (IAM) console, navigate to the list of Roles, find the role, and add a new policy to it. The easiest solution is to add an inline policy. Give it permissions to upload files to the bucket and it should start working. There shouldn't be a need to restart the server.
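For example, an inline policy along these lines grants the uploads (the bucket name is a placeholder; scope it to your own bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-upload-bucket/*"
    }
  ]
}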
All I did was add a permission policy to my Elastic Beanstalk role.
I am pretty new to Spring Boot. I am looking to set up my application to use my IAM role for S3 access while my project is hosted on EC2, and local credentials for when I am testing on my machine. I am using DefaultAWSCredentialsProviderChain() in my AmazonS3ClientBuilder, I just can't figure out where I need to set up the credentials for when I am testing locally. I was hoping to set up a configuration file for the AWS credentials that I can put in my .gitignore.
Am I going about this the right way?
Figured it out.
I needed to create a file called "credentials" in the .aws folder of my home directory (i.e., ~/.aws/credentials) with the following information:
[default]
aws_access_key_id=KEY
aws_secret_access_key=SECRET
Obviously replace KEY and SECRET with your own.
Now DefaultAWSCredentialsProviderChain() can see the credentials on my machine, and it will fall back to my IAM role when running on EC2.
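For anyone else setting this up, here is a minimal sketch of the client construction described above, using the v1 SDK classes named in the question:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientFactory {
    public static AmazonS3 create() {
        // The default chain searches, in order: environment variables,
        // JVM system properties, the ~/.aws/credentials profile file
        // (used locally), and finally the EC2 instance profile role
        // (used when the app runs on EC2).
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .build();
    }
}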
How do I update the IAM security credentials in the environment variables of an Elastic Beanstalk application?
In my application I'm getting the following error when sending a message to the AWS SQS queue: 403 (Forbidden)
bundle.js:27819 Error: The security token included in the request is invalid.
I changed my IAM credentials, so I'm assuming I need to update the environment variables in my Elastic Beanstalk application, and that this is the reason for the above error.
I tried to update the security credentials in the environment variables of my Elastic Beanstalk application by running aws configure. If I'm understanding correctly, that has updated the credentials file in my .aws folder, but I don't think it updated the environment variables in my AWS Elastic Beanstalk application. How do I do this?
Thanks!
I tried to update the security credentials in the environment variables of my Elastic Beanstalk application by running aws configure.
That is an incorrect assumption: aws configure only updates the contents of your local .aws folder, which has nothing to do with Elastic Beanstalk environment variables.
If you need to update EB environment variables, you need to use this command:
eb setenv key=value
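For example, to push the new credentials your app reads (multiple key=value pairs can be passed in a single call; the variable names are whatever your application expects):

eb setenv AWS_ACCESS_KEY_ID=yournewkey AWS_SECRET_ACCESS_KEY=yournewsecret

That said, read the warning below before doing this.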
BUT, and this is a huge but: never store your credentials on a remote instance. That is not how you are supposed to give permissions to your applications. You can do it through environment variables, but that is a huge security risk. Instead, you should create an appropriate IAM role and attach it to your EB environment. That way you don't need to manage credentials at all, and the role gives your application all the permissions it needs.
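For example, assuming the default instance profile role name (check the Security section of your environment's configuration for the actual name), you could attach the AWS-managed SQS policy from the CLI:

aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess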