In a Symfony 4.3 application using symfony/dotenv 4.3.11 and aws/aws-sdk-php 3.173.13:
I'd like to authenticate the AWS SDK using credentials provided via environment variables, and I'd like to use the dotenv component to provide those environment variables.
This should be possible: setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables is one way to automatically authenticate with the AWS SDK, and Dotenv should turn your configuration into environment variables.
However, when I set these variables in my .env.local or .env files, I get the following error:
Aws\Exception\CredentialsException: Error retrieving credentials from the instance profile metadata service.
This does not work:
.env.local:
AWS_ACCESS_KEY_ID=XXX
AWS_SECRET_ACCESS_KEY=XXXXXX
$ ./bin/console command-that-uses-aws-sdk
This works:
$ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXXXXX ./bin/console command-that-uses-aws-sdk
Debug info:
I made a Symfony command that outputs the environment variables in $_ENV. With AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY in .env.local, sure enough they appear there:
...
[SYMFONY_DOTENV_VARS] => MEQ_ENV,APP_ENV,APP_SECRET,DATABASE_URL,AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY,AWS_REGION,AWS_ACCOUNT
[AWS_ACCESS_KEY_ID] => XXX
[AWS_SECRET_ACCESS_KEY] => XXXXXX
...
The AWS PHP client documentation states:
The SDK uses the getenv() function to look for the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables.
In other words, it uses getenv(), not $_ENV.
But the Symfony Dotenv component by default only populates $_ENV and does not call putenv(), so your settings from .env files are not visible to getenv().
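A quick way to see the mismatch from a console command (a minimal sketch, assuming the key is set in .env.local and loaded by Dotenv):

// Dotenv has populated the superglobal, but not the real process environment:
var_dump($_ENV['AWS_ACCESS_KEY_ID']);  // string(3) "XXX"
var_dump(getenv('AWS_ACCESS_KEY_ID')); // bool(false), and this is what the SDK checks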
Here are some options (the first two are sketched below):
call (new Dotenv())->usePutenv(true) (but, as Symfony states: beware that putenv() is not thread safe, which is why this setting defaults to false)
call putenv() manually, exclusively for the AWS settings
wrap the AWS client in your own Symfony service and inject the settings from .env
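A sketch of the first two options, assuming a standard config/bootstrap.php that loads Dotenv:

use Symfony\Component\Dotenv\Dotenv;

// Option 1: mirror everything Dotenv loads into the process environment
(new Dotenv())->usePutenv(true)->loadEnv(dirname(__DIR__).'/.env');

// Option 2: keep the thread-safe default, and export only the AWS settings
(new Dotenv())->loadEnv(dirname(__DIR__).'/.env');
foreach (['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'] as $name) {
    if (isset($_ENV[$name])) {
        putenv($name.'='.$_ENV[$name]);
    }
}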
I am using NextAuth (Cognito provider). It works perfectly on my local machine, but when deployed on AWS Amplify it keeps giving me [next-auth][error][CLIENT_FETCH_ERROR] https://next-auth.js.org/errors#client_fetch_error Unexpected token '<', "<!DOCTYPE "... is not valid JSON, and the request to https://mydomain.sample-dev.net/api/auth/session returns a 404.
I have followed the documentation and added the required environment variables, shown below as they appear in the build output:
Executing command: NEXTAUTH_URL=https://mydomain.sample-dev.net
Executing command: NEXTAUTH_SECRET=TestingSecretKey
Executing command: NEXT_PUBLIC_AUTH_SECRET=TestingSecretKey
My pages/api/auth/[...nextauth].ts is:
import NextAuth from 'next-auth/next';
import Cognito from 'next-auth/providers/cognito';

export default NextAuth({
  providers: [
    Cognito({
      clientId: process.env.NEXT_PUBLIC_AUTH_CLIENT_ID!,
      clientSecret: process.env.NEXT_PUBLIC_AUTH_CLIENT_SECRET!,
      issuer: process.env.NEXT_PUBLIC_AUTH_ISSUER!,
      idToken: true,
      checks: 'nonce',
    }),
  ],
  debug: false,
  secret: process.env.NEXTAUTH_SECRET || process.env.NEXT_PUBLIC_AUTH_SECRET,
});
Any idea what I am doing wrong or missing?
The [next-auth][error][CLIENT_FETCH_ERROR] error suggests the NEXTAUTH_URL environment variable is not accessible at runtime as a result of misconfiguration.
To resolve the error, first define the required environment variables on your AWS Amplify dashboard.
Select the affected application, then under App settings click Environment variables and follow the instructions there.
Then open Build settings (also under App settings) to edit your app's build specification and make the environment variables accessible to the app, as illustrated below.
build:
  commands:
    - echo "NEXTAUTH_URL=$NEXTAUTH_URL" >> .env
    - npm run build
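For completeness, a fuller amplify.yml sketch; the NEXTAUTH_SECRET line is an assumption based on the variables in the question (adjust to whatever your app actually reads at runtime):

version: 1
frontend:
  phases:
    build:
      commands:
        - echo "NEXTAUTH_URL=$NEXTAUTH_URL" >> .env
        - echo "NEXTAUTH_SECRET=$NEXTAUTH_SECRET" >> .env
        - npm run build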
I've tried to run the following commands as part of a bash script run by a BashOperator:
aws s3 ls s3://bucket
aws s3 cp ... ...
The script runs successfully; however, the AWS CLI commands return an error, showing that the AWS CLI doesn't run with the needed permissions (as defined in the airflow-worker-node role).
Investigating the error:
I've upgraded awscli in the Docker image running the pod to version 2.4.9 (I understood that old versions of awscli don't support S3 access based on permissions granted by an AWS role).
I've investigated the pod running my bash script via the BashOperator:
Using k9s and the d (describe) command:
I saw that the ARN_ROLE is defined correctly.
Using k9s and the s (shell) command:
I saw that the pod's environment variables are correct.
The AWS CLI worked with the needed permissions and could access S3 as needed.
aws sts get-caller-identity reported the right role (airflow-worker-node).
Running the above commands as part of the bash script executed by the BashOperator gave different results:
Running env showed a limited set of env variables.
The AWS CLI returned a permission-related error.
aws sts get-caller-identity reported the EKS role (eks-worker-node).
How can I grant the AWS CLI in my BashOperator bash script the needed permissions?
Reviewing the BashOperator source code, I've noticed the following code:
https://github.com/apache/airflow/blob/main/airflow/operators/bash.py
def get_env(self, context):
    """Builds the set of environment variables to be exposed for the bash command"""
    system_env = os.environ.copy()
    env = self.env
    if env is None:
        env = system_env
    else:
        if self.append_env:
            system_env.update(env)
            env = system_env
And the following documentation:
:param env: If env is not None, it must be a dict that defines the
    environment variables for the new process; these are used instead
    of inheriting the current process environment, which is the default
    behavior. (templated)
:type env: dict
:param append_env: If False (default), uses only the environment variables
    passed via the env param and does not inherit the current process
    environment. If True, inherits the environment variables from the current
    process, and the variables passed by the user either update the existing
    inherited variables or are appended to them.
:type append_env: bool
If the BashOperator's env parameter is None, it copies the environment variables of the parent process.
In my case, I provided some env variables, so it did not copy the parent process's environment into the child process, which caused the child process (the BashOperator's bash process) to fall back to the default ARN role of eks-worker-node.
The simple solution is to set the following flag in BashOperator(): append_env=True, which merges the existing process environment with the env variables I added manually (see the sketch below).
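A minimal sketch of that flag on a newer Airflow (the task_id and script path follow the example further down and are otherwise assumptions):

from airflow.operators.bash import BashOperator

copy_task = BashOperator(
    task_id="copy_data_from_mcd_s3",
    env={"dag_input": "{{ dag_run.conf }}"},
    append_env=True,  # merge these on top of os.environ instead of replacing it
    bash_command="utils/my_script.sh",
)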
However, I've figured out that this flag isn't supported in the version I'm running (2.0.1); it is supported in later versions.
As a temporary solution, I've added **os.environ to the BashOperator env parameter:
return BashOperator(
    task_id="copy_data_from_mcd_s3",
    env={
        "dag_input": "{{ dag_run.conf }}",
        # ...
        **os.environ,
    },
    # append_env=True,  # should be supported in 2.2.0
    bash_command="utils/my_script.sh",
    dag=dag,
    retries=1,
)
This solved the problem.
So my goal is to deploy a serverless, Dockerized Next.js application on ECS/Fargate.
When I build my project with docker build . -f development.Dockerfile --no-cache -t myapp:latest, everything runs successfully, except that the Docker build doesn't pick up the env file in my project's root directory. Once the build finishes, I push the Docker image to Elastic Container Registry (ECR), and my Elastic Container Service (ECS) references that ECR image.
So naturally, my built image doesn't have an env file (containing the API keys and DB credentials); as a result my app is deployed, but all of the services relying on those credentials fail because there is no env file in the container and all of the variables come out undefined or null.
To fix this issue I looked at this AWS doc and implemented a solution that stores my .env file in AWS S3, with the S3 ARN referenced in the container service. However, that didn't work out, and I think it's because of the way my next.config.js references my environment files in my local codebase. I also tried to set my environment variables manually (very insecure) when configuring the container in my task definition, and that didn't work either.
My next.config.js:
const dotEnvConfig = { path: `../../${process.env.NODE_ENV}.env` };
require("dotenv").config(dotEnvConfig);
module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    xyzKey: process.env.xyzSecretKey || "",
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    appUrl: process.env.app_url || "",
  },
};
In my local codebase I have two files in the root directory, development.env (local API keys) and production.env (live API keys), and my next.config.js is located at /packages/app/next.config.js.
So apparently it came down to plain Next.js's way of handling env variables.
In next.config.js
module.exports = {
  env: {
    user: process.env.SQL_USER || "",
    // add all the env vars here
  },
};
To use the environment variable user in the app, all you have to do is reference process.env.user; it maps to SQL_USER in my local .env file, where it is stored as SQL_USER="abc_user".
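For instance (a sketch; the component file is hypothetical, and SQL_USER comes from the config above):

// pages/index.js
export default function Home() {
  // `process.env.user` is inlined at build time from SQL_USER in the .env file
  return <p>Connecting as {process.env.user}</p>;
}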
You should be setting the environment variables in the ECS task definition. To avoid storing sensitive values in the task definition itself, use AWS Systems Manager Parameter Store or AWS Secrets Manager, as documented here.
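A minimal sketch of the relevant fragment of a task definition; the container name, region, account id, and parameter path are all assumptions:

{
  "containerDefinitions": [
    {
      "name": "myapp",
      "secrets": [
        {
          "name": "SQL_USER",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/SQL_USER"
        }
      ]
    }
  ]
}

ECS fetches the value at container start and exposes it as a plain environment variable, so nothing sensitive sits in the task definition, the image, or the repository; the task execution role just needs permission to read the parameter.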
I'm using Elastic Beanstalk to deploy a Django app. I'd like to SSH into the EC2 instance to execute some shell commands, but the environment variables don't seem to be there. I specified them via the AWS GUI (Configuration -> Environment properties), and they do seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there an environment (or a script I can run) that has all the properties set? Otherwise I can hardly run any command like python3 manage.py ..., since no settings module is configured (I know how to specify it manually, but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part fixes quoted numbers.
Note this does not require sudo.
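To read a single property instead, use the -k option (DJANGO_SETTINGS_MODULE here is an assumed property name, matching the question's Django setup):

export DJANGO_SETTINGS_MODULE=$(/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SETTINGS_MODULE)
python3 manage.py showmigrations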
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up with the proper permissions, you can source it, as sketched below.
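A sketch of that approach; the hook path, file name, ownership, and permissions are assumptions on top of the knowledge center post:

#!/bin/bash
# .platform/hooks/postdeploy/01_copy_env.sh (hooks run as root during deployment)
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_file
chown ec2-user:ec2-user /opt/elasticbeanstalk/deployment/custom_env_file
chmod 600 /opt/elasticbeanstalk/deployment/custom_env_file

Then, after eb ssh:

set -a    # export everything that gets sourced
. /opt/elasticbeanstalk/deployment/custom_env_file
set +a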
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
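For example, a hypothetical property MESSAGE with the value hello world would trip up the export one-liner above:

# get-config emits:  MESSAGE: hello world
# after sed:         MESSAGE=hello world
# xargs then splits on the space, so export sets MESSAGE=hello and fails on 'world'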
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment properties in an Elastic Beanstalk script, use this:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
{ "Ref" : "AWSEBEnvironmentName" }
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
How can I set secrets.yml for the test environment on CircleCI with RSpec? My secrets.yml is not in git.
When I run my tests on CircleCI, they fail with error:
ArgumentError: Missing required arguments: aws_access_key_id, aws_secret_access_key
This solution worked out for me:
Create config/secrets.ci.yml with test environment variables.
Add circle.yml to the root directory:
circle.yml
machine:
  ruby:
    version: 2.2.2
dependencies:
  override:
    - mv config/secrets.ci.yml config/secrets.yml
Another option is to use CircleCI Environment Variables. This would allow you to load in secret info without it being in your repository or file system at all.
Not to mention that the AWS CLI will automatically read those variables from the environment if they are present.
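For example, secrets.yml can read those variables through ERB (a sketch; the key names follow the error message above, and the variables would be defined in the CircleCI project's Environment Variables settings):

# config/secrets.yml
test:
  aws_access_key_id: <%= ENV["AWS_ACCESS_KEY_ID"] %>
  aws_secret_access_key: <%= ENV["AWS_SECRET_ACCESS_KEY"] %>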