I have successfully been able to run custom shell scripts with Chef on OpsWorks.
However, the application's environment variables are not available. I have tried running as the www-data user, but the environment variables are still not available.
Is there any way to access the environment variables set on the application in OpsWorks from custom recipes?
Actually, I found out that the environment variables are set in the site's virtual host and are only available to the application.
How do I get access to env vars in Next.js running on Cloud Run? I explicitly need to set the variables through the Cloud Run config (as our team is using it for other services), but Next.js does not seem to pick them up at runtime.
I saw this answer, How to Access ENV Variables in Google Cloud Run using Next.js, but it seems to focus more on getting variables into the browser.
I have a Flask app that uses a different database depending on production vs. development environment variables. I am worried about a developer forgetting to set FLASK_ENV=development before running their local Flask app and suddenly making updates to a production database.
The only easy solution I have thought of is restricting the production DB to accept requests only from the production server's IP, so that everything errors out if the developer forgets to set the environment variable, but I was wondering if there are better solutions for this issue.
First of all, it is a good practice to limit access to your production database to trusted IPs only.
As you can read in Configuration Handling: Development/Production (Flask Docs), you can have multiple configurations and use inheritance.
class Config(object):
    DATABASE_URI = 'sqlite:///:memory:'

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    pass
You can always load the safe default configuration, and load the real database configuration only when the production environment variable is set.
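A minimal sketch of how that selection could look, assuming a hypothetical APP_ENV variable and a config.py module containing the classes above:

# app.py -- APP_ENV and the module name config.py are assumptions for illustration
import os
from flask import Flask

app = Flask(__name__)

# Safe default: the in-memory SQLite configuration.
app.config.from_object('config.DevelopmentConfig')

# Switch to the real database only when the variable is explicitly set.
if os.environ.get('APP_ENV') == 'production':
    app.config.from_object('config.ProductionConfig')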
Another solution is to use the instance folder (Flask Docs), which must not be part of your Git repository.
The instance folder is designed to not be under version control and be deployment-specific.
So, when you deploy your app, just add your production configuration to this instance folder, and nobody will have the production configuration on their local machine.
The link above has a few examples and explains very well how to use it.
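A rough sketch of the instance-folder approach, assuming the production settings live in an instance/config.py that exists only on the server:

# app.py -- instance/config.py is an assumed, deployment-only file
from flask import Flask

# instance_relative_config=True makes from_pyfile look in the instance folder.
app = Flask(__name__, instance_relative_config=True)

# Safe defaults that are checked into the repository.
app.config.from_object('config.Config')

# Deployment-specific overrides; silent=True avoids an error when the file
# is missing on a developer's machine.
app.config.from_pyfile('config.py', silent=True)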
You can pre-set the environment variables in a .flaskenv file:
FLASK_ENV=development
Then install python-dotenv:
pip install python-dotenv
Now, if you run your application locally with flask run, Flask will automatically read .flaskenv and set the environment variables defined in it.
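As a usage sketch, a slightly fuller .flaskenv (the FLASK_APP value is an assumption; point it at your own entry module):

FLASK_APP=app.py
FLASK_ENV=development

Running flask run in that directory will then pick up both variables automatically.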
I've deployed my Spring Boot application to Elastic Beanstalk with the Corretto 11 running on 64bit Amazon Linux 2/3.0.1 platform.
When I try to add a new environment variable from the AWS Console (Configuration -> Software) and hit Apply, the update fails and rolls back to the previous configuration.
This is what I get from the AWS Console on my environment dashboard
Here are some of the logs that might be useful
The interesting part is that when I create a fresh new environment, upload my .jar file, and add the environment variables at the creation of the environment, it works (meaning the environment variables are set correctly). The problem occurs when I try to update my environment variables when the environment already exists. Am I missing something?
I tried to use $ eb setenv after the $ eb deploy from my CircleCI pipeline, but I still get the same error.
I've been digging into this, and now I know why it fails.
The reason is that when you add an env variable to your EB environment, the EB engine downloads the last application version, unzips it, and replaces the current application with it.
This means no deployment hooks or .ebextensions scripts are executed. Therefore, any application setup you do during deployment is not re-applied, leading to failure.
This is based on my own observations using Python 3.7 running on 64bit Amazon Linux 2/3.0.3 and single-instance EB type.
I found a workaround: if you set your deployments to immutable, this goes away, because EB creates a brand new EC2 instance for you. It's not the best solution if you have quota limitations, but it works.
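For reference, this is roughly what the documented option looks like in an .ebextensions config file (the file name is arbitrary):

# .ebextensions/deploy-policy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable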
I made a Django project and have successfully deployed it to an Elastic Beanstalk environment, let's say it's called app_name. However, I realized I needed 2 environments: development and production. The purpose of this is so I can try things out in development, and when I know it's fully working, I can deploy it in production for the public to use.
I tried looking around their documentation and found Managing multiple Elastic Beanstalk environments as a group with the EB CLI. It basically says you can create a group of environments in one project with the command:
~/workspace/project-name$ eb create --modules component-a component-b --env-group-suffix group-name
However, I'm not sure what a group means; I just need a development environment and a production environment.
I'm fairly new at this. How do I create and manage development and production environments for this purpose? I would be ever so grateful if someone could shed some light on my problem.
Running a group of environments is more for different services doing different things. You would have an environment that handles Service One, and an environment that handles Service Two etc. This isn't really what you want.
You just need a second environment in the same application as your production environment. It doesn't have to be in the same application, but I like it that way because it's useful for deploying an app version to dev, and then deploying the same app version to prod once it's tested.
An easy way to do this is to run
eb clone app_name (where app_name is the name of your production environment)
This will clone your production environment and prompt you to give it a name, which you might set to app_name_dev. From there you can edit your dev environment to make it more suitable for dev (maybe you'd make the instances smaller, change software variables like MAIL_DRIVER=mailgun to MAIL_DRIVER=mailtrap, connect it to a dev database instead of your prod database, etc.).
The downside of doing this is that if your production environment is currently running jobs or doing anything meaningful, you may not want to clone it right away, since the new dev environment could start doing those things too before you manage to update its config to point to a dev DB. If that's the case, you could just run eb create my_app_dev and configure it from scratch. A possible workflow is sketched below.
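A rough sketch of that workflow with the EB CLI (all names and values here are placeholders):

# clone prod into a new dev environment
eb clone app_name --clone_name app_name_dev

# point the CLI at the new environment and override its software variables
eb use app_name_dev
eb setenv MAIL_DRIVER=mailtrap DB_HOST=dev-db.example.com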
I'm integrating AWS Auto Scaling Group with Code Deploy.
I wrote a bash script for the AfterInstall hook.
The script executes composer update and composer dump-autoload, since my code uses PHP.
And here is the problem.
When I deploy, deployment fails with this log.
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
But when I connect to the instance via SSH and run composer, it works fine.
How do I fix this? Has anyone worked around this issue?
Any answer will be appreciated. Thank you for your time.
I had a similar problem using Elastic Beanstalk, and I fixed it by adding an environment variable.
You should be able to achieve this in CodeDeploy too, for example when creating the application.
See also https://github.com/composer/composer/issues/4789
Could you make sure the env variable is also accessible to the user you specify in the appspec file, which runs the hook script? If you have multiple users running on the instance, the env variable might not be accessible to every user, depending on how you set it up.
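For what it's worth, a minimal sketch of one workaround, assuming the hook runs as root and the paths are placeholders: export HOME (or COMPOSER_HOME) at the top of the AfterInstall script itself, since CodeDeploy hooks don't run in a login shell.

#!/bin/bash
# scripts/after_install.sh -- paths are assumptions for illustration
set -e

# CodeDeploy runs hooks outside a login shell, so HOME may be unset.
export HOME=/root
export COMPOSER_HOME=/root/.composer

cd /var/www/html
composer update --no-interaction
composer dump-autoload --optimize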
I have the same concern regarding composer install using CodeDeploy. It runs well in develop, but when I run it in production, I'm getting:
[stderr] [RuntimeException]
[stderr] The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
I SSH to the instance, run composer, and I get:
user@server:~/httpdocs$ /opt/plesk/php/7.2/bin/php /usr/lib/plesk-9.0/composer.phar install
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
Generating optimized autoload files
user@server:~/httpdocs$
I have one EC2 instance and I deploy to 2 separate directories for stg and prod.
(Screenshot: CodeDeploy deployment error)