I would like to include variables from a file on the remote host, rather than the control machine Ansible is running on.
For example I have a file /var/database_credentials.yml (on my webserver)
What's the best way to add variables from that file to hostvars so that I can use them in a template?
The include_vars module only takes files from the control machine. I could use the fetch module but that seems like an unnecessary step.
It should not be hard to integrate that with /etc/ansible/facts.d.
You can store JSON files, INI files or executable scripts in that directory and the content/output will be available as server facts after the setup module was executed.
I'm not sure it will take YAML. You might be lucky and simply adding a symlink to your file /var/database_credentials.yml will work. (YAML is not mentioned in the docs, but it would be strange if it were not supported, since pretty much everything in Ansible is based on YAML.) If not, you can create a script in whatever language you prefer that reads the file and outputs a JSON object.
See Local Facts (Facts.d) in the docs.
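If the symlink doesn't work, a small executable fact can do the conversion. Below is a minimal sketch, assuming Python with PyYAML is available on the webserver and the credentials live in /var/database_credentials.yml; the task and fact names are illustrative:

- name: Install an executable local fact that converts the YAML file to JSON
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/database_credentials.fact
    mode: "0755"
    content: |
      #!/usr/bin/env python
      # Read the YAML credentials file and print it as JSON, which facts.d expects
      import json, yaml
      with open("/var/database_credentials.yml") as f:
          print(json.dumps(yaml.safe_load(f)))

- name: Re-run the setup module so the new local fact is available in this play
  ansible.builtin.setup:
    filter: ansible_local

- name: The values are now reachable as facts, e.g. in a template
  ansible.builtin.debug:
    var: ansible_local.database_credentials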
You can register the remote file to a local variable, then parse it with from_yaml.
- name: "Read yml file"
ansible.builtin.shell: "cat /var/database_credentials.yml"
register: result
- name: "Parse yml into variable"
set_fact:
database_credentials: "{{ result.stdout | from_yaml }}"
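If you'd rather avoid a raw shell call, the same idea works with the slurp module, which returns the file content base64-encoded; a minimal sketch:

- name: "Read yml file"
  ansible.builtin.slurp:
    src: /var/database_credentials.yml
  register: result

- name: "Parse yml into variable"
  set_fact:
    database_credentials: "{{ result.content | b64decode | from_yaml }}"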
I am trying to build a Spring Boot application which connects to a DB. I would like to use a .env file which holds the sensitive content. As a first step, I am testing by changing the port to 8081.
My .env file has the following content
PORT=8081
My application.properties has the following content
server.port=${PORT}
I get a runtime error that PORT cannot be resolved, which is to be expected, since I don't know how to feed the .env file to the properties.
Could someone point me in the right direction?
PS: I am using the port as an example; if this succeeds I will also set the DB credentials with the .env file.
UPDATE:
I would prefer using a .env file because when the application is deployed using AWS CodePipeline, I can have the environment variables set in the CodeBuild stage, where I build the jar and eventually a Docker image. Something like this:
EnvironmentVariables:
  - Name: PORT
    Value: "{resolve:secretsmanager:DBCredentials:SecretString:port}"
The error is: Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'PORT' in value "${PORT}"
I think you are taking the React approach! But in Spring, for multiple environments, it's better to have one properties or YAML file per environment.
PS: The file must be named like application-{environment name}.properties and must be in the resources folder.
For dev:
File name : application-dev.properties
server.port=8089
For IT:
File name : application-it.properties
server.port=8090
In the application.properties file, where we usually put the properties shared between all the environments, we can add the property spring.profiles.active=dev (you can put whatever you want depending on your needs). If you have more than one profile you can separate them with ','.
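If you prefer YAML over .properties, the same layout works with application.yml files; a minimal sketch (the port value is just an example):

# application.yml - shared properties
spring:
  profiles:
    active: dev

# application-dev.yml
server:
  port: 8089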
For more details you can check spring profiles
My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and sub-folders. I think it's supposed to be done using .ebextensions but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions on files/folders within your deployment directory.
If you look at the event hooks for an Elastic Beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
you'll find that commands run before the EC2 app and web server are set up, and container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory, modifies its permissions, and creates a file with some content:
====== .ebextensions/custom_directory.config ======
commands:
  create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  set_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W
files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
.ebextensions files go into the root of the application source code, in a directory called .ebextensions. For more information on how to use ebextensions, please go through the documentation here.
Place a file 01_fix_permissions.config inside the .ebextensions folder:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions Commands and Platform Commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS github, it seems to be not explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use Commands/Platform commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks or C:\Program Files\Amazon\ElasticBeanstalk\hooks\ ) to run at your desired stage of deployment.
I think it's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the File .ebextensions directive when trying to place scripts directly into the platform hook dirs during repeated deployments. Specifically, the File directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a Container Command to copy, with overwriting, from another directory into the desired hook directory to overcome this issue.
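To make that copy-with-overwrite idea concrete, here is a rough, untested sketch for the Windows platform; the script name, the bundled source path and the hook stage are all assumptions, not verified config:

container_commands:
  01_copy_permission_hook:
    # Overwrite any previous copy of the hook script with the one shipped in the app bundle
    command: copy /Y .ebextensions\hooks\set_docs_acl.ps1 "C:\Program Files\Amazon\ElasticBeanstalk\hooks\appdeploy\post\set_docs_acl.ps1"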
I source my environment-specific variables from a .env file (I'm using the dotenv package). This file is not version controlled.
Using the CodeBuild stage of CodePipeline, how do I create this .env file and its contents?
I'm thinking I have to use the buildspec to create and add content to the .env file, but I have no idea how.
Thanks
This is actually a simple script. Taking the second option from the other answer here, you can add the following to your buildspec.yml:
phases:
  pre_build:
    commands:
      - printenv > .env
The shell command printenv > .env is all that is needed to dump your entire process environment to a file.
My suggestion would be to get rid of dotenv and use real environment variables.
The problem with dotenv is that it simply mocks process.env inside your node process. It does not really modify environment variables for you. They are not available outside of your node process.
I would use direnv instead and replace your .env files with .envrc for local development.
For CI and production environments, the environment variables are now defined in your CodeBuild project so .envrc files are not needed.
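With CodeBuild, the values can either be set on the project itself or declared in the buildspec, including secrets pulled from Secrets Manager; a hedged sketch (the variable names, secret id and build command are illustrative):

version: 0.2
env:
  variables:
    NODE_ENV: production
  secrets-manager:
    DB_PASSWORD: "DBCredentials:password"
phases:
  build:
    commands:
      - npm run build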
Please see this related answer for more information from another angle.
If you wish to continue using dotenv, you have several options:
Store the .env file in S3 and retrieve it during the build process.
Construct the .env file using shell scripts and environment variables defined on the CodeBuild project.
I'm sure there are others, but they all feel hacky to me.
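For completeness, a hedged sketch of the first option; the bucket name and key are illustrative, and the CodeBuild service role needs read access to the object:

phases:
  pre_build:
    commands:
      - aws s3 cp s3://my-config-bucket/myapp/.env .env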
I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to have the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{"cdc":
{"cdc_test_server":
{"cdc_test_user":
{"username": "cdc_test_user",
"password": "password"}
}
}
}
Now I want the Python script to be executed by Ansible and the key/value pairs of its output to be included/registered as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playboook Functionality"
      command: python /usr/local/test.py
      register: pass
    - debug: var=pass.stdout
    - name: "Include json user output"
      set_fact: passwords="{{pass.stdout | from_json}}"
    - debug: " {{passwords.cdc.cdc_test_server.cdc_test_user.password}} "
The first debug generates the correct JSON output, but I am not able to include the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just returns a "Hello world" message. So my question is: how do I properly include the JSON key/value pairs as Ansible variables via a task?
See Ansible KeePass Lookup Plugin
ansible_user: "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
You may want to use facts.d and place your Python script there so it is available as a fact.
Or write a simple action plugin that returns a JSON object, to eliminate the need for the stdout -> from_json conversion.
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py
Here is my situation:
I have a Django app which depends on config values stored in a .env file. This .env file is kept out of source control to keep sensitive info private. The Django app is deployed in a Docker container, and I have Jenkins set up to rebuild the container whenever changes are checked into our Git repository. The build will fail unless there is a .env file present in the build environment. What is the best way to include that file?
I currently have Jenkins set up to execute a shell command that writes the file to the build environment, but I can't help but feel that is sub-optimal, security-wise. What would be a better way to do this?
The answer we have come up with is to store the file on S3 and use the AWS CLI to fetch it at build time. Since the build is destined to be uploaded to EC2 anyway, it makes sense to use the AWS credentials for both operations.
Would including the file in source control, with access granted only to you/authorised users, break your privacy policy?
Otherwise, you can try to always keep the file in the Jenkins workspace dir and never delete it when cleaning the workspace.