Chef shared config files - amazon-web-services

I'm just learning about Chef, and I'm confused about the best way to handle a situation where a single config file may need to be modified by multiple recipes. The specific case I have is with AWS CloudWatch Logs. It has a single configuration file (with no way that I can see to include other files), which needs to list all of the log files it collects.

I have different recipes (e.g. a web server, a mail server, etc.) that will cause different log files to be created. When those recipes run, I want their log files to be added to the CloudWatch Logs config file. But it doesn't seem safe to simply append them to the file, because if the recipe is re-run, they will be added again. I also want to keep the logic about which log files exist within the recipe that installs the software that creates them. I'd like to be able to just add a recipe to the run list and have its logs added to the config file (and vice versa), without the mail server recipe having to know anything about the files the web server creates, and so on.
What is the right way to do this?

You need to have a CloudWatch recipe that actually creates the config file. Each of your other recipes can set node attributes indicating which log files they need added, and the CloudWatch recipe can then read those attributes and use them to build the config file.
For example:
In the webserver recipe:
node.default['logs']['nginx']['somevhost'] = %w(/path/to/log /path/to/other/log)
In a CloudWatch recipe:
# collect every advertised log file into a structure suited to the CloudWatch Logs config
log_files = []
node['logs'].each do |app, components|
  components.each do |component, paths|
    log_files.concat(paths)
  end
end
Even better
I would suggest you take a look at Infochimps' silverware cookbook. It has a nice library you can use to advertise your log files in the webserver and mailserver cookbooks, and then discover them in the CloudWatch cookbook.

Related

How to mount a file via CloudFoundry manifest similar to Kubernetes?

With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in the /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download the file and read its content. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in concept similar to a Kubernetes ConfigMap, where you can take your configuration and mount it as a file. With CloudFoundry, you do have a few similar options. They are not exactly the same, though, so you'll have to determine whether one will work for your needs.
You can pass config through environment variables (or through user-provided service bindings, but those also arrive through an environment variable, VCAP_SERVICES). This won't be a file, but perhaps you can have your UI read that instead. (You didn't mention how the UI gets that file, so I can't comment further; if you elaborate on that point, e.g. whether it's fetched over HTTP or read from disk, I could expand on this option.)
If it absolutely needs to be a file, your application could read the environment variable's contents and write them to disk when it starts. If your application isn't able to do that, for example if you're using Nginx, you could include a .profile script at the root of your application that reads the variable and generates the file, e.g. echo "$CFG_VAR" > /dist/file, or whatever you need to do to generate that file.
A couple more notes on using environment variables: there are limits to how much information can go into them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K). They are also not great for binary configuration, in which case you'd need to base64-encode your data first.
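For illustration, here's a minimal .profile sketch of this option, assuming the script sits at the application root (where CloudFoundry runs it before starting the app) and that CFG_VAR and CFG_B64 are placeholder variable names you have set on the app:

#!/bin/bash
# .profile sketch: turn environment variables into files before the app starts
mkdir -p dist
# plain-text config straight from an environment variable
echo "$CFG_VAR" > dist/feature-flags.json
# binary or multi-line config that was base64-encoded into CFG_B64 beforehand
echo "$CFG_B64" | base64 -d > dist/feature-flags.bin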
You can pull the config file from a config server and cache it locally. This can be pretty simple: the first thing your app does when it starts is reach out, download the file, and place it on disk, where it will persist for the duration of your application's lifetime.
If you don't have a server-side application, for example if you're running Nginx, you can include a .profile script (it can be any executable script) at the root of your application which uses curl or another tool to download and set up that configuration.
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
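A similar .profile sketch for this option, assuming CONFIG_URL is an environment variable you set to wherever the file is durably stored:

#!/bin/bash
# .profile sketch: fetch the config file at startup and cache it on local disk
mkdir -p dist
curl -fsSL "$CONFIG_URL" -o dist/feature-flags.json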
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. This has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.

How to handle private configuration file when deploying?

I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without having to upload it to GitHub, where it is exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and to set those values on the server. You can then include the other, non-secret configuration settings as a file in the repository. If you need more flexibility later, you can pick up other tools then.
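As a minimal sketch of that single-server approach (every name and path here is illustrative, not a convention): keep an environment file outside the repository, have whatever starts Django source it, and let the committed settings read the values via os.environ:

# /etc/myapp/env -- lives only on the server, never committed
export DJANGO_SECRET_KEY='change-me'
export DATABASE_URL='postgres://user:password@localhost/mydb'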

How to use regex in Jenkins advanced configuration to trigger builds based on changes in specific project folders

We have a microservice architecture with a single project repository containing several folders. Each folder holds the files for a specific API. We would like to keep that as a single repo but configure separate jobs in Jenkins for each API folder. As such, we would like to know how to use the same repo for the SCM checkout in Jenkins but trigger builds only for commits made to the folders where changes occurred. I know it supports regexes to include and exclude paths, but would like to know how best to use that.
So say, for example, I have a project sample-project with three folders: abc, def and xyz.
We now have a job in Jenkins that checks out sample-project. We would like that job to be configured so that it is triggered only when something inside the abc folder is changed or committed, and not otherwise. How best to implement this?
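For reference, the Jenkins Git plugin exposes this kind of filtering through its "Polling ignores commits in certain paths" behaviour; a hedged sketch of what the Included Regions field for the abc job might contain:

abc/.*

With SCM polling enabled, commits touching only def or xyz should then be ignored by this job.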

What is the correct way of sharing your elasticbeanstalk configuration with your team?

I am looking for a way to share the EB configuration so anyone on my team with valid AWS creds can deploy the code. By default, EB adds the following to your .gitignore file.
# Elastic Beanstalk Files
.elasticbeanstalk/*
!.elasticbeanstalk/*.cfg.yml
!.elasticbeanstalk/*.global.yml
Do I need to check-in these files to share it with the team?
In my opinion, AWS royally messed up with their .gitignore defaults. This was confusing at first, because it seemed like the entries were there for a good reason, but we couldn't find one. Maybe it was just a precaution so you didn't commit something you shouldn't. However, firstly, modifying a project's .gitignore is not something a tool should be doing by default, in my opinion; and secondly, no one should be committing code they haven't reviewed.
As Kush notes in his reply, you can add the files into a nested directory which would be tracked by your VCS. I'm assuming the reason for this is so that different developers can maintain different configurations. We have zero use for anything remotely resembling this, but it's worth noting as I'm sure someone may.
We've completely removed these entries from our project and commit the entire .elasticbeanstalk and .ebextensions directories.
Assuming you have CLI access, you can create a template and share it with a command like:
eb config save dev-env --cfg prod
Now, open this file in a text editor to modify/remove sections as necessary for your production environment.
Note: AWSConfigurationTemplateVersion is a required field. Do not remove it from the configuration file.
Checking Configurations into Version Control
If you want to check in your saved configurations so that anyone with access to your code can use the same settings in their own environments, or if you want to track different versions of the saved configurations, move the file up one level into the .elasticbeanstalk/ directory. Saved configurations are located in the .elasticbeanstalk/saved_configs/ folder; by moving the configuration file up into .elasticbeanstalk/, it can be checked in and will still work with the EB CLI. After you move the file, you must add and commit it.
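Putting the steps together, a hedged sketch of the whole flow (the environment and configuration names are just examples; the saved_configs path is where eb config save writes by default):

eb config save dev-env --cfg prod
mv .elasticbeanstalk/saved_configs/prod.cfg.yml .elasticbeanstalk/
git add .elasticbeanstalk/prod.cfg.yml
git commit -m "Share saved Elastic Beanstalk configuration"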
Refer to this AWS Blog Post.

Where is Appropriate to Put AWS Keys

I'm learning about Strongloop, it's pretty good so far.
Question: What is the appropriate place to put AWS keys? config.json? And how would I access them from my application?
Thanks
Ideally you would not put those credentials in any file that is committed. I usually find environment variables to be the best balance of convenience and security.
If you are using strong-pm, then you would do this with slc ctl env-set. If you are using some other supervisor, then you'll need to consult its docs.
A lot of the time it is enough to use Upstart or systemd directly, both of which make it fairly easy to set environment variables in the service process.
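As a hedged sketch of the strong-pm route (the app name and key values are placeholders):

# set AWS credentials in the environment strong-pm passes to the app
slc ctl env-set my-app AWS_ACCESS_KEY_ID=AKIA... AWS_SECRET_ACCESS_KEY=...

The application can then read them with process.env.AWS_ACCESS_KEY_ID and process.env.AWS_SECRET_ACCESS_KEY instead of from config.json.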
Beyond the answer above, you can also handle these values as part of your release procedure.
In our product, all of these entries are kept in config files which are deployed from a shared folder.
Let me elaborate.
We keep local config files in Git, and separate config files on the production servers in a folder named shared. Whenever a tagged release is deployed from Git, the files in the shared folder overwrite these local config files.
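A sketch of that release step (the paths and tag variable are illustrative):

# check out the tagged release, then restore server-specific config from the shared folder
git -C /srv/app checkout "tags/$RELEASE_TAG"
cp /srv/shared/config/* /srv/app/config/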