How do I configure Rundeck so that I can execute a job through Ansible over a couple of AWS EC2 instances? I am using the Batix Ansible plugin, but I believe it is not configured properly or some configuration of mine is missing.
My idea is to trigger a job from Rundeck without defining static inventories in Rundeck or Ansible, if possible. (I should add that Ansible with ec2.py and ec2.ini works properly without Rundeck.)
Below is a snippet of my project configuration file with the inventory settings.
project.ansible-generate-inventory=true
resources.source.1.config.ansible-gather-facts=true
resources.source.1.config.ansible-ignore-errors=true
resources.source.1.config.ansible-inventory=/{{ VAR }}
resources.source.1.type=com.batix.rundeck.plugins.AnsibleResourceModelSourceFactory
For VAR I tried these values: etc/ansible/hosts, /ec2.py, /ec2.py --list, and /tmp/data/inventory.
You can use a dynamic inventory under Rundeck; take a look at this GitHub thread. Another way is to create a node source like this. Alternatively, you can use the Rundeck EC2 plugin to fetch the AWS EC2 nodes directly; take a look at this.
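For example, if ec2.py lives on the Rundeck server, one way to wire a dynamic inventory into the Batix resource model source is to point ansible-inventory at the executable script. This is only a sketch: the /etc/ansible/ec2.py path is an assumption, and it relies on the Rundeck user being able to execute the script and on Ansible's usual behaviour of calling an executable inventory with --list.
resources.source.1.type=com.batix.rundeck.plugins.AnsibleResourceModelSourceFactory
resources.source.1.config.ansible-gather-facts=true
resources.source.1.config.ansible-ignore-errors=true
# assumed location of the dynamic inventory script (must be executable by the Rundeck user)
resources.source.1.config.ansible-inventory=/etc/ansible/ec2.py
As in a plain Ansible setup, ec2.ini is expected to sit next to the script (or be pointed at via EC2_INI_PATH) so the script can find its settings.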
So I have been trying to create an Ansible playbook which creates a new instance on GCP and creates a test file inside that instance. I've been using this example project from GitHub as a template. In this example project, there is an ansible_hosts file which contains this host:
[gce_instances]
myinstance[1:4]
but I don't have any idea what it actually does.
The fragment you provided is plain Ansible and not related to anything GCP-specific. This is a good reference doc: Working with Inventory.
At a high level,
[gce_instances]
myinstance[1:4]
the hosts file defines the machine identities against which Ansible executes. With the hosts file, you can define groups of hosts, allowing you to apply Ansible playbooks to subsets of hosts at a time.
In the example, a group called gce_instances is created. There is nothing special or magical about the name; it isn't any kind of keyword with special meaning here.
Within a group, we specify the hostnames that we wish to work against.
The example given uses a range pattern and is simply shorthand for:
[gce_instances]
myinstance1
myinstance2
myinstance3
myinstance4
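To make the grouping concrete, here is a minimal playbook sketch that targets the group; the task itself (writing a test file) and the file names are just illustrative assumptions.
---
# site.yml (name assumed) - run with: ansible-playbook -i ansible_hosts site.yml
- hosts: gce_instances        # targets myinstance1 through myinstance4
  tasks:
    - name: create a test file on every host in the group
      copy:
        content: "hello from ansible\n"
        dest: /tmp/test.txt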
I'm new to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket; this is part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But for a future deploy I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one, which is the expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
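For illustration, here is one way such a variable might be consumed by whatever resource actually performs the deployment. The question does not say what uses the archive, so the Lambda function below, along with its name, handler, runtime, and role ARN, is purely a placeholder; only the bucket name and the variable come from this thread.
resource "aws_lambda_function" "app" {
  function_name = "example-app"                                # placeholder name
  s3_bucket     = "mybucket-app-versions"                      # bucket from the question
  s3_key        = "${var.archive_name}"                        # artifact uploaded by the build step
  handler       = "index.handler"                              # placeholder
  runtime       = "nodejs8.10"                                 # placeholder
  role          = "arn:aws:iam::123456789012:role/app-lambda"  # placeholder ARN
}
Because Terraform only references the object here and never manages it as a resource, earlier archives in the bucket are left untouched.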
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
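As a rough sketch of that Consul variant (the key path and names are assumptions), the deploy configuration could read the current archive name with the consul_keys data source and use it the same way as the variable above:
# assumes a reachable Consul agent and that the build pipeline writes
# the latest archive name to this key
data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/myapp/current_archive"
  }
}

# referenced elsewhere as ${data.consul_keys.app.var.archive_name}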
Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole lifecycle, meaning Terraform will also replace the file if it sees any changes to it.
What you may be looking for is the null_resource. You can use it to run a local-exec provisioner that uploads the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be part of your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
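For reference, a concrete version of that outline might look like the sketch below. It assumes the AWS CLI is installed on the machine running Terraform, reuses the archive_name variable from the other answer, and keys the trigger off that same name; all of those choices are assumptions, not the only way to wire it up.
resource "null_resource" "upload_to_s3" {
  # a new archive name re-runs the provisioner on the next apply
  triggers = {
    archive_name = "${var.archive_name}"
  }

  provisioner "local-exec" {
    # previous versions stay in the bucket because Terraform never manages the objects
    command = "aws s3 cp ${var.archive_name} s3://mybucket-app-versions/${var.archive_name}"
  }
}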
In Node-Red, I'm using some Amazon Web Services nodes (from module node-red-node-aws), and I would like to read some configuration settings from a file (e.g. the access key ID & the secret key for the S3 nodes), but I can't find a way to set everything up dynamically, as this configuration has to be made in a config node, which can't be used in a flow.
Is there a way to do this in Node-Red?
Thanks!
Unless a node implementation specifically allows for dynamic configuration, this is not something that Node-RED does generically.
One approach I have seen is to have a flow update itself using the admin REST API exposed by the runtime - see https://nodered.org/docs/api/admin/methods/post/flows/
That requires you to first GET the current flow configuration, modify the flow definition with the desired values and then post it back.
That approach is not suitable in all cases; the config node still only has a single active configuration.
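As a minimal illustration of that round trip (assuming Node-RED listens on localhost:1880 and adminAuth is not enabled; with adminAuth you would first need to obtain a bearer token):
# fetch the current flow configuration
curl -s http://localhost:1880/flows -o flows.json

# ...edit flows.json to adjust the config node's properties...

# push the modified configuration back to the runtime
curl -X POST -H "Content-Type: application/json" \
     --data @flows.json http://localhost:1880/flows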
Another approach, if the configuration is statically held in a file, is to insert the values into your flow configuration before starting Node-RED - i.e., have a placeholder config node configuration in the flow that you insert the credentials into.
Finally, you can use environment variables: if you set the configuration node's property to be something like $(MY_AWS_CREDS), then the runtime will substitute that environment variable on start-up.
You can update your package.json start script to start Node-RED with your desired credentials as environment variables:
"scripts": {
"start": "AWS_SECRET_ACCESS_KEY=<SECRET_KEY> AWS_ACCESS_KEY_ID=<KEY_ID> ./node_modules/.bin/node-red -s ./settings.js"
}
This worked perfectly for me when using the node-red-contrib-aws-dynamodb node. Just leave the credentials in the node blank and they get picked up from your environment variables.
For the Jenkins Job DSL, I am trying to choose specific ssh agent (plugin) keys for a job (using the sshAgent keyword inside the wrappers context). We have the Jenkins ssh agent plugin installed and several keys setup (this plugin works, as we use it for almost all of our jobs). The Jenkins Job DSL sshAgent command always picks the first key, regardless of whether I specify a different key in our Jenkins setup.
I have tried using just the key name, and also key_name + space + description (just like the dropdowns show). That does not work either; it still picks the first key.
Is this a known issue? (I haven't turned up any searches for this yet)
You need to pass the ID of the credentials to the sshAgent DSL method. To get the ID, install at least version 1.21 of the Credentials Plugin. Then navigate to the credentials you want to use, e.g. if the credentials you want to use are global and called "Your Credentials" go to Jenkins > Credentials > Global credentials (unrestricted) > Your Credentials > Update. Then click the "Advanced..." button to reveal the ID. If you did not specify a custom ID when creating the credentials, it's a UUID like 99add9e9-84d4-408a-b644-9162a93ee3e4. Then use this value in your DSL script.
job('example') {
wrappers {
sshAgent('99add9e9-84d4-408a-b644-9162a93ee3e4')
}
}
It's recommended to use a recognizable custom ID when creating new credentials, e.g. deployment-key. That will lead to readable DSL scripts.
job('example') {
wrappers {
sshAgent('deployment-key')
}
}
I have spent a couple of hours trying to change the default Monit config (.monitrc) file in Amazon OpsWorks. What I did was read all the recipes and find the template in which this config is created: https://github.com/aws/opsworks-cookbooks/blob/fb21127bf1e79e91ccbeaa47907774898bc237c5/deploy/specs/nodejs_spec.rb
monit_config = file(::File.join(node[:monit][:conf_dir], "node_web_app-#{app}.monitrc"))
I tried to change the conf_dir variable by passing custom Chef JSON at deploy time, but with no luck.
{
"monit": { "conf_dir": "/etc/monit/conf.d/custom" }
}
Could anybody help me? I don't want to rewrite the recipes just to change the Monit config path, if that is possible.
Assuming you are using their NodeJS cookbooks, the monitrc is written out inside the opsworks_nodejs definition. It does not appear to be configurable. You can either use something like chef-rewind to hack it in, or write your own recipes instead.
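If you go the chef-rewind route, a rough sketch of a custom recipe is shown below. Every path and name here is an assumption: the template resource's name must match what your chef-client run actually reports (it is built from node[:monit][:conf_dir] and the app name), and the recipe has to run after the OpsWorks deploy recipe has added that resource to the resource collection.
# custom cookbook, e.g. customize_monit/recipes/default.rb (names assumed)
chef_gem "chef-rewind"
require 'chef/rewind'

# re-open the template that writes the monitrc and change where it is written;
# the resource name below is a guess based on the default conf_dir
rewind "template[/etc/monit.d/node_web_app-myapp.monitrc]" do
  path "/etc/monit/conf.d/custom/node_web_app-myapp.monitrc"
end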