I want to generate a MAC address and a UUID in an attribute file and then pass the values to a template.
Something like this:
attributes/default.rb:
default['libvirt']['xml_mac_Adrr'] = 'openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/:$//''
default['libvirt']['xml_uuid'] = 'uuidgen virbr0'
templates/network.erb:
<uuid><%= node['libvirt']['xml_uuid'] %></uuid>
<mac address='<%= node['libvirt']['xml_mac_Adrr'] %>'/>
How can I do that?
UPDATE
I want to modify the default.xml network definition for the virtual network. Normally this is done with the virsh net-edit command.
Now I want to use a template to pass the UUID and MAC address values into the XML file and modify it on the guest machine.
This is my recipe:
template '/etc/libvirt/qemu/network/default.xml' do
  source 'qemu-network.erb'
  owner "root"
  group "root"
  mode "0644"
end
You can use backticks to execute shell commands inside Ruby and capture the output:
default['libvirt']['xml_mac_Adrr'] = `openssl rand -hex 6 | sed 's/\\(..\\)/\\1:/g; s/:$//'`
default['libvirt']['xml_uuid'] = `uuidgen virbr0`
(Note the doubled backslashes: inside Ruby backticks and double-quoted strings, a single backslash would be consumed by Ruby itself before the shell ever sees it.)
EDIT:
The second problem I see is that you have to use instance variables in the controller to share information with the view. So the best way would be:
@mac = `openssl rand -hex 6 | sed 's/\\(..\\)/\\1:/g; s/:$//'`
@uuid = `uuidgen virbr0`
Then at the view level you can use:
<uuid><%= @uuid %></uuid>
<mac address='<%= @mac %>'/>
Within Chef, relying on system commands should go through the shell_out method (which is included in the recipe DSL) to avoid some quirks when the DSL interpreter is run, and to get methods to clean up the output.
I'd go this way:
default['libvirt']['xml_mac_Adrr'] = Chef::ShellOut.new("openssl rand -hex 6 | sed 's/\\(..\\)/\\1:/g; s/:$//'").run_command.stdout.chomp
default['libvirt']['xml_uuid'] = Chef::ShellOut.new('uuidgen virbr0').run_command.stdout.chomp
But this has a problem: at each run a new MAC address will be generated. So you should use a normal attribute and avoid redefining it; this is easiest moved into the recipe. The following in the recipe file, before your template code, should do:
node.normal['libvirt']['xml_mac_Adrr'] = shell_out("openssl rand -hex 6 | sed 's/\\(..\\)/\\1:/g; s/:$//'").stdout.chomp unless node['libvirt'].key?('xml_mac_Adrr')
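For reference, this is roughly what those two commands print when run by hand (the values are random, so yours will differ; note that a bare uuidgen already produces a random UUID):
bash$ openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/:$//'
9f:3c:61:0a:b2:4e
bash$ uuidgen
f47ac10b-58cc-4372-a567-0e02b2c3d479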
What are some of the options to back up BigQuery DDLs - particularly views, stored procedures and function code?
We have a significant amount of code in BigQuery and we want to automatically back this up and preferably version it as well. Wondering how others are doing this.
Appreciate any help.
Thanks!
In order to keep and track our BigQuery structure and code, we're using Terraform to manage every resource in BigQuery.
More specifically to your question, we use the google_bigquery_routine resource to make sure the changes are reviewed by other team members, plus every other benefit you get from working with a VCS.
Another important part of our Terraform code is that we version our BigQuery module (via GitHub releases/tags), which includes the table structures and routines, and use it across multiple environments.
Looks something like:
main.tf
module "bigquery" {
source = "github.com/sample-org/terraform-modules.git?ref=0.0.2/bigquery"
project_id = var.project_id
...
... other vars for the module
...
}
terraform-modules/bigquery/main.tf
resource "google_bigquery_dataset" "test" {
dataset_id = "dataset_id"
project_id = var.project_name
}
resource "google_bigquery_routine" "sproc" {
dataset_id = google_bigquery_dataset.test.dataset_id
routine_id = "routine_id"
routine_type = "PROCEDURE"
language = "SQL"
definition_body = "CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);"
}
This helps us upgrade our infrastructure across all environments without additional code changes.
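A minimal usage sketch for the module above, assuming project_id is the only required variable (a real module will likely take more):
$ terraform init
$ terraform plan -var="project_id=my-project"
$ terraform apply -var="project_id=my-project"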
We finally ended up backing up DDLs and routines using INFORMATION_SCHEMA. A scheduled job extracts the relevant metadata and then uploads the content into GCS.
Example SQLs:
select * from <schema>.INFORMATION_SCHEMA.ROUTINES;
select * from <schema>.INFORMATION_SCHEMA.VIEWS;
select *, DDL from <schema>.INFORMATION_SCHEMA.TABLES;
You have to explicitly specify DDL in the column list for the table DDLs to show up.
Please check the documentation as these things evolve rapidly.
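As a rough sketch of such a scheduled job, the same queries can also be run from a shell with the bq CLI and the output copied to GCS; the dataset and bucket names below are placeholders:
$ bq query --use_legacy_sql=false --format=csv \
    'SELECT * FROM mydataset.INFORMATION_SCHEMA.ROUTINES' > routines.csv
$ gsutil cp routines.csv gs://my-backup-bucket/routines-$(date +%F).csv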
I write a tables/views definition file and a routines (stored procedures and functions) definition file nightly to Cloud Storage using Cloud Run. See this tutorial about setting it up. Cloud Run has an HTTP endpoint that is scheduled with Cloud Scheduler. It essentially runs this script:
#!/usr/bin/env bash
set -eo pipefail
GCLOUD_REPORT_BUCKET="myproject-code/backups"
objects_report="gs://${GCLOUD_REPORT_BUCKET}/objects-backup-report-$(date +%s).txt"
routines_report="gs://${GCLOUD_REPORT_BUCKET}/routines-backup-report-$(date +%s).txt"
project_id="myproject-dw"
table_defs=""
routine_defs=""
# get list of datasets and table definitions
datasets=$(bq ls --max_results=1000 | grep -v -e "fivetran*" | awk '{print $1}' | tail -n +3)
for dataset in $datasets
do
  echo "${project_id}:${dataset}"
  # write tables and views to file
  tables=$(bq ls --max_results 1000 "${project_id}:${dataset}" | awk '{print $1}' | tail -n +3)
  for table in $tables
  do
    echo "${project_id}:${dataset}.${table}"
    table_defs+="$(bq show --format=prettyjson "${project_id}:${dataset}.${table}")"
  done
  # write routines (stored procs and functions) to file
  routines=$(bq ls --max_results 1000 --routines=true "${project_id}:${dataset}" | awk '{print $1}' | tail -n +3)
  for routine in $routines
  do
    echo "${project_id}:${dataset}.${routine}"
    routine_defs+="$(bq show --format=prettyjson --routine=true "${project_id}:${dataset}.${routine}")"
  done
done
echo "$table_defs" | jq '.' | gsutil -q cp -J - "${objects_report}"
echo "$routine_defs" | jq '.' | gsutil -q cp -J - "${routines_report}"
# /dev/stderr is sent to Cloud Logging.
echo "objects-backup-report: wrote to ${objects_report}" >&2
echo "Wrote objects report to ${objects_report}"
echo "routines-backup-report: wrote to ${routines_report}" >&2
echo "Wrote routines report to ${routines_report}"
The output is essentially the same as running bq ls and bq show commands for all datasets, with the results piped to a text file with a date. I may add this to git, but the file includes a timestamp, so you know the state of BigQuery by reviewing the file for a certain date.
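The scheduling piece is not shown in the script itself; assuming the script is wrapped in a Cloud Run service as in the linked tutorial, a Cloud Scheduler job along these lines (the job name, schedule, service URL and service account are placeholders) triggers it nightly:
$ gcloud scheduler jobs create http bq-backup-nightly \
    --schedule="0 2 * * *" \
    --uri="https://bq-backup-service-url.a.run.app/" \
    --oidc-service-account-email="scheduler@myproject-dw.iam.gserviceaccount.com"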
I am new to bash and am having problems understanding how to get this done.
I want to check all "To:" field email address domains and collect the unique domains into a variable, to compare them to the "From:" domain.
I get the "from address" domain by using
grep -m 1 "From: " filename | cut -f 2 -d '#' | cut -d ">" -f 1
when reading a mail stored in file filename.
For "to address" domain there can be multiple To: addresses and having multiple domains. I am not sure how to get unique domains from "to address field".
Example to address line will be like this:
To: user@domain.com, user2@domain.com,
User Name <sample@domaintest.com>, test@domainname.com
grep -m 1 "^To: " filename | cut -f 2 -d '@' | cut -d ">" -f 1
But there are different formats of email, so I am not sure if grep is right or if I should look at awk or something.
I need to get the unique domain list from the "To:" field email address(es) into a variable in my bash script.
Desired output for above example:
domain.com,domaintest.com,domainname.com
If you are hellbent on doing this with line-oriented utilities, there is a utility formail in the Procmail distribution which can normalize things for you somewhat.
bash$ formail -czxTo: <<\==test==
> From: me <sender@example.com>
> To: you <first@example.org>,
> them <other@example.net>
> Subject: quick demo
>
> Very quick, innit.
> ==test==
first@example.org, other@example.net
So with that you have input which you can actually pass to grep or Awk ... or sed.
todoms=$(formail -czxTo: <message | tr ',' '\n' | sed 's/.*@//')
The From: address will not be normalized by formail -czxFrom: but you can use a neat trick: make formail generate a reply back to the From: address, and then extract the To: header from that.
fromdom=$(formail -rtzcxTo: <message | sed 's/.*@//')
In some more detail, -r says to create a new reply to whoever sent you message, and then we do -zcxTo: on that.
(The -t option may or may not do what you want. In this case, I would perhaps omit it. http://www.iki.fi/era/procmail/formail.html has (vague) documentation for what it does; see also the section just before http://www.iki.fi/era/procmail/mini-faq.html#group-writable and sorry for the clumsy link -- there doesn't seem to be a good page-internal anchor to link to.)
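Putting the two together, here is a sketch that yields the comma-separated unique To: domain list the question asks for, assuming the message is in the file message and that formail behaves as in the transcript above:
fromdom=$(formail -rtzcxTo: <message | sed 's/.*@//')
todoms=$(formail -czxTo: <message | tr ',' '\n' | sed 's/.*@//' | sort -u | paste -sd, -)
echo "$todoms"   # e.g. example.net,example.org
The paste -sd, - at the end just joins the unique domains with commas; you could instead compare each one against $fromdom in a loop.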
Email address normalization is tricky because there are so many variants to choose from.
From: Elvis Parsley <king@graceland.example.com>
From: king@graceland.example.com
From: "Parsley, Elvis" <king@graceland.example.com> (kill me, I have to use Outlook)
From: "quoted@string" <king@graceland.example.com> (wait, he is already dead)
To: This could fold <recipient@example.net>,
 over multiple lines <another@example.org>
I would turn to a more capable language with proper support for parsing all of these formats. My choice would be Python, though you could probably also pull this off in a few lines of Ruby or Perl.
The email library was revamped in Python 3.6, so this assumes you have at least that version. The email.headerregistry module, which is new in 3.6, is particularly convenient here.
#!/usr/bin/env python3

from email.policy import default
from email import message_from_binary_file
import sys

# Default to reading standard input if no file names are given
if len(sys.argv) == 1:
    sys.argv.append('-')

for arg in sys.argv[1:]:
    if arg == '-':
        handle = sys.stdin.buffer  # binary stream of stdin
    else:
        handle = open(arg, 'rb')
    message = message_from_binary_file(handle, policy=default)
    from_dom = message.get('From').addresses[0].domain
    # Collect unique To: domains, skipping the sender's own domain
    to_doms = set()
    for addr in message.get('To').addresses:
        dom = addr.domain
        if dom == from_dom:
            continue
        to_doms.add(dom)
    print(','.join([from_dom] + list(to_doms)))
    if arg != '-':
        handle.close()
This simply produces a comma-separated list of domain names; you might want to do the rest of the processing in Python too instead, or change this so that it prints something in a slightly different format.
You'd save this in a convenient place (say, /usr/local/bin/fromto) and mark it as executable (chmod 755 /usr/local/bin/fromto). Now you can call it from the shell like any other utility, such as grep.
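For example, run against the test message from the formail demo above (the order of the To: domains can vary, since a set is used):
bash$ fromto message
example.com,example.org,example.net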
My VCS has these tags
0.0.3.156-alpha+2
0.0.3.154
0.0.3.153
build-.139
build-.140
build-.142
build-0.0.1.28
build-0.0.1.29
build-0.0.1.30
build-0.0.1.32
I want to git describe --match "<regex>" to get the latest tag of the form number.number.number.number (so it's 0.0.3.154 in this case)
I have tried git describe --match "[0-9]*.[0-9]*.[0-9]*.[0-9]*$" but it doesn't result in anything, and neither do these patterns:
"[0-9]*.[0-9]*.[0-9]*.[0-9]+"
"[0-9]*.[0-9]*.[0-9]*.[0-9]{1,}"
I need to get the latest tag in order to bump the version for the next release, so I'm thinking of doing this automatically. Please let me know if I've missed anything.
Thanks
UPDATE:
In my build.gradle file I have a function to get the tag, like this (following @Marc's reply):
version getVersionFromTag()

def getVersionFromTag() {
    def stdout = new ByteArrayOutputStream()
    exec {
        commandLine 'git', 'tag', '|', 'grep', '^\([0-9]\+\.\?\)\+$', '|', 'sort', '-nr', '|', 'head', '-1'
        standardOutput = stdout
    }
    return stdout.toString().trim()
}
Here it gives the error Unexpected Char '\' for the regex above. Hence I removed the backslashes so it becomes '^([0-9]+.?)+$'; then it runs fine, but my final artifact does not have the version appended to its name (i.e. helloword.jar instead of helloword-0.0.3.154.jar).
=> My question is: how should I put @Marc's suggested command into the Gradle function correctly?
For testing I've put the output of your git describe in a file. This will do:
cat file | grep '^\([0-9]\+\.\?\)\+$' | sort -nr | head -1
0.0.3.154
Suppose you've created some irregularly formatted tags and you want to use those as well (like your build- tags) for finding the highest tag:
cat file | sed -E 's/^[^0-9.]*//' | grep '^\([0-9]\+\.\?\)\+$' | sort -nr | head -1
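Two follow-up notes on the original attempts. First, git describe --match takes glob(7) patterns, not regular expressions, so anchors like $ and quantifiers like {1,} match nothing there. Second, Gradle's commandLine does not go through a shell, so the '|' arguments are handed to git literally and the pipeline never runs; the Unexpected Char '\' error comes from Groovy treating backslashes in single-quoted strings as escape sequences. One workaround (a sketch, assuming a POSIX sh is on the PATH) is to hand the whole pipeline to a shell:
sh -c 'git tag | grep "^\([0-9]\+\.\?\)\+$" | sort -nr | head -1'
In the Gradle function this becomes commandLine 'sh', '-c', with the entire pipeline passed as one string argument; remember that in a Groovy single-quoted string each backslash must be doubled (or use a slashy string /.../ instead).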
I have a really simple script to update a table based on a flat file, but I am concerned that as the list keeps getting longer and longer, an invalidly formatted variable will get introduced and cause issues.
#!/bin/bash
OLDIFS=$IFS
IFS=,
file1=file.csv
while read mac loc; do
dbaccess modemdb <<EndOfUpdate 2>/dev/null
UPDATE profile
SET localization= '$loc'
WHERE mac_address = '$mac';
EndOfUpdate
done <"$file1"
IFS=$OLDIFS
The file contents are as such.
12:BF:20:1B:D3:22,RED-1234
12:BF:20:2D:FF:1B,BLUE-1234
12:BF:20:ED:74:0D,RED-9901
12:BF:20:02:69:7C,GREEN-4321
12:BF:20:02:6B:42,BROWN
12:BF:20:ED:74:0D,BLACK
What I am having difficulty with is how to set up a format check of the $mac and $loc variables so that if they don't match, the script stops running. The $loc can be anything up to 19 characters, so I just need to make sure it's not null and not longer than that. The MAC address needs to be non-null and in the same format as in the file. I found a reference in another post to this check, but I'm not sure how to integrate it.
`[[ "$MAC_ADDRESS" =~ "^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$" ]]`
Looking for help on how to create the validations.
Thanks,
Check MAC address with regex:
#!/bin/bash
file1=file.csv
while IFS="," read mac loc; do
  if [[ "$mac" =~ ^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$ ]]; then
    dbaccess modemdb <<EndOfUpdate 2>/dev/null
UPDATE profile
SET localization= '$loc'
WHERE mac_address = '$mac';
EndOfUpdate
  else
    echo "Error: $mac"
  fi
done <"$file1"
For bash, your regex is just an ordinary string if you put it in quotation marks.
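The question also asks for a $loc check (non-empty and at most 19 characters, per the question); a sketch that extends the same test:
if [[ "$mac" =~ ^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$ && -n "$loc" && "${#loc}" -le 19 ]]; then
${#loc} expands to the length of $loc, so this rejects empty and over-long values in the same [[ ]] test as the MAC check.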
I have a sed command like this:
radius_clientsfile=clients.conf
iface_netsize="/64"
wireless_prefix=fd04:bd3:80e8:3::
sed -i "/client $wireless_prefix\\$iface_netsize/ {n s/\(\W*secret\W*=\W\).*/\1$key/}" $radius_clientsfile
clients.conf has content like this:
client fd04:bd3:80e8:3::/64 {
        secret = 00000000000000000000000000000001
}
The aim is to replace the value of secret with key in the clients.conf file. For example, if key is 00000000000000000000000000000002, the content of clients.conf should be changed as follows:
client fd04:bd3:80e8:3::/64 {
        secret = 00000000000000000000000000000002
}
This script works on OpenWrt Attitude Adjustment r35400 for armv5tejl.
However, it does not work on Ubuntu 9.04, where it fails with this error:
sed: -e expression #1, char 36: extra characters after command
Could anyone help me with this situation?
I think you need to add a ; between the n command and the s command, like this:
sed -i "/client $wireless_prefix\\$iface_netsize/ {n; s/\(\W*secret\W*=\W\).*/\1$key/}" $radius_clientsfile
This works in my Cygwin environment.
You need to separate the commands in the command block with a semi-colon, so add a ; after the n command to separate it from the following command.
Like this:
{n;s/\(\W*secret\W*=\W\).*/\1$key/}
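A quick way to check the fix, using the sample clients.conf and the variables from the question (shown here without -i, so the result is just printed instead of written back):
$ key=00000000000000000000000000000002
$ wireless_prefix=fd04:bd3:80e8:3::
$ iface_netsize="/64"
$ sed "/client $wireless_prefix\\$iface_netsize/ {n;s/\(\W*secret\W*=\W\).*/\1$key/}" clients.conf
client fd04:bd3:80e8:3::/64 {
        secret = 00000000000000000000000000000002
}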