Where (or how) do I check for Lambda bootstrap script log statements?
For example, if I have the following bootstrap script, where can I see the output of the date and echo statements?
#!/bin/sh
cd $LAMBDA_TASK_ROOT
date
echo 'post date'
What are some of the options to back up BigQuery DDLs - particularly views, stored procedures, and function code?
We have a significant amount of code in BigQuery and we want to automatically back this up and preferably version it as well. Wondering how others are doing this.
Appreciate any help.
Thanks!
In order to keep and track our BigQuery structure and code, we're using Terraform to manage every resource in BigQuery.
More specifically to your question, we use the google_bigquery_routine resource to make sure the changes are reviewed by other team members, plus every other benefit you get from working with VCS.
Another important part of our Terraform code is that we version our BigQuery module (via GitHub releases/tags), which includes the table structures and routines, and use it across multiple environments.
Looks something like:
main.tf
module "bigquery" {
source = "github.com/sample-org/terraform-modules.git?ref=0.0.2/bigquery"
project_id = var.project_id
...
... other vars for the module
...
}
terraform-modules/bigquery/main.tf
resource "google_bigquery_dataset" "test" {
dataset_id = "dataset_id"
project_id = var.project_name
}
resource "google_bigquery_routine" "sproc" {
dataset_id = google_bigquery_dataset.test.dataset_id
routine_id = "routine_id"
routine_type = "PROCEDURE"
language = "SQL"
definition_body = "CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);"
}
This helps us upgrade our infrastructure across all environments without additional code changes.
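As a sketch of what an upgrade looks like (the 0.0.3 tag and the staging path are hypothetical), promoting a new module release to another environment is just a ref bump in that environment's main.tf:
# staging/main.tf (hypothetical): same module, newer tagged release
module "bigquery" {
  source     = "github.com/sample-org/terraform-modules.git//bigquery?ref=0.0.3"
  project_id = var.project_id
}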
We finally ended up backing up DDLs and routines using INFORMATION_SCHEMA. A scheduled job extracts the relevant metadata and then uploads the content into GCS.
Example SQLs:
select * from <schema>.INFORMATION_SCHEMA.ROUTINES;
select * from <schema>.INFORMATION_SCHEMA.VIEWS;
select *, DDL from <schema>.INFORMATION_SCHEMA.TABLES;
You have to explicitly specify DDL in the column list for the table DDLs to show up.
Please check the documentation as these things evolve rapidly.
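As a rough illustration of the scheduled job (the bucket name and file naming are placeholders, not our exact setup), the export can be as simple as running the queries with bq and copying the results to GCS:
#!/usr/bin/env bash
# Sketch only: <schema> and the bucket path are placeholders
bq query --use_legacy_sql=false --format=csv \
  'SELECT * FROM <schema>.INFORMATION_SCHEMA.ROUTINES' > routines_$(date +%F).csv
bq query --use_legacy_sql=false --format=csv \
  'SELECT table_name, ddl FROM <schema>.INFORMATION_SCHEMA.TABLES' > tables_ddl_$(date +%F).csv
# Upload both extracts to the backup bucket
gsutil cp routines_$(date +%F).csv tables_ddl_$(date +%F).csv gs://my-backup-bucket/bq-ddl/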
I write a tables/views definition file and a routines (stored procedures and functions) definition file nightly to Cloud Storage using Cloud Run. See this tutorial about setting it up. Cloud Run exposes an HTTP endpoint that is triggered on a schedule by Cloud Scheduler. It essentially runs this script:
#!/usr/bin/env bash
set -eo pipefail
GCLOUD_REPORT_BUCKET="myproject-code/backups"
objects_report="gs://${GCLOUD_REPORT_BUCKET}/objects-backup-report-$(date +%s).txt"
routines_report="gs://${GCLOUD_REPORT_BUCKET}/routines-backup-report-$(date +%s).txt"
project_id="myproject-dw"
# accumulate the JSON definitions as strings
table_defs=""
routine_defs=""
# get list of datasets and table definitions
datasets=$(bq ls --max_results=1000 | grep -v -e "fivetran*" | awk '{print $1}' | tail +3)
for dataset in $datasets
do
  echo ${project_id}:${dataset}
  # write tables and views to file
  tables=$(bq ls --max_results 1000 ${project_id}:${dataset} | awk '{print $1}' | tail +3)
  for table in $tables
  do
    echo ${project_id}:${dataset}.${table}
    table_defs+="$(bq show --format=prettyjson ${project_id}:${dataset}.${table})"
  done
  # write routines (stored procs and functions) to file
  routines=$(bq ls --max_results 1000 --routines=true ${project_id}:${dataset} | awk '{print $1}' | tail +3)
  for routine in $routines
  do
    echo ${project_id}:${dataset}.${routine}
    routine_defs+="$(bq show --format=prettyjson --routine=true ${project_id}:${dataset}.${routine})"
  done
done
echo $table_defs | jq '.' | gsutil -q cp -J - "${objects_report}"
echo $routine_defs | jq '.' | gsutil -q cp -J - "${routines_report}"
# /dev/stderr is sent to Cloud Logging.
echo "objects-backup-report: wrote to ${objects_report}" >&2
echo "Wrote objects report to ${objects_report}"
echo "routines-backup-report: wrote to ${routines_report}" >&2
echo "Wrote routines report to ${routines_report}"
The output is essentially the same as running bq ls and bq show for every dataset and piping the results to a text file, with a timestamp in the file name. I may add this to git, but since the file includes a timestamp you can see the state of BigQuery on a given date just by reviewing the file for that date.
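For reference, the Cloud Scheduler side can be set up roughly like this; the job name, schedule, service URL and service account below are placeholders, not my actual values:
# Hypothetical: invoke the Cloud Run backup service every night at 02:00
gcloud scheduler jobs create http bq-backup-nightly \
  --schedule="0 2 * * *" \
  --uri="https://bq-backup-xxxxx-uc.a.run.app/" \
  --http-method=GET \
  --oidc-service-account-email="backup-invoker@myproject-dw.iam.gserviceaccount.com"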
I have a branch folder "feature-set"; under this folder there are multiple branches.
I need to run the below script in my Jenkinsfile, with the condition that if the build runs from any branch under the "feature-set" folder (i.e. a branch named "feature-set/<something>"), then the script runs.
the script is:
sh """
if [ ${env.BRANCH_NAME} = "feature-set*" ]
then
echo ${env.BRANCH_NAME}
branchName='${env.BRANCH_NAME}' | cut -d'\\/' -f 2
echo \$branchName
npm install
ng build --aot --output-hashing none --sourcemap=false
fi
"""
The current output shows the condition never matches:
[ feature-set/swat5 = feature-set* ]
any help?
I would re-write this to be primarily Jenkins/Groovy syntax and only drop to shell when required.
Based on the info you provided I assume your env.BRANCH_NAME always looks like `feature-set/<branch>`.
// Echo first so we can see the value if the condition fails
echo(env.BRANCH_NAME)
// startsWith better than contains() based on current usecase
if ((env.BRANCH_NAME).startsWith('feature-set')) {
    // Split branch string into list based on delimiter
    List<String> parts = (env.BRANCH_NAME).tokenize('/')
    /**
     * Grab everything minus the first part
     * This handles branches that include additional '/' characters
     * e.g. 'feature-set/feat/my-feat'
     */
    branchName = parts[1..-1].join('/')
    echo(branchName)
    sh('npm install && ng build --aot --output-hashing none --sourcemap=false')
}
This seems to be more on the shell side. Since you are planning to use a shell if condition, the below worked for me.
Administrator1#XXXXXXXX:
$ if [[ ${BRANCH_NAME} = feature-set* ]]; then echo "Success"; fi
Success
Remove the quotes around the pattern and use double brackets "[[ ]]" instead of single ones.
With [[ ]], an unquoted right-hand side of = (or ==) is treated as a glob pattern, so feature-set* matches any branch name starting with "feature-set".
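For completeness, here is a sketch of how that test could go back into the Jenkinsfile's sh step; it assumes the agent shell is bash (hence the shebang), and the build commands are taken from the question:
sh """#!/bin/bash
if [[ "${env.BRANCH_NAME}" == feature-set* ]]
then
    # strip the folder prefix, e.g. feature-set/swat5 -> swat5
    branchName=\$(echo "${env.BRANCH_NAME}" | cut -d'/' -f2)
    echo "\$branchName"
    npm install
    ng build --aot --output-hashing none --sourcemap=false
fi
"""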
I am trying to get environment variables into an EC2 instance (trying to run a Django app on Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type, ami-0ff8a91507f77f867). How do you get them in, in the newest version of Amazon's Linux, or get logging set up so the script can be traced?
user-data text (modified from here):
#!/bin/bash
#trying to get a file made
touch /tmp/testfile.txt
echo 'This and that' > /tmp/testfile.txt
#trying to log
echo 'Woot!' > /home/ec2-user/user-script-output.txt
#Trying to get the output logged to see what is going wrong
exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
#trying to log
echo "XXXXXXXXXX STARTING USER DATA SCRIPT XXXXXXXXXXXXXX"
#trying to store the ENVIRONMENT VARIABLES
PARAMETER_PATH='/'
REGION='us-east-1'
# Functions
AWS="/usr/local/bin/aws"
get_parameter_store_tags() {
    echo $($AWS ssm get-parameters-by-path --with-decryption --path ${PARAMETER_PATH} --region ${REGION})
}
params_to_env () {
    params=$1
    # If .Tags does not exist we assume an ssm Parameters object.
    SELECTOR="Name"
    for key in $(echo $params | /usr/bin/jq -r ".[][].${SELECTOR}"); do
        value=$(echo $params | /usr/bin/jq -r ".[][] | select(.${SELECTOR}==\"$key\") | .Value")
        key=$(echo "${key##*/}" | /usr/bin/tr ':' '_' | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
        export $key="$value"
        echo "$key=$value"
    done
}
# Get TAGS
if [ -z "$PARAMETER_PATH" ]
then
echo "Please provide a parameter store path. -p option"
exit 1
fi
TAGS=$(get_parameter_store_tags ${PARAMETER_PATH} ${REGION})
echo "Tags fetched via ssm from ${PARAMETER_PATH} ${REGION}"
echo "Adding new variables..."
params_to_env "$TAGS"
Notes -
What I think I know but am unsure of
the user-data script is only run when the instance is first created, not when I stop and then start it, as mentioned here (although it also says [I think outdated] that the output is logged to /var/log/cloud-init-output.log); see the log-location sketch after this list
I may not be starting the instance correctly
I don't know where to store the bash script so that it can be executed
What I have verified
the user-data text is on the instance: ssh-ing in and running curl http://169.254.169.254/latest/user-data shows the current text (#!/bin/bash …)
What I've tried
editing rc.local directly to export AWS_ACCESS_KEY_ID='JEFEJEFEJEFEJEFE' … and the like
putting them in the AWS Parameter Store (I can see them via the correct call, I just can't trace them getting into the EC2 instance without logs, or confirm that the user-data is being run)
putting ENV variables in Tags and importing them as mentioned here
outputting the logs to other files as suggested here (not seeing any log files in the SSH session or in the system log)
viewing the System Log on the AWS web page for errors/logs via selecting the instance -> 'Actions' -> 'Instance Settings' -> 'Get System Log' (not seeing any commands run or log statements [only one unrelated mention of user])
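For reference, these are the places I understand the output should land if the script actually runs, based on the exec/tee/logger line in the script above (paths as on Amazon Linux):
# cloud-init's captured user-data output (written on first boot)
sudo cat /var/log/cloud-init-output.log
# the tee target from the exec line in the script
sudo cat /var/log/user-data.log
# lines tagged by `logger -t user-data` end up in syslog
sudo grep user-data /var/log/messages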
I am using the SOS job scheduler, which supports many languages. It accepts shell scripts for writing jobs, but I am not a shell script writer. I want to implement the following in the job scheduler:
execute a shell script A. Script A returns "success" if the time is between 6:00 AM and 3 PM; otherwise it returns "fail".
on "success" execute shell script C, or on "fail" execute shell script B.
Script B and script C send an email with "Success" or "Failure" in the subject line.
Please help me sort out the problem discussed above.
Thanks
There are two command line utilities that are helpful in this case:
date: Displays the current time/date in a specified format.
mail: Sends e-mail from the command line.
Since we only need the full hour for our logic, I use the date format "+%H" (hour from 00-23). This gives the following script basis:
#!/bin/sh
hour=$(date +%H)
if [ "$hour" -ge 6 ] && [ "$hour" -lt 15 ]; then
    echo "message body" | mail -s Success <your e-mail address>
else
    echo "message body" | mail -s Failure <your e-mail address>
fi
An alternative using a case statement and mailx (note that date +%H zero-pads the hour, so the patterns match 06-09 as well):
#!/bin/bash
hour=$(date +%H)
recipient="root"
case "$hour" in
    0[6-9]|1[0-5])
        subject="success"
        body="message"
        ;;
    *)
        subject="failure"
        body="message"
        ;;
esac
echo "$body" | mailx -s "$subject" "$recipient"
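If you specifically want the three separate scripts the question describes (A for the time check, B and C for the mails), a minimal split could look like this; the file names and recipient address are just placeholders:
#!/bin/sh
# scriptA.sh - exits 0 ("success") between 06:00 and 14:59, 1 ("fail") otherwise
hour=$(date +%H)
if [ "$hour" -ge 6 ] && [ "$hour" -lt 15 ]; then
    exit 0
else
    exit 1
fi

#!/bin/sh
# scriptC.sh - success mail (scriptB.sh is identical with "Failure" in the subject)
echo "message body" | mail -s "Success" someone@example.com

#!/bin/sh
# wrapper the scheduler runs: C on success, B on failure
if ./scriptA.sh; then
    ./scriptC.sh
else
    ./scriptB.sh
fi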
I'm running VisualSVN on a Windows server.
I'm trying to add a post-commit hook to update our staging project whenever a commit happens.
In VisualSVN, if I type the command in the hook/post-commit dialog, everything works great.
However, if I make a batch file with the exact same command, I get an error that says the post-commit hook has failed. There is no additional information.
My command uses absolute paths.
I've tried putting the batch file in the VisualSVN/bin directory, I get the same error there.
I've made sure VisualSVN has permissions for the directories where the batch file is.
The only thing I can think of is that I'm not calling it correctly from VisualSVN. I'm just replacing the svn update command in the hook/post-commit dialog with the batch file name ("c:\VisualSVN\bin\my-batch-file.bat"). I've tried it with and without the path (without the path it doesn't find the file at all).
Do I need a different syntax in the post-commit hook dialog to call the batch file? What about within the batch file? (It just has my svn update command; it works if I run the batch file from the command line.)
Ultimately I want to use a batch file because I want to do a few more things after the commit.
In VisualSVN Server: select the repo > Properties > Hooks > Post-commit hook.
Here is the code I use for sending an email and then running a script with the commands I want to customize:
"%VISUALSVN_SERVER%\bin\VisualSVNServerHooks.exe" ^
commit-notification "%1" -r %2 ^
--from support#domainname.com --to "support#domainname.com" ^
--smtp-server mail.domainname.com ^
--no-diffs ^
--detailed-subject
--no-html
set PWSH=%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe
%PWSH% -command $input ^| C:\ServerScripts\SVNScripts\post-commit-wp.ps1 %1 %2
if errorlevel 1 exit %errorlevel%
The script file is located at C:\ServerScripts\SVNScripts\post-commit-wp.ps1, and I pass in two VisualSVN variables as %1 and %2:
%1 = serverpathwithrep (the server path to the repository)
%2 = revision number
The script is written in Windows PowerShell.
# PATH TO SVN.EXE
$svn = "C:\Program Files\VisualSVN Server\bin\svn.exe"
$pathtowebistesWP = "c:\websites-wp\"
# STORE HOOK ARGUMENTS INTO FRIENDLY NAMES
$serverpathwithrep = $args[0]
$revision = $args[1]
# GET DIR NAME ONLY FROM REPO-PATH STRING
# EXAMPLE: C:\REPOSITORIES\DEVHOOKTEST
# RETURNS 'DEVHOOKTEST'
$dirname = ($serverpathwithrep -split '\\')[-1]
# Combine ServerPath with Dir name
$exportpath = -join($pathtowebistesWP, $dirname);
# BUILD URL TO REPOSITORY
$urepos = $serverpathwithrep -replace "\\", "/"
$url = "file:///$urepos/"
# --------------------------------
# SOME TESTING SCRIPTS
# --------------------------------
# STRING BUILDER PATH + DIRNAME
$name = -join($pathtowebistesWP, "testscript.txt");
# CREATE FILE ON SERVER
New-Item $name -ItemType file
# APPEND TEXT TO FILE
Add-Content $name $pathtowebistesWP
Add-Content $name $exportpath
# --------------------------------
# DO EXPORT REPOSITORY REVISION $REVISION TO THE ExportPath
&"$svn" export -r $revision --force "$url" $exportpath
I added comments to explain what each line does. In a nutshell, the script:
Gets all the parameters
Builds a local dir path
Runs SVN export
Places the files in a website/publish directory.
It's a simple way of deploying your newly committed code to a website.
Did you try to execute the batch file using the 'call' command? I mean:
call C:\Script\myscript.bat
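If the batch file needs the repository path and revision (as the PowerShell example above does), you can also forward the hook arguments; the path here is just an example:
rem Run the batch file from the post-commit hook and pass through the hook arguments
call C:\Script\myscript.bat %1 %2
if errorlevel 1 exit %errorlevel%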
I was trying the same thing and found that you must also place the script (the .bat file, that is) in the hooks folder.