I have been working with tfsec for about a week, so I am still figuring things out. So far the product is pretty awesome. That being said, I'm having a bit of trouble getting this custom check for Google Cloud SQL to work as expected. The goal of the check is to ensure the database flag for remote access is set to "off". The TF code below should pass the custom check, but it does not; instead I get an error (see below).
I figured maybe I am not using subMatch/predicateMatchSpec correctly, but no matter what I do the check keeps failing. There is a similar check that is included as a standard check for GCP. I ran the custom check logic through a YAML checker and it came back okay, so I can rule out any YAML-specific syntax errors.
TF Code (Pass example)
resource "random_id" "db_name_suffix" {
byte_length = 4
}
resource "google_sql_database_instance" "instance" {
provider = google-beta
name = "private-instance-${random_id.db_name_suffix.hex}"
region = "us-central1"
database_version = "SQLSERVER_2019_STANDARD"
root_password = "#######"
depends_on = [google_service_networking_connection.private_vpc_connection]
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = false
private_network = google_compute_network.private_network.id
require_ssl = true
}
backup_configuration {
enabled = true
}
password_validation_policy {
min_length = 6
reuse_interval = 2
complexity = "COMPLEXITY_DEFAULT"
disallow_username_substring = true
password_change_interval = "30s"
enable_password_policy = true
}
database_flags {
name = "contained database authentication"
value = "off"
}
database_flags {
name = "cross db ownership chaining"
value = "off"
}
database_flags {
name = "remote access"
value = "off"
}
}
}
Tfsec Custom Check:
---
checks:
  - code: SQL-01 Ensure Remote Access is disabled
    description: Ensure Remote Access is disabled
    impact: Prevents locally stored procedures from being run remotely
    resolution: configure remote access = off
    requiredTypes:
      - resource
    requiredLabels:
      - google_sql_database_instance
    severity: HIGH
    matchSpec:
      name: settings
      action: isPresent
      subMatchOne:
        - name: database_flags
          action: isPresent
          predicateMatchSpec:
            - name: name
              action: equals
              value: remote access
            - name: value
              action: equals
              value: off
    errorMessage: DB remote access has not been disabled
    relatedLinks:
      - http://testcontrols.com/gcp
Error Message
Error: invalid option: failed to load custom checks from ./custom_checks: Check did not pass the expected schema. yaml: unmarshal errors:
line 15: cannot unmarshal !!map into []custom.MatchSpec
I was finally able to get this working last night. This is what worked for me:
---
checks:
  - code: SQL-01 Ensure Remote Access is disabled
    description: Ensure Remote Access is disabled
    impact: Prevents locally stored procedures from being run remotely
    resolution: configure remote access = off
    requiredTypes:
      - resource
    requiredLabels:
      - google_sql_database_instance
    severity: HIGH
    matchSpec:
      name: settings
      action: isPresent
      predicateMatchSpec:
        - name: database_flags
          action: isPresent
          subMatch:
            name: name
            action: equals
            value: remote access
        - action: and
          subMatch:
            name: value
            action: equals
            value: off
    errorMessage: DB remote access has not been disabled
    relatedLinks:
      - http://testcontrols.com/gcp
Related
I have been using the Cloud Build API to get the latest image information from Google Cloud Build, which builds the Kubernetes deployment image for our Django backend application, so that I can trigger a new Job/Pod in our cluster.
Below is the code to collect the info.
from google.cloud.devtools import cloudbuild_v1

def sample_list_builds():
    # Create a client
    client = cloudbuild_v1.CloudBuildClient()

    # Initialize request argument(s)
    request = cloudbuild_v1.ListBuildsRequest(
        project_id="project_id_value",
    )

    # Make the request
    page_result = client.list_builds(request=request)

    # Handle the response
    for response in page_result:
        print(response)
I just want to exit the loop when the first successful build is found; however, I cannot figure out how to compare against Status.Success. It doesn't seem to be a string. What should I compare it against?
images: "eu.gcr.io/.../.../...-dev:f2529...0ac00402"
project_id: "..."
logs_bucket: "gs://106...1.cloudbuild-logs.googleusercontent.com"
source_provenance {
}
build_trigger_id: "...-d5fd-47b7-8949-..."
options {
substitution_option: ALLOW_LOOSE
logging: LEGACY
dynamic_substitutions: true
pool {
}
}
log_url: "https://console.cloud.google.com/cloud-build/builds/...-1106-44d5-a634-...?project=..."
substitutions {
key: "BRANCH_NAME"
value: "staging"
}
substitutions {
key: "COMMIT_SHA"
value: "..."
}
substitutions {
key: "REF_NAME"
value: "staging"
}
substitutions {
key: "REPO_NAME"
value: "videoo-app"
}
substitutions {
key: "REVISION_ID"
value: "....aa3f5276deda3c10ac00402"
}
substitutions {
key: "SHORT_SHA"
value: "f2529c2"
}
substitutions {
key: "TRIGGER_BUILD_CONFIG_PATH"
}
substitutions {
key: "TRIGGER_NAME"
value: "rmgpgab-videoo-app-dev-europe-west1-...--storb"
}
substitutions {
key: "_DEPLOY_REGION"
value: "europe-west1"
}
substitutions {
key: "_ENTRYPOINT"
value: "gunicorn -b :$PORT videoo.wsgi"
}
substitutions {
key: "_GCR_HOSTNAME"
value: "eu.gcr.io"
}
substitutions {
key: "_LABELS"
value: "gcb-trigger-id=...-d5fd-47b7-8949-..."
}
substitutions {
key: "_PLATFORM"
value: "managed"
}
substitutions {
key: "_SERVICE_NAME"
value: "videoo-app-dev"
}
substitutions {
key: "_TRIGGER_ID"
value: "...-d5fd-47b7-8949-..."
}
The following code is not working as expected:
def sample_list_builds():
    # Create a client
    client = cloudbuild_v1.CloudBuildClient()

    # Initialize request argument(s)
    request = cloudbuild_v1.ListBuildsRequest(
        project_id=settings.PROJECT_ID,
    )

    # Make the request
    page_result = client.list_builds(request=request)

    # Handle the response
    for response in page_result:
        print(response.status)
        if response.status == "Status.SUCCESS":
            print(response.results['images']['name'])
            break
How can I compare the status field against the SUCCESS case?
You should have both status and status_detail in the response.
If you import the Build type:
from google.cloud.devtools.cloudbuild_v1.types import Build
you should be able to use e.g. Build.Status.SUCCESS and compare against that.
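For example, a minimal sketch of the loop above using that enum (the project ID is a placeholder):

from google.cloud.devtools import cloudbuild_v1
from google.cloud.devtools.cloudbuild_v1.types import Build

def first_successful_build(project_id="project_id_value"):
    client = cloudbuild_v1.CloudBuildClient()
    request = cloudbuild_v1.ListBuildsRequest(project_id=project_id)
    for build in client.list_builds(request=request):
        # Compare against the enum member, not a string
        if build.status == Build.Status.SUCCESS:
            return build
    return None

build.images on the returned object then gives the list of image names for that successful build.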
As stated here, Cloud Build publishes messages on a Google Pub/Sub topic when your build's state changes. It could also be helpful to take a look at Pull subscriptions.
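If you do go the Pub/Sub route, a rough sketch of a pull subscriber is below; the subscription name is an assumption, and you would first need to create a pull subscription on the cloud-builds topic:

import json
from google.cloud import pubsub_v1

def check_build_notifications(project_id, subscription_id="cloud-builds-sub"):
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(project_id, subscription_id)
    # Pull a batch of build state-change notifications
    response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 10})
    for received in response.received_messages:
        # Each message body is the Build resource serialised as JSON
        build = json.loads(received.message.data.decode("utf-8"))
        if build.get("status") == "SUCCESS":
            print(build.get("images"))
        subscriber.acknowledge(request={"subscription": subscription_path,
                                        "ack_ids": [received.ack_id]})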
I am using Kitchen-Terraform to deploy and test an environment on GCP.
I am struggling to get the Kitchen/InSpec part to use the Terraform output values so I can use them in my tests.
This is what I have
My inspec.yml
name: default
depends:
  - name: inspec-gcp
    url: https://github.com/inspec/inspec-gcp/archive/master.tar.gz
supports:
  - platform: gcp
attributes:
  - name: gcloud_project
    required: true
    description: gcp project
    type: string
My Kitchen Yaml
driver:
  name: terraform
  root_module_directory: test/fixtures/tf_module

provisioner:
  name: terraform

verifier:
  name: terraform
  format: documentation
  systems:
    - name: default
      backend: gcp
      controls:
        - instance

platforms:
  - name: terraform

suites:
  - name: kt_suite
My Unit test
gcloud_project = attribute('gcloud_project',
  { description: "The name of the project where resources are deployed." })

control "instance" do
  describe google_compute_instance(project: "#{gcloud_project}", zone: 'us-central1-c', name: 'test') do
    its('status') { should eq 'RUNNING' }
    its('machine_type') { should match 'n1-standard-1' }
  end
end
my output.tf
output "gcloud_project" {
description = "The name of the GCP project to deploy against. We need this output to pass the value to tests."
value = "${var.project}"
}
The error I am getting is
× instance: /mnt/c/Users/Github/terra-test-project/test/integration/kt_suite/controls/default.rb:4
× Control Source Code Error /mnt/c/Users/Github/terra-test-project/test/integration/kt_suite/controls/default.rb:4
bad URI(is not URI?): "https://compute.googleapis.com/compute/v1/projects/Input 'gcloud_project' does not have a value. Skipping test./zones/us-central1-c/instances/test"
Everything works if I declare the project name directly in the control block, but I obviously don't want to hard-code it.
How can I get Kitchen/InSpec to use the Terraform outputs?
Looks like this may just be due to a typo. You've listed gcp_project under attributes in your inspec.yml but gcloud_project everywhere else.
I am not sure if this is fixed, but I am using something like the below and it works pretty well. I assume it could be the way you are using the google_project attribute.
Unit Test
dataset_name = input('dataset_name')
account_name = input('account_name')
project_id   = input('project_id')

control "gcp" do
  title "Google Cloud configuration"

  describe google_service_account(
    name: account_name,
    project: project_id
  ) do
    it { should exist }
  end

  describe google_bigquery_dataset(
    name: dataset_name,
    project: project_id
  ) do
    it { should exist }
  end
end
inspec.yml
name: big_query
depends:
  - name: inspec-gcp
    git: https://github.com/inspec/inspec-gcp.git
    tag: v1.8.0
supports:
  - platform: gcp
inputs:
  - name: dataset_name
    required: true
    type: string
  - name: account_name
    required: true
    type: string
  - name: project_id
    required: true
    type: string
I've been trying to deploy AWS WorkSpaces infrastructure using Terraform. The code passes validate and plan, but it fails to apply.
Source:
module "networking" {
source = "../../modules/networking"
region = var.region
main_cidr_block = var.main_cidr_block
cidr_block_1 = var.cidr_block_1
cidr_block_2 = var.cidr_block_2
size = var.size
}
resource "aws_directory_service_directory" "main" {
name = var.aws_ds_name
password = var.aws_ds_passwd
size = var.size
type = "SimpleAD"
vpc_settings {
vpc_id = module.networking.main_vpc
subnet_ids = ["${module.networking.private-0}", "${module.networking.private-1}"]
}
}
resource "aws_workspaces_directory" "main" {
directory_id = aws_directory_service_directory.main.id
subnet_ids = ["${module.networking.private-0}", "${module.networking.private-1}"]
}
resource "aws_workspaces_ip_group" "main" {
name = "Contractors."
description = "Main IP access control group"
rules {
source = "10.0.0.0/16"
description = "Contractors"
}
}
Error code:
ValidationException: 2 validation errors detected: Value at 'password' failed to satisfy constraint: Member must satisfy regular expression pattern: (?=^.{8,64}$)((?=.*\d)(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[^A-Za-z0-9\s])(?=.*[a-z])|(?=.*[^A-Za-z0-9\s])(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9\s]))^.*; Value '' at 'name' failed to satisfy constraint: Member must satisfy regular expression pattern: ^([a-zA-Z0-9]+[\\.-])+([a-zA-Z0-9])+$
status code: 400, request id: 073f6e61-775e-4ff9-a88e-e1eab97f8519
on modules/workspaces/workspaces.tf line 10, in resource "aws_directory_service_directory" "main":
10: resource "aws_directory_service_directory" "main" {
I am aware that it is a regex issue with the username/passwords, but I haven't set any users for now, and I've reset the security policies for testing reasons.
Anyone had this issue before?
The AWS API for the directory service enforces a constraint on the password attribute, which matches what you are seeing in that error when you run terraform apply:
Password
The password for the directory administrator. The directory creation
process creates a directory administrator account with the user name
Administrator and this password.
If you need to change the password for the administrator account, you
can use the ResetUserPassword API call.
Type: String
Pattern:
(?=^.{8,64}$)((?=.*\d)(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[^A-Za-z0-9\s])(?=.*[a-z])|(?=.*[^A-Za-z0-9\s])(?=.*[A-Z])(?=.*[a-z])|(?=.*\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9\s]))^.*
Required: Yes
Normally Terraform is able to validate this with the plan or validate commands, but unfortunately the AWS provider is currently missing an appropriate ValidateFunc, so at the minute it will only fail at apply time.
If you want this to be caught at plan or validate time then you should raise a feature request for it on the provider issue tracker.
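In the meantime, if you are on Terraform 0.13 or later, one possible stop-gap (a sketch, not the provider's own validation) is to add a validation block to the password variable. Note that Terraform's regex engine (RE2) cannot express the lookaheads in the AWS pattern, so this only catches the length constraint:

variable "aws_ds_passwd" {
  description = "Password for the SimpleAD directory administrator"
  type        = string

  # Partial check only: the full AWS pattern relies on lookaheads,
  # which Terraform's RE2-based regex() cannot express.
  validation {
    condition     = length(var.aws_ds_passwd) >= 8 && length(var.aws_ds_passwd) <= 64
    error_message = "The directory administrator password must be between 8 and 64 characters."
  }
}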
I was trying to create a Cloud Composer environment in GCP using Terraform (Terraform v0.12.5), but I am unable to launch the instance.
I am getting the following error
Error: Error waiting to create Environment: Error waiting for Creating Environment: Error code 3, message: Http error status code: 400
Http error message: BAD REQUEST
Additional errors:
{"ResourceType":"appengine.v1.version","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Legacy health checks are no longer supported for the App Engine Flexible environment. Please remove the 'health_check' section from your app.yaml and configure updated health checks. For instructions on migrating to split health checks see https://cloud.google.com/appengine/docs/flexible/java/migrating-to-split-health-checks","status":"INVALID_ARGUMENT","details":[],"statusMessage":"Bad Request","requestPath":"https://appengine.googleapis.com/v1/apps/qabc39fc336994cc4-tp/services/default/versions","httpMethod":"POST"}}
main.tf
resource "google_composer_environment" "sample-composer" {
provider= google-beta
project = "${var.project_id}"
name = "${var.google_composer_environment_name}"
region = "${var.region}"
config {
node_count = "${var.composer_node_count}"
node_config {
zone = "${var.zone}"
disk_size_gb = "${var.disk_size_gb}"
machine_type = "${var.composer_machine_type}"
network = google_compute_network.xxx-network.self_link
subnetwork = google_compute_subnetwork.xxx-subnetwork.self_link
}
software_config {
env_variables = {
AIRFLOW_CONN_SAMPLEMEDIA_FTP_CONNECTION = "ftp://${var.ftp_user}:${var.ftp_password}#${var.ftp_host}"
}
image_version = "${var.composer_airflow_version}"
python_version = "${var.composer_python_version}"
}
}
}
resource "google_compute_network" "sample-network" {
name = "composer-xxx-network"
project = "${var.project_id}"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "sample-subnetwork" {
name = "composer-xxx-subnetwork"
project = "${var.project_id}"
ip_cidr_range = "10.2.0.0/16"
region = "${var.region}"
network = google_compute_network.xxx-network.self_link
}
variables.tf
# Machine specific information for creating Instance in GCP
variable "project_id" {
  description = "The name of GCP project"
  default     = "sample-test"
}

variable "google_composer_environment_name" {
  description = "The name of the instance"
  default     = "sample-analytics-dev"
}

variable "region" {
  description = "The name of GCP region"
  default     = "europe-west1"
}

variable "composer_node_count" {
  description = "The number of node count"
  default     = "3"
}

variable "zone" {
  description = "The zone in which instance to be launched"
  default     = "europe-west1-c"
}

variable "disk_size_gb" {
  description = "The machine size in GB"
  default     = "100"
}

variable "composer_machine_type" {
  description = "The type of machine to be launched in GCP"
  default     = "n1-standard-1"
}

# Environmental Variables
variable "ftp_user" {
  description = "Environmental variables for FTP user"
  default     = "test"
}

variable "ftp_password" {
  description = "Environmental variables for FTP password"
  default     = "4444erf"
}

variable "ftp_host" {
  description = "Environmental variables for FTP host"
  default     = "sample.logs.llnw.net"
}

# Versions for Cloud Composer, Airflow and Python
variable "composer_airflow_version" {
  description = "The composer and airflow versions to launch instance in GCP"
  default     = "composer-1.7.2-airflow-1.10.2"
}

variable "composer_python_version" {
  description = "The version of python"
  default     = "3"
}

# Network information
variable "composer_network_name" {
  description = "Environmental variables for FTP user"
  default     = "composer-xxx-network"
}

variable "composer_subnetwork_name" {
  description = "Environmental variables for FTP user"
  default     = "composer-xxx-subnetwork"
}
Creating a Composer environment from the GCP console works without any issues; it is only when creating it with Terraform that it complains about the health check.
I've tested your use case with the Terraform binary in my GCP Cloud Shell and so far no issue occurred; the Composer environment was created successfully:
$ terraform -v
Terraform v0.12.9
+ provider.google v3.1.0
+ provider.google-beta v3.1.0
A few concerns from my side:
The issue you've reported might be related to the usage of legacy health checks, which are essentially deprecated and replaced by split health checks:
As of September 15, 2019, if you're using the legacy health checks,
your application will continue to run and receive health checks but
you won't be able to deploy new versions of your application.
You haven't specified your Terraform GCP provider version, and I suspect the issue may be hidden there: as I've seen in this Changelog, split_health_checks are enabled in google_app_engine_application.feature_settings since 3.0.0-beta.1 was released.
Feel free to add more details so we can help you resolve the current issue.
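If the provider version does turn out to be the culprit, a minimal sketch of pinning the google-beta provider to a 3.x release (the exact version constraint here is an assumption based on the Changelog note above):

provider "google-beta" {
  # 3.0.0-beta.1 and later enable split_health_checks in
  # google_app_engine_application.feature_settings by default.
  version = "~> 3.1"
  project = var.project_id
  region  = var.region
}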
I am trying to make a common module for building an RDS cluster; however, I want to be able to choose whether it builds from a snapshot or from scratch.
I used count to choose whether to perform the data source lookup, which works. However, if count is 0 and the lookup doesn't run, the resource fails because it doesn't know what data.aws_db_cluster_snapshot.latest_cluster_snapshot is. Is there a way around this that I'm not seeing?
Datasource:
data "aws_db_cluster_snapshot" "latest_cluster_snapshot" {
count = "${var.enable_restore == "true" ? 1 : 0}"
db_cluster_identifier = "${var.snapshot_to_restore_from}"
most_recent = true
}
Resource:
resource "aws_rds_cluster" "aurora_cluster" {
...
snapshot_identifier = "${var.enable_restore == "false" ? "" : data.aws_db_cluster_snapshot.latest_cluster_snapshot.id}"
...
}
Versions:
Terraform v0.11.10
provider.aws v2.33.0
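One common workaround in Terraform 0.11 (a sketch, untested against this exact module) is to reference the counted data source with the splat syntax and pad the result with concat, so the expression stays valid even when count is 0:

resource "aws_rds_cluster" "aurora_cluster" {
  ...
  # The splat returns an empty list when count = 0; concat pads it so element() always has a value
  snapshot_identifier = "${var.enable_restore == "false" ? "" : element(concat(data.aws_db_cluster_snapshot.latest_cluster_snapshot.*.id, list("")), 0)}"
  ...
}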