logging.rb
require 'logging'

# Dedicated logger for ActiveAdmin, writing both to STDOUT and its own file.
activeadmin = Logging.logger['RootLogger::ActiveAdmin']
activeadmin.add_appenders(
  Logging.appenders.stdout,
  Logging.appenders.file('active_admin.log')
)
activeadmin.additive = true
activeadmin.level = "fatal"
How can I log ActiveAdmin resources separately?
I use the Terraform modules terraform-google-modules/folders/google and terraform-google-modules/project-factory/google, and I cannot figure out how to reference one created folder that I need to use in the project-factory module. The output also seems to be a set of IDs, not a single ID.
https://registry.terraform.io/modules/terraform-google-modules/project-factory/google/latest
https://registry.terraform.io/modules/terraform-google-modules/folders/google/latest
I would like to reference the common folder ID so the project I create sits in the right folder hierarchy.
I read plenty of documentation and tried to find examples, but everything I find around folders covers the folder resource, not the folder module.
// Set up the folder structure in GCP
module "folders" {
  source  = "terraform-google-modules/folders/google"
  version = "~> 3.0"

  parent = "folders/60860476666"

  names = [
    "development",
    "staging",
    "production",
    "common",
  ]

  set_roles = true
  prefix    = "fldr"

  per_folder_admins = {
    dev        = "group:gcp-organization-admins#mydomain.me"
    staging    = "group:gcp-organization-admins#mydomain.me"
    production = "group:gcp-organization-admins#mydomain.me"
  }

  all_folder_admins = [
    "group:gcp-organization-admins#mydomain.me",
  ]
}
// Set up the logging project
module "project-factory-logging" {
  source  = "terraform-google-modules/project-factory/google"
  version = "~> 14.0"

  random_project_id       = true
  name                    = "shared-logging"
  folder_id               = module.folders.????   // <- how do I reference the "common" folder here?
  org_id                  = var.organization_id
  billing_account         = var.billing_account
  default_service_account = "deprivilege"
}
To get the common folder ID, the following should be enough:
folder_id = module.folders.folders_map["common"].name
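Plugged back into the project-factory call from the question, the reference looks like this (a sketch based on the snippet above; it assumes the folders module's folders_map output maps each configured name to its google_folder resource, whose name attribute is the folders/<id> string the project factory expects):
module "project-factory-logging" {
  source  = "terraform-google-modules/project-factory/google"
  version = "~> 14.0"

  random_project_id       = true
  name                    = "shared-logging"
  // Reference the "common" folder created by the folders module.
  folder_id               = module.folders.folders_map["common"].name
  org_id                  = var.organization_id
  billing_account         = var.billing_account
  default_service_account = "deprivilege"
}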
I am trying to set up continuous export from SCC to a Pub/Sub topic. Using the Terraform resource google_scc_notification_config, I am trying to set up the filter so that it includes all values. Has anyone been able to set this in Terraform so that all values (i.e. active, inactive, all) are included?
Code example
resource "google_scc_notification_config" "custom_notification_config" {
config_id = "scc-findings-export"
organization = "*******"
description = "Export"
pubsub_topic = "ref-to-pub-sub-topic"
streaming_config {
filter = var.scc_notification_filter
}
}
Looking for a suitable value for the scc_notification_filter variable to include all possible data.
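One candidate, as a sketch: the SCC finding-filter grammar supports equality comparisons combined with OR, so a filter that names every state explicitly should match everything. Whether an empty filter string also matches all findings is an assumption to verify against the google_scc_notification_config documentation.
# Sketch: match findings in every state. The exact state values and the
# empty-filter behaviour are assumptions to verify against the SCC docs.
variable "scc_notification_filter" {
  type    = string
  default = "state = \"ACTIVE\" OR state = \"INACTIVE\""
}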
We're hitting an issue with our Django app and are unable to find the underlying problem.
Our Django app runs on Kubernetes and is managed by Helm. When we upgrade the app, a Helm upgrade job is triggered that makes sure a manage.py migrate is run. The migrate job runs with database admin privileges, not the customer's Postgres role.
The error we're getting is:
InsufficientPrivilege: permission denied for table RouteInstance
This error must have something to do with creating a reference from a new table to an existing table, but we can't find it. Maybe it's in the Terraform resource config, or maybe the grants are not sufficient. We don't use any extra grants, since the admin account should be a real admin (DigitalOcean doadmin).
Any help would be awesome, we're stuck at the moment.
Some context on our app deployment:
The Helm template is deployed by Terraform; the application deployment consists of:
Create a subdomain record
Create a namespace
Create S3 buckets and an account
Deploy the Helm chart (helm_release); here we use a values list with a templatefile. Among these values are the variables:
postgres_user_username (used to run the app)
postgres_user_password
postgres_admin_username (used to perform the helm upgrade job containing the migrate)
postgres_admin_password
Create a postgresql_role for the customer (their app runs under the customer's Postgres role)
Create a postgresql_database for the app
Set postgresql_default_privileges (table, sequence); see config below
Terraform resource configs:
resource "postgresql_role" "postgres_role" {
name = "customer_user"
login = true
password = "redacted..."
}
resource "postgresql_database" "app_database" {
name = "app_database"
owner = postgresql_role.postgres_role.name
depends_on = [ postgresql_role.postgres_role ]
}
resource "postgresql_default_privileges" "postgres_privileges_database" {
role = postgresql_role.postgres_role.name
database = postgresql_database.app_database.name
schema = "public"
owner = postgresql_role.postgres_role.name
object_type = "table"
privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
resource "postgresql_default_privileges" "postgres_privileges_sequence" {
role = postgresql_role.postgres_role.name
database = postgresql_database.app_database.name
schema = "public"
owner = postgresql_role.postgres_role.name
object_type = "sequence"
privileges = ["USAGE", "SELECT"]
}
resource "postgresql_grant" "postgres_grant_database" {
database = postgresql_database.app_database.name
role = postgresql_role.postgres_role.name
object_type = "database"
privileges = ["CONNECT"]
}
resource "postgresql_grant" "postgres_grant_tables" {
database = postgresql_database.app_database.name
role = postgresql_role.postgres_role.name
schema = "public"
object_type = "table"
privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
resource "postgresql_grant" "postgres_grant_sequences" {
database = postgresql_database.app_database.name
role = postgresql_role.postgres_role.name
schema = "public"
object_type = "sequence"
privileges = ["USAGE", "SELECT", "UPDATE"]
}
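One avenue worth checking (an assumption based on the error, not a confirmed diagnosis): on a managed DigitalOcean cluster, doadmin is not a true Postgres superuser, so it cannot automatically operate on tables owned by customer_user, such as ones the app created earlier. Granting the customer role to the admin role would let the migrate job act on those tables. A minimal sketch, assuming doadmin is the role the migration runs as:
# Sketch (assumption): make the admin/migration role a member of the
# customer role so migrations can reference tables owned by customer_user.
resource "postgresql_grant_role" "admin_in_customer_role" {
  role       = "doadmin"
  grant_role = postgresql_role.postgres_role.name
}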
I am working with Terraform to create an auth0_connection from GCP API credentials:
resource "auth0_connection" "my_connection" {
name = "my_connection"
strategy = "google-apps"
is_domain_connection = false
options {
allowed_audiences = ["myDomain.com"]
scopes = ["email", "profile"]
api_enable_users = true
}
}
But I get this error:
400 Bad Request: undefined is not a valid google apps domain
I know that I have to add the domain, but when I added it to options it didn't work for me.
Should I add a configuration with a default domain in my Auth0 account, or should I add a configuration in GCP?
As per the reference from terraform.io, your Auth0 account may be pre-configured with a google-oauth2 connection. With the google-oauth2 connection strategy, options supports the following arguments:
resource "auth0_connection" "google_oauth2" {
name = "Google-OAuth2-Connection"
strategy = "google-oauth2"
options {
client_id = "<client-id>"
client_secret = "<client-secret>"
allowed_audiences = [ "example.com", "api.example.com" ]
scopes = [ "email", "profile", "gmail", "youtube" ]
set_user_root_attributes = "on_each_login"
}
}
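If you specifically need the google-apps (Google Workspace) strategy rather than google-oauth2, the "undefined is not a valid google apps domain" error suggests the Workspace domain itself has to be set inside options. A hedged sketch; whether the attribute is named domain and is accepted for this strategy is an assumption to verify against the Auth0 provider docs for your version:
resource "auth0_connection" "my_connection" {
  name                 = "my_connection"
  strategy             = "google-apps"
  is_domain_connection = false

  options {
    # Assumption: the Workspace domain attribute; confirm the exact name
    # in the auth0 provider documentation before applying.
    domain            = "myDomain.com"
    allowed_audiences = ["myDomain.com"]
    scopes            = ["email", "profile"]
    api_enable_users  = true
  }
}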
I'm planning to move all locally stored images to AWS S3.
To make the transition as smooth as possible, I don't want to migrate in one step. Instead, I want the application to check whether there's already an image stored in S3. If not, it should fall back to the local file system and use the old one.
Is that possible?
Update:
Here is the setup
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     '...',
    aws_secret_access_key: '...',
  }
  config.fog_directory = Rails.env
  config.fog_public    = false
end
This is the model
class Item < ActiveRecord::Base
  mount_uploader :image, ImageUploader
end
Nothing special at all. Is the setup somehow relevant to a potential solution?
You can also override each CarrierWave setting at the uploader level.
class AvatarUploader < CarrierWave::Uploader::Base
  # Choose what kind of storage to use for this uploader:
  storage :fog

  # Define some uploader-specific configuration in the initializer
  # to override the global configuration.
  def initialize(*)
    super
    self.fog_credentials = {
      provider:              'AWS',            # required
      aws_access_key_id:     'YOURAWSKEYID',   # required
      aws_secret_access_key: 'YOURAWSSECRET',  # required
    }
    self.fog_directory = "YOURBUCKET"
  end
end
Take a look at the Carrierwave Wiki Page.
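For the original fallback question, one approach is to override url in the uploader so it serves the S3 URL when the object exists there and otherwise points at the legacy local file. A minimal sketch, assuming the old files still live under the uploader's store_dir inside public/ and that one HEAD request per lookup is acceptable:
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog

  # Serve the S3 URL when the object exists there; otherwise fall back
  # to the legacy file on the local file system.
  def url(*args)
    if file.present? && file.exists? # HEAD request against S3
      super
    else
      local = Rails.public_path.join(store_dir, identifier.to_s)
      "/#{store_dir}/#{identifier}" if File.exist?(local)
    end
  end
end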