Log multiple uwsgi loggers to stdout - Django

I'm running uwsgi inside a Docker container for a Django application. I want the uwsgi, request, and Django logs formatted differently, so I created the following configuration in my .ini file:
[uwsgi]
logger = main file:/dev/stdout
logger = django file:/dev/stdout
logger = req file:/dev/stdout
log-format = "method": "%(method)", "uri": "%(uri)", "proto": "%(proto)", "status": %(status), "referer": "%(referer)", "user_agent": "%(uagent)", "remote_addr": "%(addr)", "http_host": "%(host)", "pid": %(pid), "worker_id": %(wid), "core": %(core), "async_switches": %(switches), "io_errors": %(ioerr), "rq_size": %(cl), "rs_time_ms": %(msecs), "rs_size": %(size), "rs_header_size": %(hsize), "rs_header_count": %(headers), "event": "uwsgi_request"
log-route = main ^((?!django).)*$
log-route = django django
log-route = req uwsgi_request
log-encoder = format:django ${msg}
log-encoder = nl:django
log-encoder = json:main {"timestamp": "${strftime:%%Y-%%m-%%dT%%H:%%M:%%S+00:00Z}", "message":"${msg}","severity": "INFO"}
log-encoder = nl:main
log-encoder = format:req {"timestamp": "${strftime:%%Y-%%m-%%dT%%H:%%M:%%S}", ${msg}}
log-encoder = nl:req
The problem is that now my django and req logs don't show up. I'm guessing that's because multiple loggers want to write to /dev/stdout and can't; I confirmed this by turning off some of the log routes and seeing everything work.
How can I (1) write everything to stdout while (2) formatting my logs differently based on a regex?
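One avenue that may be worth trying, as a hedged sketch rather than a confirmed fix: uwsgi's stdio logger writes through the master's standard output instead of opening /dev/stdout as a file, which avoids several loggers contending for the same file. This assumes the stdio logger is compiled into your uwsgi build:
[uwsgi]
# hedged sketch: swap file:/dev/stdout for the stdio logger
logger = main stdio
logger = django stdio
logger = req stdio
# keep the existing log-format, log-route, and log-encoder lines unchanged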


Packer: Receiving ID not implemented for builder when using build.ID

When trying to pass build.ID through to a shell-local post-processor, the evaluated string in the post-processor is ERR_ID_NOT_IMPLEMENTED_BY_BUILDER. I am using vsphere-iso.
The docs mention
Here is the list of available build variables:
ID: Represents the VM being provisioned. For example, in Amazon it is the instance ID; in DigitalOcean, it is the Droplet ID; in VMware, it is the VM name.
So I assumed it was supported with vsphere-iso?
Basically I am trying to pass the evaluated VM/template name through to a PowerShell post-processor.
Here is the post processor config:
post-processor "shell-local" {
environment_vars = [
"VCENTER_USER=${var.vsphere_username}",
"VCENTER_PASSWORD=${var.vsphere_password}",
"VCENTER_SERVER=${var.vsphere_endpoint}",
"TEMPLATE_NAME=${build.ID}",
"TEMPLATE_UUID=${local.build_uuid}",
]
env_var_format = "$env:%s=\"%s\"; "
execute_command = ["${var.common_post_processor_cli}.exe", "{{.Vars}} {{.Script}}"]
script = "scripts/windows/cleanup.ps1"
}
Here is the post-processor script:
param(
    [string]
    $TemplateName = $env:TEMPLATE_NAME
)
Write-Host $TemplateName
Here is the result logged to the console:
==> vsphere-iso.windows-server-standard-dexp (shell-local): Running local shell script: scripts/windows/cleanup.ps1
vsphere-iso.windows-server-standard-dexp (shell-local): ERR_ID_NOT_IMPLEMENTED_BY_BUILDER
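One hedged workaround sketch, in lieu of build.ID: if the template name is already available in a variable you control (here a hypothetical var.vm_name, i.e. whatever you pass to the builder as its VM name), it can be passed through directly:
post-processor "shell-local" {
  environment_vars = [
    "VCENTER_USER=${var.vsphere_username}",
    "VCENTER_PASSWORD=${var.vsphere_password}",
    "VCENTER_SERVER=${var.vsphere_endpoint}",
    # hypothetical: var.vm_name is the same variable handed to the
    # vsphere-iso builder as its vm_name
    "TEMPLATE_NAME=${var.vm_name}",
    "TEMPLATE_UUID=${local.build_uuid}",
  ]
  env_var_format  = "$env:%s=\"%s\"; "
  execute_command = ["${var.common_post_processor_cli}.exe", "{{.Vars}} {{.Script}}"]
  script          = "scripts/windows/cleanup.ps1"
}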

AWS sagemaker endpoint received client (400) error

I've deployed a tensorflow multi-label classification model using a sagemaker endpoint as follows:
predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type="ml.m5.2xlarge", endpoint_name='testing-2')
It gets deployed and works fine when I invoke it from the SageMaker Jupyter instance:
sample = ['this movie was extremely good']
output = predictor.predict(sample)
output:
{'predictions': [[0.00370046496,
                  4.32942124e-06,
                  0.00080883503,
                  9.25126587e-05,
                  0.00023958087,
                  0.000130862]]}
However, I am unable to send a request to the deployed endpoint from other notebooks or SageMaker Studio; I'm unsure of the request format. I've tried several variations of the input format and all of them failed. The error message is below.
Request:
{
    "body": {
        "text": "Testing model's prediction on this text"
    },
    "contentType": "application/json",
    "endpointName": "testing-2",
    "customURL": "",
    "customHeaders": [
        {
            "Key": "sm_endpoint_name",
            "Value": "testing-2"
        }
    ]
}
Error:
Error invoking endpoint: Received client error (400) from primary with message "{ "error": "Failed to process element:
0 key: text of 'instances' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: text" }".
See https://us-west-2.console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/aws/sagemaker/Endpoints/testing-2
in account 793433463428 for more information.
Is there any way to find out exactly how the model expects the request format to be?
Earlier I had the same model on my local system and the way I tested it was using this curl request:
curl -s -H 'Content-Type: application/json' -d '{"text": "what ugly posts"}' http://localhost:7070/sentiment
And it worked fine without any issues.
I've tried different formats, replacing the "text" key inside the body with other keys like "input" and "body", or dropping the key entirely.
Based on your description above, I assume you are deploying the TensorFlow model using the SageMaker TensorFlow container.
If you want to view what your model expects as input, you can use the saved_model CLI. Given a saved model directory laid out like this:
1
├── keras_metadata.pb
├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00001
    └── variables.index
run:
!saved_model_cli show --all --dir {"1"}
After you have confirmed the input name above you can invoke the endpoint as follows:
import json
import boto3

client = boto3.client('runtime.sagemaker')

data = {"instances": ['this movie was extremely good']}
response = client.invoke_endpoint(EndpointName='<EndpointName>',  # your endpoint name
                                  ContentType='application/json',  # the TF Serving container expects JSON
                                  Body=json.dumps(data))
response_body = response['Body']
print(response_body.read())
The same payload can then also be used in Studio when invoking the endpoint.
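If saved_model_cli shows that the serving signature takes a named input (the 400 above says the model does not have a named input called text), each element of the instances list must be an object keyed by that name. A minimal sketch, using the hypothetical input name input_1; substitute whatever saved_model_cli actually reports:
import json
import boto3

client = boto3.client('runtime.sagemaker')

# 'input_1' is a hypothetical name; use the input name printed by
# saved_model_cli for your model's serving signature
data = {"instances": [{"input_1": "this movie was extremely good"}]}
response = client.invoke_endpoint(EndpointName='<EndpointName>',
                                  ContentType='application/json',
                                  Body=json.dumps(data))
print(response['Body'].read())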

mongoengine ConnectionError with Django REST Framework

I am trying to build a Django REST Framework service with MongoDB. It works locally, but in production I'm using MongoLab as the DB backend and I'm not able to make the DB connection; I keep getting a DB authentication error.
command SON([('authenticate', 1), ('user', u'XXXXX'), ('nonce', u'XXXXX'), ('key', u'XXXXXX')]) failed: auth failed
Connection establishment code in settings file:
MONGODB_DATABASES = {
    "name": "XXXXX",
    "host": "XXX.mlab.com",
    "port": 33212,
    "username": "XXXX",
    "password": "XXXX"
}

mongoengine.connect(
    db=MONGODB_DATABASES['name'],
    host=MONGODB_DATABASES['host'],
    port=MONGODB_DATABASES['port'],
    username=MONGODB_DATABASES['username'],
    password=MONGODB_DATABASES['password'],
)
The MongoLab mongod version is 3.6.6 (MMAPv1). Please tell me what I did wrong.
I solved the issue by connecting mongoengine to mLab like this:
mongoengine.connect(
    "DB-Name",
    host="mongodb://username:password@XXXXX.mlab.com:33252/db-name"
)
Thanks Micheal J Roberts
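To double-check that the connection really authenticates, a minimal smoke test can help. This is just a sketch with a hypothetical Ping document, not part of the original fix:
import mongoengine

class Ping(mongoengine.Document):
    note = mongoengine.StringField()

# Saving one document forces a round trip; it raises an error
# if authentication or connectivity is still broken
Ping(note="connection test").save()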

How to run a Google Cloud Build trigger via cli / rest api / cloud functions?

Is there such an option? My use case would be running a trigger for a production build (one that deploys to production). Ideally, that trigger doesn't need to listen for any changes, since it is invoked manually via a chatbot.
I saw the video CI/CD for Hybrid and Multi-Cloud Customers (Cloud Next '18) announcing API trigger support, but I'm not sure if that's what I need.
I did the same thing a few days ago.
You can submit your builds using gcloud or the REST API.
gcloud:
gcloud builds submit --no-source --config=cloudbuild.yaml --async --format=json
REST API:
Send your cloudbuild.yaml as JSON, with an auth token, to this URL: https://cloudbuild.googleapis.com/v1/projects/standf-188123/builds?alt=json
example cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  id: Docker Version
  args: ["version"]
- name: 'alpine'
  id: Hello Cloud Build
  args: ["echo", "Hello Cloud Build"]
example rest_json_body:
{"steps": [{"args": ["version"], "id": "Docker Version", "name": "gcr.io/cloud-builders/docker"}, {"args": ["echo", "Hello Cloud Build"], "id": "Hello Cloud Build", "name": "alpine"}]}
This now seems to be possible via API:
https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
request.json:
{
    "projectId": "*****",
    "commitSha": "************"
}
curl request (using a gcloud command for the access token):
PROJECT_ID="********" TRIGGER_ID="*******************"; curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
--format='value(credential.access_token)')" \
https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
You can use the Google API client to create build jobs with Python:
import operator
from functools import reduce
from typing import Dict, List, Union

from google.oauth2 import service_account
from googleapiclient import discovery


class GcloudService():
    def __init__(self, service_token_path, project_id: Union[str, None]):
        self.project_id = project_id
        self.service_token_path = service_token_path
        self.credentials = service_account.Credentials.from_service_account_file(self.service_token_path)


class CloudBuildApiService(GcloudService):
    def __init__(self, *args, **kwargs):
        super(CloudBuildApiService, self).__init__(*args, **kwargs)
        scoped_credentials = self.credentials.with_scopes(['https://www.googleapis.com/auth/cloud-platform'])
        self.service = discovery.build('cloudbuild', 'v1', credentials=scoped_credentials, cache_discovery=False)

    def get(self, build_id: str) -> Dict:
        return self.service.projects().builds().get(projectId=self.project_id, id=build_id).execute()

    def create(self, image_name: str, gcs_name: str, gcs_path: str, env: Dict = None):
        args: List[str] = self._get_env(env) if env else []
        opt_params: List[str] = [
            '-t', f'gcr.io/{self.project_id}/{image_name}',
            '-f', f'./{image_name}/Dockerfile',
            f'./{image_name}'
        ]
        build_cmd: List[str] = ['build'] + args + opt_params
        body = {
            "projectId": self.project_id,
            "source": {
                'storageSource': {
                    'bucket': gcs_name,
                    'object': gcs_path,
                }
            },
            "steps": [
                {
                    "name": "gcr.io/cloud-builders/docker",
                    "args": build_cmd,
                },
            ],
            "images": [
                f'gcr.io/{self.project_id}/{image_name}'
            ],
        }
        return self.service.projects().builds().create(projectId=self.project_id, body=body).execute()

    def _get_env(self, env: Dict) -> List[str]:
        env: List[str] = [['--build-arg', f'{key}={value}'] for key, value in env.items()]
        # Flatten the list of ['--build-arg', 'k=v'] pairs into one flat list
        return reduce(operator.iconcat, env, [])
Here is the documentation so that you can implement more functionality: https://cloud.google.com/cloud-build/docs/api
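For illustration, a hypothetical usage of the class above (the token path, project ID, bucket, object path, and image name are all placeholders):
# All values below are placeholders for your own project
service = CloudBuildApiService('service-account.json', project_id='my-project')
operation = service.create(image_name='my-image',
                           gcs_name='my-source-bucket',
                           gcs_path='source/archive.tar.gz',
                           env={'APP_ENV': 'production'})
# The returned operation describes the queued build
print(operation)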
Hope this helps.
If you just want to create a function that you can invoke directly, you have two choices:
An HTTP trigger with a standard API endpoint
A pubsub trigger that you invoke by sending a message to a pubsub topic
The first is the more common approach, as you are effectively creating a web API that any client can call with an HTTP library of their choice.
You should be able to manually trigger a build using curl and a json payload.
For details see: https://cloud.google.com/cloud-build/docs/running-builds/start-build-manually#running_builds.
Given that, you could write a Python Cloud Function to replicate the curl call via the requests module.
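A hedged sketch of such a Cloud Function, assuming its service account is allowed to run builds; the project and trigger IDs are placeholders, and the triggers:run endpoint and body follow the docs linked above:
import google.auth
import google.auth.transport.requests
import requests

def run_build_trigger(request):
    project_id = 'my-project'      # placeholder
    trigger_id = 'my-trigger-id'   # placeholder

    # Fetch an access token from the function's default credentials
    credentials, _ = google.auth.default(
        scopes=['https://www.googleapis.com/auth/cloud-platform'])
    credentials.refresh(google.auth.transport.requests.Request())

    url = ('https://cloudbuild.googleapis.com/v1/projects/'
           f'{project_id}/triggers/{trigger_id}:run')
    # The body is a RepoSource; branchName is one way to pick the source
    resp = requests.post(
        url,
        headers={'Authorization': f'Bearer {credentials.token}'},
        json={'branchName': 'master'})
    return resp.text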
I was in search of the same thing (Fall 2022) and, while I haven't tested it yet, I wanted to answer before I forget. It appears to be available now via gcloud beta builds triggers run TRIGGER.
You can trigger a function via
gcloud functions call NAME --data 'THING'
Inside your function you can do pretty much anything possible within Google's public APIs.
If you just want to trigger Google Cloud Build directly from git, then it's probably advisable to use release version tags, so your chatbot might add a release tag to your release branch in git, at which point Cloud Build will start the build.
More info here: https://cloud.google.com/cloud-build/docs/running-builds/automate-builds

Can't run dev web site using python manage.py

I can't run the app from the command line after adding more configuration to the settings.py file.
python manage.py runserver
usage: manage.py [-h] [--config-dir DIR] [--config-file PATH] [--version]
manage.py: error: unrecognized arguments: runserver
This is configloader.py:
from oslo_config import cfg
# from oslo.config import cfg

rabbitmq_opt_group = cfg.OptGroup(name='rabbitmq', title='Configuration for rabbitmq')
rabbitmq_opts = [
    cfg.StrOpt('server_ip', default='127.0.0.1', help='ip of rabbitmq server'),
    cfg.IntOpt('server_port', default=5672, help='port of rabbitmq server'),
    cfg.FloatOpt('retry_time', default=0.2, help='interval for retry connect to server'),
    cfg.FloatOpt('interval_increase', default=0.2, help='increase unit after connect to server fail'),
    cfg.IntOpt('max_increase', default=10, help='Max sleep time when try to connect to server'),
    cfg.StrOpt('username', default='guest', help='username of account to connect rabbitmq server'),
    cfg.StrOpt('password', default='guest', help='password of account to connect rabbitmq server'),
]

worker_opt_group = cfg.OptGroup(name='worker', title='Configuration of worker')
worker_opts = [
    cfg.IntOpt('max_worker', default='10', help='max worker of service'),
    cfg.IntOpt('qos_worker', default='50', help='Max message can consumer by worker in concurrently'),
    cfg.StrOpt('queue_name', default='CTL_MJPEG', help='Listening queue name')
]

keep_alive_group = cfg.OptGroup(name='keepaliveworker', title='Configuration of keep alive worker')
keep_alive_opts = [
    cfg.IntOpt('max_worker', default='10', help='max worker of keep alive service'),
    cfg.IntOpt('qos_worker', default='50', help='Max message can consumer by worker in concurrently'),
    cfg.StrOpt('queue_name', default='CTL_MJPEG_RECOVERY', help='listening queue name')
]

monitor_queue_group = cfg.OptGroup(name='queuemonitor', title='Configuration of queue monitor')
monitor_queue_opts = [
    cfg.IntOpt('max_worker', default='1', help='max worker of keep alive service'),
    cfg.StrOpt('queue_name', default='MONITOR_QUEUE', help='Queue name using receiver event'),
    cfg.IntOpt('qos_worker', default='50', help='Max message can consumer by worker in concurrently'),
    cfg.StrOpt('monitor_topic', default='queue.*',
               help='Monitor queue when queue have been deleted(recovery function)'),
]

log_group = cfg.OptGroup(name='logcfg', title='Log Configuration of queue monitor')
log_opts = [
    cfg.StrOpt('log_cfg_dir', default='/etc/cloudmjpeg/log.conf.d', help='Directory save log config'),
    cfg.StrOpt('monitor_log', help='log configuration for monitor server'),
    cfg.StrOpt('worker_log', help='log configuration for monitor server'),
    cfg.StrOpt('queue_monitor_log', help='log configuration for queue monitor server'),
    cfg.StrOpt('keep_alive_log', help='log configuration for monitor server'),
]

portal_group = cfg.OptGroup(name='portal', title='Configuration about interact with portal')
portal_opts = [
    cfg.BoolOpt('send_file_info', default=False, help='Enable send file info to portal'),
]

alarming_group = cfg.OptGroup(name='alarming', title='Configuration about alarming to portal to send mail to customer')
alarming_opts = [
    cfg.BoolOpt('file_size', default=False, help='Enable alarming for file size'),
    cfg.BoolOpt('camera_status_change', default=False, help='Enable alarming when status of camera change')
]

monitor_group = cfg.OptGroup(name='monitor', title='Configuration using keep alive data')
monitor_opts = [
    cfg.IntOpt('check_interval', default=60, help='Interval check data'),
    cfg.StrOpt('email_subject', default='Keep Alive Monitor', help='Subject of Email send to admin'),
    cfg.IntOpt('check_alive', default=60, help='If start and end time have interval is check alive, then worker died')
]

ffserver_group = cfg.OptGroup(name='ffserver', title='Configuration for ffserver')
ffserver_opts = [
    cfg.IntOpt(name='ffm_file_size', default=500, help='Size of ffm temp. Unit kilo bytes'),
    cfg.StrOpt(name='ffm_dir', default='/tmp/ffmpeg-temp/', help='FFm temp file location'),
]

def parser(conf):
    CONF = cfg.CONF
    CONF.register_group(rabbitmq_opt_group)
    CONF.register_opts(rabbitmq_opts, rabbitmq_opt_group)
    CONF.register_group(worker_opt_group)
    CONF.register_opts(worker_opts, worker_opt_group)
    CONF.register_group(keep_alive_group)
    CONF.register_opts(keep_alive_opts, keep_alive_group)
    CONF.register_group(monitor_queue_group)
    CONF.register_opts(monitor_queue_opts, monitor_queue_group)
    CONF.register_group(log_group)
    CONF.register_opts(log_opts, log_group)
    CONF.register_group(portal_group)
    CONF.register_opts(portal_opts, portal_group)
    CONF.register_group(alarming_group)
    CONF.register_opts(alarming_opts, alarming_group)
    CONF.register_group(monitor_group)
    CONF.register_opts(monitor_opts, monitor_group)
    CONF.register_group(ffserver_group)
    CONF.register_opts(ffserver_opts, ffserver_group)
    CONF(default_config_files=conf)
    return CONF

def get_configuration():
    CONF = parser(['/etc/cloudmjpeg/cloudmjpeg.conf'])
    return CONF
I added my configuration to settings.py; it loads the configuration from /etc/cloudmjpeg/cloudmjpeg.conf:
from cloudmjpeg.utils import configloader
MJPEG_CONF = configloader.get_configuration()
However, if I remove loading my configuration in settings.py, the app runs perfectly from the python manage.py command line. So I think loading my configuration in settings.py causes the error. Why? I don't have any idea how to resolve this problem. Please help me.
Thanks
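A hedged guess at the cause: oslo_config's CONF(...) call parses sys.argv by default, so when manage.py imports settings.py, oslo_config sees Django's runserver argument and rejects it as unrecognized. Passing an empty argument list should keep the two command lines apart; a minimal sketch of the change in configloader.py:
def parser(conf):
    CONF = cfg.CONF
    # ... the register_group()/register_opts() calls stay as before ...
    # args=[] stops oslo_config from parsing sys.argv, where Django's
    # 'runserver' argument lives and is unrecognized by oslo_config
    CONF(args=[], default_config_files=conf)
    return CONF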