I've deployed two Hasura instances via Cloud Run, but one of the containers has been receiving periodic, random spikes of requests. As far as I can tell, these are not initiated by any of our frontends, and the spikes look irregular. Strangely, the issue only affects one of the two instances.
I'm getting the following log messages for each request:
#1:
{
  "insertId": "x",
  "jsonPayload": {
    "type": "webhook-log",
    "detail": {
      "http_error": null,
      "response": null,
      "message": null,
      "method": "GET",
      "status_code": 200,
      "url": "x/auth"
    },
    "timestamp": "2021-08-26T22:35:40.857+0000",
    "level": "info"
  },
  "resource": {
    "type": "cloud_run_revision",
    "labels": {
      "service_name": "x",
      "configuration_name": "x",
      "location": "us-central1",
      "project_id": "x",
      "revision_name": "x"
    }
  },
  "timestamp": "2021-08-26T22:35:41.839935Z",
  "labels": {
    "instanceId": "x"
  },
  "logName": "x",
  "receiveTimestamp": "2021-08-26T22:35:42.002274277Z"
}
#2:
{
  "insertId": "x",
  "jsonPayload": {
    "timestamp": "2021-08-26T22:35:40.857+0000",
    "detail": {
      "user_vars": null,
      "event": {
        "type": "accepted"
      },
      "connection_info": {
        "msg": null,
        "token_expiry": null,
        "websocket_id": "x"
      }
    },
    "level": "info",
    "type": "websocket-log"
  },
  "resource": {
    "type": "cloud_run_revision",
    "labels": {
      "project_id": "x",
      "revision_name": "x",
      "service_name": "x",
      "configuration_name": "x",
      "location": "us-central1"
    }
  },
  "timestamp": "2021-08-26T22:35:41.839957Z",
  "labels": {
    "instanceId": "x"
  },
  "logName": "x",
  "receiveTimestamp": "2021-08-26T22:35:42.002274277Z"
}
Drawing a blank right now as to what's going on. Any advice is helpful!
Got it! Turns out it was a few open WebSocket connections from users keeping their browser tabs open.
Lesson learned!
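If anyone hits the same thing: the websocket-log entries shown above can be tallied to confirm it. A rough sketch using the google-cloud-logging Python client, with the filter built from the fields visible in those entries (the service name and date are placeholders, not from the original logs):

# Rough sketch: count distinct Hasura websocket connections accepted
# recently, to spot connections held open by idle browser tabs.
from google.cloud import logging

client = logging.Client()

# Filter values taken from the log entries above; replace the
# service name and timestamp with your own.
log_filter = (
    'resource.type="cloud_run_revision" '
    'resource.labels.service_name="my-hasura-service" '  # placeholder
    'jsonPayload.type="websocket-log" '
    'jsonPayload.detail.event.type="accepted" '
    'timestamp>="2021-08-26T00:00:00Z"'
)

websocket_ids = set()
for entry in client.list_entries(filter_=log_filter):
    connection_info = entry.payload.get("detail", {}).get("connection_info", {})
    websocket_ids.add(connection_info.get("websocket_id"))

print(f"{len(websocket_ids)} distinct websocket connections accepted")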
I'm converting Apple Health Records data (in DSTU2 format) to R4 format.
This is the data I'm sending to the HealthLake server:
{
  "category": [
    {
      "text": "Vital Signs",
      "coding": [
        {
          "system": "http://hl7.org/fhir/observation-category",
          "code": "vital-signs"
        }
      ]
    }
  ],
  "issued": "2017-03-18T00:00:00Z",
  "status": "final",
  "id": "49a1b0f9-34c2-472d-8b64-34447d307c56",
  "code": {
    "text": "Temperature",
    "coding": [{ "system": "http://loinc.org", "code": "8310-5" }]
  },
  "encounter": { "reference": "Encounter/355" },
  "subject": { "reference": "Patient/82146c45-a7cd-47ee-a5ba-8c588d4c5c9e" },
  "valueQuantity": {
    "code": "Cel",
    "system": "http://unitsofmeasure.org",
    "value": 37.6,
    "unit": "Cel"
  },
  "resourceType": "Observation",
  "meta": { "lastUpdated": "2023-01-30T09:17:54.772Z" }
}
But the HealthLake server is giving me the following error:
{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"processing","diagnostics":"This property must be an Array, not an array","location":["Observation.category[0]"]}]}
What does this error mean, and how do I fix it?
PS: I've only googled the error and could not get any leads.
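Edit: not a confirmed fix, but two things may be worth checking in a DSTU2-to-R4 conversion of this resource: DSTU2's Observation.category is a single CodeableConcept while R4 expects an array of them, and the category code system URL changed in R4 from http://hl7.org/fhir/observation-category to http://terminology.hl7.org/CodeSystem/observation-category. A hedged sketch of that normalization (the function is illustrative, not HealthLake-specific):

# Illustrative sketch: normalize a DSTU2 Observation.category into the
# R4 shape (an array of CodeableConcepts with the R4 code system URL).
DSTU2_CATEGORY_SYSTEM = "http://hl7.org/fhir/observation-category"
R4_CATEGORY_SYSTEM = "http://terminology.hl7.org/CodeSystem/observation-category"

def convert_category(dstu2_category):
    """Return an R4-style category list from a DSTU2 CodeableConcept."""
    # DSTU2 allows a single CodeableConcept; R4 wants a list of them.
    categories = dstu2_category if isinstance(dstu2_category, list) else [dstu2_category]
    for concept in categories:
        for coding in concept.get("coding", []):
            if coding.get("system") == DSTU2_CATEGORY_SYSTEM:
                coding["system"] = R4_CATEGORY_SYSTEM
    return categories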
I'm trying to set up an AWS EventBridge rule that will filter all Okta user events whose rawUserAgent is "anything-but" the "prefix" libwww-perl. Is there a way to chain AWS rule syntax on the same field in the event? I tried something like this, but it didn't work:
{
  "detail": {
    "eventType": [{
      "prefix": "user.session.start"
    }],
    "outcome": {
      "result": [{
        "prefix": "FAILURE"
      }]
    },
    "client": {
      "userAgent": {
        "rawUserAgent": [{
          "anything-but": [{ "prefix": "libwww-perl" }]
        }]
      }
    }
  }
}
Any suggestions on how I can achieve this?
Here's a sample event:
{
  "version": "0",
  "id": "123",
  "detail-type": "SystemLog",
  "source": "okta",
  "account": "123",
  "time": "2022-06-24T13:07:02Z",
  "region": "us-east-1",
  "resources": [],
  "detail": {
    "uuid": "123",
    "published": "2022-06-24T13:07:02.586Z",
    "eventType": "user.session.start",
    "version": "0",
    "displayMessage": "User login to Okta",
    "severity": "INFO",
    "client": {
      "userAgent": {
        "rawUserAgent": "libwww-perl/6.15",
        "os": "Unknown",
        "browser": "UNKNOWN"
      },
      "zone": "null",
      "device": "Unknown",
      "id": null,
      "ipAddress": "192.168.1.1",
      "geographicalContext": {
        "city": null,
        "state": null,
        "country": "United States",
        "postalCode": null,
        "geolocation": {
          "lat": 37.751,
          "lon": -97.822
        }
      },
      "ipChain": [
        {
          "ip": "192.168.1.1",
          "geographicalContext": {
            "city": null,
            "state": null,
            "country": "Canada",
            "postalCode": null,
            "geolocation": {
              "lat": 37.751,
              "lon": -97.822
            }
          },
          "version": "V4",
          "source": null
        }
      ]
    },
    "device": null,
    "actor": {
      "id": "unknown",
      "type": "User",
      "alternateId": "abc#gmail.com",
      "displayName": "unknown",
      "detailEntry": null
    },
    "outcome": {
      "result": "FAILURE",
      "reason": "VERIFICATION_ERROR"
    },
    "target": null,
    "transaction": {
      "type": "WEB",
      "id": "YrW29nCfOE-MgiNf6-1UkQAAA8I",
      "detail": {}
    },
    "debugContext": {
      "debugData": {
        "loginResult": "VERIFICATION_ERROR",
        "requestId": "abcd",
        "threatSuspected": "true",
        "requestUri": "",
        "url": ""
      }
    },
    "legacyEventType": "core.user_auth.login_failed",
    "authenticationContext": {
      "authenticationProvider": null,
      "credentialProvider": null,
      "credentialType": null,
      "issuer": null,
      "authenticationStep": 0,
      "externalSessionId": "unknown",
      "interface": null
    },
    "securityContext": {
      "asNumber": 11174,
      "asOrg": "qwerty",
      "isp": "qwerty",
      "domain": "qwerty.com",
      "isProxy": false
    },
    "insertionTimestamp": null
  }
}
You can use this pattern (note that anything-but with a prefix match takes an object, not an array):
{
  "detail": {
    "eventType": [{
      "prefix": "user.session.start"
    }],
    "client": {
      "userAgent": {
        "rawUserAgent": [{
          "anything-but": {
            "prefix": "libwww-perl"
          }
        }]
      }
    },
    "outcome": {
      "result": [{
        "prefix": "FAILURE"
      }]
    }
  }
}
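You can sanity-check the pattern against your sample event before creating the rule with EventBridge's TestEventPattern API. A quick boto3 sketch (the file paths are placeholders):

# Quick check of an event pattern against a sample event using the
# EventBridge TestEventPattern API. File paths are placeholders.
import boto3

events = boto3.client("events")

with open("pattern.json") as f:       # the pattern above
    pattern = f.read()
with open("sample_event.json") as f:  # the sample event above
    event = f.read()

response = events.test_event_pattern(EventPattern=pattern, Event=event)
print("Matches:", response["Result"])  # expect False for a libwww-perl agent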
I am using Dialogflow Messenger's list response type for a list of 8 services people can select from.
Can Dialogflow not handle more than 3 clicks in the list?
Click service 1: fires fine.
Click service 2: fires fine.
Click service 3 or any after that: "I don't understand."
However, people can type in the service name and the bot responds just fine. Or people can click one service, ask the question again, and select a different service, and the bot responds fine. But we want people to be able to click any service in the whole list.
Code used:
{
  "richContent": [
    [
      {
        "type": "list",
        "event": {
          "parameters": {},
          "name": "Fire-Application-Development-Intent",
          "languageCode": ""
        },
        "title": "Application Development"
      },
      {
        "title": "Business Intelligence",
        "event": {
          "name": "Fire-Business-Intelligence-Intent",
          "languageCode": "",
          "parameters": {}
        },
        "type": "list"
      },
      {
        "type": "list",
        "title": "Digital Assistants (AI)",
        "event": {
          "parameters": {},
          "languageCode": "",
          "name": "Fire-Digital-Assistants-Intent"
        }
      },
      {
        "type": "list",
        "title": "Mobile Development",
        "event": {
          "languageCode": "",
          "parameters": {},
          "name": "Fire-Mobile-Development-Intent"
        }
      },
      {
        "event": {
          "languageCode": "",
          "parameters": {},
          "name": "Fire-Websites-Intent"
        },
        "title": "Websites",
        "type": "list"
      },
      {
        "type": "list",
        "title": "ChatBeacon",
        "event": {
          "name": "Fire-ChatBeacon-Intent",
          "languageCode": "",
          "parameters": {}
        }
      },
      {
        "title": "Asset Location Tracking",
        "event": {
          "languageCode": "",
          "parameters": {},
          "name": "Fire-Asset-Location-Tracking-Intent"
        },
        "type": "list"
      },
      {
        "event": {
          "name": "Fire-NRS-Forms-Intent",
          "languageCode": "",
          "parameters": {}
        },
        "type": "list",
        "title": "NRS Forms"
      }
    ]
  ]
}
Since 2020-04-28, I've noticed that the function's context.event_id no longer equals the execution_id label in the Logs Viewer:
To reproduce, create a Cloud Function triggered by Pub/Sub (here in Python):
import logging

def hello_pubsub(event, context):
    logging.info(context.event_id)
I expected to get an entry like this:
{
  "textPayload": "447023927402809",
  "insertId": "000000-599a0542-c78a-42e3-b0d0-bb455078dabf",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "project_id": "xxxxxxxxx",
      "region": "us-central1",
      "function_name": "function-1"
    }
  },
  "timestamp": "2020-04-30T20:07:12.125Z",
  "severity": "INFO",
  "labels": {
    "execution_id": "447023927402809"
  },
  "logName": "projects/xxxxxxxxx/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
  "trace": "projects/xxxxxxxxx/traces/cfa595b77b16d6f27a5f77c472ed0e20",
  "receiveTimestamp": "2020-04-30T20:07:14.388866116Z"
}
But the entry contains a different execution_id:
{
  "textPayload": "447023927402809",
  "insertId": "000000-599a0542-c78a-42e3-b0d0-bb455078dabf",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "project_id": "xxxxxxxxx",
      "region": "us-central1",
      "function_name": "function-1"
    }
  },
  "timestamp": "2020-04-30T20:07:12.125Z",
  "severity": "INFO",
  "labels": {
    "execution_id": "k994g1h0pte3"
  },
  "logName": "projects/xxxxxxxxx/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
  "trace": "projects/xxxxxxxxx/traces/cfa595b77b16d6f27a5f77c472ed0e20",
  "receiveTimestamp": "2020-04-30T20:07:14.388866116Z"
}
Any ideas about this change? The release notes page doesn't contain any reference to it:
https://cloud.google.com/functions/docs/release-notes
Thanks,
Philippe
Unfortunately it doesn't seem like this is currently possible.
I've filed an issue internally requesting this feature, and will update this answer if I have updates.
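In the meantime, one possible workaround is to log the event ID inside a structured payload so it stays queryable regardless of the execution_id label. A sketch relying on Cloud Functions' structured logging, where a JSON line written to stdout is parsed into jsonPayload:

# Workaround sketch: emit the event ID in a structured log line so it
# can be filtered with jsonPayload.event_id in the Logs Viewer.
import json

def hello_pubsub(event, context):
    print(json.dumps({
        "severity": "INFO",
        "message": "handling pub/sub message",
        "event_id": context.event_id,  # queryable as jsonPayload.event_id
    }))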
My goal is to copy a table in a PostgreSQL database running on AWS RDS to a .csv file on Amazon S3. For this I use AWS Data Pipeline and found the following tutorial; however, when I follow all the steps, my pipeline gets stuck at "WAITING FOR RUNNER" (see screenshot). The AWS documentation states:
ensure that you set a valid value for either the runsOn or workerGroup fields for those tasks
However, the runsOn field is set. Any idea why this pipeline is stuck?
Here is my definition file:
{
  "objects": [
    {
      "output": {
        "ref": "DataNodeId_Z8iDO"
      },
      "input": {
        "ref": "DataNodeId_hEUzs"
      },
      "name": "DefaultCopyActivity01",
      "runsOn": {
        "ref": "ResourceId_oR8hY"
      },
      "id": "CopyActivityId_8zaDw",
      "type": "CopyActivity"
    },
    {
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "name": "DefaultResource1",
      "id": "ResourceId_oR8hY",
      "type": "Ec2Resource",
      "terminateAfter": "1 Hour"
    },
    {
      "*password": "xxxxxxxxx",
      "name": "DefaultDatabase1",
      "id": "DatabaseId_BWxRr",
      "type": "RdsDatabase",
      "region": "eu-central-1",
      "rdsInstanceId": "aqueduct30v05.cgpnumwmfcqc.eu-central-1.rds.amazonaws.com",
      "username": "xxxx"
    },
    {
      "name": "DefaultDataFormat1",
      "id": "DataFormatId_wORsu",
      "type": "CSV"
    },
    {
      "database": {
        "ref": "DatabaseId_BWxRr"
      },
      "name": "DefaultDataNode2",
      "id": "DataNodeId_hEUzs",
      "type": "SqlDataNode",
      "table": "y2018m07d12_rh_ws_categorization_label_postgis_v01_v04",
      "selectQuery": "SELECT * FROM y2018m07d12_rh_ws_categorization_label_postgis_v01_v04 LIMIT 100"
    },
    {
      "failureAndRerunMode": "CASCADE",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "pipelineLogUri": "s3://rutgerhofste-data-pipeline/logs",
      "scheduleType": "ONDEMAND",
      "name": "Default",
      "id": "Default"
    },
    {
      "dataFormat": {
        "ref": "DataFormatId_wORsu"
      },
      "filePath": "s3://rutgerhofste-data-pipeline/test",
      "name": "DefaultDataNode1",
      "id": "DataNodeId_Z8iDO",
      "type": "S3DataNode"
    }
  ],
  "parameters": []
}
Usually "WAITING FOR RUNNER" state implies that it is waiting for a resource (such as an EMR cluster). You seem to have not set 'workGroup' field. It means that you have specified "What" to do, but have not specified "who" should do it.