How to send the results of a newman report to Datadog?

I have built a small microservice that is connected to Datadog, i.e. API calls to this service using Postman are shown in Datadog. I have generated the newman report for the service using -
newman run collection.json --reporters cli,json --reporter-json-export output.json
Now, I want the contents of my newman report output.json to be shown in Datadog. Any help/idea on how to do that would be really appreciated.

Please do the following:
Construct an API call with your API key that posts data to Datadog - ref here.
Then write a separate Node.js script that calls this API with the report attached.
Call your script after the newman execution finishes.
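A minimal sketch of that idea (shown here in Python for brevity; the same POST can be made from a Node.js script), assuming the Datadog Logs intake endpoint, a DD_API_KEY environment variable, and a placeholder service name:

# A sketch, not a drop-in solution: read newman's JSON report and push a
# summary to the Datadog Logs intake API. DD_API_KEY, the service name and
# the datadoghq.com site are assumptions - adjust them for your account.
import json
import os

import requests

with open("output.json") as f:
    report = json.load(f)

run = report.get("run", {})
summary = {
    "ddsource": "newman",
    "service": "my-microservice",   # hypothetical service name
    "message": "newman run summary",
    "stats": run.get("stats", {}),
    "failures": len(run.get("failures", [])),
}

resp = requests.post(
    "https://http-intake.logs.datadoghq.com/api/v2/logs",
    headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
    json=[summary],
)
resp.raise_for_status()
print("Datadog intake responded with", resp.status_code)

The fields pulled out of output.json (run.stats and run.failures) follow newman's JSON reporter layout; include whatever parts of the report you want to see in Datadog.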

Related

Google Cloud Workflows with Cloud Run

We are making a few changes to the present architecture and want to implement Google Cloud Workflows to track the flow of project creation. All the handlers are placed in Cloud Run. Now, how can I call specific endpoints in the workflow from Cloud Run?
I only have one Cloud Run URL. I am new to Cloud. Any help will be much appreciated.
To summarize: you want to use Workflows with Cloud Run and Cloud Functions. Please have a look at this - here.
For reference, below are the abstracted steps from that guide, to give you an idea; you create a single workflow, connecting one service at a time:
1. Deploy two Cloud Functions services: the first function generates a random number, and then passes that number to the second function, which multiplies it.
2. Using Workflows, connect the two HTTP functions together. Execute the workflow and return a result that is then passed to an external API.
3. Using Workflows, connect an external HTTP API that returns the log for a given number. Execute the workflow and return a result that is then passed to a Cloud Run service.
4. Deploy a Cloud Run service that allows authenticated access only. The service returns the math.floor for a given number.
5. Using Workflows, connect the Cloud Run service, execute the entire workflow, and return a final result.
An excerpt (from the above reference), as an example of how to create a Cloud Run service based on a container and attach it to a workflow:
Build the container image:
export SERVICE_NAME=<your_svc_name>
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${SERVICE_NAME}
Deploy the container image to Cloud Run, ensuring that it only accepts authenticated calls:
gcloud run deploy ${SERVICE_NAME} \
--image gcr.io/${GOOGLE_CLOUD_PROJECT}/${SERVICE_NAME} \
--platform managed \
--no-allow-unauthenticated
When you see the service URL, the deployment is complete. You will need to specify that URL when updating the workflow definition.
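If you need to look the URL up again later, a describe call along these lines should print it (a sketch; adjust the platform/region flags to your setup):
gcloud run services describe ${SERVICE_NAME} \
--platform managed \
--format='value(status.url)'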
Create a text file named e.g. "workflow.yaml" with the following content:
- randomgen_function:
    call: http.get
    args:
      url: https://us-central1-*****.cloudfunctions.net/randomgen
    result: randomgen_result
- multiply_function:
    call: http.post
    args:
      url: https://us-central1-*****.cloudfunctions.net/multiply
      body:
        input: ${randomgen_result.body.random}
    result: multiply_result
- log_function:
    call: http.get
    args:
      url: https://api.mathjs.org/v4/
      query:
        expr: ${"log(" + string(multiply_result.body.multiplied) + ")"}
    result: log_result
- floor_function:
    call: http.post
    args:
      url: https://**service URL**
      auth:
        type: OIDC
      body:
        input: ${log_result.body}
    result: floor_result
- return_result:
    return: ${floor_result}
Note: here you replace service URL with your Cloud Run service URL generated above.
This connects the Cloud Run service in the workflow. Note that the auth key ensures that an authentication token is being passed in the call to the Cloud Run service.
Deploy the workflow, passing in the service account:
cd ~
gcloud workflows deploy <<your_workflows_name>> \
--source=workflow.yaml \
--service-account=${SERVICE_ACCOUNT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com
Execute the workflow:
gcloud workflows run <<your_workflows_name>>
The output should resemble the following:
result: '{"body":............;
...
startTime: '2021-05-05T14:36:48.762896438Z'
state: SUCCEEDED

gcloud logs read not showing data access logs programmatically

I am trying to fetch the data access logs for the Cloud Profiler API, which is used from a VM instance. I can see in Logs Explorer that the profile is created successfully and that logName contains data_access.
Now I am trying to fetch those logs programmatically. I tried the entries.list API from a Cloud Function. I have tried a number of ways; I am not getting any error, but no logs are returned. All other logs are visible, but as soon as I filter for data access logs the output is empty, even though the same filter in the Console shows the entries.
I tried the same with the gcloud logging read command and still get no output.
gcloud beta logging read 'timestamp>="2021-05-13T12:09:05Z" AND logName:"projects/******/logs/cloudaudit.googleapis.com%2Fdata_access"' --limit=10 --format=json --order=asc
I have tried changing the order to desc and various other filters, but nothing works.
I do get a proper response from the Google APIs Explorer.
Update: I got it working after re-authentication, but my Cloud Function still doesn't work. How would I re-authenticate in a Cloud Function?
headers = {"Authorization": "Bearer "+ credentials.token}
r = requests.post("https://logging.googleapis.com/v2/entries:list", params=payload, headers=headers)
This is how I am running my code in the Cloud Function. With the same parameters as in gcloud, the output I get is {}.
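One possible sketch of the re-authentication step inside the function (assumptions: the google-auth and requests packages are available, and the function's service account has a role such as Private Logs Viewer, which is needed to read data access audit logs):

# Sketch: obtain and refresh application default credentials inside a
# Cloud Function before calling the Logging API with entries:list.
import google.auth
import google.auth.transport.requests
import requests

def list_data_access_logs(request):
    credentials, project_id = google.auth.default(
        scopes=["https://www.googleapis.com/auth/logging.read"])
    # Refresh so that credentials.token is populated and not expired.
    auth_request = google.auth.transport.requests.Request()
    credentials.refresh(auth_request)

    payload = {
        "resourceNames": [f"projects/{project_id}"],
        "filter": 'logName:"cloudaudit.googleapis.com%2Fdata_access"',
        "orderBy": "timestamp desc",
        "pageSize": 10,
    }
    headers = {"Authorization": "Bearer " + credentials.token}
    # entries:list expects the filter in the JSON body, not in query params.
    r = requests.post("https://logging.googleapis.com/v2/entries:list",
                      json=payload, headers=headers)
    return r.text

Note that passing the parameters as params= (query string) instead of the JSON body, as in the snippet above it, would also explain an empty {} response.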

Hasura on Google Cloud Run - Monitoring

I would like to have monitoring for my Hasura API on Google Cloud Run. I am currently using Google Cloud's built-in monitoring, but it is not quite enough: I get the count of requests that return a 200 code, but I want, for example, the number of requests per query / mutation endpoint.
I want:
count 123 : /graphql/user
count 234 : /graphql/profil
I have:
count 357 : /graphql
If you have an idea, thanks.
You can't do this with GraphQL out of the box, unfortunately. All queries are sent to Hasura's /v1/graphql endpoint, and the only way to distinguish operations is to parse the query field of the HTTP request and grab the operation name.
If Google Cloud allows you to query properties in logs of HTTP requests, you can set up filters on the body, something like:
"Where [request params].query includes 'MyQueryName'"
Otherwise your two options are:
Use Hasura Cloud (https://hasura.io/cloud), which gives you a count of all operations and detailed metrics (response time, variables, etc) on your console dashboard
Write and deploy a custom middleware server or a script for a reverse proxy that handles this
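As a rough illustration of the second option, a small reverse-proxy sketch (Flask + requests, both hypothetical choices here) that forwards GraphQL calls to Hasura and logs the operation name so you can count each operation separately:

# Sketch: forward GraphQL requests to Hasura and log the operation name.
# HASURA_URL and the port are assumptions - adjust them to your deployment.
import json
import logging
import os

import requests
from flask import Flask, Response, request

logging.basicConfig(level=logging.INFO)
app = Flask(__name__)
HASURA_URL = os.environ.get("HASURA_URL", "http://localhost:8080/v1/graphql")

@app.route("/v1/graphql", methods=["POST"])
def proxy_graphql():
    payload = request.get_json(silent=True) or {}
    # operationName is sent by most GraphQL clients; fall back to "unknown".
    operation = payload.get("operationName") or "unknown"
    logging.info(json.dumps({"graphql_operation": operation}))

    upstream = requests.post(
        HASURA_URL,
        json=payload,
        headers={"Authorization": request.headers.get("Authorization", "")},
    )
    return Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type",
                                                      "application/json"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8081)))

Deployed in front of Hasura (or as a sidecar), the logged graphql_operation field can then be turned into a log-based metric in Cloud Monitoring.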

When I invoke a function (through the CLI), I don't see any print statements (which are in the chaincode) reflected on the terminal

I am new to Hyperledger. I have deployed a chaincode on a Hyperledger v0.6 network. When I invoke a function (through the CLI), only a successful transaction ID is returned. I don't see any of the print statements (which are in the chaincode) reflected on the terminal. Please suggest what to do.
When chaincode contains print statements, the output from these statements is included in a chaincode log.
If you are using the Blockchain service on Bluemix, then you can view chaincode logs from the dashboard for the service. This is found on the “Network” tab by selecting a log file to the right of a particular chaincode ID.
For example, if you are using the Example02 chaincode, you should see output statements similar to the following: OUT - Aval = 90, Bval = 210
If you are using Docker containers, then the Docker logs for a chaincode container will have these output statements. There is a prior post that describes how to view chaincode logs using the docker logs command.
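For example, something along these lines (the dev- name prefix is typical for chaincode containers, but the exact names depend on your setup):
# List the chaincode containers, then tail the logs of the one for your chaincode ID:
docker ps --filter "name=dev-"
docker logs -f <chaincode_container_name_or_id>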

Is there an api to send notifications based on job outputs?

I know there are APIs to configure a notification when a job fails or finishes.
But what if, say, I run a Hive query that counts the number of rows in a table, and I want to send out emails to the concerned parties if the returned result is zero? How can I do that?
Thanks.
You may want to look at Airflow and Qubole's operator for Airflow. We use Airflow to orchestrate all jobs run on Qubole and, in some cases, non-Qubole environments. We use the DataDog API to report success / failure of each task (Qubole / non-Qubole). DataDog in this case can be replaced by Airflow's email operator. Airflow also has chat operators (like Slack).
There is no direct API for triggering a notification based on the results of a query.
However, there is a way to do this using Qubole:
- Create a workflow in Qubole with the following steps:
1. Your query (any query), writing its output to a particular location on S3.
2. A shell script that reads the result from S3 and fails the job based on any criteria; in your case, fail the job if the result returns 0 rows (see the sketch below).
- Schedule this workflow using the "Scheduler" API to notify on failure.
You can also use the "sendmail" shell command to send mail based on the results in step 2 above.
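A minimal sketch of such a check script, assuming the query writes a single count value to a hypothetical S3 path that the cluster can read with hadoop fs:

#!/bin/bash
# Sketch: read the query result from S3 and fail the job when the count is 0,
# so the scheduled workflow's failure notification (or sendmail) fires.
RESULT_PATH="s3://my-bucket/query-results/part-00000"   # hypothetical path
COUNT=$(hadoop fs -cat "${RESULT_PATH}" | head -1 | tr -d '[:space:]')

if [ "${COUNT}" = "0" ]; then
  echo "Row count is zero - failing the job to trigger the notification"
  # optionally: echo "Zero rows found" | sendmail concerned@example.com
  exit 1
fi
echo "Row count is ${COUNT}"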