Can I capture a web service call response time with the kubectl logs command?

I need to capture the response time of a web service call to an external application.
I am trying to use the following command:
..\Yury>kubectl logs -f podXY --all-containers=true --v=7 -n namespaceZZ >> podXY_logs_7.txt
Should this command output the response time of a web service call without code instrumentation (as I expect)?
I do not see these response times in the log file.
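For reference, the kind of per-call timing I am after is what curl reports when the external application is called directly; a minimal sketch (the URL is a placeholder for the real endpoint):
# print only the end-to-end time of a single call
curl -o /dev/null -s -w 'total: %{time_total}s\n' https://external-app.example.com/api/call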

Related

"Error: unknown shorthand flag: 'n' in -nstances" when trying to connect Google Cloud Proxy to Postgresql (Django)

I'm following a Google tutorial to set up Django on Cloud Run with PostgreSQL connected via the Cloud SQL Proxy. However, I keep hitting an error on this command in Google Cloud Shell.
Cloud Shell input:
xyz@cloudshell:~ (project-xyz)$ ./cloud-sql-proxy -instances="amz-reporting-files-21:us-west1-c:api-20230212"=tcp:5432
returns:
Error: unknown shorthand flag: 'n' in -nstances=amz-reporting-files-21:us-west1-c:Iamz-ads-api-20230212=tcp:5432
Usage:
cloud-sql-proxy INSTANCE_CONNECTION_NAME... [flags]
Flags:
-a, --address string (*) Address to bind Cloud SQL instance listeners. (default "127.0.0.1")
--admin-port string Port for localhost-only admin server (default "9091")
-i, --auto-iam-authn (*) Enables Automatic IAM Authentication for all instances
-c, --credentials-file string Use service account key file as a source of IAM credentials.
--debug Enable the admin server on localhost
--disable-metrics Disable Cloud Monitoring integration (used with --telemetry-project)
--disable-traces Disable Cloud Trace integration (used with --telemetry-project)
--fuse string Mount a directory at the path using FUSE to access Cloud SQL instances.
--fuse-tmp-dir string Temp dir for Unix sockets created with FUSE (default "/tmp/csql-tmp")
-g, --gcloud-auth Use gcloud's user credentials as a source of IAM credentials.
--health-check Enables health check endpoints /startup, /liveness, and /readiness on localhost.
-h, --help Display help information for cloud-sql-proxy
--http-address string Address for Prometheus and health check server (default "localhost")
--http-port string Port for Prometheus and health check server (default "9090")
--impersonate-service-account string Comma separated list of service accounts to impersonate. Last value
is the target account.
-j, --json-credentials string Use service account key JSON as a source of IAM credentials.
--max-connections uint Limit the number of connections. Default is no limit.
--max-sigterm-delay duration Maximum number of seconds to wait for connections to close after receiving a TERM signal.
-p, --port int (*) Initial port for listeners. Subsequent listeners increment from this value.
--private-ip (*) Connect to the private ip address for all instances
--prometheus Enable Prometheus HTTP endpoint /metrics on localhost
--prometheus-namespace string Use the provided Prometheus namespace for metrics
--quiet Log error messages only
--quota-project string Specifies the project to use for Cloud SQL Admin API quota tracking.
The IAM principal must have the "serviceusage.services.use" permission
for the given project. See https://cloud.google.com/service-usage/docs/overview and
https://cloud.google.com/storage/docs/requester-pays
--sqladmin-api-endpoint string API endpoint for all Cloud SQL Admin API requests. (default: https://sqladmin.googleapis.com)
-l, --structured-logs Enable structured logging with LogEntry format
--telemetry-prefix string Prefix for Cloud Monitoring metrics.
--telemetry-project string Enable Cloud Monitoring and Cloud Trace with the provided project ID.
--telemetry-sample-rate int Set the Cloud Trace sample rate. A smaller number means more traces. (default 10000)
-t, --token string Use bearer token as a source of IAM credentials.
-u, --unix-socket string (*) Enables Unix sockets for all listeners with the provided directory.
--user-agent string Space separated list of additional user agents, e.g. cloud-sql-proxy-operator/0.0.1
-v, --version Print the cloud-sql-proxy version
While my input is "-instances", the error message returns "-nstances", as if it's either truncating somehow or matching my input to the "-i" flag inadvertently.
I've tried shortening my project name to avoid truncation, and tried putting the command inside a YAML file instead of running it in Google Cloud Shell.
Looks like -instances is not a valid flag for the Cloud SQL Proxy v2 tool, hence the error. The single dash makes the parser read it as a run of shorthand flags: it consumes 'i' (the shorthand for --auto-iam-authn) and then stops at the unknown shorthand 'n', which is exactly why the message complains about '-nstances'.
Remove that flag; in v2 the instance connection name is a positional argument, so something like the following should work:
./cloud-sql-proxy amz-reporting-files-21:us-west1-c:api-20230212 -p 5432
Please refer to the supported flags here.
This is using the latest cloud-sql-proxy version, 2.0.0.
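For context, the -instances=...=tcp:PORT syntax belongs to the v1 binary (cloud_sql_proxy, with an underscore), which many older tutorials show; in v2 the instance connection name became a positional argument. A side-by-side sketch (assuming the tutorial was written for v1), reusing the instance name from the question:
# v1 (cloud_sql_proxy) -- the flag the tutorial likely refers to
./cloud_sql_proxy -instances="amz-reporting-files-21:us-west1-c:api-20230212"=tcp:5432
# v2 (cloud-sql-proxy) -- positional instance name, port via -p/--port
./cloud-sql-proxy --port 5432 amz-reporting-files-21:us-west1-c:api-20230212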

AWS Lambda PowerShell to create a mailbox in Hybrid (run PowerShell commands in both Office 365 and on-prem)

Now that AWS Lambda supports PowerShell Core according to this blog, has anybody tried running PowerShell commands to create a mailbox in a hybrid environment (running PS cmdlets in both the on-prem and Office 365 environments) using Lambda? I couldn't find anything online that does this. Most Lambda PowerShell use cases seem to be about using PowerShell scripts to automate and manage AWS resources.
I'm working on a POC for a REST service which does all of the mailbox creation operations and was planning to use API Gateway to invoke Lambda PowerShell.
I set up my environment following the AWS documentation and created a PowerShell script which performs the mailbox operations, then created and deployed the Lambda. Upon testing, I'm getting the following errors while creating a PowerShell session for the O365 environment.
Script snippet:
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $mycreds -Authentication Basic -AllowRedirection
Write-Host "Created session for PS"
Import-PSSession $Session
Write-Host "Imported Session"
Write-Host "Getting Mailbox"
Get-Mailbox -Identity 'mailbox'
CloudWatch Logs:
[Error] - This parameter set requires WSMan, and no supported WSMan client library was found. WSMan is either not installed or unavailable for this system.
[Information] - Created session for PS
[Error] - Cannot validate argument on parameter 'Session'. The argument is null. Provide a valid value for the argument, and then try running the command again.
[Information] - Imported Session
[Information] - Getting Mailbox
[Error] - The term 'Get-Mailbox' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(Note how the errors cascade: with no WSMan client library available, New-PSSession fails, so $Session is null, Import-PSSession rejects the null argument, and Get-Mailbox is never imported.)
Wondering if anyone has tried invoking Office 365/on-prem mailbox creation PS scripts using Lambda, or can point me in the right direction? Thanks.
I would also like to know if, with AWS Lambda PowerShell Core, I can WinRM into another Windows box so that I can execute PowerShell mailbox commands. According to the answer dated October 2018 we cannot, but I'm wondering if anyone knows anything more recent.
I am working on the same task: API->Lambda->C#/PowerShell->Office 365->CreateMailbox.
However, I'm hung up on the same line as well, but with a slightly different message.
What do you have for a Requires line in your ps1 file?
#Requires -Modules @{ModuleName='AWS.Tools.Common';ModuleVersion='4.0.5.0'}
I am assuming you are using ModuleVersion='3.3.618.0' per the linked blog post, but there is a '4.0.5.0' version available. ... However, it hasn't helped me yet, but perhaps it will help you. Here is a link with the upgrade information: https://docs.aws.amazon.com/powershell/latest/userguide/v4migration.html

Postman: Execute request in collection runner after successfully completing first request

I am trying to deploy cloud VMs using Postman, and below is the workflow that I am trying to accomplish:
1.) Send a request to deploy a VM image (it may take a few minutes for the VM to be successfully deployed).
2.) Send another request to check the status of the VM deployment, checking the response for completion.
3.) If the response is not "completed", send another health check request after 10 seconds, until the response contains "completed".
4.) If the above health check succeeds, execute the next request in the collection.
Thanks
Add the logic below as the test script for the request that checks the status of the VM deployment:
Send a request to check the deploy status.
If the deploy is not complete, add a wait time of 10 seconds:
setTimeout(function(){}, 10000);
Then set the next request back to the status check:
postman.setNextRequest("request name of check deploy status")
If the deploy is complete, use postman.setNextRequest() to continue with the next request in the collection.
If not, repeat with the delay and, using postman.setNextRequest(), run the check status request again.
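Putting the pieces together, a minimal test-script sketch for the status-check request (the "status" field and both request names are assumptions; adapt them to your API):
// Tests tab of the "Check deploy status" request
var body = pm.response.json();                      // assumes a JSON response body
if (body.status !== "completed") {                  // "status" field is an assumption
    // an empty timer keeps the sandbox alive ~10 s before the runner moves on
    setTimeout(function () {}, 10000);
    postman.setNextRequest("Check deploy status");  // loop back to this request
} else {
    postman.setNextRequest("Deploy next VM");       // hypothetical next request in the collection
}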

How to schedule a task to call a gRPC method?

I have a .NET server running in Google Kubernetes Engine. It is configured to use gRPC through Google Cloud Endpoints. Now I need to schedule a task to call my gRPC method once per day.
The first thing I tried was to use Google Cloud Scheduler to call HTTP methods directly. For that I have:
Set up HTTP-to-gRPC transcoding on my server to call my gRPC method through HTTP.
Created and enabled an SSL certificate as described here.
Created a service account in the IAM & admin console with the Service Account Token Creator and Service Account User permissions.
Created a Cloud Scheduler job with my URL, an Auth header with an OIDC token, and the service account created above.
Deployed the Google Cloud Endpoints configuration with the following parameters (among others):
authentication:
  providers:
  - id: google_service_account
    issuer: MY_SERVICE_ACCOUNT_EMAIL
    jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/MY_SERVICE_ACCOUNT_EMAIL
  rules:
  - selector: "*"
    requirements:
    - provider_id: google_service_account
After that, when I run the scheduler job it returns the result "Failed". In the logs it writes an ERROR with status UNKNOWN.
The second thing I tried was to use Google Cloud Scheduler to publish a message to a Pub/Sub topic with my server as the subscriber.
That was unsuccessful too, because I can't verify ownership of the Google Cloud Endpoints domain. I asked about that here: How to verify ownership of Google Cloud Endpoints service URL?
Now the question: what is the best way to schedule a task that calls a gRPC method, given the following environment:
.NET server running on GKE
gRPC
Automated periodic invocation of that task (I can call it manually, but that defeats the purpose)
So you were able to make the HTTP call manually, but not automatically via Google Cloud Scheduler, is that correct?
If so, check whether the request reaches the Cloud Endpoints proxy in the Cloud Console Endpoints logging; it may give you some hints.
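One way to separate auth problems from transcoding problems is to replay the scheduler's call by hand with a Google-signed identity token; a rough sketch (the URL is a placeholder, and the token's audience may also need to match your Endpoints configuration):
# mint an OIDC identity token for the active gcloud account and call the transcoded endpoint
curl -v -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://my-endpoints-service.example.com/v1/my-method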
Distributed scheduler
For more details, refer to the source code of Distributed scheduler.
This application can be run on different hosts and offers functionality to schedule execution of an arbitrary command at a particular time, or periodically. There are two ways to communicate with the application: gRPC and REST. The remote interfaces are specified in the dsched.proto file. The corresponding REST API can also be found there in the form of API annotations. We also provide generated Swagger files.
To specify task execution timing, we use the notation adopted by cron. Scheduled tasks are stored in a file and loaded automatically during startup.
Building
Install gRPC
Install gRPC gateway
To parse crontab statements and schedule task execution, we use the gopkg.in/robfig/cron.v2 library, so it should be installed as well: go get -u gopkg.in/robfig/cron.v2. Documentation can be found here.
Get the dsched package: go get -u gitlab.com/andreynech/dsched
Now it is possible to run the standard go build command in the dscheduler and gateway directories to generate binaries for the scheduler and the REST/JSON API gateway. It might also be helpful to examine our CI configuration file to see how we set up the build environment.
Running
All the scheduling functionality is implemented by the dscheduler executable, so it can be run on system startup or on demand. As described by dscheduler --help, there are two command-line parameters:
-i string - File name to store the task list (default "/var/run/dscheduler.db")
-p string - Endpoint to listen on (default ":50051")
If there is a need to offer the REST/JSON API, the gateway application located in the gateway directory should be run. It can reside on the same host as dscheduler, but typically it would be another host which is accessible over HTTP from the outside and can in turn talk to dscheduler running on the internal network. This setup was also the reason to split the scheduler and the gateway into two executables. gateway is a mostly generated application and supports several command-line parameters described by running gateway --help. An important parameter is -sched_endpoint string, which is the endpoint of the Scheduler service (default "localhost:50051"). It specifies the host name and port where dscheduler is listening for requests; a typical invocation is sketched below.
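For example, a typical two-host start might look like this (the host name is a placeholder):
# on the internal host
./dscheduler -i /var/run/dscheduler.db -p :50051
# on the public-facing host
./gateway -sched_endpoint internal-host:50051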
Scheduling tasks (testing)
There are three ways to control the scheduler server:
Using the Go client implemented in the cli/ directory
Using the Python client implemented in the py_cli directory
Using the REST/JSON API gateway and curl
The Go and Python clients have a similar set of command-line parameters.
$ ./cli --help
Usage of cli:
-a string
The command to execute at time specified by -c parameter
-c string
Statement in crontab format describes when to execute the command
-e string
Host:port to connect (default "localhost:50051")
-l List scheduled tasks
-p Purge all scheduled tasks
-r int
Remove the task with specified id from schedule
-s Schedule task. -c and -a arguments are required in this case
They use the gRPC protocol to talk to the scheduler server. Here are several example invocations:
$ ./cli -l                                # list currently scheduled tasks
$ ./cli -s -c "@every 0h00m10s" -a "df"   # schedule the df command for execution every 10 seconds
$ ./cli -s -c "0 30 * * * *" -a "ls -l"   # schedule ls -l to run at minute 30 of every hour (six-field crontab with seconds)
$ ./cli -r 3                              # remove the task with ID 3
$ ./cli -p                                # remove all scheduled tasks
It is also possible to use curl to invoke the dscheduler functionality over the REST/JSON API gateway. Assuming that the dscheduler and gateway applications are running, here are some invocations to list, add and remove scheduling entries from the same host (localhost):
# list currently scheduled tasks
curl 'http://localhost:8080/v1/scheduler/list'
# schedule the ls command for execution every 10 seconds
curl -d '{"id":0, "cron":"@every 0h00m10s", "action":"ls"}' -X POST 'http://localhost:8080/v1/scheduler/add'
# schedule ls -l to run at minute 30 of every hour
curl -d '{"id":0, "cron":"0 30 * * * *", "action":"ls -l"}' -X POST 'http://localhost:8080/v1/scheduler/add'
# remove the task with ID 2
curl -d '{"id":2}' -X POST 'http://localhost:8080/v1/scheduler/remove'
# remove all scheduled tasks
curl -X POST 'http://localhost:8080/v1/scheduler/removeall'
All changes are automatically saved to the file.
Thoughts on scheduler service discovery
In large deployment scenarios (like hundreds of hosts) it might be a challenging problem to find out all the IP addresses and ports where the scheduler service is started. It would be fairly easy to add support for Zeroconf (Bonjour/Avahi) technology to simplify service discovery. As an alternative, it might be possible to implement something similar to the CORBA Naming Service, where running services register themselves and the location of the naming service is well known. We decided to collect feedback before settling on a particular service discovery implementation, so your input is very welcome!

View real-time logs from a NAOqi application over SSH

Is it possible to view logs from my application without using Choregraphe?
At the moment I am limited to the log files in '/var/log/naoqi/servicemanager/'.
I am implementing qi.logger() and would like to connect to the robot's IP over SSH and get logs for a specific service.
qicli log-view
only shows system logs. I would like to attach the logger to my application, maybe using the service PID?
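For context, the logging side of my service looks roughly like this (a minimal sketch; the category name is made up):
import qi

# qi.Logger publishes through the robot's LogManager under the given category
logger = qi.Logger("myapp.myservice")
logger.info("handling request")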
Did you try logging to a specific place? For instance, if you start it from an independent Python script:
import logging

logging.basicConfig(filename='some_files.log',
                    level=logging.DEBUG,
                    format='%(levelname)s %(relativeCreated)6d %(threadName)s %(message)s (%(module)s.%(lineno)d)',
                    filemode='w')
then something like tail -f /var/log/naoqi/servicemanager/some_files.log
WRN: this is just a hint, I haven't tested this solution...