Collecting client side browser metrics with Stackdriver? - google-cloud-platform

I have seen some solutions for capturing client-side errors and reporting them to Stackdriver.
Does anybody know if it's possible to utilize Stackdriver in some way to collect page load timing metrics and report those? I couldn't find any example of how I might be able to do that in the documentation.

I believe a better approach is to send this information to your back end and have it forward the metrics to Stackdriver.
Otherwise, you have to either ship credentials to the client so it can hit the Stackdriver endpoint, or open that endpoint to the public. Both options are bad: anyone could start hammering your logging, burying real information and driving up your costs.
If you still want to go the "client logging directly" way, it's simply a matter of making authenticated calls to the monitoring.googleapis.com endpoint (the authentication is the hard part).
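To make the back-end approach concrete, here is a hedged sketch of how the server side might shape a page-load timing into a Cloud Monitoring custom-metric time series. The metric name `page_load_ms` and the label names are illustrative, not anything Stackdriver defines:

```python
import time

def build_time_series(page, load_ms, project_id):
    """Build a custom-metric time series payload for monitoring.googleapis.com.

    `page` and `load_ms` would come from the browser (e.g. the Navigation
    Timing API); the back end attaches them to an illustrative custom metric.
    """
    now = int(time.time())
    return {
        "metric": {
            "type": "custom.googleapis.com/page_load_ms",  # hypothetical metric
            "labels": {"page": page},
        },
        "resource": {"type": "global", "labels": {"project_id": project_id}},
        "points": [{
            "interval": {"endTime": {"seconds": now}},
            "value": {"doubleValue": float(load_ms)},
        }],
    }

# The back end would POST this payload to
#   https://monitoring.googleapis.com/v3/projects/{project_id}/timeSeries
# with an OAuth2 access token it holds server-side, so no credentials
# ever reach the browser.
```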

Related

Is pubsub suitable to be used by client desktop applications?

If I were to create a client desktop application, I'm trying to find a reliable way to notify client applications of new data that needs to be queried from the server. Would pubsub be a good use for this? Most of the documentation I see for it seems to be focused on server to server communication, and is a bit ambiguous if this would work well for server to client notifications.
If it should work, would I be able to properly authenticate subscribers to limit the topics they could subscribe to? This application would be potentially downloadable by anyone, and I would need to ensure that information intended for one client couldn't end up in the hands of another client.
Cloud Pub/Sub is not going to be a good choice for this use case. First of all, note that each topic and project is limited to 10,000 subscriptions. Therefore, if you intend to have more than that, you will run out of subscriptions. Secondly, note that a subscription only receives messages published after the subscription is created. If you only need to deliver messages that were published after the user came to the website, this may be okay. However, with these two issues combined, you'll need to consider the lifetime of your subscriptions. Do they get deleted when a user logs out? If not, when a user comes back, do you expect them to get all of the messages published since the last time they visited?
Additionally, as discussed in the comments, there is the issue of authentication. Your client-side app would have to have the credentials to subscribe. This would require you to essentially leak those credentials into your client-side code, which could be a vulnerability in your application.
The service designed to deliver notifications of this nature is Firebase Cloud Messaging.
If you want to open the application to anyone on the internet, you can't rely on the IAM service, which only works with Google identities: you can't require every user to have a Google Account, as the user experience would be bad.
Thus, you can't use IAM to secure Pub/Sub access, and therefore you can't use Pub/Sub, because anyone could access it.
In your use case, the first step is to ask the user to register (create an account, validate an email address, maybe add a payment method, ...). Then you have an identity, managed by you rather than by IAM, and you know which messages are for which user.
If you want to be notified in "real time", I suggest long polling or streaming to push data to the user. Cloud Run is now capable of this, and I recommend you have a look at it.
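The long-polling idea above can be sketched in plain Python. This is a hedged, transport-agnostic stand-in (the `UserChannel` class is invented for illustration); in practice an HTTP handler on Cloud Run would wrap the blocking wait:

```python
import queue

class UserChannel:
    """Per-user message channel the server holds notifications in.

    Illustrative sketch: the server publishes into it, and a long-poll
    request blocks briefly waiting for the next message.
    """
    def __init__(self):
        self._q = queue.Queue()

    def publish(self, message):
        self._q.put(message)

    def long_poll(self, timeout=0.1):
        """Block up to `timeout` seconds for the next message.

        Returns None on timeout, in which case the client simply retries;
        that retry loop is what makes this "long polling".
        """
        try:
            return self._q.get(timeout=timeout)
        except queue.Empty:
            return None
```

Because the channel is keyed per user on the server, the server (not the client) decides which messages a user can see, which sidesteps the credential-leak problem entirely.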

How to get logs for Compute Engine API errors?

I am a total beginner in cloud service management, so this is a very basic question.
I have inherited a kubernetes based project running in Google Cloud. I have discovered recently that there are millions of errors I am unaware of in APIs & Services > Compute Engine API > Metrics menu:
I have tried searching for these values both on Google and in the docs, to no avail. With no link to the list of logs, and hundreds of sub-menu items, I feel completely lost about where to start.
How can I get more information about these errors?
How can I navigate to the relevant logs?
Your question is rather general, so I will make some assumptions and educated guesses about your project and try to explain.
This level of API-call errors is of course unusually high, and it suggests that something isn't working (for example, someone deleted a backend service but left the load balancer in place without any health checks: it accepts requests from the outside, but there's nothing in the backend to process them).
That is just an example; without more details I can't speculate further.
If you want to read more about the messages take the second one from the top - documentation for compute.v1.BackendServicesService.delete.
You can also explore other Compute Engine API methods to see what they do to give you more insight what is happening with your project.
This should give you a good starting point to explore the API.
Now, regarding logs: just navigate to the Logs Viewer and select as a resource whatever you want to analyse (everything, or a single VM, load balancer, firewall rule, etc.). You can also include (or exclude) certain log levels (warning, error, etc.). The possibilities are endless.
Your query may look something like this:
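(The original screenshot is gone; as a hedged example, a filter for errors on a single VM might look like this, with the resource type, instance ID, and severity as placeholders you would adapt:)

```
resource.type="gce_instance"
resource.labels.instance_id="1234567890"
severity>=ERROR
```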
Here's more documentation on GCP Logs Viewer to help you out.

Is there a way to tell who started an instance in Google Cloud Platform?

We run only a small handful of instances on Google Cloud Platform and we don't run them all the time. Generally we just fire one up, do what we need to do then shut it down... which is great, except when "we" forget to shut them down.
I've been able to track down the relevant REST APIs and the gcloud sdk but I don't see anything that says who started the instance. Actually it also doesn't have a timestamp on when it was started.
I did find this python app engine script that I might be able to rewrite to stop the instances after X amount of time, but I'd rather find a way to notify the user who started it and let them know the instance is still running.
Has anyone tried to do something similar or seen a way to get the "starter" of the instance in GCP?
You can look into the Audit Logs to determine who did what, where, and when. Further, you can use the Stackdriver Logging API method entries.list to retrieve audit log entries for your use case.
You can also use the Activity Logs to find details such as the authorized user who made the API request.
With the new API you have to filter on the following:
resource.type="gce_instance"
resource.labels.instance_id="ID"
protoPayload.methodName="v1.compute.instances.start"
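As a hedged sketch, the filter above can be built programmatically and passed to `entries.list` (or to `gcloud logging read`); the helper name and instance ID below are illustrative:

```python
def instance_start_filter(instance_id):
    """Build the audit-log filter matching instance-start records."""
    return (
        'resource.type="gce_instance" '
        f'resource.labels.instance_id="{instance_id}" '
        'protoPayload.methodName="v1.compute.instances.start"'
    )

# Pass the result as the filter for the Logging API's entries.list method
# (or to `gcloud logging read`). In each returned audit entry,
# protoPayload.authenticationInfo.principalEmail identifies who started
# the instance, and the entry's timestamp tells you when.
```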

How do I trace CEP processing in WSO2 CEP?

I am attempting to emulate the Build Analyzer example with my own bucket and input stream. I believe I have set up everything correctly, but when I run test data, I do not get any results.
The entire log output is
[2013-03-29 08:57:16,988] INFO {org.wso2.carbon.databridge.core.DataBridge} - admin connected
Of course this makes sense when running in a production environment, but I have not been able to figure out how to increase the log level, ideally to some sort of trace mode, so I can see what is going on.
There are plenty of places to increase log levels, but unless I know which log levels to increase, I am afraid all I will do is add noise. How do I turn on appropriate logging in order to figure out what is going on?
As an aside, in my ideal world, there would be a place in the admin console where 1) all the streams and processes are mapped out (and verified) and 2) it would be possible to check a box to start tracing any stream.
TIA,
doug
The Build Analyzer example is basically based on the REST transport and the email broker. The result of the example is an e-mail sent to the account you provide in the bucket configuration. To see the stream definitions, you can go through the registry, which is available in the CEP management console. We have provided a graphical window to see the events coming into the CEP; [1] in our documentation gives details about this. (To see the events in a graphical manner, you must have a proper input mapping for the bucket.)
Yes, I agree with you regarding logging. We are working on making the CEP more user-friendly in future releases. Thank you for your thoughts; we'll take them into consideration.
[1] http://docs.wso2.org/wiki/display/CEP210/CEP+Server+Statistics
Cheers,
Mohan

Facebook Graph API-Account suspension

I have a .NET application that takes a list of names/email addresses and finds their matches on Facebook using the Graph API. During testing, my list had 900 names. I was checking Facebook matches for each name in a loop. The process completed, but when I then opened my Facebook page, it told me my account had been suspended due to suspicious activity.
What am I doing wrong here? Doesn't Facebook allow a large number of search requests to their server? And 900 doesn't seem like a big number either.
Per the platform policies (https://developers.facebook.com/policy/), this may be a suspected breach of their "Principles" section.
See Policies I.5
If you exceed, or plan to exceed, any of the following thresholds
please contact us by creating confidential bug report with the
"threshold policy" tag as you may be subject to additional terms: (>5M
MAU) or (>100M API calls per day) or (>50M impressions per day).
Also IV.5
Facebook messaging (i.e., email sent to an #facebook.com address) is
designed for communication between users, and not a channel for
applications to communicate directly with users.
Then the biggie, V. Enforcement. No surprise: it's both automated and monitored by humans, so they may well have seen 900+ requests coming from your app.
What I'd recommend doing:
Store what you can client-side (in a cache or data store) so you make fewer calls to the API.
Put logging on your API calls so you, the developer, can see exactly what is happening. You might be surprised at what you find there.
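Both suggestions can be combined in a small sketch: cache lookups so repeated names never re-hit the API, and log every real call. `fetch` here is a placeholder for your actual Graph API request; the names are illustrative, not part of any Facebook SDK:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("graph-api")

def make_cached_lookup(fetch):
    """Wrap a Graph API call (`fetch`) with an in-memory cache and logging."""
    cache = {}

    def lookup(name):
        if name in cache:
            return cache[name]          # cache hit: no API call, no log entry
        log.info("Graph API call for %r", name)
        cache[name] = fetch(name)       # only cache misses reach the API
        return cache[name]

    return lookup
```

With this in place, re-running your 900-name list only issues calls for names you haven't already resolved, and the log shows you exactly how many requests actually went out.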