I am seeing intermittent, buggy behavior where sometimes I can GET a flow with the /api/data/v9.1/workflows(FLOWGUID) endpoint, but other times it cannot find it. Below is an example of the behavior. Each query was run within moments of each other, using the same access token and Dataverse environment URL.
The targeted flow was created using the API, but I have also seen the behavior with manually created flows.
A workflow with this GUID is returned by the /api/data/v9.1/workflows?$filter=category eq 5 query.
When I copy/paste that same GUID into /api/data/v9.1/workflows(FLOWGUID), the flow is not found.
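To make the comparison concrete, here is a minimal Python sketch of the two requests. This is an assumption-laden illustration, not code from the question: ENV_URL, TOKEN, and the header set are placeholders, and only the URL construction is taken from the endpoints above.

```python
# Minimal sketch of the two requests being compared. ENV_URL and TOKEN are
# placeholders. The helper functions only build the URLs; get_json performs
# the actual call, so the intermittent behavior can be reproduced by calling
# it on both URLs moments apart with the same token.
import json
import urllib.request
from urllib.parse import quote

def list_url(env_url):
    # $filter query that reliably returns the flow
    return env_url + "/api/data/v9.1/workflows?$filter=" + quote("category eq 5")

def single_url(env_url, flow_guid):
    # retrieval by primary key that intermittently reports "not found"
    return env_url + "/api/data/v9.1/workflows(" + flow_guid + ")"

def get_json(url, token):
    req = urllib.request.Request(url, headers={
        "Authorization": "Bearer " + token,
        "Accept": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Running `get_json(list_url(ENV_URL), TOKEN)` and then `get_json(single_url(ENV_URL, flow_guid), TOKEN)` back to back should show the mismatch when it occurs.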
Many end users will read and write partly overlapping data through a web browser.
When a user makes a change, a related change should be broadcast to the relevant other users.
Example use case: several end users, each on their own device, look at a calendar of available time blocks to make an appointment. One of them creates an appointment, making a time block unavailable to the others. The calendar on the other users' screens updates accordingly and immediately.
Technically this would mean:
Browser sends 'create appointment' event through WebSocket
This event spins up a Cloud Function, which does the following (and then terminates):
Reserve the required capacity in the database
If this makes the time block unavailable to other users: broadcast a 'not available anymore' event through the WebSockets of the other users viewing this time block.
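The core logic of those steps can be sketched as plain Python with an in-memory "database". Everything here is hypothetical (the names reserve_slot, CAPACITY, and VIEWERS are mine); the real version would use your datastore for the reservation and your WebSocket infrastructure for the broadcast.

```python
# Illustrative sketch of the reserve-then-broadcast decision, with an
# in-memory stand-in for the database. All names are hypothetical.

CAPACITY = {"blk-0900": 1}                          # remaining capacity per time block
VIEWERS = {"blk-0900": {"alice", "bob", "carol"}}   # users currently viewing each block

def reserve_slot(block_id, user):
    """Reserve capacity for `user`. Return the set of other viewers who
    should receive a 'not available anymore' event, or None if the block
    is still available and no broadcast is needed."""
    remaining = CAPACITY.get(block_id, 0)
    if remaining <= 0:
        raise ValueError("block already full")
    CAPACITY[block_id] = remaining - 1
    if CAPACITY[block_id] == 0:
        # the block just became unavailable: notify everyone else viewing it
        return VIEWERS.get(block_id, set()) - {user}
    return None
```

In the Cloud Function, the returned set would drive the per-connection WebSocket sends before the function terminates.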
In Google Cloud this is possible using an Apigee Java callout, where the Java (if needed) calls a Cloud Function, as described at https://cloud.google.com/apigee/docs/api-platform/develop/how-create-java-callout. However, Apigee runs in Kubernetes (https://cloud.google.com/apigee/docs/hybrid/kubernetes-resources), which incurs the overhead of containers being up at moments when they are unused or sparsely used.
Google Cloud's API Gateway (https://cloud.google.com/api-gateway) doesn't support WebSockets: https://issuetracker.google.com/issues/176472002?pli=1
Is there a way to accomplish our goal through a Cloud Function, without any container?
Issue:
Intermittently, new user creation is failing with the error: Invalid Input: primary_user_email
Account creation fails in both the Admin UI and the API.
Is the issue reproducible?
We have automation in place which hits the G Suite Directory API for user/group/role creation/modification/deletion. We see this issue when there are frequent, parallel executions of this automation.
Please note that when we don't encounter the issue, the automation runs smoothly and all the scenarios it covers execute properly.
Observations
We are not seeing this issue consistently.
Mostly, after a window of 24 hours, we are able to create a new user again with both the Admin UI and the API.
We are not reaching the API quotas Google provides for the 100-second and 24-hour windows.
For the API connection we have two options: client credentials (with an offline refresh token) and the service account approach; both show the same inconsistent issue.
Our feeling is that there might be some policies or limits on these APIs that are blocking user creation. We have checked the available docs but didn't find any related info.
So we would like to know what actually triggers the user creation blockade, so we can work accordingly.
References
Directory API used for user creation: https://www.googleapis.com/admin/directory/v1/users
Google DOC we are following: https://developers.google.com/admin-sdk/directory/v1/guides/manage-users?refresh=1
Thank you!
It seems that the issue you are encountering might be related to your G Suite account.
Therefore, the best solution in this situation is to contact G Suite Support and choose the option most convenient for you.
Reference
Contact G Suite Support.
Constantly creating and deleting users in a short period of time will trigger an internal system flag at Google, and the account will be blocked from these actions for a short time frame (approximately 24 hours).
Basically, this flag detects improper use of the API; to protect the infrastructure, user creation is blocked even for valid email addresses.
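If the flag is transient, wrapping the creation call in exponential backoff at least avoids hammering the API while it is blocked. This is a generic, hedged sketch: `with_backoff` and the injectable `sleep` parameter are my own helper, and the actual Directory API call is represented by whatever callable you pass in.

```python
# Generic exponential-backoff wrapper (hypothetical helper, not part of
# the Admin SDK). Pass in the function that performs the users.insert call.
import time

def with_backoff(fn, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on any exception with exponential backoff.
    Re-raises the last exception if all attempts fail."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Note that if the block really lasts ~24 hours, no in-process retry will outlast it; backoff only helps with short-lived throttling.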
Basically, I'm fetching data from a website using curl and parsing the contents using CkHtmlToText.
My issue is how to fetch the new data the website writes.
For example website contents are as follow:
-test1
-test2
After 1 second the contents are:
-test1
-test2
-test3
How can I fetch only the next line the website wrote that I haven't received yet, which is "test3"?
Any ideas? Thank you.
The language I'm using is Visual C++.
HTTP requests are stateless. You make a request, you get a result, then you make another completely independent request, you get another result, and so on. If the resource you are trying to access is changing over time, you need to make multiple requests, where each time you will get the full updated resource.
I imagine you may be describing a web page that automatically updates while you are looking at it (like a Twitter feed, for example). In that case, the response contains a script that allows the browser to fetch new data and inject it into the DOM. Unless you also plan to build the DOM and use a JavaScript engine (basically implementing a web browser) to run the script, this is probably not useful to you. Instead, you are better off finding an API that gives you data in a format that is easy to parse and get updates for. If this API is a REST API (built on HTTP), then you will still need to make independent requests to get updates.
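In the simple polling case, you keep the previous response and emit only the lines that were appended since. Here is a language-agnostic sketch in Python (the question uses Visual C++, but the diffing logic translates directly); `new_lines` assumes the page is append-only, and the fetch step stands in for your curl + CkHtmlToText pipeline.

```python
# Polling sketch: compare the previous fetch with the current one and
# return only the freshly appended lines. Assumes the page is append-only,
# i.e. old lines keep their content and position.

def new_lines(previous, current):
    """Return the lines of `current` that were appended after `previous`."""
    prev = previous.splitlines()
    cur = current.splitlines()
    if cur[:len(prev)] == prev:
        return cur[len(prev):]
    # the page changed in a non-append way: treat everything as new
    return cur
```

With the example from the question, `new_lines("-test1\n-test2", "-test1\n-test2\n-test3")` yields just `["-test3"]`.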
I am working on a health care project. We have a device which continuously generates values for the fields ACTIVITY and FREQUENCY. The values need to be continuously pushed from Python to a Google Fusion Table.
The question is quite broad, you probably want to have a look at the documentation of the Google Fusion Tables API if you haven't so far: https://developers.google.com/fusiontables/docs/v1/using
Also it may be worth checking the quota section to make sure that Google Fusion Tables is indeed what you want to use:
https://developers.google.com/fusiontables/docs/v1/using#quota
I'll be glad to try to help if you come up with more specific questions :)
EDIT: since there are quite a few questions around the topic, I'll add some "hints".
A (Google Fusion) table belongs to a Google account. Your script must therefore include a step where it asks for your permission to modify data attached to your Google Account. You can therefore see your script as a web application which needs an authorization to achieve its goal. This web application will use the Google Fusion Tables API and therefore it must be registered in the Google API Console. You will find details about the process of registration and authentication with a Python script here:
https://developers.google.com/fusiontables/docs/articles/oauthfusiontables?hl=fr
I just checked that this works, and you can insert rows into a table thereafter, so you may want to have a quick look at my script. Note that you can neither use my application credentials (which are, by the way, not included) nor my table, as you are not authorized to edit it (it's mine!). So you must download your own application credentials from the Google API Console after registering, and adapt the script so it loads them. Also, the script does not create a table (as of now), so as a first step you can create a table with two columns in the UI and copy/paste the table id into the script so it knows which table to write to. Here's the script (sorry, it's a bit of a mess right now; I'll clean it up as soon as I can):
https://github.com/etiennecha/master_code/blob/master/code_fusion_tables/code_test_fusion_tables.py
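As a hint for the insert step itself: the Fusion Tables API accepts SQL-like statements through its query endpoint. This small helper only builds the INSERT statement for the two-column (Activity, Frequency) table described above; the table id is a placeholder you copy from the Fusion Tables UI, and sending the statement still requires the authorized client from the OAuth step.

```python
# Build the SQL-like INSERT statement the Fusion Tables query endpoint
# expects. The helper is illustrative; "1abc..." style table ids come from
# the Fusion Tables UI, and the authorized request itself is not shown here.

def insert_sql(table_id, activity, frequency):
    def q(v):
        # quote a value; single quotes inside are escaped by doubling
        return "'" + str(v).replace("'", "''") + "'"
    return (f"INSERT INTO {table_id} (Activity, Frequency) "
            f"VALUES ({q(activity)}, {q(frequency)})")
```

For the continuous-update use case, your device loop would call this once per reading and submit each statement through the authorized client.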
Hope this helps.
Even though I have all the errors in MongoDB, I am not able to see them all in the list.
I am able though to access a specific error by ID (localhost/elmah.axd/detail?id=...)
The message at the top of the page, "Errors 1 to 15 of total ...", is also correct.
The only thing I think may not be OK is that the time and date on the Mongo server are not the same as on the web server. I see that the web server's time and date are displayed in the errors interface, and errors are also sorted by this date and time.
I couldn't find anything anywhere on how Elmah makes the Mongo queries to extract the list of errors, or on how it converts the time in the DB to the time on the web server where it displays the data.
Thanks a lot!
Are multiple instances of the application writing to Elmah? Do you have, for example, a web app and an API app that write to it? Elmah will only display errors for a specific application; you can specify the name in web.config.
In the same question, a user explains how the default application name is determined, using the appdomain GUID. That's another thing that could differ when multiple servers are involved.
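For illustration, a web.config fragment along these lines pins the application name explicitly, so every instance logs and reads under the same name. This is a hedged sketch: the errorLog type string and connection string name below are placeholders for whichever MongoDB error log provider you are using.

```xml
<!-- Sketch only: set an explicit applicationName so the web app, the API
     app, and every server log under the same name. The type and
     connectionStringName values are placeholders for your MongoDB
     error log provider. -->
<elmah>
  <errorLog type="Elmah.MongoDbErrorLog, Elmah.MongoDb"
            connectionStringName="elmah-mongodb"
            applicationName="MyApp" />
</elmah>
```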