Google Directory API - Watch notifications with far expiration - google-admin-sdk

The Directory API supports watching resources for changes, as documented here:
https://developers.google.com/admin-sdk/directory/v1/guides/push
You can optionally set an expiration when you request a channel, and also a ttl (which is effectively the same thing). But neither of these makes a difference; they are ignored.
Is this a bug in Google's API, or is there a workaround?
Here's an example request body:
{
  "address": "https://www.example.com",
  "expiration": 1477664588000,
  "id": "**my-id**",
  "params": {
    "ttl": "86400"
  },
  "token": "SomeTokenHEre",
  "type": "web_hook"
}
Here I'm setting the expiration to 24 hours from now (a Unix timestamp in milliseconds), and ALSO setting the ttl to 24 hours (expressed in seconds). The response:
{
  "kind": "api#channel",
  "id": "*My-id*",
  "resourceId": "....",
  "resourceUri": "https://www.googleapis.com/admin/directory/v1/users?customer=my_customer&projection=basic&viewType=admin_view&alt=json",
  "token": "SomeTokenHere",
  "expiration": "1477600105000"
}
And that expiration is in 6 hours' time. Always.
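For clarity, a quick sketch in Python of how those two values relate (using the same dummy address/id/token as above):
import time

# Channel lifetime being requested: 24 hours.
lifetime_seconds = 24 * 60 * 60

watch_body = {
    "address": "https://www.example.com",
    # Absolute Unix timestamp in milliseconds, 24 hours from now.
    "expiration": int((time.time() + lifetime_seconds) * 1000),
    "id": "my-id",
    # ttl: the same lifetime expressed in seconds, as a string.
    "params": {"ttl": str(lifetime_seconds)},
    "token": "SomeTokenHere",
    "type": "web_hook",
}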

I am running into the same issue, and right now I think the problem is that you can only set a maximum of 6 hours for a channel, so you will have to "refresh"/recreate them every 6 hours.
I have just confirmed that you can set them for less than that via the ttl param in the request body, so that is the valid way of doing it.
e.g. for 5 minutes:
{
  "address": "https://www.example.com",
  "expiration": 1477664588000,
  "id": "**my-id**",
  "params": {
    "ttl": "300"
  },
  "token": "SomeTokenHEre",
  "type": "web_hook"
}
But anything larger than that gets defaulted to 6 hours. I can't find this mentioned anywhere in the documentation, so I am not 100% sure; if anyone else can confirm, or if we can get official word from Google devs, that would be great.
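Until there is an official answer, the practical approach seems to be to re-create the channel before the ~6 hour expiration. A minimal, untested sketch of that with google-api-python-client (the address and token are placeholders, and you have to supply your own authorized credentials):
import time
import uuid

from googleapiclient.discovery import build

def renew_watch_channel(creds):
    # creds: an already-authorized credentials object with the
    # admin.directory.user scope (how you obtain it is up to you).
    service = build("admin", "directory_v1", credentials=creds)
    body = {
        "id": str(uuid.uuid4()),            # fresh channel id on every renewal
        "type": "web_hook",
        "address": "https://www.example.com/notifications",   # placeholder
        "token": "SomeTokenHere",
        "params": {"ttl": "21600"},         # ask for the 6-hour maximum
    }
    channel = service.users().watch(customer="my_customer", body=body).execute()
    # Return how many seconds until this channel expires, so the caller
    # can schedule the next renewal shortly before that.
    return int(channel["expiration"]) / 1000.0 - time.time()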

In the response from Google, the expiration is a Unix epoch time in milliseconds; converted, it comes out to about 2 days from now, which means the channel is valid for 2 days.
You can optionally set an expiry limit, though.

Related

Nextjs export timeout configuration

I am building a website with Next.js that takes some time to build. It has to create a big dictionary, so when I run next dev it takes around 2 minutes to build.
The issue is, when I run next export to get a static version of the website, there is a timeout problem, because the build takes (as I said before) 2 minutes, which exceeds the 60-second limit pre-configured in Next.js.
The Next.js documentation (https://nextjs.org/docs/messages/static-page-generation-timeout) explains that you can increase the timeout limit, whose default is 60 seconds: "Increase the timeout by changing the staticPageGenerationTimeout configuration option (default 60 in seconds)."
However, it does not specify WHERE you can set that configuration option. In next.config.json? In package.json?
I could not find this information anywhere, and my blind attempts at putting this parameter in some of the files mentioned above did not work out at all. So, does anybody know how to set the timeout of next export? Thank you in advance.
They were a bit clearer in the basic-features/data-fetching part of the docs that it should be placed in next.config.js.
I added this to mine and it worked (it got rid of the build error Error: Collecting page data for /path/[pk] is still timing out after 2 attempts. See more info here https://nextjs.org/docs/messages/page-data-collection-timeout):
// next.config.js
module.exports = {
  // time in seconds of no pages generating during static
  // generation before timing out
  staticPageGenerationTimeout: 1000,
}

What is the cause of these Google beta transcoding service job validation errors

"failureReason": "Job validation failed: Request field config is
invalid, expected an estimated total output size of at most 400 GB
(current value is 1194622697155 bytes).",
The actual input file was only 8 seconds long. It was created using the Safari MediaRecorder API on macOS.
"failureReason": "Job validation failed: Request field
config.editList[0].startTimeOffset is 0s, expected start time less
than the minimum duration of all inputs for this atom (0s).",
The actual input file was 8 seconds long. It was created using the desktop Chrome MediaRecorder API, with mimeType "webm; codecs=vp9", on macOS.
Note that Stack Overflow wouldn't allow me to include the tag google-cloud-transcoder suggested by "Getting Support" https://cloud.google.com/transcoder/docs/getting-support?hl=sr
Like Faniel mentioned, your first issue is that your video was less than 10 seconds long, which is below the API's 10-second minimum.
Your second issue is that the "Duration" information is likely missing from the EBML headers of your .webm file. When you record with MediaRecorder, the duration of your video is set to N/A in the file headers, as it is not known in advance. This means the Transcoder API will treat the length of your video as Infinity / 0. Some consider this a bug in Chromium.
To confirm this is your issue, you can use ts-ebml or ffprobe to inspect the headers of your video. You can also use these tools to repair the headers. Read more about this here and here.
Also, just try running the Transcoder API with this demo .webm, which has its duration information set correctly.
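For what it's worth, here is a small sketch of checking whether the container-level duration is present, using ffprobe from Python (ffprobe must be on PATH; the file name is just a placeholder):
import json
import subprocess

def container_duration_seconds(path):
    # Ask ffprobe for the format-level duration only, as JSON.
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    duration = json.loads(out).get("format", {}).get("duration")
    # MediaRecorder-produced .webm files often have no duration here,
    # which is the symptom described above.
    return float(duration) if duration not in (None, "N/A") else None

print(container_duration_seconds("recording.webm"))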
This Google documentation states that the input file must be at least 5 seconds in duration and should be stored in Cloud Storage (for example, gs://bucket/inputs/file.mp4). A job validation error can occur when the inputs are not properly packaged and don't contain duration metadata, or contain incorrect duration metadata. When the inputs are not properly packaged, we can explicitly specify startTimeOffset and endTimeOffset in the job config to set the correct duration. If the estimated total output size, computed from the configured bitrate and the duration reported by ffprobe (in seconds), is more than 400 GB, it results in a job validation error. We can use the following formula to estimate the output size:
estimatedTotalOutputSizeInBytes = bitrateBps * outputDurationInSec / 8;
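As a rough sanity check of that formula (the bitrate here is just an illustrative number, not taken from the failing job):
# estimatedTotalOutputSizeInBytes = bitrateBps * outputDurationInSec / 8
bitrate_bps = 2_500_000          # e.g. a 2.5 Mbps output rendition
output_duration_sec = 8          # the 8-second clip from the question

estimated_bytes = bitrate_bps * output_duration_sec / 8
print(estimated_bytes)           # 2500000.0 bytes, i.e. about 2.4 MB

# The ~1.19 TB estimate in the error message presumably comes from a
# missing/bogus duration being plugged into outputDurationInSec.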
Thanks for the question and feedback. The Transcoder API currently has a minimum duration of 10 seconds which may be why the job wasn't successful.

Solr 7 not removing a node completely due to too small thread pool

Situation
I'm currently trying to set up SolrCloud in an AWS Autoscaling Group, so that it can scale dynamically.
I've also added the following triggers to Solr, so that each node will have one (and only one) replica of each collection:
{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#EACH"}
  ],
  "set-trigger": [{
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "preferredOperation": "ADDREPLICA"
  }, {
    "name": "node_lost_trigger",
    "event": "nodeLost",
    "waitFor": "120s",
    "preferredOperation": "DELETENODE"
  }]
}
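(For reference, one way to apply that JSON document is to POST it to the autoscaling write API; a sketch using the requests library, assuming the default port and the v2 /api/cluster/autoscaling path from the autoscaling docs, which may vary by Solr version:)
import json
import requests

SOLR = "http://localhost:8983"          # placeholder base URL

# autoscaling.json contains exactly the document shown above.
with open("autoscaling.json") as f:
    autoscaling_config = json.load(f)

resp = requests.post(f"{SOLR}/api/cluster/autoscaling", json=autoscaling_config, timeout=30)
resp.raise_for_status()
print(resp.json().get("responseHeader"))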
This works pretty well. But my problem is that when a node gets removed, Solr doesn't remove all 19 replicas from that node, and I have problems when accessing the "nodes" page.
In the logs, this exception occurs:
Operation deletenode failed:java.util.concurrent.RejectedExecutionException: Task org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$45/1104948431@467049e2 rejected from org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@773563df[Running, pool size = 10, active threads = 10, queued tasks = 0, completed tasks = 1]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
    at org.apache.solr.cloud.api.collections.DeleteReplicaCmd.deleteCore(DeleteReplicaCmd.java:276)
    at org.apache.solr.cloud.api.collections.DeleteReplicaCmd.deleteReplica(DeleteReplicaCmd.java:95)
    at org.apache.solr.cloud.api.collections.DeleteNodeCmd.cleanupReplicas(DeleteNodeCmd.java:109)
    at org.apache.solr.cloud.api.collections.DeleteNodeCmd.call(DeleteNodeCmd.java:62)
    at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:292)
    at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:496)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Problem description
So, the problem is that the executor only has a pool size of 10, all 10 threads are busy, and nothing gets queued (synchronous execution). In fact, it really only removed 10 replicas, and the other 9 replicas stayed there. When I manually send the API command to delete this node, it works fine, since Solr only needs to remove the remaining 9 replicas and everything is good again.
Question
How can I either increase this (small) thread pool size and/or enable queueing of the remaining deletion tasks? Another solution might be to retry the failed tasks until they succeed.
Using Solr 7.7.1 on Ubuntu Server installed with the installation script from Solr (so I guess it's using Jetty?).
Thanks for your help!
EDIT: I got feedback from the Solr user group mailing list. This seems to be a design flaw: https://issues.apache.org/jira/browse/SOLR-11208
It doesn't seem to be configurable right now, but if anyone has a workaround ready, I'd be happy to learn about it.
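One direction for a workaround, following the "retry until it succeeds" idea above: re-issue DELETENODE through the Collections API from the outside until the node is empty. A rough, untested sketch (base URL and node name are placeholders):
import time
import requests

SOLR = "http://localhost:8983/solr"
NODE = "10.0.0.12:8983_solr"            # placeholder node name

def delete_node_with_retries(max_attempts=5, wait_seconds=30):
    # Re-issue DELETENODE until Solr stops reporting failures,
    # to work around the rejected tasks from SOLR-11208.
    for attempt in range(max_attempts):
        resp = requests.get(
            f"{SOLR}/admin/collections",
            params={"action": "DELETENODE", "node": NODE, "wt": "json"},
            timeout=300,
        ).json()
        if resp.get("responseHeader", {}).get("status") == 0 and not resp.get("failure"):
            return True
        time.sleep(wait_seconds)
    return False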

Set sqlite3 timeout to > 5 seconds in flask-sqlalchemy

I've spent the last 20 hours trying to solve this problem, with no luck. I first saw "database is locked" on my production server when > 100 people tried to register all at once (within 10 seconds), and I tried recreating it using JMeter. When I run JMeter, every time it goes beyond 5 seconds I get the "database is locked" error. So I think I have successfully recreated the problem (at least after 20 hours!!!)
Almost everyone, including this piece of documentation on sqlite3, suggests that the problem stems from the 5-second timeout.
I tried the following:
1- I don't know the equivalent of this in flask-sqlalchemy:
engine = create_engine(..., connect_args={"options": "-c statement_timeout=1000"})
which was accepted as the correct answer here:
Configure query/command timeout with sqlalchemy create_engine?
2- This configuration doesn't have any effect:
db = SQLAlchemy(app)
db.engine.execute("PRAGMA busy_timeout=15000;")
3- This configuration will give the following error:
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 15
error:
TypeError: Invalid argument(s) 'pool_timeout' sent to create_engine(), using configuration SQLiteDialect_pysqlite/NullPool/Engine.
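For what it's worth, I believe the knob you are after is the driver-level timeout argument of sqlite3.connect() (the busy timeout, default 5 seconds), which Flask-SQLAlchemy can forward via the SQLALCHEMY_ENGINE_OPTIONS config key (available since Flask-SQLAlchemy 2.4). A sketch:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"

# SQLALCHEMY_ENGINE_OPTIONS is passed straight to create_engine();
# connect_args goes to sqlite3.connect(), whose `timeout` is the
# busy timeout in seconds (default 5).
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
    "connect_args": {"timeout": 15},
}

db = SQLAlchemy(app)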

Phoenix framework sends me "cookie named "_toDoListMaster_key" exceeds maximum size of 4096 bytes" when I try to persist an object

It's all in the title. When I look for the cookie in a browser, I get:
"_toDoListMaster_key SFMyNTY.g3QAAAABbQAAAAtfY3NyZl90b2tlbmQAA25pbA.ehmC7o9_KRHqClwacE38DX1JHZBmcPu79kJQpvDdBo localhost / Session 109 o"
It's just a key, so why that error?
Sorry for my limited English, and thank you for your help!
How many things have you stuffed in cookies?
Cookies from the same origin and the same path share a 4 KiB space. You got this error because something else probably already eats up 3.99 KiB of that cookie space.
You should not put too many things in cookies, especially things that may scale. If something is used purely in the browser, consider putting it in window.localStorage. If it is used purely on the server, put it in some sort of database (e.g. MySQL or Redis).
The header value is allowed to be at most 4096 bytes by default. In your case the length of that header's value is presumably much bigger.
You can configure a different maximum for Cowboy like so (the max_header_value_length key):
config :your_app, YourAppWeb.Endpoint,
  http: [
    port: 8080,
    compress: true,
    protocol_options: [
      request_timeout: 5000,
      max_header_value_length: 8192
    ],
    transport_options: [
      num_acceptors: 150
    ]
  ],
  url: [host: "example.com", port: 443, path: "/", scheme: "https"],
  cache_static_manifest: "priv/static/cache_manifest.json",
  root: ".",
  server: true,
  is_production: true,
  version: Mix.Project.config()[:version]
The documentation for the Cowboy configuration can be found here.
Heads up: increasing the maximum cookie value size is discouraged, as it may lead to browsers truncating the cookie value.
In some cases it may still be helpful to increase the cookie value size, for example if you are not the one setting the cookies but want to handle clients with oversized cookies.
Nevertheless, reach out to whoever is responsible for the cookie bomb!