Facebook Graph API: (#100) Attachment size exceeds allowable limit - facebook-graph-api

I am uploading a video attachment of around 17 MB, but somehow Facebook says it exceeds the allowed file size, which is 25 MB.
I am getting the following error while trying to upload the video:
(#100) Attachment size exceeds allowable limit. Depending on the file, encoding can increase the size of the uploaded file. Please upload the file in chunks to avoid hitting the max file size.
I have even tried uploading it via curl:
curl \
-F 'message={"attachment":{"type":"video", "payload":{"is_reusable":true}}}' \
-F 'filedata=@"/home/deepak/Downloads/file.mp4";type=video/mp4' \
"https://graph.facebook.com/v5.0/1052xxxxxxx/message_attachments?access_token=EAxxxxxxxxxxxxxxxxxxl"
I am getting the following error:
{
  "error": {
    "message": "(#100) Upload attachment failure.",
    "type": "OAuthException",
    "code": 100,
    "error_subcode": 2018047,
    "fbtrace_id": "xxxxxxxxxxx"
  }
}
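The error message itself suggests a chunked upload. Below is a minimal sketch of Facebook's resumable upload flow (the documented start/transfer/finish phases of the /{page-id}/videos endpoint) in Python; note that this targets the videos endpoint, not message_attachments, and the page ID, token, and path are the placeholders from the question:

import os
import requests

GRAPH = "https://graph-video.facebook.com/v5.0"
PAGE_ID = "1052xxxxxxx"                 # placeholder from the question
ACCESS_TOKEN = "EAxxxxxxxxxxxxxxxxxxl"  # placeholder from the question
PATH = "/home/deepak/Downloads/file.mp4"

# Phase 1: "start" declares the total file size and opens an upload session.
r = requests.post(f"{GRAPH}/{PAGE_ID}/videos", data={
    "upload_phase": "start",
    "file_size": os.path.getsize(PATH),
    "access_token": ACCESS_TOKEN,
}).json()
session_id = r["upload_session_id"]
start, end = int(r["start_offset"]), int(r["end_offset"])

# Phase 2: "transfer" sends the byte range the server asks for,
# looping until the offsets converge.
with open(PATH, "rb") as f:
    while start < end:
        f.seek(start)
        r = requests.post(
            f"{GRAPH}/{PAGE_ID}/videos",
            data={
                "upload_phase": "transfer",
                "upload_session_id": session_id,
                "start_offset": start,
                "access_token": ACCESS_TOKEN,
            },
            files={"video_file_chunk": f.read(end - start)},
        ).json()
        start, end = int(r["start_offset"]), int(r["end_offset"])

# Phase 3: "finish" commits the upload.
requests.post(f"{GRAPH}/{PAGE_ID}/videos", data={
    "upload_phase": "finish",
    "upload_session_id": session_id,
    "access_token": ACCESS_TOKEN,
})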

Related

GCP Snapshot create API failing through C++ code, but not through Postman

I've been trying to make API calls to create a snapshot of a GCP disk through my code, but it keeps giving me this error:
Failed to check snapshot status before creating GCPDisk swift-snapshot-gkeos-dkdwl-dynamic-pvc-1b13097f-7-1630579922. HTTP Error: GET request to the remote host failed [HTTP-Code: 404]: {
  "error": {
    "code": 404,
    "message": "The resource 'projects/rw-migration-dev/global/snapshots/swift-snapshot-gkeos-dkdwl-dynamic-pvc-1b13097f-7-1630579922' was not found",
    "errors": [
      {
        "message": "The resource 'projects/rw-migration-dev/global/snapshots/swift-snapshot-gkeos-dkdwl-dynamic-pvc-1b13097f-7-1630579922' was not found",
        "domain": "global",
        "reason": "notFound"
      }
    ]
  }
}
My program worked fine for a considerable amount of time, but now it sometimes gives errors.
I tried passing the same query through Postman and it works fine. Sometimes it works fine through the code, too.
The main problem is with the snapshot creation API:
https://compute.googleapis.com/compute/v1/projects/{projectName}/zones/{diskLocation}/disks/{diskName}/createSnapshot
This URL works fine in Postman; after creation you can see the snapshot when you list them. Through code, however, the call returns 200 OK but no snapshot is created.
Can someone tell me why this is happening?
I think you are trying to use this operation:
https://cloud.google.com/compute/docs/reference/rest/v1/disks/createSnapshot
which is a long-running operation: a 200 response indicates that the snapshot operation has started, not that it has finished.
The documentation points to:
https://cloud.google.com/compute/docs/api/how-tos/api-requests-responses#handling_api_responses
You may need to poll the operation until it completes before trying to use the snapshot.
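A minimal sketch of that polling loop in Python with requests, assuming an OAuth2 bearer token and a zone name (both placeholders; the zone is not given in the question). createSnapshot returns an Operation resource whose name can be polled under .../zones/{zone}/operations/ until its status is DONE:

import time
import requests

ACCESS_TOKEN = "ya29.xxxx"   # placeholder OAuth2 bearer token
BASE = "https://compute.googleapis.com/compute/v1"
PROJECT = "rw-migration-dev"
ZONE = "us-central1-a"       # assumption: the question does not name the zone
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def create_snapshot_and_wait(disk_name, snapshot_name):
    # createSnapshot returns immediately with an Operation resource.
    op = requests.post(
        f"{BASE}/projects/{PROJECT}/zones/{ZONE}/disks/{disk_name}/createSnapshot",
        headers=HEADERS,
        json={"name": snapshot_name},
    ).json()

    # Poll the zonal operation until its status becomes DONE.
    op_url = f"{BASE}/projects/{PROJECT}/zones/{ZONE}/operations/{op['name']}"
    while op.get("status") != "DONE":
        time.sleep(2)
        op = requests.get(op_url, headers=HEADERS).json()

    # A DONE operation can still carry an error object.
    if "error" in op:
        raise RuntimeError(op["error"])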

Google Explainable AI 429 error: Rate of traffic exceeds serving capacity. Decrease your traffic or reduce the size of your model

I'm trying to interpret an image prediction model using Google's Explainable AI, but I get a 429 error. The model I want to interpret applies transfer learning on top of MobileNet V3 Large.
Python code:
ig_response = remote_ig_model.explain(instances)
Result:
ValueError: Target URI https://asia-northeast1-ml.googleapis.com/v1/projects/cellimaging/models/A549_4/versions/v_ig_4:explain returns HTTP 429 error.
Please check the raw error message:
{
  "error": {
    "code": 429,
    "message": "Rate of traffic exceeds serving capacity. Decrease your traffic or reduce the size of your model: projects/842842301933/models/A549_4/versions/v_ig_4.",
    "status": "RESOURCE_EXHAUSTED"
  }
}
When creating the serving model using AI Platform, the command is as follows.
Code:
! gcloud beta ai-platform versions create $IG_VERSION --region='asia-northeast1' \
--model $MODEL \
--origin $export_path \
--runtime-version 2.2 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-highcpu-32 \
--explanation-method integrated-gradients \
--num-integral-steps 25
I tried changing the machine type (n1-standard-4 -> n1-highcpu-32), but the 429 error is not resolved.
You can have a look at this troubleshooting section; your request can't be larger than 1.5 MB:
A single online prediction request must contain no more than 1.5 MB of data.
If your input is 1024 x 1024 x 3, the image alone is about 3 MB (1024 x 1024 x 3 bytes) and therefore too big for the API.
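A quick back-of-the-envelope check in Python (the 512 x 512 figure below is just an illustrative assumption, not something from the thread):

# Raw tensor size for a 1024 x 1024 RGB image, one byte per channel:
h, w, c = 1024, 1024, 3
print(h * w * c / (1024 ** 2))      # ~3.0 MB, double the 1.5 MB request cap

# base64 encoding in the JSON payload adds roughly another third on top.
# Downscaling to 512 x 512 (an illustrative figure) fits under the limit:
print(512 * 512 * 3 / (1024 ** 2))  # ~0.75 MB raw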

How to retrieve the current listeners from a Wowza Cloud live stream via API?

We are using Wowza Cloud to run a weekly live streaming event. Is there a way to get the current listeners as live data from the API?
We found two endpoints, but they appear to be equally dysfunctional:
https://api.cloud.wowza.com/api/v1.4/usage/stream_targets/y7tm2dfl/live leads to
{
  "meta": {
    "status": 403,
    "code": "ERR-403-RecordUnaccessible",
    "title": "Record Unaccessible Error",
    "message": "The requested resource isn't accessible.",
    "description": ""
  },
  "request_id": "def6744dc2d7a609c61f488560b80019",
  "request_timestamp": "2020-03-27T19:54:14.443Z"
}
https://api.cloud.wowza.com/api/v1.4/usage/viewer_data/stream_targets/y7tm2dfl leads to
{
  "meta": {
    "status": 404,
    "code": "ERR-404-RouteNotFound",
    "title": "Route Not Found Error",
    "message": "The requested endpoint couldn't be found.",
    "description": ""
  },
  "request_id": "11dce4349e0b97011820a39032d9664a",
  "request_timestamp": "2020-03-27T19:56:01.637Z"
}
y7tm2dfl is one of the two stream target IDs we get from calling https://api.cloud.wowza.com/api/v1.4/live_streams/nfpvspdh/stats
Is this the right way? According to this question, the data might only be available with a delay of 2 hours...
Does anybody know of something that can actually count as live data?
Thanks a lot!
From Wowza Support:
The below endpoint is the correct one to use for near-realtime view counts:
curl -H "wsc-api-key: ${WSC_API_KEY}" \
-H "wsc-access-key: ${WSC_ACCESS_KEY}" \
-H "Content-Type: application/json" \
-X "GET" \
"https://api.cloud.wowza.com/api/v1.4/usage/stream_targets/y7tm2dfl/live"
It appears this stream target "y7tm2dfl" is an Akamai push, which takes 2 or more hours to produce results. You'll need to create a new stream target that uses Fastly to take advantage of the near-realtime stats.
https://www.wowza.com/docs/add-and-manage-stream-targets-in-wowza-streaming-cloud#add-a-wowza-cdn-on-fastly-target-for-hls-playback
This will retrieve the "Current Unique Viewers", defined as "the number of unique viewers for the stream in the last 90 seconds". This is only available with Fastly stream targets in API v1.4.
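For reference, a minimal sketch of the same call in Python with requests, assuming the two keys are supplied via environment variables; the response field names aren't shown in this thread, so the sketch just prints the raw JSON:

import os
import requests

STREAM_TARGET_ID = "y7tm2dfl"  # one of the IDs from the stats call above
resp = requests.get(
    f"https://api.cloud.wowza.com/api/v1.4/usage/stream_targets/{STREAM_TARGET_ID}/live",
    headers={
        "wsc-api-key": os.environ["WSC_API_KEY"],
        "wsc-access-key": os.environ["WSC_ACCESS_KEY"],
        "Content-Type": "application/json",
    },
)
print(resp.json())  # field names not documented in this thread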

EROFS error when hitting Lambda function endpoint

I have deployed my Express.js code to Lambda using Claudia.js. When I hit the generated API endpoint, every alternate request gives me an internal server error. I checked the logs and found this:
"errorType": "Error",
"errorMessage": "EROFS: read-only file system, mkdir '/var/task/logs'",
"code": "EROFS",
"stack": [
"Error: EROFS: read-only file system, mkdir '/var/task/logs'"
],
"cause": {
"errorType": "Error",
"errorMessage": "EROFS: read-only file system, mkdir '/var/task/logs'",
"code": "EROFS",
"stack": [
"Error: EROFS: read-only file system, mkdir '/var/task/logs'"
],
"errno": -30,
"syscall": "mkdir",
"path": "/var/task/logs"
},
"isOperational": true,
"errno": -30,
"syscall": "mkdir",
"path": "/var/task/logs"
}
I am not able to figure out what the issue could be, and why it occurs only on alternate requests and not on every request. How do I go about it?
Any help would be appreciated.
Lambda doesn't allow you to write to the space your deployment package occupies (/var/task is a read-only file system).
Since you are trying to create a directory, presumably to write logs, it's giving you this error.
To write application logs from Lambda, you can use AWS CloudWatch log streaming:
https://www.npmjs.com/package/lambda-log
You can use the /tmp directory for temporary files, but it is also limited, to 512 MB.
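To illustrate, a sketch in Python (the question is Node.js, but the file-system behaviour is the same for every Lambda runtime):

import os
import tempfile

# /var/task holds your read-only deployment package; only /tmp is writable.
log_dir = os.path.join(tempfile.gettempdir(), "logs")  # "/tmp/logs" on Lambda
os.makedirs(log_dir, exist_ok=True)                    # succeeds

# The same call against the package directory raises EROFS (errno -30):
# os.makedirs("/var/task/logs")

# Simplest fix: log to stdout/stderr; Lambda forwards both streams
# to CloudWatch Logs automatically.
print("application log line, shipped to CloudWatch")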
Check these links:
AWS Lambda Limits
Accessing Amazon CloudWatch Logs for AWS Lambda

Google Vision API request size limitation (text detection)

I'm using the Google Vision API via curl (the image is sent as a base64-encoded payload within JSON). I get correct results back only when my request sent via curl is under 16 KB or so. As soon as it's over ~16 KB, I get no response at all.
Exactly the same request works when the image is smaller.
I have added the request over 16 KB to Pastebin:
{
  "requests": [
    {
      "image": {
        "content": ...base64...
        ....
}
Failing request is here:
https://pastebin.com/dl/vL4Ahfw7
I could only find a 20 MB limit in the docs (https://cloud.google.com/vision/docs/supported-files?hl=th), but nothing that explains the weird issue I'm seeing. Thanks.
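For comparison, a minimal sketch of the same request in Python with requests, reading the image from disk instead of embedding the JSON in a shell argument; the file path and API key are placeholders, and the TEXT_DETECTION feature is an assumption based on the question's title:

import base64
import requests

API_KEY = "xxxx"          # placeholder
IMAGE_PATH = "image.jpg"  # placeholder

# Base64-encode the image file for the JSON payload.
with open(IMAGE_PATH, "rb") as f:
    content = base64.b64encode(f.read()).decode("ascii")

body = {
    "requests": [
        {
            "image": {"content": content},
            "features": [{"type": "TEXT_DETECTION"}],  # assumption
        }
    ]
}
resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": API_KEY},
    json=body,
)
print(resp.json())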