Facebook API OAuthException code 2 creating ads group - facebook-graph-api

I am running into this poorly documented error when creating an ad group via the FB API. It asks the user to retry later, but I have been retrying for the past 18 hours.
The error says:
{"error":{"message":"An unexpected error has occurred. Please retry your request later.","type":"OAuthException","code":2}}
Here is the curl command (I took the exact first example from this doc page: https://developers.facebook.com/docs/reference/ads-api/adgroup/):
curl \
-F "name=my ad" \
-F "campaign_id={my_campaign_id}" \
-F "bid_type=CPC" \
-F "bid_info={'CLICKS':110}" \
-F "targeting={'countries':['US']}" \
-F "creative={'creative_id':{my_creative_id}}" \
-F "access_token={my_fb_app_access_token}" \
"https://graph.facebook.com/act_{my_account_id}/adgroups"
*The access_token is my whitelisted app token, and I have extended it to a long-lived token.
When I removed any of the required fields, the error was specific. For example, if I removed creative, it gave me this error: {"error":{"type":"Exception","message":"The Adgroup Create Failed for the following reason: You must include one of \"max_bid\" or \"bid_info\" fields","code":1487087}}
Has anyone seen the same issue, and does anyone know whether it is just a FB server issue that will fix itself?

According to the official docs at https://developers.facebook.com/docs/graph-api/using-graph-api/v2.0, this appears to be a "temporary issue due to downtime" and you should retry the operation after waiting. Although this question seems to indicate that may not always be entirely accurate.
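Since code 2 is documented as transient, one pragmatic workaround is to retry with exponential backoff rather than hammering the endpoint. A minimal sketch (the retry count and delays are arbitrary choices; wrap whatever curl call you are making):

```shell
#!/bin/sh
# Retry a command up to 5 times with exponential backoff (1, 2, 4, 8 s waits).
# Returns 0 as soon as the command succeeds, 1 if every attempt fails.
retry_with_backoff() {
  attempt=1
  delay=1
  while :; do
    if "$@"; then
      return 0
    fi
    if [ "$attempt" -ge 5 ]; then
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}
```

Usage would be something like `retry_with_backoff curl -sf -F "name=my ad" ... "https://graph.facebook.com/act_{my_account_id}/adgroups"` — curl's `-f` makes HTTP error responses fail the command so the retry actually triggers.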

SNMP++ library ignore USM model and accept every input even without auth

I am building a C++ application whose purpose is, among other things, to receive SNMP traps. For this I am using the SNMP++ library, v3.3 (https://agentpp.com/download.html, C++ APIs, SNMP++ 3.4.9).
I expected traps using no authentication to be discarded/dropped if the configuration requested some form of authentication, but that does not seem to be the case.
To confirm this behavior, I used the provided receive_trap example available in the consoleExamples directory. I commented out every call to
usm->add_usm_user(...)
except for the one with "MD5" as the security name:
usm->add_usm_user("MD5",
SNMP_AUTHPROTOCOL_HMACMD5, SNMP_PRIVPROTOCOL_NONE,
"MD5UserAuthPassword", "");
I then sent a trap (matching the "MD5" security name) to the application using net-snmp:
snmptrap -v 3 -e 0x090807060504030200 -u MD5 -Z 1,1 -l noAuthNoPriv localhost:10162 '' 1.3.6.1.4.1.8072.2.3.0.1 1.3.6.1.4.1.8072.2.3.2.1 i 123456
Since the application's only registered User Security Model user requires MD5 authentication, I would have thought the trap would be refused/dropped/discarded, but it was not:
Trying to register for traps on port 10162.
Waiting for traps/informs...
press return to stop
reason: -7
msg: SNMP++: Received SNMP Notification (trap or inform)
from: 127.0.0.1/45338
ID: 1.3.6.1.4.1.8072.2.3.0.1
Type:167
Oid: 1.3.6.1.4.1.8072.2.3.2.1
Val: 123456
To make sure no "default" user security model was being used instead, I then commented out the remaining call
usm->add_usm_user("MD5",
SNMP_AUTHPROTOCOL_HMACMD5, SNMP_PRIVPROTOCOL_NONE,
"MD5UserAuthPassword", "");
and sent my trap again using the same command. This time nothing happened:
Trying to register for traps on port 10162.
Waiting for traps/informs...
press return to stop
SNMPv3 is around 18k lines of RFCs, so it is entirely possible I missed or misunderstood something, but I would expect to be able to specify which security level I require and drop everything that does not match. What am I missing?
EDIT: additional testing with snmpd
I have done some tests with snmpd and I still get similar results.
I created a user:
net-snmp-create-v3-user -ro -A STrP#SSWRD -a SHA -X STr0ngP#SSWRD -x AES snmpadmin
Then I try with authPriv:
snmpwalk -v3 -a SHA -A STrP#SSWRD -x AES -X STr0ngP#SSWRD -l authPriv -u snmpadmin localhost
The request is accepted
with authNoPriv:
snmpwalk -v3 -a SHA -A STrP#SSWRD -x AES -l AuthNoPriv -u snmpadmin localhost
The request is accepted
with noAuthNoPriv:
snmpwalk -v3 -a SHA -A STrP#SSWRD -x AES -X STr0ngP#SSWR2D -l noauthnoPriv -u snmpadmin localhost
The request is rejected.
As I understand it, the authNoPriv request should be rejected, but it is accepted; this contradicts what I have read in the RFC and the Cisco SNMPv3 summary.
Disclaimer: I am not an expert, so take the following with a pinch of salt.
I cannot speak for the library you are using, but regarding the SNMPv3 flow:
In SNMPv3 exchanges, the USM performs validation only on the side of the SNMP authoritative engine, and which side is authoritative depends on the kind of message:
Agent authoritative for: GET / SET / TRAP
Receiver authoritative for: INFORM
RFC 3414 describes the reception of messages in section 3.2:
If the information about the user indicates that it does not
support the securityLevel requested by the caller, then the
usmStatsUnsupportedSecLevels counter is incremented and an error
indication (unsupportedSecurityLevel) together with the OID and value
of the incremented counter is returned to the calling module.
If the securityLevel specifies that the message is to be
authenticated, then the message is authenticated according to the
user’s authentication protocol. To do so a call is made to the
authentication module that implements the user’s authentication
protocol according to the abstract service primitive
So in step 6, the securityLevel is the one from the message. This means the trap is accepted by the USM layer on the receiver side even if authentication is not provided.
It is then up to the consumer of the trap to decide whether the message should be processed or not.
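To see this from the sender's side, you can (for example) send the same trap again but authenticated, and confirm that both variants are delivered to your receiver. The flags mirror the net-snmp command in the question; only the authentication options change, and the passphrase matches the add_usm_user call:

```shell
# Send the same trap, but with authNoPriv instead of noAuthNoPriv.
# -a selects the auth protocol, -A the passphrase (net-snmp options).
snmptrap -v 3 -e 0x090807060504030200 -u MD5 \
  -a MD5 -A MD5UserAuthPassword -l authNoPriv \
  localhost:10162 '' 1.3.6.1.4.1.8072.2.3.0.1 \
  1.3.6.1.4.1.8072.2.3.2.1 i 123456
```

Per the RFC flow described above, the receiver-side USM will deliver both variants for traps; filtering by security level has to happen in your application once the notification is handed to it.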

Ways to find out how soon the AWS session expires?

Prerequisites
I have a script that works with AWS but does not deal with credentials explicitly. It just calls the AWS API, expecting the credentials to be there according to the default credentials provider chain. In fact, the wrapper that calls this script obtains temporary credentials and passes them in environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN).
Problem
The wrapper usually reuses existing credentials and only asks to re-authenticate when they are about to expire. So there is a possibility that it passes credentials with only a few minutes left to live, which may not be enough, as the script usually takes a long time to run. Unfortunately, I don't have control over the wrapper, so I would like the script to check how much time it has left and decide whether to start or abort early, to prevent failure mid-flight.
AWS doesn't seem to provide a standard way to ask "how much time do I have before my current session expires?" If I had control over the wrapper, I would make it pass the expiry date in an environment variable as well. I was hoping that AWS_SESSION_TOKEN was a sort of JWT, but unfortunately it is not, and it does not seem to contain any timestamp.
Can anyone suggest any other ways around the given problem?
You probably need to check for the value of $AWS_SESSION_EXPIRATION
echo $AWS_SESSION_EXPIRATION
It should give you the expiration in Zulu time (Coordinated Universal Time), like:
2022-05-17T20:20:40Z
Then you need to compare the date on your system against it.
The instructions below work on macOS; for Linux you may have to modify the date parameters.
Let's check the current system time in Zulu Time:
date -u +'%Y-%m-%dT%H:%M:%SZ'
This should give something like:
2022-05-17T20:21:14Z
To automate things, you can create a bash function and add it to your ~/.bashrc or favorite terminal theme:
aws_session_time_left() {
  zulu_time_now=$1
  aws_session_expiration_epoch="`date -j -u -f '%Y-%m-%dT%H:%M:%SZ' $AWS_SESSION_EXPIRATION '+%s'`"
  zulu_time_now_epoch="`date -j -u -f '%Y-%m-%dT%H:%M:%SZ' $zulu_time_now '+%s'`"
  if [[ $zulu_time_now < $AWS_SESSION_EXPIRATION ]]; then
    secs="`expr $aws_session_expiration_epoch - $zulu_time_now_epoch`"
    echo "+`printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60))`"
  else
    secs="`expr $zulu_time_now_epoch - $aws_session_expiration_epoch`"
    echo "-`printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60))`"
  fi
}
To see the result of your function, run:
aws_session_time_left "`date -u +'%Y-%m-%dT%H:%M:%SZ'`"
When your session is still active, it can give you something like:
+0h:56m:35s
or, when the session has expired, how long ago it expired:
-28h:13m:42s
NOTE: the hours can exceed 24.
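For Linux, the BSD-style `date -j -u -f` flags used above don't exist; a rough equivalent using GNU date's `-d` to parse the timestamp might look like this (a sketch — the function name is mine, and it assumes AWS_SESSION_EXPIRATION holds the same Zulu-time format):

```shell
#!/bin/bash
# GNU date (Linux) variant: prints +Hh:MMm:SSs until expiry, or -Hh:MMm:SSs since.
aws_session_time_left_gnu() {
  local expiry_epoch now_epoch secs sign
  # GNU date parses ISO-8601 timestamps directly with -d
  expiry_epoch=$(date -u -d "$AWS_SESSION_EXPIRATION" '+%s') || return 1
  now_epoch=$(date -u '+%s')
  if [ "$now_epoch" -le "$expiry_epoch" ]; then
    sign='+'; secs=$((expiry_epoch - now_epoch))
  else
    sign='-'; secs=$((now_epoch - expiry_epoch))
  fi
  printf '%s%dh:%02dm:%02ds\n' "$sign" $((secs/3600)) $((secs%3600/60)) $((secs%60))
}
```

Comparing epoch integers also sidesteps the string comparison (`<`) used in the macOS version.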
BONUS: a simplified, parameterless standalone script version.
For example, as a standalone file get-aws-session-time-left.sh:
#!/bin/bash
# Use to find out IF "aws session expiration" exists AND compare the current system time to IT
# These are the expected result types we want to have:
# - "no aws session found" (NOTE: this does not mean there is no aws session open in another terminal)
# - how long until the session expires (for example +0h:59m:45s)
# - how long since the session expired (for example -49h:41m:12s)
# NOTE: the hours do not reset every 24 hours; we do not need to display the days
# IMPORTANT: the date arguments work on macOS; for other OS types they may need adapting
if [[ $AWS_SESSION_EXPIRATION != '' ]]; then
  zulu_time_now="`date -u +'%Y-%m-%dT%H:%M:%SZ'`" # TODO: see important note above
  aws_session_expiration_epoch="`date -j -u -f '%Y-%m-%dT%H:%M:%SZ' $AWS_SESSION_EXPIRATION '+%s'`" # TODO: see important note above
  zulu_time_now_epoch="`date -j -u -f '%Y-%m-%dT%H:%M:%SZ' $zulu_time_now '+%s'`" # TODO: see important note above
  if [[ $zulu_time_now < $AWS_SESSION_EXPIRATION ]]; then
    secs="`expr $aws_session_expiration_epoch - $zulu_time_now_epoch`"
    echo "+`printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60))`"
  else
    secs="`expr $zulu_time_now_epoch - $aws_session_expiration_epoch`"
    echo "-`printf '%dh:%02dm:%02ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60))`"
  fi
else
  echo "no aws session found"
fi
Then to consult the AWS session expiration, you can just run:
bash get-aws-session-time-left.sh
---EDITS---
IMPORTANT:
Some info about the origin of AWS_SESSION_EXPIRATION can be found here
NOTE: aws sso login does not appear to set this environment variable; I would be curious if someone can clarify whether AWS_SESSION_EXPIRATION is only present when using aws-vault.
You can use aws configure get to get the expiry time:
AWS_SESSION_EXPIRATION=$(aws configure get ${AWS_PROFILE}.x_security_token_expires)
(obviously, replace ${AWS_PROFILE} with your profile name if the variable is not set.)

GCP Scheduler Error: function execution failed. Details: Attribute 'label' missing from payload

I am following this tutorial in GCP to make a scraper run on a schedule:
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
It seems like the flow works as:
1) Scheduler
2) PubSub
3) Function
4) Compute instance
but when I tried to check whether it is working, it keeps showing this error:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
error: |-
Error: function execution failed. Details:
Attribute 'label' missing from payload
but nowhere can I find how to fill the label into the payload, and I don't know what is happening here.
GCP tutorial sucks...
Can anybody help me with this?
P.S. When I run npm test:
➜ scheduleinstance git:(master) npm test
> cloud-functions-schedule-instance@0.1.0 test /Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance
> mocha test/*.test.js --timeout=20000
functions_start_instance_pubsub
✓ startInstancePubSub: should accept JSON-formatted event payload with label (284ms)
Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:160:19)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async GoogleAuth.getClient (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:502:17)
at async GoogleAuth.authorizeRequest (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:543:24)
✓ startInstancePubSub: should fail with missing 'zone' attribute
✓ startInstancePubSub: should fail with missing 'label' attribute
✓ startInstancePubSub: should fail with empty event payload
functions_stop_instance_pubsub
✓ stopInstancePubSub: should accept JSON-formatted event payload with label
✓ stopInstancePubSub: should fail with missing 'zone' attribute
✓ stopInstancePubSub: should fail with missing 'label' attribute
✓ stopInstancePubSub: should fail with empty event payload
John Hanley from the comments above:
The error message comes from the code in index.js because you probably did not encode the payload correctly. This is an example of where you should not include pictures; you should copy and paste the actual error. The payload that you created is base64 and we cannot decode that from a picture. You should base64 encode something similar to {"zone":"us-west1-b", "label":"env=dev"}.
Your payload decoded: {"zone":"us-west1-b","instance":"workday-instance"}. That does not match what the code expects. Look at the example in my comment again. Base64 encoding is very simple and there are many articles on the Internet.
Thanks to @JohnHanley, I solved the problem in my question, and I am giving the solution in case other people experience the same problem, since the Google tutorial was not user-friendly.
I was following the tutorial that can set the scheduler of compute instance so that my scraper can work and close at given time.
[Scheduler tutorial of GCP]
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
In this tutorial, process works as follows:
1. Set the scheduler to call the Pubsub
2. Pubsub send message to Cloud Functions
3. Cloud wakes and closes the compute instance
*4. I turned on the compute engine at 23:50, and used cron inside the compute engine to run my scraper at 00:00, and finally turned off the compute engine at 1:00.
I will skip all the non-problematic lines of the script and only deal with the one that made me sick for a few days.
After setting up the compute instances and Pub/Sub, you have to deploy the functions.
gcloud functions deploy startInstancePubSub \
--trigger-topic start-instance-event \
--runtime nodejs6
gcloud functions deploy stopInstancePubSub \
--trigger-topic stop-instance-event \
--runtime nodejs6
Here it says runtime nodejs6, but you have to set it to nodejs8, since nodejs6 has been deprecated and this tutorial doesn't mention that.
After that, you have to test that the functions are callable.
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
The --data parameter needs the JSON data encoded in base64, as follows:
echo '{"zone":"us-west1-b", "instance":"workday-instance"}' | base64
eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==
However, when I followed the instructions, it returned this error:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
error: |-
Error: function execution failed. Details:
Attribute 'label' missing from payload
Since I was not used to GCP, I had no idea what 'label' meant. But by following the comments from @JohnHanley, I changed the line to:
echo '{"zone":"asia-northeast2-a", "label":"env:dev", "instance":"workday-instance"}' | base64
eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==
gcloud functions call stopInstancePubSub --data '{"data":"eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
And this worked like magic. I hadn't set any labels on the function, but it worked anyway. Still, to be completely safe, I set the function's label to "env:dev" to be sure.
This actually extends to the lines below:
gcloud beta scheduler jobs create pubsub startup-workday-instance \
--schedule '0 9 * * 1-5' \
--topic start-instance-event \
--message-body '{"zone":"us-west1-b","instance":"workday-instance"}' \
--time-zone 'America/Los_Angeles'
In this message-body, "label" is missing. I tested both the with-label and without-label message versions; it turned out that although the without-label version reported success, it actually didn't do anything.
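For the manual gcloud functions call test above, you can build the label-carrying payload and sanity-check the base64 round trip before sending it. A sketch — the zone, instance, and "env:dev" label value are the placeholders from this answer; match whatever label your function expects:

```shell
#!/bin/sh
# Build the Pub/Sub message body including the required "label" key and
# base64-encode it for `gcloud functions call --data`.
payload='{"zone":"asia-northeast2-a", "label":"env:dev", "instance":"workday-instance"}'
# printf (unlike echo) adds no trailing newline, which is why the strings
# in this answer end in "Cg==" and this one won't.
encoded=$(printf '%s' "$payload" | base64 | tr -d '\n')
# Sanity check: decoding must give back the original JSON
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$encoded"
```

You would then call something like `gcloud functions call stopInstancePubSub --data "{\"data\":\"$encoded\"}"`.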

need help in patch operations > Convert to Binary

I want to send an image to AWS API Gateway in base64, hence I went through some articles where it was necessary to perform patch operations to convert the image to binary (https://medium.com/@adil/how-to-send-an-image-as-a-response-via-aws-lambda-and-api-gateway-3820f3d4b6c8).
But after thoroughly going through the instructions and trying to apply them:
chiragMacBook:new chirag912$ aws apigateway update-integration-response \
--rest-api-id q1205tf9ok \
--resource-id t4ssj5 \
--http-method GET \
--status-code 200 \
--patch-operations '[{"op":"replace","path":"/contentHandling","value": "CONVERT_TO_BINARY"}]'
I came across this error.
An error occurred (NotFoundException) when calling the UpdateIntegrationResponse operation: Invalid Method identifier specified
Not sure if you have already figured this out yourself, but this error usually happens when you try to update an integration response for an HTTP method that doesn't exist.
So verify that a GET method is defined. One common scenario: you defined a proxy with the ANY method but are trying to update GET, which results in the same error message.
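One way to check (a sketch — the IDs are the placeholders from the question, and this needs valid AWS credentials) is to ask API Gateway whether the method exists before patching:

```shell
# Describe the GET method on the resource; a NotFoundException here
# confirms GET is not defined on this resource (e.g. only ANY exists).
aws apigateway get-method \
  --rest-api-id q1205tf9ok \
  --resource-id t4ssj5 \
  --http-method GET
```

If only ANY is defined, run the update-integration-response command with `--http-method ANY` instead.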

Using HyperLedger Fabric with C++ Application

So I am considering HyperLedger Fabric for use with an application I have written in C++. From my understanding, the interactions, i.e. posting and retrieving data, are all done in chaincode; in all of the examples I have seen, this is invoked using the CLI interface docker container.
I simply want to be able to store data produced by my application on a blockchain.
My question is: how do I invoke the chaincode externally? Surely this can be done. I saw that there was a REST SDK, but it is no longer supported, so I don't want to go near it, to be honest. What other options are available?
Thanks!
There are two official SDKs you can try out.
Fabric Java SDK
Node JS SDK
As correctly mentioned by @Ajaya Mandal, you can use the SDKs to automate the invoking process. For example, you can start the node app as written in app.js of the balance-transfer example and hit the API as shown in the ./testAPI.sh file.
echo "POST invoke chaincode on peers of Org1 and Org2"
echo
VALUES=$(curl -s -X POST \
http://localhost:4000/channels/mychannel/chaincodes/mycc \
-H "authorization: Bearer $ORG1_TOKEN" \
-H "content-type: application/json" \
-d "{
\"peers\": [\"peer0.org1.example.com\",\"peer0.org2.example.com\"],
\"fcn\":\"move\",
\"args\":[\"a\",\"b\",\"10\"]
}")
Here you can add your arguments and pass it as you wish. You can use this thread to see how you can pass an HTTP request from C++.