Facebook Marketing API - Creating an "advideo" in Sandbox Environment - facebook-graph-api

I'm trying to create a video in Sandbox mode, but the request fails with output like this:
Params: {'title': 'test1', 'description': 'test'}
Status: 400
Response:
{
  "error": {
    "message": "Unsupported post request. Object with ID 'act_x' does not exist, cannot be loaded due to missing permissions, or does not support this operation. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api",
    "type": "GraphMethodException",
    "code": 100,
    "error_subcode": 33,
    "fbtrace_id": "AL_IO0ED9eQLAYcVGH2Ae94"
  }
}
Here is the code I'm trying to run:
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from pathlib import Path

my_app_id = 'xxxx'
my_app_secret = 'xxxx'
my_access_token = 'xxxx'
FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token, api_version="v14.0")

my_account = AdAccount('act_x')
video_path = Path(__file__).parent / 'video.mp4'

fields = []
params = {
    "title": "test1",
    "description": "test",
    "source": video_path,
}
video = my_account.create_ad_video(params=params, fields=fields)
I'm wondering whether I'm simply unable to create ad images or ad videos in Sandbox mode.

My bad... I had mistakenly removed a character from the act_{id}, which is why it threw that error.
Make sure your credentials and IDs are valid; you can upload media in Sandbox mode.
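Since the error here came from a malformed account ID, a quick sanity check before initializing the SDK can catch typos early. The helper below is a hypothetical illustration (not part of the facebook_business SDK), assuming ad account IDs always take the form `act_` followed by digits:

```python
import re

def validate_ad_account_id(account_id: str) -> str:
    """Check that an ad account ID looks like 'act_' followed by digits.

    Raises ValueError on malformed input, so a typo fails fast instead of
    surfacing later as a confusing Graph API "does not exist" error.
    """
    if not re.fullmatch(r"act_\d+", account_id):
        raise ValueError(f"Malformed ad account ID: {account_id!r}")
    return account_id

# A well-formed ID passes through unchanged:
validate_ad_account_id("act_1234567890")
```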

Related

How can I create a bot in Amazon Lex for getting weather update?

I am trying to get weather updates. The Python code works well on its own, but I am unable to embed it into Amazon Lex; it shows a "received error response" message.
from botocore.vendored import requests  # note: botocore.vendored is deprecated; bundle the 'requests' package with your Lambda instead
# using openweathermap api
api_address = 'http://api.openweathermap.org/data/2.5/weather?appid=__api_key_here__&q='
city = input("Enter city >> ")
url = api_address + city
json_data = requests.get(url).json()
formatted_data = json_data['weather'][0]['main']
desc_data = json_data['weather'][0]['description']
print(formatted_data)
print(desc_data)
# print(json_data)
First, make sure the API call itself runs correctly in plain Python code.
Depending on the next state, you may need to set the type to ElicitSlot or ElicitIntent.
If you are using Lambda as the backend for Lex, you need to send the response in the format below. You can refer to the documentation on Lambda response formats for details.
{
  "dialogAction": {
    "type": "Close",
    "fulfillmentState": "Fulfilled",
    "message": {
      "contentType": "PlainText",
      "content": "Thanks, your pizza has been ordered."
    },
    "responseCard": {
      "version": integer-value,
      "contentType": "application/vnd.amazonaws.card.generic",
      "genericAttachments": [
        {
          "title": "card-title",
          "subTitle": "card-sub-title",
          "imageUrl": "URL of the image to be shown",
          "attachmentLinkUrl": "URL of the attachment to be associated with the card",
          "buttons": [
            {
              "text": "button-text",
              "value": "Value sent to server on button click"
            }
          ]
        }
      ]
    }
  }
}
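As a minimal sketch of how this format is produced from a fulfillment Lambda (assuming Lex V1 and a slot named `city`, which you would adjust to your bot's actual slot names), a handler that wraps the weather result in a `Close` dialog action might look like:

```python
def build_close_response(message_text):
    """Build a Lex V1 'Close' fulfillment response around a plain-text message."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": message_text,
            },
        }
    }

def lambda_handler(event, context):
    # 'currentIntent'/'slots' follow the Lex V1 input event shape;
    # the slot name 'city' is an assumption for this sketch.
    city = event.get("currentIntent", {}).get("slots", {}).get("city", "unknown")
    # A real handler would call the weather API here; hard-coded for illustration.
    return build_close_response(f"Weather lookup for {city} goes here.")
```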

How do I get the Linked_Account_Name while calling the Cost Explorer API

I have the code below to get cost details using boto3, which returns data grouped by account_id. I need the details on the basis of Linked_Account_Name. Can someone guide me on how to proceed?
response = ce.get_cost_and_usage(
    TimePeriod={
        'Start': '2020-01-01',
        'End': '2020-01-03'
    },
    Granularity='MONTHLY',
    Metrics=[
        'UnblendedCost',
    ],
    GroupBy=[
        {
            'Type': 'DIMENSION',
            'Key': 'LINKED_ACCOUNT'
        },
    ]
)
LINKED_ACCOUNT_NAME is not a valid dimension in any of the three contexts ('COST_AND_USAGE', 'RESERVATIONS', 'SAVINGS_PLANS'), and the grouping dimensions in get_cost_and_usage() are also limited to values such as LINKED_ACCOUNT, REGION, or RIGHTSIZING_TYPE. So you won't be able to use it there.
Instead, you can use the get_dimension_values() function to get the linked account name (see the Cost Explorer API documentation for more info):
client = session.client('ce')
response = client.get_dimension_values(
    SearchString='123456789098',
    TimePeriod={
        'Start': '2020-01-01',
        'End': '2020-03-01'
    },
    Dimension='LINKED_ACCOUNT',
    Context='COST_AND_USAGE'
)
for each in response['DimensionValues']:
    print('Account Name is ->', each['Attributes']['description'])
The output will look like this:
Account Name is -> Test 0100
It's not a complete answer, but you can proceed from here.
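To take it a step further, you can join the two calls: build an ID-to-name map from the get_dimension_values response, then label the grouped cost rows with it. The helpers below are a sketch that operates on the response dictionaries; the field names follow the standard Cost Explorer response shapes shown above:

```python
def map_account_names(dimension_values_response):
    """Build {account_id: account_name} from a get_dimension_values response."""
    return {
        item["Value"]: item["Attributes"].get("description", item["Value"])
        for item in dimension_values_response["DimensionValues"]
    }

def label_cost_groups(cost_response, id_to_name):
    """Attach linked-account names to get_cost_and_usage group results."""
    rows = []
    for period in cost_response["ResultsByTime"]:
        for group in period["Groups"]:
            account_id = group["Keys"][0]
            amount = group["Metrics"]["UnblendedCost"]["Amount"]
            # Fall back to the raw ID if the account has no name entry.
            rows.append((id_to_name.get(account_id, account_id), amount))
    return rows
```

You would pass the two API responses from the snippets above straight into these helpers.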

How do you debug google deployment manager templates?

I'm looking at this example: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloud_functions
which uses this template. I added a print statement to it, but how do I see its output?
import base64
import hashlib
from StringIO import StringIO
import zipfile


def GenerateConfig(ctx):
  """Generate YAML resource configuration."""
  in_memory_output_file = StringIO()
  function_name = ctx.env['deployment'] + 'cf'
  zip_file = zipfile.ZipFile(
      in_memory_output_file, mode='w', compression=zipfile.ZIP_DEFLATED)

  ####################################################
  ############ HOW DO I SEE THIS????? ################
  print('heelo wworrld')
  ####################################################
  ####################################################

  for imp in ctx.imports:
    if imp.startswith(ctx.properties['codeLocation']):
      zip_file.writestr(imp[len(ctx.properties['codeLocation']):],
                        ctx.imports[imp])
  zip_file.close()
  content = base64.b64encode(in_memory_output_file.getvalue())
  m = hashlib.md5()
  m.update(content)
  source_archive_url = 'gs://%s/%s' % (ctx.properties['codeBucket'],
                                       m.hexdigest() + '.zip')
  cmd = "echo '%s' | base64 -d > /function/function.zip;" % (content)
  volumes = [{'name': 'function-code', 'path': '/function'}]
  build_step = {
      'name': 'upload-function-code',
      'action': 'gcp-types/cloudbuild-v1:cloudbuild.projects.builds.create',
      'metadata': {
          'runtimePolicy': ['UPDATE_ON_CHANGE']
      },
      'properties': {
          'steps': [{
              'name': 'ubuntu',
              'args': ['bash', '-c', cmd],
              'volumes': volumes,
          }, {
              'name': 'gcr.io/cloud-builders/gsutil',
              'args': ['cp', '/function/function.zip', source_archive_url],
              'volumes': volumes
          }],
          'timeout': '120s'
      }
  }
  cloud_function = {
      'type': 'gcp-types/cloudfunctions-v1:projects.locations.functions',
      'name': function_name,
      'properties': {
          'parent': '/'.join([
              'projects', ctx.env['project'], 'locations',
              ctx.properties['location']
          ]),
          'function': function_name,
          'labels': {
              # Add the hash of the contents to trigger an update if the
              # bucket object changes
              'content-md5': m.hexdigest()
          },
          'sourceArchiveUrl': source_archive_url,
          'environmentVariables': {
              'codeHash': m.hexdigest()
          },
          'entryPoint': ctx.properties['entryPoint'],
          'httpsTrigger': {},
          'timeout': ctx.properties['timeout'],
          'availableMemoryMb': ctx.properties['availableMemoryMb'],
          'runtime': ctx.properties['runtime']
      },
      'metadata': {
          'dependsOn': ['upload-function-code']
      }
  }
  resources = [build_step, cloud_function]
  return {
      'resources': resources,
      'outputs': [{
          'name': 'sourceArchiveUrl',
          'value': source_archive_url
      }, {
          'name': 'name',
          'value': '$(ref.' + function_name + '.name)'
      }]
  }
EDIT: This is in no way a solution to the problem, but I found that setting a bunch of outputs for the info I'm interested in helps somewhat. You could roll your own log-ish thing by collecting info into a list in your Python template and then passing all of that back as an output. Not great, but it's better than nothing.
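The "outputs as logs" workaround described in the edit above can be sketched as follows. This is a hypothetical, stripped-down template (real resources omitted), showing only the pattern of accumulating messages and surfacing them as a deployment output instead of print():

```python
def GenerateConfig(ctx):
    debug_log = []

    def log(msg):
        # Accumulate messages instead of printing (stdout is not visible
        # during Deployment Manager template expansion).
        debug_log.append(str(msg))

    log('started template expansion')
    function_name = ctx.env['deployment'] + 'cf'
    log('function name: %s' % function_name)

    return {
        'resources': [],  # real resources omitted for brevity
        'outputs': [{
            'name': 'debugLog',
            'value': '; '.join(debug_log)
        }]
    }
```

After deploying, the collected messages show up in the `debugLog` output in the Deployment Manager dashboard or via `gcloud deployment-manager manifests describe`.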
Deployment Manager is an infrastructure deployment service that automates the creation and management of Google Cloud Platform (GCP) resources. What you are trying to do is not possible inside Deployment Manager's managed environment.
As of now, the only way to troubleshoot is to rely on the expanded template in the Deployment Manager dashboard. There is already a feature request to address your use case here. I advise you to star the feature request to get updates via email, and to leave a comment to show the community's interest; all official communication regarding the feature will be posted there.

Facebook Graph Api publishing to feed returns: "(#100) Param place must be a valid place tag ID"

I am searching at Facebook Graph Api, using graph api explorer, for some place using the following endpoint:
/search?type=place&q=centauro&fields=id,name,link
I am getting this as response:
"data": [
  {
    "id": "492103517849553",
    "name": "Centauro",
    "link": "https://www.facebook.com/Centauro-492103484516223/"
  },
  {
    "id": "313439499156253",
    "name": "Centauro",
    "link": "https://www.facebook.com/Centauro-313439462489590/"
  },
  {
    "id": "175812113006221",
    "name": "Centauro",
    "link": "https://www.facebook.com/Centauro-175812079672891/"
  },
  {
    "id": "1423220914594882",
    "name": "Centauro",
    "link": "https://www.facebook.com/pages/Centauro/1423220891261551"
  },
  ...
When I try to publish using the field "id" returned:
/me/feed
with fields:
message: Testing
place: 492103517849553
I get the following response:
{
"error": {
"message": "(#100) Param place must be a valid place tag ID",
"type": "OAuthException",
"code": 100,
"fbtrace_id": "DfEKOjZX8g+"
}
}
But if I use the final number from the link:
"link": "https://www.facebook.com/Centauro-492103484516223/"
492103484516223
And try again:
/me/feed
with fields:
message: Testing
place: 492103484516223
It works perfectly.
So, is there a way to get the correct place ID for publishing? Or is it a bug?
I was also getting the “(#100) Param place must be a valid place tag ID” error, but got it to go away by providing a JSON string within the 'place' element.
So where the content of your request was this:
place: 492103484516223
Format the place information like this instead:
place: {"id": "492103484516223"}
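As a minimal sketch of how that request could be assembled (the token value and Graph API endpoint path are placeholders; the key point is encoding the place as a JSON object string):

```python
import json

def build_feed_params(message, place_id, access_token):
    """Build /me/feed params, encoding 'place' as a JSON object string."""
    return {
        "message": message,
        # The place must be a JSON string like {"id": "..."}, not a bare ID.
        "place": json.dumps({"id": place_id}),
        "access_token": access_token,
    }

params = build_feed_params("Testing", "492103484516223", "<YOUR ACCESS TOKEN>")
# The actual call would then be something like:
# requests.post("https://graph.facebook.com/me/feed", data=params)
```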
Currently, this is how you can solve it.
import requests
IG_USER_ID = <YOUR INSTAGRAM USER ID>
USER_ACCESS = <YOUR USER ACCESS TOKEN WITH VALID PERMISSIONS>
CONTAINER1_ID = <ID OF FIRST CONTAINER>
CONTAINER2_ID = <ID OF SECOND CONTAINER>
URL = f"https://graph.facebook.com/v13.0/{IG_USER_ID}/media?caption=Fruit%20candies&media_type=CAROUSEL&children={CONTAINER1_ID}%2C{CONTAINER2_ID}&access_token={USER_ACCESS}"
r = requests.post(URL)

BigQuery API returns errors when the target table is a file in Spreadsheets (Python client)

I got this error response when I called the BigQuery API on a table backed by a Spreadsheets file.
Error:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "jobInternalError",
        "message": "The job encountered an internal error during execution and was unable to complete successfully."
      }
    ],
    "code": 400,
    "message": "The job encountered an internal error during execution and was unable to complete successfully."
  }
}
I called the API from a Python program, like this:
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    KEYFILE,
    scopes=[
        'https://www.googleapis.com/auth/bigquery',
        'https://www.googleapis.com/auth/drive',
    ])
http_auth = credentials.authorize(httplib2.Http())
self.bigquery_service = build('bigquery', 'v2', http=http_auth)
query_response = self.bigquery_service.jobs().query(
    projectId=PROJECT_ID,
    body={'query': 'select * from <table-from-spreadsheet>'}
).execute()
I get no error when I run the same query with the same scopes from the Web UI (following this document), or when I query a native BigQuery table.
What's wrong with my program?