I am exhausted from searching for a solution on the Google forum, Stack Overflow, and AWS help, so I am posting a new question.
I have a web service through which I am generating a presigned POST URL for my S3 bucket. I have tested it with Python's requests.post and it works just fine.
All I'm trying to upload is simple, small CSV string data, e.g. "a,1,b,2,c,3".
In case the image above is not readable, it looks like this:
url: https://genome-analytics-scratch-space.s3.amazonaws.com/
AWSAccessKeyId: AKZZZZZZZZZZQ
ContentType: multipart/form-data
acl: public-read
key: slaik/somejson.json
signature: +IZZZZZZZZZZZZZZM=
policy: eyZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZJ9
x-amz-server-side-encryption: AES256
I have an absolutely simple App Inventor 2 (AI2) app with just one button, one Web component, and a label to show the output.
The blocks I have designed look like this.
The output I'm getting is:
I have tried setting my fields and conditions values for the presigned POST URL as shown below.
fields = {"acl": "public-read",
"ContentType": "multipart/form-data",
"x-amz-server-side-encryption": "AES256"}
# Ensure that the ACL isn't changed and restrict the user to a length
# between 10 and 100.
conditions = [
{"acl": "public-read"},
["content-length-range", 0, 10485760],
{"ContentType": "multipart/form-data"},
{"x-amz-server-side-encryption": "AES256"}
I have also tried setting the content type to text/csv, but I keep getting this same error. I have already written a lot of code for the download and other features, so a lot is hanging on cracking this.
Any guidance, observations, links, or clues would be greatly appreciated.
Thanks.
Our website is using pre-signed URLs for getting objects from S3.
presigned_url = s3_client.generate_presigned_url(
    "get_object",
    Params={"Bucket": someBucket, "Key": somePath},
    ExpiresIn=600,
)
This has been working well for us, and we now want to record metrics on the age of the S3 object that would be fetched with this presigned URL, i.e. its last-modified date.
The only thing I can think of is doing something like grabbing the object first and then getting the age, but it seems inefficient to grab the object just to get its age (especially since right now the latency is low, as it's just generating a presigned URL):
response = s3_client.head_object(
    Bucket=someBucket, Key=somePath
)
last_modified_time = response["LastModified"]
recordMetric(..., last_modified_time)

presigned_url = s3_client.generate_presigned_url(
    "get_object",
    Params={"Bucket": someBucket, "Key": somePath},
    ExpiresIn=600,
)
Is there a better way to do this or approach the issue?
There’s no need to get the object, and in your code example you’re already doing it correctly. The head_object() function in your example retrieves metadata from the object without retrieving the object itself. To my knowledge, this is the most efficient way to retrieve the object metadata.
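For example, here is a small sketch of how the age metric could be computed from the head_object() response (the object_age_seconds helper is hypothetical):

from datetime import datetime, timezone

import boto3

s3_client = boto3.client("s3")

def object_age_seconds(bucket, key):
    # HEAD request: returns only the metadata, never the object body
    response = s3_client.head_object(Bucket=bucket, Key=key)
    last_modified = response["LastModified"]  # timezone-aware datetime
    return (datetime.now(timezone.utc) - last_modified).total_seconds()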
I'm looking for a way to fetch Media Insights metrics from the Instagram Graph API (https://developers.facebook.com/docs/instagram-api/reference/media/insights) with a nested query based on the userId, even when a client has switched from a Personal to a Business account.
I use this nested query to fetch all the data I need: https://graph.facebook.com/v3.2/{userId}?fields=followers_count,media{media_type,caption,timestamp,like_count,insights.metric(reach, impressions)} (the insights.metric(reach, impressions) part causes the error; it works, however, for an account that has always been a Business one).
However, because some media linked to the userId were posted before the user switched to a Business account, instead of returning data only for the media posted after the switch, the API returns this error:
{
  "error": {
    "message": "Invalid parameter",
    "type": "OAuthException",
    "code": 100,
    "error_data": {
      "blame_field_specs": [
        [
          ""
        ]
      ]
    },
    "error_subcode": 2108006,
    "is_transient": false,
    "error_user_title": "Media Posted Before Business Account Conversion",
    "error_user_msg": "The media was posted before the most recent time that the user's account was converted to a business account from a personal account.",
    "fbtrace_id": "Gs85pUz14JC"
  }
}
Is there a way to know, through the API, which media were created before and after the account was switched from Personal to Business? Or is there a way to fetch the date on which the account was switched?
The only way I currently see to get the data I need is to use the /media edge and query insights for each media item until I get an error. That would give me approximately the date I need. However, this is not optimized at all, since we are rate-limited to 200 calls per user per hour.
I have the same problem.
For now, I switch between two queries (falling back if the first one returns the error):
"userId"?fields=id,media.limit(100){insights.metric(reach, impressions)}
"userId"?fields=id,media.limit(100)
and I show the user all insights as zero.
I don't know if this is the best alternative; a better one might be to identify the time of the conversion to a Business account and only fetch the posts within that DateTime range.
I got the same problem and solved it like this:
Use the nested query just like you did, including insights.metric.
If the error appears, make another call without insights.metric, to at least get all the other data.
For most accounts this works with no additional API call. For the rest, I just cannot get the insights and have to live with it, I guess, until Facebook/IG fixes the issue.
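A minimal sketch of this fallback in Python, assuming the requests library and a valid access token (the fetch_media helper is hypothetical):

import requests

GRAPH_URL = "https://graph.facebook.com/v3.2"

def fetch_media(user_id, access_token, with_insights=True):
    # Hypothetical helper: one call with insights, retried without on error
    fields = "id,media_type,caption,timestamp,like_count"
    if with_insights:
        fields += ",insights.metric(reach,impressions)"
    resp = requests.get(
        "%s/%s/media" % (GRAPH_URL, user_id),
        params={"fields": fields, "access_token": access_token},
    ).json()
    # Subcode 2108006: media posted before the Business account conversion
    error = resp.get("error", {})
    if error.get("error_subcode") == 2108006 and with_insights:
        return fetch_media(user_id, access_token, with_insights=False)
    return resp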
I got the same problem and solved it like this:
Step 1: Convert your Instagram account to a Professional account.
Step 2: Then, as the error suggests, publish a new post on Instagram and get its Post-ID.
Step 3: Then make a request using that Post-ID:
{Post-ID}?fields=comments_count,like_count,timestamp,insights.metric(reach,impressions)
curl -i -X GET "https://graph.facebook.com/v12.0/{Post-ID}?fields=comments_count%2Clike_count%2Ctimestamp%2Cinsights.metric(reach%2Cimpressions)&access_token={access_token}"
For more, see the insights documentation linked above.
Here is the relevant logic from a script that can handle this error while still doing a full import. It works by reducing the requested limit to 1 once the error is encountered; it keeps requesting insights one at a time until it hits the error again, then removes insights from the fields and returns to the requested limit.
limit = 50
error_2108006 = False
metrics = 'insights.metric%28impressions%29%2C'  # Must be URL encoded for replacement
url = '/PAGE_ID/media?fields=%sid,caption,media_url,media_type&limit=%s' % (metrics, limit)

# While we have more pages
while True:
    # Make your API call to Instagram
    posts = get_posts_from_instagram(url)

    # Check for error 2108006
    if posts == 2108006:
        # First time getting this error: keep trying to get insights, but one by one
        if error_2108006 is False:
            error_2108006 = True
            url = url.replace('limit={}'.format(limit), 'limit=1')
            continue

        # Not the first time. Strip out insights and return to the desired limit.
        url = url.replace(metrics, '')
        url = url.replace('limit=1', 'limit={}'.format(limit))
        continue

    # Do something with the data
    for post in posts:
        continue

    # If there are more pages, fetch the next URL
    if 'paging' in posts and 'next' in posts['paging']:
        url = posts['paging']['next']
        continue

    # Done
    break
I'm interested to know if there is a way, using the API, to get a list of consolidated comments for a URL shared on Facebook.
What I have tried is using the following API call to get the object_ID of a URL: "http://graph.facebook.com/?id={URL}"
and then calling "{object_ID}/comments". As a result, I got the following output and am not able to figure out what I have missed:
{ "data": [ ] }
I'm attempting to upload raw image data to S3 in the context of a React Native app.
I have the raw data correct, and for the most part I think my code inside React Native is working correctly to capture the image data.
On my Rails server, I'm using the AWS Ruby gem to build the details of the URL and the associated authentication data required to POST data to the bucket in question, which I'm then rendering into React Native just like a regular React web front end.
# inside the rails server controller
s3_data = S3_BUCKET.presigned_post(
  key: "uploads/#{SecureRandom.uuid}/${filename}",
  success_action_status: '201',
  acl: 'public-read',
  url: 'https://jd-foo.s3-us-west-2.amazonaws.com'
)
render json: { s3Data: { fields: s3_data.fields, url: s3_data.url } }
At the moment I attempt to post to S3, I'm using ES6 fetch as below to build my HTTP request.
saveImage(data) {
  var url = data.url
  var fields = data.fields
  var headers = {'Content-Type': 'multipart/form-data'}
  var body = `x-amz-algorithm=${encodeURIComponent(fields['x-amz-algorithm'])}&` +
             `x-amz-credential=${encodeURIComponent(fields['x-amz-credential'])}&` +
             `x-amz-date=${encodeURIComponent(fields['x-amz-date'])}&` +
             `x-amz-signature=${encodeURIComponent(fields['x-amz-signature'])}&` +
             `acl=${encodeURIComponent(fields['acl'])}&` +
             `key=${encodeURIComponent(fields['key'])}&` +
             `policy=${encodeURIComponent(fields['policy'])}&` +
             `success_action_status=${encodeURIComponent(fields['success_action_status'])}&` +
             `file=${encodeURIComponent('12foo')}`
  console.log(body);
  // Return the parsed JSON so callers can chain on it
  return fetch(url, {method: 'POST', body: body, headers: headers})
    .then((res) => { console.log('s3 inside api res', res['_bodyText']); return res.json(); });
}
The logging of the body looks like:
x-amz-algorithm=AWS4-HMAC-SHA256&x-amz-credential=AKIAJJ22D4PSUNBB5RAQ%2F20151027%2Fus-west-1%2Fs3%2Faws4_request&x-amz-date=20151027T223159Z&x-amz-signature=42b09d7ae134f803b10ef72d220fe74a630a3f826c7f1f625448277d0a6d93c7&acl=public-read&key=uploads%2F46be8ca3-6d3a-4bb7-a658-f2c8e058bc28%2F%24%7Bfilename%7D&policy=eyJleHBpcmF0aW9uIjoiMjAxNS0xMC0yN1QyMzozMTo1OVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJqZC1mb28ifSxbInN0YXJ0cy13aXRoIiwiJGtleSIsInVwbG9hZHMvNDZiZThjYTMtNmQzYS00YmI3LWE2NTgtZjJjOGUwNThiYzI4LyJdLHsic3VjY2Vzc19hY3Rpb25fc3RhdHVzIjoiMjAxIn0seyJhY2wiOiJwdWJsaWMtcmVhZCJ9LHsieC1hbXotY3JlZGVudGlhbCI6IkFLSUFKSjIyRDRQU1VOQkI1UkFRLzIwMTUxMDI3L3VzLXdlc3QtMS9zMy9hd3M0X3JlcXVlc3QifSx7IngtYW16LWFsZ29yaXRobSI6IkFXUzQtSE1BQy1TSEEyNTYifSx7IngtYW16LWRhdGUiOiIyMDE1MTAyN1QyMjMxNTlaIn1dfQ%3D%3D&success_action_status=201&file=12foo
It seems like my problems could be tied to both:
1. Bad formatting of the POST body, including problems with special characters
2. Not providing S3 with enough data in the POST body, including keys and other information; the documentation feels a bit unclear about what is and is not required.
The error back from S3 servers looks like
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>MalformedPOSTRequest</Code><Message>The body of your POST request is not well-formed multipart/form-data.</Message> <RequestId>DCE88AC349D7B2E8</RequestId><HostId>AKE1xctETuZMAhBFLfyuFlDxikYUlbAC7YufkM7h8Z8eVQdtLA25Z0Od/a4cMUbfW1nWnGjc+vM=</HostId></Error>
I'm pretty unclear on what my actual problems are and where I should be digging in.
Any input would be greatly appreciated.
<Message>The body of your POST request is not well-formed multipart/form-data.</Message>
It may not be that you're missing values from the body. The most significant issue here is that the structure of your body does not resemble multipart/form-data.
See RFC 2388 for how multipart/form-data works. (Or find a library that builds this for you.)
What you are sending looks more like the application/x-www-form-urlencoded format, which is used by some AWS APIs, but not S3.
There is an example in the S3 docs showing what an example POST body might look like. You should see a substantial difference there.
Note also that POST is intended for browser-based uploads. If you are uploading from code, you're doing a lot of extra work; PUT Object is much more straightforward, as the request body is just the binary file contents. Or, if this will eventually be done by a browser, then test it with a browser and let the browser build your form.
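For comparison, here is a sketch of a well-formed presigned POST using Python's requests library, which builds the multipart/form-data body (including the boundary) for you; url and fields are assumed to be the values returned by presigned_post on the server:

import requests

# Sketch only: url/fields come from the server's presigned_post response.
response = requests.post(
    url,
    data=fields,  # policy, signature, key, acl, success_action_status, ...
    files={"file": ("upload.bin", b"12foo")},  # requests places the file part after the fields
)
print(response.status_code, response.text)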
I'm testing a POST request which contains an image. After the POST request, if successful, I receive a URL like:
http://testbed.example.com/_ah/upload/agx0ZXN0YmVkLXRlc3RyGwsSFV9fQmxvYlVwbG9hZFNlc3Npb25fXxgDDA
As I already tried and checked on Stack Overflow, this won't work for uploading the image.
I have a handler in a route like "/upload/image"
and the code looks like:
class UploadScreenshot(webapp2.RequestHandler, blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        try:
            upload_screenshot = self.get_uploads('file')
            upload_url = self.request.get('upload_url')
            fbkey = self.request.get('fbkey')
            screenshotKey = upload_screenshot[0].key()
            # .get() fetches the entity; query() alone only builds the query
            feedback = N.FeedbackModel.query(N.FeedbackModel.fbkey == fbkey).get()
            feedback.screenshotBlobID = screenshotKey
            feedback.put()
        except:
            self.error(400)
What could I do to upload to the Blobstore? I have my app on appspot as well, but I want to test this before deploying.
Thanks.
Sounds like you've got things a bit backwards. By the time your UploadScreenshot handler is called, the blob should already be uploaded, and your handler can access it using the blob key (which you have in screenshotKey). You can't (re)use the upload URL at this point (it should have already been used when the user submitted the upload form).
You may want to revisit the blob upload procedure/example.
BTW, this can be fully tested on the development server: the upload URL you get will be a localhost one, and the blob is stored on localhost as well.
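For reference, a minimal sketch of the usual flow (the handler path and form are illustrative): the server generates an upload URL, the client POSTs the file to that URL, and App Engine then calls your upload handler with the blob already stored:

from google.appengine.ext import blobstore
import webapp2

class UploadFormHandler(webapp2.RequestHandler):
    def get(self):
        # App Engine routes the finished upload to the /upload/image handler
        upload_url = blobstore.create_upload_url('/upload/image')
        self.response.write(
            '<form action="%s" method="POST" enctype="multipart/form-data">'
            '<input type="file" name="file"><input type="submit"></form>'
            % upload_url)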