S3 PUT Bucket to a location endpoint results in a MalformedXML exception - amazon-web-services

I'm trying to create an AWS s3 bucket using libCurl thusly:
Location endpoint:
curl_easy_setopt(curl, CURLOPT_URL, "http://s3-us-west-2.amazonaws.com/");
Assembled HTTP request headers:
PUT / HTTP/1.1
Date:Fri, 18 Apr 2014 19:01:15 GMT
x-amz-content-sha256:ce35ff89b32ad0b67e4638f40e1c31838b170bbfee9ed72597d92bda6d8d9620
host:tempviv.s3-us-west-2.amazonaws.com
x-amz-acl:private
content-type:text/plain
Authorization: AWS4-HMAC-SHA256 Credential=AKIAISN2EXAMPLE/20140418/us-west-2/s3/aws4_request, SignedHeaders=date;x-amz-content-sha256;host;x-amz-acl;content-type, Signature=e9868d1a3038d461ff3cfca5aa29fb5e4a4c9aa3764e7ff04d0c689d61e6f164
Content-Length: 163
The body contains the bucket configuration:
<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>us-west-2</LocationConstraint></CreateBucketConfiguration>
I get the following exception back.
MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
I've been able to carry out the same operation through the aws cli.
Things I've also tried:
1) In the XML, using \ to escape the quotes (i.e., xmlns=\"http:.../\").
2) Not providing a CreateBucketConfiguration at all (although the S3 documentation suggests this is not allowed when sending the request to a location endpoint).
3) A GET Service call to the same endpoint lists all the provisioned buckets correctly.
Please do let me know if there is anything else I might be missing here.

OK, the problem was that I was not transferring the entire XML across, as revealed by a Wireshark trace. Once I fixed that, the problem went away.
Btw, escaping the quotes with a backslash works but &quot; does not.
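For anyone hitting the same thing, a minimal libcurl sketch along these lines sends the whole document in one go (assuming the SigV4 headers above are already assembled into a curl_slist; the helper name is just a placeholder):

/* Sketch: PUT the complete CreateBucketConfiguration body with libcurl. */
#include <curl/curl.h>
#include <string.h>

int put_bucket(struct curl_slist *headers)   /* Date, x-amz-*, Authorization, ... */
{
    const char *body =
        "<CreateBucketConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">"
        "<LocationConstraint>us-west-2</LocationConstraint>"
        "</CreateBucketConfiguration>";

    CURL *curl = curl_easy_init();
    if (!curl) return -1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://tempviv.s3-us-west-2.amazonaws.com/");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);                   /* the whole XML document */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(body));  /* Content-Length matches the body */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return (rc == CURLE_OK) ? 0 : -1;
}

The point is that the entire document is handed to libcurl in one buffer, so the Content-Length that gets signed matches what actually goes out on the wire.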

Related

Wrap JPEG image in a multipart header using AWS Lambda@Edge

I have been trying to read the AWS Lambda@Edge documentation, but I still cannot figure out whether the following is possible.
Assume I have an object (image.jpg, 32922 bytes) and I have set up the S3 bucket as a static website. So I can retrieve:
$ GET http://example.com/image.jpg
I would like to be able to also expose:
$ GET http://example.com/image
where the response body would be a multipart/related document (for example), something like this:
--myboundary
Content-Type: image/jpeg;
Content-Length: 32922
MIME-Version: 1.0
<actual binary jpeg data from 'image.jpg'>
--myboundary
Is this something supported out of the box by the AWS Lambda@Edge API, or should I use another solution to create such a response? In particular, it seems that the response only deals with text or base64 (I would need binary in my case).
I was finally able to find complete documentation. I eventually stumbled upon:
API Gateway - POST multipart/form-data
which refers to:
Enabling binary support using the API Gateway console
The above documentation specifies the steps for handling binary data. Note that you need to base64-encode the response from Lambda before passing it to API Gateway.
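Assuming an API Gateway Lambda proxy integration with the relevant binary media type enabled, the handler's response carries the payload base64-encoded in roughly this shape (boundary and body values are placeholders):

{
  "statusCode": 200,
  "isBase64Encoded": true,
  "headers": { "Content-Type": "multipart/related; boundary=myboundary" },
  "body": "<base64-encoded multipart payload>"
}

API Gateway then decodes the body back to binary before returning it to the client, provided the media type is registered as binary on the API.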

Live Stream from AWS MediaLive service not viewable from VLC

I am trying to build a custom live streaming service as documented here:
https://aws.amazon.com/solutions/implementations/live-streaming-on-aws/
I used the provided CloudFormation template for "Live Streaming on AWS with MediaStore", which provisioned all the relevant resources for me. Next, I wanted to test my custom streamer.
I used OBS Studio to stream my webcam output to the MediaLivePushEndpoint that was created during CloudFormation provisioning. OBS suggests that it is already streaming the webcam output to the AWS MediaLive RTMP endpoint.
Now, to confirm that I can watch the stream, I set the Input Network Stream in VLC player to the CloudFront endpoint that was created for me (which looks like this: https://aksj2arbacadabra.cloudfront.net/stream/index.m3u8), but VLC is unable to fetch the stream and fails with the following error message in the logs. What am I missing? Thanks!
...
...
...
http debug: outgoing request: GET /stream/index.m3u8 HTTP/1.1 Host: d2lasasasauyhk.cloudfront.net Accept: */* Accept-Language: en_US User-Agent: VLC/3.0.11 LibVLC/3.0.11 Range: bytes=0-
http debug: incoming response: HTTP/1.1 404 Not Found Content-Type: application/x-amz-json-1.1 Content-Length: 31 Connection: keep-alive x-amzn-RequestId: HRNVKYNLTdsadasdasasasasaPXAKWD7AQ55HLYBBXHPH6GIBH5WWY x-amzn-ErrorType: ObjectNotFoundException Date: Wed, 18 Nov 2020 04:08:53 GMT X-Cache: Error from cloudfront Via: 1.1 5085d90866d21sadasdasdad53213.cloudfront.net (CloudFront) X-Amz-Cf-Pop: EWR52-C4 X-Amz-Cf-Id: btASELasdasdtzaLkdbIu0hJ_asdasdasdbgiZ5hNn1-utWQ==
access error: HTTP 404 error
main debug: no access modules matched
main debug: dead input
qt debug: IM: Deleting the input
main debug: changing item without a request (current 2/3)
main debug: nothing to play
Updates based on Zach's response:
Here are the parameters I used while deploying the CloudFormation template for live streaming using MediaLive (notice that I am using RTMP_PUSH):
I am using MediaLive and not MediaPackage, so when I go to my channel in MediaLive, I see this:
Notice that it says it cannot find the "stream [stream]", but I confirmed that the RTMP endpoint I added in OBS is exactly the one that was created as an output by my CloudFormation stack:
Finally, when I go to MediaStore to see if there are any objects, it is completely empty:
Vader,
Thank you for the clarification here; I can see the issue is with your settings in OBS. When you set up your input for MediaLive, you created a unique Application Name and Instance, which are part of the URI: the Application Name is LiveStreamingwithMediaStore and the Instance is stream. In OBS, you will want to remove stream from the end of the Server URI and place it in the Stream Key field, where you currently have a 1.
OBS Settings:
Server: rtmp://server_ip:1935/Application_Name/
Stream Key: Instance_Name
Since you posted the screenshot here on an open forum (it really helped determine the issue, but it also exposes settings that would allow someone else to send to the RTMP input), I would suggest that you change the Application Name and Instance.
Zach

AWS s3 upload api call returning 411 status

I have been trying to make an AWS S3 REST API call to upload a document to an S3 bucket. The document is in the form of a byte array.
PUT /Test.pdf HTTP/1.1
Host: mybucket.s3.amazonaws.com
Authorization: **********
Content-Type: application/pdf
Content-Length: 5039151
x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-date: 20180301T055442Z
When we perform the API call, it returns status 411, i.e. Length Required. We have already added the Content-Length header with the byte array length as its value, but the issue persists. Please help resolve the issue.
x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD is only used with the non-standards-based chunk upload API. This is a custom encoding that allows you to write chunks of data to the wire. This is not the same thing as the Multipart Upload API, and is not the same thing as Transfer-Encoding: chunked (which S3 doesn't support for uploads).
It's not clear why this would result in 411 Length Required but the error suggests that S3 is not happy with the format of the upload.
For a standard PUT upload, x-amz-content-sha256 must be set to the hex-encoded SHA-256 hash of the request body, or the string UNSIGNED-PAYLOAD. The former is recommended, because it provides an integrity check. If for any reason your data were to become corrupted on the wire in a way that TCP failed to detect, S3 would automatically reject the corrupt upload and not create the object.
See also https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
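For a standard PUT, the header value can be computed with a sketch like this (OpenSSL assumed; body stands in for the uploaded bytes):

/* Sketch: hex-encoded SHA-256 of the request body for x-amz-content-sha256. */
#include <openssl/sha.h>
#include <stdio.h>

static void payload_sha256_hex(const unsigned char *body, size_t body_len,
                               char out_hex[SHA256_DIGEST_LENGTH * 2 + 1])
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(body, body_len, digest);               /* one-shot SHA-256 of the body */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(out_hex + 2 * i, "%02x", digest[i]);
    out_hex[SHA256_DIGEST_LENGTH * 2] = '\0';     /* 64 lowercase hex characters */
}

int main(void)
{
    const unsigned char body[] = "example payload"; /* placeholder for the PDF byte array */
    char hex[SHA256_DIGEST_LENGTH * 2 + 1];
    payload_sha256_hex(body, sizeof body - 1, hex);
    printf("x-amz-content-sha256: %s\n", hex);      /* use this value instead of STREAMING-... */
    return 0;
}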

Making An HTTP PUT through BrightScript to AWS S3 Bucket with pre-signed url

I've set up an AWS API which obtains a pre-signed URL for uploading to an AWS S3 bucket.
The pre-signed url has a format like
https://s3.amazonaws.com/mahbukkit/background4.png?AWSAccessKeyId=someaccesskeyQ&Expires=1513287500&x-amz-security-token=somereallylongtokenvalue
where background4.png would be the file I'm uploading.
I can successfully use this URL through Postman by:
configuring it as a PUT call,
setting the body to Binary so I can select the file,
setting the header to Content-Type: image/png
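The equivalent request as a curl command would look roughly like this (the pre-signed URL kept in quotes so the query string survives the shell):
curl -X PUT -H "Content-Type: image/png" --data-binary @background4.png "https://s3.amazonaws.com/mahbukkit/background4.png?AWSAccessKeyId=...&Expires=...&x-amz-security-token=..."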
HOWEVER, I'm trying to make this call using BrightScript running on a BrightSign player. I'm pretty sure I'm supposed to be using the roUrlTransfer object and PutFromFile function described in this documentation:
http://docs.brightsign.biz/display/DOC/roUrlTransfer
Unfortunately, I can't find any good working examples showing how to do this.
Could anyone who has experience with BrightScript help me out? I'd really appreciate it.
You are on the right track.
I would do:
sub main()
    tr = CreateObject("roUrlTransfer")
    tr.SetUrl(presignedUrl) ' presignedUrl: the pre-signed S3 URL obtained from your API
    headers = {}
    headers.AddReplace("Content-Type", "image/png")
    tr.AddHeaders(headers)
    info = {}
    info.method = "PUT"
    info.request_body_file = <fileName>
    if tr.AsyncMethod(info)
        print "File put started"
    else
        print "File put did not start"
    end if
    delay(100000) ' crude wait; better to listen for the url event on a message port
end sub
Note: I have used two different methods to populate the two associative arrays. You need to use the AddReplace method (rather than the dot shortcut) when the key contains special characters like '-'.
This script should work, though I don't have a unit on hand to do a syntax check.
You should also set up a message port and listen for the event that is generated, to confirm whether the PUT was successful and/or what the response code is.
Note: when you read responses from URL events, if the response code from the server is anything other than 200, the BrightSign will discard the response body and you cannot read it. This is not helpful, as services like Dropbox like to return a 400 response with more information about what was wrong (bad API key, etc.) in the body, so in that case you are left doing trial and error to figure out what went wrong.
Good luck, and sorry I didn't see this question sooner.

Is there a way to configure Amazon Cloudfront to delay the time before my S3 object reaches clients by specifying a release date? [closed]

I would like to upload content to S3 but schedule a time at which CloudFront delivers it to clients, rather than vending it immediately upon processing. Is there a configuration option to accomplish this?
EDIT: This time should be able to differ per object in S3.
There is something of a configuration option to allow this, and it does allow you to restrict specific files -- or path prefixes -- from being served up prior to a given date and time... though it's slightly... well, I don't even know what derogatory term to use to describe it. :) But it's the only thing I can come up with that uses entirely built-in functionality.
First, a quick reminder that public/unauthenticated read access to objects in S3 can be granted at the bucket level with bucket policies, or at the object level, using "make everything public" when uploading the object in the console, or sending x-amz-acl: public-read when uploading via the API. If either or both of these is present, the object is publicly readable, except in the face of any policy denying the same access. Deny always wins over Allow.
So, we can create a bucket policy statement matching a specific file or prefix, denying access prior to a certain date and time.
{
  "Version": "2012-10-17",
  "Id": "Policy1445197123468",
  "Statement": [
    {
      "Sid": "Stmt1445197117172",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/hello.txt",
      "Condition": {
        "DateLessThan": {
          "aws:CurrentTime": "2015-10-18T15:55:00.000-0400"
        }
      }
    }
  ]
}
Using a wildcard would allow everything under a specific path to be subject to the same restriction.
"Resource": "arn:aws:s3:::example-bucket/cant/see/these/yet/*",
This works, even if the object is public.
This example blocks all GET requests for matching objects by anybody, regardless of permissions they may have. Signed URLs, etc., are not sufficient to override this policy.
The policy statement is checked for validity when it is created; however, the object being matched does not have to exist yet, so creating the policy before the object does not make the policy invalid.
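A policy like this can be attached with the AWS CLI, assuming it has been saved locally as policy.json:
aws s3api put-bucket-policy --bucket example-bucket --policy file://policy.json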
Live test:
Before the expiration time: (unrelated request/response headers removed for clarity)
$ curl -v example-bucket.s3.amazonaws.com/hello.txt
> GET /hello.txt HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Sun, 18 Oct 2015 19:54:55 GMT
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host example-bucket.s3.amazonaws.com left intact
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AAAABBBBCCCCDDDD</RequestId><HostId>g0bbl3dyg00kbunc4Ofl1n3n0iz3h3rehahahasqlbot1337kenqweqwel24234kj41l1ke</HostId></Error>
After the specified date and time:
$ curl -v example-bucket.s3.amazonaws.com/hello.txt
> GET /hello.txt HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 18 Oct 2015 19:55:05 GMT
< Last-Modified: Sun, 18 Oct 2015 19:36:17 GMT
< ETag: "78016cea74c298162366b9f86bfc3b16"
< Accept-Ranges: bytes
< Content-Type: text/plain
< Content-Length: 15
< Server: AmazonS3
<
Hello, world!
These tests were done against the S3 REST endpoint for the bucket, but the website endpoint for the same bucket yields the same results -- only the error message is in HTML rather than XML.
The positive aspect of this policy is that since the object is public, the policy can be removed any time after the date passes, because it is denying access before a certain time, rather than allowing access after a certain time -- logically the same, but implemented differently. (If the policy allowed access after rather than denying access before, the policy would have to stick around indefinitely; this way, it can just be deleted.)
You could use custom error documents in either S3 or CloudFront to present the viewer with a slightly nicer output... probably CloudFront, since you can customize each error code individually, creating a custom 403 page.
The major drawbacks to this approach are, of course, that the policy must be edited for each object or path prefix, and that even though it works per object, it isn't something that is set on the object itself.
And there is a limit to how many policy statements you can include, because of the size restriction on bucket policies:
Note
Bucket policies are limited to 20 KB in size.
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html
The other solution that comes to mind involves deploying a reverse proxy component (such as HAProxy) in EC2 between CloudFront and the bucket, passing the requests through and reading custom metadata from the object's response headers, looking for a header such as x-amz-meta-embargo-until: 2015-10-18T19:55:00Z and comparing its value to the system clock; if the current time is before the cutoff, the proxy would drop the connection to S3 and replace the response headers and body with a locally generated 403 message, so the client could not fetch the object until the designated time had passed.
This solution seems fairly straightforward to implement, but it requires a non-built-in component, so it doesn't meet the constraint of the question, and I haven't built a proof of concept. However, I already use HAProxy with Lua in front of some buckets to give S3 capabilities not offered natively, such as removing sensitive custom metadata from responses and modifying the XML on S3 error responses (and directing the browser to apply an XSL stylesheet to it), so there's no obvious reason this application wouldn't work equally well.
Lambda@Edge can apply your customized access control easily.