I want to host a serverless contact page on Amazon CloudFront.
I've followed this tutorial to a tee:
Processing a Contact Form Using AWS Cloudfront...
There are many steps, but here are the most important things I would like to highlight that I have done correctly:
I have set the AWS S3 bucket to public, and given public read access
My Cloudfront behavior allows the POST method (which I use in my HTML contact-form and corresponding js)
My Lambda function and my API Gateway send an e-mail to me (so I know that is working!)
Still, when I press submit on my actual website, I get this error:
This XML file does not appear to have any style information associated with it.
Could anyone please suggest to me where I might be going wrong? The website contact form is here.
Under the assumption that this is a website endpoint, the following appears in the documentation:
Requests supported: Supports only GET and HEAD requests on objects
It looks as though POST is not supported by static website endpoints, so this will need to be worked around.
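One way to confirm where the failure is: hit the API Gateway invoke URL directly, outside the website, and then point the form's JavaScript at that URL (or at a separate CloudFront behavior whose origin is API Gateway) rather than at the S3 website origin. A minimal sketch, assuming a hypothetical invoke URL and JSON payload:

    import requests

    # Hypothetical API Gateway invoke URL -- replace with your own stage URL
    API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/contact"

    payload = {
        "name": "Test User",
        "email": "test@example.com",
        "message": "Hello from the contact form test",
    }

    # POST directly to API Gateway, bypassing the S3 website endpoint entirely
    resp = requests.post(API_URL, json=payload, timeout=10)
    print(resp.status_code, resp.text)

If this returns a success status and the e-mail arrives, the Lambda/API Gateway side is fine and only the form's target URL needs to change.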
I want to be able to offer real-time transcription in my browser app using AWS Transcribe. I see there is the TranscribeStreamingClient, which accepts optional AWS credentials - I assume these credentials are required for access to the S3 bucket?
What I want to know is how to set up auth in my app so that I can make sure my users don't go overboard with the number of minutes they transcribe.
For example:
I would expect to generate a pre-signed URL that expires in X seconds/minutes on my backend, which I can pass to the web client, which then uses it to handle the rest of the communication (similar to S3).
However, I don't see such an option, and the only solution that I keep circling back to is that I would need to feed the audio packets to my backend, which then handles all the auth and just forwards them to the service via the streaming client there. This would be okay, but the documentation says that the TranscribeStreamingClient should be compatible with browser integrations. What am I missing?
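One common pattern, in place of the pre-signed URL that Transcribe streaming doesn't offer, is to have your backend vend short-lived temporary credentials scoped to a transcribe-only role and hand those to the browser. Below is a minimal sketch, assuming a hypothetical Flask endpoint, a placeholder role ARN, and your own quota check:

    # Rough sketch of a backend endpoint that hands out short-lived AWS
    # credentials the browser can plug into TranscribeStreamingClient.
    # The role ARN, route, and quota logic are all assumptions/placeholders.
    import boto3
    from flask import Flask, jsonify

    app = Flask(__name__)
    sts = boto3.client("sts")

    TRANSCRIBE_ROLE_ARN = "arn:aws:iam::123456789012:role/BrowserTranscribeRole"

    @app.route("/transcribe-credentials")
    def transcribe_credentials():
        # Your own auth/quota logic goes here, e.g. refuse users who have
        # already used up their transcription minutes for the month.
        resp = sts.assume_role(
            RoleArn=TRANSCRIBE_ROLE_ARN,
            RoleSessionName="browser-transcribe",
            DurationSeconds=900,  # credentials expire after 15 minutes
        )
        creds = resp["Credentials"]
        return jsonify(
            accessKeyId=creds["AccessKeyId"],
            secretAccessKey=creds["SecretAccessKey"],
            sessionToken=creds["SessionToken"],
            expiration=creds["Expiration"].isoformat(),
        )

The browser would pass accessKeyId, secretAccessKey, and sessionToken into TranscribeStreamingClient's credentials option; since the credentials expire after DurationSeconds and the role policy can be limited to the streaming-transcription actions, your backend decides per request whether a user still has minutes left (Cognito identity pools are the managed alternative for the same idea).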
So I have been attempting to do what I thought would be a simple task. To gain some practice with AWS and a front-end framework, I wanted to build a webpage where a user could click an upload button and that would send the file to an S3 bucket, where I could interact with it with Lambda functions. I have experience with HTML, JavaScript, and Python, and can find my way around Angular; I have also spun up my own Flask web servers, so I am familiar with the ideas of requests, APIs, and REST. Python is the language I am strongest in at the moment.
However, now that I have started trying to apply this to AWS, I run into a lot of issues.
Is there a really simple way of uploading a file (image) to an S3 bucket that I am just missing? This seems like it should be easier.
The best-practice approach I have found but not gotten to work is that you should generate a presigned URL / POST request and then use that from the client side to conduct the upload. However, examples such as the official boto3 example, Using an S3 presigned POST, and AWS S3 uploads using presigned URLs have not worked for me, usually failing with an HTTP 400 Bad Request response. I have tried making the request from Python requests (response: 400), curl (response: 400), and Postman (response: AccessDenied). [dictionary, command, and request are in the links, I will upload code once I know I have a viable approach]
When I was looking into things I found POST Object, which seems to detail the fields for a POST request under the V4 framework for AWS. They appear to be different from the fields returned by the methods in the examples above for generating a presigned request. Call S3 presigned URL with Postman seems to detail a recent working request consistent with the V4 schema, yet it and the Signature Version 4 signing process don't use boto3 or the aws-sdk, which seems like the wrong way to head.
Therefore, my questions as a noob are: am I going down the right path, and if so, what would the debugging strategy be? I have many possible points of failure, from malformed requests to bad CORS on the S3 bucket or a badly configured IAM policy, and I don't know where to begin.
If someone tells me I am not barking up the wrong tree, I'll put up a code example. Any help will be much appreciated.
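For reference, a minimal sketch of the presigned-POST flow with boto3 and requests (bucket name, key, and region are placeholders); a client region that doesn't match the bucket's region, or a non-v4 signature, is a common cause of the 400 responses described above:

    import boto3
    import requests
    from botocore.config import Config

    # The client region must match the bucket's region, otherwise S3
    # rejects the signed POST.
    s3 = boto3.client("s3", region_name="us-east-1",
                      config=Config(signature_version="s3v4"))

    BUCKET = "my-upload-bucket"      # placeholder
    KEY = "uploads/example.jpg"      # placeholder

    presigned = s3.generate_presigned_post(
        Bucket=BUCKET,
        Key=KEY,
        ExpiresIn=3600,              # signed POST valid for one hour
    )

    # The returned fields (policy, signature, etc.) must all be sent as
    # form fields, with the file itself under the name "file".
    with open("example.jpg", "rb") as f:
        resp = requests.post(
            presigned["url"],
            data=presigned["fields"],
            files={"file": ("example.jpg", f)},
        )

    print(resp.status_code)          # 204 on success by default
    print(resp.text)

Note that anything added via the Fields argument generally needs a matching entry in Conditions, which is another common source of 400 Bad Request errors.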
I am attempting to connect to the Google Analytics API using Matillion ETL on an AWS EC2 instance in an effort to load a data lake.
When I try to add the callback URL, http://ec2-99-99-99-99.compute-1.amazonaws.com/oauth_redirect.html, to the Google Developer Console, I get the error:
"Invalid Redirect: domain must be added to the authorized domains list before submitting."
I do have amazonaws.com added to the Authorized Domains on the OAuth Consent screen. If I add compute-1.amazonaws.com/oauth_redirect.html, it accepts it. So I know it's recognizing amazonaws.com, but not my specific EC2 instance.
I was thinking it was because it's a sub-sub-domain, but I'm not sure if that matters. Based on other posts such as this one, other people have been able to connect.
I've also tried adding a new record set in Route 53 instead of the AWS provided URL, but I don't know how to change the default callback URL in Matillion. I've sent their support team a separate question about that, and will let you know if that resolves it.
I do think there is something on the Google side that should resolve this, though. Could there be some setting in the Google console that I'm missing to allow this?
Edit: Using the Route 53 URL instead when signing into Matillion forces the OAuth config to use that when getting the callback URL. I'm able to connect to Google Analytics now. I will leave this post up in case anyone else runs into the subdomain.subdomain.domain.com issue with Google.
As suggested in https://stackoverflow.com/a/36112649:
You can use the free DNS service at http://xip.io/. So for IP 99.99.99.99, use http://99.99.99.99.xip.io/callback, and it will be resolved to http://99.99.99.99/callback.
Further, make sure the redirect URI in the .env file or other similar configuration in AWS is set to http://99.99.99.99.xip.io/callback.
Suppose this scenario:
I'm building a service which handles user uploads to my AWS S3 account.
Each site using my service must have an upload form which uploads directly to S3. In order to do that, each site has to sign its upload form with AWS Signature Version 4.
The problem is that signing requires an AWSAccessKeyId and AWSSecretAccessKey, which I would have to share with my service users, and that's not acceptable.
I thought I could generate all the needed signing data on my side and then just reply with it when a user (site) asks for it.
So the question is: is it a bad idea that, in order to sign its upload form, a site (which is going to upload files to my S3) has to make a request to my server for the signing data (via XHR or server side)?
I'm not entirely sure what you're asking, but if you're asking whether it's a bad idea to sign the upload yourself on behalf of the individual sites, then the answer is no... with a caveat.
Signing the upload (really, you should just sign the upload URL) is far less of a security risk than providing other domains your access keys. That's what it's there for, to allow anonymous uploads. Signing the request merely gives the site/user permission to upload, but does not take into account who is uploading or what they are uploading.
This is where your own security checks need to come in. If the form is hosted on multiple domains (all uploading to your S3 bucket), you should first check the domain the form originated from, so as to avoid someone putting the form on their own web server and trying to upload stuff. Depending on how your various sites are configured, there are a couple of ways to do this, on which I am no expert, unfortunately.
Secondly, you will want to validate the data being uploaded. Is it pure text, binary, etc.? You will want to validate all of that prior to initiating the upload.
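As a hedged sketch of what the signing endpoint might return (bucket name, key prefix, content type, and size limits are all placeholder assumptions), the presigned POST policy itself can carry part of that validation:

    import boto3

    s3 = boto3.client("s3")

    def signing_data_for_site(site_id: str, filename: str) -> dict:
        """Return presigned POST data a trusted site can hand to its upload form.

        The policy below limits the key prefix, content type, and object size,
        so leaked signing data is of limited use. All values are placeholders.
        """
        key = f"uploads/{site_id}/{filename}"
        return s3.generate_presigned_post(
            Bucket="my-service-uploads",            # placeholder bucket
            Key=key,
            Fields={"Content-Type": "image/jpeg"},
            Conditions=[
                {"Content-Type": "image/jpeg"},      # must match the field above
                ["content-length-range", 1, 5 * 1024 * 1024],  # 1 B to 5 MB
            ],
            ExpiresIn=300,                           # signature valid 5 minutes
        )

Your endpoint would check the requesting site's identity (an API key, an Origin check, or whatever fits the setup above) before returning this, and the short expiry plus the policy conditions cover much of the "what they are uploading" side.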
I hope this helps.
Hi, I'll try and keep it brief; I hope one of you knows the answer and I'm not duplicating content.
At the moment I'm using a bucket to take the strain off my server and upload large user files to Amazon. These are then served back to the user when they want them via expiring URLs. When a URL expires the user is sent an XML response saying access is denied, and I want to show them a custom error page.
Here (Create my own error page for Amazon S3) and here (http://docs.aws.amazon.com/AmazonS3/latest/dev/CustomErrorDocSupport.html) it says you must enable web hosting on the bucket for custom error pages...
So the question is: if I do this and then just grant all users permission to access only the custom error pages, will this mess anything up with my current usage scenario?
Or is it as simple as everything else staying the same? The docs seem vague and I don't want to mess up my current system...
Sorry if this is a noob question, but everyone with the same problem in my research seems happy with the 'enable hosting' answer and I just want to be sure...
Cheers all
Ed
It's not possible to combine the two things you're trying to combine: query string authentication and custom error pages.
S3 buckets can be made accessible by two different sets of endpoints, each providing a different set of front-end behaviors.
The REST endpoints provide authentication and private content (and SSL), while the Web site endpoints provide custom error (and index) documents, but the objects must be public in order to be accessible, since the web site endpoint does not support authentication (or SSL).
The differences are explained here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff
In some environments, I use an intermediate reverse proxy, hosted in EC2, acting as a front-end for S3, which gives me the additional capability of rewriting portions of the request headers and capturing access logs in real time. I suspect this is also the most viable mechanism for providing "friendly" errors. My proxy already does this when the URL is completely missing elements like Signature= (since that can't possibly be anything but an error), but I have not yet implemented anything to capture 403 Forbidden responses and style them up.
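As a very rough sketch of what capturing that 403 in such a proxy could look like (a hypothetical Flask front-end, with a placeholder bucket endpoint and error page, not something taken from a real deployment):

    # Forward signed requests to the S3 REST endpoint and replace the XML
    # 403 response with a friendly HTML error page. Bucket and page are
    # placeholders.
    import requests
    from flask import Flask, request, Response

    app = Flask(__name__)
    S3_REST_ENDPOINT = "https://my-bucket.s3.amazonaws.com"  # placeholder

    @app.route("/<path:key>")
    def serve(key):
        # Pass the query string (Signature, Expires, etc.) through unchanged
        # so the signed URL still validates against the REST endpoint.
        upstream = requests.get(
            f"{S3_REST_ENDPOINT}/{key}",
            params=request.query_string.decode(),
            stream=True,
        )
        if upstream.status_code == 403:
            # Expired or invalid signature: show a custom page instead of XML.
            return Response("<h1>This download link has expired.</h1>",
                            status=403, mimetype="text/html")
        return Response(
            upstream.iter_content(chunk_size=65536),
            status=upstream.status_code,
            content_type=upstream.headers.get("Content-Type",
                                              "application/octet-stream"),
        )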
I did do some preliminary testing to add a Link: header to the error response (in the proxy), in an attempt to convince the browser to load an XSL stylesheet, but so far that has not proven viable.